
Then, the ingress controller can be installed like this:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/cloud/deploy.yaml
 

Warning

For private clusters, you will need to either add a firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to port 80/tcp, 443/tcp and 10254/tcp to also allow access to port 8443/tcp. More information can be found in the Official GCP Documentation.

See the GKE documentation on adding rules and the Kubernetes issue for more detail.
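For example, a minimal sketch of such a rule with the gcloud CLI (the rule name, network, master CIDR, and node tag below are placeholders you must replace with your cluster's values):

gcloud compute firewall-rules create allow-apiserver-to-webhook \
  --network my-vpc \
  --source-ranges 172.16.0.0/28 \
  --target-tags my-cluster-nodes \
  --allow tcp:8443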

Proxy protocol is supported in GCE. Check the official documentation on how to enable it.

Azure

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/cloud/deploy.yaml
 

More information about Azure annotations for the ingress controller can be found in the official AKS documentation.

Digital Ocean

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/do/deploy.yaml

Scaleway

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/scw/deploy.yaml

Refer to the dedicated tutorial in the Scaleway documentation for configuring the proxy protocol for ingress-nginx with the Scaleway load balancer.

Exoscale

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml
 

The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager documentation.

Oracle Cloud Infrastructure

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/cloud/deploy.yaml
 

A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation.

OVHcloud

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
Bare metal clusters

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/baremetal/deploy.yaml
 

For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see bare-metal considerations.

Miscellaneous

Checking ingress controller version

Run /nginx-ingress-controller --version within the pod, for instance with kubectl exec:

POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name)
kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version

Scope

By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior, use the flag --watch-namespace or check the Helm chart value controller.scope to limit the controller to a single namespace. Although this flag is not commonly used, note that the secret containing the default-ssl-certificate also needs to be present in the watched namespace(s).
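For example, a minimal sketch with the Helm chart (the release, namespace, and watched-namespace names are illustrative):

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.scope.enabled=true \
  --set controller.scope.namespace=my-apps

The equivalent controller flag would be --watch-namespace=my-apps.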

See also “How to easily install multiple instances of the Ingress NGINX controller in the same cluster” for more details.

Webhook network access

Warning

The controller uses an admission webhook to validate Ingress definitions. Make sure that you don't have Network policies or additional firewalls preventing connections from the API server to the ingress-nginx-controller-admission service.
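If you do enforce NetworkPolicies in the controller namespace, a minimal sketch of a policy that keeps the webhook reachable could look like this (the policy name is illustrative; the labels and namespace assume the default installation, and the admission webhook listens on port 8443):

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-admission-webhook
  namespace: ingress-nginx
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - port: 8443
          protocol: TCP
EOF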

Certificate generation

Attention

The first time the ingress controller starts, two Jobs create the SSL Certificate used by the admission webhook.

This can cause an initial delay of up to two minutes until it is possible to create and validate Ingress definitions.

You can wait until it is ready before running the next command:

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
Overview

This is the documentation for the Ingress NGINX Controller.

It is built around the Kubernetes Ingress resource, using a ConfigMap to store the controller configuration.

You can learn more about using Ingress in the official Kubernetes documentation.

"},{"location":"#getting-started","title":"Getting Started","text":"

See Deployment for a whirlwind tour that will get you started.

"},{"location":"e2e-tests/","title":"E2e tests","text":""},{"location":"e2e-tests/#e2e-test-suite-for-ingress-nginx-controller","title":"e2e test suite for Ingress NGINX Controller","text":""},{"location":"e2e-tests/#admission-admission-controller","title":"[Admission] admission controller","text":"
  • should not allow overlaps of host and paths without canary annotations
  • should allow overlaps of host and paths with canary annotation
  • should block ingress with invalid path
  • should return an error if there is an error validating the ingress definition
  • should return an error if there is an invalid value in some annotation
  • should return an error if there is a forbidden value in some annotation
  • should return an error if there is an invalid path and wrong pathType is set
  • should not return an error if the Ingress V1 definition is valid with Ingress Class
  • should not return an error if the Ingress V1 definition is valid with IngressClass annotation
  • should return an error if the Ingress V1 definition contains invalid annotations
  • should not return an error for an invalid Ingress when it has unknown class
"},{"location":"e2e-tests/#affinity-session-cookie-name","title":"affinity session-cookie-name","text":"
  • should set sticky cookie SERVERID
  • should change cookie name on ingress definition change
  • should set the path to /something on the generated cookie
  • does not set the path to / on the generated cookie if there's more than one rule referring to the same backend
  • should set cookie with expires
  • should set cookie with domain
  • should not set cookie without domain annotation
  • should work with use-regex annotation and session-cookie-path
  • should warn user when use-regex is true and session-cookie-path is not set
  • should not set affinity across all server locations when using separate ingresses
  • should set sticky cookie without host
  • should work with server-alias annotation
  • should set secure in cookie with provided true annotation on http
  • should not set secure in cookie with provided false annotation on http
  • should set secure in cookie with provided false annotation on https
"},{"location":"e2e-tests/#affinitymode","title":"affinitymode","text":"
  • Balanced affinity mode should balance
  • Check persistent affinity mode
"},{"location":"e2e-tests/#server-alias","title":"server-alias","text":"
  • should return status code 200 for host 'foo' and 404 for 'bar'
  • should return status code 200 for host 'foo' and 'bar'
  • should return status code 200 for hosts defined in two ingresses, different path with one alias
"},{"location":"e2e-tests/#app-root","title":"app-root","text":"
  • should redirect to /foo
"},{"location":"e2e-tests/#auth-","title":"auth-*","text":"
  • should return status code 200 when no authentication is configured
  • should return status code 503 when authentication is configured with an invalid secret
  • should return status code 401 when authentication is configured but Authorization header is not configured
  • should return status code 401 when authentication is configured and Authorization header is sent with invalid credentials
  • should return status code 401 and cors headers when authentication and cors is configured but Authorization header is not configured
  • should return status code 200 when authentication is configured and Authorization header is sent
  • should return status code 200 when authentication is configured with a map and Authorization header is sent
  • should return status code 401 when authentication is configured with invalid content and Authorization header is sent
  • proxy_set_header My-Custom-Header 42;
  • proxy_set_header My-Custom-Header 42;
  • proxy_set_header 'My-Custom-Header' '42';
  • user retains cookie by default
  • user does not retain cookie if upstream returns error status code
  • user with annotated ingress retains cookie if upstream returns error status code
  • should return status code 200 when signed in
  • should redirect to signin url when not signed in
  • keeps processing new ingresses even if one of the existing ingresses is misconfigured
  • should overwrite Foo header with auth response
  • should return status code 200 when signed in
  • should redirect to signin url when not signed in
  • keeps processing new ingresses even if one of the existing ingresses is misconfigured
  • should return status code 200 when signed in after auth backend is deleted
  • should deny login for different location on same server
  • should deny login for different servers
  • should redirect to signin url when not signed in
  • should return 503 (location was denied)
  • should add error to the config
"},{"location":"e2e-tests/#auth-tls-","title":"auth-tls-*","text":"
  • should set sslClientCertificate, sslVerifyClient and sslVerifyDepth with auth-tls-secret
  • should set valid auth-tls-secret, sslVerify to off, and sslVerifyDepth to 2
  • should 302 redirect to error page instead of 400 when auth-tls-error-page is set
  • should pass URL-encoded certificate to upstream
  • should validate auth-tls-verify-client
  • should return 403 using auth-tls-match-cn with no matching CN from client
  • should return 200 using auth-tls-match-cn with matching CN from client
  • should reload the nginx config when auth-tls-match-cn is updated
  • should return 200 using auth-tls-match-cn where atleast one of the regex options matches CN from client
"},{"location":"e2e-tests/#backend-protocol","title":"backend-protocol","text":"
  • should set backend protocol to https:// and use proxy_pass
  • should set backend protocol to https:// and use proxy_pass with lowercase annotation
  • should set backend protocol to $scheme:// and use proxy_pass
  • should set backend protocol to grpc:// and use grpc_pass
  • should set backend protocol to grpcs:// and use grpc_pass
  • should set backend protocol to '' and use fastcgi_pass
"},{"location":"e2e-tests/#canary-","title":"canary-*","text":"
  • should response with a 200 status from the mainline upstream when requests are made to the mainline ingress
  • should return 404 status for requests to the canary if no matching ingress is found
  • should return the correct status codes when endpoints are unavailable
  • should route requests to the correct upstream if mainline ingress is created before the canary ingress
  • should route requests to the correct upstream if mainline ingress is created after the canary ingress
  • should route requests to the correct upstream if the mainline ingress is modified
  • should route requests to the correct upstream if the canary ingress is modified
  • should route requests to the correct upstream
  • should route requests to the correct upstream
  • should route requests to the correct upstream
  • should route requests to the correct upstream
  • should routes to mainline upstream when the given Regex causes error
  • should route requests to the correct upstream
  • respects always and never values
  • should route requests only to mainline if canary weight is 0
  • should route requests only to canary if canary weight is 100
  • should route requests only to canary if canary weight is equal to canary weight total
  • should route requests split between mainline and canary if canary weight is 50
  • should route requests split between mainline and canary if canary weight is 100 and weight total is 200
  • should not use canary as a catch-all server
  • should not use canary with domain as a server
  • does not crash when canary ingress has multiple paths to the same non-matching backend
  • always routes traffic to canary if first request was affinitized to canary (default behavior)
  • always routes traffic to canary if first request was affinitized to canary (explicit sticky behavior)
  • routes traffic to either mainline or canary backend (legacy behavior)
"},{"location":"e2e-tests/#client-body-buffer-size","title":"client-body-buffer-size","text":"
  • should set client_body_buffer_size to 1000
  • should set client_body_buffer_size to 1K
  • should set client_body_buffer_size to 1k
  • should set client_body_buffer_size to 1m
  • should set client_body_buffer_size to 1M
  • should not set client_body_buffer_size to invalid 1b
"},{"location":"e2e-tests/#connection-proxy-header","title":"connection-proxy-header","text":"
  • set connection header to keep-alive
"},{"location":"e2e-tests/#cors-","title":"cors-*","text":"
  • should enable cors
  • should set cors methods to only allow POST, GET
  • should set cors max-age
  • should disable cors allow credentials
  • should allow origin for cors
  • should allow headers for cors
  • should expose headers for cors
  • should allow - single origin for multiple cors values
  • should not allow - single origin for multiple cors values
  • should allow correct origins - single origin for multiple cors values
  • should not break functionality
  • should not break functionality - without *
  • should not break functionality with extra domain
  • should not match
  • should allow - single origin with required port
  • should not allow - single origin with port and origin without port
  • should not allow - single origin without port and origin with required port
  • should allow - matching origin with wildcard origin (2 subdomains)
  • should not allow - unmatching origin with wildcard origin (2 subdomains)
  • should allow - matching origin+port with wildcard origin
  • should not allow - portless origin with wildcard origin
  • should allow correct origins - missing subdomain + origin with wildcard origin and correct origin
  • should allow - missing origins (should allow all origins)
  • should allow correct origin but not others - cors allow origin annotations contain trailing comma
"},{"location":"e2e-tests/#custom-headers-","title":"custom-headers-*","text":"
  • should return status code 200 when no custom-headers is configured
  • should return status code 503 when custom-headers is configured with an invalid secret
  • more_set_headers 'My-Custom-Header' '42';
"},{"location":"e2e-tests/#custom-http-errors","title":"custom-http-errors","text":"
  • configures Nginx correctly
"},{"location":"e2e-tests/#default-backend","title":"default-backend","text":"
  • should use a custom default backend as upstream
"},{"location":"e2e-tests/#disable-access-log-disable-http-access-log-disable-stream-access-log","title":"disable-access-log disable-http-access-log disable-stream-access-log","text":"
  • disable-access-log set access_log off
  • disable-http-access-log set access_log off
  • disable-stream-access-log set access_log off
"},{"location":"e2e-tests/#disable-proxy-intercept-errors","title":"disable-proxy-intercept-errors","text":"
  • configures Nginx correctly
"},{"location":"e2e-tests/#backend-protocol-fastcgi","title":"backend-protocol - FastCGI","text":"
  • should use fastcgi_pass in the configuration file
  • should add fastcgi_index in the configuration file
  • should add fastcgi_param in the configuration file
  • should return OK for service with backend protocol FastCGI
"},{"location":"e2e-tests/#force-ssl-redirect","title":"force-ssl-redirect","text":"
  • should redirect to https
"},{"location":"e2e-tests/#from-to-www-redirect","title":"from-to-www-redirect","text":"
  • should redirect from www HTTP to HTTP
  • should redirect from www HTTPS to HTTPS
"},{"location":"e2e-tests/#backend-protocol-grpc","title":"backend-protocol - GRPC","text":"
  • should use grpc_pass in the configuration file
  • should return OK for service with backend protocol GRPC
  • authorization metadata should be overwritten by external auth response headers
  • should return OK for service with backend protocol GRPCS
  • should return OK when request not exceed timeout
  • should return Error when request exceed timeout
"},{"location":"e2e-tests/#http2-push-preload","title":"http2-push-preload","text":"
  • enable the http2-push-preload directive
"},{"location":"e2e-tests/#allowlist-source-range","title":"allowlist-source-range","text":"
  • should set valid ip allowlist range
"},{"location":"e2e-tests/#denylist-source-range","title":"denylist-source-range","text":"
  • only deny explicitly denied IPs, allow all others
  • only allow explicitly allowed IPs, deny all others
"},{"location":"e2e-tests/#annotation-limit-connections","title":"Annotation - limit-connections","text":"
  • should limit-connections
"},{"location":"e2e-tests/#limit-rate","title":"limit-rate","text":"
  • Check limit-rate annotation
"},{"location":"e2e-tests/#enable-access-log-enable-rewrite-log","title":"enable-access-log enable-rewrite-log","text":"
  • set access_log off
  • set rewrite_log on
"},{"location":"e2e-tests/#mirror-","title":"mirror-*","text":"
  • should set mirror-target to http://localhost/mirror
  • should set mirror-target to https://test.env.com/$request_uri
  • should disable mirror-request-body
"},{"location":"e2e-tests/#modsecurity-owasp","title":"modsecurity owasp","text":"
  • should enable modsecurity
  • should enable modsecurity with transaction ID and OWASP rules
  • should disable modsecurity
  • should enable modsecurity with snippet
  • should enable modsecurity without using 'modsecurity on;'
  • should disable modsecurity using 'modsecurity off;'
  • should enable modsecurity with snippet and block requests
  • should enable modsecurity globally and with modsecurity-snippet block requests
  • should enable modsecurity when enable-owasp-modsecurity-crs is set to true
  • should enable modsecurity through the config map
  • should enable modsecurity through the config map but ignore snippet as disabled by admin
  • should disable default modsecurity conf setting when modsecurity-snippet is specified
"},{"location":"e2e-tests/#preserve-trailing-slash","title":"preserve-trailing-slash","text":"
  • should allow preservation of trailing slashes
"},{"location":"e2e-tests/#proxy-","title":"proxy-*","text":"
  • should set proxy_redirect to off
  • should set proxy_redirect to default
  • should set proxy_redirect to hello.com goodbye.com
  • should set proxy client-max-body-size to 8m
  • should not set proxy client-max-body-size to incorrect value
  • should set valid proxy timeouts
  • should not set invalid proxy timeouts
  • should turn on proxy-buffering
  • should turn off proxy-request-buffering
  • should build proxy next upstream
  • should setup proxy cookies
  • should change the default proxy HTTP version
"},{"location":"e2e-tests/#proxy-ssl-","title":"proxy-ssl-*","text":"
  • should set valid proxy-ssl-secret
  • should set valid proxy-ssl-secret, proxy-ssl-verify to on, proxy-ssl-verify-depth to 2, and proxy-ssl-server-name to on
  • should set valid proxy-ssl-secret, proxy-ssl-ciphers to HIGH:!AES
  • should set valid proxy-ssl-secret, proxy-ssl-protocols
  • proxy-ssl-location-only flag should change the nginx config server part
"},{"location":"e2e-tests/#permanent-redirect-permanent-redirect-code","title":"permanent-redirect permanent-redirect-code","text":"
  • should respond with a standard redirect code
  • should respond with a custom redirect code
"},{"location":"e2e-tests/#rewrite-target-use-regex-enable-rewrite-log","title":"rewrite-target use-regex enable-rewrite-log","text":"
  • should write rewrite logs
  • should use correct longest path match
  • should use ~* location modifier if regex annotation is present
  • should fail to use longest match for documented warning
  • should allow for custom rewrite parameters
"},{"location":"e2e-tests/#satisfy","title":"satisfy","text":"
  • should configure satisfy directive correctly
  • should allow multiple auth with satisfy any
"},{"location":"e2e-tests/#server-snippet","title":"server-snippet","text":""},{"location":"e2e-tests/#service-upstream","title":"service-upstream","text":"
  • should use the Service Cluster IP and Port
  • should use the Service Cluster IP and Port
  • should not use the Service Cluster IP and Port
"},{"location":"e2e-tests/#configuration-snippet","title":"configuration-snippet","text":"
  • set snippet more_set_headers in all locations
  • drops snippet more_set_header in all locations if disabled by admin
"},{"location":"e2e-tests/#ssl-ciphers","title":"ssl-ciphers","text":"
  • should change ssl ciphers
  • should keep ssl ciphers
"},{"location":"e2e-tests/#stream-snippet","title":"stream-snippet","text":"
  • should add value of stream-snippet to nginx config
  • should add stream-snippet and drop annotations per admin config
"},{"location":"e2e-tests/#upstream-hash-by-","title":"upstream-hash-by-*","text":"
  • should connect to the same pod
  • should connect to the same subset of pods
"},{"location":"e2e-tests/#upstream-vhost","title":"upstream-vhost","text":"
  • set host to upstreamvhost.bar.com
"},{"location":"e2e-tests/#x-forwarded-prefix","title":"x-forwarded-prefix","text":"
  • should set the X-Forwarded-Prefix to the annotation value
  • should not add X-Forwarded-Prefix if the annotation value is empty
"},{"location":"e2e-tests/#cgroups-cgroups","title":"[CGroups] cgroups","text":"
  • detects cgroups version v1
  • detect cgroups version v2
"},{"location":"e2e-tests/#debug-cli","title":"Debug CLI","text":"
  • should list the backend servers
  • should get information for a specific backend server
  • should produce valid JSON for /dbg general
"},{"location":"e2e-tests/#default-backend-custom-service","title":"[Default Backend] custom service","text":"
  • uses custom default backend that returns 200 as status code
"},{"location":"e2e-tests/#default-backend_1","title":"[Default Backend]","text":"
  • should return 404 sending requests when only a default backend is running
  • enables access logging for default backend
  • disables access logging for default backend
"},{"location":"e2e-tests/#default-backend-ssl","title":"[Default Backend] SSL","text":"
  • should return a self generated SSL certificate
"},{"location":"e2e-tests/#default-backend-change-default-settings","title":"[Default Backend] change default settings","text":"
  • should apply the annotation to the default backend
"},{"location":"e2e-tests/#disable-leader-routing-works-when-leader-election-was-disabled","title":"[Disable Leader] Routing works when leader election was disabled","text":"
  • should create multiple ingress routings rules when leader election has disabled
"},{"location":"e2e-tests/#endpointslices-long-service-name","title":"[Endpointslices] long service name","text":"
  • should return 200 when service name has max allowed number of characters 63
"},{"location":"e2e-tests/#topologyhints-topology-aware-routing","title":"[TopologyHints] topology aware routing","text":"
  • should return 200 when service has topology hints
"},{"location":"e2e-tests/#shutdown-grace-period-shutdown","title":"[Shutdown] Grace period shutdown","text":"
  • /healthz should return status code 500 during shutdown grace period
"},{"location":"e2e-tests/#shutdown-ingress-controller","title":"[Shutdown] ingress controller","text":"
  • should shutdown in less than 60 seconds without pending connections
"},{"location":"e2e-tests/#shutdown-graceful-shutdown-with-pending-request","title":"[Shutdown] Graceful shutdown with pending request","text":"
  • should let slow requests finish before shutting down
"},{"location":"e2e-tests/#ingress-deepinspection","title":"[Ingress] DeepInspection","text":"
  • should drop whole ingress if one path matches invalid regex
"},{"location":"e2e-tests/#single-ingress-multiple-hosts","title":"single ingress - multiple hosts","text":"
  • should set the correct $service_name NGINX variable
"},{"location":"e2e-tests/#ingress-pathtype-exact","title":"[Ingress] [PathType] exact","text":"
  • should choose exact location for /exact
"},{"location":"e2e-tests/#ingress-pathtype-mix-exact-and-prefix-paths","title":"[Ingress] [PathType] mix Exact and Prefix paths","text":"
  • should choose the correct location
"},{"location":"e2e-tests/#ingress-pathtype-prefix-checks","title":"[Ingress] [PathType] prefix checks","text":"
  • should return 404 when prefix /aaa does not match request /aaaccc
  • should test prefix path using simple regex pattern for /id/{int}
  • should test prefix path using regex pattern for /id/{int} ignoring non-digits characters at end of string
  • should test prefix path using fixed path size regex pattern /id/{int}{3}
  • should correctly route multi-segment path patterns
"},{"location":"e2e-tests/#ingress-definition-without-host","title":"[Ingress] definition without host","text":"
  • should set ingress details variables for ingresses without a host
  • should set ingress details variables for ingresses with host without IngressRuleValue, only Backend
"},{"location":"e2e-tests/#memory-leak-dynamic-certificates","title":"[Memory Leak] Dynamic Certificates","text":"
  • should not leak memory from ingress SSL certificates or configuration updates
"},{"location":"e2e-tests/#load-balancer-load-balance","title":"[Load Balancer] load-balance","text":"
  • should apply the configmap load-balance setting
"},{"location":"e2e-tests/#load-balancer-ewma","title":"[Load Balancer] EWMA","text":"
  • does not fail requests
"},{"location":"e2e-tests/#load-balancer-round-robin","title":"[Load Balancer] round-robin","text":"
  • should evenly distribute requests with round-robin (default algorithm)
"},{"location":"e2e-tests/#lua-dynamic-certificates","title":"[Lua] dynamic certificates","text":"
  • picks up the certificate when we add TLS spec to existing ingress
  • picks up the previously missing secret for a given ingress without reloading
  • supports requests with domain with trailing dot
  • picks up the updated certificate without reloading
  • falls back to using default certificate when secret gets deleted without reloading
  • picks up a non-certificate only change
  • removes HTTPS configuration when we delete TLS spec
"},{"location":"e2e-tests/#lua-dynamic-configuration","title":"[Lua] dynamic configuration","text":"
  • configures balancer Lua middleware correctly
  • handles endpoints only changes
  • handles endpoints only changes (down scaling of replicas)
  • handles endpoints only changes consistently (down scaling of replicas vs. empty service)
  • handles an annotation change
"},{"location":"e2e-tests/#metrics-exported-prometheus-metrics","title":"[metrics] exported prometheus metrics","text":"
  • exclude socket request metrics are absent
  • exclude socket request metrics are present
"},{"location":"e2e-tests/#nginx-configuration","title":"nginx-configuration","text":"
  • start nginx with default configuration
  • fails when using alias directive
  • fails when using root directive
"},{"location":"e2e-tests/#security-request-smuggling","title":"[Security] request smuggling","text":"
  • should not return body content from error_page
"},{"location":"e2e-tests/#service-backend-status-code-503","title":"[Service] backend status code 503","text":"
  • should return 503 when backend service does not exist
  • should return 503 when all backend service endpoints are unavailable
"},{"location":"e2e-tests/#service-type-externalname","title":"[Service] Type ExternalName","text":"
  • works with external name set to incomplete fqdn
  • should return 200 for service type=ExternalName without a port defined
  • should return 200 for service type=ExternalName with a port defined
  • should return status 502 for service type=ExternalName with an invalid host
  • should return 200 for service type=ExternalName using a port name
  • should return 200 for service type=ExternalName using FQDN with trailing dot
  • should update the external name after a service update
  • should sync ingress on external name service addition/deletion
"},{"location":"e2e-tests/#service-nil-service-backend","title":"[Service] Nil Service Backend","text":"
  • should return 404 when backend service is nil
"},{"location":"e2e-tests/#access-log","title":"access-log","text":"
  • use the default configuration
  • use the specified configuration
  • use the specified configuration
  • use the specified configuration
  • use the specified configuration
"},{"location":"e2e-tests/#aio-write","title":"aio-write","text":"
  • should be enabled by default
  • should be enabled when setting is true
  • should be disabled when setting is false
"},{"location":"e2e-tests/#bad-annotation-values","title":"Bad annotation values","text":"
  • [BAD_ANNOTATIONS] should drop an ingress if there is an invalid character in some annotation
  • [BAD_ANNOTATIONS] should drop an ingress if there is a forbidden word in some annotation
  • [BAD_ANNOTATIONS] should allow an ingress if there is a default blocklist config in place
  • [BAD_ANNOTATIONS] should drop an ingress if there is a custom blocklist config in place and allow others to pass
"},{"location":"e2e-tests/#brotli","title":"brotli","text":"
  • should only compress responses that meet the brotli-min-length condition
"},{"location":"e2e-tests/#configmap-change","title":"Configmap change","text":"
  • should reload after an update in the configuration
"},{"location":"e2e-tests/#add-headers","title":"add-headers","text":"
  • Add a custom header
  • Add multiple custom headers
"},{"location":"e2e-tests/#ssl-flag-default-ssl-certificate","title":"[SSL] [Flag] default-ssl-certificate","text":"
  • uses default ssl certificate for catch-all ingress
  • uses default ssl certificate for host based ingress when configured certificate does not match host
"},{"location":"e2e-tests/#flag-disable-catch-all","title":"[Flag] disable-catch-all","text":"
  • should ignore catch all Ingress with backend
  • should ignore catch all Ingress with backend and rules
  • should delete Ingress updated to catch-all
  • should allow Ingress with rules
"},{"location":"e2e-tests/#flag-disable-service-external-name","title":"[Flag] disable-service-external-name","text":"
  • should ignore services of external-name type
"},{"location":"e2e-tests/#flag-disable-sync-events","title":"[Flag] disable-sync-events","text":"
  • should create sync events (default)
  • should create sync events
  • should not create sync events
"},{"location":"e2e-tests/#enable-real-ip","title":"enable-real-ip","text":"
  • trusts X-Forwarded-For header only when setting is true
  • should not trust X-Forwarded-For header when setting is false
"},{"location":"e2e-tests/#use-forwarded-headers","title":"use-forwarded-headers","text":"
  • should trust X-Forwarded headers when setting is true
  • should not trust X-Forwarded headers when setting is false
"},{"location":"e2e-tests/#geoip2","title":"Geoip2","text":"
  • should include geoip2 line in config when enabled and db file exists
  • should only allow requests from specific countries
  • should up and running nginx controller using autoreload flag
"},{"location":"e2e-tests/#security-block-","title":"[Security] block-*","text":"
  • should block CIDRs defined in the ConfigMap
  • should block User-Agents defined in the ConfigMap
  • should block Referers defined in the ConfigMap
"},{"location":"e2e-tests/#security-global-auth-url","title":"[Security] global-auth-url","text":"
  • should return status code 401 when request any protected service
  • should return status code 200 when request whitelisted (via no-auth-locations) service and 401 when request protected service
  • should return status code 200 when request whitelisted (via ingress annotation) service and 401 when request protected service
  • should still return status code 200 after auth backend is deleted using cache
  • user retains cookie by default
  • user does not retain cookie if upstream returns error status code
  • user with global-auth-always-set-cookie key in configmap retains cookie if upstream returns error status code
"},{"location":"e2e-tests/#global-options","title":"global-options","text":"
  • should have worker_rlimit_nofile option
  • should have worker_rlimit_nofile option and be independent on amount of worker processes
"},{"location":"e2e-tests/#grpc","title":"GRPC","text":"
  • should set the correct GRPC Buffer Size
"},{"location":"e2e-tests/#gzip","title":"gzip","text":"
  • should be disabled by default
  • should be enabled with default settings
  • should set gzip_comp_level to 4
  • should set gzip_disable to msie6
  • should set gzip_min_length to 100
  • should set gzip_types to text/html
"},{"location":"e2e-tests/#hash-size","title":"hash size","text":"
  • should set server_names_hash_bucket_size
  • should set server_names_hash_max_size
  • should set proxy-headers-hash-bucket-size
  • should set proxy-headers-hash-max-size
  • should set variables-hash-bucket-size
  • should set variables-hash-max-size
  • should set vmap-hash-bucket-size
"},{"location":"e2e-tests/#flag-ingress-class","title":"[Flag] ingress-class","text":"
  • should ignore Ingress with a different class annotation
  • should ignore Ingress with different controller class
  • should accept both Ingresses with default IngressClassName and IngressClass annotation
  • should ignore Ingress without IngressClass configuration
  • should delete Ingress when class is removed
  • should serve Ingress when class is added
  • should serve Ingress when class is updated between annotation and ingressClassName
  • should ignore Ingress with no class and accept the correctly configured Ingresses
  • should watch Ingress with no class and ignore ingress with a different class
  • should watch Ingress that uses the class name even if spec is different
  • should watch Ingress with correct annotation
  • should ignore Ingress with only IngressClassName
"},{"location":"e2e-tests/#keep-alive-keep-alive-requests","title":"keep-alive keep-alive-requests","text":"
  • should set keepalive_timeout
  • should set keepalive_requests
  • should set keepalive connection to upstream server
  • should set keep alive connection timeout to upstream server
  • should set keepalive time to upstream server
  • should set the request count to upstream server through one keep alive connection
"},{"location":"e2e-tests/#configmap-limit-rate","title":"Configmap - limit-rate","text":"
  • Check limit-rate config
"},{"location":"e2e-tests/#flag-custom-http-and-https-ports","title":"[Flag] custom HTTP and HTTPS ports","text":"
  • should set X-Forwarded-Port headers accordingly when listening on a non-default HTTP port
  • should set X-Forwarded-Port header to 443
  • should set the X-Forwarded-Port header to 443
"},{"location":"e2e-tests/#log-format-","title":"log-format-*","text":"
  • should not configure log-format escape by default
  • should enable the log-format-escape-json
  • should disable the log-format-escape-json
  • should enable the log-format-escape-none
  • should disable the log-format-escape-none
  • log-format-escape-json enabled
  • log-format default escape
  • log-format-escape-none enabled
"},{"location":"e2e-tests/#lua-lua-shared-dicts","title":"[Lua] lua-shared-dicts","text":"
  • configures lua shared dicts
"},{"location":"e2e-tests/#main-snippet","title":"main-snippet","text":"
  • should add value of main-snippet setting to nginx config
"},{"location":"e2e-tests/#security-modsecurity-snippet","title":"[Security] modsecurity-snippet","text":"
  • should add value of modsecurity-snippet setting to nginx config
"},{"location":"e2e-tests/#enable-multi-accept","title":"enable-multi-accept","text":"
  • should be enabled by default
  • should be enabled when set to true
  • should be disabled when set to false
"},{"location":"e2e-tests/#flag-watch-namespace-selector","title":"[Flag] watch namespace selector","text":"
  • should ignore Ingress of namespace without label foo=bar and accept those of namespace with label foo=bar
"},{"location":"e2e-tests/#security-no-auth-locations","title":"[Security] no-auth-locations","text":"
  • should return status code 401 when accessing '/' unauthentication
  • should return status code 200 when accessing '/' authentication
  • should return status code 200 when accessing '/noauth' unauthenticated
"},{"location":"e2e-tests/#add-no-tls-redirect-locations","title":"Add no tls redirect locations","text":"
  • Check no tls redirect locations config
"},{"location":"e2e-tests/#ocsp","title":"OCSP","text":"
  • should enable OCSP and contain stapling information in the connection
"},{"location":"e2e-tests/#configure-opentelemetry","title":"Configure Opentelemetry","text":"
  • should not exists opentelemetry directive
  • should exists opentelemetry directive when is enabled
  • should include opentelemetry_trust_incoming_spans on directive when enabled
  • should not exists opentelemetry_operation_name directive when is empty
  • should exists opentelemetry_operation_name directive when is configured
"},{"location":"e2e-tests/#proxy-connect-timeout","title":"proxy-connect-timeout","text":"
  • should set valid proxy timeouts using configmap values
  • should not set invalid proxy timeouts using configmap values
"},{"location":"e2e-tests/#dynamic-proxy_host","title":"Dynamic $proxy_host","text":"
  • should exist a proxy_host
  • should exist a proxy_host using the upstream-vhost annotation value
"},{"location":"e2e-tests/#proxy-next-upstream","title":"proxy-next-upstream","text":"
  • should build proxy next upstream using configmap values
"},{"location":"e2e-tests/#use-proxy-protocol","title":"use-proxy-protocol","text":"
  • should respect port passed by the PROXY Protocol
  • should respect proto passed by the PROXY Protocol server port
  • should enable PROXY Protocol for HTTPS
  • should enable PROXY Protocol for TCP
"},{"location":"e2e-tests/#proxy-read-timeout","title":"proxy-read-timeout","text":"
  • should set valid proxy read timeouts using configmap values
  • should not set invalid proxy read timeouts using configmap values
"},{"location":"e2e-tests/#proxy-send-timeout","title":"proxy-send-timeout","text":"
  • should set valid proxy send timeouts using configmap values
  • should not set invalid proxy send timeouts using configmap values
"},{"location":"e2e-tests/#reuse-port","title":"reuse-port","text":"
  • reuse port should be enabled by default
  • reuse port should be disabled
  • reuse port should be enabled
"},{"location":"e2e-tests/#configmap-server-snippet","title":"configmap server-snippet","text":"
  • should add value of server-snippet setting to all ingress config
  • should add global server-snippet and drop annotations per admin config
"},{"location":"e2e-tests/#server-tokens","title":"server-tokens","text":"
  • should not exists Server header in the response
  • should exists Server header in the response when is enabled
"},{"location":"e2e-tests/#ssl-ciphers_1","title":"ssl-ciphers","text":"
  • Add ssl ciphers
"},{"location":"e2e-tests/#flag-enable-ssl-passthrough","title":"[Flag] enable-ssl-passthrough","text":""},{"location":"e2e-tests/#with-enable-ssl-passthrough-enabled","title":"With enable-ssl-passthrough enabled","text":"
  • should enable ssl-passthrough-proxy-port on a different port
  • should pass unknown traffic to default backend and handle known traffic
"},{"location":"e2e-tests/#configmap-stream-snippet","title":"configmap stream-snippet","text":"
  • should add value of stream-snippet via config map to nginx config
"},{"location":"e2e-tests/#ssl-tls-protocols-ciphers-and-headers","title":"[SSL] TLS protocols, ciphers and headers)","text":"
  • setting cipher suite
  • setting max-age parameter
  • setting includeSubDomains parameter
  • setting preload parameter
  • overriding what's set from the upstream
  • should not use ports during the HTTP to HTTPS redirection
  • should not use ports or X-Forwarded-Host during the HTTP to HTTPS redirection
"},{"location":"e2e-tests/#annotation-validations","title":"annotation validations","text":"
  • should allow ingress based on their risk on webhooks
  • should allow ingress based on their risk on webhooks
"},{"location":"e2e-tests/#ssl-redirect-to-https","title":"[SSL] redirect to HTTPS","text":"
  • should redirect from HTTP to HTTPS when secret is missing
"},{"location":"e2e-tests/#ssl-secret-update","title":"[SSL] secret update","text":"
  • should not appear references to secret updates not used in ingress rules
  • should return the fake SSL certificate if the secret is invalid
"},{"location":"e2e-tests/#status-status-update","title":"[Status] status update","text":"
  • should update status field after client-go reconnection
"},{"location":"e2e-tests/#tcp-tcp-services","title":"[TCP] tcp-services","text":"
  • should expose a TCP service
  • should expose an ExternalName TCP service
  • should reload after an update in the configuration
"},{"location":"faq/","title":"FAQ","text":""},{"location":"faq/#multiple-controller-in-one-cluster","title":"Multiple controller in one cluster","text":"

Question - How can I easily install multiple instances of the ingress-nginx controller in the same cluster?

You can install them in different namespaces.

  • Create a new namespace
kubectl create namespace ingress-nginx-2
  • Use Helm to install the additional instance of the ingress controller
  • Ensure you have Helm working (refer to the Helm documentation)
  • We assume that you already have the Helm repo for the ingress-nginx controller added to your Helm config. If not, you can add it like this:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  • Make sure you have updated the Helm repo data:
helm repo update
  • Now, install an additional instance of the ingress-nginx controller like this:
helm install ingress-nginx-2 ingress-nginx/ingress-nginx \
  --namespace ingress-nginx-2 \
  --set controller.ingressClassResource.name=nginx-two \
  --set controller.ingressClass=nginx-two \
  --set controller.ingressClassResource.controllerValue="example.com/ingress-nginx-2" \
  --set controller.ingressClassResource.enabled=true \
  --set controller.ingressClassByName=true

If you need to install yet another instance, repeat the procedure: create a new namespace and change the values such as names & namespaces (for example from "-2" to "-3") to whatever meets your needs.

Note that controller.ingressClassResource.name and controller.ingressClass have to be set correctly. The first is to create the IngressClass object and the other is to modify the deployment of the actual ingress controller pod.
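After installation, you can verify that each instance registered its own class:

kubectl get ingressclass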

"},{"location":"faq/#i-cant-use-multiple-namespaces-what-should-i-do","title":"I can't use multiple namespaces, what should I do?","text":"

If you need to install all instances in the same namespace, then you need to specify a different election id, like this:

helm install ingress-nginx-2 ingress-nginx/ingress-nginx \
  --namespace kube-system \
  --set controller.electionID=nginx-two-leader \
  --set controller.ingressClassResource.name=nginx-two \
  --set controller.ingressClass=nginx-two \
  --set controller.ingressClassResource.controllerValue="example.com/ingress-nginx-2" \
  --set controller.ingressClassResource.enabled=true \
  --set controller.ingressClassByName=true
"},{"location":"faq/#retaining-client-ipaddress","title":"Retaining Client IPAddress","text":"

Question - How do I obtain the real client IP address?

The go-to solution for retaining the real client IP address is to enable PROXY protocol.

PROXY protocol has to be enabled on both the Ingress NGINX controller and the L4 load balancer in front of the controller.

The real client IP address is lost by default when traffic is forwarded over the network, but enabling PROXY protocol ensures that the connection details are retained and the real client IP address doesn't get lost.

Enabling PROXY protocol on the controller is documented here.

To enable PROXY protocol on the load balancer, please refer to the documentation of your infrastructure provider, because that is where the LB is provisioned.

Some more info is available here.

Some more info on PROXY protocol is here.
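On the controller side, enabling PROXY protocol boils down to a single ConfigMap key. A minimal sketch, assuming the default ConfigMap name and namespace from the standard deployment:

kubectl -n ingress-nginx patch configmap ingress-nginx-controller \
  --type merge -p '{"data":{"use-proxy-protocol":"true"}}'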

"},{"location":"faq/#client-ipaddress-on-single-node-cluster","title":"client-ipaddress on single-node cluster","text":"

Single-node clusters are created for dev & test use with tools like "kind" or "minikube". A trick to simulate a real network with these clusters is to install MetalLB and configure the IP address of the kind container or the minikube VM/container as both the start and the end of the pool for MetalLB in L2 mode. The host IP then becomes a real client IP address for curl requests sent from the host.

After installing the ingress-nginx controller on a kind or minikube cluster with Helm, you can configure it for the real client IP with a simple change to the service that the controller creates. The service object of type LoadBalancer has a field service.spec.externalTrafficPolicy. If you set the value of this field to "Local", the real IP address of a client is visible to the controller.

% kubectl explain service.spec.externalTrafficPolicy
KIND:       Service
VERSION:    v1

FIELD: externalTrafficPolicy <string>

DESCRIPTION:
    externalTrafficPolicy describes how nodes distribute service traffic they
    receive on one of the Service's "externally-facing" addresses (NodePorts,
    ExternalIPs, and LoadBalancer IPs). If set to "Local", the proxy will
    configure the service in a way that assumes that external load balancers
    will take care of balancing the service traffic between nodes, and so each
    node will deliver traffic only to the node-local endpoints of the service,
    without masquerading the client source IP. (Traffic mistakenly sent to a
    node with no endpoints will be dropped.) The default value, "Cluster", uses
    the standard behavior of routing to all endpoints evenly (possibly modified
    by topology and other features). Note that traffic sent to an External IP or
    LoadBalancer IP from within the cluster will always get "Cluster" semantics,
    but clients sending to a NodePort from within the cluster may need to take
    traffic policy into account when picking a node.

    Possible enum values:
     - "Cluster" routes traffic to all endpoints.
     - "Local" preserves the source IP of the traffic by routing only to
    endpoints on the same node as the traffic was received on (dropping the
    traffic if there are no local endpoints).
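A minimal sketch of that change, assuming the default service name from a Helm installation:

kubectl -n ingress-nginx patch svc ingress-nginx-controller \
  --type merge -p '{"spec":{"externalTrafficPolicy":"Local"}}'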
"},{"location":"faq/#client-ipaddress-l7","title":"client-ipaddress L7","text":"

The solution is to get the real client IP address from the "X-Forwarded-For" HTTP header.

Example: If your application pod behind the Ingress NGINX controller uses the NGINX web server as a reverse proxy, you can do the following to preserve the remote client IP.

  • First you need to make sure that the X-Forwarded-For header reaches the backend pod. This is done by using an Ingress NGINX controller ConfigMap key. It's documented here.

  • Next, edit the nginx.conf file inside your app pod to contain the directives shown below:

set_real_ip_from 0.0.0.0/0; # Trust all IPs (use your VPC CIDR block in production)
real_ip_header X-Forwarded-For;
real_ip_recursive on;

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" '
                'host=$host x-forwarded-for=$http_x_forwarded_for';

access_log /var/log/nginx/access.log main;
"},{"location":"faq/#kubernetes-v122-migration","title":"Kubernetes v1.22 Migration","text":"

If you are using Ingress objects in your cluster (running Kubernetes older than version 1.22), and you plan to upgrade your Kubernetes version to K8S 1.22 or above, then please read the migration guide here.

"},{"location":"faq/#validation-of-path","title":"Validation Of path","text":"
  • To improve security and follow the desired standards of the Kubernetes API spec, the next release, scheduled for v1.8.0, will include a new, optional feature for validating the value of the key ingress.spec.rules.http.paths.path.

  • This behavior will be disabled by default on the 1.8.0 release and enabled by default on the next breaking-change release, set for 2.0.0.

  • When "ingress.spec.rules.http.pathType=Exact" or "pathType=Prefix", this validation will limit the characters accepted in the field "ingress.spec.rules.http.paths.path" to alphanumeric characters plus "/", "_" and "-". Also, in this case, the path should start with "/".

  • When the ingress resource path contains other characters (like in rewrite configurations), the pathType value should be "ImplementationSpecific".

  • The API spec on pathType is documented here.

  • When this option is enabled, the validation will happen in the admission webhook. So if any new ingress object contains characters other than alphanumeric characters and "/", "_", "-" in the path field, but does not use ImplementationSpecific as the pathType value, then the ingress object will be denied admission.

  • The cluster admin should establish validation rules using mechanisms like "Open Policy Agent" to validate that only authorized users can use the ImplementationSpecific pathType and that only the authorized characters can be used. The configmap value is here.

  • A complete example of an Open Policy Agent Gatekeeper rule is available here.

  • If you have any issues or concerns, please do one of the following:

  • Open a GitHub issue
  • Comment in our Dev Slack Channel
  • Open a thread in our Google Group ingress-nginx-dev@kubernetes.io
"},{"location":"faq/#why-is-chunking-not-working-since-controller-v110","title":"Why is chunking not working since controller v1.10 ?","text":"
  • If your code is setting the HTTP header "Transfer-Encoding: chunked" and the controller log messages show an error about a duplicate header, it is because of this change: http://hg.nginx.org/nginx/rev/2bf7792c262e

  • More details are available in this issue https://github.com/kubernetes/ingress-nginx/issues/11162

"},{"location":"how-it-works/","title":"How it works","text":"

The objective of this document is to explain how the Ingress-NGINX controller works, in particular how the NGINX model is built and why we need one.

"},{"location":"how-it-works/#nginx-configuration","title":"NGINX configuration","text":"

The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. However, it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e. Endpoints change when you deploy your app). We use lua-nginx-module to achieve this. Check below to learn more about how it's done.
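You can inspect the assembled file at any time, for example (a sketch, assuming the default namespace and labels used elsewhere in this document):

POD_NAME=$(kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -o name | head -1)
kubectl exec -n ingress-nginx $POD_NAME -- cat /etc/nginx/nginx.conf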

"},{"location":"how-it-works/#nginx-model","title":"NGINX model","text":"

Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. For this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps, to generate a point-in-time configuration file that reflects the state of the cluster.

To get these objects from the cluster, we use Kubernetes Informers, in particular FilteredSharedInformer. These informers allow reacting to changes using callbacks when an object is added, modified or removed. Unfortunately, there is no way to know if a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of the cluster and compare it to the current model. If the new model equals the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so, we then send the new list of Endpoints to a Lua handler running inside Nginx using an HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between the running and new models is about more than just Endpoints, we create a new NGINX configuration based on the new model, replace the current model and trigger a reload.

One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions.

The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.

"},{"location":"how-it-works/#building-the-nginx-model","title":"Building the NGINX model","text":"

Building a model is an expensive operation; for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to avoid losing changes and remove the use of sync.Mutex to force a single execution of the sync loop, and additionally it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller; this is one of the reasons for the work queue.

Operations to build the model:

  • Order Ingress rules by CreationTimestamp field, i.e., old rules first.

  • If the same path for the same host is defined in more than one Ingress, the oldest rule wins.

  • If more than one Ingress contains a TLS section for the same host, the oldest rule wins.
  • If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins.

  • Create a list of NGINX Servers (per hostname)

  • Create a list of NGINX Upstreams
  • If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions.
  • Annotations are applied to all the paths in the Ingress.
  • Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.
"},{"location":"how-it-works/#when-a-reload-is-required","title":"When a reload is required","text":"

The next list describes the scenarios when a reload is required:

  • New Ingress Resource Created.
  • TLS section is added to existing Ingress.
  • Change in Ingress annotations that impacts more than just upstream configuration. For instance, the load-balance annotation does not require a reload.
  • A path is added/removed from an Ingress.
  • An Ingress, Service, or Secret is removed.
  • A previously missing object referenced by an Ingress, like a Service or Secret, becomes available.
  • A Secret is updated.
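
To observe reloads in practice, you can query the controller's Prometheus metrics endpoint on port 10254 and filter for reload-related metrics (the exact metric names vary between versions, so the grep below is deliberately broad):

POD_NAME=$(kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name | head -1)\nkubectl port-forward -n ingress-nginx $POD_NAME 10254:10254 &\nsleep 2  # give the port-forward a moment to establish\ncurl -s http://localhost:10254/metrics | grep -i reload\n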
"},{"location":"how-it-works/#avoiding-reloads","title":"Avoiding reloads","text":"

In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, i.e., if new configurations no longer replace worker processes.

"},{"location":"how-it-works/#avoiding-reloads-on-endpoints-changes","title":"Avoiding reloads on Endpoints changes","text":"

On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside NGINX. The Lua code in turn stores those backends in a shared memory zone. Then for every request the Lua code running in the balancer_by_lua context determines which endpoints it should choose an upstream peer from and applies the configured load balancing algorithm to choose the peer. NGINX takes care of the rest. This way we avoid reloading NGINX on endpoint changes. Note that this also covers annotation changes that affect only the upstream configuration in NGINX.

In a relatively big cluster with frequently deployed apps, this feature saves a significant number of NGINX reloads, which can otherwise affect response latency, load balancing quality (after every reload NGINX resets the state of load balancing), and so on.
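
You can inspect the backends that the Lua handler currently holds, without any reload involved, using the kubectl plugin described later in this document:

kubectl ingress-nginx backends -n ingress-nginx --list\n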

"},{"location":"how-it-works/#avoiding-outage-from-wrong-configuration","title":"Avoiding outage from wrong configuration","text":"

Because the ingress controller works using the synchronization loop pattern, it applies the configuration for all matching objects. In case some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, NGINX does not reload, and hence no further Ingresses will be taken into account.

To prevent this situation from happening, the Ingress-Nginx Controller optionally exposes a validating admission webhook server to ensure the validity of incoming Ingress objects. This webhook appends the incoming Ingress objects to the list of Ingresses, generates the configuration, and calls nginx to ensure the configuration has no syntax errors.
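
As a sketch of the webhook in action, applying an Ingress with a syntactically broken configuration-snippet annotation should be denied at admission time instead of breaking the running configuration (the Ingress name and host below are hypothetical, and snippet annotations must be enabled in your installation):

kubectl apply -f - <<'EOF'\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: broken-snippet\n  annotations:\n    nginx.ingress.kubernetes.io/configuration-snippet: \"this is not valid nginx syntax\"\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: demo.localdev.me\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: demo\n            port:\n              number: 80\nEOF\n# expected: the admission webhook rejects the object with an nginx syntax error\n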

"},{"location":"kubectl-plugin/","title":"kubectl plugin","text":""},{"location":"kubectl-plugin/#the-ingress-nginx-kubectl-plugin","title":"The ingress-nginx kubectl plugin","text":""},{"location":"kubectl-plugin/#installation","title":"Installation","text":"

Install krew, then run

kubectl krew install ingress-nginx\n

to install the plugin. Then run

kubectl ingress-nginx --help\n

to make sure the plugin is properly installed and to get a list of commands:

kubectl ingress-nginx --help\nA kubectl plugin for inspecting your ingress-nginx deployments\n\nUsage:\n  ingress-nginx [command]\n\nAvailable Commands:\n  backends    Inspect the dynamic backend information of an ingress-nginx instance\n  certs       Output the certificate data stored in an ingress-nginx pod\n  conf        Inspect the generated nginx.conf\n  exec        Execute a command inside an ingress-nginx pod\n  general     Inspect the other dynamic ingress-nginx information\n  help        Help about any command\n  info        Show information about the ingress-nginx service\n  ingresses   Provide a short summary of all of the ingress definitions\n  lint        Inspect kubernetes resources for possible issues\n  logs        Get the kubernetes logs for an ingress-nginx pod\n  ssh         ssh into a running ingress-nginx pod\n\nFlags:\n      --as string                      Username to impersonate for the operation\n      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n      --cache-dir string               Default HTTP cache directory (default \"/Users/alexkursell/.kube/http-cache\")\n      --certificate-authority string   Path to a cert file for the certificate authority\n      --client-certificate string      Path to a client certificate file for TLS\n      --client-key string              Path to a client key file for TLS\n      --cluster string                 The name of the kubeconfig cluster to use\n      --context string                 The name of the kubeconfig context to use\n  -h, --help                           help for ingress-nginx\n      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n      --kubeconfig string              Path to the kubeconfig file to use for CLI requests.\n  -n, --namespace string               If present, the namespace scope for this CLI request\n      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\")\n  -s, --server string                  The address and port of the Kubernetes API server\n      --token string                   Bearer token for authentication to the API server\n      --user string                    The name of the kubeconfig user to use\n\nUse \"ingress-nginx [command] --help\" for more information about a command.\n
"},{"location":"kubectl-plugin/#common-flags","title":"Common Flags","text":"
  • Every subcommand supports the basic kubectl configuration flags like --namespace, --context, --client-key and so on.
  • Subcommands that act on a particular ingress-nginx pod (backends, certs, conf, exec, general, logs, ssh) support the --deployment <deployment>, --pod <pod>, and --container <container> flags to select either a pod from a deployment with the given name, or a pod with the given name (and the given container name). The --deployment flag defaults to ingress-nginx-controller, and the --container flag defaults to controller. See the example after this list.
  • Subcommands that inspect resources (ingresses, lint) support the --all-namespaces flag, which causes them to inspect resources in every namespace.
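
For example, to dump the generated configuration from one specific controller pod (pod name hypothetical):

kubectl ingress-nginx conf -n ingress-nginx --pod ingress-nginx-controller-7cbf77c976-wx5pn\n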
"},{"location":"kubectl-plugin/#subcommands","title":"Subcommands","text":"

Note that backends, general, certs, and conf require ingress-nginx version 0.23.0 or higher.

"},{"location":"kubectl-plugin/#backends","title":"backends","text":"

Run kubectl ingress-nginx backends to get a JSON array of the backends that an ingress-nginx controller currently knows about:

$ kubectl ingress-nginx backends -n ingress-nginx\n[\n  {\n    \"name\": \"default-apple-service-5678\",\n    \"service\": {\n      \"metadata\": {\n        \"creationTimestamp\": null\n      },\n      \"spec\": {\n        \"ports\": [\n          {\n            \"protocol\": \"TCP\",\n            \"port\": 5678,\n            \"targetPort\": 5678\n          }\n        ],\n        \"selector\": {\n          \"app\": \"apple\"\n        },\n        \"clusterIP\": \"10.97.230.121\",\n        \"type\": \"ClusterIP\",\n        \"sessionAffinity\": \"None\"\n      },\n      \"status\": {\n        \"loadBalancer\": {}\n      }\n    },\n    \"port\": 0,\n    \"sslPassthrough\": false,\n    \"endpoints\": [\n      {\n        \"address\": \"10.1.3.86\",\n        \"port\": \"5678\"\n      }\n    ],\n    \"sessionAffinityConfig\": {\n      \"name\": \"\",\n      \"cookieSessionAffinity\": {\n        \"name\": \"\"\n      }\n    },\n    \"upstreamHashByConfig\": {\n      \"upstream-hash-by-subset-size\": 3\n    },\n    \"noServer\": false,\n    \"trafficShapingPolicy\": {\n      \"weight\": 0,\n      \"header\": \"\",\n      \"headerValue\": \"\",\n      \"cookie\": \"\"\n    }\n  },\n  {\n    \"name\": \"default-echo-service-8080\",\n    ...\n  },\n  {\n    \"name\": \"upstream-default-backend\",\n    ...\n  }\n]\n

Add the --list option to show only the backend names. Add the --backend <backend> option to show only the backend with the given name.

"},{"location":"kubectl-plugin/#certs","title":"certs","text":"

Use kubectl ingress-nginx certs --host <hostname> to dump the SSL cert/key information for a given host.

WARNING: This command will dump sensitive private key information. Don't blindly share the output, and certainly don't log it anywhere.

$ kubectl ingress-nginx certs -n ingress-nginx --host testaddr.local\n-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n\n-----BEGIN RSA PRIVATE KEY-----\n<REDACTED! DO NOT SHARE THIS!>\n-----END RSA PRIVATE KEY-----\n
"},{"location":"kubectl-plugin/#conf","title":"conf","text":"

Use kubectl ingress-nginx conf to dump the generated nginx.conf file. Add the --host <hostname> option to view only the server block for that host:

kubectl ingress-nginx conf -n ingress-nginx --host testaddr.local\n\n    server {\n        server_name testaddr.local ;\n\n        listen 80;\n\n        set $proxy_upstream_name \"-\";\n        set $pass_access_scheme $scheme;\n        set $pass_server_port $server_port;\n        set $best_http_host $http_host;\n        set $pass_port $pass_server_port;\n\n        location / {\n\n            set $namespace      \"\";\n            set $ingress_name   \"\";\n            set $service_name   \"\";\n            set $service_port   \"0\";\n            set $location_path  \"/\";\n\n...\n
"},{"location":"kubectl-plugin/#exec","title":"exec","text":"

kubectl ingress-nginx exec is exactly the same as kubectl exec, with the same command flags. It will automatically choose an ingress-nginx pod to run the command in.

$ kubectl ingress-nginx exec -i -n ingress-nginx -- ls /etc/nginx\nfastcgi_params\ngeoip\nlua\nmime.types\nmodsecurity\nmodules\nnginx.conf\nopentracing.json\nopentelemetry.toml\nowasp-modsecurity-crs\ntemplate\n
"},{"location":"kubectl-plugin/#info","title":"info","text":"

Shows the internal and external IP/CNAMEs for an ingress-nginx service.

$ kubectl ingress-nginx info -n ingress-nginx\nService cluster IP address: 10.187.253.31\nLoadBalancer IP|CNAME: 35.123.123.123\n

Use the --service <service> flag if your ingress-nginx LoadBalancer service is not named ingress-nginx.

"},{"location":"kubectl-plugin/#ingresses","title":"ingresses","text":"

kubectl ingress-nginx ingresses, alternatively kubectl ingress-nginx ing, shows a more detailed view of the ingress definitions in a namespace.

Compare:

$ kubectl get ingresses --all-namespaces\nNAMESPACE   NAME               HOSTS                            ADDRESS     PORTS   AGE\ndefault     example-ingress1   testaddr.local,testaddr2.local   localhost   80      5d\ndefault     test-ingress-2     *                                localhost   80      5d\n

vs.

$ kubectl ingress-nginx ingresses --all-namespaces\nNAMESPACE   INGRESS NAME       HOST+PATH                        ADDRESSES   TLS   SERVICE         SERVICE PORT   ENDPOINTS\ndefault     example-ingress1   testaddr.local/etameta           localhost   NO    pear-service    5678           5\ndefault     example-ingress1   testaddr2.local/otherpath        localhost   NO    apple-service   5678           1\ndefault     example-ingress1   testaddr2.local/otherotherpath   localhost   NO    pear-service    5678           5\ndefault     test-ingress-2     *                                localhost   NO    echo-service    8080           2\n
"},{"location":"kubectl-plugin/#lint","title":"lint","text":"

kubectl ingress-nginx lint can check a namespace or entire cluster for potential configuration issues. This command is especially useful when upgrading between ingress-nginx versions.

$ kubectl ingress-nginx lint --all-namespaces --verbose\nChecking ingresses...\n\u2717 anamespace/this-nginx\n  - Contains the removed session-cookie-hash annotation.\n       Lint added for version 0.24.0\n       https://github.com/kubernetes/ingress-nginx/issues/3743\n\u2717 othernamespace/ingress-definition-blah\n  - The rewrite-target annotation value does not reference a capture group\n      Lint added for version 0.22.0\n      https://github.com/kubernetes/ingress-nginx/issues/3174\n\nChecking deployments...\n\u2717 namespace2/ingress-nginx-controller\n  - Uses removed config flag --sort-backends\n      Lint added for version 0.22.0\n      https://github.com/kubernetes/ingress-nginx/issues/3655\n  - Uses removed config flag --enable-dynamic-certificates\n      Lint added for version 0.24.0\n      https://github.com/kubernetes/ingress-nginx/issues/3808\n

To show the lints added only for a particular ingress-nginx release, use the --from-version and --to-version flags:

$ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0.24.0 --to-version 0.24.0\nChecking ingresses...\n\u2717 anamespace/this-nginx\n  - Contains the removed session-cookie-hash annotation.\n       Lint added for version 0.24.0\n       https://github.com/kubernetes/ingress-nginx/issues/3743\n\nChecking deployments...\n\u2717 namespace2/ingress-nginx-controller\n  - Uses removed config flag --enable-dynamic-certificates\n      Lint added for version 0.24.0\n      https://github.com/kubernetes/ingress-nginx/issues/3808\n
"},{"location":"kubectl-plugin/#logs","title":"logs","text":"

kubectl ingress-nginx logs is almost the same as kubectl logs, with fewer flags. It will automatically choose an ingress-nginx pod to read logs from.

$ kubectl ingress-nginx logs -n ingress-nginx\n-------------------------------------------------------------------------------\nNGINX Ingress controller\n  Release:    dev\n  Build:      git-48dc3a867\n  Repository: git@github.com:kubernetes/ingress-nginx.git\n-------------------------------------------------------------------------------\n\nW0405 16:53:46.061589       7 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)\nnginx version: nginx/1.15.9\nW0405 16:53:46.070093       7 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.\nI0405 16:53:46.070499       7 main.go:205] Creating API client for https://10.96.0.1:443\nI0405 16:53:46.077784       7 main.go:249] Running in Kubernetes cluster version v1.10 (v1.10.11) - git (clean) commit 637c7e288581ee40ab4ca210618a89a555b6e7e9 - platform linux/amd64\nI0405 16:53:46.183359       7 nginx.go:265] Starting NGINX Ingress controller\nI0405 16:53:46.193913       7 event.go:209] Event(v1.ObjectReference{Kind:\"ConfigMap\", Namespace:\"ingress-nginx\", Name:\"udp-services\", UID:\"82258915-563e-11e9-9c52-025000000001\", APIVersion:\"v1\", ResourceVersion:\"494\", FieldPath:\"\"}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services\n...\n
"},{"location":"kubectl-plugin/#ssh","title":"ssh","text":"

kubectl ingress-nginx ssh is exactly the same as kubectl ingress-nginx exec -it -- /bin/bash. Use it when you want to quickly be dropped into a shell inside a running ingress-nginx container.

$ kubectl ingress-nginx ssh -n ingress-nginx\nwww-data@ingress-nginx-controller-7cbf77c976-wx5pn:/etc/nginx$\n
"},{"location":"lua_tests/","title":"Lua Tests","text":""},{"location":"lua_tests/#running-the-lua-tests","title":"Running the Lua Tests","text":"

To run the Lua tests, run the following from the root directory:

make lua-test\n

This command makes use of Docker, hence it does not require any dependencies besides Docker itself.

"},{"location":"lua_tests/#where-are-the-lua-tests","title":"Where are the Lua Tests?","text":"

The Lua tests can be found in the rootfs/etc/nginx/lua/test directory.

"},{"location":"troubleshooting/","title":"Troubleshooting","text":""},{"location":"troubleshooting/#troubleshooting","title":"Troubleshooting","text":""},{"location":"troubleshooting/#ingress-controller-logs-and-events","title":"Ingress-Controller Logs and Events","text":"

There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information.

"},{"location":"troubleshooting/#check-the-ingress-resource-events","title":"Check the Ingress Resource Events","text":"
$ kubectl get ing -n <namespace-of-ingress-resource>\nNAME           HOSTS      ADDRESS     PORTS     AGE\ncafe-ingress   cafe.com   10.0.2.15   80        25s\n\n$ kubectl describe ing <ingress-resource-name> -n <namespace-of-ingress-resource>\nName:             cafe-ingress\nNamespace:        default\nAddress:          10.0.2.15\nDefault backend:  default-http-backend:80 (172.17.0.5:8080)\nRules:\n  Host      Path  Backends\n  ----      ----  --------\n  cafe.com\n            /tea      tea-svc:80 (<none>)\n            /coffee   coffee-svc:80 (<none>)\nAnnotations:\n  kubectl.kubernetes.io/last-applied-configuration:  {\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"/apis/networking/v1/namespaces/default/ingresses/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}}\n\nEvents:\n  Type    Reason  Age   From                      Message\n  ----    ------  ----  ----                      -------\n  Normal  CREATE  1m    ingress-nginx-controller  Ingress default/cafe-ingress\n  Normal  UPDATE  58s   ingress-nginx-controller  Ingress default/cafe-ingress\n
"},{"location":"troubleshooting/#check-the-ingress-controller-logs","title":"Check the Ingress Controller Logs","text":"
$ kubectl get pods -n <namespace-of-ingress-controller>\nNAME                                        READY     STATUS    RESTARTS   AGE\ningress-nginx-controller-67956bf89d-fv58j   1/1       Running   0          1m\n\n$ kubectl logs -n <namespace> ingress-nginx-controller-67956bf89d-fv58j\n-------------------------------------------------------------------------------\nNGINX Ingress controller\n  Release:    0.14.0\n  Build:      git-734361d\n  Repository: https://github.com/kubernetes/ingress-nginx\n-------------------------------------------------------------------------------\n....\n
"},{"location":"troubleshooting/#check-the-nginx-configuration","title":"Check the Nginx Configuration","text":"
$ kubectl get pods -n <namespace-of-ingress-controller>\nNAME                                        READY     STATUS    RESTARTS   AGE\ningress-nginx-controller-67956bf89d-fv58j   1/1       Running   0          1m\n\n$ kubectl exec -it -n <namespace-of-ingress-controller> ingress-nginx-controller-67956bf89d-fv58j -- cat /etc/nginx/nginx.conf\ndaemon off;\nworker_processes 2;\npid /run/nginx.pid;\nworker_rlimit_nofile 523264;\nworker_shutdown_timeout 240s;\nevents {\n    multi_accept        on;\n    worker_connections  16384;\n    use                 epoll;\n}\nhttp {\n....\n
"},{"location":"troubleshooting/#check-if-used-services-exist","title":"Check if used Services Exist","text":"
$ kubectl get svc --all-namespaces\nNAMESPACE     NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE\ndefault       coffee-svc             ClusterIP   10.106.154.35    <none>        80/TCP          18m\ndefault       kubernetes             ClusterIP   10.96.0.1        <none>        443/TCP         30m\ndefault       tea-svc                ClusterIP   10.104.172.12    <none>        80/TCP          18m\nkube-system   default-http-backend   NodePort    10.108.189.236   <none>        80:30001/TCP    30m\nkube-system   kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   30m\nkube-system   kubernetes-dashboard   NodePort    10.103.128.17    <none>        80:30000/TCP    30m\n
"},{"location":"troubleshooting/#debug-logging","title":"Debug Logging","text":"

Using the flag --v=XX, it is possible to increase the level of logging. This is performed by editing the deployment (see the patch example after this list).

$ kubectl get deploy -n <namespace-of-ingress-controller>\nNAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE\ndefault-http-backend       1         1         1            1           35m\ningress-nginx-controller   1         1         1            1           35m\n\n$ kubectl edit deploy -n <namespace-of-ingress-controller> ingress-nginx-controller\n# Add --v=X to \"- args\", where X is an integer\n
  • --v=2 shows details using diff about the changes in the configuration in nginx
  • --v=3 shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format
  • --v=5 configures NGINX in debug mode
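
If you prefer a non-interactive change, a JSON patch can append the flag to the container's args (the container index 0 below is an assumption; verify it against your deployment):

kubectl patch deployment -n ingress-nginx ingress-nginx-controller --type=json \\\n  -p='[{\"op\": \"add\", \"path\": \"/spec/template/spec/containers/0/args/-\", \"value\": \"--v=3\"}]'\n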
"},{"location":"troubleshooting/#authentication-to-the-kubernetes-api-server","title":"Authentication to the Kubernetes API Server","text":"

A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file.

Both authentications must work:

+-------------+   service          +------------+\n|             |   authentication   |            |\n+  apiserver  +<-------------------+  ingress   |\n|             |                    | controller |\n+-------------+                    +------------+\n

Service authentication

The Ingress controller needs information from the apiserver. Therefore, authentication is required, which can be achieved in a couple of ways:

  • Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details.

  • Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. Using --kubeconfig does not require the flag --apiserver-host. The format of the file is identical to ~/.kube/config which is used by kubectl to connect to the API server. See 'kubeconfig' section for details.

  • Using the flag --apiserver-host: With the flag --apiserver-host=http://localhost:8080, it is possible to specify an unsecured API server or reach a remote Kubernetes cluster using kubectl proxy. Please do not use this approach in production.

In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left hand side.

Kubernetes                                                  Workstation\n+---------------------------------------------------+     +------------------+\n|                                                   |     |                  |\n|  +-----------+   apiserver        +------------+  |     |  +------------+  |\n|  |           |   proxy            |            |  |     |  |            |  |\n|  | apiserver |                    |  ingress   |  |     |  |  ingress   |  |\n|  |           |                    | controller |  |     |  | controller |  |\n|  |           |                    |            |  |     |  |            |  |\n|  |           |                    |            |  |     |  |            |  |\n|  |           |  service account/  |            |  |     |  |            |  |\n|  |           |  kubeconfig        |            |  |     |  |            |  |\n|  |           +<-------------------+            |  |     |  |            |  |\n|  |           |                    |            |  |     |  |            |  |\n|  +------+----+      kubeconfig    +------+-----+  |     |  +------+-----+  |\n|         |<--------------------------------------------------------|        |\n|                                                   |     |                  |\n+---------------------------------------------------+     +------------------+\n
"},{"location":"troubleshooting/#service-account","title":"Service Account","text":"

If using a service account to connect to the API server, the ingress-controller expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server.

Verify with the following commands:

# start a container that contains curl\n$ kubectl run -it --rm test --image=curlimages/curl --restart=Never -- /bin/sh\n\n# check if secret exists\n/ $ ls /var/run/secrets/kubernetes.io/serviceaccount/\nca.crt     namespace  token\n/ $\n\n# check base connectivity from cluster inside\n/ $ curl -k https://kubernetes.default.svc.cluster.local\n{\n  \"kind\": \"Status\",\n  \"apiVersion\": \"v1\",\n  \"metadata\": {\n\n  },\n  \"status\": \"Failure\",\n  \"message\": \"forbidden: User \\\"system:anonymous\\\" cannot get path \\\"/\\\"\",\n  \"reason\": \"Forbidden\",\n  \"details\": {\n\n  },\n  \"code\": 403\n}/ $\n\n# connect using tokens\n}/ $ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H  \"Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://kubernetes.default.svc.cluster.local\n&& echo\n{\n  \"paths\": [\n    \"/api\",\n    \"/api/v1\",\n    \"/apis\",\n    \"/apis/\",\n    ... TRUNCATED\n    \"/readyz/shutdown\",\n    \"/version\"\n  ]\n}\n/ $\n\n# when you type `exit` or `^D` the test pod will be deleted.\n

If it is not working, there are two possible reasons:

  1. The contents of the tokens are invalid. Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret <name>. It will automatically be recreated.

  2. You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter.

    Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers.

More information:

  • User Guide: Service Accounts
  • Cluster Administrator Guide: Managing Service Accounts
"},{"location":"troubleshooting/#kube-config","title":"Kube-Config","text":"

If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment.

"},{"location":"troubleshooting/#using-gdb-with-nginx","title":"Using GDB with Nginx","text":"

GDB can be used with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations.

Note: The below is based on the nginx documentation.

  1. SSH into the worker

    $ ssh user@workerIP\n
  2. Obtain the Docker Container Running nginx

    $ docker ps | grep ingress-nginx-controller\nCONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES\nd9e1d243156a        registry.k8s.io/ingress-nginx/controller   \"/usr/bin/dumb-init \u2026\"   19 minutes ago      Up 19 minutes                                                                            k8s_ingress-nginx-controller_ingress-nginx-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0\n
  3. Exec into the container

    $ docker exec -it --user=0 --privileged d9e1d243156a bash\n
  4. Make sure nginx is running in --with-debug

    $ nginx -V 2>&1 | grep -- '--with-debug'\n
  5. Get list of processes running on container

    $ ps -ef\nUID        PID  PPID  C STIME TTY          TIME CMD\nroot         1     0  0 20:23 ?        00:00:00 /usr/bin/dumb-init /nginx-ingres\nroot         5     1  0 20:23 ?        00:00:05 /ingress-nginx-controller --defa\nroot        21     5  0 20:23 ?        00:00:00 nginx: master process /usr/sbin/\nnobody     106    21  0 20:23 ?        00:00:00 nginx: worker process\nnobody     107    21  0 20:23 ?        00:00:00 nginx: worker process\nroot       172     0  0 20:43 pts/0    00:00:00 bash\n
  6. Attach gdb to the nginx master process

    $ gdb -p 21\n....\nAttaching to process 21\nReading symbols from /usr/sbin/nginx...done.\n....\n(gdb)\n
  7. Copy and paste the following:

    set $cd = ngx_cycle->config_dump\nset $nelts = $cd.nelts\nset $elts = (ngx_conf_dump_t*)($cd.elts)\nwhile ($nelts-- > 0)\nset $name = $elts[$nelts]->name.data\nprintf \"Dumping %s to nginx_conf.txt\\n\", $name\nappend memory nginx_conf.txt \\\n        $elts[$nelts]->buffer.start $elts[$nelts]->buffer.end\nend\n
  8. Quit GDB by pressing CTRL+D

  9. Open nginx_conf.txt

    cat nginx_conf.txt\n
"},{"location":"troubleshooting/#image-related-issues-faced-on-nginx-425-or-other-versions-helm-chart-versions","title":"Image related issues faced on Nginx 4.2.5 or other versions (Helm chart versions)","text":"
  1. In case you face the below error while installing ingress-nginx using the Helm chart (either via helm commands or the helm_release Terraform provider):

    Warning  Failed     5m5s (x4 over 6m34s)   kubelet            Failed to pull image \"registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to resolve reference \"registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to do request: Head \"https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": EOF\n
    Then please follow the below steps.

  2. During troubleshooting, you can also execute the below commands to test connectivity from your local machine to the repositories:

    a. curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null

    (\u2388 |myprompt)\u279c  ~ curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null\n                    % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current\n                                                    Dload  Upload   Total   Spent    Left  Speed\n                    0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0\n (\u2388 |myprompt)\u279c  ~\n
    b. curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
    (\u2388 |myprompt)\u279c  ~ curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\n                                    HTTP/2 200\n                                    docker-distribution-api-version: registry/2.0\n                                    content-type: application/vnd.docker.distribution.manifest.list.v2+json\n                                    docker-content-digest: sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\n                                    content-length: 1384\n                                    date: Wed, 28 Sep 2022 16:46:28 GMT\n                                    server: Docker Registry\n                                    x-xss-protection: 0\n                                    x-frame-options: SAMEORIGIN\n                                    alt-svc: h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"\n\n  (\u2388 |myprompt)\u279c  ~\n
    Redirection is implemented in the registry proxy to ensure that images can be pulled.

  3. The recommended solution is to whitelist the below image repositories:

    *.appspot.com    \n*.k8s.io        \n*.pkg.dev\n*.gcr.io\n
    More details about the above repos:
    a. *.k8s.io -> ensures you can pull any image from registry.k8s.io.
    b. *.gcr.io -> GCP services are used for image hosting; this is one of the domains suggested by GCP to allow and ensure users can pull images from their container registry services.
    c. *.appspot.com -> a Google domain, part of the domains used for GCR.

"},{"location":"troubleshooting/#unable-to-listen-on-port-80443","title":"Unable to listen on port (80/443)","text":"

One possible reason for this error is lack of permission to bind to the port. Ports 80, 443, and any other port < 1024 are Linux privileged ports which historically could only be bound by root. The ingress-nginx-controller uses the CAP_NET_BIND_SERVICE Linux capability to allow binding these ports as a normal user (www-data / 101). This involves two components:

  1. In the image, the /nginx-ingress-controller file has the cap_net_bind_service capability added (e.g. via setcap).
  2. The NET_BIND_SERVICE capability is added to the container in the containerSecurityContext of the deployment.

If encountering this on one or some node(s) and not on others, try to purge and pull a fresh copy of the image to the affected node(s), in case the underlying image layers have been corrupted and the executable has lost the capability.

"},{"location":"troubleshooting/#create-a-test-pod","title":"Create a test pod","text":"

The /nginx-ingress-controller process exits/crashes when encountering this error, making it difficult to troubleshoot what is happening inside the container. To get around this, start an equivalent container running \"sleep 3600\", and exec into it for further troubleshooting. For example:

apiVersion: v1\nkind: Pod\nmetadata:\n  name: ingress-nginx-sleep\n  namespace: default\n  labels:\n    app: nginx\nspec:\n  containers:\n    - name: nginx\n      image: ##_CONTROLLER_IMAGE_##\n      resources:\n        requests:\n          memory: \"512Mi\"\n          cpu: \"500m\"\n        limits:\n          memory: \"1Gi\"\n          cpu: \"1\"\n      command: [\"sleep\"]\n      args: [\"3600\"]\n      ports:\n      - containerPort: 80\n        name: http\n        protocol: TCP\n      - containerPort: 443\n        name: https\n        protocol: TCP\n      securityContext:\n        allowPrivilegeEscalation: true\n        capabilities:\n          add:\n          - NET_BIND_SERVICE\n          drop:\n          - ALL\n        runAsUser: 101\n  restartPolicy: Never\n  nodeSelector:\n    kubernetes.io/hostname: ##_NODE_NAME_##\n  tolerations:\n  - key: \"node.kubernetes.io/unschedulable\"\n    operator: \"Exists\"\n    effect: NoSchedule\n
  • update the namespace if applicable/desired
  • replace ##_NODE_NAME_## with the problematic node (or remove the nodeSelector section if the problem is not confined to one node)
  • replace ##_CONTROLLER_IMAGE_## with the same image as in use by your ingress-nginx deployment
  • confirm the securityContext section matches what is in place for ingress-nginx-controller pods in your cluster

Apply the YAML and open a shell into the pod. Try to manually run the controller process:

$ /nginx-ingress-controller\n
You should get the same error as from the ingress controller pod logs.

Confirm the capabilities are properly surfacing into the pod:

$ grep CapBnd /proc/1/status\nCapBnd: 0000000000000400\n
The above value has only net_bind_service enabled (per the security context in the YAML, which adds that capability and drops all others). If you get a different value, you can decode it on another Linux box (capsh is not available in this container) as shown below, and then figure out why the specified capabilities are not propagating into the pod/container.
$ capsh --decode=0000000000000400\n0x0000000000000400=cap_net_bind_service\n

"},{"location":"troubleshooting/#create-a-test-pod-as-root","title":"Create a test pod as root","text":"

(Note: this may be restricted by PodSecurityPolicy, PodSecurityAdmission/Standards, OPA Gatekeeper, etc., in which case you will need an appropriate workaround for testing, e.g. deploying in a new namespace without the restrictions.) To test further you may want to install additional utilities. Modify the pod YAML by:

  • changing runAsUser from 101 to 0
  • removing the \"drop..ALL\" section from the capabilities.

Some things to try after shelling into this container:

Try running the controller as the www-data (101) user:

$ chmod 4755 /nginx-ingress-controller\n$ /nginx-ingress-controller\n
Examine the errors to see if there is still an issue listening on the port or if it passed that and moved on to other expected errors due to running out of context.

Install the libcap package and check capabilities on the file:

$ apk add libcap\n(1/1) Installing libcap (2.50-r0)\nExecuting busybox-1.33.1-r7.trigger\nOK: 26 MiB in 41 packages\n$ getcap /nginx-ingress-controller\n/nginx-ingress-controller cap_net_bind_service=ep\n
(if missing, see above about purging image on the server and re-pulling)

Strace the executable to see what system calls are being executed when it fails:

$ apk add strace\n(1/1) Installing strace (5.12-r0)\nExecuting busybox-1.33.1-r7.trigger\nOK: 28 MiB in 42 packages\n$ strace /nginx-ingress-controller\nexecve(\"/nginx-ingress-controller\", [\"/nginx-ingress-controller\"], 0x7ffeb9eb3240 /* 131 vars */) = 0\narch_prctl(ARCH_SET_FS, 0x29ea690)      = 0\n...\n

"},{"location":"deploy/","title":"Installation Guide","text":"

There are multiple ways to install the Ingress-Nginx Controller:

  • with Helm, using the project repository chart;
  • with kubectl apply, using YAML manifests;
  • with specific addons (e.g. for minikube or MicroK8s).

On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to get started as fast as possible, you can check the quick start instructions. However, in many environments, you can improve the performance or get better logs by enabling extra features. We recommend that you check the environment-specific instructions for details about optimizing the ingress controller for your particular environment or cloud provider.

"},{"location":"deploy/#contents","title":"Contents","text":"
  • Quick start

  • Environment-specific instructions

  • ... Docker Desktop
  • ... Rancher Desktop
  • ... minikube
  • ... MicroK8s
  • ... AWS
  • ... GCE - GKE
  • ... Azure
  • ... Digital Ocean
  • ... Scaleway
  • ... Exoscale
  • ... Oracle Cloud Infrastructure
  • ... OVHcloud
  • ... Bare-metal
  • Miscellaneous
"},{"location":"deploy/#quick-start","title":"Quick start","text":"

If you have Helm, you can deploy the ingress controller with the following command:

helm upgrade --install ingress-nginx ingress-nginx \\\n  --repo https://kubernetes.github.io/ingress-nginx \\\n  --namespace ingress-nginx --create-namespace\n

It will install the controller in the ingress-nginx namespace, creating that namespace if it doesn't already exist.

Info

This command is idempotent:

  • if the ingress controller is not installed, it will install it,
  • if the ingress controller is already installed, it will upgrade it.

If you want a full list of values that you can set while installing with Helm, then run:

helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx\n

Helm install on AWS/GCP/Azure/Other providers

The ingress-nginx-controller Helm chart is a generic install out of the box. The default set of Helm values is not configured for installation on any particular infra provider. The annotations that are applicable to the cloud provider must be customized by the user. See AWS LB Controller. Examples of some annotations needed for the service resource of --type LoadBalancer on AWS are below (see the Helm values sketch after this block for one way to apply them):

  annotations:\n    service.beta.kubernetes.io/aws-load-balancer-scheme: \"internet-facing\"\n    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp\n    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: \"true\"\n    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: \"ip\"\n    service.beta.kubernetes.io/aws-load-balancer-type: nlb\n    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: \"true\"\n    service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: \"true\"\n    service.beta.kubernetes.io/aws-load-balancer-security-groups: \"sg-something1 sg-something2\"\n    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: \"somebucket\"\n    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: \"ingress-nginx\"\n    service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: \"5\"\n
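
If you manage the installation with Helm, the same annotations can instead be supplied under controller.service.annotations in a values file, for example (a minimal sketch reusing two of the annotations above):

cat > aws-nlb-values.yaml <<'EOF'\ncontroller:\n  service:\n    annotations:\n      service.beta.kubernetes.io/aws-load-balancer-type: nlb\n      service.beta.kubernetes.io/aws-load-balancer-scheme: \"internet-facing\"\nEOF\nhelm upgrade --install ingress-nginx ingress-nginx \\\n  --repo https://kubernetes.github.io/ingress-nginx \\\n  --namespace ingress-nginx --create-namespace \\\n  -f aws-nlb-values.yaml\n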

If you don't have Helm or if you prefer to use a YAML manifest, you can run the following command instead:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/cloud/deploy.yaml\n

Info

The YAML manifest in the command above was generated with helm template, so you will end up with almost the same resources as if you had used Helm to install the controller.

Attention

If you are running an old version of Kubernetes (1.18 or earlier), please read this paragraph for specific instructions. Because of api deprecations, the default manifest may not work on your cluster. Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider.

"},{"location":"deploy/#firewall-configuration","title":"Firewall configuration","text":"

To check which ports are used by your installation of ingress-nginx, look at the output of kubectl -n ingress-nginx get pod -o yaml. In general, you need:

  • Port 8443 open between all hosts on which the kubernetes nodes are running. This is used for the ingress-nginx admission controller.
  • Port 80 (for HTTP) and/or 443 (for HTTPS) open to the public on the kubernetes nodes to which the DNS of your apps are pointing.
"},{"location":"deploy/#pre-flight-check","title":"Pre-flight check","text":"

A few pods should start in the ingress-nginx namespace:

kubectl get pods --namespace=ingress-nginx\n

After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready:

kubectl wait --namespace ingress-nginx \\\n  --for=condition=ready pod \\\n  --selector=app.kubernetes.io/component=controller \\\n  --timeout=120s\n
"},{"location":"deploy/#local-testing","title":"Local testing","text":"

Let's create a simple web server and the associated service:

kubectl create deployment demo --image=httpd --port=80\nkubectl expose deployment demo\n

Then create an ingress resource. The following example uses a host that maps to localhost:

kubectl create ingress demo-localhost --class=nginx \\\n  --rule=\"demo.localdev.me/*=demo:80\"\n

Now, forward a local port to the ingress controller:

kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80\n

Info

A note on DNS and network connectivity: this documentation assumes that the user is aware of the DNS and network routing aspects involved in using Ingress. The port-forwarding mentioned above is the easiest way to demo a working Ingress. The \"kubectl port-forward...\" command above forwards port 8080 on the local machine's TCP/IP stack, where the command was typed, to port 80 of the Service created by the installation of the ingress-nginx controller. Traffic sent to port 8080 on localhost therefore reaches port 80 of the ingress controller's Service. Port-forwarding is not for production use; here we use it to simulate an HTTP request originating from outside the cluster and reaching the Service of the ingress-nginx controller, which is exposed to receive traffic from outside the cluster.

This issue shows a typical DNS problem and its solution.

At this point, you can access your deployment using curl:

curl --resolve demo.localdev.me:8080:127.0.0.1 http://demo.localdev.me:8080\n

You should see an HTML response containing text like \"It works!\".

"},{"location":"deploy/#online-testing","title":"Online testing","text":"

If your Kubernetes cluster is a \"real\" cluster that supports services of type LoadBalancer, it will have allocated an external IP address or FQDN to the ingress controller.

You can see that IP address or FQDN with the following command:

kubectl get service ingress-nginx-controller --namespace=ingress-nginx\n

It will be the EXTERNAL-IP field. If that field shows <pending>, this means that your Kubernetes cluster wasn't able to provision the load balancer (generally, this is because it doesn't support services of type LoadBalancer).

Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress resource. The following example assumes that you have set up a DNS record for www.demo.io:

kubectl create ingress demo --class=nginx \\\n  --rule=\"www.demo.io/*=demo:80\"\n

Alternatively, the --rule flag of the above command can be written as shown below:

kubectl create ingress demo --class=nginx \\\n  --rule www.demo.io/=demo:80\n

You should then be able to see the \"It works!\" page when you connect to http://www.demo.io/. Congratulations, you are serving a public website hosted on a Kubernetes cluster! \ud83c\udf89

"},{"location":"deploy/#environment-specific-instructions","title":"Environment-specific instructions","text":""},{"location":"deploy/#local-development-clusters","title":"Local development clusters","text":""},{"location":"deploy/#minikube","title":"minikube","text":"

The ingress controller can be installed through minikube's addons system:

minikube addons enable ingress\n
"},{"location":"deploy/#microk8s","title":"MicroK8s","text":"

The ingress controller can be installed through MicroK8s's addons system:

microk8s enable ingress\n

Please check the MicroK8s documentation page for details.

"},{"location":"deploy/#docker-desktop","title":"Docker Desktop","text":"

Kubernetes is available in Docker Desktop:

  • Mac, from version 18.06.0-ce
  • Windows, from version 18.06.0-ce

First, make sure that Kubernetes is enabled in the Docker settings. The command kubectl get nodes should show a single node called docker-desktop.

The ingress controller can be installed on Docker Desktop using the default quick start instructions.

On most systems, if you don't have any other service of type LoadBalancer bound to port 80, the ingress controller will be assigned the EXTERNAL-IP of localhost, which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the kubectl port-forward method described in the local testing section.

"},{"location":"deploy/#rancher-desktop","title":"Rancher Desktop","text":"

Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop.

Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use the Ingress-Nginx Controller in place of the default Traefik, disable Traefik from the Preferences > Kubernetes menu.

Once Traefik is disabled, the Ingress-Nginx Controller can be installed on Rancher Desktop using the default quick start instructions. Follow the instructions described in the local testing section to try a sample.

"},{"location":"deploy/#cloud-deployments","title":"Cloud deployments","text":"

If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the externalTrafficPolicy of the ingress controller Service to Local (instead of the default Cluster) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local to the helm install or helm upgrade command.

Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. --set controller.config.use-proxy-protocol=true) and in the cloud provider's load balancer configuration to function correctly.

In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.

"},{"location":"deploy/#aws","title":"AWS","text":"

In AWS, we use a Network load balancer (NLB) to expose the Ingress-Nginx Controller behind a Service of Type=LoadBalancer.

Info

The provided templates illustrate the setup for the legacy in-tree service load balancer for AWS NLB. AWS provides the documentation on how to use Network load balancing on Amazon EKS with the AWS Load Balancer Controller.

"},{"location":"deploy/#network-load-balancer-nlb","title":"Network Load Balancer (NLB)","text":"
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/aws/deploy.yaml\n
"},{"location":"deploy/#tls-termination-in-aws-load-balancer-nlb","title":"TLS termination in AWS Load Balancer (NLB)","text":"

By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB.

  1. Download the deploy.yaml template
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml\n
  2. Edit the file and change the VPC CIDR in use for the Kubernetes cluster:
proxy-real-ip-cidr: XXX.XXX.XXX/XX\n
  3. Change the AWS Certificate Manager (ACM) ID as well:
arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX\n
  4. Deploy the manifest:
kubectl apply -f deploy.yaml\n
"},{"location":"deploy/#nlb-idle-timeouts","title":"NLB Idle Timeouts","text":"

The idle timeout value for NLB TCP flows is 350 seconds and cannot be modified.

For this reason, you need to ensure the keepalive_timeout value is configured to less than 350 seconds to work as expected.

By default, NGINX keepalive_timeout is set to 75s.
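
For example, with Helm the controller's keep-alive ConfigMap option (which controls keepalive_timeout) can be set below the NLB limit; the option name here is an assumption based on the ingress-nginx ConfigMap reference, so verify it against your version:

helm upgrade --install ingress-nginx ingress-nginx \\\n  --repo https://kubernetes.github.io/ingress-nginx \\\n  --namespace ingress-nginx \\\n  --set-string controller.config.keep-alive=300\n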

More information with regard to timeouts can be found in the official AWS documentation

"},{"location":"deploy/#gce-gke","title":"GCE-GKE","text":"

First, your user needs to have cluster-admin permissions on the cluster. This can be done with the following command:

kubectl create clusterrolebinding cluster-admin-binding \\\n  --clusterrole cluster-admin \\\n  --user $(gcloud config get-value account)\n

Then, the ingress controller can be installed like this:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/cloud/deploy.yaml\n

Warning

For private clusters, you will need to either add a firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to port 80/tcp, 443/tcp and 10254/tcp to also allow access to port 8443/tcp. More information can be found in the Official GCP Documentation.

See the GKE documentation on adding rules and the Kubernetes issue for more detail.
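
As a sketch of such a firewall rule with the gcloud CLI (the rule name, network, master CIDR, and node tag below are placeholders to replace with your cluster's values):

gcloud compute firewall-rules create allow-master-to-ingress-webhook \\\n  --network <cluster-network> \\\n  --source-ranges <master-ipv4-cidr> \\\n  --target-tags <node-tag> \\\n  --allow tcp:8443\n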

Proxy protocol is supported in GCE; check the official documentation on how to enable it.

"},{"location":"deploy/#azure","title":"Azure","text":"
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/cloud/deploy.yaml\n

More information with regard to Azure annotations for ingress controller can be found in the official AKS documentation.

"},{"location":"deploy/#digital-ocean","title":"Digital Ocean","text":"
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/do/deploy.yaml\n
  • By default, the service object of the ingress-nginx-controller for DigitalOcean only configures one annotation: service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: \"true\". While this makes the service functional, it has been reported that the DigitalOcean load-balancer graphs show no data unless a few other annotations are also configured. Some of these other annotations require values that cannot be generic and hence are not forced in an out-of-the-box installation. These annotations, and a discussion of them, are well documented in this issue. Please refer to the issue to add annotations, with values specific to your setup, to get the DO-LB graphs populated with data (a Helm sketch for the default annotation follows).
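
For reference, that single default annotation could be set through Helm like this (the dots in the annotation key are escaped for Helm's --set syntax):

helm upgrade --install ingress-nginx ingress-nginx \\\n  --repo https://kubernetes.github.io/ingress-nginx \\\n  --namespace ingress-nginx --create-namespace \\\n  --set-string controller.service.annotations.\"service\\.beta\\.kubernetes\\.io/do-loadbalancer-enable-proxy-protocol\"=true\n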
"},{"location":"deploy/#scaleway","title":"Scaleway","text":"

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/scw/deploy.yaml\n
Refer to the dedicated tutorial in the Scaleway documentation for configuring the proxy protocol for ingress-nginx with the Scaleway load balancer.

"},{"location":"deploy/#exoscale","title":"Exoscale","text":"
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml\n

The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager documentation.

"},{"location":"deploy/#oracle-cloud-infrastructure","title":"Oracle Cloud Infrastructure","text":"
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/cloud/deploy.yaml\n

A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation.

"},{"location":"deploy/#ovhcloud","title":"OVHcloud","text":"
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx\nhelm repo update\nhelm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --create-namespace\n

You can find the complete tutorial.

"},{"location":"deploy/#bare-metal-clusters","title":"Bare metal clusters","text":"

This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as \"raw\" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu...)

For quick testing, you can use a NodePort. This should work on almost every cluster, but it will typically use a port in the range 30000-32767.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/baremetal/deploy.yaml\n

For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see bare-metal considerations.

"},{"location":"deploy/#miscellaneous","title":"Miscellaneous","text":""},{"location":"deploy/#checking-ingress-controller-version","title":"Checking ingress controller version","text":"

Run /nginx-ingress-controller --version within the pod, for instance with kubectl exec:

POD_NAMESPACE=ingress-nginx\nPOD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name)\nkubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version\n
"},{"location":"deploy/#scope","title":"Scope","text":"

By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior, use the flag --watch-namespace or check the Helm chart value controller.scope to limit the controller to a single namespace.

See also \u201cHow to easily install multiple instances of the Ingress NGINX controller in the same cluster\u201d for more details.
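
For example, with Helm (the controller.scope.enabled and controller.scope.namespace value names are taken from the chart's values; verify them with helm show values):

helm upgrade --install ingress-nginx ingress-nginx \\\n  --repo https://kubernetes.github.io/ingress-nginx \\\n  --namespace ingress-nginx \\\n  --set controller.scope.enabled=true \\\n  --set controller.scope.namespace=my-namespace\n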

"},{"location":"deploy/#webhook-network-access","title":"Webhook network access","text":"

Warning

The controller uses an admission webhook to validate Ingress definitions. Make sure that you don't have Network policies or additional firewalls preventing connections from the API server to the ingress-nginx-controller-admission service.

"},{"location":"deploy/#certificate-generation","title":"Certificate generation","text":"

Attention

The first time the ingress controller starts, two Jobs create the SSL Certificate used by the admission webhook.

This can cause an initial delay of up to two minutes until it is possible to create and validate Ingress definitions.

You can wait until it is ready to run the next command:

 kubectl wait --namespace ingress-nginx \\\n  --for=condition=ready pod \\\n  --selector=app.kubernetes.io/component=controller \\\n  --timeout=120s\n
"},{"location":"deploy/#running-on-kubernetes-versions-older-than-119","title":"Running on Kubernetes versions older than 1.19","text":"

Ingress resources evolved over time. They started with apiVersion: extensions/v1beta1, then moved to apiVersion: networking.k8s.io/v1beta1 and more recently to apiVersion: networking.k8s.io/v1.

Here is how these Ingress versions are supported in Kubernetes:

  • before Kubernetes 1.19, only v1beta1 Ingress resources are supported
  • from Kubernetes 1.19 to 1.21, both v1beta1 and v1 Ingress resources are supported
  • in Kubernetes 1.22 and above, only v1 Ingress resources are supported

And here is how these Ingress versions are supported in Ingress-Nginx Controller:

  • before version 1.0, only v1beta1 Ingress resources are supported
  • in version 1.0 and above, only v1 Ingress resources are supported

As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X of the Ingress-Nginx Controller (e.g. version 0.49).

The Helm chart of the Ingress-Nginx Controller switched to version 1 in version 4 of the chart. In other words, if you're running Kubernetes 1.18 or earlier, you should use version 3.X of the chart (this can be done by adding --version='<4' to the helm install command).
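For example (assuming the ingress-nginx Helm repo was added as in the OVHcloud section above):

helm install ingress-nginx ingress-nginx/ingress-nginx \\\n  --version='<4' \\\n  --namespace ingress-nginx --create-namespace\n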

"},{"location":"deploy/baremetal/","title":"Bare-metal considerations","text":"

In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the Ingress-Nginx Controller to external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers.

The rest of this document describes a few recommended approaches to deploying the Ingress-Nginx Controller inside a Kubernetes cluster running on bare-metal.

"},{"location":"deploy/baremetal/#a-pure-software-solution-metallb","title":"A pure software solution: MetalLB","text":"

MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.

This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes. In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details.

Note

The description of other supported configuration modes is off-scope for this document.

Warning

MetalLB is currently in beta. Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly.

MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions, and that the Ingress-Nginx Controller was installed using the steps described in the quickstart section of the installation guide.

MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined through IPAddressPool objects in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.

Example

Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl get node\nNAME     STATUS   ROLES    EXTERNAL-IP\nhost-1   Ready    master   203.0.113.1\nhost-2   Ready    node     203.0.113.2\nhost-3   Ready    node     203.0.113.3\n

After creating the following objects, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly.

---\napiVersion: metallb.io/v1beta1\nkind: IPAddressPool\nmetadata:\n  name: default\n  namespace: metallb-system\nspec:\n  addresses:\n  - 203.0.113.10-203.0.113.15\n  autoAssign: true\n---\napiVersion: metallb.io/v1beta1\nkind: L2Advertisement\nmetadata:\n  name: default\n  namespace: metallb-system\nspec:\n  ipAddressPools:\n  - default\n
$ kubectl -n ingress-nginx get svc\nNAME                   TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)\ndefault-http-backend   ClusterIP     10.0.64.249    <none>       80/TCP\ningress-nginx          LoadBalancer  10.0.220.217   203.0.113.10  80:30100/TCP,443:30101/TCP\n

As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service:

$ curl -D- http://203.0.113.10 -H 'Host: myapp.example.com'\nHTTP/1.1 200 OK\nServer: nginx/1.15.2\n

Tip

In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more detail in Traffic policies as well as in the next section.

"},{"location":"deploy/baremetal/#over-a-nodeport-service","title":"Over a NodePort Service","text":"

Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide.

Info

A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services.

In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests.

Example

Given the NodePort 30100 allocated to the ingress-nginx Service

$ kubectl -n ingress-nginx get svc\nNAME                   TYPE        CLUSTER-IP     PORT(S)\ndefault-http-backend   ClusterIP   10.0.64.249    80/TCP\ningress-nginx          NodePort    10.0.220.217   80:30100/TCP,443:30101/TCP\n

and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl get node\nNAME     STATUS   ROLES    EXTERNAL-IP\nhost-1   Ready    master   203.0.113.1\nhost-2   Ready    node     203.0.113.2\nhost-3   Ready    node     203.0.113.3\n

a client would reach an Ingress with host: myapp.example.com at http://myapp.example.com:30100, where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address.

Impact on the host system

While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require.

This practice is therefore discouraged. See the other approaches proposed in this page for alternatives.

This approach has a few other limitations one ought to be aware of:

  • Source IP address

Services of type NodePort perform source address translation by default. This means that, from the perspective of NGINX, the source IP of an HTTP request is always the IP address of the Kubernetes node that received the request.

The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local (example).
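For instance, the field can be patched onto an existing Service; a sketch, assuming the Service name ingress-nginx used in the examples on this page:

kubectl -n ingress-nginx patch svc ingress-nginx \\\n  -p '{\"spec\":{\"externalTrafficPolicy\":\"Local\"}}'\n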

Warning

This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the Ingress-Nginx Controller should be scheduled or not scheduled.

Example

In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl get node\nNAME     STATUS   ROLES    EXTERNAL-IP\nhost-1   Ready    master   203.0.113.1\nhost-2   Ready    node     203.0.113.2\nhost-3   Ready    node     203.0.113.3\n

with an ingress-nginx-controller Deployment composed of 2 replicas

$ kubectl -n ingress-nginx get pod -o wide\nNAME                                       READY   STATUS    IP           NODE\ndefault-http-backend-7c5bc89cc9-p86md      1/1     Running   172.17.1.1   host-2\ningress-nginx-controller-cf9ff8c96-8vvf8   1/1     Running   172.17.0.3   host-3\ningress-nginx-controller-cf9ff8c96-pxsds   1/1     Running   172.17.1.4   host-2\n

Requests sent to host-2 and host-3 would be forwarded to NGINX and the original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node.

  • Ingress status

Because NodePort Services do not get a LoadBalancerIP assigned by definition, the Ingress-Nginx Controller does not update the status of Ingress objects it manages.

$ kubectl get ingress\nNAME           HOSTS               ADDRESS   PORTS\ntest-ingress   myapp.example.com             80\n

Despite the fact there is no load balancer providing a public IP address to the Ingress-Nginx Controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service.

Warning

There is more to setting externalIPs than just enabling the Ingress-Nginx Controller to update the status of Ingress objects. Please read about this option in the Services page of official Kubernetes documentation as well as the section about External IPs in this document for more information.

Example

Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl get node\nNAME     STATUS   ROLES    EXTERNAL-IP\nhost-1   Ready    master   203.0.113.1\nhost-2   Ready    node     203.0.113.2\nhost-3   Ready    node     203.0.113.3\n

one could edit the ingress-nginx Service and add the following field to the object spec

spec:\n  externalIPs:\n  - 203.0.113.1\n  - 203.0.113.2\n  - 203.0.113.3\n

which would in turn be reflected on Ingress objects as follows:

$ kubectl get ingress -o wide\nNAME           HOSTS               ADDRESS                               PORTS\ntest-ingress   myapp.example.com   203.0.113.1,203.0.113.2,203.0.113.3   80\n
  • Redirects

As NGINX is not aware of the port translation operated by the NodePort Service, backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort.

Example

Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain, are generated without NodePort:

$ curl -D- http://myapp.example.com:30100\nHTTP/1.1 308 Permanent Redirect\nServer: nginx/1.15.2\nLocation: https://myapp.example.com/  #-> missing NodePort in HTTPS redirect\n
"},{"location":"deploy/baremetal/#via-the-host-network","title":"Via the host network","text":"

In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the Ingress-Nginx Controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services.

Note

This approach does not leverage any Service object to expose the Ingress-Nginx Controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it.

This can be achieved by enabling the hostNetwork option in the Pods' spec.

template:\n  spec:\n    hostNetwork: true\n

Security considerations

Enabling this option exposes every system daemon to the Ingress-Nginx Controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully.

Example

Consider this ingress-nginx-controller Deployment composed of 2 replicas: NGINX Pods inherit the IP address of their host instead of an internal Pod IP.

$ kubectl -n ingress-nginx get pod -o wide\nNAME                                       READY   STATUS    IP            NODE\ndefault-http-backend-7c5bc89cc9-p86md      1/1     Running   172.17.1.1    host-2\ningress-nginx-controller-5b4cf5fc6-7lg6c   1/1     Running   203.0.113.3   host-3\ningress-nginx-controller-5b4cf5fc6-lzrls   1/1     Running   203.0.113.2   host-2\n

One major limitation of this deployment approach is that only a single Ingress-Nginx Controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to this situation fail with the following event:

$ kubectl -n ingress-nginx describe pod <unschedulable-ingress-nginx-controller-pod>\n...\nEvents:\n  Type     Reason            From               Message\n  ----     ------            ----               -------\n  Warning  FailedScheduling  default-scheduler  0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.\n

One way to ensure only schedulable Pods are created is to deploy the Ingress-Nginx Controller as a DaemonSet instead of a traditional Deployment.

Info

A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods. For more information, see DaemonSet.

Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion.

Like with NodePorts, this approach has a few quirks it is important to be aware of.

  • DNS resolution

Pods configured with hostNetwork: true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet. Consider using this setting if NGINX is expected to resolve internal names for any reason.
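Putting the pieces together, a minimal sketch of the Pod template fields involved in this approach (not a complete manifest; merge them into your own DaemonSet or Deployment):

template:\n  spec:\n    hostNetwork: true\n    dnsPolicy: ClusterFirstWithHostNet\n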

  • Ingress status

Because there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank.

$ kubectl get ingress\nNAME           HOSTS               ADDRESS   PORTS\ntest-ingress   myapp.example.com             80\n

Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the Ingress-Nginx Controller.

Example

Given an ingress-nginx-controller DaemonSet running 2 Pods

$ kubectl -n ingress-nginx get pod -o wide\nNAME                                       READY   STATUS    IP            NODE\ndefault-http-backend-7c5bc89cc9-p86md      1/1     Running   172.17.1.1    host-2\ningress-nginx-controller-5b4cf5fc6-7lg6c   1/1     Running   203.0.113.3   host-3\ningress-nginx-controller-5b4cf5fc6-lzrls   1/1     Running   203.0.113.2   host-2\n

the controller sets the status of all Ingress objects it manages to the following value:

$ kubectl get ingress -o wide\nNAME           HOSTS               ADDRESS                   PORTS\ntest-ingress   myapp.example.com   203.0.113.2,203.0.113.3   80\n

Note

Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments.

"},{"location":"deploy/baremetal/#using-a-self-provisioned-edge","title":"Using a self-provisioned edge","text":"

Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. a vendor appliance) or software (e.g. HAProxy) and is usually managed outside of the Kubernetes landscape by operations teams.

Such deployment builds upon the NodePort Service described above in Over a NodePort Service, with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address.

On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below:
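As an illustration only, such an edge configuration could be expressed with HAProxy roughly as follows; the node addresses and NodePorts 30100/30101 reuse the example values from this page:

frontend http\n    bind :80\n    mode tcp\n    default_backend k8s-nodes-http\n\nfrontend https\n    bind :443\n    mode tcp\n    default_backend k8s-nodes-https\n\nbackend k8s-nodes-http\n    mode tcp\n    server host-2 203.0.113.2:30100 check\n    server host-3 203.0.113.3:30100 check\n\nbackend k8s-nodes-https\n    mode tcp\n    server host-2 203.0.113.2:30101 check\n    server host-3 203.0.113.3:30101 check\n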

"},{"location":"deploy/baremetal/#external-ips","title":"External IPs","text":"

Source IP address

This method does not allow preserving the source IP of HTTP requests in any manner; it is therefore not recommended, despite its apparent simplicity.

The externalIPs Service option was previously mentioned in the NodePort section.

As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node.

Example

Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl get node\nNAME     STATUS   ROLES    EXTERNAL-IP\nhost-1   Ready    master   203.0.113.1\nhost-2   Ready    node     203.0.113.2\nhost-3   Ready    node     203.0.113.3\n

and the following ingress-nginx NodePort Service

$ kubectl -n ingress-nginx get svc\nNAME                   TYPE        CLUSTER-IP     PORT(S)\ningress-nginx          NodePort    10.0.220.217   80:30100/TCP,443:30101/TCP\n

One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port:

spec:\n  externalIPs:\n  - 203.0.113.2\n  - 203.0.113.3\n
$ curl -D- http://myapp.example.com:30100\nHTTP/1.1 200 OK\nServer: nginx/1.15.2\n\n$ curl -D- http://myapp.example.com\nHTTP/1.1 200 OK\nServer: nginx/1.15.2\n

We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.

"},{"location":"deploy/hardening-guide/","title":"Hardening Guide","text":""},{"location":"deploy/hardening-guide/#overview","title":"Overview","text":"

There are several ways to harden and secure nginx. This documentation uses two guides, which overlap in some points:

  • nginx CIS Benchmark
  • cipherlist.eu (one of many forks of the now dead project cipherli.st)

This guide describes which of the configurations from those guides are already implemented by default in the nginx implementation of the Kubernetes ingress, which need to be configured, which are obsolete because nginx runs as a container (the CIS benchmark relates to a non-containerized installation), and which are difficult or not possible.

Be aware that this is only a guide, and you are responsible for your own implementation. Some of the configurations may leave specific clients unable to reach your site, or have similar consequences.

This guide refers to chapters in the CIS Benchmark. For a full explanation, you should refer to the benchmark document itself.
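Several of the actions in the table below set keys of the controller ConfigMap; with the Helm chart this can be done under controller.config. A sketch using values from the TLS rows of the table:

controller:\n  config:\n    ssl-protocols: \"TLSv1.3\"\n    ssl-ciphers: \"EECDH+AESGCM:EDH+AESGCM\"\n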

"},{"location":"deploy/hardening-guide/#configuration-guide","title":"Configuration Guide","text":"Chapter in CIS benchmark Status Default Action to do if not default 1 Initial Setup 1.1 Installation 1.1.1 Ensure NGINX is installed (Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.1.2 Ensure NGINX is installed from source (Not Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.2 Configure Software Updates 1.2.1 Ensure package manager repositories are properly configured (Not Scored) OK done via helm, nginx version could be overwritten, however compatibility is not ensured then 1.2.2 Ensure the latest software package is installed (Not Scored) ACTION NEEDED done via helm, nginx version could be overwritten, however compatibility is not ensured then Plan for periodic updates 2 Basic Configuration 2.1 Minimize NGINX Modules 2.1.1 Ensure only required modules are installed (Not Scored) OK Already only needed modules are installed, however proposals for further reduction are welcome 2.1.2 Ensure HTTP WebDAV module is not installed (Scored) OK 2.1.3 Ensure modules with gzip functionality are disabled (Scored) OK 2.1.4 Ensure the autoindex module is disabled (Scored) OK No autoindex configs so far in ingress defaults 2.2 Account Security 2.2.1 Ensure that NGINX is run using a non-privileged, dedicated service account (Not Scored) OK Pod configured as user www-data: See this line in helm chart values. Compiled with user www-data: See this line in build script 2.2.2 Ensure the NGINX service account is locked (Scored) OK Docker design ensures this 2.2.3 Ensure the NGINX service account has an invalid shell (Scored) OK Shell is nologin: see this line in build script 2.3 Permissions and Ownership 2.3.1 Ensure NGINX directories and files are owned by root (Scored) OK Obsolete through docker-design and ingress controller needs to update the configs dynamically 2.3.2 Ensure access to NGINX directories and files is restricted (Scored) OK See previous answer 2.3.3 Ensure the NGINX process ID (PID) file is secured (Scored) OK No PID-File due to docker design 2.3.4 Ensure the core dump directory is secured (Not Scored) OK No working_directory configured by default 2.4 Network Configuration 2.4.1 Ensure NGINX only listens for network connections on authorized ports (Not Scored) OK Ensured by automatic nginx.conf configuration 2.4.2 Ensure requests for unknown host names are rejected (Not Scored) OK They are not rejected but send to the \"default backend\" delivering appropriate errors (mostly 404) 2.4.3 Ensure keepalive_timeout is 10 seconds or less, but not 0 (Scored) ACTION NEEDED Default is 75s configure keep-alive to 10 seconds according to this documentation 2.4.4 Ensure send_timeout is set to 10 seconds or less, but not 0 (Scored) RISK TO BE ACCEPTED Not configured, however the nginx default is 60s Not configurable 2.5 Information Disclosure 2.5.1 Ensure server_tokens directive is set to off (Scored) OK server_tokens is configured to off by default 2.5.2 Ensure default error and index.html pages do not reference NGINX (Scored) ACTION NEEDED 404 shows no version at all, 503 and 403 show \"nginx\", which is hardcoded see this line in nginx source code configure custom error pages at least for 403, 404 and 503 and 500 2.5.3 Ensure hidden file serving is disabled (Not Scored) ACTION NEEDED config not set configure a config.server-snippet Snippet, but beware of .well-known challenges or similar. 
Refer to the benchmark here please 2.5.4 Ensure the NGINX reverse proxy does not enable information disclosure (Scored) ACTION NEEDED hide not configured configure hide-headers with array of \"X-Powered-By\" and \"Server\": according to this documentation 3 Logging 3.1 Ensure detailed logging is enabled (Not Scored) OK nginx ingress has a very detailed log format by default 3.2 Ensure access logging is enabled (Scored) OK Access log is enabled by default 3.3 Ensure error logging is enabled and set to the info logging level (Scored) OK Error log is configured by default. The log level does not matter, because it is all sent to STDOUT anyway 3.4 Ensure log files are rotated (Scored) OBSOLETE Log file handling is not part of the nginx ingress and should be handled separately 3.5 Ensure error logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.6 Ensure access logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.7 Ensure proxies pass source IP information (Scored) OK Headers are set by default 4 Encryption 4.1 TLS / SSL Configuration 4.1.1 Ensure HTTP is redirected to HTTPS (Scored) OK Redirect to TLS is default 4.1.2 Ensure a trusted certificate and trust chain is installed (Not Scored) ACTION NEEDED For installing certs there are enough manuals in the web. A good way is to use lets encrypt through cert-manager Install proper certificates or use lets encrypt with cert-manager 4.1.3 Ensure private key permissions are restricted (Scored) ACTION NEEDED See previous answer 4.1.4 Ensure only modern TLS protocols are used (Scored) OK/ACTION NEEDED Default is TLS 1.2 + 1.3, while this is okay for CIS Benchmark, cipherlist.eu only recommends 1.3. This may cut off old OS's Set controller.config.ssl-protocols to \"TLSv1.3\" 4.1.5 Disable weak ciphers (Scored) ACTION NEEDED Default ciphers are already good, but cipherlist.eu recommends even stronger ciphers Set controller.config.ssl-ciphers to \"EECDH+AESGCM:EDH+AESGCM\" 4.1.6 Ensure custom Diffie-Hellman parameters are used (Scored) ACTION NEEDED No custom DH parameters are generated Generate dh parameters for each ingress deployment you use - see here for a how to 4.1.7 Ensure Online Certificate Status Protocol (OCSP) stapling is enabled (Scored) ACTION NEEDED Not enabled set via this configuration parameter 4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored) OK HSTS is enabled by default 4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored) ACTION NEEDED / RISK TO BE ACCEPTED HKPK not enabled by default If lets encrypt is not used, set correct HPKP header. There are several ways to implement this - with the helm charts it works via controller.add-headers. 
If lets encrypt is used, this is complicated, a solution here is yet unknown 4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, manual is here 4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, see configuration here 4.1.12 Ensure your domain is preloaded (Not Scored) ACTION NEEDED Preload is not active by default Set controller.config.hsts-preload to true 4.1.13 Ensure session resumption is disabled to enable perfect forward security (Scored) OK Session tickets are disabled by default 4.1.14 Ensure HTTP/2.0 is used (Not Scored) OK http2 is set by default 5 Request Filtering and Restrictions 5.1 Access Control 5.1.1 Ensure allow and deny filters limit access to specific IP addresses (Not Scored) OK/ACTION NEEDED Depends on use case, geo ip module is compiled into Ingress-Nginx Controller, there are several ways to use it If needed set IP restrictions via annotations or work with config snippets (be careful with lets-encrypt-http-challenge!) 5.1.2 Ensure only whitelisted HTTP methods are allowed (Not Scored) OK/ACTION NEEDED Depends on use case If required it can be set via config snippet 5.2 Request Limits 5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) ACTION NEEDED Default timeout is 60s Set via this configuration parameter and respective body equivalent 5.2.2 Ensure the maximum request body size is set correctly (Scored) ACTION NEEDED Default is 1m set via this configuration parameter 5.2.3 Ensure the maximum buffer size for URIs is defined (Scored) ACTION NEEDED Default is 4 8k Set via this configuration parameter 5.2.4 Ensure the number of connections per IP address is limited (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.2.5 Ensure rate limits by IP address are set (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.3 Browser Security 5.3.1 Ensure X-Frame-Options header is configured and enabled (Scored) ACTION NEEDED Header not set by default Several ways to implement this - with the helm charts it works via controller.add-headers 5.3.2 Ensure X-Content-Type-Options header is configured and enabled (Scored) ACTION NEEDED See previous answer See previous answer 5.3.3 Ensure the X-XSS-Protection Header is enabled and configured properly (Scored) ACTION NEEDED See previous answer See previous answer 5.3.4 Ensure that Content Security Policy (CSP) is enabled and configured properly (Not Scored) ACTION NEEDED See previous answer See previous answer 5.3.5 Ensure the Referrer Policy is enabled and configured properly (Not Scored) ACTION NEEDED Depends on application. It should be handled in the applications webserver itself, not in the load balancing ingress check backend webserver 6 Mandatory Access Control n/a too high level, depends on backends"},{"location":"deploy/rbac/","title":"Role Based Access Control (RBAC)","text":""},{"location":"deploy/rbac/#overview","title":"Overview","text":"

This example applies to ingress-nginx-controllers being deployed in an environment with RBAC enabled.

Role Based Access Control comprises four layers:

  1. ClusterRole - permissions assigned to a role that apply to an entire cluster
  2. ClusterRoleBinding - binding a ClusterRole to a specific account
  3. Role - permissions assigned to a role that apply to a specific namespace
  4. RoleBinding - binding a Role to a specific account

In order for RBAC to be applied to an ingress-nginx-controller, that controller should be assigned to a ServiceAccount. That ServiceAccount should be bound to the Roles and ClusterRoles defined for the ingress-nginx-controller.

"},{"location":"deploy/rbac/#service-accounts-created-in-this-example","title":"Service Accounts created in this example","text":"

One ServiceAccount is created in this example, ingress-nginx.

"},{"location":"deploy/rbac/#permissions-granted-in-this-example","title":"Permissions Granted in this example","text":"

There are two sets of permissions defined in this example: cluster-wide permissions defined by the ClusterRole named ingress-nginx, and namespace-specific permissions defined by the Role named ingress-nginx.

"},{"location":"deploy/rbac/#cluster-permissions","title":"Cluster Permissions","text":"

These permissions are granted in order for the ingress-nginx-controller to be able to function as an ingress across the cluster. These permissions are granted to the ClusterRole named ingress-nginx:

  • configmaps, endpoints, nodes, pods, secrets: list, watch
  • nodes: get
  • services, ingresses, ingressclasses, endpointslices: get, list, watch
  • events: create, patch
  • ingresses/status: update
  • leases: list, watch
"},{"location":"deploy/rbac/#namespace-permissions","title":"Namespace Permissions","text":"

These permissions are specific to the ingress-nginx namespace. They are granted to the Role named ingress-nginx:

  • configmaps, pods, secrets: get
  • endpoints: get

Furthermore, to support leader election, the ingress-nginx-controller needs access to a Lease using the resourceName ingress-nginx-leader.

Note that resourceNames can NOT be used to limit requests using the \u201ccreate\u201d verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a \u201ccreate\u201d request are part of the request body).

  • leases: get, update (for resourceName ingress-nginx-leader)
  • leases: create

This resourceName is the election-id defined by the ingress-controller, which defaults to:

  • election-id: ingress-nginx-leader
  • resourceName: <election-id>

Please adapt accordingly if you overwrite either parameter when launching the ingress-nginx-controller.
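A minimal sketch of the corresponding Role rules, assuming the default election-id:

- apiGroups: [\"coordination.k8s.io\"]\n  resources: [\"leases\"]\n  resourceNames: [\"ingress-nginx-leader\"]\n  verbs: [\"get\", \"update\"]\n- apiGroups: [\"coordination.k8s.io\"]\n  resources: [\"leases\"]\n  verbs: [\"create\"]\n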

"},{"location":"deploy/rbac/#bindings","title":"Bindings","text":"

The ServiceAccount ingress-nginx is bound to the Role ingress-nginx and the ClusterRole ingress-nginx.

The serviceAccountName associated with the containers in the Deployment must match the ServiceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the ingress-nginx namespace.
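A partial sketch of what that looks like in the Deployment:

kind: Deployment\nmetadata:\n  name: ingress-nginx-controller\n  namespace: ingress-nginx\nspec:\n  template:\n    spec:\n      serviceAccountName: ingress-nginx\n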

"},{"location":"deploy/upgrade/","title":"Upgrading","text":"

Important

No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx.

"},{"location":"deploy/upgrade/#without-helm","title":"Without Helm","text":"

To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment.

I.e. if your deployment resource looks like (partial example):

kind: Deployment\nmetadata:\n  name: ingress-nginx-controller\n  namespace: ingress-nginx\nspec:\n  replicas: 1\n  selector: ...\n  template:\n    metadata: ...\n    spec:\n      containers:\n        - name: ingress-nginx-controller\n          image: registry.k8s.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef\n          args: ...\n

simply change the v1.0.4 tag to the version you wish to upgrade to. The easiest way to do this is e.g. (do note you may need to change the name parameter according to your installation):

kubectl set image deployment/ingress-nginx-controller \\\n  controller=registry.k8s.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d \\\n  -n ingress-nginx\n

For interactive editing, use kubectl edit deployment ingress-nginx-controller -n ingress-nginx.

"},{"location":"deploy/upgrade/#with-helm","title":"With Helm","text":"

If you installed ingress-nginx using the Helm command in the deployment docs so its name is ingress-nginx, you should be able to upgrade using

helm upgrade --reuse-values ingress-nginx ingress-nginx/ingress-nginx\n
"},{"location":"deploy/upgrade/#migrating-from-stablenginx-ingress","title":"Migrating from stable/nginx-ingress","text":"

See detailed steps in the upgrading section of the ingress-nginx chart README.

"},{"location":"developer-guide/code-overview/","title":"Ingress NGINX - Code Overview","text":"

This document provides an overview of Ingress NGINX code.

"},{"location":"developer-guide/code-overview/#core-golang-code","title":"Core Golang code","text":"

This part of the code is responsible for the main logic of Ingress NGINX. It contains all the logic that parses Ingress objects and annotations, watches Endpoints, and turns them into a usable nginx.conf configuration.

"},{"location":"developer-guide/code-overview/#core-sync-logics","title":"Core Sync Logics:","text":"

Ingress-nginx has an internal model of the ingresses, secrets and endpoints in a given cluster. It maintains two copies of that:

  1. One copy is the currently running configuration model
  2. Second copy is the one generated in response to some changes in the cluster

The sync logic diffs the two models and if there's a change it tries to converge the running configuration to the new one.

There are static and dynamic configuration changes.

All endpoints and certificate changes are handled dynamically by posting the payload to an internal NGINX endpoint that is handled by Lua.

The following parts of the code can be found in the locations described below:

"},{"location":"developer-guide/code-overview/#entrypoint","title":"Entrypoint","text":"

The main package is responsible for starting the ingress-nginx program; it can be found in the cmd/nginx directory.

"},{"location":"developer-guide/code-overview/#version","title":"Version","text":"

This package is responsible for adding the version subcommand and can be found in the version directory.

"},{"location":"developer-guide/code-overview/#internal-code","title":"Internal code","text":"

This part of the code contains the internal logic that composes the Ingress NGINX Controller. It is split into:

"},{"location":"developer-guide/code-overview/#admission-controller","title":"Admission Controller","text":"

Contains the code of the Kubernetes Admission Controller, which validates the syntax of Ingress objects before accepting them.

This code can be found in internal/admission/controller directory.

"},{"location":"developer-guide/code-overview/#file-functions","title":"File functions","text":"

Contains auxiliary code that deals with files, such as generating the SHA1 checksum of a file or creating required directories.

This code can be found in internal/file directory.

"},{"location":"developer-guide/code-overview/#ingress-functions","title":"Ingress functions","text":"

Contains all the logic of the Ingress-Nginx Controller. Some examples:

  • Expected Golang structures that will be used in templates and other parts of the code - internal/ingress/types.go.
  • Supported annotations and their parsing logic - internal/ingress/annotations.
  • Reconciliation loops and logic - internal/ingress/controller.
  • Defaults - defines the default struct - internal/ingress/defaults.
  • Error interface and types implementation - internal/ingress/errors.
  • Metrics collectors for Prometheus exporting - internal/ingress/metric.
  • Resolver - extracts information from a controller - internal/ingress/resolver.
  • Ingress object status publisher - internal/ingress/status.

Other parts of the code will be documented here in the future.

"},{"location":"developer-guide/code-overview/#k8s-functions","title":"K8s functions","text":"

Contains helper functions for parsing Kubernetes objects.

This part of the code can be found in internal/k8s directory.

"},{"location":"developer-guide/code-overview/#networking-functions","title":"Networking functions","text":"

Contains helper functions for networking, such as IPv4 and IPv6 parsing, SSL certificate parsing, etc.

This part of the code can be found in internal/net directory.

"},{"location":"developer-guide/code-overview/#nginx-functions","title":"NGINX functions","text":"

Contains helper functions to deal with NGINX, such as verifying whether it is running and reading parts of its configuration file.

This part of the code can be found in internal/nginx directory.

"},{"location":"developer-guide/code-overview/#tasks-queue","title":"Tasks / Queue","text":"

Contains the functions responsible for the sync queue part of the controller.

This part of the code can be found in internal/task directory.

"},{"location":"developer-guide/code-overview/#other-parts-of-internal","title":"Other parts of internal","text":"

Other parts of the internal code might not be covered here, like runtime and watch, but they may be added in the future.

"},{"location":"developer-guide/code-overview/#e2e-test","title":"E2E Test","text":"

The e2e tests code is in test directory.

"},{"location":"developer-guide/code-overview/#other-programs","title":"Other programs","text":"

This section describes the kubectl plugin, dbg, waitshutdown, and the hack scripts.

"},{"location":"developer-guide/code-overview/#kubectl-plugin","title":"kubectl plugin","text":"

Contains the kubectl plugin for inspecting your ingress-nginx deployments. This part of the code can be found in the cmd/plugin directory. Detailed function flow and available commands can be found in kubectl-plugin.

"},{"location":"developer-guide/code-overview/#deploy-files","title":"Deploy files","text":"

This directory contains the yaml deploy files used as examples or references in the docs to deploy Ingress NGINX and other components.

Those files are in deploy directory.

"},{"location":"developer-guide/code-overview/#helm-chart","title":"Helm Chart","text":"

Used to generate the published Helm chart.

Code is in charts/ingress-nginx.

"},{"location":"developer-guide/code-overview/#documentationwebsite","title":"Documentation/Website","text":"

The documentation used to generate the website https://kubernetes.github.io/ingress-nginx/

This code is available in docs, and its main \"language\" is Markdown, used by mkdocs to generate static pages.

"},{"location":"developer-guide/code-overview/#container-images","title":"Container Images","text":"

Container images used to run ingress-nginx, or to build the final image.

"},{"location":"developer-guide/code-overview/#base-images","title":"Base Images","text":"

Contains the Dockerfiles and scripts used to build base images that are used in other parts of the repo. They are present in the images directory. Some examples:

  • nginx - The base NGINX image ingress-nginx uses is not a vanilla NGINX. It bundles many libraries together, and it is a job in itself to maintain that and keep things up-to-date.
  • custom-error-pages - Used in the custom error page examples.

There are other images inside this directory.

"},{"location":"developer-guide/code-overview/#ingress-controller-image","title":"Ingress Controller Image","text":"

The image used to build the final ingress controller, used in deploy scripts and Helm charts.

This is NGINX with some Lua enhancements. Dynamic certificate handling, endpoint handling, canary traffic splitting, custom load balancing, etc. happen in this component. One can also add new functionality using the Lua plugin system.

The files are in the rootfs directory and contain:

  • The Dockerfile
  • nginx config
"},{"location":"developer-guide/code-overview/#ingress-nginx-lua-scripts","title":"Ingress NGINX Lua Scripts","text":"

Ingress NGINX uses Lua Scripts to enable features like hot reloading, rate limiting and monitoring. Some are written using the OpenResty helper.

The directory containing Lua scripts is rootfs/etc/nginx/lua.

"},{"location":"developer-guide/code-overview/#nginx-go-template-file","title":"Nginx Go template file","text":"

One of the functions of Ingress NGINX is to turn Ingress objects into an nginx.conf file.

To do so, the final step is to apply those configurations to nginx.tmpl, turning it into the final nginx.conf file.

"},{"location":"developer-guide/getting-started/","title":"Getting Started","text":"

Developing for Ingress-Nginx Controller

This document explains how to get started with developing for Ingress-Nginx Controller.

For really new contributors who want to contribute to the INGRESS-NGINX project but need help understanding some basic concepts needed to work with the Kubernetes Ingress resource, here is a link to the New Contributors Guide. That guide contains tips on how an http/https request travels from a browser or a curl command to the webserver process running inside a container, in a pod, in a Kubernetes cluster, entering the cluster via an Ingress resource. If you are familiar with basic networking concepts such as the routing of a packet for an http request, connection termination, and reverse proxies, you can skip it, or read it anyway for context and to provide feedback if any.

"},{"location":"developer-guide/getting-started/#prerequisites","title":"Prerequisites","text":"

Install Go 1.14 or later.

Note

The project uses Go Modules

Install Docker (v19.03.0 or later with experimental feature on)

Install kubectl (1.24.0 or higher)

Install Kind

Important

The majority of make tasks run as docker containers

"},{"location":"developer-guide/getting-started/#quick-start","title":"Quick Start","text":"
  1. Fork the repository
  2. Clone the repository to any location in your work station
  3. Add a GO111MODULE environment variable with export GO111MODULE=on
  4. Run go mod download to install dependencies
"},{"location":"developer-guide/getting-started/#local-build","title":"Local build","text":"

Start a local Kubernetes cluster using kind, build and deploy the ingress controller

make dev-env\n
- If you are working on the v1.x.x version of this controller and you want to create a cluster with Kubernetes version 1.22, please visit the kind documentation and look for how to set a custom image for the kind node (image: kindest/node...) in the kind config file.

"},{"location":"developer-guide/getting-started/#testing","title":"Testing","text":"

Run go unit tests

make test\n

Run unit-tests for lua code

make lua-test\n

Lua tests are located in the directory rootfs/etc/nginx/lua/test

Important

Test files must follow the naming convention <mytest>_test.lua or it will be ignored

Run e2e test suite

make kind-e2e-test\n

To limit the scope of the tests to execute, we can use the environment variable FOCUS

FOCUS=\"no-auth-locations\" make kind-e2e-test\n

Note

The variable FOCUS defines Ginkgo Focused Specs

Valid values are defined in the describe definition of the e2e tests like Default Backend

The complete list of tests can be found here

"},{"location":"developer-guide/getting-started/#custom-docker-image","title":"Custom docker image","text":"

In some cases, it can be useful to build a docker image and publish such an image to a private or custom registry location.

This can be done by setting two environment variables, REGISTRY and TAG:

export TAG=\"dev\"\nexport REGISTRY=\"$USER\"\n\nmake build image\n

and then publish that version with:

docker push $REGISTRY/controller:$TAG\n
"},{"location":"enhancements/","title":"Kubernetes Enhancement Proposals (KEPs)","text":"

A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project. For this reason, the ingress-nginx project is adopting it.

"},{"location":"enhancements/#quick-start-for-the-kep-process","title":"Quick start for the KEP process","text":"

Follow the process outlined in the KEP template

"},{"location":"enhancements/#do-i-have-to-use-the-kep-process","title":"Do I have to use the KEP process?","text":"

No... but we hope that you will. Over time having a rich set of KEPs in one place will make it easier for people to track what is going on in the community and find a structured historic record.

KEPs are only required when the changes are wide ranging and impact most of the project.

"},{"location":"enhancements/#why-would-i-want-to-use-the-kep-process","title":"Why would I want to use the KEP process?","text":"

Our aim with KEPs is to clearly communicate new efforts to the Kubernetes contributor community. As such, we want to build a well curated set of clear proposals in a common format with useful metadata.

Benefits to KEP users (in the limit):

  • Exposure on a Kubernetes-blessed web site that is findable via web search engines.
  • Cross indexing of KEPs so that users can find connections and the current status of any KEP.
  • A clear process with approvers and reviewers for making decisions. This will lead to more structured decisions that stick as there is a discoverable record around the decisions.

We are inspired by IETF RFCs, Python PEPs, and Rust RFCs.

"},{"location":"enhancements/20190724-only-dynamic-ssl/","title":"Remove static SSL configuration mode","text":""},{"location":"enhancements/20190724-only-dynamic-ssl/#table-of-contents","title":"Table of Contents","text":"
  • Summary
  • Motivation
  • Goals
  • Non-Goals
  • Proposal
  • Implementation Details/Notes/Constraints
  • Drawbacks
  • Alternatives
"},{"location":"enhancements/20190724-only-dynamic-ssl/#summary","title":"Summary","text":"

Since release 0.19.0 it has been possible to configure SSL certificates without the need for NGINX reloads (thanks to Lua), and since release 0.24.0 the dynamic mode is enabled by default.

"},{"location":"enhancements/20190724-only-dynamic-ssl/#motivation","title":"Motivation","text":"

The static configuration implies reloads, something that affects the majority of the users.

"},{"location":"enhancements/20190724-only-dynamic-ssl/#goals","title":"Goals","text":"
  • Deprecation of the flag --enable-dynamic-certificates.
  • Cleanup of the codebase.
"},{"location":"enhancements/20190724-only-dynamic-ssl/#non-goals","title":"Non-Goals","text":"
  • Features related to certificate authentication are not changed in any way.
"},{"location":"enhancements/20190724-only-dynamic-ssl/#proposal","title":"Proposal","text":"
  • Remove static SSL configuration
"},{"location":"enhancements/20190724-only-dynamic-ssl/#implementation-detailsnotesconstraints","title":"Implementation Details/Notes/Constraints","text":"
  • Deprecate the flag --enable-dynamic-certificates.
  • Move the directives ssl_certificate and ssl_certificate_key from each server block to the http section. These settings are required to avoid NGINX errors in the logs.
  • Remove any action of the flag --enable-dynamic-certificates
"},{"location":"enhancements/20190724-only-dynamic-ssl/#drawbacks","title":"Drawbacks","text":""},{"location":"enhancements/20190724-only-dynamic-ssl/#alternatives","title":"Alternatives","text":"

Keep both implementations

"},{"location":"enhancements/20190815-zone-aware-routing/","title":"Availability zone aware routing","text":""},{"location":"enhancements/20190815-zone-aware-routing/#table-of-contents","title":"Table of Contents","text":"
  • Availability zone aware routing
  • Table of Contents
  • Summary
  • Motivation
    • Goals
    • Non-Goals
  • Proposal
  • Implementation History
  • Drawbacks [optional]
"},{"location":"enhancements/20190815-zone-aware-routing/#summary","title":"Summary","text":"

Teach ingress-nginx about the availability zones in which endpoints are running. This way, an ingress-nginx pod will do its best to proxy to a zone-local endpoint.

"},{"location":"enhancements/20190815-zone-aware-routing/#motivation","title":"Motivation","text":"

When users run their services across multiple availability zones, they usually pay for egress traffic between zones. Providers such as GCP and Amazon EC2 usually charge extra for this. When picking an endpoint to route a request to, ingress-nginx does not consider whether the endpoint is in a different zone or the same one. That means it is at least equally likely to pick an endpoint from another zone and proxy the request to it. In this situation, the response from the endpoint to the ingress-nginx pod is considered inter-zone traffic and usually costs extra money.

At the time of this writing, GCP charges $0.01 per GB of inter-zone egress traffic according to https://cloud.google.com/compute/network-pricing. According to https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/ Amazon also charges the same amount of money as GCP for cross-zone, egress traffic.

This can be a lot of money depending on one's traffic. By teaching ingress-nginx about zones, we can eliminate or at least decrease this cost.

Arguably, intra-zone network latency should also be better than cross-zone latency.

"},{"location":"enhancements/20190815-zone-aware-routing/#goals","title":"Goals","text":"
  • Given a regional cluster running ingress-nginx, ingress-nginx should do best-effort to pick a zone-local endpoint when proxying
  • This should not impact canary feature
  • ingress-nginx should be able to operate successfully if there are no zonal endpoints
"},{"location":"enhancements/20190815-zone-aware-routing/#non-goals","title":"Non-Goals","text":"
  • This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress-nginx pod(s) in that zone
  • This feature will be relying on https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone, it is not this KEP's goal to support other cases
"},{"location":"enhancements/20190815-zone-aware-routing/#proposal","title":"Proposal","text":"

The idea here is to have the controller part of ingress-nginx (1) detect what zone its current pod is running in and (2) detect the zone for every endpoint it knows about. After that, it will post that data as part of endpoints to Lua land. When picking an endpoint, the Lua balancer will try to pick zone-local endpoint first and if there is no zone-local endpoint then it will fall back to current behavior.

Initially, this feature should be optional since it is going to make it harder to reason about the load balancing and not everyone might want that.

How does the controller know what zone it runs in? We can have the pod spec pass the node name using the downward API as an environment variable. Upon startup, the controller can get node details from the API based on the node name. Once the node details are obtained, we can extract the zone from the failure-domain.beta.kubernetes.io/zone annotation. Then we can pass that value to Lua land through the Nginx configuration when loading the lua_ingress.lua module in the init_by_lua phase.
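For reference, passing the node name via the downward API could look like the following sketch (the variable name NODE_NAME is illustrative):

env:\n- name: NODE_NAME\n  valueFrom:\n    fieldRef:\n      fieldPath: spec.nodeName\n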

How do we extract zones for endpoints? We can have the controller watch create and update events on nodes in the entire cluster and based on that keep the map of nodes to zones in the memory. And when we generate endpoints list, we can access node name using .subsets.addresses[i].nodeName and based on that fetch zone from the map in memory and store it as a field on the endpoint. This solution assumes failure-domain.beta.kubernetes.io/zone annotation does not change until the end of the node's life. Otherwise, we have to watch update events as well on the nodes and that'll add even more overhead.

Alternatively, we can get the list of nodes only when there's no node in the memory for the given node name. This is probably a better solution because then we would avoid watching for API changes on node resources. We can eagerly fetch all the nodes and build node name to zone mapping on start. From there on, it will sync during endpoint building in the main event loop if there's no existing entry for the node of an endpoint. This means an extra API call in case cluster has expanded.

How do we make sure we do our best to choose zone-local endpoint? This will be done on the Lua side. For every backend, we will initialize two balancer instances: (1) with all endpoints (2) with all endpoints corresponding to the current zone for the backend. Then given the request once we choose what backend needs to serve the request, we will first try to use a zonal balancer for that backend. If a zonal balancer does not exist (i.e. there's no zonal endpoint) then we will use a general balancer. In case of zonal outages, we assume that the readiness probe will fail and the controller will see no endpoints for the backend and therefore we will use a general balancer.

We can enable the feature using a configmap setting. Doing it this way makes it easier to roll back in case of a problem.

"},{"location":"enhancements/20190815-zone-aware-routing/#implementation-history","title":"Implementation History","text":"
  • initial version of KEP is shipped
  • proposal and implementation details are done
"},{"location":"enhancements/20190815-zone-aware-routing/#drawbacks-optional","title":"Drawbacks [optional]","text":"

More load on the Kubernetes API server.

"},{"location":"enhancements/20231001-split-containers/","title":"Proposal to split containers","text":"
  • All the NGINX files should live on one container
  • No file other than NGINX files should exist on this container
  • This includes not mounting the service account
  • All the controller files should live on a different container
  • Controller container should have bare minimum to work (just go program)
  • ServiceAccount should be mounted just on controller

  • Inside the nginx container, there should be a really small http listener able only to start, stop and reload NGINX

"},{"location":"enhancements/20231001-split-containers/#roadmap-what-needs-to-be-done","title":"Roadmap (what needs to be done)","text":"
  • Map what needs to be done to mount the SA only on the controller container
  • Map all the files required for NGINX to work
  • Map all the required network calls between the controller and NGINX
  • e.g. dynamic Lua reconfiguration
  • Map problematic features that will need attention
  • SSL passthrough today happens in the controller process and needs to happen in NGINX
"},{"location":"enhancements/20231001-split-containers/#ports-and-endpoints-on-nginx-container","title":"Ports and endpoints on NGINX container","text":"
  • Public HTTP/HTTPS ports - 80 and 443
  • Lua configuration ports - 10246 (HTTP) and 10247 (Stream)
  • 3333 (temp) - Dataplane controller HTTP server
  • /reload - (POST) Reloads the configuration
    • the "config" argument is the location of the temporary file that should be used / moved to nginx.conf
  • /test - (POST) Tests the configuration at a given file location
    • the "config" argument is the location of the temporary file that should be tested
"},{"location":"enhancements/20231001-split-containers/#mounting-empty-sa-on-controller-container","title":"Mounting empty SA on controller container","text":"
kind: Pod
apiVersion: v1
metadata:
  name: test
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
  - name: othernginx
    image: alpine:latest
    command: ["/bin/sh"]
    args: ["-c", "while true; do date; sleep 3; done"]
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: emptysecret
  volumes:
  - name: emptysecret
    emptyDir:
      sizeLimit: 1Mi
"},{"location":"enhancements/20231001-split-containers/#mapped-folders-on-nginx-configuration","title":"Mapped folders on NGINX configuration","text":"

WARNING: We need to be aware of cross-container mounts and inode problems. If we mount a single file instead of a directory, it may take time for an updated file value to be reflected in the target container.

  • \"/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;;\"; - Lua scripts
  • \"/var/log/nginx\" - NGINX logs
  • \"/tmp/nginx (nginx.pid)\" - NGINX pid directory / file, fcgi socket, etc
  • \" /etc/nginx/geoip\" - GeoIP database directory - OK - /etc/ingress-controller/geoip
  • /etc/nginx/mime.types - Mime types
  • /etc/ingress-controller/ssl - SSL directory (fake cert, auth cert)
  • /etc/ingress-controller/auth - Authentication files
  • /etc/nginx/modsecurity - Modsecurity configuration
  • /etc/nginx/owasp-modsecurity-crs - Modsecurity rules
  • /etc/nginx/tickets.key - SSL tickets - OK - /etc/ingress-controller/tickets.key
  • /etc/nginx/opentelemetry.toml - OTEL config - OK - /etc/ingress-controller/telemetry
  • /etc/nginx/opentracing.json - Opentracing config - OK - /etc/ingress-controller/telemetry
  • /etc/nginx/modules - NGINX modules
  • /etc/nginx/fastcgi_params (maybe) - fcgi params
  • /etc/nginx/template - Template, may be used by controller only
"},{"location":"enhancements/20231001-split-containers/#list-of-modules","title":"List of modules","text":"
ngx_http_auth_digest_module.so
ngx_http_brotli_filter_module.so
ngx_http_brotli_static_module.so
ngx_http_geoip2_module.so
ngx_http_modsecurity_module.so
ngx_http_opentracing_module.so
ngx_stream_geoip2_module.so
"},{"location":"enhancements/20231001-split-containers/#list-of-files-that-may-be-removed","title":"List of files that may be removed","text":"
-rw-r--r--    1 www-data www-data      1077 Jun 23 19:44 fastcgi.conf
-rw-r--r--    1 www-data www-data      1077 Jun 23 19:44 fastcgi.conf.default
-rw-r--r--    1 www-data www-data      1007 Jun 23 19:44 fastcgi_params
-rw-r--r--    1 www-data www-data      1007 Jun 23 19:44 fastcgi_params.default
drwxr-xr-x    2 www-data www-data      4096 Jun 23 19:34 geoip
-rw-r--r--    1 www-data www-data      2837 Jun 23 19:44 koi-utf
-rw-r--r--    1 www-data www-data      2223 Jun 23 19:44 koi-win
drwxr-xr-x    6 www-data www-data      4096 Sep 19 14:13 lua
-rw-r--r--    1 www-data www-data      5349 Jun 23 19:44 mime.types
-rw-r--r--    1 www-data www-data      5349 Jun 23 19:44 mime.types.default
drwxr-xr-x    2 www-data www-data      4096 Jun 23 19:44 modsecurity
drwxr-xr-x    2 www-data www-data      4096 Jun 23 19:44 modules
-rw-r--r--    1 www-data www-data     18275 Oct  1 21:28 nginx.conf
-rw-r--r--    1 www-data www-data      2656 Jun 23 19:44 nginx.conf.default
-rwx------    1 www-data www-data       420 Oct  1 21:28 opentelemetry.toml
-rw-r--r--    1 www-data www-data         2 Oct  1 21:28 opentracing.json
drwxr-xr-x    7 www-data www-data      4096 Jun 23 19:44 owasp-modsecurity-crs
-rw-r--r--    1 www-data www-data       636 Jun 23 19:44 scgi_params
-rw-r--r--    1 www-data www-data       636 Jun 23 19:44 scgi_params.default
drwxr-xr-x    2 www-data www-data      4096 Sep 19 14:13 template
-rw-r--r--    1 www-data www-data       664 Jun 23 19:44 uwsgi_params
-rw-r--r--    1 www-data www-data       664 Jun 23 19:44 uwsgi_params.default
-rw-r--r--    1 www-data www-data      3610 Jun 23 19:44 win-utf
"},{"location":"enhancements/YYYYMMDD-kep-template/","title":"Title","text":"

This is the title of the KEP. Keep it simple and descriptive. A good title can help communicate what the KEP is and should be considered as part of any review.

The title should be lowercased and spaces/punctuation should be replaced with -.

To get started with this template:

  1. Make a copy of this template. Create a copy of this template and name it YYYYMMDD-my-title.md, where YYYYMMDD is the date the KEP was first drafted.
  2. Fill out the "overview" sections. This includes the Summary and Motivation sections. These should be easy if you've preflighted the idea of the KEP in an issue.
  3. Create a PR. Assign it to folks who are sponsoring this process.
  4. Create an issue. When filing an enhancement tracking issue, please make sure to complete all fields in the template.
  5. Merge early. Avoid getting hung up on specific details; instead, aim to get the goal of the KEP merged quickly. The best way to do this is to start with the "Overview" sections and fill out details incrementally in follow-on PRs. View anything marked as provisional as a working document subject to change. Aim for single-topic PRs to keep discussions focused. If you disagree with what is already in a document, open a new PR with suggested changes.

The canonical place for the latest set of instructions (and the likely source of this file) is here.

The Metadata section above is intended to support the creation of tooling around the KEP process. This will be a YAML section that is fenced as a code block. See the KEP process for details on each of these items.

"},{"location":"enhancements/YYYYMMDD-kep-template/#table-of-contents","title":"Table of Contents","text":"

A table of contents is helpful for quickly jumping to sections of a KEP and for highlighting any additional information provided beyond the standard KEP template.

Ensure the TOC is wrapped with <!-- toc --><!-- /toc --> tags, and then generate with hack/update-toc.sh.

  • Summary
  • Motivation
  • Goals
  • Non-Goals
  • Proposal
  • User Stories [optional]
    • Story 1
    • Story 2
  • Implementation Details/Notes/Constraints [optional]
  • Risks and Mitigations
  • Design Details
  • Test Plan
    • Removing a deprecated flag
  • Implementation History
  • Drawbacks [optional]
  • Alternatives [optional]
"},{"location":"enhancements/YYYYMMDD-kep-template/#summary","title":"Summary","text":"

The Summary section is incredibly important for producing high quality user-focused documentation such as release notes or a development roadmap. It should be possible to collect this information before implementation begins in order to avoid requiring implementers to split their attention between writing release notes and implementing the feature itself.

A good summary is probably at least a paragraph in length.

"},{"location":"enhancements/YYYYMMDD-kep-template/#motivation","title":"Motivation","text":"

This section is for explicitly listing the motivation, goals and non-goals of this KEP. Describe why the change is important and the benefits to users. The motivation section can optionally provide links to experience reports to demonstrate the interest in a KEP within the wider Kubernetes community.

"},{"location":"enhancements/YYYYMMDD-kep-template/#goals","title":"Goals","text":"

List the specific goals of the KEP. How will we know that this has succeeded?

"},{"location":"enhancements/YYYYMMDD-kep-template/#non-goals","title":"Non-Goals","text":"

What is out of scope for this KEP? Listing non-goals helps to focus discussion and make progress.

"},{"location":"enhancements/YYYYMMDD-kep-template/#proposal","title":"Proposal","text":"

This is where we get down to the nitty gritty of what the proposal actually is.

"},{"location":"enhancements/YYYYMMDD-kep-template/#user-stories-optional","title":"User Stories [optional]","text":"

Detail the things that people will be able to do if this KEP is implemented. Include as much detail as possible so that people can understand the \"how\" of the system. The goal here is to make this feel real for users without getting bogged down.

"},{"location":"enhancements/YYYYMMDD-kep-template/#story-1","title":"Story 1","text":""},{"location":"enhancements/YYYYMMDD-kep-template/#story-2","title":"Story 2","text":""},{"location":"enhancements/YYYYMMDD-kep-template/#implementation-detailsnotesconstraints-optional","title":"Implementation Details/Notes/Constraints [optional]","text":"

What are the caveats to the implementation? What are some important details that didn't come across above? Go into as much detail as necessary here. This might be a good place to talk about core concepts and how they relate.

"},{"location":"enhancements/YYYYMMDD-kep-template/#risks-and-mitigations","title":"Risks and Mitigations","text":"

What are the risks of this proposal and how do we mitigate them? Think broadly. For example, consider both security and how this will impact the larger Kubernetes ecosystem.

How will security be reviewed and by whom? How will UX be reviewed and by whom?

Consider including folks who also work outside the project.

"},{"location":"enhancements/YYYYMMDD-kep-template/#design-details","title":"Design Details","text":""},{"location":"enhancements/YYYYMMDD-kep-template/#test-plan","title":"Test Plan","text":"

Note: Section not required until targeted at a release.

Consider the following in developing a test plan for this enhancement:

  • Will there be e2e and integration tests, in addition to unit tests?
  • How will it be tested in isolation vs with other components?

No need to outline all of the test cases, just the general strategy. Anything that would count as tricky in the implementation and anything particularly challenging to test should be called out.

All code is expected to have adequate tests (eventually with coverage expectations). Please adhere to the Kubernetes testing guidelines when drafting this test plan.

"},{"location":"enhancements/YYYYMMDD-kep-template/#removing-a-deprecated-flag","title":"Removing a deprecated flag","text":"
  • Announce deprecation and support policy of the existing flag
  • Two versions passed since introducing the functionality which deprecates the flag (to address version skew)
  • Address feedback on usage/changed behavior, provided on GitHub issues
  • Deprecate the flag
"},{"location":"enhancements/YYYYMMDD-kep-template/#implementation-history","title":"Implementation History","text":"

Major milestones in the life cycle of a KEP should be tracked in Implementation History. Major milestones might include

  • the Summary and Motivation sections being merged signaling acceptance
  • the Proposal section being merged signaling agreement on a proposed design
  • the date implementation started
  • the first Kubernetes release where an initial version of the KEP was available
  • the version of Kubernetes where the KEP graduated to general availability
  • when the KEP was retired or superseded
"},{"location":"enhancements/YYYYMMDD-kep-template/#drawbacks-optional","title":"Drawbacks [optional]","text":"

Why should this KEP not be implemented?

"},{"location":"enhancements/YYYYMMDD-kep-template/#alternatives-optional","title":"Alternatives [optional]","text":"

Similar to the Drawbacks section, the Alternatives section is used to highlight and record other possible approaches to delivering the value proposed by a KEP.

"},{"location":"examples/","title":"Ingress examples","text":"

This directory contains a catalog of examples on how to run, configure and scale Ingress. Please review the prerequisites before trying them.

The examples on these pages include the spec.ingressClassName field which replaces the deprecated kubernetes.io/ingress.class: nginx annotation. Users of ingress-nginx < 1.0.0 (Helm chart < 4.0.0) should use the legacy documentation.

For more information, check out the Migration to apiVersion networking.k8s.io/v1 guide.

Category - Name - Description - Complexity Level:

  • Apps - Docker Registry - TODO - TODO
  • Auth - Basic authentication - password protect your website - Intermediate
  • Auth - Client certificate authentication - secure your website with client certificate authentication - Intermediate
  • Auth - External authentication plugin - defer to an external authentication service - Intermediate
  • Auth - OAuth external auth - TODO - TODO
  • Customization - Configuration snippets - customize nginx location configuration using annotations - Advanced
  • Customization - Custom configuration - TODO - TODO
  • Customization - Custom DH parameters for perfect forward secrecy - TODO - TODO
  • Customization - Custom errors - serve custom error pages from the default backend - Intermediate
  • Customization - Custom headers - set custom headers before sending traffic to backends - Advanced
  • Customization - External authentication with response header propagation - TODO - TODO
  • Customization - Sysctl tuning - TODO - TODO
  • Features - Rewrite - TODO - TODO
  • Features - Session stickiness - route requests consistently to the same endpoint - Advanced
  • Features - Canary Deployments - weighted canary routing to a separate deployment - Intermediate
  • Scaling - Static IP - a single ingress gets a single static IP - Intermediate
  • TLS - Multi TLS certificate termination - TODO - TODO
  • TLS - TLS termination - TODO - TODO
"},{"location":"examples/PREREQUISITES/","title":"Prerequisites","text":"

Many of the examples in this directory have common prerequisites.

"},{"location":"examples/PREREQUISITES/#tls-certificates","title":"TLS certificates","text":"

Unless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA key/cert pair with an arbitrarily chosen hostname, created as follows

$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
Generating a 2048 bit RSA private key
................+++
................+++
writing new private key to 'tls.key'
-----

$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt
secret "tls-secret" created

Note: If using CA Authentication, described below, you will need to sign the server certificate with the CA.

"},{"location":"examples/PREREQUISITES/#client-certificate-authentication","title":"Client Certificate Authentication","text":"

CA Authentication, also known as Mutual Authentication, allows both the server and the client to verify each other's identity via a common CA.

We have a CA certificate, usually obtained from a Certificate Authority, and use it to sign both our server certificate and our client certificate. Then, every time we want to access our backend, we must pass the client certificate.

These instructions are based on the following blog

Generate the CA Key and Certificate:

openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 365 -nodes -subj '/CN=My Cert Authority'

Generate the server key and certificate, and sign the certificate with the CA:

openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=mydomain.com'
openssl x509 -req -sha256 -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt

Generate the client key and certificate, and sign the certificate with the CA:

openssl req -new -newkey rsa:4096 -keyout client.key -out client.csr -nodes -subj '/CN=My Client'
openssl x509 -req -sha256 -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt

Once this is complete, you can continue to follow the instructions here.
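For a quick sanity check before moving on, you can present the generated client certificate with curl (a sketch; it assumes mydomain.com, the CN used for the server certificate above, resolves to your endpoint):

curl --cacert ca.crt --cert client.crt --key client.key https://mydomain.com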

"},{"location":"examples/PREREQUISITES/#test-http-service","title":"Test HTTP Service","text":"

All examples that require a test HTTP Service use the standard http-svc pod, which you can deploy as follows

$ kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/http-svc.yaml
service "http-svc" created
replicationcontroller "http-svc" created

$ kubectl get po
NAME             READY     STATUS    RESTARTS   AGE
http-svc-p1t3t   1/1       Running   0          1d

$ kubectl get svc
NAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE
http-svc         10.0.122.116   <pending>     80:30301/TCP       1d

You can test that the HTTP Service works by exposing it temporarily

$ kubectl patch svc http-svc -p '{\"spec\":{\"type\": \"LoadBalancer\"}}'\n\"http-svc\" patched\n\n$ kubectl get svc http-svc\nNAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE\nhttp-svc         10.0.122.116   <pending>     80:30301/TCP       1d\n\n$ kubectl describe svc http-svc\nName:                   http-svc\nNamespace:              default\nLabels:                 app=http-svc\nSelector:               app=http-svc\nType:                   LoadBalancer\nIP:                     10.0.122.116\nLoadBalancer Ingress:   108.59.87.136\nPort:                   http    80/TCP\nNodePort:               http    30301/TCP\nEndpoints:              10.180.1.6:8080\nSession Affinity:       None\nEvents:\n  FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason          Message\n  --------- --------    -----   ----            -------------   --------    ------          -------\n  1m        1m      1   {service-controller }           Normal      Type            ClusterIP -> LoadBalancer\n  1m        1m      1   {service-controller }           Normal      CreatingLoadBalancer    Creating load balancer\n  16s       16s     1   {service-controller }           Normal      CreatedLoadBalancer Created load balancer\n\n$ curl 108.59.87.136\nCLIENT VALUES:\nclient_address=10.240.0.3\ncommand=GET\nreal path=/\nquery=nil\nrequest_version=1.1\nrequest_uri=http://108.59.87.136:8080/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*/*\nhost=108.59.87.136\nuser-agent=curl/7.46.0\nBODY:\n-no body in request-\n\n$ kubectl patch svc http-svc -p '{\"spec\":{\"type\": \"NodePort\"}}'\n\"http-svc\" patched\n
"},{"location":"examples/affinity/cookie/","title":"Sticky sessions","text":"

This example demonstrates how to achieve session affinity using cookies.

"},{"location":"examples/affinity/cookie/#deployment","title":"Deployment","text":"

Session affinity can be configured using the following annotations:

  • nginx.ingress.kubernetes.io/affinity - Type of the affinity; set this to cookie to enable session affinity. Value: string (NGINX only supports cookie)
  • nginx.ingress.kubernetes.io/affinity-mode - The affinity mode defines how sticky a session is. Use balanced to redistribute some sessions when scaling pods, or persistent for maximum stickiness. Value: balanced (default) or persistent
  • nginx.ingress.kubernetes.io/affinity-canary-behavior - Defines the session affinity behavior of canaries. By default the behavior is sticky, and canaries respect session affinity configuration. Set this to legacy to restore the original canary behavior, when session affinity parameters were not respected. Value: sticky (default) or legacy
  • nginx.ingress.kubernetes.io/session-cookie-name - Name of the cookie that will be created. Value: string (defaults to INGRESSCOOKIE)
  • nginx.ingress.kubernetes.io/session-cookie-secure - Set the cookie as secure regardless of the protocol of the incoming request. Value: "true" or "false"
  • nginx.ingress.kubernetes.io/session-cookie-path - Path that will be set on the cookie (required if your Ingress paths use regular expressions). Value: string (defaults to the currently matched path)
  • nginx.ingress.kubernetes.io/session-cookie-domain - Domain that will be set on the cookie. Value: string
  • nginx.ingress.kubernetes.io/session-cookie-samesite - SameSite attribute to apply to the cookie. Value: browser-accepted values are None, Lax, and Strict
  • nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none - Will omit the SameSite=None attribute for older browsers which reject the more-recently defined SameSite=None value. Value: "true" or "false"
  • nginx.ingress.kubernetes.io/session-cookie-max-age - Time until the cookie expires; corresponds to the Max-Age cookie directive. Value: number of seconds
  • nginx.ingress.kubernetes.io/session-cookie-expires - Legacy version of the previous annotation for compatibility with older browsers; generates an Expires cookie directive by adding the seconds to the current date. Value: number of seconds
  • nginx.ingress.kubernetes.io/session-cookie-change-on-failure - When set to false, nginx ingress will send the request to the upstream pointed to by the sticky cookie even if a previous attempt failed. When set to true and a previous attempt failed, the sticky cookie will be changed to point to another upstream. Value: true or false (defaults to false)

You can create the session affinity example Ingress to test this:

kubectl create -f ingress.yaml
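For reference, an ingress.yaml along these lines would produce the validation output shown in the next section (a sketch consistent with that output, not necessarily identical to the file shipped with the example):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  ingressClassName: nginx
  rules:
  - host: stickyingress.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80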
"},{"location":"examples/affinity/cookie/#validation","title":"Validation","text":"

You can confirm that the Ingress works:

$ kubectl describe ing nginx-test\nName:           nginx-test\nNamespace:      default\nAddress:\nDefault backend:    default-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080)\nRules:\n  Host                          Path    Backends\n  ----                          ----    --------\n  stickyingress.example.com\n                                /        nginx-service:80 (<none>)\nAnnotations:\n  affinity: cookie\n  session-cookie-name:      INGRESSCOOKIE\n  session-cookie-expires: 172800\n  session-cookie-max-age: 172800\nEvents:\n  FirstSeen LastSeen    Count   From                SubObjectPath   Type        Reason  Message\n  --------- --------    -----   ----                -------------   --------    ------  -------\n  7s        7s      1   {ingress-nginx-controller }         Normal      CREATE  default/nginx-test\n\n\n$ curl -I http://stickyingress.example.com\nHTTP/1.1 200 OK\nServer: nginx/1.11.9\nDate: Fri, 10 Feb 2017 14:11:12 GMT\nContent-Type: text/html\nContent-Length: 612\nConnection: keep-alive\nSet-Cookie: INGRESSCOOKIE=a9907b79b248140b56bb13723f72b67697baac3d; Expires=Sun, 12-Feb-17 14:11:12 GMT; Max-Age=172800; Path=/; HttpOnly\nLast-Modified: Tue, 24 Jan 2017 14:02:19 GMT\nETag: \"58875e6b-264\"\nAccept-Ranges: bytes\n

In the example above, you can see that the response contains a Set-Cookie header with the settings we have defined. This cookie is created by the Ingress-Nginx Controller. It contains a randomly generated key corresponding to the upstream used for that request (selected using consistent hashing) and has an Expires directive. If a client sends a cookie that doesn't correspond to an upstream, NGINX selects an upstream and creates a corresponding cookie.

If the backend pool grows, NGINX will keep sending requests to the server that handled the first request, even if it is overloaded.

When the backend server is removed, the requests are re-routed to another upstream server. This does not require the cookie to be updated because the key's consistent hash will change.

"},{"location":"examples/affinity/cookie/#caveats","title":"Caveats","text":"

When you have a Service pointing to more than one Ingress, with only one of them containing affinity configuration, the first created Ingress will be used. This means you can end up in a situation where you have configured session affinity on one Ingress, but it does not work because the Service is pointing to another Ingress that does not configure it.

"},{"location":"examples/auth/basic/","title":"Basic Authentication","text":"

This example shows how to add authentication to an Ingress rule using a secret that contains a file generated with htpasswd. It's important that the generated file is named auth (actually, that the secret has a key data.auth); otherwise the ingress controller returns a 503.

"},{"location":"examples/auth/basic/#create-htpasswd-file","title":"Create htpasswd file","text":"
$ htpasswd -c auth foo
New password: <bar>
Re-type new password: <bar>
Adding password for user foo
"},{"location":"examples/auth/basic/#convert-htpasswd-into-a-secret","title":"Convert htpasswd into a secret","text":"
$ kubectl create secret generic basic-auth --from-file=auth
secret "basic-auth" created
"},{"location":"examples/auth/basic/#examine-secret","title":"Examine secret","text":"
$ kubectl get secret basic-auth -o yaml
apiVersion: v1
data:
  auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK
kind: Secret
metadata:
  name: basic-auth
  namespace: default
type: Opaque
"},{"location":"examples/auth/basic/#using-kubectl-create-an-ingress-tied-to-the-basic-auth-secret","title":"Using kubectl, create an ingress tied to the basic-auth secret","text":"
$ echo "
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-with-auth
  annotations:
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
spec:
  ingressClassName: nginx
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80
" | kubectl create -f -
"},{"location":"examples/auth/basic/#use-curl-to-confirm-authorization-is-required-by-the-ingress","title":"Use curl to confirm authorization is required by the ingress","text":"
$ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com'\n*   Trying 10.2.29.4...\n* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)\n> GET / HTTP/1.1\n> Host: foo.bar.com\n> User-Agent: curl/7.43.0\n> Accept: */*\n>\n< HTTP/1.1 401 Unauthorized\n< Server: nginx/1.10.0\n< Date: Wed, 11 May 2016 05:27:23 GMT\n< Content-Type: text/html\n< Content-Length: 195\n< Connection: keep-alive\n< WWW-Authenticate: Basic realm=\"Authentication Required - foo\"\n<\n<html>\n<head><title>401 Authorization Required</title></head>\n<body bgcolor=\"white\">\n<center><h1>401 Authorization Required</h1></center>\n<hr><center>nginx/1.10.0</center>\n</body>\n</html>\n* Connection #0 to host 10.2.29.4 left intact\n
"},{"location":"examples/auth/basic/#use-curl-with-the-correct-credentials-to-connect-to-the-ingress","title":"Use curl with the correct credentials to connect to the ingress","text":"
$ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' -u 'foo:bar'\n*   Trying 10.2.29.4...\n* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)\n* Server auth using Basic with user 'foo'\n> GET / HTTP/1.1\n> Host: foo.bar.com\n> Authorization: Basic Zm9vOmJhcg==\n> User-Agent: curl/7.43.0\n> Accept: */*\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.10.0\n< Date: Wed, 11 May 2016 06:05:26 GMT\n< Content-Type: text/plain\n< Transfer-Encoding: chunked\n< Connection: keep-alive\n< Vary: Accept-Encoding\n<\nCLIENT VALUES:\nclient_address=10.2.29.4\ncommand=GET\nreal path=/\nquery=nil\nrequest_version=1.1\nrequest_uri=http://foo.bar.com:8080/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*/*\nconnection=close\nhost=foo.bar.com\nuser-agent=curl/7.43.0\nx-request-id=e426c7829ef9f3b18d40730857c3eddb\nx-forwarded-for=10.2.29.1\nx-forwarded-host=foo.bar.com\nx-forwarded-port=80\nx-forwarded-proto=http\nx-real-ip=10.2.29.1\nx-scheme=http\nBODY:\n* Connection #0 to host 10.2.29.4 left intact\n-no body in request-\n
"},{"location":"examples/auth/client-certs/","title":"Client Certificate Authentication","text":"

It is possible to enable Client-Certificate Authentication by adding additional annotations to your Ingress Resource.

Before getting started you must have the following Certificates configured:

  1. CA certificate and Key (Intermediate Certs need to be in CA)
  2. Server Certificate (Signed by CA) and Key (CN should be equal to the hostname you will use)
  3. Client Certificate (Signed by CA) and Key

For more details on the generation process, check out the Prerequisites docs.

You can have as many certificates as you want. If they're in the binary DER format, you can convert them as follows:

openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem

Then you can concatenate them all into one file named 'ca.crt', as follows:

cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt

Note: Make sure that the key size is greater than 1024 bits and the hashing algorithm (digest) is stronger than MD5 for each certificate generated; otherwise you will receive an error.

"},{"location":"examples/auth/client-certs/#creating-certificate-secrets","title":"Creating Certificate Secrets","text":"

There are many different ways of configuring your secrets to enable Client-Certificate Authentication to work properly.

  • You can create a secret containing just the CA certificate and another Secret containing the Server Certificate which is Signed by the CA.

    kubectl create secret generic ca-secret --from-file=ca.crt=ca.crt
    kubectl create secret generic tls-secret --from-file=tls.crt=server.crt --from-file=tls.key=server.key
  • You can create a secret containing the CA certificate along with the server certificate, which can be used for both TLS and client auth.

    kubectl create secret generic ca-secret --from-file=tls.crt=server.crt --from-file=tls.key=server.key --from-file=ca.crt=ca.crt
  • If you also want to enable Certificate Revocation List verification, you can create the secret with the CRL file in PEM format included:

    kubectl create secret generic ca-secret --from-file=ca.crt=ca.crt --from-file=ca.crl=ca.crl

Note: The CA Certificate must contain the trusted certificate authority chain to verify client certificates.

"},{"location":"examples/auth/client-certs/#setup-instructions","title":"Setup Instructions","text":"
  1. Add the annotations as provided in the ingress.yaml example to your own ingress resources as required (a sketch of these annotations follows this list).
  2. Test by performing a curl against the Ingress Path without the Client Cert and expect a Status Code 400.
  3. Test by performing a curl against the Ingress Path with the Client Cert and expect a Status Code 200.
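For reference, the client-certificate annotations on such an Ingress typically look like the following sketch (the secret namespace/name and verify depth are placeholders; adjust them to the secrets you created above):

metadata:
  annotations:
    # enable client certificate verification against the CA in the referenced secret
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    # namespace/name of the secret holding ca.crt (placeholder)
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"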
"},{"location":"examples/auth/external-auth/","title":"External Basic Authentication","text":""},{"location":"examples/auth/external-auth/#example-1","title":"Example 1","text":"

Use an external service (Basic Auth) located at https://httpbin.org.

$ kubectl create -f ingress.yaml\ningress \"external-auth\" created\n\n$ kubectl get ing external-auth\nNAME            HOSTS                         ADDRESS       PORTS     AGE\nexternal-auth   external-auth-01.sample.com   172.17.4.99   80        13s\n\n$ kubectl get ing external-auth -o yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/auth-url: https://httpbin.org/basic-auth/user/passwd\n  creationTimestamp: 2016-10-03T13:50:35Z\n  generation: 1\n  name: external-auth\n  namespace: default\n  resourceVersion: \"2068378\"\n  selfLink: /apis/networking/v1/namespaces/default/ingresses/external-auth\n  uid: 5c388f1d-8970-11e6-9004-080027d2dc94\nspec:\n  rules:\n  - host: external-auth-01.sample.com\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service: \n            name: http-svc\n            port: \n              number: 80\nstatus:\n  loadBalancer:\n    ingress:\n    - ip: 172.17.4.99\n$\n
"},{"location":"examples/auth/external-auth/#test-1-no-usernamepassword-expect-code-401","title":"Test 1: no username/password (expect code 401)","text":"
$ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com'\n* Rebuilt URL to: http://172.17.4.99/\n*   Trying 172.17.4.99...\n* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)\n> GET / HTTP/1.1\n> Host: external-auth-01.sample.com\n> User-Agent: curl/7.50.1\n> Accept: */*\n>\n< HTTP/1.1 401 Unauthorized\n< Server: nginx/1.11.3\n< Date: Mon, 03 Oct 2016 14:52:08 GMT\n< Content-Type: text/html\n< Content-Length: 195\n< Connection: keep-alive\n< WWW-Authenticate: Basic realm=\"Fake Realm\"\n<\n<html>\n<head><title>401 Authorization Required</title></head>\n<body bgcolor=\"white\">\n<center><h1>401 Authorization Required</h1></center>\n<hr><center>nginx/1.11.3</center>\n</body>\n</html>\n* Connection #0 to host 172.17.4.99 left intact\n
"},{"location":"examples/auth/external-auth/#test-2-valid-usernamepassword-expect-code-200","title":"Test 2: valid username/password (expect code 200)","text":"
$ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:passwd'\n* Rebuilt URL to: http://172.17.4.99/\n*   Trying 172.17.4.99...\n* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)\n* Server auth using Basic with user 'user'\n> GET / HTTP/1.1\n> Host: external-auth-01.sample.com\n> Authorization: Basic dXNlcjpwYXNzd2Q=\n> User-Agent: curl/7.50.1\n> Accept: */*\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.11.3\n< Date: Mon, 03 Oct 2016 14:52:50 GMT\n< Content-Type: text/plain\n< Transfer-Encoding: chunked\n< Connection: keep-alive\n<\nCLIENT VALUES:\nclient_address=10.2.60.2\ncommand=GET\nreal path=/\nquery=nil\nrequest_version=1.1\nrequest_uri=http://external-auth-01.sample.com:8080/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*/*\nauthorization=Basic dXNlcjpwYXNzd2Q=\nconnection=close\nhost=external-auth-01.sample.com\nuser-agent=curl/7.50.1\nx-forwarded-for=10.2.60.1\nx-forwarded-host=external-auth-01.sample.com\nx-forwarded-port=80\nx-forwarded-proto=http\nx-real-ip=10.2.60.1\nBODY:\n* Connection #0 to host 172.17.4.99 left intact\n-no body in request-\n
"},{"location":"examples/auth/external-auth/#test-3-invalid-usernamepassword-expect-code-401","title":"Test 3: invalid username/password (expect code 401)","text":"
curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:user'\n* Rebuilt URL to: http://172.17.4.99/\n*   Trying 172.17.4.99...\n* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)\n* Server auth using Basic with user 'user'\n> GET / HTTP/1.1\n> Host: external-auth-01.sample.com\n> Authorization: Basic dXNlcjp1c2Vy\n> User-Agent: curl/7.50.1\n> Accept: */*\n>\n< HTTP/1.1 401 Unauthorized\n< Server: nginx/1.11.3\n< Date: Mon, 03 Oct 2016 14:53:04 GMT\n< Content-Type: text/html\n< Content-Length: 195\n< Connection: keep-alive\n* Authentication problem. Ignoring this.\n< WWW-Authenticate: Basic realm=\"Fake Realm\"\n<\n<html>\n<head><title>401 Authorization Required</title></head>\n<body bgcolor=\"white\">\n<center><h1>401 Authorization Required</h1></center>\n<hr><center>nginx/1.11.3</center>\n</body>\n</html>\n* Connection #0 to host 172.17.4.99 left intact\n
"},{"location":"examples/auth/oauth-external-auth/","title":"External OAUTH Authentication","text":""},{"location":"examples/auth/oauth-external-auth/#overview","title":"Overview","text":"

The auth-url and auth-signin annotations allow you to use an external authentication provider to protect your Ingress resources.

Important

This annotation requires ingress-nginx-controller v0.9.0 or greater.

"},{"location":"examples/auth/oauth-external-auth/#key-detail","title":"Key Detail","text":"

This functionality is enabled by deploying multiple Ingress objects for a single host. One Ingress object has no special annotations and handles authentication.

Other Ingress objects can then be annotated in such a way that requires the user to authenticate against the first Ingress's endpoint, and can redirect 401s to that same endpoint.

Sample:

...
metadata:
  name: application
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
...
"},{"location":"examples/auth/oauth-external-auth/#example-oauth2-proxy-kubernetes-dashboard","title":"Example: OAuth2 Proxy + Kubernetes-Dashboard","text":"

This example will show you how to deploy oauth2_proxy into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider.

"},{"location":"examples/auth/oauth-external-auth/#prepare","title":"Prepare","text":"
  1. Install the kubernetes dashboard

    kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml
  2. Create a custom GitHub OAuth application

    • Homepage URL is the FQDN in the Ingress rule, like https://foo.bar.com
    • Authorization callback URL is the same as the base FQDN plus /oauth2/callback, like https://foo.bar.com/oauth2/callback

  3. Configure the following values in the file oauth2-proxy.yaml:

    • OAUTH2_PROXY_CLIENT_ID with the github <Client ID>
    • OAUTH2_PROXY_CLIENT_SECRET with the github <Client Secret>
    • OAUTH2_PROXY_COOKIE_SECRET with value of python -c 'import os,base64; print(base64.b64encode(os.urandom(16)).decode(\"ascii\"))'
    • (optional, but recommended) OAUTH2_PROXY_GITHUB_USERS with GitHub usernames to allow to login
    • __INGRESS_HOST__ with a valid FQDN (e.g. foo.bar.com)
    • __INGRESS_SECRET__ with a Secret with a valid SSL certificate
  4. Deploy the oauth2 proxy and the ingress rules by running:

    $ kubectl create -f oauth2-proxy.yaml
"},{"location":"examples/auth/oauth-external-auth/#test","title":"Test","text":"

Test the integration by accessing the configured URL, e.g. https://foo.bar.com

"},{"location":"examples/auth/oauth-external-auth/#example-vouch-proxy-kubernetes-dashboard","title":"Example: Vouch Proxy + Kubernetes-Dashboard","text":"

This example will show you how to deploy Vouch Proxy into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider.

"},{"location":"examples/auth/oauth-external-auth/#prepare_1","title":"Prepare","text":"
  1. Install the kubernetes dashboard

    kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml
  2. Create a custom GitHub OAuth application

    • Homepage URL is the FQDN in the Ingress rule, like https://foo.bar.com
    • Authorization callback URL is the same as the base FQDN plus /oauth2/auth, like https://foo.bar.com/oauth2/auth

  3. Configure the following Vouch Proxy values in the file vouch-proxy.yaml:

    • VOUCH_COOKIE_DOMAIN with value of <Ingress Host>
    • OAUTH_CLIENT_ID with the github <Client ID>
    • OAUTH_CLIENT_SECRET with the github <Client Secret>
    • (optional, but recommended) VOUCH_WHITELIST with GitHub usernames to allow to login
    • __INGRESS_HOST__ with a valid FQDN (e.g. foo.bar.com)
    • __INGRESS_SECRET__ with a Secret with a valid SSL certificate
  4. Deploy Vouch Proxy and the ingress rules by running:

    $ kubectl create -f vouch-proxy.yaml
"},{"location":"examples/auth/oauth-external-auth/#test_1","title":"Test","text":"

Test the integration by accessing the configured URL, e.g. https://foo.bar.com

"},{"location":"examples/canary/","title":"Canary","text":"

Ingress-Nginx has the ability to handle canary routing by setting specific annotations. The following is an example of how to configure a canary deployment with weighted canary routing.

"},{"location":"examples/canary/#create-your-main-deployment-and-service","title":"Create your main deployment and service","text":"

This is the main deployment of your application, with the Service that will be used to route to it:

echo \"\n---\n# Deployment\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: production\n  labels:\n    app: production\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: production\n  template:\n    metadata:\n      labels:\n        app: production\n    spec:\n      containers:\n      - name: production\n        image: registry.k8s.io/ingress-nginx/e2e-test-echo@sha256:6fc5aa2994c86575975bb20a5203651207029a0d28e3f491d8a127d08baadab4\n        ports:\n        - containerPort: 80\n        env:\n          - name: NODE_NAME\n            valueFrom:\n              fieldRef:\n                fieldPath: spec.nodeName\n          - name: POD_NAME\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.name\n          - name: POD_NAMESPACE\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.namespace\n          - name: POD_IP\n            valueFrom:\n              fieldRef:\n                fieldPath: status.podIP\n---\n# Service\napiVersion: v1\nkind: Service\nmetadata:\n  name: production\n  labels:\n    app: production\nspec:\n  ports:\n  - port: 80\n    targetPort: 80\n    protocol: TCP\n    name: http\n  selector:\n    app: production\n\" | kubectl apply -f -\n
"},{"location":"examples/canary/#create-the-canary-deployment-and-service","title":"Create the canary deployment and service","text":"

This is the canary deployment, which will take a weighted share of requests instead of the main deployment:

echo \"\n---\n# Deployment\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: canary\n  labels:\n    app: canary\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: canary\n  template:\n    metadata:\n      labels:\n        app: canary\n    spec:\n      containers:\n      - name: canary\n        image: registry.k8s.io/ingress-nginx/e2e-test-echo@sha256:6fc5aa2994c86575975bb20a5203651207029a0d28e3f491d8a127d08baadab4\n        ports:\n        - containerPort: 80\n        env:\n          - name: NODE_NAME\n            valueFrom:\n              fieldRef:\n                fieldPath: spec.nodeName\n          - name: POD_NAME\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.name\n          - name: POD_NAMESPACE\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.namespace\n          - name: POD_IP\n            valueFrom:\n              fieldRef:\n                fieldPath: status.podIP\n---\n# Service\napiVersion: v1\nkind: Service\nmetadata:\n  name: canary\n  labels:\n    app: canary\nspec:\n  ports:\n  - port: 80\n    targetPort: 80\n    protocol: TCP\n    name: http\n  selector:\n    app: canary\n\" | kubectl apply -f -\n
"},{"location":"examples/canary/#create-ingress-pointing-to-your-main-deployment","title":"Create Ingress Pointing To Your Main Deployment","text":"

Next you will need to expose your main deployment with an Ingress resource. Note that there are no canary-specific annotations on this Ingress.

echo \"\n---\n# Ingress\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: production\n  annotations:\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: echo.prod.mydomain.com\n    http:\n      paths:\n      - pathType: Prefix\n        path: /\n        backend:\n          service:\n            name: production\n            port:\n              number: 80\n\" | kubectl apply -f -\n
"},{"location":"examples/canary/#create-ingress-pointing-to-your-canary-deployment","title":"Create Ingress Pointing To Your Canary Deployment","text":"

You will then create an Ingress that has the canary-specific configuration. Please pay special attention to the following:

  • The host name is identical to the main ingress host name
  • The nginx.ingress.kubernetes.io/canary: "true" annotation is required and marks this Ingress as a canary (without it, the two Ingresses will clash)
  • The nginx.ingress.kubernetes.io/canary-weight: "50" annotation dictates the weight of the routing; in this case there is a 50% chance that a request will hit the canary deployment instead of the main deployment
    echo \"\n---\n# Ingress\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: canary\n  annotations:\n    nginx.ingress.kubernetes.io/canary: \\\"true\\\"\n    nginx.ingress.kubernetes.io/canary-weight: \\\"50\\\"\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: echo.prod.mydomain.com\n    http:\n      paths:\n      - pathType: Prefix\n        path: /\n        backend:\n          service:\n            name: canary\n            port:\n              number: 80\n\" | kubectl apply -f -\n
"},{"location":"examples/canary/#testing-your-setup","title":"Testing your setup","text":"

You can use the following command to test your setup (replacing INGRESS_CONTROLLER_IP with your ingress controller's IP address):

for i in $(seq 1 10); do curl -s --resolve echo.prod.mydomain.com:80:$INGRESS_CONTROLLER_IP echo.prod.mydomain.com | grep "Hostname"; done

You should get output similar to the following, showing that your canary setup is working as expected:

Hostname: production-5c5f65d859-phqzc
Hostname: canary-6697778457-zkfjf
Hostname: canary-6697778457-zkfjf
Hostname: production-5c5f65d859-phqzc
Hostname: canary-6697778457-zkfjf
Hostname: production-5c5f65d859-phqzc
Hostname: production-5c5f65d859-phqzc
Hostname: production-5c5f65d859-phqzc
Hostname: canary-6697778457-zkfjf
Hostname: production-5c5f65d859-phqzc
"},{"location":"examples/customization/configuration-snippets/","title":"Configuration Snippets","text":""},{"location":"examples/customization/configuration-snippets/#ingress","title":"Ingress","text":"

The Ingress in this example adds a custom header to Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at an example of specifying custom headers.
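For illustration, the ingress.yaml applied below typically carries an annotation of this kind (the specific directive and header shown here are assumptions, following the upstream snippet example):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Request-Id: $req_id";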

kubectl apply -f ingress.yaml
"},{"location":"examples/customization/configuration-snippets/#test","title":"Test","text":"

Check if the contents of the annotation are present in the nginx.conf file using:

kubectl exec ingress-nginx-controller-873061567-4n3k2 -n kube-system -- cat /etc/nginx/nginx.conf
"},{"location":"examples/customization/custom-configuration/","title":"Custom Configuration","text":"

Using a ConfigMap, it is possible to customize the NGINX configuration.

For example, if we want to change the timeouts we need to create a ConfigMap:

$ cat configmap.yaml
apiVersion: v1
data:
  proxy-connect-timeout: "10"
  proxy-read-timeout: "120"
  proxy-send-timeout: "120"
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/custom-configuration/configmap.yaml \
    | kubectl apply -f -

If the ConfigMap is updated, NGINX will be reloaded with the new configuration.

"},{"location":"examples/customization/custom-errors/","title":"Custom Errors","text":"

This example demonstrates how to use a custom backend to render custom error pages.

If you are using the Helm chart, look at the example values and don't forget to add the ConfigMap to your deployment; otherwise, continue with the manual deployment described under Customized default backend.

"},{"location":"examples/customization/custom-errors/#customized-default-backend","title":"Customized default backend","text":"

First, create the custom default-backend. It will be used by the Ingress controller later on. To do that, you can take a look at the example manifest in this project's GitHub repository.

$ kubectl create -f custom-default-backend.yaml
service "nginx-errors" created
deployment.apps "nginx-errors" created

This should have created a Deployment and a Service with the name nginx-errors.

$ kubectl get deploy,svc
NAME                           DESIRED   CURRENT   READY     AGE
deployment.apps/nginx-errors   1         1         1         10s

NAME                   TYPE        CLUSTER-IP  EXTERNAL-IP   PORT(S)   AGE
service/nginx-errors   ClusterIP   10.0.0.12   <none>        80/TCP    10s
"},{"location":"examples/customization/custom-errors/#ingress-controller-configuration","title":"Ingress controller configuration","text":"

If you do not already have an instance of the Ingress-Nginx Controller running, deploy it according to the deployment guide, then follow these steps:

  1. Edit the ingress-nginx-controller Deployment and set the value of the --default-backend-service flag to the name of the newly created error backend.

  2. Edit the ingress-nginx-controller ConfigMap and create the key custom-http-errors with a value of 404,503.

  3. Take note of the IP address assigned to the Ingress-Nginx Controller Service.

    $ kubectl get svc ingress-nginx
    NAME            TYPE        CLUSTER-IP  EXTERNAL-IP   PORT(S)          AGE
    ingress-nginx   ClusterIP   10.0.0.13   <none>        80/TCP,443/TCP   10m
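A sketch of what steps 1 and 2 might look like in practice (the deployment/ConfigMap names assume a standard installation; the backend's namespace is a placeholder):

# step 1: add the flag to the controller container args (via kubectl edit or a patch)
#   --default-backend-service=default/nginx-errors
# step 2: set the custom-http-errors key on the controller ConfigMap
kubectl patch configmap ingress-nginx-controller -n ingress-nginx \
  --type merge -p '{"data":{"custom-http-errors":"404,503"}}'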

Note

The ingress-nginx Service is of type ClusterIP in this example. This may vary depending on your environment. Make sure you can use the Service to reach NGINX before proceeding with the rest of this example.

"},{"location":"examples/customization/custom-errors/#testing-error-pages","title":"Testing error pages","text":"

Let us send a couple of HTTP requests using cURL and validate everything is working as expected.

A request to the default backend returns a 404 error with a custom message:

$ curl -D- http://10.0.0.13/
HTTP/1.1 404 Not Found
Server: nginx/1.13.12
Date: Tue, 12 Jun 2018 19:11:24 GMT
Content-Type: */*
Transfer-Encoding: chunked
Connection: keep-alive

<span>The page you're looking for could not be found.</span>

A request with a custom Accept header returns the corresponding document type (JSON):

$ curl -D- -H 'Accept: application/json' http://10.0.0.13/
HTTP/1.1 404 Not Found
Server: nginx/1.13.12
Date: Tue, 12 Jun 2018 19:12:36 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding

{ "message": "The page you're looking for could not be found" }

To go further with this example, feel free to deploy your own applications and Ingress objects, and validate that the responses are still in the correct format when a backend returns 503 (e.g. if you scale a Deployment down to 0 replicas).

"},{"location":"examples/customization/custom-headers/","title":"Custom Headers","text":""},{"location":"examples/customization/custom-headers/#caveats","title":"Caveats","text":"

Changes to the custom header config maps do not force a reload of the ingress-nginx-controllers.

"},{"location":"examples/customization/custom-headers/#workaround","title":"Workaround","text":"

To work around this limitation, perform a rolling restart of the deployment.
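A minimal sketch, assuming the standard deployment name and namespace of a default installation:

kubectl rollout restart deployment ingress-nginx-controller -n ingress-nginx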

"},{"location":"examples/customization/custom-headers/#example","title":"Example","text":"

This example demonstrates configuration of the Ingress-Nginx Controller via a ConfigMap to pass a custom list of headers to the upstream server.

custom-headers.yaml defines a ConfigMap in the ingress-nginx namespace named custom-headers, holding several custom X-prefixed HTTP headers.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/custom-headers/custom-headers.yaml

configmap.yaml defines a ConfigMap in the ingress-nginx namespace named ingress-nginx-controller. This controls the global configuration of the ingress controller, and already exists in a standard installation. The key proxy-set-headers is set to cite the previously-created ingress-nginx/custom-headers ConfigMap.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/custom-headers/configmap.yaml

The Ingress-Nginx Controller will read the ingress-nginx/ingress-nginx-controller ConfigMap, find the proxy-set-headers key, read HTTP headers from the ingress-nginx/custom-headers ConfigMap, and include those HTTP headers in all requests flowing from nginx to the backends.

The above example was for passing a custom list of headers to the upstream server. To pass the custom headers before sending response traffic to the client, use the add-headers key:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/custom-headers/configmap-client-response.yaml
"},{"location":"examples/customization/custom-headers/#test","title":"Test","text":"

Check that the contents of the ConfigMaps are present in the nginx.conf file using: kubectl exec ingress-nginx-controller-873061567-4n3k2 -n ingress-nginx -- cat /etc/nginx/nginx.conf
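To narrow the output down to the relevant directives, you can filter for proxy_set_header (the directive nginx uses for upstream headers; the pod name below is the same illustrative one as above):

kubectl exec ingress-nginx-controller-873061567-4n3k2 -n ingress-nginx -- cat /etc/nginx/nginx.conf | grep proxy_set_header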

"},{"location":"examples/customization/external-auth-headers/","title":"External authentication, authentication service response headers propagation","text":"

This example demonstrates propagation of selected authentication service response headers to a backend service.

Sample configuration includes:

  • A sample authentication service producing several response headers
  • Authentication logic based on the HTTP header User: requests whose User header contains the string internal are considered authenticated
  • After successful authentication, the service generates the response headers UserID and UserRole
  • A sample echo service displaying header information
  • Two Ingress objects pointing to the echo service
  • Public, which allows access from unauthenticated users
  • Private, which allows access from authenticated users only

You can deploy the sample services and Ingress objects as follows:

$ kubectl create -f deploy/
deployment "demo-auth-service" created
service "demo-auth-service" created
ingress "demo-auth-service" created
deployment "demo-echo-service" created
service "demo-echo-service" created
ingress "public-demo-echo-service" created
ingress "secure-demo-echo-service" created

$ kubectl get po
NAME                                        READY     STATUS    RESTARTS   AGE
demo-auth-service-2769076528-7g9mh          1/1       Running   0          30s
demo-echo-service-3636052215-3vw8c          1/1       Running   0          29s

$ kubectl get ing
NAME                       HOSTS                                 ADDRESS   PORTS     AGE
public-demo-echo-service   public-demo-echo-service.kube.local             80        1m
secure-demo-echo-service   secure-demo-echo-service.kube.local             80        1m
"},{"location":"examples/customization/external-auth-headers/#test-1-public-service-with-no-auth-header","title":"Test 1: public service with no auth header","text":"
$ curl -H 'Host: public-demo-echo-service.kube.local' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n*   Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: public-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:19:21 GMT\n< Content-Type: text/plain; charset=utf-8\n< Content-Length: 20\n< Connection: keep-alive\n<\n* Connection #0 to host 192.168.99.100 left intact\nUserID: , UserRole:\n
"},{"location":"examples/customization/external-auth-headers/#test-2-secure-service-with-no-auth-header","title":"Test 2: secure service with no auth header","text":"
$ curl -H 'Host: secure-demo-echo-service.kube.local' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n*   Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: secure-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n>\n< HTTP/1.1 403 Forbidden\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:18:48 GMT\n< Content-Type: text/html\n< Content-Length: 170\n< Connection: keep-alive\n<\n<html>\n<head><title>403 Forbidden</title></head>\n<body bgcolor=\"white\">\n<center><h1>403 Forbidden</h1></center>\n<hr><center>nginx/1.11.10</center>\n</body>\n</html>\n* Connection #0 to host 192.168.99.100 left intact\n
"},{"location":"examples/customization/external-auth-headers/#test-3-public-service-with-valid-auth-header","title":"Test 3: public service with valid auth header","text":"
$ curl -H 'Host: public-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n*   Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: public-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n> User:internal\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:19:59 GMT\n< Content-Type: text/plain; charset=utf-8\n< Content-Length: 44\n< Connection: keep-alive\n<\n* Connection #0 to host 192.168.99.100 left intact\nUserID: 1443635317331776148, UserRole: admin\n
"},{"location":"examples/customization/external-auth-headers/#test-4-secure-service-with-valid-auth-header","title":"Test 4: secure service with valid auth header","text":"
$ curl -H 'Host: secure-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n*   Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: secure-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n> User:internal\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:17:23 GMT\n< Content-Type: text/plain; charset=utf-8\n< Content-Length: 43\n< Connection: keep-alive\n<\n* Connection #0 to host 192.168.99.100 left intact\nUserID: 605394647632969758, UserRole: admin\n
"},{"location":"examples/customization/jwt/","title":"Accommodation for JWT","text":"

JWT (short for JSON Web Token) is a widely used authentication method. Basically, an authentication server generates a JWT, and you then include this token in every request you make to a backend service. The JWT can be quite big and is present in the headers of every HTTP request. This means you may have to adapt the max-header size of your nginx-ingress in order to support it.

"},{"location":"examples/customization/jwt/#symptoms","title":"Symptoms","text":"

If you use JWT and you get an HTTP 502 error from your ingress, it may be a sign that the buffer size is not big enough.

To be 100% sure, look at the logs of the ingress-nginx-controller pod; you should see something like this:

upstream sent too big header while reading response header from upstream...\n
"},{"location":"examples/customization/jwt/#increase-buffer-size-for-headers","title":"Increase buffer size for headers","text":"

In NGINX, we want to modify the proxy-buffer-size property. There is no universally correct size; it depends on your needs. Be aware that a high value can lower the performance of your ingress proxy. In general, a value of 16k should have you covered.

"},{"location":"examples/customization/jwt/#using-helm","title":"Using helm","text":"

If you're using Helm, you can simply set the option via the chart's config properties.

 # -- Will add custom configuration options to Nginx https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/\n  config: \n    proxy-buffer-size: 16k\n
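The same setting can also be applied from the command line. A minimal sketch, assuming the official chart and a release named ingress-nginx in the ingress-nginx namespace (adjust the names to your installation):

```console
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values \
  --set controller.config.proxy-buffer-size=16k
```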

"},{"location":"examples/customization/jwt/#manually-in-kubernetes-config-files","title":"Manually in kubernetes config files","text":"

If you use an already generated config from a provider, you will have to change the controller-configmap.yaml:

---\n# Source: ingress-nginx/templates/controller-configmap.yaml\napiVersion: v1\nkind: ConfigMap\n# ...\ndata:\n  #...\n  proxy-buffer-size: \"16k\"\n

References:

  • Custom Configuration

"},{"location":"examples/customization/ssl-dh-param/","title":"Custom DH parameters for perfect forward secrecy","text":"

This example aims to demonstrate the deployment of an Ingress-Nginx Controller and use a ConfigMap to configure a custom Diffie-Hellman parameters file to help with \"Perfect Forward Secrecy\".

"},{"location":"examples/customization/ssl-dh-param/#custom-configuration","title":"Custom configuration","text":"
$ cat configmap.yaml\napiVersion: v1\ndata:\n  ssl-dh-param: \"ingress-nginx/lb-dhparam\"\nkind: ConfigMap\nmetadata:\n  name: ingress-nginx-controller\n  namespace: ingress-nginx\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\n
$ kubectl create -f configmap.yaml\n
"},{"location":"examples/customization/ssl-dh-param/#custom-dh-parameters-secret","title":"Custom DH parameters secret","text":"
$ openssl dhparam 4096 2> /dev/null | base64\nLS0tLS1CRUdJTiBESCBQQVJBTUVURVJ...\n
$ cat ssl-dh-param.yaml\napiVersion: v1\ndata:\n  dhparam.pem: \"LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ...\"\nkind: Secret\nmetadata:\n  name: lb-dhparam\n  namespace: ingress-nginx\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\n
$ kubectl create -f ssl-dh-param.yaml\n
"},{"location":"examples/customization/ssl-dh-param/#test","title":"Test","text":"

Check that the contents of the ConfigMap are present in the nginx.conf file using:

$ kubectl exec ingress-nginx-controller-873061567-4n3k2 -n ingress-nginx -- cat /etc/nginx/nginx.conf\n
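To narrow the output down to the relevant directive instead of scrolling through the whole file, a small sketch reusing the pod name from the command above:

```console
kubectl exec ingress-nginx-controller-873061567-4n3k2 -n ingress-nginx \
  -- cat /etc/nginx/nginx.conf | grep ssl_dhparam
```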

"},{"location":"examples/customization/sysctl/","title":"Sysctl tuning","text":"

This example aims to demonstrate the use of an Init Container to adjust sysctl default values using kubectl patch.

kubectl patch deployment -n ingress-nginx ingress-nginx-controller \\\n    --patch=\"$(curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/sysctl/patch.json)\"\n

Changes:

  • Backlog Queue setting net.core.somaxconn from 128 to 32768
  • Ephemeral Ports setting net.ipv4.ip_local_port_range from 32768 60999 to 1024 65000

A post on the NGINX blog explains the reasoning behind these changes.
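For illustration, the patch adds a privileged init container along these lines. This is a hand-written sketch of the idea, not the verbatim contents of patch.json:

```yaml
spec:
  template:
    spec:
      initContainers:
      - name: sysctl
        image: busybox
        securityContext:
          privileged: true  # required to change kernel parameters
        command:
        - sh
        - -c
        - |
          sysctl -w net.core.somaxconn=32768
          sysctl -w net.ipv4.ip_local_port_range="1024 65000"
```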

"},{"location":"examples/docker-registry/","title":"Docker registry","text":"

This example demonstrates how to deploy a docker registry in the cluster and configure Ingress to enable access from the Internet.

"},{"location":"examples/docker-registry/#deployment","title":"Deployment","text":"

First we deploy the docker registry in the cluster:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/docker-registry/deployment.yaml\n

Important

DO NOT RUN THIS IN PRODUCTION

This deployment uses an emptyDir volume, which means the contents of the registry will be deleted when the pod dies.

The next required step is the creation of the Ingress rules. There are two options: with and without TLS.

"},{"location":"examples/docker-registry/#without-tls","title":"Without TLS","text":"

Download and edit the yaml deployment replacing registry.<your domain> with a valid DNS name pointing to the ingress controller:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/docker-registry/ingress-without-tls.yaml\n

Important

Running a docker registry without TLS requires us to configure our local docker daemon with the insecure registry flag.

Please check deploy a plain http registry

"},{"location":"examples/docker-registry/#with-tls","title":"With TLS","text":"

Download and edit the yaml deployment replacing registry.<your domain> with a valid DNS name pointing to the ingress controller:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/docker-registry/ingress-with-tls.yaml\n

Deploy kube-lego to use Let's Encrypt certificates, or edit the ingress rule to use a secret with an existing SSL certificate.

"},{"location":"examples/docker-registry/#testing","title":"Testing","text":"

To test the registry is working correctly we download a known image from docker hub, create a tag pointing to the new registry and upload the image:

docker pull ubuntu:16.04\ndocker tag ubuntu:16.04 registry.<your domain>/ubuntu:16.04\ndocker push registry.<your domain>/ubuntu:16.04\n

Please replace registry.<your domain> with your domain.

"},{"location":"examples/grpc/","title":"gRPC","text":"

This example demonstrates how to route traffic to a gRPC service through the Ingress-NGINX controller.

"},{"location":"examples/grpc/#prerequisites","title":"Prerequisites","text":"
  1. You have a kubernetes cluster running.
  2. You have a domain name such as example.com that is configured to route traffic to the Ingress-NGINX controller.
  3. You have the ingress-nginx-controller installed as per docs.
  4. You have a backend application running a gRPC server listening for TCP traffic. If you want, you can use https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go as an example.
  5. You're also responsible for provisioning an SSL certificate for the ingress. So you need to have a valid SSL certificate, deployed as a Kubernetes secret of type tls, in the same namespace as the gRPC application.
"},{"location":"examples/grpc/#step-1-create-a-kubernetes-deployment-for-grpc-app","title":"Step 1: Create a Kubernetes Deployment for gRPC app","text":"
  • Make sure your gRPC application pod is running and listening for connections. For example, you can try a kubectl command like the one below:
    $ kubectl get po -A -o wide | grep go-grpc-greeter-server\n
  • If you have a gRPC app deployed in your cluster, then skip further notes in this Step 1, and continue from Step 2 below.

  • As an example gRPC application, we can use this app https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go.

  • To create a container image for this app, you can use this Dockerfile.

  • If you use the Dockerfile mentioned above to create an image, then you can use the following example Kubernetes manifest to create a deployment resource that uses that image. If necessary, edit this manifest to suit your needs.

cat <<EOF | kubectl apply -f -\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  labels:\n    app: go-grpc-greeter-server\n  name: go-grpc-greeter-server\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: go-grpc-greeter-server\n  template:\n    metadata:\n      labels:\n        app: go-grpc-greeter-server\n    spec:\n      containers:\n      - image: <reponame>/go-grpc-greeter-server   # Edit this for your reponame\n        resources:\n          limits:\n            cpu: 100m\n            memory: 100Mi\n          requests:\n            cpu: 50m\n            memory: 50Mi\n        name: go-grpc-greeter-server\n        ports:\n        - containerPort: 50051\nEOF\n
"},{"location":"examples/grpc/#step-2-create-the-kubernetes-service-for-the-grpc-app","title":"Step 2: Create the Kubernetes Service for the gRPC app","text":"
  • You can use the following example manifest to create a service of type ClusterIP. Edit the name/namespace/label/port to match your deployment/pod.
    cat <<EOF | kubectl apply -f -\napiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: go-grpc-greeter-server\n  name: go-grpc-greeter-server\nspec:\n  ports:\n  - port: 80\n    protocol: TCP\n    targetPort: 50051\n  selector:\n    app: go-grpc-greeter-server\n  type: ClusterIP\nEOF\n
  • You can save the above example manifest to a file with name service.go-grpc-greeter-server.yaml and edit it to match your deployment/pod, if required. You can create the service resource with a kubectl command like this:
$ kubectl create -f service.go-grpc-greeter-server.yaml\n
"},{"location":"examples/grpc/#step-3-create-the-kubernetes-ingress-resource-for-the-grpc-app","title":"Step 3: Create the Kubernetes Ingress resource for the gRPC app","text":"
  • Use the following example manifest of an Ingress resource to create an Ingress for your gRPC app. If required, edit it to match your app's details like name, namespace, service, secret, etc. Make sure the required SSL certificate exists in your Kubernetes cluster, in the same namespace where the gRPC app is. The certificate must be available as a Kubernetes secret resource of type \"kubernetes.io/tls\" https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets. This is because we are terminating TLS on the ingress.
cat <<EOF | kubectl apply -f -\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/ssl-redirect: \"true\"\n    nginx.ingress.kubernetes.io/backend-protocol: \"GRPC\"\n  name: fortune-ingress\n  namespace: default\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: grpctest.dev.mydomain.com\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: go-grpc-greeter-server\n            port:\n              number: 80\n  tls:\n  # This secret must exist beforehand\n  # The cert must also contain the subj-name grpctest.dev.mydomain.com\n  # https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/PREREQUISITES.md#tls-certificates\n  - secretName: wildcard.dev.mydomain.com\n    hosts:\n      - grpctest.dev.mydomain.com\nEOF\n
  • If you save the above example manifest as a file named ingress.go-grpc-greeter-server.yaml and edit it to match your deployment and service, you can create the ingress like this:
$ kubectl create -f ingress.go-grpc-greeter-server.yaml\n
  • The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, gRPC traffic will travel unencrypted inside the cluster and arrive \"insecure\").

  • For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your pod and terminate TLS at the gRPC server itself, add the ingress annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPCS\".

  • A few more things to note:

  • We've tagged the ingress with the annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPC\". This is the magic ingredient that sets up the appropriate nginx configuration to route http/2 traffic to our service.

  • We're terminating TLS at the ingress and have configured an SSL certificate wildcard.dev.mydomain.com. The ingress matches traffic arriving as https://grpctest.dev.mydomain.com:443 and routes unencrypted messages to the backend Kubernetes service.

"},{"location":"examples/grpc/#step-4-test-the-connection","title":"Step 4: test the connection","text":"
  • Once we've applied our configuration to Kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility:
$ grpcurl grpctest.dev.mydomain.com:443 helloworld.Greeter/SayHello\n{\n  \"message\": \"Hello \"\n}\n
"},{"location":"examples/grpc/#debugging-hints","title":"Debugging Hints","text":"
  1. Obviously, watch the logs on your app.
  2. Watch the logs for the ingress-nginx-controller (increasing verbosity as needed).
  3. Double-check your address and ports.
  4. Set the GODEBUG=http2debug=2 environment variable to get detailed http/2 logging on the client and/or server.
  5. Study RFC 7540 (http/2) https://tools.ietf.org/html/rfc7540.

If you are developing public gRPC endpoints, check out https://proto.stack.build, a protocol buffer / gRPC build service that you can use to make it easier for your users to consume your API.

See also the specific gRPC settings of NGINX: https://nginx.org/en/docs/http/ngx_http_grpc_module.html

"},{"location":"examples/grpc/#notes-on-using-responserequest-streams","title":"Notes on using response/request streams","text":"

grpc_read_timeout and grpc_send_timeout will be set as proxy_read_timeout and proxy_send_timeout when you set backend protocol to GRPC or GRPCS.

  1. If your server only does response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the grpc_read_timeout to accommodate this.
  2. If your service only does request streaming and you expect a stream to be open longer than 60 seconds, you have to change the grpc_send_timeout and the client_body_timeout.
  3. If you do both response and request streaming with an open stream longer than 60 seconds, you have to change all three timeouts: grpc_read_timeout, grpc_send_timeout and client_body_timeout. A sketch of how to set these follows this list.
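Since the grpc_* timeouts are derived from the proxy timeout annotations when the backend protocol is GRPC or GRPCS, a hedged sketch for a long-lived stream could look like the following. Note that client_body_timeout has no per-Ingress annotation; it is set globally through the controller ConfigMap key client-body-timeout:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    # mapped to grpc_read_timeout / grpc_send_timeout for GRPC backends
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```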
"},{"location":"examples/multi-tls/","title":"Multi TLS certificate termination","text":"

This example uses 2 different certificates to terminate SSL for 2 hostnames.

  1. Create tls secrets for foo.bar.com and bar.baz.com as indicated in the yaml (see the sketch after this list)
  2. Create multi-tls.yaml
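For step 1, a minimal sketch of creating the two TLS secrets; the secret and file names below are placeholders, so use the names referenced in multi-tls.yaml:

```console
kubectl create secret tls foobar-tls --cert=foobar.crt --key=foobar.key
kubectl create secret tls barbaz-tls --cert=barbaz.crt --key=barbaz.key
```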

This should generate a segment like:

$ kubectl exec -it ingress-nginx-controller-6vwd1 -- cat /etc/nginx/nginx.conf | grep \"foo.bar.com\" -B 7 -A 35\n    server {\n        listen 80;\n        listen 443 ssl http2;\n        ssl_certificate /etc/nginx-ssl/default-foobar.pem;\n        ssl_certificate_key /etc/nginx-ssl/default-foobar.pem;\n\n\n        server_name foo.bar.com;\n\n\n        if ($scheme = http) {\n            return 301 https://$host$request_uri;\n        }\n\n\n\n        location / {\n            proxy_set_header Host                   $host;\n\n            # Pass Real IP\n            proxy_set_header X-Real-IP              $remote_addr;\n\n            # Allow websocket connections\n            proxy_set_header                        Upgrade           $http_upgrade;\n            proxy_set_header                        Connection        $connection_upgrade;\n\n            proxy_set_header X-Forwarded-For        $proxy_add_x_forwarded_for;\n            proxy_set_header X-Forwarded-Host       $host;\n            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;\n\n            proxy_connect_timeout                   5s;\n            proxy_send_timeout                      60s;\n            proxy_read_timeout                      60s;\n\n            proxy_redirect                          off;\n            proxy_buffering                         off;\n\n            proxy_http_version                      1.1;\n\n            proxy_pass http://default-http-svc-80;\n        }\n

And you should be able to reach your nginx service or http-svc service using a hostname switch:

$  kubectl get ing\nNAME      RULE          BACKEND   ADDRESS                         AGE\nfoo-tls   -                       104.154.30.67                   13m\n          foo.bar.com\n          /             http-svc:80\n          bar.baz.com\n          /             nginx:80\n\n$ curl https://104.154.30.67 -H 'Host:foo.bar.com' -k\nCLIENT VALUES:\nclient_address=10.245.0.6\ncommand=GET\nreal path=/\nquery=nil\nrequest_version=1.1\nrequest_uri=http://foo.bar.com:8080/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*/*\nconnection=close\nhost=foo.bar.com\nuser-agent=curl/7.35.0\nx-forwarded-for=10.245.0.1\nx-forwarded-host=foo.bar.com\nx-forwarded-proto=https\n\n$ curl https://104.154.30.67 -H 'Host:bar.baz.com' -k\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx on Debian!</title>\n\n$ curl 104.154.30.67\ndefault backend - 404\n

"},{"location":"examples/openpolicyagent/","title":"OpenPolicyAgent and pathType enforcing","text":"

The Ingress API allows users to specify different pathType values on an Ingress object.

While pathType Exact and Prefix should allow only a small set of characters, pathType ImplementationSpecific allows any characters, as it may contain regexes, variables and other features that may be specific to the Ingress Controller being used.

This means that the Ingress Admins (the persona who deployed the Ingress Controller) should trust the users allowed to use pathType: ImplementationSpecific, as this may allow arbitrary configuration, and this configuration may end up in the proxy (i.e. NGINX) configuration.

"},{"location":"examples/openpolicyagent/#example","title":"Example","text":"

The example in this repo uses Gatekeeper to block the usage of pathType: ImplementationSpecific, allowing just a specific list of namespaces to use it.

It is recommended that the admin modifies these rules to enforce a specific set of characters when the usage of ImplementationSpecific is allowed, or in whatever way best suits their needs.

First, the ConstraintTemplate from template.yaml will define a rule that validates if the Ingress object is being created in an exempted namespace, and if not, will validate its pathType.

Then, the rule K8sBlockIngressPathType contained in rule.yaml will define the parameters: what kind of object should be verified (Ingress), what are the exempted namespaces, and what kinds of pathType are blocked.

"},{"location":"examples/psp/","title":"Pod Security Policy (PSP)","text":"

In most clusters today, by default, all resources (e.g. Deployments and ReplicaSets) have permissions to create pods. Kubernetes however provides a more fine-grained authorization policy called Pod Security Policy (PSP).

PSP allows the cluster owner to define the permission of each object, for example creating a pod. If you have PSP enabled on the cluster, and you deploy ingress-nginx, you will need to provide the Deployment with the permissions to create pods.

Before applying any objects, first apply the PSP permissions by running:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/psp/psp.yaml\n

Note: PSP permissions must be granted before the creation of the Deployment and the ReplicaSet.

"},{"location":"examples/rewrite/","title":"Rewrite","text":"

This example demonstrates how to use Rewrite annotations.

"},{"location":"examples/rewrite/#prerequisites","title":"Prerequisites","text":"

You will need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, and that you have an ingress controller running in your cluster.

"},{"location":"examples/rewrite/#deployment","title":"Deployment","text":"

Rewriting can be controlled using the following annotations:

  • nginx.ingress.kubernetes.io/rewrite-target: Target URI where the traffic must be redirected (string)
  • nginx.ingress.kubernetes.io/ssl-redirect: Indicates if the location section is only accessible via SSL (defaults to True when Ingress contains a Certificate) (bool)
  • nginx.ingress.kubernetes.io/force-ssl-redirect: Forces the redirection to HTTPS even if the Ingress is not TLS Enabled (bool)
  • nginx.ingress.kubernetes.io/app-root: Defines the Application Root that the Controller must redirect if it's in / context (string)
  • nginx.ingress.kubernetes.io/use-regex: Indicates if the paths defined on an Ingress use regular expressions (bool)
"},{"location":"examples/rewrite/#examples","title":"Examples","text":""},{"location":"examples/rewrite/#rewrite-target","title":"Rewrite Target","text":"

Attention

Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.

Note

Captured groups are saved in numbered placeholders, chronologically, in the form $1, $2 ... $n. These placeholders can be used as parameters in the rewrite-target annotation.

Note

Please see the FAQ for Validation Of path

Create an Ingress rule with a rewrite annotation:

$ echo '\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/use-regex: \"true\"\n    nginx.ingress.kubernetes.io/rewrite-target: /$2\n  name: rewrite\n  namespace: default\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: rewrite.bar.com\n    http:\n      paths:\n      - path: /something(/|$)(.*)\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: http-svc\n            port: \n              number: 80\n' | kubectl create -f -\n

In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $2, which is then used as a parameter in the rewrite-target annotation.

For example, the ingress definition above will result in the following rewrites:

  • rewrite.bar.com/something rewrites to rewrite.bar.com/
  • rewrite.bar.com/something/ rewrites to rewrite.bar.com/
  • rewrite.bar.com/something/new rewrites to rewrite.bar.com/new
"},{"location":"examples/rewrite/#app-root","title":"App Root","text":"

Create an Ingress rule with an app-root annotation:

$ echo \"\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/app-root: /app1\n  name: approot\n  namespace: default\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: approot.bar.com\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: http-svc\n            port: \n              number: 80\n\" | kubectl create -f -\n

Check that the rewrite is working:

$ curl -I -k http://approot.bar.com/\nHTTP/1.1 302 Moved Temporarily\nServer: nginx/1.11.10\nDate: Mon, 13 Mar 2017 14:57:15 GMT\nContent-Type: text/html\nContent-Length: 162\nLocation: http://approot.bar.com/app1\nConnection: keep-alive\n
"},{"location":"examples/static-ip/","title":"Static IPs","text":"

This example demonstrates how to assign a static IP to an Ingress through the Ingress-NGINX controller.

"},{"location":"examples/static-ip/#prerequisites","title":"Prerequisites","text":"

You need a TLS cert and a test HTTP service for this example. You will also need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, and that you have an ingress controller running in your cluster.

"},{"location":"examples/static-ip/#acquiring-an-ip","title":"Acquiring an IP","text":"

Since instances of the ingress nginx controller actually run on nodes in your cluster, by default nginx Ingresses will only get static IPs if your cloud provider supports static IP assignments to nodes. On GKE/GCE for example, even though nodes get static IPs, the IPs are not retained across upgrades.

To acquire a static IP for the ingress-nginx-controller, simply put it behind a Service of Type=LoadBalancer.

First, create a loadbalancer Service and wait for it to acquire an IP:

$ kubectl create -f static-ip-svc.yaml\nservice \"ingress-nginx-lb\" created\n\n$ kubectl get svc ingress-nginx-lb\nNAME               CLUSTER-IP     EXTERNAL-IP       PORT(S)                      AGE\ningress-nginx-lb   10.0.138.113   104.154.109.191   80:31457/TCP,443:32240/TCP   15m\n

Then, update the ingress controller so it adopts the static IP of the Service by passing the --publish-service flag (the example yaml used in the next step already has it set to \"ingress-nginx-lb\").

$ kubectl create -f ingress-nginx-controller.yaml\ndeployment \"ingress-nginx-controller\" created\n
"},{"location":"examples/static-ip/#assigning-the-ip-to-an-ingress","title":"Assigning the IP to an Ingress","text":"

From here on every Ingress created with the ingress.class annotation set to nginx will get the IP allocated in the previous step.

$ kubectl create -f ingress-nginx.yaml\ningress \"ingress-nginx\" created\n\n$ kubectl get ing ingress-nginx\nNAME            HOSTS     ADDRESS           PORTS     AGE\ningress-nginx   *         104.154.109.191   80, 443   13m\n\n$ curl 104.154.109.191 -kL\nCLIENT VALUES:\nclient_address=10.180.1.25\ncommand=GET\nreal path=/\nquery=nil\nrequest_version=1.1\nrequest_uri=http://104.154.109.191:8080/\n...\n
"},{"location":"examples/static-ip/#retaining-the-ip","title":"Retaining the IP","text":"

You can test retention by deleting the Ingress:

$ kubectl delete ing ingress-nginx\ningress \"ingress-nginx\" deleted\n\n$ kubectl create -f ingress-nginx.yaml\ningress \"ingress-nginx\" created\n\n$ kubectl get ing ingress-nginx\nNAME            HOSTS     ADDRESS           PORTS     AGE\ningress-nginx   *         104.154.109.191   80, 443   13m\n

Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all Ingresses, because all requests are proxied through the same set of nginx controllers.

"},{"location":"examples/static-ip/#promote-ephemeral-to-static-ip","title":"Promote ephemeral to static IP","text":"

To promote the allocated IP to static, you can update the Service manifest:

$ kubectl patch svc ingress-nginx-lb -p '{\"spec\": {\"loadBalancerIP\": \"104.154.109.191\"}}'\n\"ingress-nginx-lb\" patched\n

... and promote the IP to static (promotion works differently for each cloud provider; the example provided is for GKE/GCE):

$ gcloud compute addresses create ingress-nginx-lb --addresses 104.154.109.191 --region us-central1\nCreated [https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/ingress-nginx-lb].\n---\naddress: 104.154.109.191\ncreationTimestamp: '2017-01-31T16:34:50.089-08:00'\ndescription: ''\nid: '5208037144487826373'\nkind: compute#address\nname: ingress-nginx-lb\nregion: us-central1\nselfLink: https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/ingress-nginx-lb\nstatus: IN_USE\nusers:\n- us-central1/forwardingRules/a09f6913ae80e11e6a8c542010af0000\n

Now even if the Service is deleted, the IP will persist, so you can recreate the Service with spec.loadBalancerIP set to 104.154.109.191.

"},{"location":"examples/tls-termination/","title":"TLS termination","text":"

This example demonstrates how to terminate TLS through the Ingress-Nginx Controller.

"},{"location":"examples/tls-termination/#prerequisites","title":"Prerequisites","text":"

You need a TLS cert and a test HTTP service for this example.

"},{"location":"examples/tls-termination/#deployment","title":"Deployment","text":"

Create an ingress.yaml file.

apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: nginx-test\nspec:\n  tls:\n    - hosts:\n      - foo.bar.com\n      # This assumes tls-secret exists and the SSL\n      # certificate contains a CN for foo.bar.com\n      secretName: tls-secret\n  ingressClassName: nginx\n  rules:\n    - host: foo.bar.com\n      http:\n        paths:\n        - path: /\n          pathType: Prefix\n          backend:\n            # This assumes http-svc exists and routes to healthy endpoints\n            service:\n              name: http-svc\n              port:\n                number: 80\n

The following command instructs the controller to terminate traffic using the provided TLS cert, and forward un-encrypted HTTP traffic to the test HTTP service.

kubectl apply -f ingress.yaml\n
"},{"location":"examples/tls-termination/#validation","title":"Validation","text":"

You can confirm that the Ingress works.

$ kubectl describe ing nginx-test\nName:           nginx-test\nNamespace:      default\nAddress:        104.198.183.6\nDefault backend:    default-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080)\nTLS:\n  tls-secret terminates\nRules:\n  Host  Path    Backends\n  ----  ----    --------\n  *\n            http-svc:80 (<none>)\nAnnotations:\nEvents:\n  FirstSeen LastSeen    Count   From                SubObjectPath   Type        Reason  Message\n  --------- --------    -----   ----                -------------   --------    ------  -------\n  7s        7s      1   {ingress-nginx-controller }         Normal      CREATE  default/nginx-test\n  7s        7s      1   {ingress-nginx-controller }         Normal      UPDATE  default/nginx-test\n  7s        7s      1   {ingress-nginx-controller }         Normal      CREATE  ip: 104.198.183.6\n  7s        7s      1   {ingress-nginx-controller }         Warning     MAPPING Ingress rule 'default/nginx-test' contains no path definition. Assuming /\n\n$ curl 104.198.183.6 -L\ncurl: (60) SSL certificate problem: self signed certificate\nMore details here: http://curl.haxx.se/docs/sslcerts.html\n\n$ curl 104.198.183.6 -Lk\nCLIENT VALUES:\nclient_address=10.240.0.4\ncommand=GET\nreal path=/\nquery=nil\nrequest_version=1.1\nrequest_uri=http://35.186.221.137:8080/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*/*\nconnection=Keep-Alive\nhost=35.186.221.137\nuser-agent=curl/7.46.0\nvia=1.1 google\nx-cloud-trace-context=f708ea7e369d4514fc90d51d7e27e91d/13322322294276298106\nx-forwarded-for=104.132.0.80, 35.186.221.137\nx-forwarded-proto=https\nBODY:\n
"},{"location":"user-guide/basic-usage/","title":"Basic usage - host based routing","text":"

ingress-nginx can be used for many use cases, inside various cloud providers, and supports a lot of configurations. In this section you can find a common usage scenario where a single load balancer powered by ingress-nginx will route traffic to 2 different HTTP backend services based on the host name.

First of all, follow the instructions to install ingress-nginx. Then imagine that you need to expose 2 HTTP services already installed: myServiceA and myServiceB, both configured as type: ClusterIP.

Let's say that you want to expose the first at myServiceA.foo.org and the second at myServiceB.foo.org.

If the cluster version is < 1.19, you can create two ingress resources like this:

apiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n  name: ingress-myservicea\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: myservicea.foo.org\n    http:\n      paths:\n      - path: /\n        backend:\n          serviceName: myservicea\n          servicePort: 80\n---\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n  name: ingress-myserviceb\n  annotations:\n    # use the shared ingress-nginx\n    kubernetes.io/ingress.class: \"nginx\"\nspec:\n  rules:\n  - host: myserviceb.foo.org\n    http:\n      paths:\n      - path: /\n        backend:\n          serviceName: myserviceb\n          servicePort: 80\n

If the cluster uses Kubernetes version >= 1.19.x, then it's suggested to create the 2 ingress resources using the yaml examples shown below. These examples conform to the networking.k8s.io/v1 API.

apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-myservicea\nspec:\n  rules:\n  - host: myservicea.foo.org\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: myservicea\n            port:\n              number: 80\n  ingressClassName: nginx\n---\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-myserviceb\nspec:\n  rules:\n  - host: myserviceb.foo.org\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: myserviceb\n            port:\n              number: 80\n  ingressClassName: nginx\n

When you apply this yaml, 2 ingress resources will be created, managed by the ingress-nginx instance. Nginx is configured to automatically discover all ingresses with the kubernetes.io/ingress.class: \"nginx\" annotation or where ingressClassName: nginx is present. Please note that the ingress resource should be placed in the same namespace as the backend resource.

On many cloud providers ingress-nginx will also create the corresponding Load Balancer resource. All you have to do is get the external IP and add a DNS A record inside your DNS provider that points myservicea.foo.org and myserviceb.foo.org to the nginx external IP. Get the external IP by running:

kubectl get services -n ingress-nginx\n

To test inside minikube refer to this documentation: Set up Ingress on Minikube with the NGINX Ingress Controller

"},{"location":"user-guide/cli-arguments/","title":"Command line arguments","text":"

The following command line arguments are accepted by the Ingress controller executable.

They are set in the container spec of the ingress-nginx-controller Deployment manifest.
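For example, adding or changing a flag means editing that args array. A sketch of the relevant fragment; the flags shown are illustrative, not a recommended set:

```yaml
containers:
- name: controller
  image: registry.k8s.io/ingress-nginx/controller:v1.11.2
  args:
  - /nginx-ingress-controller
  - --election-id=ingress-nginx-leader
  - --controller-class=k8s.io/ingress-nginx
  - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
```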

  • --annotations-prefix: Prefix of the Ingress annotations specific to the NGINX controller. (default \"nginx.ingress.kubernetes.io\")
  • --apiserver-host: Address of the Kubernetes API server. Takes the form \"protocol://address:port\". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted.
  • --bucket-factor: Bucket factor for native histograms. Value must be > 1 for enabling native histograms. (default 0)
  • --certificate-authority: Path to a cert file for the certificate authority. This certificate is used only when the flag --apiserver-host is specified.
  • --configmap: Name of the ConfigMap containing custom global configurations for the controller.
  • --controller-class: Ingress Class Controller value this Ingress satisfies. The class of an Ingress object is set using the field IngressClassName in Kubernetes clusters version v1.19.0 or higher. The .spec.controller value of the IngressClass referenced in an Ingress Object should be the same value specified here to make this object be watched.
  • --deep-inspect: Enables ingress object security deep inspector. (default true)
  • --default-backend-service: Service used to serve HTTP requests not matching any known server name (catch-all). Takes the form \"namespace/name\". The controller configures NGINX to forward requests to the first port of this Service.
  • --default-server-port: Port to use for exposing the default server (catch-all). (default 8181)
  • --default-ssl-certificate: Secret containing a SSL certificate to be used by the default HTTPS server (catch-all). Takes the form \"namespace/name\".
  • --enable-annotation-validation: If true, will enable the annotation validation feature. (default true)
  • --disable-catch-all: Disable support for catch-all Ingresses. (default false)
  • --disable-full-test: Disable full test of all merged ingresses at the admission stage and tests the template of the ingress being created or updated (full test of all ingresses is enabled by default).
  • --disable-svc-external-name: Disable support for Services of type ExternalName. (default false)
  • --disable-sync-events: Disables the creation of 'Sync' Event resources, but still logs them
  • --dynamic-configuration-retries: Number of times to retry failed dynamic configuration before failing to sync an ingress. (default 15)
  • --election-id: Election id to use for Ingress status updates. (default \"ingress-controller-leader\")
  • --election-ttl: Duration a leader election is valid before it's getting re-elected, e.g. 15s, 10m or 1h. (default 30s)
  • --enable-metrics: Enables the collection of NGINX metrics. (default true)
  • --enable-ssl-chain-completion: Autocomplete SSL certificate chains with missing intermediate CA certificates. Certificates uploaded to Kubernetes must have the \"Authority Information Access\" X.509 v3 extension for this to succeed. (default false)
  • --enable-ssl-passthrough: Enable SSL Passthrough. (default false)
  • --disable-leader-election: Disable Leader Election on Nginx Controller. (default false)
  • --enable-topology-aware-routing: Enable topology aware routing feature; needs the service object annotation service.kubernetes.io/topology-mode set to auto. (default false)
  • --exclude-socket-metrics: Set of socket request metrics to exclude which won't be exported nor calculated. The possible socket request metrics to exclude are documented in the monitoring guide e.g. 'nginx_ingress_controller_request_duration_seconds,nginx_ingress_controller_response_size'
  • --health-check-path: URL path of the health check endpoint. Configured inside the NGINX status server. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default \"/healthz\")
  • --health-check-timeout: Time limit, in seconds, for a probe to health-check-path to succeed. (default 10)
  • --healthz-port: Port to use for the healthz endpoint. (default 10254)
  • --healthz-host: Address to bind the healthz endpoint.
  • --http-port: Port to use for servicing HTTP traffic. (default 80)
  • --https-port: Port to use for servicing HTTPS traffic. (default 443)
  • --ingress-class: Name of the ingress class this controller satisfies. The class of an Ingress object is set using the field IngressClassName in Kubernetes clusters version v1.18.0 or higher or the annotation \"kubernetes.io/ingress.class\" (deprecated). If this parameter is not set, or set to the default value of \"nginx\", it will handle ingresses with either an empty or \"nginx\" class name.
  • --ingress-class-by-name: Define if Ingress Controller should watch for Ingress Class by Name together with Controller Class. (default false)
  • --internal-logger-address: Address to be used when binding the internal syslogger. (default 127.0.0.1:11514)
  • --kubeconfig: Path to a kubeconfig file containing authorization and API server information.
  • --length-buckets: Set of buckets which will be used for prometheus histogram metrics such as RequestLength, ResponseLength. (default [10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
  • --max-buckets: Maximum number of buckets for native histograms. (default 100)
  • --maxmind-edition-ids: Maxmind edition ids to download GeoLite2 Databases. (default \"GeoLite2-City,GeoLite2-ASN\")
  • --maxmind-retries-timeout: Maxmind downloading delay between 1st and 2nd attempt; 0s - do not retry to download if something went wrong. (default 0s)
  • --maxmind-retries-count: Number of attempts to download the GeoIP DB. (default 1)
  • --maxmind-license-key: Maxmind license key to download GeoLite2 Databases. https://blog.maxmind.com/2019/12/significant-changes-to-accessing-and-using-geolite2-databases/
  • --maxmind-mirror: Maxmind mirror url (example: http://geoip.local/databases).
  • --metrics-per-host: Export metrics per-host. (default true)
  • --metrics-per-undefined-host: Export metrics per-host even if the host is not defined in an ingress. Requires --metrics-per-host to be set to true. (default false)
  • --monitor-max-batch-size: Max batch size of NGINX metrics. (default 10000)
  • --post-shutdown-grace-period: Additional delay in seconds before the controller container exits. (default 10)
  • --profiler-port: Port to use for exposing the ingress controller Go profiler when it is enabled. (default 10245)
  • --profiling: Enable profiling via web interface host:port/debug/pprof/ . (default true)
  • --publish-service: Service fronting the Ingress controller. Takes the form \"namespace/name\". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies.
  • --publish-status-address: Customized address (or addresses, separated by comma) to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter.
  • --report-node-internal-ip-address: Set the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter. (default false)
  • --report-status-classes: If true, report status classes in metrics (2xx, 3xx, 4xx and 5xx) instead of full status codes. (default false)
  • --ssl-passthrough-proxy-port: Port to use internally for SSL Passthrough. (default 442)
  • --status-port: Port to use for the lua HTTP endpoint configuration. (default 10246)
  • --status-update-interval: Time interval in seconds in which the status should check if an update is required. (default 60)
  • --stream-port: Port to use for the lua TCP/UDP endpoint configuration. (default 10247)
  • --sync-period: Period at which the controller forces the repopulation of its local object stores. Disabled by default.
  • --sync-rate-limit: Define the sync frequency upper limit. (default 0.3)
  • --tcp-services-configmap: Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic.
  • --time-buckets: Set of buckets which will be used for prometheus histogram metrics such as RequestTime, ResponseTime. (default [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10])
  • --udp-services-configmap: Name of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port name or number.
  • --update-status: Update the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true)
  • --update-status-on-shutdown: Update the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. (default true)
  • --shutdown-grace-period: Seconds to wait after receiving the shutdown signal, before stopping the nginx process. (default 0)
  • --size-buckets: Set of buckets which will be used for prometheus histogram metrics such as BytesSent. (default [10, 100, 1000, 10000, 100000, 1e+06, 1e+07])
  • -v, --v: Level number for the log level verbosity
  • --validating-webhook: The address to start an admission controller on to validate incoming ingresses. Takes the form \":port\". If not provided, no admission controller is started.
  • --validating-webhook-certificate: The path of the validating webhook certificate PEM.
  • --validating-webhook-key: The path of the validating webhook key PEM.
  • --version: Show release information about the Ingress-Nginx Controller and exit.
  • --watch-ingress-without-class: Define if Ingress Controller should also watch for Ingresses without an IngressClass or the annotation specified. (default false)
  • --watch-namespace: Namespace the controller watches for updates to Kubernetes objects. This includes Ingresses, Services and all configuration resources. All namespaces are watched if this parameter is left empty.
  • --watch-namespace-selector: The controller will watch namespaces whose labels match the given selector. This flag only takes effect when --watch-namespace is empty.
"},{"location":"user-guide/custom-errors/","title":"Custom errors","text":"

When the custom-http-errors option is enabled, the Ingress controller configures NGINX so that it passes several HTTP headers down to its default-backend in case of error:

  • X-Code: HTTP status code returned by the request
  • X-Format: Value of the Accept header sent by the client
  • X-Original-URI: URI that caused the error
  • X-Namespace: Namespace where the backend Service is located
  • X-Ingress-Name: Name of the Ingress where the backend is defined
  • X-Service-Name: Name of the Service backing the backend
  • X-Service-Port: Port number of the Service backing the backend
  • X-Request-ID: Unique ID that identifies the request - same as for backend service
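For reference, the option itself is a key in the controller ConfigMap. A minimal sketch, assuming the ConfigMap name and namespace used by the standard deployment:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  custom-http-errors: "404,500,502,503"
```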

A custom error backend can use this information to return the best possible representation of an error page. For example, if the value of the Accept header sent by the client was application/json, a carefully crafted backend could decide to return the error payload as a JSON document instead of HTML.

Important

The custom backend is expected to return the correct HTTP status code instead of 200. NGINX does not change the response from the custom default backend.

An example of such custom backend is available inside the source repository at images/custom-error-pages.

See also the Custom errors example.

"},{"location":"user-guide/default-backend/","title":"Default backend","text":"

The default backend is a service which handles all URL paths and hosts the Ingress-NGINX controller doesn't understand (i.e., all the requests that are not mapped with an Ingress).

Basically a default backend exposes two URLs:

  • /healthz that returns 200
  • / that returns 404
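You can observe this from outside the cluster: any request that no Ingress rule maps is answered by the default backend. A sketch, with the controller's external IP as a placeholder:

```console
curl http://<EXTERNAL-IP>/ -H 'Host: unknown.example.com'
# default backend - 404
```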

Example

The sub-directory /images/custom-error-pages provides an additional service for the purpose of customizing the error pages served via the default backend.

"},{"location":"user-guide/exposing-tcp-udp-services/","title":"Exposing TCP and UDP services","text":"

While the Kubernetes Ingress resource only officially supports routing external HTTP(s) traffic to services, ingress-nginx can be configured to receive external TCP/UDP traffic from non-HTTP protocols and route them to internal services using TCP/UDP port mappings that are specified within a ConfigMap.

To support this, the --tcp-services-configmap and --udp-services-configmap flags can be used to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY]

It is also possible to use either the number or the name of the port. The last two fields are optional. By adding PROXY in either or both of the last two fields, we can use Proxy Protocol decoding (listen) and/or encoding (proxy_pass) in a TCP service. The first PROXY controls the decoding of the proxy protocol and the second PROXY controls the encoding using the proxy protocol. This allows an incoming connection to be decoded, or an outgoing connection to be encoded. It is also possible to arbitrate between two different proxies by turning on both decoding and encoding on a TCP service.

The next example shows how to expose the service example-go running in the namespace default on port 8080 using the external port 9000:

apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: tcp-services\n  namespace: ingress-nginx\ndata:\n  9000: \"default/example-go:8080\"\n
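Building on the same format, a hedged sketch of a tcp-services entry with the Proxy Protocol fields filled in; the first PROXY decodes the protocol on the inbound connection, the second encodes it towards the upstream:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # decode incoming proxy protocol and encode it towards the service
  9000: "default/example-go:8080:PROXY:PROXY"
```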

Since 1.9.13, NGINX provides UDP Load Balancing. The next example shows how to expose the service kube-dns running in the namespace kube-system on port 53 using the external port 53:

apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: udp-services\n  namespace: ingress-nginx\ndata:\n  53: \"kube-system/kube-dns:53\"\n

If TCP/UDP proxy support is used, then those ports need to be exposed in the Service defined for the Ingress.

apiVersion: v1\nkind: Service\nmetadata:\n  name: ingress-nginx\n  namespace: ingress-nginx\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\nspec:\n  type: LoadBalancer\n  ports:\n    - name: http\n      port: 80\n      targetPort: 80\n      protocol: TCP\n    - name: https\n      port: 443\n      targetPort: 443\n      protocol: TCP\n    - name: proxied-tcp-9000\n      port: 9000\n      targetPort: 9000\n      protocol: TCP\n  selector:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\n
Then, the ConfigMap should be added to the Ingress controller's deployment args:
 args:\n    - /nginx-ingress-controller\n    - --tcp-services-configmap=ingress-nginx/tcp-services\n

"},{"location":"user-guide/external-articles/","title":"External Articles","text":"
  • Pain(less) NGINX Ingress
  • Accessing Kubernetes Pods from Outside of the Cluster
  • Kubernetes - Redirect HTTP to HTTPS with ELB and the Ingress-Nginx Controller
  • Configure Nginx Ingress Controller for TLS termination on Kubernetes on Azure
  • Secure your Nginx Ingress controller behind Google Cloud Armor or Identity-Aware Proxy (IAP)
"},{"location":"user-guide/fcgi-services/","title":"Exposing FastCGI Servers","text":"

FastCGI is a binary protocol for interfacing interactive programs with a web server. [...] (Its) aim is to reduce the overhead related to interfacing between web server and CGI programs, allowing a server to handle more web page requests per unit of time.

— Wikipedia

The ingress-nginx ingress controller can be used to directly expose FastCGI servers. Enabling FastCGI in your Ingress only requires setting the backend-protocol annotation to FCGI, and with a couple more annotations you can customize the way ingress-nginx handles the communication with your FastCGI server.

For most practical use cases, PHP applications are a good example. PHP code is not HTML, so a FastCGI server like php-fpm processes an index.php script to produce the response to a request. See a working example below.

This post in a FastCGI feature issue describes a test for the FastCGI feature. The same test is described below.

"},{"location":"user-guide/fcgi-services/#example-objects-to-expose-a-fastcgi-server-pod","title":"Example Objects to expose a FastCGI server pod","text":""},{"location":"user-guide/fcgi-services/#the-fasctcgi-server-pod","title":"The FasctCGI server pod","text":"

The Pod object example below exposes port 9000, which is the conventional FastCGI port.

apiVersion: v1\nkind: Pod\nmetadata:\n  name: example-app\n  labels:\n    app: example-app\nspec:\n  containers:\n  - name: example-app\n    image: php:fpm-alpine\n    ports:\n    - containerPort: 9000\n      name: fastcgi\n
  • For this example to work, an HTML response should be received from the FastCGI server being exposed
  • An HTTP request should be sent to the FastCGI server pod
  • The response should be generated by a PHP script, as that is what we are demonstrating here

The image we are using here, php:fpm-alpine, does not ship with a ready-to-use PHP script inside it. So we need to provide the image with a simple PHP script for this example to work.

  • Use kubectl exec to get a shell into the example-app pod
  • You will land at the path /var/www/html
  • Create a simple PHP script there called index.php
  • Make the index.php file look like this:
<!DOCTYPE html>\n<html>\n    <head>\n        <title>PHP Test</title>\n    </head>\n    <body>\n        <?php echo '<p>FastCGI Test Worked!</p>'; ?>\n    </body>\n</html>\n
  • Save and exit from the shell in the pod
  • If you delete the pod, you will have to recreate the file, as this method is not persistent (a condensed sketch of these steps follows this list)
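A condensed sketch of those steps, assuming the pod runs in the default namespace:

```console
# Open a shell in the pod; the working directory is /var/www/html
kubectl exec -it example-app -- sh

# Inside the pod, create index.php with the content shown above
cat > index.php <<'EOF'
<!DOCTYPE html>
<html>
    <head>
        <title>PHP Test</title>
    </head>
    <body>
        <?php echo '<p>FastCGI Test Worked!</p>'; ?>
    </body>
</html>
EOF
exit
```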
"},{"location":"user-guide/fcgi-services/#the-fastcgi-service","title":"The FastCGI service","text":"

The Service object example below matches port 9000 from the Pod object above.

apiVersion: v1\nkind: Service\nmetadata:\n  name: example-service\nspec:\n  selector:\n    app: example-app\n  ports:\n  - port: 9000\n    targetPort: 9000\n    name: fastcgi\n
"},{"location":"user-guide/fcgi-services/#the-configmap-object-and-the-ingress-object","title":"The configMap object and the ingress object","text":"

The Ingress and ConfigMap objects below demonstrate the supported FastCGI specific annotations.

Important

NGINX actually has 50 FastCGI directives. Not all of the nginx directives have been exposed in the ingress yet.

"},{"location":"user-guide/fcgi-services/#the-configmap-object","title":"The ConfigMap object","text":"

This configMap object is required to set the parameters of FastCGI directives

Attention

  • The ConfigMap must be created before creating the ingress object
  • The Ingress Controller needs to find the configMap when the Ingress object with the FastCGI annotations is created
  • So create the configMap before the ingress
  • If the configMap is created after the ingress is created, then you will need to restart the Ingress Controller pods.
apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: example-cm\ndata:\n  SCRIPT_FILENAME: \"/var/www/html/index.php\"\n
"},{"location":"user-guide/fcgi-services/#the-ingress-object","title":"The ingress object","text":"
  • Do not create the ingress shown below until you have created the configMap seen above.
  • You can see that this ingress matches the service example-service, and the port named fastcgi from above.
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/backend-protocol: \"FCGI\"\n    nginx.ingress.kubernetes.io/fastcgi-index: \"index.php\"\n    nginx.ingress.kubernetes.io/fastcgi-params-configmap: \"example-cm\"\n  name: example-app\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: app.example.com\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: example-service\n            port:\n              name: fastcgi\n
"},{"location":"user-guide/fcgi-services/#send-a-request-to-the-exposed-fastcgi-server","title":"Send a request to the exposed FastCGI server","text":"

You will have to look at the external IP of the ingress, or send the HTTP request to the ClusterIP address of the ingress-nginx controller pod.

% curl 172.19.0.2 -H \"Host: app.example.com\" -vik\n*   Trying 172.19.0.2:80...\n* Connected to 172.19.0.2 (172.19.0.2) port 80\n> GET / HTTP/1.1\n> Host: app.example.com\n> User-Agent: curl/8.6.0\n> Accept: */*\n> \n< HTTP/1.1 200 OK\nHTTP/1.1 200 OK\n< Date: Wed, 12 Jun 2024 07:11:59 GMT\nDate: Wed, 12 Jun 2024 07:11:59 GMT\n< Content-Type: text/html; charset=UTF-8\nContent-Type: text/html; charset=UTF-8\n< Transfer-Encoding: chunked\nTransfer-Encoding: chunked\n< Connection: keep-alive\nConnection: keep-alive\n< X-Powered-By: PHP/8.3.8\nX-Powered-By: PHP/8.3.8\n\n< \n<!DOCTYPE html>\n<html>\n    <head>\n        <title>PHP Test</title>\n    </head>\n    <body>\n        <p>FastCGI Test Worked</p>    </body>\n</html>\n
"},{"location":"user-guide/fcgi-services/#fastcgi-ingress-annotations","title":"FastCGI Ingress Annotations","text":"

To enable FastCGI, the nginx.ingress.kubernetes.io/backend-protocol annotation needs to be set to FCGI, which overrides the default HTTP value.

nginx.ingress.kubernetes.io/backend-protocol: \"FCGI\"

This enables the FastCGI mode for all paths defined in the Ingress object

"},{"location":"user-guide/fcgi-services/#the-nginxingresskubernetesiofastcgi-index-annotation","title":"The nginx.ingress.kubernetes.io/fastcgi-index Annotation","text":"

To specify an index file, the fastcgi-index annotation value can optionally be set. In the example below, the value is set to index.php. This annotation corresponds to the NGINX fastcgi_index directive.

nginx.ingress.kubernetes.io/fastcgi-index: \"index.php\"

"},{"location":"user-guide/fcgi-services/#the-nginxingresskubernetesiofastcgi-params-configmap-annotation","title":"The nginx.ingress.kubernetes.io/fastcgi-params-configmap Annotation","text":"

To specify NGINX fastcgi_param directives, the fastcgi-params-configmap annotation is used, which in turn must lead to a ConfigMap object containing the NGINX fastcgi_param directives as key/values.

nginx.ingress.kubernetes.io/fastcgi-params-configmap: \"example-configmap\"

A ConfigMap object specifying the SCRIPT_FILENAME and HTTP_PROXY fastcgi_param directives would look like the following:

apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: example-configmap\ndata:\n  SCRIPT_FILENAME: \"/example/index.php\"\n  HTTP_PROXY: \"\"\n

Using the namespace/ prefix is also supported, for example:

nginx.ingress.kubernetes.io/fastcgi-params-configmap: \"example-namespace/example-configmap\"

"},{"location":"user-guide/ingress-path-matching/","title":"Ingress Path Matching","text":""},{"location":"user-guide/ingress-path-matching/#regular-expression-support","title":"Regular Expression Support","text":"

Important

Regular expressions are not supported in the spec.rules.host field. The wildcard character '*' must appear by itself as the first DNS label and matches only a single label. You cannot have a wildcard label by itself (e.g. Host == \"*\").

Note

Please see the FAQ for Validation Of path

The ingress controller supports case insensitive regular expressions in the spec.rules.http.paths.path field. This can be enabled by setting the nginx.ingress.kubernetes.io/use-regex annotation to true (the default is false).

Hint

Kubernetes only accepts expressions that comply with the RE2 engine syntax. It is possible that valid expressions accepted by NGINX cannot be used with ingress-nginx, because the PCRE library (used in NGINX) supports a wider syntax than RE2. See the RE2 Syntax documentation for differences.

See the description of the use-regex annotation for more details.

apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: test-ingress\n  annotations:\n    nginx.ingress.kubernetes.io/use-regex: \"true\"\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: test.com\n    http:\n      paths:\n      - path: /foo/.*\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: test\n            port:\n              number: 80\n

The preceding ingress definition would translate to the following location block within the NGINX configuration for the test.com server:

location ~* \"^/foo/.*\" {\n  ...\n}\n
"},{"location":"user-guide/ingress-path-matching/#path-priority","title":"Path Priority","text":"

In NGINX, regular expressions follow a first match policy. In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks.

Please read the warning before using regular expressions in your ingress definitions.

"},{"location":"user-guide/ingress-path-matching/#example","title":"Example","text":"

Let the following two ingress definitions be created:

apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: test-ingress-1\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: test.com\n    http:\n      paths:\n      - path: /foo/bar\n        pathType: Prefix\n        backend:\n          service:\n            name: service1\n            port:\n              number: 80\n      - path: /foo/bar/\n        pathType: Prefix\n        backend:\n          service:\n            name: service2\n            port:\n              number: 80\n
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: test-ingress-2\n  annotations:\n    nginx.ingress.kubernetes.io/rewrite-target: /$1\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: test.com\n    http:\n      paths:\n      - path: /foo/bar/(.+)\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: service3\n            port: \n              number: 80\n

The ingress controller would define the following location blocks, in order of descending length, within the NGINX template for the test.com server:

location ~* ^/foo/bar/.+ {\n  ...\n}\n\nlocation ~* \"^/foo/bar/\" {\n  ...\n}\n\nlocation ~* \"^/foo/bar\" {\n  ...\n}\n

The following request URIs would match the corresponding location blocks:

  • test.com/foo/bar/1 matches ~* ^/foo/bar/.+ and will go to service 3.
  • test.com/foo/bar/ matches ~* ^/foo/bar/ and will go to service 2.
  • test.com/foo/bar matches ~* ^/foo/bar and will go to service 1.

IMPORTANT NOTES:

  • If the use-regex OR rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.
"},{"location":"user-guide/ingress-path-matching/#warning","title":"Warning","text":"

The following example describes a case that may inflict unwanted path matching behavior.

This case is expected and is a result of NGINX's first-match policy for paths that use the regular expression location modifier. For more information about how a path is chosen, please read the following article: \"Understanding Nginx Server and Location Block Selection Algorithms\".

"},{"location":"user-guide/ingress-path-matching/#example_1","title":"Example","text":"

Let the following ingress be defined:

apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: test-ingress-3\n  annotations:\n    nginx.ingress.kubernetes.io/use-regex: \"true\"\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: test.com\n    http:\n      paths:\n      - path: /foo/bar/bar\n        pathType: Prefix\n        backend:\n          service:\n            name: test\n            port: \n              number: 80\n      - path: /foo/bar/[A-Z0-9]{3}\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: test\n            port: \n              number: 80\n

The ingress controller would define the following location blocks (in this order) within the NGINX template for the test.com server:

location ~* \"^/foo/bar/[A-Z0-9]{3}\" {\n  ...\n}\n\nlocation ~* \"^/foo/bar/bar\" {\n  ...\n}\n

A request to test.com/foo/bar/bar would match the ^/foo/bar/[A-Z0-9]{3} location block instead of the longest EXACT matching path.

"},{"location":"user-guide/k8s-122-migration/","title":"FAQ - Migration to Kubernetes 1.22 and apiVersion networking.k8s.io/v1","text":"

If you are using Ingress objects in your cluster (running Kubernetes older than v1.22), and you plan to upgrade to Kubernetes v1.22, this page is relevant to you.

  • Please read this official blog on deprecated Ingress API versions
  • Please read this official documentation on the IngressClass object
"},{"location":"user-guide/k8s-122-migration/#what-is-an-ingressclass-and-why-is-it-important-for-users-of-ingress-nginx-controller-now","title":"What is an IngressClass and why is it important for users of ingress-nginx controller now?","text":"

IngressClass is a Kubernetes resource. See the description below. It's important because until now, a default install of the ingress-nginx controller did not require an IngressClass object. From version 1.0.0 of the ingress-nginx controller, an IngressClass object is required.

On clusters with more than one instance of the ingress-nginx controller, all instances of the controllers must be aware of which Ingress objects they serve. The ingressClassName field of an Ingress is the way to let the controller know about that.

kubectl explain ingressclass\n
KIND:     IngressClass\nVERSION:  networking.k8s.io/v1\nDESCRIPTION:\n     IngressClass represents the class of the Ingress, referenced by the Ingress\n     Spec. The `ingressclass.kubernetes.io/is-default-class` annotation can be\n     used to indicate that an IngressClass should be considered default. When a\n     single IngressClass resource has this annotation set to true, new Ingress\n     resources without a class specified will be assigned this default class.\nFIELDS:\n   apiVersion   <string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n   kind <string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n   metadata     <Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n   spec <Object>\n     Spec is the desired state of the IngressClass. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status`\n
"},{"location":"user-guide/k8s-122-migration/#what-has-caused-this-change-in-behavior","title":"What has caused this change in behavior?","text":"

There are 2 primary reasons.

"},{"location":"user-guide/k8s-122-migration/#reason-1","title":"Reason 1","text":"

Until K8s version 1.21, it was possible to create an Ingress resource using deprecated versions of the Ingress API, such as:

  • extensions/v1beta1
  • networking.k8s.io/v1beta1

You would get a message about deprecation, but the Ingress resource would get created.

From K8s version 1.22 onwards, you can only access the Ingress API via the stable, networking.k8s.io/v1 API. The reason is explained in the official blog on deprecated ingress API versions.

"},{"location":"user-guide/k8s-122-migration/#reason-2","title":"Reason #2","text":"

If you are already using the ingress-nginx controller and then upgrade to Kubernetes 1.22, there are several scenarios where your existing Ingress objects will not work how you expect.

Read this FAQ to check which scenario matches your use case.

"},{"location":"user-guide/k8s-122-migration/#what-is-the-ingressclassname-field","title":"What is the ingressClassName field?","text":"

ingressClassName is a field in the spec of an Ingress object.

kubectl explain ingress.spec.ingressClassName\n
KIND:     Ingress\nVERSION:  networking.k8s.io/v1\nFIELD:    ingressClassName <string>\nDESCRIPTION:\n     IngressClassName is the name of the IngressClass cluster resource. The\n     associated IngressClass defines which controller will implement the\n     resource. This replaces the deprecated `kubernetes.io/ingress.class`\n     annotation. For backwards compatibility, when that annotation is set, it\n     must be given precedence over this field. The controller may emit a warning\n     if the field and annotation have different values. Implementations of this\n     API should ignore Ingresses without a class specified. An IngressClass\n     resource may be marked as default, which can be used to set a default value\n     for this field. For more information, refer to the IngressClass\n     documentation.\n

The .spec.ingressClassName behavior has precedence over the deprecated kubernetes.io/ingress.class annotation.

"},{"location":"user-guide/k8s-122-migration/#i-have-only-one-ingress-controller-in-my-cluster-what-should-i-do","title":"I have only one ingress controller in my cluster. What should I do?","text":"

If a single instance of the ingress-nginx controller is the sole Ingress controller running in your cluster, you should add the annotation \"ingressclass.kubernetes.io/is-default-class\" to your IngressClass, so that any new Ingress objects without an explicit class will use this one as their default IngressClass.

When using Helm, you can enable this annotation by setting .controller.ingressClassResource.default: true in your Helm chart installation's values file.

If you have any old Ingress objects remaining without an IngressClass set, you can do one or more of the following to make the ingress-nginx controller aware of the old objects:

  • You can manually set the .spec.ingressClassName field in the manifest of your own Ingress resources.
  • You can re-create them after setting the ingressclass.kubernetes.io/is-default-class annotation to true on the IngressClass
  • Alternatively you can make the ingress-nginx controller watch Ingress objects without the ingressClassName field set by starting your ingress-nginx with the flag --watch-ingress-without-class=true. When using Helm, you can configure your Helm chart installation's values file with .controller.watchIngressWithoutClass: true.

We recommend that you create the IngressClass as shown below:

---\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n  labels:\n    app.kubernetes.io/component: controller\n  name: nginx\n  annotations:\n    ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n  controller: k8s.io/ingress-nginx\n

and add the value spec.ingressClassName=nginx in your Ingress objects.
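For existing Ingress objects, the field can also be set with a one-line patch; this is a minimal sketch assuming a hypothetical Ingress named example-app:

kubectl patch ingress example-app --type=merge -p '{\"spec\":{\"ingressClassName\":\"nginx\"}}'\n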

"},{"location":"user-guide/k8s-122-migration/#i-have-many-ingress-objects-in-my-cluster-what-should-i-do","title":"I have many ingress objects in my cluster. What should I do?","text":"

If you have a lot of ingress objects without ingressClass configuration, you can run the ingress controller with the flag --watch-ingress-without-class=true.

"},{"location":"user-guide/k8s-122-migration/#what-is-the-flag-watch-ingress-without-class","title":"What is the flag --watch-ingress-without-class?","text":"

It's a flag that is passed, as an argument, to the nginx-ingress-controller executable. In the configuration, it looks like this:

# ...\nargs:\n  - /nginx-ingress-controller\n  - --watch-ingress-without-class=true\n  - --controller-class=k8s.io/ingress-nginx\n  # ...\n# ...\n
"},{"location":"user-guide/k8s-122-migration/#i-have-more-than-one-controller-in-my-cluster-and-im-already-using-the-annotation","title":"I have more than one controller in my cluster, and I'm already using the annotation","text":"

No problem. This should still keep working, but we highly recommend you test! Even though kubernetes.io/ingress.class is deprecated, the ingress-nginx controller still understands that annotation. If you want to follow good practice, you should consider migrating to IngressClass and .spec.ingressClassName.

"},{"location":"user-guide/k8s-122-migration/#i-have-more-than-one-controller-running-in-my-cluster-and-i-want-to-use-the-new-api","title":"I have more than one controller running in my cluster, and I want to use the new API","text":"

In this scenario, you need to create multiple IngressClasses (see the example above).

Be aware that IngressClass works in a very specific way: you will need to change the .spec.controller value in your IngressClass and configure the controller to expect the exact same value.

Let's see an example, supposing that you have three IngressClasses:

  • IngressClass ingress-nginx-one, with .spec.controller equal to example.com/ingress-nginx1
  • IngressClass ingress-nginx-two, with .spec.controller equal to example.com/ingress-nginx2
  • IngressClass ingress-nginx-three, with .spec.controller equal to example.com/ingress-nginx1

For private use, you can also use a controller name that doesn't contain a /, e.g. ingress-nginx1.
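For instance, the IngressClass ingress-nginx-two from the list above could be sketched like this (only the name and .spec.controller are relevant here):

apiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n  name: ingress-nginx-two\nspec:\n  controller: example.com/ingress-nginx2\n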

When deploying your ingress controllers, you will have to change the --controller-class field as follows:

  • Ingress-Nginx A, configured to use controller class name example.com/ingress-nginx1
  • Ingress-Nginx B, configured to use controller class name example.com/ingress-nginx2

When you create an Ingress object with its ingressClassName set to ingress-nginx-two, only controllers looking for the example.com/ingress-nginx2 controller class pay attention to the new object.

Given that Ingress-Nginx B is set up that way, it will serve that object, whereas Ingress-Nginx A ignores the new Ingress.

Bear in mind that if you start Ingress-Nginx B with the command line argument --watch-ingress-without-class=true, it will serve:

  1. Ingresses without any ingressClassName set
  2. Ingresses where the deprecated annotation (kubernetes.io/ingress.class) matches the value set in the command line argument --ingress-class
  3. Ingresses that refer to any IngressClass that has the same spec.controller as configured in --controller-class

If you start Ingress-Nginx B with the command line argument --watch-ingress-without-class=true and you run Ingress-Nginx A with the command line argument --watch-ingress-without-class=false, then this is a supported configuration. If you have two ingress-nginx controllers for the same cluster, both running with --watch-ingress-without-class=true, then there is likely to be a conflict.
"},{"location":"user-guide/k8s-122-migration/#why-am-i-seeing-ingress-class-annotation-is-not-equal-to-the-expected-by-ingress-controller-in-my-controller-logs","title":"Why am I seeing \"ingress class annotation is not equal to the expected by Ingress Controller\" in my controller logs?","text":"

It is highly likely that you will also see the name of the ingress resource in the same error message. This error message has been observed when using the deprecated annotation (kubernetes.io/ingress.class) in an Ingress resource manifest. It is recommended to use the .spec.ingressClassName field of the Ingress resource to specify the name of the IngressClass of the Ingress you are defining.

"},{"location":"user-guide/miscellaneous/","title":"Miscellaneous","text":""},{"location":"user-guide/miscellaneous/#source-ip-address","title":"Source IP address","text":"

By default NGINX uses the content of the header X-Forwarded-For as the source of truth to get information about the client IP address. This works without issues in L7 if we configure the setting proxy-real-ip-cidr with the correct IP/network address of the trusted external load balancer.

If the ingress controller is running in AWS, we need to use the VPC IPv4 CIDR.

Another option is to enable proxy protocol using use-proxy-protocol: \"true\".

In this mode NGINX does not use the content of the header to get the source IP address of the connection.
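A minimal ConfigMap sketch combining these settings; the CIDR 10.0.0.0/16 is a placeholder for the network of your trusted load balancer (for AWS, the VPC IPv4 CIDR mentioned above), and the ConfigMap name/namespace must match the one your controller is started with:

apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: ingress-nginx-controller\n  namespace: ingress-nginx\ndata:\n  # Trust X-Forwarded-For only from this network\n  proxy-real-ip-cidr: \"10.0.0.0/16\"\n  # Alternatively, take the client address from the PROXY protocol\n  use-proxy-protocol: \"true\"\n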

"},{"location":"user-guide/miscellaneous/#path-types","title":"Path types","text":"

Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. By default, the NGINX path type is Prefix, so as not to break existing definitions.

"},{"location":"user-guide/miscellaneous/#proxy-protocol","title":"Proxy Protocol","text":"

If you are using an L4 proxy to forward the traffic to the Ingress NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you can use the PROXY protocol for forwarding traffic, which sends the connection details before forwarding the actual TCP connection itself.

Amongst others, ELBs in AWS and HAProxy support the PROXY protocol.

"},{"location":"user-guide/miscellaneous/#websockets","title":"Websockets","text":"

Support for websockets is provided by NGINX out of the box. No special configuration is required.

The only requirement to avoid connections being closed is increasing the values of proxy-read-timeout and proxy-send-timeout.

The default value of these settings is 60 seconds.

A more adequate value to support websockets is a value higher than one hour (3600 seconds).
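These timeouts can be raised per Ingress via annotations; a sketch (values in seconds):

metadata:\n  annotations:\n    nginx.ingress.kubernetes.io/proxy-read-timeout: \"3600\"\n    nginx.ingress.kubernetes.io/proxy-send-timeout: \"3600\"\n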

Important

If the Ingress-Nginx Controller is exposed with a service of type=LoadBalancer, make sure the protocol between the load balancer and NGINX is TCP.

"},{"location":"user-guide/miscellaneous/#optimizing-tls-time-to-first-byte-tttfb","title":"Optimizing TLS Time To First Byte (TTTFB)","text":"

NGINX provides the configuration option ssl_buffer_size to allow the optimization of the TLS record size.

This improves the TLS Time To First Byte (TTTFB). The default value in the Ingress controller is 4k (the NGINX default is 16k).
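As a sketch, the value can be changed via the ssl-buffer-size ConfigMap key; for example, restoring the NGINX default:

# ConfigMap fragment\ndata:\n  ssl-buffer-size: \"16k\"\n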

"},{"location":"user-guide/miscellaneous/#retries-in-non-idempotent-methods","title":"Retries in non-idempotent methods","text":"

Since 1.9.13, NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error. The previous behavior can be restored using retry-non-idempotent=true in the configuration ConfigMap.
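As a sketch, the corresponding ConfigMap fragment:

# ConfigMap fragment\ndata:\n  retry-non-idempotent: \"true\"\n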

"},{"location":"user-guide/miscellaneous/#limitations","title":"Limitations","text":"
  • Ingress rules for TLS require the definition of the field host
"},{"location":"user-guide/miscellaneous/#why-endpoints-and-not-services","title":"Why endpoints and not services","text":"

The Ingress-Nginx Controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API to bypass kube-proxy and allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.

"},{"location":"user-guide/monitoring/","title":"Monitoring","text":"

Two different methods to install and configure Prometheus and Grafana are described in this doc:

  • Prometheus and Grafana installation using Pod Annotations. This installs Prometheus and Grafana in the same namespace as NGINX Ingress.
  • Prometheus and Grafana installation using Service Monitors. This installs Prometheus and Grafana in two different namespaces. This is the preferred method, and the Helm charts support this by default.

"},{"location":"user-guide/monitoring/#prometheus-and-grafana-installation-using-pod-annotations","title":"Prometheus and Grafana installation using Pod Annotations","text":"

This tutorial will show you how to install Prometheus and Grafana for scraping the metrics of the Ingress-Nginx Controller.

Important

This example uses emptyDir volumes for Prometheus and Grafana. This means once the pod gets terminated you will lose all the data.

"},{"location":"user-guide/monitoring/#before-you-begin","title":"Before You Begin","text":"
  • The Ingress-Nginx Controller should already be deployed according to the deployment instructions here.

  • The controller should be configured for exporting metrics. This requires three configurations on the controller:

  • controller.metrics.enabled=true
  • controller.podAnnotations.\"prometheus.io/scrape\"=\"true\"
  • controller.podAnnotations.\"prometheus.io/port\"=\"10254\"

  • The easiest way to configure the controller for metrics is via helm upgrade. Assuming you have installed the ingress-nginx controller as a Helm release named ingress-nginx, you can simply run the command shown below:

    helm upgrade ingress-nginx ingress-nginx \\\n--repo https://kubernetes.github.io/ingress-nginx \\\n--namespace ingress-nginx \\\n--set controller.metrics.enabled=true \\\n--set-string controller.podAnnotations.\"prometheus\\.io/scrape\"=\"true\" \\\n--set-string controller.podAnnotations.\"prometheus\\.io/port\"=\"10254\"\n

  • You can validate that the controller is configured for metrics by looking at the values of the installed release, like this:
    helm get values ingress-nginx --namespace ingress-nginx\n
  • You should be able to see the values shown below:
    ..\ncontroller:\n  metrics:\n    enabled: true\n  podAnnotations:\n    prometheus.io/port: \"10254\"\n    prometheus.io/scrape: \"true\"\n..\n
  • If you are not using helm, you will have to edit your manifests like this:
    • Service manifest:
      apiVersion: v1\nkind: Service\n..\nspec:\n  ports:\n    - name: prometheus\n      port: 10254\n      targetPort: prometheus\n      ..\n
    • Deployment manifest:
      apiVersion: apps/v1\nkind: Deployment\n..\nspec:\n  template:\n    metadata:\n      annotations:\n        prometheus.io/scrape: \"true\"\n        prometheus.io/port: \"10254\"\n    spec:\n      containers:\n        - name: controller\n          ports:\n            - name: prometheus\n              containerPort: 10254\n            ..\n
"},{"location":"user-guide/monitoring/#deploy-and-configure-prometheus-server","title":"Deploy and configure Prometheus Server","text":"

Note that the kustomize bases used in this tutorial are stored in the deploy folder of the GitHub repository kubernetes/ingress-nginx.

  • The Prometheus server must be configured so that it can discover endpoints of services. If a Prometheus server is already running in the cluster and if it is configured in a way that it can find the ingress controller pods, no extra configuration is needed.

  • If there is no existing Prometheus server running, the rest of this tutorial will guide you through the steps needed to deploy a properly configured Prometheus server.

  • Running the following command deploys Prometheus in Kubernetes:

kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/\n
"},{"location":"user-guide/monitoring/#prometheus-dashboard","title":"Prometheus Dashboard","text":"
  • Open Prometheus dashboard in a web browser:
kubectl get svc -n ingress-nginx\nNAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE\ndefault-http-backend   ClusterIP   10.103.59.201   <none>        80/TCP                                       3d\ningress-nginx          NodePort    10.97.44.72     <none>        80:30100/TCP,443:30154/TCP,10254:32049/TCP   5h\nprometheus-server      NodePort    10.98.233.86    <none>        9090:32630/TCP                               1m\n
  • Obtain the IP address of the nodes in the running cluster:
kubectl get nodes -o wide\n
  • In some cases where the nodes only have internal IP addresses, we need to execute:
kubectl get nodes --selector=kubernetes.io/role!=master -o jsonpath={.items[*].status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address}\n10.192.0.2 10.192.0.3 10.192.0.4\n
  • Open your browser and visit the following URL: http://{node IP address}:{prometheus-svc-nodeport} to load the Prometheus Dashboard.

  • According to the above example, this URL will be http://10.192.0.3:32630

"},{"location":"user-guide/monitoring/#grafana","title":"Grafana","text":"
  • Install Grafana using the command below:
    kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/grafana/\n
  • Look at the services

    kubectl get svc -n ingress-nginx\nNAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE\ndefault-http-backend   ClusterIP   10.103.59.201   <none>        80/TCP                                       3d\ningress-nginx          NodePort    10.97.44.72     <none>        80:30100/TCP,443:30154/TCP,10254:32049/TCP   5h\nprometheus-server      NodePort    10.98.233.86    <none>        9090:32630/TCP                               10m\ngrafana                NodePort    10.98.233.87    <none>        3000:31086/TCP                               10m\n

  • Open your browser and visit the following URL: http://{node IP address}:{grafana-svc-nodeport} to load the Grafana Dashboard. According to the above example, this URL will be http://10.192.0.3:31086

The username and password are both admin

  • After the login you can import the Grafana dashboard from official dashboards, by following the steps given below:

    • Navigate to the lefthand panel of Grafana
    • Hover on the gearwheel icon for Configuration and click \"Data Sources\"
    • Click \"Add data source\"
    • Select \"Prometheus\"
    • Enter the details (note: this example used http://CLUSTER_IP_PROMETHEUS_SVC:9090)
    • Left menu (hover over +) -> Dashboard
    • Click \"Import\"
    • Paste the JSON from https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
    • Click Import JSON
    • Select the Prometheus data source
    • Click \"Import\"

"},{"location":"user-guide/monitoring/#caveats","title":"Caveats","text":""},{"location":"user-guide/monitoring/#wildcard-ingresses","title":"Wildcard ingresses","text":"
  • By default request metrics are labeled with the hostname. When you have a wildcard domain ingress, then there will be no metrics for that ingress (to prevent the metrics from exploding in cardinality). To get metrics in this case you have two options:
    • Run the ingress controller with --metrics-per-host=false. You will lose labeling by hostname, but still have labeling by ingress.
    • Run the ingress controller with --metrics-per-undefined-host=true --metrics-per-host=true. You will get labeling by hostname even if the hostname is not explicitly defined on an ingress. Be warned that cardinality could explode due to many hostnames.
"},{"location":"user-guide/monitoring/#grafana-dashboard-using-ingress-resource","title":"Grafana dashboard using ingress resource","text":"
  • If you want to expose the dashboard for Grafana using an ingress resource, then you can:
    • change the service type of the prometheus-server service and the grafana service to \"ClusterIP\" like this:
      kubectl -n ingress-nginx edit svc grafana\n
    • This will open the currently deployed service grafana in the default editor configured in your shell (vi/nvim/nano/other)
    • scroll down to the line that looks like \"type: NodePort\"
    • change it to look like \"type: ClusterIP\". Save and exit.
    • create an ingress resource with backend as \"grafana\" and port as \"3000\" (see the sketch after this list)
  • Similarly, you can edit the service \"prometheus-server\" and add an ingress resource.
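A sketch of such an ingress resource, assuming the hypothetical host grafana.example.com and the nginx IngressClass:

apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: grafana\n  namespace: ingress-nginx\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: grafana.example.com\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: grafana\n            port:\n              number: 3000\n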
"},{"location":"user-guide/monitoring/#prometheus-and-grafana-installation-using-service-monitors","title":"Prometheus and Grafana installation using Service Monitors","text":"

This document assumes you're using helm and using the kube-prometheus-stack package to install Prometheus and Grafana.

"},{"location":"user-guide/monitoring/#verify-ingress-nginx-controller-is-installed","title":"Verify Ingress-Nginx Controller is installed","text":"
  • The Ingress-Nginx Controller should already be deployed according to the deployment instructions here.

  • To check if the Ingress controller is deployed, run the following command:

    kubectl get pods -n ingress-nginx\n

  • The result should look something like:

    NAME                                        READY   STATUS    RESTARTS   AGE\ningress-nginx-controller-7c489dc7b7-ccrf6   1/1     Running   0          19h\n
"},{"location":"user-guide/monitoring/#verify-prometheus-is-installed","title":"Verify Prometheus is installed","text":"
  • To check if Prometheus is already deployed, run the following command:

helm ls -A\n
NAME          NAMESPACE       REVISION    UPDATED                                 STATUS      CHART                           APP VERSION\ningress-nginx ingress-nginx   10          2022-01-20 18:08:55.267373 -0800 PST    deployed    ingress-nginx-4.0.16            1.1.1\nprometheus    prometheus      1           2022-01-20 16:07:25.086828 -0800 PST    deployed    kube-prometheus-stack-30.1.0    0.53.1\n
  • Notice that Prometheus is installed in a different namespace than ingress-nginx

  • If Prometheus is not installed, then you can install it from here
"},{"location":"user-guide/monitoring/#re-configure-ingress-nginx-controller","title":"Re-configure Ingress-Nginx Controller","text":"
  • The Ingress NGINX controller needs to be reconfigured for exporting metrics. This requires three additional configurations on the controller:
    controller.metrics.enabled=true\ncontroller.metrics.serviceMonitor.enabled=true\ncontroller.metrics.serviceMonitor.additionalLabels.release=\"prometheus\"\n
  • The easiest way of doing this is to use helm upgrade:
    helm upgrade ingress-nginx ingress-nginx/ingress-nginx \\\n--namespace ingress-nginx \\\n--set controller.metrics.enabled=true \\\n--set controller.metrics.serviceMonitor.enabled=true \\\n--set controller.metrics.serviceMonitor.additionalLabels.release=\"prometheus\"\n
  • Here controller.metrics.serviceMonitor.additionalLabels.release=\"prometheus\" should match the name of the helm release of the kube-prometheus-stack

  • You can validate that the controller has been successfully reconfigured to export metrics by looking at the values of the installed release, like this:

    helm get values ingress-nginx --namespace ingress-nginx\n
    controller:\n  metrics:\n    enabled: true\n    serviceMonitor:\n      additionalLabels:\n        release: prometheus\n      enabled: true\n

"},{"location":"user-guide/monitoring/#configure-prometheus","title":"Configure Prometheus","text":"
  • Since Prometheus is running in a different namespace and not in the ingress-nginx namespace, it would not be able to discover ServiceMonitors in other namespaces when installed. Reconfigure your kube-prometheus-stack Helm installation to set the serviceMonitorSelectorNilUsesHelmValues flag to false. Similarly, by default Prometheus only discovers PodMonitors within its own namespace; this restriction is disabled by setting podMonitorSelectorNilUsesHelmValues to false.
  • The configurations required are:
    prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false\nprometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false\n
  • The easiest way of doing this is to use helm upgrade ...
    helm upgrade prometheus prometheus-community/kube-prometheus-stack \\\n--namespace prometheus  \\\n--set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \\\n--set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false\n
  • You can validate that Prometheus has been reconfigured by looking at the values of the installed release, like this:
    helm get values prometheus --namespace prometheus\n
  • You should be able to see the values shown below:
    prometheus:\n  prometheusSpec:\n    podMonitorSelectorNilUsesHelmValues: false\n    serviceMonitorSelectorNilUsesHelmValues: false\n
"},{"location":"user-guide/monitoring/#connect-and-view-prometheus-dashboard","title":"Connect and view Prometheus dashboard","text":"
  • Port forward to Prometheus service. Find out the name of the prometheus service by using the following command:
    kubectl get svc -n prometheus\n

The result of this command would look like:

NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE\nalertmanager-operated                     ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   7h46m\nprometheus-grafana                        ClusterIP   10.106.28.162    <none>        80/TCP                       7h46m\nprometheus-kube-prometheus-alertmanager   ClusterIP   10.108.125.245   <none>        9093/TCP                     7h46m\nprometheus-kube-prometheus-operator       ClusterIP   10.110.220.1     <none>        443/TCP                      7h46m\nprometheus-kube-prometheus-prometheus     ClusterIP   10.102.72.134    <none>        9090/TCP                     7h46m\nprometheus-kube-state-metrics             ClusterIP   10.104.231.181   <none>        8080/TCP                     7h46m\nprometheus-operated                       ClusterIP   None             <none>        9090/TCP                     7h46m\nprometheus-prometheus-node-exporter       ClusterIP   10.96.247.128    <none>        9100/TCP                     7h46m\n
prometheus-kube-prometheus-prometheus is the service we want to port forward to. We can do so using the following command:
kubectl port-forward svc/prometheus-kube-prometheus-prometheus -n prometheus 9090:9090\n
When you run the above command, you should see something like:
Forwarding from 127.0.0.1:9090 -> 9090\nForwarding from [::1]:9090 -> 9090\n
  • Open your browser and visit the following URL: http://localhost:{port-forwarded-port}. According to the above example, it would be http://localhost:9090

"},{"location":"user-guide/monitoring/#connect-and-view-grafana-dashboard","title":"Connect and view Grafana dashboard","text":"
  • Port forward to Grafana service. Find out the name of the Grafana service by using the following command:
    kubectl get svc -n prometheus\n

The result of this command would look like:

NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE\nalertmanager-operated                     ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   7h46m\nprometheus-grafana                        ClusterIP   10.106.28.162    <none>        80/TCP                       7h46m\nprometheus-kube-prometheus-alertmanager   ClusterIP   10.108.125.245   <none>        9093/TCP                     7h46m\nprometheus-kube-prometheus-operator       ClusterIP   10.110.220.1     <none>        443/TCP                      7h46m\nprometheus-kube-prometheus-prometheus     ClusterIP   10.102.72.134    <none>        9090/TCP                     7h46m\nprometheus-kube-state-metrics             ClusterIP   10.104.231.181   <none>        8080/TCP                     7h46m\nprometheus-operated                       ClusterIP   None             <none>        9090/TCP                     7h46m\nprometheus-prometheus-node-exporter       ClusterIP   10.96.247.128    <none>        9100/TCP                     7h46m\n
prometheus-grafana is the service we want to port forward to. We can do so using the following command:
kubectl port-forward svc/prometheus-grafana  3000:80 -n prometheus\n
When you run the above command, you should see something like:
Forwarding from 127.0.0.1:3000 -> 3000\nForwarding from [::1]:3000 -> 3000\n
  • Open your browser and visit the following URL: http://localhost:{port-forwarded-port}. According to the above example, it would be http://localhost:3000
  • The default username/password is admin/prom-operator
  • After the login you can import the Grafana dashboard from official dashboards, by following the steps given below:

  • Navigate to the lefthand panel of Grafana
  • Hover on the gearwheel icon for Configuration and click \"Data Sources\"
  • Click \"Add data source\"
  • Select \"Prometheus\"
  • Enter the details (note: this example used http://10.102.72.134:9090, which is the CLUSTER-IP for the Prometheus service)
  • Left menu (hover over +) -> Dashboard
  • Click \"Import\"
  • Paste the JSON from https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
  • Click Import JSON
  • Select the Prometheus data source
  • Click \"Import\"

"},{"location":"user-guide/monitoring/#exposed-metrics","title":"Exposed metrics","text":"

Prometheus metrics are exposed on port 10254.

"},{"location":"user-guide/monitoring/#request-metrics","title":"Request metrics","text":"
  • nginx_ingress_controller_request_duration_seconds Histogram\\ The request processing (time elapsed between the first bytes were read from the client and the log write after the last bytes were sent to the client) time in seconds (affected by client speed).\\ nginx var: request_time

  • nginx_ingress_controller_response_duration_seconds Histogram\\ The time spent on receiving the response from the upstream server in seconds (affected by client speed when the response is bigger than proxy buffers).\\ Note: can be up to several milliseconds bigger than the nginx_ingress_controller_request_duration_seconds because of the different measuring method. nginx var: upstream_response_time

  • nginx_ingress_controller_header_duration_seconds Histogram\\ The time spent on receiving first header from the upstream server\\ nginx var: upstream_header_time

  • nginx_ingress_controller_connect_duration_seconds Histogram\\ The time spent on establishing a connection with the upstream server\\ nginx var: upstream_connect_time

  • nginx_ingress_controller_response_size Histogram\\ The response length (including request line, header, and request body)\\ nginx var: bytes_sent

  • nginx_ingress_controller_request_size Histogram\\ The request length (including request line, header, and request body)\\ nginx var: request_length

  • nginx_ingress_controller_requests Counter\\ The total number of client requests

  • nginx_ingress_controller_bytes_sent Histogram\\ The number of bytes sent to a client. Deprecated, use nginx_ingress_controller_response_size\\ nginx var: bytes_sent

# HELP nginx_ingress_controller_bytes_sent The number of bytes sent to a client. DEPRECATED! Use nginx_ingress_controller_response_size\n# TYPE nginx_ingress_controller_bytes_sent histogram\n# HELP nginx_ingress_controller_connect_duration_seconds The time spent on establishing a connection with the upstream server\n# TYPE nginx_ingress_controller_connect_duration_seconds histogram\n# HELP nginx_ingress_controller_header_duration_seconds The time spent on receiving first header from the upstream server\n# TYPE nginx_ingress_controller_header_duration_seconds histogram\n# HELP nginx_ingress_controller_request_duration_seconds The request processing time in milliseconds\n# TYPE nginx_ingress_controller_request_duration_seconds histogram\n# HELP nginx_ingress_controller_request_size The request length (including request line, header, and request body)\n# TYPE nginx_ingress_controller_request_size histogram\n# HELP nginx_ingress_controller_requests The total number of client requests.\n# TYPE nginx_ingress_controller_requests counter\n# HELP nginx_ingress_controller_response_duration_seconds The time spent on receiving the response from the upstream server\n# TYPE nginx_ingress_controller_response_duration_seconds histogram\n# HELP nginx_ingress_controller_response_size The response length (including request line, header, and request body)\n# TYPE nginx_ingress_controller_response_size histogram\n
"},{"location":"user-guide/monitoring/#nginx-process-metrics","title":"Nginx process metrics","text":"
# HELP nginx_ingress_controller_nginx_process_connections current number of client connections with state {active, reading, writing, waiting}\n# TYPE nginx_ingress_controller_nginx_process_connections gauge\n# HELP nginx_ingress_controller_nginx_process_connections_total total number of connections with state {accepted, handled}\n# TYPE nginx_ingress_controller_nginx_process_connections_total counter\n# HELP nginx_ingress_controller_nginx_process_cpu_seconds_total Cpu usage in seconds\n# TYPE nginx_ingress_controller_nginx_process_cpu_seconds_total counter\n# HELP nginx_ingress_controller_nginx_process_num_procs number of processes\n# TYPE nginx_ingress_controller_nginx_process_num_procs gauge\n# HELP nginx_ingress_controller_nginx_process_oldest_start_time_seconds start time in seconds since 1970/01/01\n# TYPE nginx_ingress_controller_nginx_process_oldest_start_time_seconds gauge\n# HELP nginx_ingress_controller_nginx_process_read_bytes_total number of bytes read\n# TYPE nginx_ingress_controller_nginx_process_read_bytes_total counter\n# HELP nginx_ingress_controller_nginx_process_requests_total total number of client requests\n# TYPE nginx_ingress_controller_nginx_process_requests_total counter\n# HELP nginx_ingress_controller_nginx_process_resident_memory_bytes number of bytes of memory in use\n# TYPE nginx_ingress_controller_nginx_process_resident_memory_bytes gauge\n# HELP nginx_ingress_controller_nginx_process_virtual_memory_bytes number of bytes of memory in use\n# TYPE nginx_ingress_controller_nginx_process_virtual_memory_bytes gauge\n# HELP nginx_ingress_controller_nginx_process_write_bytes_total number of bytes written\n# TYPE nginx_ingress_controller_nginx_process_write_bytes_total counter\n
"},{"location":"user-guide/monitoring/#controller-metrics","title":"Controller metrics","text":"
# HELP nginx_ingress_controller_build_info A metric with a constant '1' labeled with information about the build.\n# TYPE nginx_ingress_controller_build_info gauge\n# HELP nginx_ingress_controller_check_success Cumulative number of Ingress controller syntax check operations\n# TYPE nginx_ingress_controller_check_success counter\n# HELP nginx_ingress_controller_config_hash Running configuration hash actually running\n# TYPE nginx_ingress_controller_config_hash gauge\n# HELP nginx_ingress_controller_config_last_reload_successful Whether the last configuration reload attempt was successful\n# TYPE nginx_ingress_controller_config_last_reload_successful gauge\n# HELP nginx_ingress_controller_config_last_reload_successful_timestamp_seconds Timestamp of the last successful configuration reload.\n# TYPE nginx_ingress_controller_config_last_reload_successful_timestamp_seconds gauge\n# HELP nginx_ingress_controller_ssl_certificate_info Hold all labels associated to a certificate\n# TYPE nginx_ingress_controller_ssl_certificate_info gauge\n# HELP nginx_ingress_controller_success Cumulative number of Ingress controller reload operations\n# TYPE nginx_ingress_controller_success counter\n# HELP nginx_ingress_controller_orphan_ingress Gauge reporting status of ingress orphanity, 1 indicates orphaned ingress. 'namespace' is the string used to identify namespace of ingress, 'ingress' for ingress name and 'type' for 'no-service' or 'no-endpoint' of orphanity\n# TYPE nginx_ingress_controller_orphan_ingress gauge\n
"},{"location":"user-guide/monitoring/#admission-metrics","title":"Admission metrics","text":"
# HELP nginx_ingress_controller_admission_config_size The size of the tested configuration\n# TYPE nginx_ingress_controller_admission_config_size gauge\n# HELP nginx_ingress_controller_admission_render_duration The processing duration of ingresses rendering by the admission controller (float seconds)\n# TYPE nginx_ingress_controller_admission_render_duration gauge\n# HELP nginx_ingress_controller_admission_render_ingresses The length of ingresses rendered by the admission controller\n# TYPE nginx_ingress_controller_admission_render_ingresses gauge\n# HELP nginx_ingress_controller_admission_roundtrip_duration The complete duration of the admission controller at the time to process a new event (float seconds)\n# TYPE nginx_ingress_controller_admission_roundtrip_duration gauge\n# HELP nginx_ingress_controller_admission_tested_duration The processing duration of the admission controller tests (float seconds)\n# TYPE nginx_ingress_controller_admission_tested_duration gauge\n# HELP nginx_ingress_controller_admission_tested_ingresses The length of ingresses processed by the admission controller\n# TYPE nginx_ingress_controller_admission_tested_ingresses gauge\n
"},{"location":"user-guide/monitoring/#histogram-buckets","title":"Histogram buckets","text":"

You can configure buckets for histogram metrics using these command line options (here are their default values):

  • --time-buckets=[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]
  • --length-buckets=[10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
  • --size-buckets=[10, 100, 1000, 10000, 100000, 1e+06, 1e+07]
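As a sketch, the buckets could be overridden in the controller container args; the comma-separated list format is an assumption based on the bracketed defaults above:

args:\n  - /nginx-ingress-controller\n  - --time-buckets=0.1,0.25,0.5,1,2.5,5,10,30,60\n  # ...\n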

"},{"location":"user-guide/multiple-ingress/","title":"Multiple Ingress controllers","text":"

By default, deploying multiple Ingress controllers (e.g., ingress-nginx & gce) will result in all controllers simultaneously racing to update Ingress status fields in confusing ways.

To fix this problem, use IngressClasses. Using the kubernetes.io/ingress.class annotation is not recommended, as it may be deprecated in the future; prefer the ingress.spec.ingressClassName field instead. Note that when a controller has been deployed with scope.enabled, the IngressClass resource field is not used.

"},{"location":"user-guide/multiple-ingress/#using-ingressclasses","title":"Using IngressClasses","text":"

If all ingress controllers respect IngressClasses (e.g. multiple instances of ingress-nginx v1.0), you can deploy two Ingress controllers by granting them control over two different IngressClasses, then selecting one of the two IngressClasses with ingressClassName.

First, ensure that --controller-class= and --ingress-class are set to something different on each ingress controller. If your additional ingress controller is to be installed in a namespace where one or more ingress-nginx controllers are already installed, then you need to specify a different, unique --election-id for the new instance of the controller.

# ingress-nginx Deployment/Statefulset\nspec:\n  template:\n     spec:\n       containers:\n         - name: ingress-nginx-internal-controller\n           args:\n             - /nginx-ingress-controller\n             - '--election-id=ingress-controller-leader'\n             - '--controller-class=k8s.io/internal-ingress-nginx'\n             - '--ingress-class=k8s.io/internal-nginx'\n            ...\n

Then use the same value in the IngressClass:

# ingress-nginx IngressClass\napiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n  name: internal-nginx\nspec:\n  controller: k8s.io/internal-ingress-nginx\n  ...\n

And refer to that IngressClass in your Ingress:

apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: my-ingress\nspec:\n  ingressClassName: internal-nginx\n  ...\n

or if installing with Helm:

controller:\n  electionID: ingress-controller-leader\n  ingressClass: internal-nginx  # default: nginx\n  ingressClassResource:\n    name: internal-nginx  # default: nginx\n    enabled: true\n    default: false\n    controllerValue: \"k8s.io/internal-ingress-nginx\"  # default: k8s.io/ingress-nginx\n

Important

When running multiple ingress-nginx controllers, a controller will only process an unset class annotation if one of the controllers uses the default --controller-class value (see the IsValid method in internal/ingress/annotations/class/main.go); otherwise, the class annotation becomes required.

If --controller-class is set to the default value of k8s.io/ingress-nginx, the controller will monitor Ingresses with no class annotation and Ingresses with annotation class set to nginx. Use a non-default value for --controller-class to ensure that the controller only satisfies its specific class of Ingresses.

"},{"location":"user-guide/multiple-ingress/#using-the-kubernetesioingressclass-annotation-in-deprecation","title":"Using the kubernetes.io/ingress.class annotation (in deprecation)","text":"

If you're running multiple ingress controllers where one or more do not support IngressClasses, you must specify the annotation kubernetes.io/ingress.class: \"nginx\" in all ingresses that you would like ingress-nginx to claim.

For instance,

metadata:\n  name: foo\n  annotations:\n    kubernetes.io/ingress.class: \"gce\"\n

will target the GCE controller, forcing the Ingress-NGINX controller to ignore it, while an annotation like:

metadata:\n  name: foo\n  annotations:\n    kubernetes.io/ingress.class: \"nginx\"\n

will target the Ingress-NGINX controller, forcing the GCE controller to ignore it.

You can change the value \"nginx\" to something else by setting the --ingress-class flag:

spec:\n  template:\n     spec:\n       containers:\n         - name: ingress-nginx-internal-controller\n           args:\n             - /nginx-ingress-controller\n             - --ingress-class=internal-nginx\n

then setting the corresponding kubernetes.io/ingress.class: \"internal-nginx\" annotation on your Ingresses.

To reiterate, setting the annotation to any value which does not match a valid ingress class will force the Ingress-Nginx Controller to ignore your Ingress. If you are only running a single Ingress-Nginx Controller, this can be achieved by setting the annotation to any value except \"nginx\" or an empty string.

Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.

"},{"location":"user-guide/tls/","title":"TLS/HTTPS","text":""},{"location":"user-guide/tls/#tls-secrets","title":"TLS Secrets","text":"

Anytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret.

Warning

Ensure that the certificate order is leaf->intermediate->root, otherwise the controller will not be able to import the certificate, and you'll see this error in the logs W1012 09:15:45.920000 6 backend_ssl.go:46] Error obtaining X.509 certificate: unexpected error creating SSL Cert: certificate and private key does not have a matching public key: tls: private key does not match public key

You can generate a self-signed certificate and private key with:

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj \"/CN=${HOST}/O=${HOST}\" -addext \"subjectAltName = DNS:${HOST}\"\n

Then create the secret in the cluster via:

kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}\n

The resulting secret will be of type kubernetes.io/tls.

"},{"location":"user-guide/tls/#host-names","title":"Host names","text":"

Ensure that the relevant ingress rules specify a matching hostname.
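For example, a sketch where the tls section and the rule share the hypothetical host example.com (with a TLS secret named example-tls):

apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: example\nspec:\n  ingressClassName: nginx\n  tls:\n  - hosts:\n    - example.com\n    secretName: example-tls\n  rules:\n  - host: example.com\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: example-service\n            port:\n              number: 80\n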

"},{"location":"user-guide/tls/#default-ssl-certificate","title":"Default SSL Certificate","text":"

NGINX provides the option to configure a server as a catch-all with server_name for requests that do not match any of the configured server names. This configuration works out-of-the-box for HTTP traffic. For HTTPS, a certificate is naturally required.

For this reason the Ingress controller provides the flag --default-ssl-certificate. The secret referred to by this flag contains the default certificate to be used when accessing the catch-all server. If this flag is not provided NGINX will use a self-signed certificate.

For instance, if you have a TLS secret foo-tls in the default namespace, add --default-ssl-certificate=default/foo-tls in the nginx-controller deployment.
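In the deployment manifest this looks like the following sketch (args fragment):

args:\n  - /nginx-ingress-controller\n  - --default-ssl-certificate=default/foo-tls\n  # ...\n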

If the tls: section is not set, NGINX will provide the default certificate but will not force HTTPS redirect.

On the other hand, if the tls: section is set - even without specifying a secretName option - NGINX will force HTTPS redirect.

To force redirects for Ingresses that do not specify a TLS-block at all, take a look at force-ssl-redirect in ConfigMap.

"},{"location":"user-guide/tls/#ssl-passthrough","title":"SSL Passthrough","text":"

The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by default. This is required to enable passthrough backends in Ingress objects.
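With the flag enabled on the controller, passthrough is requested per Ingress via the nginx.ingress.kubernetes.io/ssl-passthrough annotation; a sketch:

metadata:\n  annotations:\n    nginx.ingress.kubernetes.io/ssl-passthrough: \"true\"\n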

Warning

This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.

SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation, which requires compatible clients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back and forth between the backend and the client.

If there is no hostname matching the requested host name, the request is handed over to NGINX on the configured passthrough proxy port (default: 442), which proxies the request to the default backend.

Note

Unlike HTTP backends, traffic to Passthrough backends is sent to the clusterIP of the backing Service instead of individual Endpoints.

"},{"location":"user-guide/tls/#http-strict-transport-security","title":"HTTP Strict Transport Security","text":"

HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header, that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.

HSTS is enabled by default.

To disable this behavior use hsts: \"false\" in the configuration ConfigMap.

"},{"location":"user-guide/tls/#server-side-https-enforcement-through-redirect","title":"Server-side HTTPS enforcement through redirect","text":"

By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.

This can be disabled globally using ssl-redirect: \"false\" in the NGINX config map, or per-Ingress with the nginx.ingress.kubernetes.io/ssl-redirect: \"false\" annotation in the particular resource.

Tip

When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: \"true\" annotation in the particular resource.

"},{"location":"user-guide/tls/#automated-certificate-management-with-cert-manager","title":"Automated Certificate Management with cert-manager","text":"

cert-manager automatically requests missing or expired certificates from a range of supported issuers (including Let's Encrypt) by monitoring ingress resources.

To set up cert-manager you should take a look at this full example.

To enable it for an ingress resource, you have to deploy cert-manager, configure a certificate issuer, and update the manifest:

apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-demo\n  annotations:\n    cert-manager.io/issuer: \"letsencrypt-staging\" # Replace this with a production issuer once you've tested it\n    [..]\nspec:\n  tls:\n    - hosts:\n        - ingress-demo.example.com\n      secretName: ingress-demo-tls\n    [...]\n
"},{"location":"user-guide/tls/#default-tls-version-and-ciphers","title":"Default TLS Version and Ciphers","text":"

To provide the most secure baseline configuration possible, ingress-nginx defaults to using TLS 1.2 and 1.3 only, with a secure set of TLS ciphers.

"},{"location":"user-guide/tls/#legacy-tls","title":"Legacy TLS","text":"

The default configuration, though secure, does not support some older browsers and operating systems.

For instance, TLS 1.1+ is only enabled by default from Android 5.0 on. At the time of writing, May 2018, approximately 15% of Android devices are not compatible with ingress-nginx's default configuration.

To change this default behavior, use a ConfigMap.

A sample ConfigMap fragment to allow these older clients to connect could look something like the following (generated using the Mozilla SSL Configuration Generator):

kind: ConfigMap\napiVersion: v1\nmetadata:\n  name: nginx-config\ndata:\n  ssl-ciphers: \"ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA\"\n  ssl-protocols: \"TLSv1 TLSv1.1 TLSv1.2 TLSv1.3\"\n
"},{"location":"user-guide/nginx-configuration/","title":"NGINX Configuration","text":"

There are three ways to customize NGINX:

  1. ConfigMap: using a ConfigMap to set global configurations in NGINX.
  2. Annotations: use this if you want a specific configuration for a particular Ingress rule.
  3. Custom template: when more specific settings are required, like open_file_cache, adjusting listen options such as rcvbuf, or when it is not possible to change the configuration through the ConfigMap.
"},{"location":"user-guide/nginx-configuration/annotations-risk/","title":"Annotations Scope and Risk","text":"Group Annotation Risk Scope Aliases server-alias High ingress Allowlist allowlist-source-range Medium location BackendProtocol backend-protocol Low location BasicDigestAuth auth-realm Medium location BasicDigestAuth auth-secret Medium location BasicDigestAuth auth-secret-type Low location BasicDigestAuth auth-type Low location Canary canary Low ingress Canary canary-by-cookie Medium ingress Canary canary-by-header Medium ingress Canary canary-by-header-pattern Medium ingress Canary canary-by-header-value Medium ingress Canary canary-weight Low ingress Canary canary-weight-total Low ingress CertificateAuth auth-tls-error-page High location CertificateAuth auth-tls-match-cn High location CertificateAuth auth-tls-pass-certificate-to-upstream Low location CertificateAuth auth-tls-secret Medium location CertificateAuth auth-tls-verify-client Medium location CertificateAuth auth-tls-verify-depth Low location ClientBodyBufferSize client-body-buffer-size Low location ConfigurationSnippet configuration-snippet Critical location Connection connection-proxy-header Low location CorsConfig cors-allow-credentials Low ingress CorsConfig cors-allow-headers Medium ingress CorsConfig cors-allow-methods Medium ingress CorsConfig cors-allow-origin Medium ingress CorsConfig cors-expose-headers Medium ingress CorsConfig cors-max-age Low ingress CorsConfig enable-cors Low ingress CustomHTTPErrors custom-http-errors Low location CustomHeaders custom-headers Medium location DefaultBackend default-backend Low location Denylist denylist-source-range Medium location DisableProxyInterceptErrors disable-proxy-intercept-errors Low location EnableGlobalAuth enable-global-auth Low location ExternalAuth auth-always-set-cookie Low location ExternalAuth auth-cache-duration Medium location ExternalAuth auth-cache-key Medium location ExternalAuth auth-keepalive Low location ExternalAuth auth-keepalive-requests Low location ExternalAuth auth-keepalive-share-vars Low location ExternalAuth auth-keepalive-timeout Low location ExternalAuth auth-method Low location ExternalAuth auth-proxy-set-headers Medium location ExternalAuth auth-request-redirect Medium location ExternalAuth auth-response-headers Medium location ExternalAuth auth-signin High location ExternalAuth auth-signin-redirect-param Medium location ExternalAuth auth-snippet Critical location ExternalAuth auth-url High location FastCGI fastcgi-index Medium location FastCGI fastcgi-params-configmap Medium location HTTP2PushPreload http2-push-preload Low location LoadBalancing load-balance Low location Logs enable-access-log Low location Logs enable-rewrite-log Low location Mirror mirror-host High ingress Mirror mirror-request-body Low ingress Mirror mirror-target High ingress ModSecurity enable-modsecurity Low ingress ModSecurity enable-owasp-core-rules Low ingress ModSecurity modsecurity-snippet Critical ingress ModSecurity modsecurity-transaction-id High ingress Opentelemetry enable-opentelemetry Low location Opentelemetry opentelemetry-operation-name Medium location Opentelemetry opentelemetry-trust-incoming-span Low location Proxy proxy-body-size Medium location Proxy proxy-buffer-size Low location Proxy proxy-buffering Low location Proxy proxy-buffers-number Low location Proxy proxy-connect-timeout Low location Proxy proxy-cookie-domain Medium location Proxy proxy-cookie-path Medium location Proxy proxy-http-version Low location Proxy proxy-max-temp-file-size 
Low location Proxy proxy-next-upstream Medium location Proxy proxy-next-upstream-timeout Low location Proxy proxy-next-upstream-tries Low location Proxy proxy-read-timeout Low location Proxy proxy-redirect-from Medium location Proxy proxy-redirect-to Medium location Proxy proxy-request-buffering Low location Proxy proxy-send-timeout Low location ProxySSL proxy-ssl-ciphers Medium ingress ProxySSL proxy-ssl-name High ingress ProxySSL proxy-ssl-protocols Low ingress ProxySSL proxy-ssl-secret Medium ingress ProxySSL proxy-ssl-server-name Low ingress ProxySSL proxy-ssl-verify Low ingress ProxySSL proxy-ssl-verify-depth Low ingress RateLimit limit-allowlist Low location RateLimit limit-burst-multiplier Low location RateLimit limit-connections Low location RateLimit limit-rate Low location RateLimit limit-rate-after Low location RateLimit limit-rpm Low location RateLimit limit-rps Low location Redirect from-to-www-redirect Low location Redirect permanent-redirect Medium location Redirect permanent-redirect-code Low location Redirect temporal-redirect Medium location Redirect temporal-redirect-code Low location Rewrite app-root Medium location Rewrite force-ssl-redirect Medium location Rewrite preserve-trailing-slash Medium location Rewrite rewrite-target Medium ingress Rewrite ssl-redirect Low location Rewrite use-regex Low location SSLCipher ssl-ciphers Low ingress SSLCipher ssl-prefer-server-ciphers Low ingress SSLPassthrough ssl-passthrough Low ingress Satisfy satisfy Low location ServerSnippet server-snippet Critical ingress ServiceUpstream service-upstream Low ingress SessionAffinity affinity Low ingress SessionAffinity affinity-canary-behavior Low ingress SessionAffinity affinity-mode Medium ingress SessionAffinity session-cookie-change-on-failure Low ingress SessionAffinity session-cookie-conditional-samesite-none Low ingress SessionAffinity session-cookie-domain Medium ingress SessionAffinity session-cookie-expires Medium ingress SessionAffinity session-cookie-max-age Medium ingress SessionAffinity session-cookie-name Medium ingress SessionAffinity session-cookie-path Medium ingress SessionAffinity session-cookie-samesite Low ingress SessionAffinity session-cookie-secure Low ingress StreamSnippet stream-snippet Critical ingress UpstreamHashBy upstream-hash-by High location UpstreamHashBy upstream-hash-by-subset Low location UpstreamHashBy upstream-hash-by-subset-size Low location UpstreamVhost upstream-vhost Low location UsePortInRedirects use-port-in-redirects Low location XForwardedPrefix x-forwarded-prefix Medium location"},{"location":"user-guide/nginx-configuration/annotations/","title":"Annotations","text":"

You can add these Kubernetes annotations to specific Ingress objects to customize their behavior.

Tip

Annotation keys and values can only be strings. Other types, such as boolean or numeric values, must be quoted, e.g. \"true\", \"false\", \"100\".

Note

The annotation prefix can be changed using the --annotations-prefix command line argument, but the default is nginx.ingress.kubernetes.io, as described in the table below.

Name type nginx.ingress.kubernetes.io/app-root string nginx.ingress.kubernetes.io/affinity cookie nginx.ingress.kubernetes.io/affinity-mode \"balanced\" or \"persistent\" nginx.ingress.kubernetes.io/affinity-canary-behavior \"sticky\" or \"legacy\" nginx.ingress.kubernetes.io/auth-realm string nginx.ingress.kubernetes.io/auth-secret string nginx.ingress.kubernetes.io/auth-secret-type string nginx.ingress.kubernetes.io/auth-type \"basic\" or \"digest\" nginx.ingress.kubernetes.io/auth-tls-secret string nginx.ingress.kubernetes.io/auth-tls-verify-depth number nginx.ingress.kubernetes.io/auth-tls-verify-client string nginx.ingress.kubernetes.io/auth-tls-error-page string nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream \"true\" or \"false\" nginx.ingress.kubernetes.io/auth-tls-match-cn string nginx.ingress.kubernetes.io/auth-url string nginx.ingress.kubernetes.io/auth-cache-key string nginx.ingress.kubernetes.io/auth-cache-duration string nginx.ingress.kubernetes.io/auth-keepalive number nginx.ingress.kubernetes.io/auth-keepalive-share-vars \"true\" or \"false\" nginx.ingress.kubernetes.io/auth-keepalive-requests number nginx.ingress.kubernetes.io/auth-keepalive-timeout number nginx.ingress.kubernetes.io/auth-proxy-set-headers string nginx.ingress.kubernetes.io/auth-snippet string nginx.ingress.kubernetes.io/enable-global-auth \"true\" or \"false\" nginx.ingress.kubernetes.io/backend-protocol string nginx.ingress.kubernetes.io/canary \"true\" or \"false\" nginx.ingress.kubernetes.io/canary-by-header string nginx.ingress.kubernetes.io/canary-by-header-value string nginx.ingress.kubernetes.io/canary-by-header-pattern string nginx.ingress.kubernetes.io/canary-by-cookie string nginx.ingress.kubernetes.io/canary-weight number nginx.ingress.kubernetes.io/canary-weight-total number nginx.ingress.kubernetes.io/client-body-buffer-size string nginx.ingress.kubernetes.io/configuration-snippet string nginx.ingress.kubernetes.io/custom-http-errors []int nginx.ingress.kubernetes.io/custom-headers string nginx.ingress.kubernetes.io/default-backend string nginx.ingress.kubernetes.io/enable-cors \"true\" or \"false\" nginx.ingress.kubernetes.io/cors-allow-origin string nginx.ingress.kubernetes.io/cors-allow-methods string nginx.ingress.kubernetes.io/cors-allow-headers string nginx.ingress.kubernetes.io/cors-expose-headers string nginx.ingress.kubernetes.io/cors-allow-credentials \"true\" or \"false\" nginx.ingress.kubernetes.io/cors-max-age number nginx.ingress.kubernetes.io/force-ssl-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/from-to-www-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/http2-push-preload \"true\" or \"false\" nginx.ingress.kubernetes.io/limit-connections number nginx.ingress.kubernetes.io/limit-rps number nginx.ingress.kubernetes.io/permanent-redirect string nginx.ingress.kubernetes.io/permanent-redirect-code number nginx.ingress.kubernetes.io/temporal-redirect string nginx.ingress.kubernetes.io/temporal-redirect-code number nginx.ingress.kubernetes.io/preserve-trailing-slash \"true\" or \"false\" nginx.ingress.kubernetes.io/proxy-body-size string nginx.ingress.kubernetes.io/proxy-cookie-domain string nginx.ingress.kubernetes.io/proxy-cookie-path string nginx.ingress.kubernetes.io/proxy-connect-timeout number nginx.ingress.kubernetes.io/proxy-send-timeout number nginx.ingress.kubernetes.io/proxy-read-timeout number nginx.ingress.kubernetes.io/proxy-next-upstream string nginx.ingress.kubernetes.io/proxy-next-upstream-timeout number 
nginx.ingress.kubernetes.io/proxy-next-upstream-tries number nginx.ingress.kubernetes.io/proxy-request-buffering string nginx.ingress.kubernetes.io/proxy-redirect-from string nginx.ingress.kubernetes.io/proxy-redirect-to string nginx.ingress.kubernetes.io/proxy-http-version \"1.0\" or \"1.1\" nginx.ingress.kubernetes.io/proxy-ssl-secret string nginx.ingress.kubernetes.io/proxy-ssl-ciphers string nginx.ingress.kubernetes.io/proxy-ssl-name string nginx.ingress.kubernetes.io/proxy-ssl-protocols string nginx.ingress.kubernetes.io/proxy-ssl-verify string nginx.ingress.kubernetes.io/proxy-ssl-verify-depth number nginx.ingress.kubernetes.io/proxy-ssl-server-name string nginx.ingress.kubernetes.io/enable-rewrite-log \"true\" or \"false\" nginx.ingress.kubernetes.io/rewrite-target URI nginx.ingress.kubernetes.io/satisfy string nginx.ingress.kubernetes.io/server-alias string nginx.ingress.kubernetes.io/server-snippet string nginx.ingress.kubernetes.io/service-upstream \"true\" or \"false\" nginx.ingress.kubernetes.io/session-cookie-change-on-failure \"true\" or \"false\" nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none \"true\" or \"false\" nginx.ingress.kubernetes.io/session-cookie-domain string nginx.ingress.kubernetes.io/session-cookie-expires string nginx.ingress.kubernetes.io/session-cookie-max-age string nginx.ingress.kubernetes.io/session-cookie-name string nginx.ingress.kubernetes.io/session-cookie-path string nginx.ingress.kubernetes.io/session-cookie-samesite string nginx.ingress.kubernetes.io/session-cookie-secure string nginx.ingress.kubernetes.io/ssl-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/ssl-passthrough \"true\" or \"false\" nginx.ingress.kubernetes.io/stream-snippet string nginx.ingress.kubernetes.io/upstream-hash-by string nginx.ingress.kubernetes.io/x-forwarded-prefix string nginx.ingress.kubernetes.io/load-balance string nginx.ingress.kubernetes.io/upstream-vhost string nginx.ingress.kubernetes.io/denylist-source-range CIDR nginx.ingress.kubernetes.io/whitelist-source-range CIDR nginx.ingress.kubernetes.io/proxy-buffering string nginx.ingress.kubernetes.io/proxy-buffers-number number nginx.ingress.kubernetes.io/proxy-buffer-size string nginx.ingress.kubernetes.io/proxy-max-temp-file-size string nginx.ingress.kubernetes.io/ssl-ciphers string nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers \"true\" or \"false\" nginx.ingress.kubernetes.io/connection-proxy-header string nginx.ingress.kubernetes.io/enable-access-log \"true\" or \"false\" nginx.ingress.kubernetes.io/enable-opentelemetry \"true\" or \"false\" nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span \"true\" or \"false\" nginx.ingress.kubernetes.io/use-regex bool nginx.ingress.kubernetes.io/enable-modsecurity bool nginx.ingress.kubernetes.io/enable-owasp-core-rules bool nginx.ingress.kubernetes.io/modsecurity-transaction-id string nginx.ingress.kubernetes.io/modsecurity-snippet string nginx.ingress.kubernetes.io/mirror-request-body string nginx.ingress.kubernetes.io/mirror-target string nginx.ingress.kubernetes.io/mirror-host string"},{"location":"user-guide/nginx-configuration/annotations/#canary","title":"Canary","text":"

In some cases, you may want to \"canary\" a new set of changes by sending a small number of requests to a different service than the production service. The canary annotation enables the Ingress spec to act as an alternative service for requests to route to, depending on the rules applied. The following annotations configure the canary and take effect after nginx.ingress.kubernetes.io/canary: \"true\" is set:

  • nginx.ingress.kubernetes.io/canary-by-header: The header to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to always, it will be routed to the canary. When the header is set to never, it will never be routed to the canary. For any other value, the header will be ignored and the request compared against the other canary rules by precedence.

  • nginx.ingress.kubernetes.io/canary-by-header-value: The header value to match for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to this value, it will be routed to the canary. For any other header value, the header will be ignored and the request compared against the other canary rules by precedence. This annotation has to be used together with nginx.ingress.kubernetes.io/canary-by-header. The annotation is an extension of the nginx.ingress.kubernetes.io/canary-by-header to allow customizing the header value instead of using hardcoded values. It doesn't have any effect if the nginx.ingress.kubernetes.io/canary-by-header annotation is not defined.

  • nginx.ingress.kubernetes.io/canary-by-header-pattern: This works the same way as canary-by-header-value except it does PCRE Regex matching. Note that when canary-by-header-value is set, this annotation will be ignored. When the given regex causes an error during request processing, the request will be considered as not matching.

  • nginx.ingress.kubernetes.io/canary-by-cookie: The cookie to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the cookie value is set to always, it will be routed to the canary. When the cookie is set to never, it will never be routed to the canary. For any other value, the cookie will be ignored and the request compared against the other canary rules by precedence.

  • nginx.ingress.kubernetes.io/canary-weight: The integer-based (0 - <weight-total>) percent of random requests that should be routed to the service specified in the canary Ingress. A weight of 0 implies that no requests will be sent to the service in the canary Ingress by this canary rule. A weight of <weight-total> implies all requests will be sent to the alternative service specified in the Ingress. <weight-total> defaults to 100, and can be increased via nginx.ingress.kubernetes.io/canary-weight-total.

  • nginx.ingress.kubernetes.io/canary-weight-total: The total weight of traffic. If unspecified, it defaults to 100.

  • Canary rules are evaluated in order of precedence. Precedence is as follows: canary-by-header -> canary-by-cookie -> canary-weight

    Note that when you mark an ingress as canary, then all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except nginx.ingress.kubernetes.io/load-balance, nginx.ingress.kubernetes.io/upstream-hash-by, and annotations related to session affinity. If you want to restore the original behavior of canaries when session affinity was ignored, set nginx.ingress.kubernetes.io/affinity-canary-behavior annotation with value legacy on the canary ingress definition.

    Known Limitations

    Currently a maximum of one canary ingress can be applied per Ingress rule.
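    As an illustration, a weight-based canary Ingress could look like the following sketch (hostname and service names are placeholders):

    apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: demo-canary\n  annotations:\n    nginx.ingress.kubernetes.io/canary: \"true\"\n    nginx.ingress.kubernetes.io/canary-weight: \"10\"\nspec:\n  ingressClassName: nginx\n  rules:\n    - host: demo.example.com\n      http:\n        paths:\n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: demo-canary-svc\n                port:\n                  number: 80\n

    With this in place, roughly 10% of requests for demo.example.com would be routed to demo-canary-svc, while the rest continue to hit the service of the main Ingress.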

    "},{"location":"user-guide/nginx-configuration/annotations/#rewrite","title":"Rewrite","text":"

    In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. Set the annotation nginx.ingress.kubernetes.io/rewrite-target to the path expected by the service.

    If the Application Root is exposed in a different path and needs to be redirected, set the annotation nginx.ingress.kubernetes.io/app-root to redirect requests for /.
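    As a sketch, a capture-group based rewrite (paths and names are placeholders, in the spirit of the rewrite example) could look like:

    apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: rewrite-demo\n  annotations:\n    nginx.ingress.kubernetes.io/rewrite-target: /$2\nspec:\n  ingressClassName: nginx\n  rules:\n    - host: rewrite.example.com\n      http:\n        paths:\n          - path: /something(/|$)(.*)\n            pathType: ImplementationSpecific\n            backend:\n              service:\n                name: http-svc\n                port:\n                  number: 80\n

    Here a request for /something/foo would reach the service as /foo.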

    Example

    Please check the rewrite example.

    "},{"location":"user-guide/nginx-configuration/annotations/#session-affinity","title":"Session Affinity","text":"

    The annotation nginx.ingress.kubernetes.io/affinity enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server. The only affinity type available for NGINX is cookie.

    The annotation nginx.ingress.kubernetes.io/affinity-mode defines the stickiness of a session. Setting this to balanced (default) will redistribute some sessions if a deployment gets scaled up, therefore rebalancing the load on the servers. Setting this to persistent will not rebalance sessions to new servers, therefore providing maximum stickiness.

    The annotation nginx.ingress.kubernetes.io/affinity-canary-behavior defines the behavior of canaries when session affinity is enabled. Setting this to sticky (default) will ensure that users that were served by canaries, will continue to be served by canaries. Setting this to legacy will restore original canary behavior, when session affinity was ignored.

    Attention

    If more than one Ingress is defined for a host and at least one Ingress uses nginx.ingress.kubernetes.io/affinity: cookie, then only paths on the Ingress using nginx.ingress.kubernetes.io/affinity will use session cookie affinity. All paths defined on other Ingresses for the host will be load balanced through the random selection of a backend server.

    Example

    Please check the affinity example.

    "},{"location":"user-guide/nginx-configuration/annotations/#cookie-affinity","title":"Cookie affinity","text":"

    If you use the cookie affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation nginx.ingress.kubernetes.io/session-cookie-name. The default is to create a cookie named 'INGRESSCOOKIE'.

    The NGINX annotation nginx.ingress.kubernetes.io/session-cookie-path defines the path that will be set on the cookie. This is optional unless the annotation nginx.ingress.kubernetes.io/use-regex is set to true; Session cookie paths do not support regex.

    Use nginx.ingress.kubernetes.io/session-cookie-domain to set the Domain attribute of the sticky cookie.

    Use nginx.ingress.kubernetes.io/session-cookie-samesite to apply a SameSite attribute to the sticky cookie. Browser accepted values are None, Lax, and Strict. Some browsers reject cookies with SameSite=None, including those created before the SameSite=None specification (e.g. Chrome 5X). Other browsers mistakenly treat SameSite=None cookies as SameSite=Strict (e.g. Safari running on OSX 14). To omit SameSite=None from browsers with these incompatibilities, add the annotation nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none: \"true\".

    Use nginx.ingress.kubernetes.io/session-cookie-expires to control when the cookie expires; its value is the number of seconds until the cookie expires.

    Use nginx.ingress.kubernetes.io/session-cookie-path to control the cookie path when use-regex is set to true.

    Use nginx.ingress.kubernetes.io/session-cookie-change-on-failure to control the cookie change after request failure.
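    Putting these together, a sticky-cookie sketch could look like the following (cookie name and lifetime are placeholders):

    nginx.ingress.kubernetes.io/affinity: \"cookie\"\nnginx.ingress.kubernetes.io/session-cookie-name: \"route\"\nnginx.ingress.kubernetes.io/session-cookie-max-age: \"172800\"\nnginx.ingress.kubernetes.io/session-cookie-secure: \"true\"\n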

    "},{"location":"user-guide/nginx-configuration/annotations/#authentication","title":"Authentication","text":"

    It is possible to add authentication by adding additional annotations in the Ingress rule. The source of the authentication is a secret that contains usernames and passwords.

    The annotations are:

    nginx.ingress.kubernetes.io/auth-type: [basic|digest]\n

    Indicates the HTTP Authentication Type: Basic or Digest Access Authentication.

    nginx.ingress.kubernetes.io/auth-secret: secretName\n

    The name of the Secret that contains the usernames and passwords which are granted access to the paths defined in the Ingress rules. This annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.

    nginx.ingress.kubernetes.io/auth-secret-type: [auth-file|auth-map]\n

    The auth-secret can have two forms:

    • auth-file - default, an htpasswd file in the key auth within the secret
    • auth-map - the keys of the secret are the usernames, and the values are the hashed passwords
    nginx.ingress.kubernetes.io/auth-realm: \"realm string\"\n
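    Taken together, a minimal basic-auth sketch could look like this, assuming a Secret named basic-auth created from an htpasswd file in the same namespace:

    nginx.ingress.kubernetes.io/auth-type: basic\nnginx.ingress.kubernetes.io/auth-secret: basic-auth\nnginx.ingress.kubernetes.io/auth-realm: \"Authentication Required\"\n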

    Example

    Please check the auth example.

    "},{"location":"user-guide/nginx-configuration/annotations/#custom-nginx-upstream-hashing","title":"Custom NGINX upstream hashing","text":"

    NGINX supports load balancing by client-server mapping based on consistent hashing for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The ketama consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes.

    There is a special mode of upstream hashing called subset. In this mode, upstream servers are grouped into subsets, and stickiness works by mapping keys to a subset instead of individual upstream servers. A specific server is then chosen uniformly at random from the selected sticky subset. It provides a balance between stickiness and load distribution.

    To enable consistent hashing for a backend:

    nginx.ingress.kubernetes.io/upstream-hash-by: the nginx variable, text value or any combination thereof to use for consistent hashing. For example: nginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri\" or nginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri$host\" or nginx.ingress.kubernetes.io/upstream-hash-by: \"${request_uri}-text-value\" to consistently hash upstream requests by the current request URI.

    \"subset\" hashing can be enabled setting nginx.ingress.kubernetes.io/upstream-hash-by-subset: \"true\". This maps requests to subset of nodes instead of a single one. nginx.ingress.kubernetes.io/upstream-hash-by-subset-size determines the size of each subset (default 3).

    Please check the chashsubset example.

    "},{"location":"user-guide/nginx-configuration/annotations/#custom-nginx-load-balancing","title":"Custom NGINX load balancing","text":"

    This is similar to load-balance in ConfigMap, but configures load balancing algorithm per ingress.

    Note that nginx.ingress.kubernetes.io/upstream-hash-by takes preference over this. If neither this nor nginx.ingress.kubernetes.io/upstream-hash-by is set, we fall back to the globally configured load balancing algorithm.
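    For example, ewma is one of the algorithms accepted by this setting:

    nginx.ingress.kubernetes.io/load-balance: \"ewma\"\n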

    "},{"location":"user-guide/nginx-configuration/annotations/#custom-nginx-upstream-vhost","title":"Custom NGINX upstream vhost","text":"

    This configuration setting allows you to control the value for host in the following statement: proxy_set_header Host $host, which forms part of the location block. This is useful if you need to call the upstream server by something other than $host.
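    For example, to present a placeholder internal hostname to the upstream instead of $host:

    nginx.ingress.kubernetes.io/upstream-vhost: \"internal.example.com\"\n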

    "},{"location":"user-guide/nginx-configuration/annotations/#client-certificate-authentication","title":"Client Certificate Authentication","text":"

    It is possible to enable Client Certificate Authentication using additional annotations in the Ingress rule.

    Client Certificate Authentication is applied per host and it is not possible to specify rules that differ for individual paths.

    To enable, add the annotation nginx.ingress.kubernetes.io/auth-tls-secret: namespace/secretName. This secret must have a file named ca.crt containing the full Certificate Authority chain that is enabled to authenticate against this Ingress.

    You can further customize client certificate authentication and behavior with these annotations:

    • nginx.ingress.kubernetes.io/auth-tls-verify-depth: The validation depth between the provided client certificate and the Certification Authority chain. (default: 1)
    • nginx.ingress.kubernetes.io/auth-tls-verify-client: Enables verification of client certificates. Possible values are:
      • on: Request a client certificate that must be signed by a certificate that is included in the secret key ca.crt of the secret specified by nginx.ingress.kubernetes.io/auth-tls-secret: namespace/secretName. Failed certificate verification will result in a status code 400 (Bad Request) (default)
      • off: Don't request client certificates and don't do client certificate verification.
      • optional: Do optional client certificate validation against the CAs from auth-tls-secret. The request fails with status code 400 (Bad Request) when a certificate is provided that is not signed by the CA. When no or an otherwise invalid certificate is provided, the request does not fail, but instead the verification result is sent to the upstream service.
      • optional_no_ca: Do optional client certificate validation, but do not fail the request when the client certificate is not signed by the CAs from auth-tls-secret. Certificate verification result is sent to the upstream service.
    • nginx.ingress.kubernetes.io/auth-tls-error-page: The URL/Page that the user should be redirected to in case of a Certificate Authentication Error
    • nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: Indicates if the received certificates should be passed or not to the upstream server in the header ssl-client-cert. Possible values are \"true\" or \"false\" (default).
    • nginx.ingress.kubernetes.io/auth-tls-match-cn: Adds a sanity check for the CN of the client certificate that is sent over using a string / regex starting with \"CN=\", example: \"CN=myvalidclient\". If the certificate CN sent during mTLS does not match your string / regex it will fail with status code 403. Another way of using this is by adding multiple options in your regex, example: \"CN=(option1|option2|myvalidclient)\". In this case, as long as one of the options in the brackets matches the certificate CN then you will receive a 200 status code.

    The following headers are sent to the upstream service according to the auth-tls-* annotations:

    • ssl-client-issuer-dn: The issuer information of the client certificate. Example: \"CN=My CA\"
    • ssl-client-subject-dn: The subject information of the client certificate. Example: \"CN=My Client\"
    • ssl-client-verify: The result of the client verification. Possible values: \"SUCCESS\", \"FAILED: \"
    • ssl-client-cert: The full client certificate in PEM format. Will only be sent when nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream is set to \"true\". Example: -----BEGIN%20CERTIFICATE-----%0A...---END%20CERTIFICATE-----%0A
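    A minimal sketch combining these annotations (namespace and secret name are placeholders):

    nginx.ingress.kubernetes.io/auth-tls-secret: \"default/ca-secret\"\nnginx.ingress.kubernetes.io/auth-tls-verify-client: \"on\"\nnginx.ingress.kubernetes.io/auth-tls-verify-depth: \"1\"\nnginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: \"true\"\n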
    Example

      Please check the client-certs example.

      Attention

      TLS with Client Authentication is not possible in Cloudflare and might result in unexpected behavior.

      Cloudflare only allows Authenticated Origin Pulls and requires the use of their own certificate: https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/

      Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial: https://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls

      "},{"location":"user-guide/nginx-configuration/annotations/#backend-certificate-authentication","title":"Backend Certificate Authentication","text":"

      It is possible to authenticate to a proxied HTTPS backend with a certificate, using additional annotations in the Ingress rule.

      • nginx.ingress.kubernetes.io/proxy-ssl-secret: secretName: Specifies a Secret with the certificate tls.crt, key tls.key in PEM format used for authentication to a proxied HTTPS server. It should also contain trusted CA certificates ca.crt in PEM format used to verify the certificate of the proxied HTTPS server. This annotation expects the Secret name in the form \"namespace/secretName\".
      • nginx.ingress.kubernetes.io/proxy-ssl-verify: Enables or disables verification of the proxied HTTPS server certificate. (default: off)
      • nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: Sets the verification depth in the proxied HTTPS server certificates chain. (default: 1)
      • nginx.ingress.kubernetes.io/proxy-ssl-ciphers: Specifies the enabled ciphers for requests to a proxied HTTPS server. The ciphers are specified in the format understood by the OpenSSL library.
      • nginx.ingress.kubernetes.io/proxy-ssl-name: Allows to set proxy_ssl_name. This allows overriding the server name used to verify the certificate of the proxied HTTPS server. This value is also passed through SNI when a connection is established to the proxied HTTPS server.
      • nginx.ingress.kubernetes.io/proxy-ssl-protocols: Enables the specified protocols for requests to a proxied HTTPS server.
      • nginx.ingress.kubernetes.io/proxy-ssl-server-name: Enables passing of the server name through TLS Server Name Indication extension (SNI, RFC 6066) when establishing a connection with the proxied HTTPS server.
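      A rough sketch, assuming a Secret named proxy-ssl-secret in namespace default that contains tls.crt, tls.key and ca.crt:

      nginx.ingress.kubernetes.io/proxy-ssl-secret: \"default/proxy-ssl-secret\"\nnginx.ingress.kubernetes.io/proxy-ssl-verify: \"on\"\nnginx.ingress.kubernetes.io/proxy-ssl-verify-depth: \"2\"\nnginx.ingress.kubernetes.io/backend-protocol: \"HTTPS\"\n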
      "},{"location":"user-guide/nginx-configuration/annotations/#configuration-snippet","title":"Configuration snippet","text":"

      Using this annotation you can add additional configuration to the NGINX location. For example:

      nginx.ingress.kubernetes.io/configuration-snippet: |\n  more_set_headers \"Request-Id: $req_id\";\n

      Be aware this can be dangerous in multi-tenant clusters, as it can lead to people with otherwise limited permissions being able to retrieve all secrets on the cluster. The recommended mitigation for this threat is to disable this feature, so it may not work for you. See CVE-2021-25742 and the related issue on github for more information.

      "},{"location":"user-guide/nginx-configuration/annotations/#custom-http-errors","title":"Custom HTTP Errors","text":"

      Like the custom-http-errors value in the ConfigMap, this annotation will set NGINX proxy-intercept-errors, but only for the NGINX location associated with this ingress. If a default backend annotation is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend). Different ingresses can specify different sets of error codes. Even if multiple ingress objects share the same hostname, this annotation can be used to intercept different error codes for each ingress (for example, different error codes to be intercepted for different paths on the same hostname, if each path is on a different ingress). If custom-http-errors is also specified globally, the error values specified in this annotation will override the global value for the given ingress' hostname and path.

      Example usage:

      nginx.ingress.kubernetes.io/custom-http-errors: \"404,415\"\n

      "},{"location":"user-guide/nginx-configuration/annotations/#custom-headers","title":"Custom Headers","text":"

      This annotation is of the form nginx.ingress.kubernetes.io/custom-headers: custom-headers-configmap to specify a configmap name that contains custom headers. This annotation uses the more_set_headers NGINX directive.

      Example configmap:

      apiVersion: v1\ndata:\n  Content-Type: application/json\nkind: ConfigMap\nmetadata:\n  name: custom-headers-configmap\n

      Attention

      First define the allowed response headers in global-allowed-response-headers.
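      As a sketch, an Ingress would then reference that ConfigMap by name (assuming Content-Type has first been listed in global-allowed-response-headers):

      nginx.ingress.kubernetes.io/custom-headers: \"custom-headers-configmap\"\n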

      "},{"location":"user-guide/nginx-configuration/annotations/#default-backend","title":"Default Backend","text":"

      This annotation is of the form nginx.ingress.kubernetes.io/default-backend: <svc name> to specify a custom default backend. This <svc name> is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. In case the service has multiple ports, the first one is the one which will receive the backend traffic.

      This service will be used to handle the response when the configured service in the Ingress rule does not have any active endpoints. It will also be used to handle the error responses if both this annotation and the custom-http-errors annotation are set.
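      For example, with a placeholder service named error-pages-svc in the same namespace:

      nginx.ingress.kubernetes.io/default-backend: \"error-pages-svc\"\nnginx.ingress.kubernetes.io/custom-http-errors: \"404,503\"\n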

      "},{"location":"user-guide/nginx-configuration/annotations/#enable-cors","title":"Enable CORS","text":"

      To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation nginx.ingress.kubernetes.io/enable-cors: \"true\". This will add a section in the server location enabling this functionality.

      CORS can be controlled with the following annotations:

      • nginx.ingress.kubernetes.io/cors-allow-methods: Controls which methods are accepted.

        This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case).

        • Default: GET, PUT, POST, DELETE, PATCH, OPTIONS
        • Example: nginx.ingress.kubernetes.io/cors-allow-methods: \"PUT, GET, POST, OPTIONS\"
      • nginx.ingress.kubernetes.io/cors-allow-headers: Controls which headers are accepted.

        This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -.

        • Default: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization
        • Example: nginx.ingress.kubernetes.io/cors-allow-headers: \"X-Forwarded-For, X-app123-XPTO\"
      • nginx.ingress.kubernetes.io/cors-expose-headers: Controls which headers are exposed to response.

        This is a multi-valued field, separated by ',' and accepts letters, numbers, _, - and *.

        • Default: empty
        • Example: nginx.ingress.kubernetes.io/cors-expose-headers: \"*, X-CustomResponseHeader\"
      • nginx.ingress.kubernetes.io/cors-allow-origin: Controls the accepted Origin for CORS.

        This is a multi-valued field, separated by ','. It must follow this format: protocol://origin-site.com or protocol://origin-site.com:port

        • Default: *
        • Example: nginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com:4443, http://origin-site.com, myprotocol://example.org:1199\"

        It also supports single level wildcard subdomains and follows this format: protocol://*.foo.bar, protocol://*.bar.foo:8080 or protocol://*.abc.bar.foo:9000 - Example: nginx.ingress.kubernetes.io/cors-allow-origin: \"https://*.origin-site.com:4443, http://*.origin-site.com, myprotocol://example.org:1199\"

      • nginx.ingress.kubernetes.io/cors-allow-credentials: Controls if credentials can be passed during CORS operations.

        • Default: true
        • Example: nginx.ingress.kubernetes.io/cors-allow-credentials: \"false\"
      • nginx.ingress.kubernetes.io/cors-max-age: Controls how long preflight requests can be cached.

        • Default: 1728000
        • Example: nginx.ingress.kubernetes.io/cors-max-age: 600
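      Combining a few of these, a sketch for a single allowed origin (values are placeholders) could be:

      nginx.ingress.kubernetes.io/enable-cors: \"true\"\nnginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com\"\nnginx.ingress.kubernetes.io/cors-allow-methods: \"GET, POST, OPTIONS\"\nnginx.ingress.kubernetes.io/cors-allow-credentials: \"true\"\n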

      Note

      For more information please see https://enable-cors.org

      "},{"location":"user-guide/nginx-configuration/annotations/#http2-push-preload","title":"HTTP2 Push Preload.","text":"

      Enables automatic conversion of preload links specified in the "Link" response header fields into push requests.

      Example

      • nginx.ingress.kubernetes.io/http2-push-preload: \"true\"
      "},{"location":"user-guide/nginx-configuration/annotations/#server-alias","title":"Server Alias","text":"

      Allows the definition of one or more aliases in the server definition of the NGINX configuration using the annotation nginx.ingress.kubernetes.io/server-alias: \"<alias 1>,<alias 2>\". This will create a server with the same configuration, but adding new values to the server_name directive.

      Note

      A server-alias name cannot conflict with the hostname of an existing server. If it does, the server-alias annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take place over the alias configuration.

      For more information please see the server_name documentation.

      "},{"location":"user-guide/nginx-configuration/annotations/#server-snippet","title":"Server snippet","text":"

      Using the annotation nginx.ingress.kubernetes.io/server-snippet it is possible to add custom configuration in the server configuration block.

      apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/server-snippet: |\n        set $agentflag 0;\n\n        if ($http_user_agent ~* \"(Mobile)\" ){\n          set $agentflag 1;\n        }\n\n        if ( $agentflag = 1 ) {\n          return 301 https://m.example.com;\n        }\n

      Attention

      This annotation can be used only once per host.

      "},{"location":"user-guide/nginx-configuration/annotations/#client-body-buffer-size","title":"Client Body Buffer Size","text":"

      Sets buffer size for reading client request body per location. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is applied to each location provided in the ingress rule.

      Note

      The annotation value must be given in a format understood by Nginx.

      Example

      • nginx.ingress.kubernetes.io/client-body-buffer-size: \"1000\" # 1000 bytes
      • nginx.ingress.kubernetes.io/client-body-buffer-size: 1k # 1 kilobyte
      • nginx.ingress.kubernetes.io/client-body-buffer-size: 1K # 1 kilobyte
      • nginx.ingress.kubernetes.io/client-body-buffer-size: 1m # 1 megabyte
      • nginx.ingress.kubernetes.io/client-body-buffer-size: 1M # 1 megabyte

      For more information please see https://nginx.org

      "},{"location":"user-guide/nginx-configuration/annotations/#external-authentication","title":"External Authentication","text":"

      To use an existing service that provides authentication the Ingress rule can be annotated with nginx.ingress.kubernetes.io/auth-url to indicate the URL where the HTTP request should be sent.

      nginx.ingress.kubernetes.io/auth-url: \"URL to the authentication service\"\n

      Additionally it is possible to set:

      • nginx.ingress.kubernetes.io/auth-keepalive: <Connections> to specify the maximum number of keepalive connections to auth-url. Only takes effect when no variables are used in the host part of the URL. Defaults to 0 (keepalive disabled).

      Note: this does not work with an HTTP/2 listener because of a limitation in Lua subrequests. The use-http2 configuration should be disabled!

      • nginx.ingress.kubernetes.io/auth-keepalive-share-vars: Whether to share Nginx variables among the current request and the auth request. Example use case is to track requests: when set to \"true\" X-Request-ID HTTP header will be the same for the backend and the auth request. Defaults to \"false\".
      • nginx.ingress.kubernetes.io/auth-keepalive-requests: <Requests> to specify the maximum number of requests that can be served through one keepalive connection. Defaults to 1000 and only applied if auth-keepalive is set to higher than 0.
      • nginx.ingress.kubernetes.io/auth-keepalive-timeout: <Timeout> to specify a duration in seconds which an idle keepalive connection to an upstream server will stay open. Defaults to 60 and only applied if auth-keepalive is set to higher than 0.
      • nginx.ingress.kubernetes.io/auth-method: <Method> to specify the HTTP method to use.
      • nginx.ingress.kubernetes.io/auth-signin: <SignIn_URL> to specify the location of the error page.
      • nginx.ingress.kubernetes.io/auth-signin-redirect-param: <SignIn_URL> to specify the URL parameter in the error page which should contain the original URL for a failed signin request.
      • nginx.ingress.kubernetes.io/auth-response-headers: <Response_Header_1, ..., Response_Header_n> to specify headers to pass to backend once authentication request completes.
      • nginx.ingress.kubernetes.io/auth-proxy-set-headers: <ConfigMap> the name of a ConfigMap that specifies headers to pass to the authentication service
      • nginx.ingress.kubernetes.io/auth-request-redirect: <Request_Redirect_URL> to specify the X-Auth-Request-Redirect header value.
      • nginx.ingress.kubernetes.io/auth-cache-key: <Cache_Key> this enables caching for auth requests. Specify a lookup key for auth responses, e.g. $remote_user$http_authorization. Each server and location has its own keyspace. Hence a cached response is only valid on a per-server and per-location basis.
      • nginx.ingress.kubernetes.io/auth-cache-duration: <Cache_duration> to specify a caching time for auth responses based on their response codes, e.g. 200 202 30m. See proxy_cache_valid for details. You may specify multiple, comma-separated values: 200 202 10m, 401 5m. Defaults to 200 202 401 5m.
      • nginx.ingress.kubernetes.io/auth-always-set-cookie: <Boolean_Flag> to set a cookie returned by auth request. By default, the cookie will be set only if an upstream reports with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308.
      • nginx.ingress.kubernetes.io/auth-snippet: <Auth_Snippet> to specify a custom snippet to use with external authentication, e.g.
      nginx.ingress.kubernetes.io/auth-url: http://foo.com/external-auth\nnginx.ingress.kubernetes.io/auth-snippet: |\n    proxy_set_header Foo-Header 42;\n

      Note: nginx.ingress.kubernetes.io/auth-snippet is an optional annotation. However, it may only be used in conjunction with nginx.ingress.kubernetes.io/auth-url and will be ignored if nginx.ingress.kubernetes.io/auth-url is not set

      Example

      Please check the external-auth example.

      "},{"location":"user-guide/nginx-configuration/annotations/#global-external-authentication","title":"Global External Authentication","text":"

      By default the controller redirects all requests to an existing service that provides authentication if global-auth-url is set in the NGINX ConfigMap. If you want to disable this behavior for a given Ingress, you can use the annotation nginx.ingress.kubernetes.io/enable-global-auth: \"false\", which indicates whether the GlobalExternalAuth configuration should be applied to this Ingress rule. The default value is \"true\".

      Note

      For more information please see global-auth-url.

      "},{"location":"user-guide/nginx-configuration/annotations/#rate-limiting","title":"Rate Limiting","text":"

      These annotations define limits on connections and transmission rates. These can be used to mitigate DDoS Attacks.

      • nginx.ingress.kubernetes.io/limit-connections: number of concurrent connections allowed from a single IP address. A 503 error is returned when exceeding this limit.
      • nginx.ingress.kubernetes.io/limit-rps: number of requests accepted from a given IP each second. The burst limit is set to this limit multiplied by the burst multiplier, the default multiplier is 5. When clients exceed this limit, the configured limit-req-status-code (default: 503) is returned.
      • nginx.ingress.kubernetes.io/limit-rpm: number of requests accepted from a given IP each minute. The burst limit is set to this limit multiplied by the burst multiplier, the default multiplier is 5. When clients exceed this limit, the configured limit-req-status-code (default: 503) is returned.
      • nginx.ingress.kubernetes.io/limit-burst-multiplier: multiplier of the limit rate for burst size. The default burst multiplier is 5; this annotation overrides the default multiplier. When clients exceed this limit, the configured limit-req-status-code (default: 503) is returned.
      • nginx.ingress.kubernetes.io/limit-rate-after: initial number of kilobytes after which the further transmission of a response to a given connection will be rate limited. This feature must be used with proxy-buffering enabled.
      • nginx.ingress.kubernetes.io/limit-rate: number of kilobytes per second allowed to send to a given connection. The zero value disables rate limiting. This feature must be used with proxy-buffering enabled.
      • nginx.ingress.kubernetes.io/limit-allowlist: client IP source ranges to be excluded from rate-limiting. The value is a comma separated list of CIDRs.

      If you specify multiple annotations in a single Ingress rule, limits are applied in the order limit-connections, limit-rpm, limit-rps.

      To configure settings globally for all Ingress rules, the limit-rate-after and limit-rate values may be set in the NGINX ConfigMap. The value set in an Ingress annotation will override the global setting.

      The client IP address will be set based on the use of PROXY protocol or from the X-Forwarded-For header value when use-forwarded-headers is enabled.
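      As an illustration, a sketch limiting each client IP to 5 requests per second (a burst of 25 with the default multiplier), with a placeholder internal range excluded:

      nginx.ingress.kubernetes.io/limit-rps: \"5\"\nnginx.ingress.kubernetes.io/limit-allowlist: \"10.0.0.0/24\"\n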

      "},{"location":"user-guide/nginx-configuration/annotations/#permanent-redirect","title":"Permanent Redirect","text":"

      This annotation allows you to return a permanent redirect (Return Code 301) instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com would redirect everything to Google.

      "},{"location":"user-guide/nginx-configuration/annotations/#permanent-redirect-code","title":"Permanent Redirect Code","text":"

      This annotation allows you to modify the status code used for permanent redirects. For example nginx.ingress.kubernetes.io/permanent-redirect-code: '308' would return your permanent-redirect with a 308.

      "},{"location":"user-guide/nginx-configuration/annotations/#temporal-redirect","title":"Temporal Redirect","text":"

      This annotation allows you to return a temporal redirect (Return Code 302) instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/temporal-redirect: https://www.google.com would redirect everything to Google with a Return Code of 302 (Moved Temporarily).

      "},{"location":"user-guide/nginx-configuration/annotations/#temporal-redirect-code","title":"Temporal Redirect Code","text":"

      This annotation allows you to modify the status code used for temporal redirects. For example nginx.ingress.kubernetes.io/temporal-redirect-code: '307' would return your temporal-redirect with a 307.

      "},{"location":"user-guide/nginx-configuration/annotations/#ssl-passthrough","title":"SSL Passthrough","text":"

      The annotation nginx.ingress.kubernetes.io/ssl-passthrough instructs the controller to send TLS connections directly to the backend instead of letting NGINX decrypt the communication. See also TLS/HTTPS in the User guide.

      Note

      SSL Passthrough is disabled by default and requires starting the controller with the --enable-ssl-passthrough flag.

      Attention

      Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on layer 7 (HTTP), using SSL Passthrough invalidates all the other annotations set on an Ingress object.
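      A minimal sketch (hostname and service are placeholders; the backend must terminate TLS itself):

      apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: passthrough-demo\n  annotations:\n    nginx.ingress.kubernetes.io/ssl-passthrough: \"true\"\nspec:\n  ingressClassName: nginx\n  rules:\n    - host: tls.example.com\n      http:\n        paths:\n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: tls-backend\n                port:\n                  number: 443\n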

      "},{"location":"user-guide/nginx-configuration/annotations/#service-upstream","title":"Service Upstream","text":"

      By default the Ingress-Nginx Controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration.

      The nginx.ingress.kubernetes.io/service-upstream annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port.

      This can be desirable for things like zero-downtime deployments. See issue #257.
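      For example:

      nginx.ingress.kubernetes.io/service-upstream: \"true\"\n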

      "},{"location":"user-guide/nginx-configuration/annotations/#known-issues","title":"Known Issues","text":"

      If the service-upstream annotation is specified the following things should be taken into consideration:

      • Sticky Sessions will not work as only round-robin load balancing is supported.
      • The proxy_next_upstream directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.
      "},{"location":"user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect","title":"Server-side HTTPS enforcement through redirect","text":"

      By default the controller redirects (308) to HTTPS if TLS is enabled for that ingress. If you want to disable this behavior globally, you can use ssl-redirect: \"false\" in the NGINX ConfigMap.

      To configure this feature for specific ingress resources, you can use the nginx.ingress.kubernetes.io/ssl-redirect: \"false\" annotation in the particular resource.

      When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: \"true\" annotation in the particular resource.

      To preserve the trailing slash in the URI with ssl-redirect, set nginx.ingress.kubernetes.io/preserve-trailing-slash: \"true\" annotation for that particular resource.

      "},{"location":"user-guide/nginx-configuration/annotations/#redirect-fromto-www","title":"Redirect from/to www","text":"

      In some scenarios, it is required to redirect from www.domain.com to domain.com, or vice versa; which way the redirect is performed depends on the configured host value in the Ingress object.

      For example, if .spec.rules.host is configured with a value like www.example.com, then this annotation will redirect from example.com to www.example.com. If .spec.rules.host is configured with a value like example.com, so without a www, then this annotation will redirect from www.example.com to example.com instead.

      To enable this feature use the annotation nginx.ingress.kubernetes.io/from-to-www-redirect: \"true\"

      Attention

      If at some point a new Ingress is created with a host equal to one of the options (like domain.com) the annotation will be omitted.

      Attention

      For HTTPS to HTTPS redirects it is mandatory that the SSL certificate defined in the Secret, located in the TLS section of the Ingress, contains both FQDNs in the common name of the certificate.

      "},{"location":"user-guide/nginx-configuration/annotations/#denylist-source-range","title":"Denylist source range","text":"

      You can specify blocked client IP source ranges through the nginx.ingress.kubernetes.io/denylist-source-range annotation. The value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.

      To configure this setting globally for all Ingress rules, the denylist-source-range value may be set in the NGINX ConfigMap.

      Note

      Adding an annotation to an Ingress rule overrides any global restriction.

      "},{"location":"user-guide/nginx-configuration/annotations/#whitelist-source-range","title":"Whitelist source range","text":"

      You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.

      To configure this setting globally for all Ingress rules, the whitelist-source-range value may be set in the NGINX ConfigMap.

      Note

      Adding an annotation to an Ingress rule overrides any global restriction.

      "},{"location":"user-guide/nginx-configuration/annotations/#custom-timeouts","title":"Custom timeouts","text":"

      Using the configuration ConfigMap it is possible to set the default global timeout for connections to the upstream servers. In some scenarios it is required to have different values. To allow this, we provide annotations that allow this customization:

      • nginx.ingress.kubernetes.io/proxy-connect-timeout
      • nginx.ingress.kubernetes.io/proxy-send-timeout
      • nginx.ingress.kubernetes.io/proxy-read-timeout
      • nginx.ingress.kubernetes.io/proxy-next-upstream
      • nginx.ingress.kubernetes.io/proxy-next-upstream-timeout
      • nginx.ingress.kubernetes.io/proxy-next-upstream-tries
      • nginx.ingress.kubernetes.io/proxy-request-buffering

      If you indicate Backend Protocol as GRPC or GRPCS, the following grpc values will be set and inherited from proxy timeouts:

      • grpc_connect_timeout=5s, from nginx.ingress.kubernetes.io/proxy-connect-timeout
      • grpc_send_timeout=60s, from nginx.ingress.kubernetes.io/proxy-send-timeout
      • grpc_read_timeout=60s, from nginx.ingress.kubernetes.io/proxy-read-timeout

      Note: All timeout values are unitless and in seconds e.g. nginx.ingress.kubernetes.io/proxy-read-timeout: \"120\" sets a valid 120 seconds proxy read timeout.
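      For instance, a sketch raising the timeouts for a slow placeholder backend:

      nginx.ingress.kubernetes.io/proxy-connect-timeout: \"10\"\nnginx.ingress.kubernetes.io/proxy-send-timeout: \"120\"\nnginx.ingress.kubernetes.io/proxy-read-timeout: \"120\"\n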

      "},{"location":"user-guide/nginx-configuration/annotations/#proxy-redirect","title":"Proxy redirect","text":"

      The annotations nginx.ingress.kubernetes.io/proxy-redirect-from and nginx.ingress.kubernetes.io/proxy-redirect-to will set the first and second parameters of NGINX's proxy_redirect directive respectively. It is possible to set the text that should be changed in the Location and Refresh header fields of a proxied server response.

      Setting \"off\" or \"default\" in the annotation nginx.ingress.kubernetes.io/proxy-redirect-from disables nginx.ingress.kubernetes.io/proxy-redirect-to, otherwise, both annotations must be used in unison. Note that each annotation must be a string without spaces.

      By default the value of each annotation is \"off\".

      "},{"location":"user-guide/nginx-configuration/annotations/#custom-max-body-size","title":"Custom max body size","text":"

For NGINX, a 413 error will be returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter client_max_body_size.

To configure this setting globally for all Ingress rules, the proxy-body-size value may be set in the NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:

      nginx.ingress.kubernetes.io/proxy-body-size: 8m\n
      "},{"location":"user-guide/nginx-configuration/annotations/#proxy-cookie-domain","title":"Proxy cookie domain","text":"

      Sets a text that should be changed in the domain attribute of the \"Set-Cookie\" header fields of a proxied server response.

      To configure this setting globally for all Ingress rules, the proxy-cookie-domain value may be set in the NGINX ConfigMap.
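
For example, a sketch that maps an internal cookie domain to the public one, following the proxy_cookie_domain \"domain replacement\" syntax (both hostnames are illustrative):

nginx.ingress.kubernetes.io/proxy-cookie-domain: \"backend.internal myapp.example.com\"\n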

      "},{"location":"user-guide/nginx-configuration/annotations/#proxy-cookie-path","title":"Proxy cookie path","text":"

      Sets a text that should be changed in the path attribute of the \"Set-Cookie\" header fields of a proxied server response.

      To configure this setting globally for all Ingress rules, the proxy-cookie-path value may be set in the NGINX ConfigMap.
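
For example, a sketch that strips an internal prefix from cookie paths, following the proxy_cookie_path \"path replacement\" syntax (the paths are illustrative):

nginx.ingress.kubernetes.io/proxy-cookie-path: \"/internal/ /\"\n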

      "},{"location":"user-guide/nginx-configuration/annotations/#proxy-buffering","title":"Proxy buffering","text":"

Enable or disable proxy buffering (proxy_buffering). By default proxy buffering is disabled in the NGINX config.

To configure this setting globally for all Ingress rules, the proxy-buffering value may be set in the NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:

      nginx.ingress.kubernetes.io/proxy-buffering: \"on\"\n
      "},{"location":"user-guide/nginx-configuration/annotations/#proxy-buffers-number","title":"Proxy buffers Number","text":"

Sets the number of the buffers in proxy_buffers used for reading the first part of the response received from the proxied server. By default the number of proxy buffers is set to 4.

      To configure this setting globally, set proxy-buffers-number in NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:

      nginx.ingress.kubernetes.io/proxy-buffers-number: \"4\"\n

      "},{"location":"user-guide/nginx-configuration/annotations/#proxy-buffer-size","title":"Proxy buffer size","text":"

Sets the size of the buffer proxy_buffer_size used for reading the first part of the response received from the proxied server. By default the proxy buffer size is set to \"4k\".

      To configure this setting globally, set proxy-buffer-size in NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:

      nginx.ingress.kubernetes.io/proxy-buffer-size: \"8k\"\n

      "},{"location":"user-guide/nginx-configuration/annotations/#proxy-max-temp-file-size","title":"Proxy max temp file size","text":"

When buffering of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the proxy_buffer_size and proxy_buffers directives, a part of the response can be saved to a temporary file. This annotation sets the maximum size of the temporary file via the proxy_max_temp_file_size directive. The size of data written to the temporary file at a time is set by the proxy_temp_file_write_size directive.

      The zero value disables buffering of responses to temporary files.

      To use custom values in an Ingress rule, define this annotation:

      nginx.ingress.kubernetes.io/proxy-max-temp-file-size: \"1024m\"\n

      "},{"location":"user-guide/nginx-configuration/annotations/#proxy-http-version","title":"Proxy HTTP version","text":"

Using this annotation sets the proxy_http_version that the NGINX reverse proxy will use to communicate with the backend. By default this is set to \"1.1\".

      nginx.ingress.kubernetes.io/proxy-http-version: \"1.0\"\n
      "},{"location":"user-guide/nginx-configuration/annotations/#ssl-ciphers","title":"SSL ciphers","text":"

      Specifies the enabled ciphers.

      Using this annotation will set the ssl_ciphers directive at the server level. This configuration is active for all the paths in the host.

      nginx.ingress.kubernetes.io/ssl-ciphers: \"ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP\"\n

      The following annotation will set the ssl_prefer_server_ciphers directive at the server level. This configuration specifies that server ciphers should be preferred over client ciphers when using the SSLv3 and TLS protocols.

      nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers: \"true\"\n
      "},{"location":"user-guide/nginx-configuration/annotations/#connection-proxy-header","title":"Connection proxy header","text":"

      Using this annotation will override the default connection header set by NGINX. To use custom values in an Ingress rule, define the annotation:

      nginx.ingress.kubernetes.io/connection-proxy-header: \"keep-alive\"\n
      "},{"location":"user-guide/nginx-configuration/annotations/#enable-access-log","title":"Enable Access Log","text":"

Access logs are enabled by default, but in some scenarios they may need to be disabled for a given ingress. To do this, use the annotation:

      nginx.ingress.kubernetes.io/enable-access-log: \"false\"\n
      "},{"location":"user-guide/nginx-configuration/annotations/#enable-rewrite-log","title":"Enable Rewrite Log","text":"

      Rewrite logs are not enabled by default. In some scenarios it could be required to enable NGINX rewrite logs. Note that rewrite logs are sent to the error_log file at the notice level. To enable this feature use the annotation:

      nginx.ingress.kubernetes.io/enable-rewrite-log: \"true\"\n
      "},{"location":"user-guide/nginx-configuration/annotations/#enable-opentelemetry","title":"Enable Opentelemetry","text":"

Opentelemetry can be enabled or disabled globally through the ConfigMap, but this will sometimes need to be overridden to enable or disable it for a specific ingress (e.g. to turn off telemetry for external health check endpoints):

      nginx.ingress.kubernetes.io/enable-opentelemetry: \"true\"\n
      "},{"location":"user-guide/nginx-configuration/annotations/#opentelemetry-trust-incoming-span","title":"Opentelemetry Trust Incoming Span","text":"

The option to trust incoming trace spans can be enabled or disabled globally through the ConfigMap, but this will sometimes need to be overridden to enable or disable it for a specific ingress (e.g. only enable on a private endpoint):

      nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-spans: \"true\"\n
      "},{"location":"user-guide/nginx-configuration/annotations/#x-forwarded-prefix-header","title":"X-Forwarded-Prefix Header","text":"

      To add the non-standard X-Forwarded-Prefix header to the upstream request with a string value, the following annotation can be used:

      nginx.ingress.kubernetes.io/x-forwarded-prefix: \"/path\"\n
      "},{"location":"user-guide/nginx-configuration/annotations/#modsecurity","title":"ModSecurity","text":"

ModSecurity is an open-source web application firewall. It can be enabled for a particular set of ingress locations. The ModSecurity module must first be enabled by enabling ModSecurity in the ConfigMap. Note that this enables ModSecurity for all paths, and each path must be disabled manually if needed.

      It can be enabled using the following annotation:

      nginx.ingress.kubernetes.io/enable-modsecurity: \"true\"\n
      ModSecurity will run in \"Detection-Only\" mode using the recommended configuration.

      You can enable the OWASP Core Rule Set by setting the following annotation:

      nginx.ingress.kubernetes.io/enable-owasp-core-rules: \"true\"\n

      You can pass transactionIDs from nginx by setting up the following:

      nginx.ingress.kubernetes.io/modsecurity-transaction-id: \"$request_id\"\n

      You can also add your own set of modsecurity rules via a snippet:

      nginx.ingress.kubernetes.io/modsecurity-snippet: |\nSecRuleEngine On\nSecDebugLog /tmp/modsec_debug.log\n

Note: If you use both enable-owasp-core-rules and modsecurity-snippet annotations together, only the modsecurity-snippet will take effect. If you wish to include the OWASP Core Rule Set or the recommended configuration, simply use the include statement:

      nginx 0.24.1 and below

      nginx.ingress.kubernetes.io/modsecurity-snippet: |\nInclude /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf\nInclude /etc/nginx/modsecurity/modsecurity.conf\n
      nginx 0.25.0 and above
      nginx.ingress.kubernetes.io/modsecurity-snippet: |\nInclude /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf\n

      "},{"location":"user-guide/nginx-configuration/annotations/#backend-protocol","title":"Backend Protocol","text":"

Using the backend-protocol annotation it is possible to indicate how NGINX should communicate with the backend service. (This replaces secure-backends in older versions.) Valid values: HTTP, HTTPS, AUTO_HTTP, GRPC, GRPCS and FCGI.

      By default NGINX uses HTTP.

      Example:

      nginx.ingress.kubernetes.io/backend-protocol: \"HTTPS\"\n
      "},{"location":"user-guide/nginx-configuration/annotations/#use-regex","title":"Use Regex","text":"

      Attention

When using this annotation with the NGINX annotation nginx.ingress.kubernetes.io/affinity of type cookie, nginx.ingress.kubernetes.io/session-cookie-path must also be set; session cookie paths do not support regex.

      Using the nginx.ingress.kubernetes.io/use-regex annotation will indicate whether or not the paths defined on an Ingress use regular expressions. The default value is false.

      The following will indicate that regular expression paths are being used:

      nginx.ingress.kubernetes.io/use-regex: \"true\"\n

      The following will indicate that regular expression paths are not being used:

      nginx.ingress.kubernetes.io/use-regex: \"false\"\n

      When this annotation is set to true, the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.

      Additionally, if the rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.

      Please read about ingress path matching before using this modifier.

      "},{"location":"user-guide/nginx-configuration/annotations/#satisfy","title":"Satisfy","text":"

      By default, a request would need to satisfy all authentication requirements in order to be allowed. By using this annotation, requests that satisfy either any or all authentication requirements are allowed, based on the configuration value.

      nginx.ingress.kubernetes.io/satisfy: \"any\"\n
      "},{"location":"user-guide/nginx-configuration/annotations/#mirror","title":"Mirror","text":"

Enables a request to be mirrored to a mirror backend. Responses from mirror backends are ignored. This feature is useful for seeing how requests behave on \"test\" backends.

      The mirror backend can be set by applying:

      nginx.ingress.kubernetes.io/mirror-target: https://test.env.com$request_uri\n

      By default the request-body is sent to the mirror backend, but can be turned off by applying:

      nginx.ingress.kubernetes.io/mirror-request-body: \"off\"\n

Also, by default the Host header for mirrored requests is set to the same value as the host part of the URI in the \"mirror-target\" annotation. You can override it with the \"mirror-host\" annotation:

      nginx.ingress.kubernetes.io/mirror-target: https://1.2.3.4$request_uri\nnginx.ingress.kubernetes.io/mirror-host: \"test.env.com\"\n

      Note: The mirror directive will be applied to all paths within the ingress resource.

The request sent to the mirror is linked to the original request. If you have a slow mirror backend, the original request will be throttled.

For more information on the mirror module see ngx_http_mirror_module.

      "},{"location":"user-guide/nginx-configuration/annotations/#stream-snippet","title":"Stream snippet","text":"

      Using the annotation nginx.ingress.kubernetes.io/stream-snippet it is possible to add custom stream configuration.

      apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/stream-snippet: |\n      server {\n        listen 8000;\n        proxy_pass 127.0.0.1:80;\n      }\n
      "},{"location":"user-guide/nginx-configuration/configmap/","title":"ConfigMaps","text":"

      ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.

The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configuration for the nginx-controller's system components.

In order to overwrite nginx-controller configuration values as seen in config.go, you can add key-value pairs to the data section of the config-map. For example:

      data:\n  map-hash-bucket-size: \"128\"\n  ssl-protocols: SSLv2\n

      Important

The keys and values in a ConfigMap can only be strings. This means that if we want a value with boolean or numeric semantics, we need to quote it, like \"true\", \"false\" or \"100\".

      \"Slice\" types (defined below as []string or []int) can be provided as a comma-delimited string.

      "},{"location":"user-guide/nginx-configuration/configmap/#configuration-options","title":"Configuration options","text":"

      The following table shows a configuration option's name, type, and the default value:

name | type | default | notes
---- | ---- | ------- | -----
add-headers | string | \"\"
allow-backend-server-header | bool | \"false\"
allow-cross-namespace-resources | bool | \"false\"
allow-snippet-annotations | bool | \"false\"
annotations-risk-level | string | High
annotation-value-word-blocklist | string array | \"\"
hide-headers | string array | empty
access-log-params | string | \"\"
access-log-path | string | \"/var/log/nginx/access.log\"
http-access-log-path | string | \"\"
stream-access-log-path | string | \"\"
enable-access-log-for-default-backend | bool | \"false\"
error-log-path | string | \"/var/log/nginx/error.log\"
enable-modsecurity | bool | \"false\"
modsecurity-snippet | string | \"\"
enable-owasp-modsecurity-crs | bool | \"false\"
client-header-buffer-size | string | \"1k\"
client-header-timeout | int | 60
client-body-buffer-size | string | \"8k\"
client-body-timeout | int | 60
disable-access-log | bool | \"false\"
disable-ipv6 | bool | \"false\"
disable-ipv6-dns | bool | \"false\"
enable-underscores-in-headers | bool | \"false\"
enable-ocsp | bool | \"false\"
ignore-invalid-headers | bool | \"true\"
retry-non-idempotent | bool | \"false\"
error-log-level | string | \"notice\"
http2-max-field-size | string | \"\" | DEPRECATED in favour of large_client_header_buffers
http2-max-header-size | string | \"\" | DEPRECATED in favour of large_client_header_buffers
http2-max-requests | int | 0 | DEPRECATED in favour of keepalive_requests
http2-max-concurrent-streams | int | 128
hsts | bool | \"true\"
hsts-include-subdomains | bool | \"true\"
hsts-max-age | string | \"31536000\"
hsts-preload | bool | \"false\"
keep-alive | int | 75
keep-alive-requests | int | 1000
large-client-header-buffers | string | \"4 8k\"
log-format-escape-none | bool | \"false\"
log-format-escape-json | bool | \"false\"
log-format-upstream | string | $remote_addr - $remote_user [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id
log-format-stream | string | [$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time
enable-multi-accept | bool | \"true\"
max-worker-connections | int | 16384
max-worker-open-files | int | 0
map-hash-bucket-size | int | 64
nginx-status-ipv4-whitelist | []string | \"127.0.0.1\"
nginx-status-ipv6-whitelist | []string | \"::1\"
proxy-real-ip-cidr | []string | \"0.0.0.0/0\"
proxy-set-headers | string | \"\"
server-name-hash-max-size | int | 1024
server-name-hash-bucket-size | int | <size of the processor's cache line>
proxy-headers-hash-max-size | int | 512
proxy-headers-hash-bucket-size | int | 64
reuse-port | bool | \"true\"
server-tokens | bool | \"false\"
ssl-ciphers | string | \"ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384\"
ssl-ecdh-curve | string | \"auto\"
ssl-dh-param | string | \"\"
ssl-protocols | string | \"TLSv1.2 TLSv1.3\"
ssl-session-cache | bool | \"true\"
ssl-session-cache-size | string | \"10m\"
ssl-session-tickets | bool | \"false\"
ssl-session-ticket-key | string | <Randomly Generated>
ssl-session-timeout | string | \"10m\"
ssl-buffer-size | string | \"4k\"
use-proxy-protocol | bool | \"false\"
proxy-protocol-header-timeout | string | \"5s\"
enable-aio-write | bool | \"true\"
use-gzip | bool | \"false\"
use-geoip | bool | \"true\"
use-geoip2 | bool | \"false\"
geoip2-autoreload-in-minutes | int | \"0\"
enable-brotli | bool | \"false\"
brotli-level | int | 4
brotli-min-length | int | 20
brotli-types | string | \"application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component\"
use-http2 | bool | \"true\"
gzip-disable | string | \"\"
gzip-level | int | 1
gzip-min-length | int | 256
gzip-types | string | \"application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component\"
worker-processes | string | <Number of CPUs>
worker-cpu-affinity | string | \"\"
worker-shutdown-timeout | string | \"240s\"
enable-serial-reloads | bool | \"false\"
load-balance | string | \"round_robin\"
variables-hash-bucket-size | int | 128
variables-hash-max-size | int | 2048
upstream-keepalive-connections | int | 320
upstream-keepalive-time | string | \"1h\"
upstream-keepalive-timeout | int | 60
upstream-keepalive-requests | int | 10000
limit-conn-zone-variable | string | \"$binary_remote_addr\"
proxy-stream-timeout | string | \"600s\"
proxy-stream-next-upstream | bool | \"true\"
proxy-stream-next-upstream-timeout | string | \"600s\"
proxy-stream-next-upstream-tries | int | 3
proxy-stream-responses | int | 1
bind-address | []string | \"\"
use-forwarded-headers | bool | \"false\"
enable-real-ip | bool | \"false\"
forwarded-for-header | string | \"X-Forwarded-For\"
compute-full-forwarded-for | bool | \"false\"
proxy-add-original-uri-header | bool | \"false\"
generate-request-id | bool | \"true\"
jaeger-collector-host | string | \"\"
jaeger-collector-port | int | 6831
jaeger-endpoint | string | \"\"
jaeger-service-name | string | \"nginx\"
jaeger-propagation-format | string | \"jaeger\"
jaeger-sampler-type | string | \"const\"
jaeger-sampler-param | string | \"1\"
jaeger-sampler-host | string | \"http://127.0.0.1\"
jaeger-sampler-port | int | 5778
jaeger-trace-context-header-name | string | uber-trace-id
jaeger-debug-header | string | uber-debug-id
jaeger-baggage-header | string | jaeger-baggage
jaeger-trace-baggage-header-prefix | string | uberctx-
datadog-collector-host | string | \"\"
datadog-collector-port | int | 8126
datadog-service-name | string | \"nginx\"
datadog-environment | string | \"prod\"
datadog-operation-name-override | string | \"nginx.handle\"
datadog-priority-sampling | bool | \"true\"
datadog-sample-rate | float | 1.0
enable-opentelemetry | bool | \"false\"
opentelemetry-trust-incoming-span | bool | \"true\"
opentelemetry-operation-name | string | \"\"
opentelemetry-config | string | \"/etc/nginx/opentelemetry.toml\"
otlp-collector-host | string | \"\"
otlp-collector-port | int | 4317
otel-max-queuesize | int |
otel-schedule-delay-millis | int |
otel-max-export-batch-size | int |
otel-service-name | string | \"nginx\"
otel-sampler | string | \"AlwaysOff\"
otel-sampler-parent-based | bool | \"false\"
otel-sampler-ratio | float | 0.01
main-snippet | string | \"\"
http-snippet | string | \"\"
server-snippet | string | \"\"
stream-snippet | string | \"\"
location-snippet | string | \"\"
custom-http-errors | []int | []int{}
proxy-body-size | string | \"1m\"
proxy-connect-timeout | int | 5
proxy-read-timeout | int | 60
proxy-send-timeout | int | 60
proxy-buffers-number | int | 4
proxy-buffer-size | string | \"4k\"
proxy-cookie-path | string | \"off\"
proxy-cookie-domain | string | \"off\"
proxy-next-upstream | string | \"error timeout\"
proxy-next-upstream-timeout | int | 0
proxy-next-upstream-tries | int | 3
proxy-redirect-from | string | \"off\"
proxy-request-buffering | string | \"on\"
ssl-redirect | bool | \"true\"
force-ssl-redirect | bool | \"false\"
denylist-source-range | []string | []string{}
whitelist-source-range | []string | []string{}
skip-access-log-urls | []string | []string{}
limit-rate | int | 0
limit-rate-after | int | 0
lua-shared-dicts | string | \"\"
http-redirect-code | int | 308
proxy-buffering | string | \"off\"
limit-req-status-code | int | 503
limit-conn-status-code | int | 503
enable-syslog | bool | \"false\"
syslog-host | string | \"\"
syslog-port | int | 514
no-tls-redirect-locations | string | \"/.well-known/acme-challenge\"
global-allowed-response-headers | string | \"\"
global-auth-url | string | \"\"
global-auth-method | string | \"\"
global-auth-signin | string | \"\"
global-auth-signin-redirect-param | string | \"rd\"
global-auth-response-headers | string | \"\"
global-auth-request-redirect | string | \"\"
global-auth-snippet | string | \"\"
global-auth-cache-key | string | \"\"
global-auth-cache-duration | string | \"200 202 401 5m\"
no-auth-locations | string | \"/.well-known/acme-challenge\"
block-cidrs | []string | \"\"
block-user-agents | []string | \"\"
block-referers | []string | \"\"
proxy-ssl-location-only | bool | \"false\"
default-type | string | \"text/html\"
service-upstream | bool | \"false\"
ssl-reject-handshake | bool | \"false\"
debug-connections | []string | \"127.0.0.1,1.1.1.1/24\"
strict-validate-path-type | bool | \"true\"
grpc-buffer-size-kb | int | 0

"},{"location":"user-guide/nginx-configuration/configmap/#add-headers","title":"add-headers","text":"

Sets custom headers from a named ConfigMap before sending traffic to the client. See proxy-set-headers for the value format and an example.

      "},{"location":"user-guide/nginx-configuration/configmap/#allow-backend-server-header","title":"allow-backend-server-header","text":"

      Enables the return of the header Server from the backend instead of the generic nginx string. default: is disabled

      "},{"location":"user-guide/nginx-configuration/configmap/#allow-cross-namespace-resources","title":"allow-cross-namespace-resources","text":"

Enables users to consume cross-namespace resources in annotations. default: false

      Annotations that may be impacted with this change:

      • auth-secret
      • auth-proxy-set-header
      • auth-tls-secret
      • fastcgi-params-configmap
      • proxy-ssl-secret
      "},{"location":"user-guide/nginx-configuration/configmap/#allow-snippet-annotations","title":"allow-snippet-annotations","text":"

Enables Ingress to parse and add -snippet annotations/directives created by the user. default: false

Warning: We recommend enabling this option only if you TRUST users with permission to create Ingress objects, as this may allow a user to add restricted configurations to the final nginx.conf file.

      "},{"location":"user-guide/nginx-configuration/configmap/#annotations-risk-level","title":"annotations-risk-level","text":"

Represents the risk accepted on an annotation. If the risk is, for instance, Medium, annotations with risk High and Critical will not be accepted.

      Accepted values are Critical, High, Medium and Low.

      default: High

      "},{"location":"user-guide/nginx-configuration/configmap/#annotation-value-word-blocklist","title":"annotation-value-word-blocklist","text":"

Contains a comma-separated list of chars/words that are well known for being used to abuse Ingress configuration and must be blocked. Related to CVE-2021-25742.

      When an annotation is detected with a value that matches one of the blocked bad words, the whole Ingress won't be configured.

      default: \"\"

When setting this, the default blocklist is overridden, which means that the Ingress admin should add all the words that should be blocked; here is a suggested blocklist.

      suggested: \"load_module,lua_package,_by_lua,location,root,proxy_pass,serviceaccount,{,},',\\\"\"

      "},{"location":"user-guide/nginx-configuration/configmap/#hide-headers","title":"hide-headers","text":"

Sets additional headers that will not be passed from the upstream server to the client response. default: empty

      References: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header

      "},{"location":"user-guide/nginx-configuration/configmap/#access-log-params","title":"access-log-params","text":"

      Additional params for access_log. For example, buffer=16k, gzip, flush=1m

      References: https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log

      "},{"location":"user-guide/nginx-configuration/configmap/#access-log-path","title":"access-log-path","text":"

      Access log path for both http and stream context. Goes to /var/log/nginx/access.log by default.

      Note: the file /var/log/nginx/access.log is a symlink to /dev/stdout

      "},{"location":"user-guide/nginx-configuration/configmap/#http-access-log-path","title":"http-access-log-path","text":"

      Access log path for http context globally. default: \"\"

      Note: If not specified, the access-log-path will be used.

      "},{"location":"user-guide/nginx-configuration/configmap/#stream-access-log-path","title":"stream-access-log-path","text":"

      Access log path for stream context globally. default: \"\"

      Note: If not specified, the access-log-path will be used.

      "},{"location":"user-guide/nginx-configuration/configmap/#enable-access-log-for-default-backend","title":"enable-access-log-for-default-backend","text":"

Enables access logging for the default backend. default: is disabled.

      "},{"location":"user-guide/nginx-configuration/configmap/#error-log-path","title":"error-log-path","text":"

      Error log path. Goes to /var/log/nginx/error.log by default.

      Note: the file /var/log/nginx/error.log is a symlink to /dev/stderr

      References: https://nginx.org/en/docs/ngx_core_module.html#error_log

      "},{"location":"user-guide/nginx-configuration/configmap/#enable-modsecurity","title":"enable-modsecurity","text":"

      Enables the modsecurity module for NGINX. default: is disabled

      "},{"location":"user-guide/nginx-configuration/configmap/#enable-owasp-modsecurity-crs","title":"enable-owasp-modsecurity-crs","text":"

      Enables the OWASP ModSecurity Core Rule Set (CRS). default: is disabled

      "},{"location":"user-guide/nginx-configuration/configmap/#modsecurity-snippet","title":"modsecurity-snippet","text":"

Adds custom rules to the modsecurity section of the nginx configuration.

      "},{"location":"user-guide/nginx-configuration/configmap/#client-header-buffer-size","title":"client-header-buffer-size","text":"

Allows configuring a custom buffer size for reading the client request header.

      References: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size

      "},{"location":"user-guide/nginx-configuration/configmap/#client-header-timeout","title":"client-header-timeout","text":"

      Defines a timeout for reading client request header, in seconds.

      References: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout

      "},{"location":"user-guide/nginx-configuration/configmap/#client-body-buffer-size","title":"client-body-buffer-size","text":"

      Sets buffer size for reading client request body.

      References: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size

      "},{"location":"user-guide/nginx-configuration/configmap/#client-body-timeout","title":"client-body-timeout","text":"

      Defines a timeout for reading client request body, in seconds.

      References: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout

      "},{"location":"user-guide/nginx-configuration/configmap/#disable-access-log","title":"disable-access-log","text":"

      Disables the Access Log from the entire Ingress Controller. default: false

      References: https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log

      "},{"location":"user-guide/nginx-configuration/configmap/#disable-ipv6","title":"disable-ipv6","text":"

Disable listening on IPv6. default: false; IPv6 listening is enabled

      "},{"location":"user-guide/nginx-configuration/configmap/#disable-ipv6-dns","title":"disable-ipv6-dns","text":"

Disable IPv6 for the nginx DNS resolver. default: false; IPv6 resolving enabled.

      "},{"location":"user-guide/nginx-configuration/configmap/#enable-underscores-in-headers","title":"enable-underscores-in-headers","text":"

      Enables underscores in header names. default: is disabled

      "},{"location":"user-guide/nginx-configuration/configmap/#enable-ocsp","title":"enable-ocsp","text":"

      Enables Online Certificate Status Protocol stapling (OCSP) support. default: is disabled

      "},{"location":"user-guide/nginx-configuration/configmap/#ignore-invalid-headers","title":"ignore-invalid-headers","text":"

      Set if header fields with invalid names should be ignored. default: is enabled

      "},{"location":"user-guide/nginx-configuration/configmap/#retry-non-idempotent","title":"retry-non-idempotent","text":"

      Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value \"true\".

      "},{"location":"user-guide/nginx-configuration/configmap/#error-log-level","title":"error-log-level","text":"

Configures the logging level of errors. The possible levels are debug, info, notice, warn, error, crit, alert, and emerg, listed in order of increasing severity.

      References: https://nginx.org/en/docs/ngx_core_module.html#error_log

      "},{"location":"user-guide/nginx-configuration/configmap/#http2-max-field-size","title":"http2-max-field-size","text":"

      Warning

      This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use large-client-header-buffers instead.

      Limits the maximum size of an HPACK-compressed request header field.

      References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size

      "},{"location":"user-guide/nginx-configuration/configmap/#http2-max-header-size","title":"http2-max-header-size","text":"

      Warning

      This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use large-client-header-buffers instead.

      Limits the maximum size of the entire request header list after HPACK decompression.

      References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size

      "},{"location":"user-guide/nginx-configuration/configmap/#http2-max-requests","title":"http2-max-requests","text":"

      Warning

      This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use upstream-keepalive-requests instead.

      Sets the maximum number of requests (including push requests) that can be served through one HTTP/2 connection, after which the next client request will lead to connection closing and the need of establishing a new connection.

      References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests

      "},{"location":"user-guide/nginx-configuration/configmap/#http2-max-concurrent-streams","title":"http2-max-concurrent-streams","text":"

      Sets the maximum number of concurrent HTTP/2 streams in a connection.

      References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_concurrent_streams

      "},{"location":"user-guide/nginx-configuration/configmap/#hsts","title":"hsts","text":"

Enables or disables the header HSTS in servers running SSL. HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be accessed using HTTPS, instead of HTTP. It provides protection against protocol downgrade attacks and cookie theft.

      References:

      • https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
      • https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server
      "},{"location":"user-guide/nginx-configuration/configmap/#hsts-include-subdomains","title":"hsts-include-subdomains","text":"

      Enables or disables the use of HSTS in all the subdomains of the server-name.

      "},{"location":"user-guide/nginx-configuration/configmap/#hsts-max-age","title":"hsts-max-age","text":"

      Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.

      "},{"location":"user-guide/nginx-configuration/configmap/#hsts-preload","title":"hsts-preload","text":"

      Enables or disables the preload attribute in the HSTS feature (when it is enabled).

      "},{"location":"user-guide/nginx-configuration/configmap/#keep-alive","title":"keep-alive","text":"

      Sets the time, in seconds, during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.

      References: https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout

      Important

      Setting keep-alive: '0' will most likely break concurrent http/2 requests due to changes introduced with nginx 1.19.7

      Changes with nginx 1.19.7                                        16 Feb 2021\n\n    *) Change: connections handling in HTTP/2 has been changed to better\n       match HTTP/1.x; the \"http2_recv_timeout\", \"http2_idle_timeout\", and\n       \"http2_max_requests\" directives have been removed, the\n       \"keepalive_timeout\" and \"keepalive_requests\" directives should be\n       used instead.\n

      References: nginx change log nginx issue tracker nginx mailing list

      "},{"location":"user-guide/nginx-configuration/configmap/#keep-alive-requests","title":"keep-alive-requests","text":"

      Sets the maximum number of requests that can be served through one keep-alive connection.

      References: https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests

      "},{"location":"user-guide/nginx-configuration/configmap/#large-client-header-buffers","title":"large-client-header-buffers","text":"

      Sets the maximum number and size of buffers used for reading large client request header. default: 4 8k

      References: https://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers

      "},{"location":"user-guide/nginx-configuration/configmap/#log-format-escape-none","title":"log-format-escape-none","text":"

Sets if the escape parameter is disabled entirely for character escaping in variables (\"true\") or controlled by log-format-escape-json (\"false\").

      "},{"location":"user-guide/nginx-configuration/configmap/#log-format-escape-json","title":"log-format-escape-json","text":"

Sets if the escape parameter allows JSON (\"true\") or default character escaping in variables (\"false\").

      "},{"location":"user-guide/nginx-configuration/configmap/#log-format-upstream","title":"log-format-upstream","text":"

      Sets the nginx log format. Example for json output:

      log-format-upstream: '{\"time\": \"$time_iso8601\", \"remote_addr\": \"$proxy_protocol_addr\", \"x_forwarded_for\": \"$proxy_add_x_forwarded_for\", \"request_id\": \"$req_id\",\n  \"remote_user\": \"$remote_user\", \"bytes_sent\": $bytes_sent, \"request_time\": $request_time, \"status\": $status, \"vhost\": \"$host\", \"request_proto\": \"$server_protocol\",\n  \"path\": \"$uri\", \"request_query\": \"$args\", \"request_length\": $request_length, \"duration\": $request_time,\"method\": \"$request_method\", \"http_referrer\": \"$http_referer\",\n  \"http_user_agent\": \"$http_user_agent\" }'\n

      Please check the log-format for definition of each field.

      "},{"location":"user-guide/nginx-configuration/configmap/#log-format-stream","title":"log-format-stream","text":"

      Sets the nginx stream format.
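
For example, to set it explicitly to the default shown in the table above:

log-format-stream: '[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time'\n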

      "},{"location":"user-guide/nginx-configuration/configmap/#enable-multi-accept","title":"enable-multi-accept","text":"

      If disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections at a time. default: true

      References: https://nginx.org/en/docs/ngx_core_module.html#multi_accept

      "},{"location":"user-guide/nginx-configuration/configmap/#max-worker-connections","title":"max-worker-connections","text":"

      Sets the maximum number of simultaneous connections that can be opened by each worker process. 0 will use the value of max-worker-open-files. default: 16384

      Tip

Using 0 in scenarios of high load improves performance at the cost of increased RAM utilization (even when idle).

      "},{"location":"user-guide/nginx-configuration/configmap/#max-worker-open-files","title":"max-worker-open-files","text":"

      Sets the maximum number of files that can be opened by each worker process. The default of 0 means \"max open files (system's limit) - 1024\". default: 0

      "},{"location":"user-guide/nginx-configuration/configmap/#map-hash-bucket-size","title":"map-hash-bucket-size","text":"

      Sets the bucket size for the map variables hash tables. The details of setting up hash tables are provided in a separate document.

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-real-ip-cidr","title":"proxy-real-ip-cidr","text":"

      If use-forwarded-headers or use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default IP/network address of your external load balancer. Can be a comma-separated list of CIDR blocks. default: \"0.0.0.0/0\"
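
For example, a minimal ConfigMap sketch for a controller running behind an L7 proxy on an internal network (the CIDR is illustrative):

data:\n  use-forwarded-headers: \"true\"\n  proxy-real-ip-cidr: \"10.0.0.0/8\"\n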

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-set-headers","title":"proxy-set-headers","text":"

Sets custom headers from a named ConfigMap before sending traffic to backends. The value format is namespace/name.
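
For example, assuming a ConfigMap named custom-headers in the ingress-nginx namespace (both names are illustrative):

proxy-set-headers: \"ingress-nginx/custom-headers\"\n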

      "},{"location":"user-guide/nginx-configuration/configmap/#server-name-hash-max-size","title":"server-name-hash-max-size","text":"

Sets the maximum size of the server names hash tables used in server names, map directive's values, MIME types, names of request header strings, etc.

      References: https://nginx.org/en/docs/hash.html

      "},{"location":"user-guide/nginx-configuration/configmap/#server-name-hash-bucket-size","title":"server-name-hash-bucket-size","text":"

      Sets the size of the bucket for the server names hash tables.

      References:

      • https://nginx.org/en/docs/hash.html
      • https://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size
      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-headers-hash-max-size","title":"proxy-headers-hash-max-size","text":"

      Sets the maximum size of the proxy headers hash tables.

      References:

      • https://nginx.org/en/docs/hash.html
      • https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size
      "},{"location":"user-guide/nginx-configuration/configmap/#reuse-port","title":"reuse-port","text":"

Instructs NGINX to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option), allowing a kernel to distribute incoming connections between worker processes. default: true

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-headers-hash-bucket-size","title":"proxy-headers-hash-bucket-size","text":"

      Sets the size of the bucket for the proxy headers hash tables.

      References:

      • https://nginx.org/en/docs/hash.html
      • https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size
      "},{"location":"user-guide/nginx-configuration/configmap/#server-tokens","title":"server-tokens","text":"

      Send NGINX Server header in responses and display NGINX version in error pages. default: is disabled

      "},{"location":"user-guide/nginx-configuration/configmap/#ssl-ciphers","title":"ssl-ciphers","text":"

      Sets the ciphers list to enable. The ciphers are specified in the format understood by the OpenSSL library.

      The default cipher list is: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384.

      The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.

DHE-based ciphers will not be available until a DH parameter is configured; see Custom DH parameters for perfect forward secrecy.

      Please check the Mozilla SSL Configuration Generator.

      Note: ssl_prefer_server_ciphers directive will be enabled by default for http context.

      "},{"location":"user-guide/nginx-configuration/configmap/#ssl-ecdh-curve","title":"ssl-ecdh-curve","text":"

      Specifies a curve for ECDHE ciphers.

      References: https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve

      "},{"location":"user-guide/nginx-configuration/configmap/#ssl-dh-param","title":"ssl-dh-param","text":"

Sets the name of the Secret that contains a Diffie-Hellman key to help with \"Perfect Forward Secrecy\".
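
A minimal sketch, assuming the parameters are generated locally and stored in a Secret named lb-dhparam in the ingress-nginx namespace (both names are illustrative):

openssl dhparam -out dhparam.pem 4096\nkubectl create secret generic lb-dhparam --from-file=dhparam.pem -n ingress-nginx\n

The Secret is then referenced in the ConfigMap as namespace/name:

ssl-dh-param: \"ingress-nginx/lb-dhparam\"\n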

      References:

      • https://wiki.openssl.org/index.php/Diffie-Hellman_parameters
      • https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam
      • https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam
      "},{"location":"user-guide/nginx-configuration/configmap/#ssl-protocols","title":"ssl-protocols","text":"

      Sets the SSL protocols to use. The default is: TLSv1.2 TLSv1.3.

      Please check the result of the configuration using https://ssllabs.com/ssltest/analyze.html or https://testssl.sh.

      "},{"location":"user-guide/nginx-configuration/configmap/#ssl-early-data","title":"ssl-early-data","text":"

      Enables or disables TLS 1.3 early data, also known as Zero Round Trip Time Resumption (0-RTT).

      This requires ssl-protocols to have TLSv1.3 enabled. Enable this with caution, because requests sent within early data are subject to replay attacks.

See ssl_early_data. The default is: false.

      "},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-cache","title":"ssl-session-cache","text":"

      Enables or disables the use of shared SSL cache among worker processes.

      "},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-cache-size","title":"ssl-session-cache-size","text":"

      Sets the size of the SSL shared session cache between all worker processes.

      "},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-tickets","title":"ssl-session-tickets","text":"

      Enables or disables session resumption through TLS session tickets.

      "},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-ticket-key","title":"ssl-session-ticket-key","text":"

      Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string. To create a ticket: openssl rand 80 | openssl enc -A -base64

By default, a randomly generated key is used as the TLS session ticket key.

      "},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-timeout","title":"ssl-session-timeout","text":"

      Sets the time during which a client may reuse the session parameters stored in a cache.

      "},{"location":"user-guide/nginx-configuration/configmap/#ssl-buffer-size","title":"ssl-buffer-size","text":"

      Sets the size of the SSL buffer used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).

      References: https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/

      "},{"location":"user-guide/nginx-configuration/configmap/#use-proxy-protocol","title":"use-proxy-protocol","text":"

      Enables or disables the PROXY protocol to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-protocol-header-timeout","title":"proxy-protocol-header-timeout","text":"

      Sets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection. default: 5s

      "},{"location":"user-guide/nginx-configuration/configmap/#enable-aio-write","title":"enable-aio-write","text":"

      Enables or disables the directive aio_write that writes files asynchronously. default: true

      "},{"location":"user-guide/nginx-configuration/configmap/#use-gzip","title":"use-gzip","text":"

      Enables or disables compression of HTTP responses using the \"gzip\" module. MIME types to compress are controlled by gzip-types. default: false

      "},{"location":"user-guide/nginx-configuration/configmap/#use-geoip","title":"use-geoip","text":"

      Enables or disables \"geoip\" module that creates variables with values depending on the client IP address, using the precompiled MaxMind databases. default: true

      Note: MaxMind legacy databases are discontinued and will not receive updates after 2019-01-02, cf. discontinuation notice. Consider use-geoip2 below.

      "},{"location":"user-guide/nginx-configuration/configmap/#use-geoip2","title":"use-geoip2","text":"

Enables the geoip2 module for NGINX. Since 0.27.0, and due to a change in the MaxMind databases, a license is required to have access to the databases. For this reason, it is required to define a new flag --maxmind-license-key in the ingress controller deployment to download the databases needed during the initialization of the ingress controller. Alternatively, it is possible to use a volume to mount the files /etc/ingress-controller/geoip/GeoLite2-City.mmdb and /etc/ingress-controller/geoip/GeoLite2-ASN.mmdb, avoiding the overhead of the download.

      Important

      If the feature is enabled but the files are missing, GeoIP2 will not be enabled.

      default: false

      "},{"location":"user-guide/nginx-configuration/configmap/#geoip2-autoreload-in-minutes","title":"geoip2-autoreload-in-minutes","text":"

Enables autoreload of the geoip2 module's MaxMind databases, setting the interval in minutes.

      default: 0

      "},{"location":"user-guide/nginx-configuration/configmap/#enable-brotli","title":"enable-brotli","text":"

      Enables or disables compression of HTTP responses using the \"brotli\" module. The default mime type list to compress is: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component. default: false

Note: Brotli does not work in Safari < 11. For more information see https://caniuse.com/#feat=brotli

      "},{"location":"user-guide/nginx-configuration/configmap/#brotli-level","title":"brotli-level","text":"

      Sets the Brotli Compression Level that will be used. default: 4

      "},{"location":"user-guide/nginx-configuration/configmap/#brotli-min-length","title":"brotli-min-length","text":"

      Minimum length of responses, in bytes, that will be eligible for brotli compression. default: 20

      "},{"location":"user-guide/nginx-configuration/configmap/#brotli-types","title":"brotli-types","text":"

      Sets the MIME Types that will be compressed on-the-fly by brotli. default: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component

      "},{"location":"user-guide/nginx-configuration/configmap/#use-http2","title":"use-http2","text":"

      Enables or disables HTTP/2 support in secure connections.

      "},{"location":"user-guide/nginx-configuration/configmap/#gzip-disable","title":"gzip-disable","text":"

      Disables gzipping of responses for requests with \"User-Agent\" header fields matching any of the specified regular expressions.

      "},{"location":"user-guide/nginx-configuration/configmap/#gzip-level","title":"gzip-level","text":"

      Sets the gzip Compression Level that will be used. default: 1

      "},{"location":"user-guide/nginx-configuration/configmap/#gzip-min-length","title":"gzip-min-length","text":"

Minimum length, in bytes, of responses that will be eligible for gzip compression. default: 256

      "},{"location":"user-guide/nginx-configuration/configmap/#gzip-types","title":"gzip-types","text":"

      Sets the MIME types in addition to \"text/html\" to compress. The special value \"*\" matches any MIME type. Responses with the \"text/html\" type are always compressed if use-gzip is enabled. default: application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component.

      "},{"location":"user-guide/nginx-configuration/configmap/#worker-processes","title":"worker-processes","text":"

      Sets the number of worker processes. The default of \"auto\" means number of available CPU cores.

      "},{"location":"user-guide/nginx-configuration/configmap/#worker-cpu-affinity","title":"worker-cpu-affinity","text":"

Binds worker processes to the sets of CPUs (worker_cpu_affinity). By default worker processes are not bound to any specific CPUs. The value can be:

      • \"\": empty string indicate no affinity is applied.
      • cpumask: e.g. 0001 0010 0100 1000 to bind processes to specific cpus.
      • auto: binding worker processes automatically to available CPUs.
      "},{"location":"user-guide/nginx-configuration/configmap/#worker-shutdown-timeout","title":"worker-shutdown-timeout","text":"

Sets a timeout for NGINX to wait for workers to gracefully shut down. default: \"240s\"

      "},{"location":"user-guide/nginx-configuration/configmap/#load-balance","title":"load-balance","text":"

      Sets the algorithm to use for load balancing. The value can either be:

      • round_robin: to use the default round robin loadbalancer
      • ewma: to use the Peak EWMA method for routing (implementation)

      The default is round_robin.

      • To load balance using consistent hashing of IP or other variables, consider the nginx.ingress.kubernetes.io/upstream-hash-by annotation.
      • To load balance using session cookies, consider the nginx.ingress.kubernetes.io/affinity annotation.
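
For example, to switch the algorithm globally, set the key in the ConfigMap:

load-balance: \"ewma\"\n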

      References: https://nginx.org/en/docs/http/load_balancing.html

      "},{"location":"user-guide/nginx-configuration/configmap/#variables-hash-bucket-size","title":"variables-hash-bucket-size","text":"

      Sets the bucket size for the variables hash table.

      References: https://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size

      "},{"location":"user-guide/nginx-configuration/configmap/#variables-hash-max-size","title":"variables-hash-max-size","text":"

      Sets the maximum size of the variables hash table.

      References: https://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size

      "},{"location":"user-guide/nginx-configuration/configmap/#upstream-keepalive-connections","title":"upstream-keepalive-connections","text":"

      Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. default: 320

      References: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive

      "},{"location":"user-guide/nginx-configuration/configmap/#upstream-keepalive-time","title":"upstream-keepalive-time","text":"

      Sets the maximum time during which requests can be processed through one keepalive connection. default: \"1h\"

      References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_time

      "},{"location":"user-guide/nginx-configuration/configmap/#upstream-keepalive-timeout","title":"upstream-keepalive-timeout","text":"

      Sets a timeout during which an idle keepalive connection to an upstream server will stay open. default: 60

      References: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout

      "},{"location":"user-guide/nginx-configuration/configmap/#upstream-keepalive-requests","title":"upstream-keepalive-requests","text":"

      Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed. default: 10000

      References: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_requests

      "},{"location":"user-guide/nginx-configuration/configmap/#limit-conn-zone-variable","title":"limit-conn-zone-variable","text":"

Sets parameters for a shared memory zone that will keep states for various keys of limit_conn_zone. The default is \"$binary_remote_addr\", whose size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-stream-timeout","title":"proxy-stream-timeout","text":"

      Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.

      References: https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-stream-next-upstream","title":"proxy-stream-next-upstream","text":"

      When a connection to the proxied server cannot be established, determines whether a client connection will be passed to the next server.

      References: https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-stream-next-upstream-timeout","title":"proxy-stream-next-upstream-timeout","text":"

      Limits the time allowed to pass a connection to the next server. The 0 value turns off this limitation.

      References: https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream_timeout

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-stream-next-upstream-tries","title":"proxy-stream-next-upstream-tries","text":"

      Limits the number of possible tries a request should be passed to the next server. The 0 value turns off this limitation.

      References: https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream_tries

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-stream-responses","title":"proxy-stream-responses","text":"

      Sets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used.

      References: https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses

      "},{"location":"user-guide/nginx-configuration/configmap/#bind-address","title":"bind-address","text":"

      Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.
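
For example, a sketch binding the server to two specific addresses (both illustrative):

bind-address: \"10.0.0.5,2001:db8::1\"\n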

      "},{"location":"user-guide/nginx-configuration/configmap/#use-forwarded-headers","title":"use-forwarded-headers","text":"

      If true, NGINX passes the incoming X-Forwarded-* headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers.

      If false, NGINX ignores incoming X-Forwarded-* headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or it's behind a L3/packet-based load balancer that doesn't alter the source IP in the packets.
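
For instance, when the controller sits behind a trusted L7 proxy, a minimal sketch of the relevant ConfigMap entries might look like this (the CIDR is a hypothetical value for your proxy's address range):

use-forwarded-headers: \"true\"\ncompute-full-forwarded-for: \"true\"\nproxy-real-ip-cidr: \"192.168.0.0/16\"\n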

      "},{"location":"user-guide/nginx-configuration/configmap/#enable-real-ip","title":"enable-real-ip","text":"

      enable-real-ip enables the configuration of https://nginx.org/en/docs/http/ngx_http_realip_module.html. Specific attributes of the module can be configured further by using forwarded-for-header and proxy-real-ip-cidr settings.

      "},{"location":"user-guide/nginx-configuration/configmap/#forwarded-for-header","title":"forwarded-for-header","text":"

      Sets the header field for identifying the originating IP address of a client. default: X-Forwarded-For

      "},{"location":"user-guide/nginx-configuration/configmap/#compute-full-forwarded-for","title":"compute-full-forwarded-for","text":"

      Append the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-add-original-uri-header","title":"proxy-add-original-uri-header","text":"

      Adds an X-Original-Uri header with the original request URI to the backend request

      "},{"location":"user-guide/nginx-configuration/configmap/#generate-request-id","title":"generate-request-id","text":"

Ensures that the X-Request-ID header is defaulted to a random value if no X-Request-ID header is present in the request.

      "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-collector-host","title":"jaeger-collector-host","text":"

      Specifies the host to use when uploading traces. It must be a valid URL.

      "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-collector-port","title":"jaeger-collector-port","text":"

      Specifies the port to use when uploading traces. default: 6831

      "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-endpoint","title":"jaeger-endpoint","text":"

      Specifies the endpoint to use when uploading traces to a collector. This takes priority over jaeger-collector-host if both are specified.

      "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-service-name","title":"jaeger-service-name","text":"

      Specifies the service name to use for any traces created. default: nginx

      "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-propagation-format","title":"jaeger-propagation-format","text":"

      Specifies the traceparent/tracestate propagation format. default: jaeger

      "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-sampler-type","title":"jaeger-sampler-type","text":"

      Specifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. default: const

      "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-sampler-param","title":"jaeger-sampler-param","text":"

      Specifies the argument to be passed to the sampler constructor. Must be a number. For const this should be 0 to never sample and 1 to always sample. default: 1
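
For example, a probabilistic sampler keeping roughly 10% of traces could be configured with the following ConfigMap entries (the collector host is a hypothetical in-cluster service name):

jaeger-collector-host: \"jaeger-agent.observability.svc\"\njaeger-sampler-type: \"probabilistic\"\njaeger-sampler-param: \"0.1\"\n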

      "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-sampler-host","title":"jaeger-sampler-host","text":"

      Specifies the custom remote sampler host to be passed to the sampler constructor. Must be a valid URL. Leave blank to use default value (localhost). default: http://127.0.0.1

      "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-sampler-port","title":"jaeger-sampler-port","text":"

      Specifies the custom remote sampler port to be passed to the sampler constructor. Must be a number. default: 5778

      "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-trace-context-header-name","title":"jaeger-trace-context-header-name","text":"

      Specifies the header name used for passing trace context. default: uber-trace-id

      "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-debug-header","title":"jaeger-debug-header","text":"

      Specifies the header name used for force sampling. default: jaeger-debug-id

      "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-baggage-header","title":"jaeger-baggage-header","text":"

      Specifies the header name used to submit baggage if there is no root span. default: jaeger-baggage

      "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-tracer-baggage-header-prefix","title":"jaeger-tracer-baggage-header-prefix","text":"

      Specifies the header prefix used to propagate baggage. default: uberctx-

      "},{"location":"user-guide/nginx-configuration/configmap/#datadog-collector-host","title":"datadog-collector-host","text":"

      Specifies the datadog agent host to use when uploading traces. It must be a valid URL.

      "},{"location":"user-guide/nginx-configuration/configmap/#datadog-collector-port","title":"datadog-collector-port","text":"

      Specifies the port to use when uploading traces. default: 8126

      "},{"location":"user-guide/nginx-configuration/configmap/#datadog-service-name","title":"datadog-service-name","text":"

      Specifies the service name to use for any traces created. default: nginx

      "},{"location":"user-guide/nginx-configuration/configmap/#datadog-environment","title":"datadog-environment","text":"

      Specifies the environment this trace belongs to. default: prod

      "},{"location":"user-guide/nginx-configuration/configmap/#datadog-operation-name-override","title":"datadog-operation-name-override","text":"

Overrides the operation name to use for any traces created. default: nginx.handle

      "},{"location":"user-guide/nginx-configuration/configmap/#datadog-priority-sampling","title":"datadog-priority-sampling","text":"

Controls priority sampling. If true, client-side sampling is disabled (thus ignoring sample_rate) and distributed priority sampling is enabled, where traces are sampled based on a combination of user-assigned priorities and configuration from the agent. default: true

      "},{"location":"user-guide/nginx-configuration/configmap/#datadog-sample-rate","title":"datadog-sample-rate","text":"

Specifies the sample rate for any traces created. This is effective only when datadog-priority-sampling is false. default: 1.0
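
For example, to disable priority sampling and keep roughly a quarter of traces client-side, a sketch of the relevant entries (the agent host is hypothetical):

datadog-collector-host: \"datadog-agent.monitoring.svc\"\ndatadog-priority-sampling: \"false\"\ndatadog-sample-rate: \"0.25\"\n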

      "},{"location":"user-guide/nginx-configuration/configmap/#enable-opentelemetry","title":"enable-opentelemetry","text":"

      Enables the nginx OpenTelemetry extension. default: is disabled

      References: https://github.com/open-telemetry/opentelemetry-cpp-contrib

      "},{"location":"user-guide/nginx-configuration/configmap/#opentelemetry-operation-name","title":"opentelemetry-operation-name","text":"

      Specifies a custom name for the server span. default: is empty

      For example, set to \"HTTP $request_method $uri\".

      "},{"location":"user-guide/nginx-configuration/configmap/#otlp-collector-host","title":"otlp-collector-host","text":"

      Specifies the host to use when uploading traces. It must be a valid URL.

      "},{"location":"user-guide/nginx-configuration/configmap/#otlp-collector-port","title":"otlp-collector-port","text":"

      Specifies the port to use when uploading traces. default: 4317

      "},{"location":"user-guide/nginx-configuration/configmap/#otel-service-name","title":"otel-service-name","text":"

      Specifies the service name to use for any traces created. default: nginx

      "},{"location":"user-guide/nginx-configuration/configmap/#opentelemetry-trust-incoming-span-true","title":"opentelemetry-trust-incoming-span: \"true\"","text":"

      Enables or disables using spans from incoming requests as parent for created ones. default: true

      "},{"location":"user-guide/nginx-configuration/configmap/#otel-sampler-parent-based","title":"otel-sampler-parent-based","text":"

Uses a sampler implementation which, by default, takes a sample if the parent span is sampled. default: false

      "},{"location":"user-guide/nginx-configuration/configmap/#otel-sampler-ratio","title":"otel-sampler-ratio","text":"

      Specifies sample rate for any traces created. default: 0.01

      "},{"location":"user-guide/nginx-configuration/configmap/#otel-sampler","title":"otel-sampler","text":"

      Specifies the sampler to be used when sampling traces. The available samplers are: AlwaysOff, AlwaysOn, TraceIdRatioBased, remote. default: AlwaysOff
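
Putting the sampler settings together, a minimal sketch that samples roughly 5% of traces:

enable-opentelemetry: \"true\"\notel-sampler: \"TraceIdRatioBased\"\notel-sampler-ratio: \"0.05\"\n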

      "},{"location":"user-guide/nginx-configuration/configmap/#main-snippet","title":"main-snippet","text":"

      Adds custom configuration to the main section of the nginx configuration.

      "},{"location":"user-guide/nginx-configuration/configmap/#http-snippet","title":"http-snippet","text":"

      Adds custom configuration to the http section of the nginx configuration.

      "},{"location":"user-guide/nginx-configuration/configmap/#server-snippet","title":"server-snippet","text":"

      Adds custom configuration to all the servers in the nginx configuration.
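
For example, a minimal server-snippet sketch that adds a response header to every server (the header name and value are purely illustrative):

server-snippet: |\n  add_header X-Environment staging always;\n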

      "},{"location":"user-guide/nginx-configuration/configmap/#stream-snippet","title":"stream-snippet","text":"

      Adds custom configuration to the stream section of the nginx configuration.

      "},{"location":"user-guide/nginx-configuration/configmap/#location-snippet","title":"location-snippet","text":"

      Adds custom configuration to all the locations in the nginx configuration.

You cannot use this to add new locations that proxy to the Kubernetes pods, as the snippet does not have access to the Go template functions. If you want to add custom locations, you will have to provide your own nginx.tmpl.

      "},{"location":"user-guide/nginx-configuration/configmap/#custom-http-errors","title":"custom-http-errors","text":"

Specifies which HTTP status codes should be passed for processing with the error_page directive.

Setting at least one code also enables proxy_intercept_errors, which is required to process error_page.

      Example usage: custom-http-errors: 404,415

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-body-size","title":"proxy-body-size","text":"

      Sets the maximum allowed size of the client request body. See NGINX client_max_body_size.

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-connect-timeout","title":"proxy-connect-timeout","text":"

      Sets the timeout for establishing a connection with a proxied server. It should be noted that this timeout cannot usually exceed 75 seconds.

      It will also set the grpc_connect_timeout for gRPC connections.

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-read-timeout","title":"proxy-read-timeout","text":"

      Sets the timeout in seconds for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response.

      It will also set the grpc_read_timeout for gRPC connections.

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-send-timeout","title":"proxy-send-timeout","text":"

      Sets the timeout in seconds for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request.

      It will also set the grpc_send_timeout for gRPC connections.

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-buffers-number","title":"proxy-buffers-number","text":"

Sets the number of buffers used for reading the first part of the response received from the proxied server. This part usually contains a small response header.

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-buffer-size","title":"proxy-buffer-size","text":"

      Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header.

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-cookie-path","title":"proxy-cookie-path","text":"

Sets a text that should be changed in the path attribute of the \"Set-Cookie\" header fields of a proxied server response.

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-cookie-domain","title":"proxy-cookie-domain","text":"

Sets a text that should be changed in the domain attribute of the \"Set-Cookie\" header fields of a proxied server response.

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-next-upstream","title":"proxy-next-upstream","text":"

      Specifies in which cases a request should be passed to the next server.

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-next-upstream-timeout","title":"proxy-next-upstream-timeout","text":"

      Limits the time in seconds during which a request can be passed to the next server.

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-next-upstream-tries","title":"proxy-next-upstream-tries","text":"

Limits the number of possible tries for passing a request to the next server.

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-redirect-from","title":"proxy-redirect-from","text":"

      Sets the original text that should be changed in the \"Location\" and \"Refresh\" header fields of a proxied server response. default: off

      References: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-request-buffering","title":"proxy-request-buffering","text":"

      Enables or disables buffering of a client request body.

      "},{"location":"user-guide/nginx-configuration/configmap/#ssl-redirect","title":"ssl-redirect","text":"

      Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule). default: \"true\"

      "},{"location":"user-guide/nginx-configuration/configmap/#force-ssl-redirect","title":"force-ssl-redirect","text":"

      Sets the global value of redirects (308) to HTTPS if the server has a default TLS certificate (defined in extra-args). default: \"false\"

      "},{"location":"user-guide/nginx-configuration/configmap/#denylist-source-range","title":"denylist-source-range","text":"

      Sets the default denylisted IPs for each server block. This can be overwritten by an annotation on an Ingress rule. See ngx_http_access_module.

      "},{"location":"user-guide/nginx-configuration/configmap/#whitelist-source-range","title":"whitelist-source-range","text":"

      Sets the default whitelisted IPs for each server block. This can be overwritten by an annotation on an Ingress rule. See ngx_http_access_module.
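
For example, to restrict all servers to internal ranges by default (the CIDRs are illustrative):

whitelist-source-range: \"10.0.0.0/8, 172.16.0.0/12\"\n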

      "},{"location":"user-guide/nginx-configuration/configmap/#skip-access-log-urls","title":"skip-access-log-urls","text":"

Sets a list of URLs that should not appear in the NGINX access log. This is useful for URLs like /health or /health-check that otherwise clutter the logs. default: is empty

      "},{"location":"user-guide/nginx-configuration/configmap/#limit-rate","title":"limit-rate","text":"

Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice the specified limit.

      References: https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate

      "},{"location":"user-guide/nginx-configuration/configmap/#limit-rate-after","title":"limit-rate-after","text":"

Sets the initial amount after which the further transmission of a response to a client will be rate limited.

References: https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after
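
As an illustration, the following sketch starts rate limiting each response after an initial unthrottled amount (the numbers are placeholders; units follow the underlying NGINX directives):

limit-rate-after: \"512\"\nlimit-rate: \"100\"\n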

      "},{"location":"user-guide/nginx-configuration/configmap/#lua-shared-dicts","title":"lua-shared-dicts","text":"

      Customize default Lua shared dictionaries or define more. You can use the following syntax to do so:

      lua-shared-dicts: \"<my dict name>: <my dict size>, [<my dict name>: <my dict size>], ...\"\n

For example, the following will set the default certificate_data dictionary to 100M and introduce a new dictionary called my_custom_plugin:

      lua-shared-dicts: \"certificate_data: 100, my_custom_plugin: 5\"\n

      You can optionally set a size unit to allow for kilobyte-granularity. Allowed units are 'm' or 'k' (case-insensitive), and it defaults to MB if no unit is provided. Here is a similar example, but the my_custom_plugin dict is only 512KB.

      lua-shared-dicts: \"certificate_data: 100, my_custom_plugin: 512k\"\n

      "},{"location":"user-guide/nginx-configuration/configmap/#http-redirect-code","title":"http-redirect-code","text":"

Sets the HTTP status code to be used in redirects. Supported codes are 301, 302, 307 and 308. default: 308

Why is the default code 308?

RFC 7238 was created to define the 308 (Permanent Redirect) status code, which is similar to 301 (Moved Permanently) but keeps the payload in the redirect. This is important if we send a redirect with methods like POST.

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-buffering","title":"proxy-buffering","text":"

      Enables or disables buffering of responses from the proxied server.

      "},{"location":"user-guide/nginx-configuration/configmap/#limit-req-status-code","title":"limit-req-status-code","text":"

      Sets the status code to return in response to rejected requests. default: 503

      "},{"location":"user-guide/nginx-configuration/configmap/#limit-conn-status-code","title":"limit-conn-status-code","text":"

      Sets the status code to return in response to rejected connections. default: 503

      "},{"location":"user-guide/nginx-configuration/configmap/#enable-syslog","title":"enable-syslog","text":"

      Enable syslog feature for access log and error log. default: false

      "},{"location":"user-guide/nginx-configuration/configmap/#syslog-host","title":"syslog-host","text":"

Sets the address of the syslog server. The address can be specified as a domain name or an IP address.

      "},{"location":"user-guide/nginx-configuration/configmap/#syslog-port","title":"syslog-port","text":"

Sets the port of the syslog server. default: 514
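
A minimal sketch, assuming a reachable syslog endpoint (the host below is hypothetical):

enable-syslog: \"true\"\nsyslog-host: \"syslog.example.internal\"\nsyslog-port: \"514\"\n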

      "},{"location":"user-guide/nginx-configuration/configmap/#no-tls-redirect-locations","title":"no-tls-redirect-locations","text":"

      A comma-separated list of locations on which http requests will never get redirected to their https counterpart. default: \"/.well-known/acme-challenge\"

      "},{"location":"user-guide/nginx-configuration/configmap/#global-allowed-response-headers","title":"global-allowed-response-headers","text":"

A comma-separated list of response headers that are allowed inside the custom headers annotations.

      "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-url","title":"global-auth-url","text":"

A URL to an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-url. Locations that should not get authenticated can be listed using the no-auth-locations setting. In addition, each service can be excluded from authentication via the annotation enable-global-auth set to \"false\". default: \"\"

      References: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#external-authentication
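
For example, wiring all locations to a hypothetical external auth service while exempting a health-check path might look like this (both URLs and the path are illustrative):

global-auth-url: \"https://auth.example.com/oauth2/auth\"\nglobal-auth-signin: \"https://auth.example.com/oauth2/start\"\nno-auth-locations: \"/healthz\"\n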

      "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-method","title":"global-auth-method","text":"

An HTTP method to use for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-method. default: \"\"

      "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-signin","title":"global-auth-signin","text":"

      Sets the location of the error page for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-signin. default: \"\"

      "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-signin-redirect-param","title":"global-auth-signin-redirect-param","text":"

      Sets the query parameter in the error page signin URL which contains the original URL of the request that failed authentication. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-signin-redirect-param. default: \"rd\"

      "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-response-headers","title":"global-auth-response-headers","text":"

Sets the headers to pass to the backend once the authentication request completes. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-response-headers. default: \"\"

      "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-request-redirect","title":"global-auth-request-redirect","text":"

      Sets the X-Auth-Request-Redirect header value. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-request-redirect. default: \"\"

      "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-snippet","title":"global-auth-snippet","text":"

      Sets a custom snippet to use with external authentication. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-snippet. default: \"\"

      "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-cache-key","title":"global-auth-cache-key","text":"

      Enables caching for global auth requests. Specify a lookup key for auth responses, e.g. $remote_user$http_authorization.

      "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-cache-duration","title":"global-auth-cache-duration","text":"

Sets a caching time for auth responses based on their response codes, e.g. 200 202 30m. See proxy_cache_valid for details. You may specify multiple, comma-separated values: 200 202 10m, 401 5m. default: 200 202 401 5m

      "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-always-set-cookie","title":"global-auth-always-set-cookie","text":"

Always sets a cookie returned by the auth request. By default, the cookie is set only if an upstream responds with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308. default: false

      "},{"location":"user-guide/nginx-configuration/configmap/#no-auth-locations","title":"no-auth-locations","text":"

      A comma-separated list of locations that should not get authenticated. default: \"/.well-known/acme-challenge\"

      "},{"location":"user-guide/nginx-configuration/configmap/#block-cidrs","title":"block-cidrs","text":"

A comma-separated list of IP addresses (or subnets); requests from these are blocked globally.

      References: https://nginx.org/en/docs/http/ngx_http_access_module.html#deny

      "},{"location":"user-guide/nginx-configuration/configmap/#block-user-agents","title":"block-user-agents","text":"

A comma-separated list of User-Agent values; requests from matching user agents are blocked globally. Both full strings and regular expressions can be used. More details about valid patterns can be found in the map Nginx directive documentation.

      References: https://nginx.org/en/docs/http/ngx_http_map_module.html#map
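
For instance, a sketch blocking one specific client and any user agent containing \"bot\" (the values are illustrative; patterns follow the map directive):

block-user-agents: \"~*bot, BadScraper/1.0\"\n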

      "},{"location":"user-guide/nginx-configuration/configmap/#block-referers","title":"block-referers","text":"

A comma-separated list of Referer values; requests from matching referers are blocked globally. Both full strings and regular expressions can be used. More details about valid patterns can be found in the map Nginx directive documentation.

      References: https://nginx.org/en/docs/http/ngx_http_map_module.html#map

      "},{"location":"user-guide/nginx-configuration/configmap/#proxy-ssl-location-only","title":"proxy-ssl-location-only","text":"

Sets whether proxy-ssl parameters should be applied only to locations and not to servers. default: is disabled

      "},{"location":"user-guide/nginx-configuration/configmap/#default-type","title":"default-type","text":"

      Sets the default MIME type of a response. default: text/html

      References: https://nginx.org/en/docs/http/ngx_http_core_module.html#default_type

      "},{"location":"user-guide/nginx-configuration/configmap/#service-upstream","title":"service-upstream","text":"

      Set if the service's Cluster IP and port should be used instead of a list of all endpoints. This can be overwritten by an annotation on an Ingress rule. default: \"false\"

      "},{"location":"user-guide/nginx-configuration/configmap/#ssl-reject-handshake","title":"ssl-reject-handshake","text":"

Set to reject SSL handshakes to unknown virtualhosts. This parameter helps mitigate fingerprinting via the default certificate of the ingress. default: \"false\"

      References: https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_reject_handshake

      "},{"location":"user-guide/nginx-configuration/configmap/#debug-connections","title":"debug-connections","text":"

      Enables debugging log for selected client connections. default: \"\"

References: https://nginx.org/en/docs/ngx_core_module.html#debug_connection

      "},{"location":"user-guide/nginx-configuration/configmap/#strict-validate-path-type","title":"strict-validate-path-type","text":"

Ingress objects contain a field called pathType that defines the proxy behavior. It can be Exact, Prefix or ImplementationSpecific.

When pathType is configured as Exact or Prefix, stricter validation applies, allowing only paths starting with \"/\" and containing only alphanumeric characters, \"-\", \"_\" and additional \"/\".

When this option is enabled, the validation happens in the Admission Webhook, denying any Ingress that does not use pathType ImplementationSpecific and contains invalid characters.

This means that Ingress objects that rely on paths containing regex characters should use the ImplementationSpecific pathType.

The cluster admin should establish validation rules using mechanisms like Open Policy Agent to ensure that only authorized users can use the ImplementationSpecific pathType and that only authorized characters are used.

      default: \"true\"

      "},{"location":"user-guide/nginx-configuration/configmap/#grpc-buffer-size-kb","title":"grpc-buffer-size-kb","text":"

Sets the configuration for the gRPC buffer size parameter. If not set, it uses the default from NGINX.

      References: https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_buffer_size

      "},{"location":"user-guide/nginx-configuration/custom-template/","title":"Custom NGINX template","text":"

      The NGINX template is located in the file /etc/nginx/template/nginx.tmpl.

Using a volume, it is possible to use a custom template. This includes using a ConfigMap as the source of the template:

              volumeMounts:\n          - mountPath: /etc/nginx/template\n            name: nginx-template-volume\n            readOnly: true\n      volumes:\n        - name: nginx-template-volume\n          configMap:\n            name: nginx-template\n            items:\n            - key: nginx.tmpl\n              path: nginx.tmpl\n

      Please note the template is tied to the Go code. Do not change names in the variable $cfg.

For more information about the template syntax, please check the Go template package. In addition to the built-in functions provided by the Go package, the following functions are also available:

      • empty: returns true if the specified parameter (string) is empty
      • contains: strings.Contains
      • hasPrefix: strings.HasPrefix
      • hasSuffix: strings.HasSuffix
      • toUpper: strings.ToUpper
      • toLower: strings.ToLower
      • split: strings.Split
      • quote: wraps a string in double quotes
      • buildLocation: helps to build the NGINX Location section in each server
      • buildProxyPass: builds the reverse proxy configuration
• buildRateLimit: helps to build a limit zone inside a location if it contains a rate limit annotation

      TODO:

      • buildAuthLocation:
      • buildAuthResponseHeaders:
      • buildResolvers:
      • buildDenyVariable:
      • buildUpstreamName:
      • buildForwardedFor:
      • buildAuthSignURL:
      • buildNextUpstream:
      • filterRateLimits:
      • formatIP:
      • getenv:
      • getIngressInformation:
      • serverConfig:
      • isLocationAllowed:
      • isValidClientBodyBufferSize:
      "},{"location":"user-guide/nginx-configuration/log-format/","title":"Log format","text":"

      The default configuration uses a custom logging format to add additional information about upstreams, response time and status.

      log_format upstreaminfo\n    '$remote_addr - $remote_user [$time_local] \"$request\" '\n    '$status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" '\n    '$request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr '\n    '$upstream_response_length $upstream_response_time $upstream_status $req_id';\n
• $proxy_protocol_addr: remote address if proxy protocol is enabled
• $remote_addr: the source IP address of the client
• $remote_user: user name supplied with the Basic authentication
• $time_local: local time in the Common Log Format
• $request: full original request line
• $status: response status
• $body_bytes_sent: number of bytes sent to a client, not counting the response header
• $http_referer: value of the Referer header
• $http_user_agent: value of the User-Agent header
• $request_length: request length (including request line, headers, and request body)
• $request_time: time elapsed since the first bytes were read from the client
• $proxy_upstream_name: name of the upstream. The format is upstream-<namespace>-<service name>-<service port>
• $proxy_alternative_upstream_name: name of the alternative upstream. The format is upstream-<namespace>-<service name>-<service port>
• $upstream_addr: the IP address and port (or the path to the domain socket) of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas.
• $upstream_response_length: the length of the response obtained from the upstream server
• $upstream_response_time: time spent receiving the response from the upstream server, in seconds with millisecond resolution
• $upstream_status: status code of the response obtained from the upstream server
• $req_id: value of the X-Request-ID HTTP header. If the header is not set, a randomly generated ID.

      Additional available variables:

• $namespace: namespace of the ingress
• $ingress_name: name of the ingress
• $service_name: name of the service
• $service_port: port of the service

      Sources:

      • Upstream variables
      • Embedded variables
      "},{"location":"user-guide/third-party-addons/modsecurity/","title":"ModSecurity Web Application Firewall","text":"

ModSecurity is an open source, cross-platform web application firewall (WAF) engine for Apache, IIS and Nginx, developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - https://www.modsecurity.org

      The ModSecurity-nginx connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).

      The default ModSecurity configuration file is located in /etc/nginx/modsecurity/modsecurity.conf. This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration. To enable the ModSecurity feature we need to specify enable-modsecurity: \"true\" in the configuration configmap.

Note: the default configuration uses detection only, because that minimizes the chances of post-installation disruption. Due to the value of the setting SecAuditLogType=Concurrent, the ModSecurity log is stored in multiple files inside the directory /var/log/audit. The default Serial value in SecAuditLogType can impact performance.

      The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts. The directory /etc/nginx/owasp-modsecurity-crs contains the OWASP ModSecurity Core Rule Set repository. Using enable-owasp-modsecurity-crs: \"true\" we enable the use of the rules.
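
Putting the two settings together, a minimal ConfigMap sketch enabling both the engine and the CRS:

enable-modsecurity: \"true\"\nenable-owasp-modsecurity-crs: \"true\"\n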

      "},{"location":"user-guide/third-party-addons/modsecurity/#supported-annotations","title":"Supported annotations","text":"

      For more info on supported annotations, please see annotations/#modsecurity

      "},{"location":"user-guide/third-party-addons/modsecurity/#example-of-using-modsecurity-with-plugins-via-the-helm-chart","title":"Example of using ModSecurity with plugins via the helm chart","text":"

      Suppose you have a ConfigMap that contains the contents of the nextcloud-rule-exclusions plugin like this:

apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: modsecurity-plugins\ndata:\n  empty-after.conf: |\n    # no data\n  empty-before.conf: |\n    # no data\n  empty-config.conf: |\n    # no data\n  nextcloud-rule-exclusions-before.conf: |\n    # this is just a snippet\n    # find the full file at https://github.com/coreruleset/nextcloud-rule-exclusions-plugin\n    #\n    # [ File Manager ]\n    # The web interface uploads files, and interacts with the user.\n    SecRule REQUEST_FILENAME \"@contains /remote.php/webdav\" \\\n        \"id:9508102,\\\n        phase:1,\\\n        pass,\\\n        t:none,\\\n        nolog,\\\n        ver:'nextcloud-rule-exclusions-plugin/1.2.0',\\\n        ctl:ruleRemoveById=920420,\\\n        ctl:ruleRemoveById=920440,\\\n        ctl:ruleRemoveById=941000-942999,\\\n        ctl:ruleRemoveById=951000-951999,\\\n        ctl:ruleRemoveById=953100-953130,\\\n        ctl:ruleRemoveByTag=attack-injection-php\"\n

      If you're using the helm chart, you can pass in the following parameters in your values.yaml:

      controller:\n  config:\n    # Enables Modsecurity\n    enable-modsecurity: \"true\"\n\n    # Update ModSecurity config and rules\n    modsecurity-snippet: |\n      # this enables the mod security nextcloud plugin\n      Include /etc/nginx/owasp-modsecurity-crs/plugins/nextcloud-rule-exclusions-before.conf\n\n      # this enables the default OWASP Core Rule Set\n      Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf\n\n      # Enable prevention mode. Options: DetectionOnly,On,Off (default is DetectionOnly)\n      SecRuleEngine On\n\n      # Enable scanning of the request body\n      SecRequestBodyAccess On\n\n      # Enable XML and JSON parsing\n      SecRule REQUEST_HEADERS:Content-Type \"(?:text|application(?:/soap\\+|/)|application/xml)/\" \\\n        \"id:200000,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=XML\"\n\n      SecRule REQUEST_HEADERS:Content-Type \"application/json\" \\\n        \"id:200001,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=JSON\"\n\n      # Reject if larger (we could also let it pass with ProcessPartial)\n      SecRequestBodyLimitAction Reject\n\n      # Send ModSecurity audit logs to the stdout (only for rejected requests)\n      SecAuditLog /dev/stdout\n\n      # format the logs in JSON\n      SecAuditLogFormat JSON\n\n      # could be On/Off/RelevantOnly\n      SecAuditEngine RelevantOnly\n\n  # Add a volume for the plugins directory\n  extraVolumes:\n    - name: plugins\n      configMap:\n        name: modsecurity-plugins\n\n  # override the /etc/nginx/enable-owasp-modsecurity-crs/plugins with your ConfigMap\n  extraVolumeMounts:\n    - name: plugins\n      mountPath: /etc/nginx/owasp-modsecurity-crs/plugins\n
      "},{"location":"user-guide/third-party-addons/opentelemetry/","title":"OpenTelemetry","text":"

Enables requests served by NGINX to be traced for distributed telemetry via The OpenTelemetry Project.

Using the third party module opentelemetry-cpp-contrib/nginx, the Ingress-Nginx Controller can configure NGINX to enable OpenTelemetry instrumentation. By default this feature is disabled.

      Check out this demo showcasing OpenTelemetry in Ingress NGINX. The video provides an overview and practical demonstration of how OpenTelemetry can be utilized in Ingress NGINX for observability and monitoring purposes.

      Demo: OpenTelemetry in Ingress NGINX.

      "},{"location":"user-guide/third-party-addons/opentelemetry/#usage","title":"Usage","text":"

      To enable the instrumentation we must enable OpenTelemetry in the configuration ConfigMap:

      data:\n  enable-opentelemetry: \"true\"\n

      To enable or disable instrumentation for a single Ingress, use the enable-opentelemetry annotation:

      kind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/enable-opentelemetry: \"true\"\n

      We must also set the host to use when uploading traces:

      otlp-collector-host: \"otel-coll-collector.otel.svc\"\n
      NOTE: While the option is called otlp-collector-host, you will need to point this to any backend that receives otlp-grpc.

Next you will need to deploy a distributed telemetry system which uses OpenTelemetry. opentelemetry-collector, Jaeger, Tempo, and Zipkin have been tested.

      Other optional configuration options:

      # specifies the name to use for the server span\nopentelemetry-operation-name\n\n# sets whether or not to trust incoming telemetry spans\nopentelemetry-trust-incoming-span\n\n# specifies the port to use when uploading traces, Default: 4317\notlp-collector-port\n\n# specifies the service name to use for any traces created, Default: nginx\notel-service-name\n\n# The maximum queue size. After the size is reached data are dropped.\notel-max-queuesize\n\n# The delay interval in milliseconds between two consecutive exports.\notel-schedule-delay-millis\n\n# How long the export can run before it is cancelled.\notel-schedule-delay-millis\n\n# The maximum batch size of every export. It must be smaller or equal to maxQueueSize.\notel-max-export-batch-size\n\n# specifies sample rate for any traces created, Default: 0.01\notel-sampler-ratio\n\n# specifies the sampler to be used when sampling traces.\n# The available samplers are: AlwaysOn,  AlwaysOff, TraceIdRatioBased, Default: AlwaysOff\notel-sampler\n\n# Uses sampler implementation which by default will take a sample if parent Activity is sampled, Default: false\notel-sampler-parent-based\n

      Note that you can also set whether to trust incoming spans (global default is true) per-location using annotations like the following:

      kind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span: \"true\"\n

      "},{"location":"user-guide/third-party-addons/opentelemetry/#examples","title":"Examples","text":"

The following examples show how to deploy and test different distributed telemetry systems. These examples can be performed using Docker Desktop.

The esigo/nginx-example GitHub repository contains an example of a simple hello service:

      graph TB\n    subgraph Browser\n    start[\"http://esigo.dev/hello/nginx\"]\n    end\n\n    subgraph app\n        sa[service-a]\n        sb[service-b]\n        sa --> |name: nginx| sb\n        sb --> |hello nginx!| sa\n    end\n\n    subgraph otel\n        otc[\"Otel Collector\"]\n    end\n\n    subgraph observability\n        tempo[\"Tempo\"]\n        grafana[\"Grafana\"]\n        backend[\"Jaeger\"]\n        zipkin[\"Zipkin\"]\n    end\n\n    subgraph ingress-nginx\n        ngx[nginx]\n    end\n\n    subgraph ngx[nginx]\n        ng[nginx]\n        om[OpenTelemetry module]\n    end\n\n    subgraph Node\n        app\n        otel\n        observability\n        ingress-nginx\n        om --> |otlp-gRPC| otc --> |jaeger| backend\n        otc --> |zipkin| zipkin\n        otc --> |otlp-gRPC| tempo --> grafana\n        sa --> |otlp-gRPC| otc\n        sb --> |otlp-gRPC| otc\n        start --> ng --> sa\n    end

      To install the example and collectors run:

1. Enable the Ingress addon with:

opentelemetry:\n    enabled: true\n    image: registry.k8s.io/ingress-nginx/opentelemetry-1.25.3:v20240813-b933310d@sha256:f7604ac0547ed64d79b98d92133234e66c2c8aade3c1f4809fed5eec1fb7f922\n    containerSecurityContext:\n      allowPrivilegeEscalation: false\n
      2. Enable OpenTelemetry and set the otlp-collector-host:

        $ echo '\n  apiVersion: v1\n  kind: ConfigMap\n  data:\n    enable-opentelemetry: \"true\"\n    opentelemetry-config: \"/etc/nginx/opentelemetry.toml\"\n    opentelemetry-operation-name: \"HTTP $request_method $service_name $uri\"\n    opentelemetry-trust-incoming-span: \"true\"\n    otlp-collector-host: \"otel-coll-collector.otel.svc\"\n    otlp-collector-port: \"4317\"\n    otel-max-queuesize: \"2048\"\n    otel-schedule-delay-millis: \"5000\"\n    otel-max-export-batch-size: \"512\"\n    otel-service-name: \"nginx-proxy\" # Opentelemetry resource name\n    otel-sampler: \"AlwaysOn\" # Also: AlwaysOff, TraceIdRatioBased\n    otel-sampler-ratio: \"1.0\"\n    otel-sampler-parent-based: \"false\"\n  metadata:\n    name: ingress-nginx-controller\n    namespace: ingress-nginx\n  ' | kubectl replace -f -\n
      3. Deploy otel-collector, grafana and Jaeger backend:

        # add helm charts needed for grafana and OpenTelemetry collector\nhelm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts\nhelm repo add grafana https://grafana.github.io/helm-charts\nhelm repo update\n# deploy cert-manager needed for OpenTelemetry collector operator\nkubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.yaml\n# create observability namespace\nkubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/namespace.yaml\n# install OpenTelemetry collector operator\nhelm upgrade --install otel-collector-operator -n otel --create-namespace open-telemetry/opentelemetry-operator\n# deploy OpenTelemetry collector\nkubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/collector.yaml\n# deploy Jaeger all-in-one\nkubectl apply -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.37.0/jaeger-operator.yaml -n observability\nkubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/jaeger.yaml -n observability\n# deploy zipkin\nkubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/zipkin.yaml -n observability\n# deploy tempo and grafana\nhelm upgrade --install tempo grafana/tempo --create-namespace -n observability\nhelm upgrade -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/grafana/grafana-values.yaml --install grafana grafana/grafana --create-namespace -n observability\n
      4. Build and deploy demo app:

        # build images\nmake images\n\n# deploy demo app:\nmake deploy-app\n
      5. Make a few requests to the Service:

        kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8090:80\ncurl http://esigo.dev:8090/hello/nginx\n\n\nStatusCode        : 200\nStatusDescription : OK\nContent           : {\"v\":\"hello nginx!\"}\n\nRawContent        : HTTP/1.1 200 OK\n                    Connection: keep-alive\n                    Content-Length: 21\n                    Content-Type: text/plain; charset=utf-8\n                    Date: Mon, 10 Oct 2022 17:43:33 GMT\n\n                    {\"v\":\"hello nginx!\"}\n\nForms             : {}\nHeaders           : {[Connection, keep-alive], [Content-Length, 21], [Content-Type, text/plain; charset=utf-8], [Date,\n                    Mon, 10 Oct 2022 17:43:33 GMT]}\nImages            : {}\nInputFields       : {}\nLinks             : {}\nParsedHtml        : System.__ComObject\nRawContentLength  : 21\n
      6. View the Grafana UI:

        kubectl port-forward --namespace=observability service/grafana 3000:80\n
        In the Grafana interface we can see the details:

      7. View the Jaeger UI:

        kubectl port-forward --namespace=observability service/jaeger-all-in-one-query 16686:16686\n
        In the Jaeger interface we can see the details:

      8. View the Zipkin UI:

        kubectl port-forward --namespace=observability service/zipkin 9411:9411\n
        In the Zipkin interface we can see the details:

      "},{"location":"user-guide/third-party-addons/opentelemetry/#migration-from-opentracing-jaeger-zipkin-and-datadog","title":"Migration from OpenTracing, Jaeger, Zipkin and Datadog","text":"

      If you are migrating from OpenTracing, Jaeger, Zipkin, or Datadog to OpenTelemetry, you may need to update various annotations and configurations. Here are the mappings for common annotations and configurations:

      "},{"location":"user-guide/third-party-addons/opentelemetry/#annotations","title":"Annotations","text":"Legacy OpenTelemetry nginx.ingress.kubernetes.io/enable-opentracing nginx.ingress.kubernetes.io/enable-opentelemetry nginx.ingress.kubernetes.io/opentracing-trust-incoming-span nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span"},{"location":"user-guide/third-party-addons/opentelemetry/#configs","title":"Configs","text":"Legacy OpenTelemetry opentracing-operation-name opentelemetry-operation-name opentracing-location-operation-name opentelemetry-operation-name opentracing-trust-incoming-span opentelemetry-trust-incoming-span zipkin-collector-port otlp-collector-port zipkin-service-name otel-service-name zipkin-sample-rate otel-sampler-ratio jaeger-collector-port otlp-collector-port jaeger-endpoint otlp-collector-port, otlp-collector-host jaeger-service-name otel-service-name jaeger-propagation-format N/A jaeger-sampler-type otel-sampler jaeger-sampler-param otel-sampler jaeger-sampler-host N/A jaeger-sampler-port N/A jaeger-trace-context-header-name N/A jaeger-debug-header N/A jaeger-baggage-header N/A jaeger-tracer-baggage-header-prefix N/A datadog-collector-port otlp-collector-port datadog-service-name otel-service-name datadog-environment N/A datadog-operation-name-override N/A datadog-priority-sampling otel-sampler datadog-sample-rate otel-sampler-ratio"}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Overview","text":"

      This is the documentation for the Ingress NGINX Controller.

      It is built around the Kubernetes Ingress resource, using a ConfigMap to store the controller configuration.

      You can learn more about using Ingress in the official Kubernetes documentation.

      "},{"location":"#getting-started","title":"Getting Started","text":"

      See Deployment for a whirlwind tour that will get you started.

      "},{"location":"e2e-tests/","title":"E2e tests","text":""},{"location":"e2e-tests/#e2e-test-suite-for-ingress-nginx-controller","title":"e2e test suite for Ingress NGINX Controller","text":""},{"location":"e2e-tests/#admission-admission-controller","title":"[Admission] admission controller","text":"
      • should not allow overlaps of host and paths without canary annotations
      • should allow overlaps of host and paths with canary annotation
      • should block ingress with invalid path
      • should return an error if there is an error validating the ingress definition
      • should return an error if there is an invalid value in some annotation
      • should return an error if there is a forbidden value in some annotation
      • should return an error if there is an invalid path and wrong pathType is set
      • should not return an error if the Ingress V1 definition is valid with Ingress Class
      • should not return an error if the Ingress V1 definition is valid with IngressClass annotation
      • should return an error if the Ingress V1 definition contains invalid annotations
      • should not return an error for an invalid Ingress when it has unknown class
      "},{"location":"e2e-tests/#affinity-session-cookie-name","title":"affinity session-cookie-name","text":"
      • should set sticky cookie SERVERID
      • should change cookie name on ingress definition change
      • should set the path to /something on the generated cookie
      • does not set the path to / on the generated cookie if there's more than one rule referring to the same backend
      • should set cookie with expires
      • should set cookie with domain
      • should not set cookie without domain annotation
      • should work with use-regex annotation and session-cookie-path
      • should warn user when use-regex is true and session-cookie-path is not set
      • should not set affinity across all server locations when using separate ingresses
      • should set sticky cookie without host
      • should work with server-alias annotation
      • should set secure in cookie with provided true annotation on http
      • should not set secure in cookie with provided false annotation on http
      • should set secure in cookie with provided false annotation on https
      "},{"location":"e2e-tests/#affinitymode","title":"affinitymode","text":"
      • Balanced affinity mode should balance
      • Check persistent affinity mode
      "},{"location":"e2e-tests/#server-alias","title":"server-alias","text":"
      • should return status code 200 for host 'foo' and 404 for 'bar'
      • should return status code 200 for host 'foo' and 'bar'
      • should return status code 200 for hosts defined in two ingresses, different path with one alias
      "},{"location":"e2e-tests/#app-root","title":"app-root","text":"
      • should redirect to /foo
      "},{"location":"e2e-tests/#auth-","title":"auth-*","text":"
      • should return status code 200 when no authentication is configured
      • should return status code 503 when authentication is configured with an invalid secret
      • should return status code 401 when authentication is configured but Authorization header is not configured
      • should return status code 401 when authentication is configured and Authorization header is sent with invalid credentials
      • should return status code 401 and cors headers when authentication and cors is configured but Authorization header is not configured
      • should return status code 200 when authentication is configured and Authorization header is sent
      • should return status code 200 when authentication is configured with a map and Authorization header is sent
      • should return status code 401 when authentication is configured with invalid content and Authorization header is sent
      • proxy_set_header My-Custom-Header 42;
      • proxy_set_header My-Custom-Header 42;
      • proxy_set_header 'My-Custom-Header' '42';
      • user retains cookie by default
      • user does not retain cookie if upstream returns error status code
      • user with annotated ingress retains cookie if upstream returns error status code
      • should return status code 200 when signed in
      • should redirect to signin url when not signed in
      • keeps processing new ingresses even if one of the existing ingresses is misconfigured
      • should overwrite Foo header with auth response
      • should return status code 200 when signed in
      • should redirect to signin url when not signed in
      • keeps processing new ingresses even if one of the existing ingresses is misconfigured
      • should return status code 200 when signed in after auth backend is deleted
      • should deny login for different location on same server
      • should deny login for different servers
      • should redirect to signin url when not signed in
      • should return 503 (location was denied)
      • should add error to the config
      "},{"location":"e2e-tests/#auth-tls-","title":"auth-tls-*","text":"
      • should set sslClientCertificate, sslVerifyClient and sslVerifyDepth with auth-tls-secret
      • should set valid auth-tls-secret, sslVerify to off, and sslVerifyDepth to 2
      • should 302 redirect to error page instead of 400 when auth-tls-error-page is set
      • should pass URL-encoded certificate to upstream
      • should validate auth-tls-verify-client
      • should return 403 using auth-tls-match-cn with no matching CN from client
      • should return 200 using auth-tls-match-cn with matching CN from client
      • should reload the nginx config when auth-tls-match-cn is updated
• should return 200 using auth-tls-match-cn where at least one of the regex options matches CN from client
      "},{"location":"e2e-tests/#backend-protocol","title":"backend-protocol","text":"
      • should set backend protocol to https:// and use proxy_pass
      • should set backend protocol to https:// and use proxy_pass with lowercase annotation
      • should set backend protocol to $scheme:// and use proxy_pass
      • should set backend protocol to grpc:// and use grpc_pass
      • should set backend protocol to grpcs:// and use grpc_pass
      • should set backend protocol to '' and use fastcgi_pass
      "},{"location":"e2e-tests/#canary-","title":"canary-*","text":"
• should respond with a 200 status from the mainline upstream when requests are made to the mainline ingress
      • should return 404 status for requests to the canary if no matching ingress is found
      • should return the correct status codes when endpoints are unavailable
      • should route requests to the correct upstream if mainline ingress is created before the canary ingress
      • should route requests to the correct upstream if mainline ingress is created after the canary ingress
      • should route requests to the correct upstream if the mainline ingress is modified
      • should route requests to the correct upstream if the canary ingress is modified
      • should route requests to the correct upstream
      • should route requests to the correct upstream
      • should route requests to the correct upstream
      • should route requests to the correct upstream
• should route to mainline upstream when the given Regex causes error
      • should route requests to the correct upstream
      • respects always and never values
      • should route requests only to mainline if canary weight is 0
      • should route requests only to canary if canary weight is 100
      • should route requests only to canary if canary weight is equal to canary weight total
      • should route requests split between mainline and canary if canary weight is 50
      • should route requests split between mainline and canary if canary weight is 100 and weight total is 200
      • should not use canary as a catch-all server
      • should not use canary with domain as a server
      • does not crash when canary ingress has multiple paths to the same non-matching backend
      • always routes traffic to canary if first request was affinitized to canary (default behavior)
      • always routes traffic to canary if first request was affinitized to canary (explicit sticky behavior)
      • routes traffic to either mainline or canary backend (legacy behavior)
      "},{"location":"e2e-tests/#client-body-buffer-size","title":"client-body-buffer-size","text":"
      • should set client_body_buffer_size to 1000
      • should set client_body_buffer_size to 1K
      • should set client_body_buffer_size to 1k
      • should set client_body_buffer_size to 1m
      • should set client_body_buffer_size to 1M
      • should not set client_body_buffer_size to invalid 1b
      "},{"location":"e2e-tests/#connection-proxy-header","title":"connection-proxy-header","text":"
      • set connection header to keep-alive
      "},{"location":"e2e-tests/#cors-","title":"cors-*","text":"
      • should enable cors
      • should set cors methods to only allow POST, GET
      • should set cors max-age
      • should disable cors allow credentials
      • should allow origin for cors
      • should allow headers for cors
      • should expose headers for cors
      • should allow - single origin for multiple cors values
      • should not allow - single origin for multiple cors values
      • should allow correct origins - single origin for multiple cors values
      • should not break functionality
      • should not break functionality - without *
      • should not break functionality with extra domain
      • should not match
      • should allow - single origin with required port
      • should not allow - single origin with port and origin without port
      • should not allow - single origin without port and origin with required port
      • should allow - matching origin with wildcard origin (2 subdomains)
      • should not allow - unmatching origin with wildcard origin (2 subdomains)
      • should allow - matching origin+port with wildcard origin
      • should not allow - portless origin with wildcard origin
      • should allow correct origins - missing subdomain + origin with wildcard origin and correct origin
      • should allow - missing origins (should allow all origins)
      • should allow correct origin but not others - cors allow origin annotations contain trailing comma
      "},{"location":"e2e-tests/#custom-headers-","title":"custom-headers-*","text":"
      • should return status code 200 when no custom-headers is configured
      • should return status code 503 when custom-headers is configured with an invalid secret
      • more_set_headers 'My-Custom-Header' '42';
      "},{"location":"e2e-tests/#custom-http-errors","title":"custom-http-errors","text":"
      • configures Nginx correctly
      "},{"location":"e2e-tests/#default-backend","title":"default-backend","text":"
      • should use a custom default backend as upstream
      "},{"location":"e2e-tests/#disable-access-log-disable-http-access-log-disable-stream-access-log","title":"disable-access-log disable-http-access-log disable-stream-access-log","text":"
      • disable-access-log set access_log off
      • disable-http-access-log set access_log off
      • disable-stream-access-log set access_log off
      "},{"location":"e2e-tests/#disable-proxy-intercept-errors","title":"disable-proxy-intercept-errors","text":"
      • configures Nginx correctly
      "},{"location":"e2e-tests/#backend-protocol-fastcgi","title":"backend-protocol - FastCGI","text":"
      • should use fastcgi_pass in the configuration file
      • should add fastcgi_index in the configuration file
      • should add fastcgi_param in the configuration file
      • should return OK for service with backend protocol FastCGI
      "},{"location":"e2e-tests/#force-ssl-redirect","title":"force-ssl-redirect","text":"
      • should redirect to https
      "},{"location":"e2e-tests/#from-to-www-redirect","title":"from-to-www-redirect","text":"
      • should redirect from www HTTP to HTTP
      • should redirect from www HTTPS to HTTPS
      "},{"location":"e2e-tests/#backend-protocol-grpc","title":"backend-protocol - GRPC","text":"
      • should use grpc_pass in the configuration file
      • should return OK for service with backend protocol GRPC
      • authorization metadata should be overwritten by external auth response headers
      • should return OK for service with backend protocol GRPCS
      • should return OK when request not exceed timeout
      • should return Error when request exceed timeout
      "},{"location":"e2e-tests/#http2-push-preload","title":"http2-push-preload","text":"
      • enable the http2-push-preload directive
      "},{"location":"e2e-tests/#allowlist-source-range","title":"allowlist-source-range","text":"
      • should set valid ip allowlist range
      "},{"location":"e2e-tests/#denylist-source-range","title":"denylist-source-range","text":"
      • only deny explicitly denied IPs, allow all others
      • only allow explicitly allowed IPs, deny all others
      "},{"location":"e2e-tests/#annotation-limit-connections","title":"Annotation - limit-connections","text":"
      • should limit-connections
      "},{"location":"e2e-tests/#limit-rate","title":"limit-rate","text":"
      • Check limit-rate annotation
      "},{"location":"e2e-tests/#enable-access-log-enable-rewrite-log","title":"enable-access-log enable-rewrite-log","text":"
      • set access_log off
      • set rewrite_log on
      "},{"location":"e2e-tests/#mirror-","title":"mirror-*","text":"
      • should set mirror-target to http://localhost/mirror
      • should set mirror-target to https://test.env.com/$request_uri
      • should disable mirror-request-body
      "},{"location":"e2e-tests/#modsecurity-owasp","title":"modsecurity owasp","text":"
      • should enable modsecurity
      • should enable modsecurity with transaction ID and OWASP rules
      • should disable modsecurity
      • should enable modsecurity with snippet
      • should enable modsecurity without using 'modsecurity on;'
      • should disable modsecurity using 'modsecurity off;'
      • should enable modsecurity with snippet and block requests
      • should enable modsecurity globally and with modsecurity-snippet block requests
      • should enable modsecurity when enable-owasp-modsecurity-crs is set to true
      • should enable modsecurity through the config map
      • should enable modsecurity through the config map but ignore snippet as disabled by admin
      • should disable default modsecurity conf setting when modsecurity-snippet is specified
      "},{"location":"e2e-tests/#preserve-trailing-slash","title":"preserve-trailing-slash","text":"
      • should allow preservation of trailing slashes
      "},{"location":"e2e-tests/#proxy-","title":"proxy-*","text":"
      • should set proxy_redirect to off
      • should set proxy_redirect to default
      • should set proxy_redirect to hello.com goodbye.com
      • should set proxy client-max-body-size to 8m
      • should not set proxy client-max-body-size to incorrect value
      • should set valid proxy timeouts
      • should not set invalid proxy timeouts
      • should turn on proxy-buffering
      • should turn off proxy-request-buffering
      • should build proxy next upstream
      • should setup proxy cookies
      • should change the default proxy HTTP version
      "},{"location":"e2e-tests/#proxy-ssl-","title":"proxy-ssl-*","text":"
      • should set valid proxy-ssl-secret
      • should set valid proxy-ssl-secret, proxy-ssl-verify to on, proxy-ssl-verify-depth to 2, and proxy-ssl-server-name to on
      • should set valid proxy-ssl-secret, proxy-ssl-ciphers to HIGH:!AES
      • should set valid proxy-ssl-secret, proxy-ssl-protocols
      • proxy-ssl-location-only flag should change the nginx config server part
      "},{"location":"e2e-tests/#permanent-redirect-permanent-redirect-code","title":"permanent-redirect permanent-redirect-code","text":"
      • should respond with a standard redirect code
      • should respond with a custom redirect code
      "},{"location":"e2e-tests/#rewrite-target-use-regex-enable-rewrite-log","title":"rewrite-target use-regex enable-rewrite-log","text":"
      • should write rewrite logs
      • should use correct longest path match
      • should use ~* location modifier if regex annotation is present
      • should fail to use longest match for documented warning
      • should allow for custom rewrite parameters
      "},{"location":"e2e-tests/#satisfy","title":"satisfy","text":"
      • should configure satisfy directive correctly
      • should allow multiple auth with satisfy any
      "},{"location":"e2e-tests/#server-snippet","title":"server-snippet","text":""},{"location":"e2e-tests/#service-upstream","title":"service-upstream","text":"
      • should use the Service Cluster IP and Port
      • should use the Service Cluster IP and Port
      • should not use the Service Cluster IP and Port
      "},{"location":"e2e-tests/#configuration-snippet","title":"configuration-snippet","text":"
      • set snippet more_set_headers in all locations
      • drops snippet more_set_header in all locations if disabled by admin
      "},{"location":"e2e-tests/#ssl-ciphers","title":"ssl-ciphers","text":"
      • should change ssl ciphers
      • should keep ssl ciphers
      "},{"location":"e2e-tests/#stream-snippet","title":"stream-snippet","text":"
      • should add value of stream-snippet to nginx config
      • should add stream-snippet and drop annotations per admin config
      "},{"location":"e2e-tests/#upstream-hash-by-","title":"upstream-hash-by-*","text":"
      • should connect to the same pod
      • should connect to the same subset of pods
      "},{"location":"e2e-tests/#upstream-vhost","title":"upstream-vhost","text":"
      • set host to upstreamvhost.bar.com
      "},{"location":"e2e-tests/#x-forwarded-prefix","title":"x-forwarded-prefix","text":"
      • should set the X-Forwarded-Prefix to the annotation value
      • should not add X-Forwarded-Prefix if the annotation value is empty
      "},{"location":"e2e-tests/#cgroups-cgroups","title":"[CGroups] cgroups","text":"
      • detects cgroups version v1
• detects cgroups version v2
      "},{"location":"e2e-tests/#debug-cli","title":"Debug CLI","text":"
      • should list the backend servers
      • should get information for a specific backend server
      • should produce valid JSON for /dbg general
      "},{"location":"e2e-tests/#default-backend-custom-service","title":"[Default Backend] custom service","text":"
      • uses custom default backend that returns 200 as status code
      "},{"location":"e2e-tests/#default-backend_1","title":"[Default Backend]","text":"
      • should return 404 sending requests when only a default backend is running
      • enables access logging for default backend
      • disables access logging for default backend
      "},{"location":"e2e-tests/#default-backend-ssl","title":"[Default Backend] SSL","text":"
      • should return a self generated SSL certificate
      "},{"location":"e2e-tests/#default-backend-change-default-settings","title":"[Default Backend] change default settings","text":"
      • should apply the annotation to the default backend
      "},{"location":"e2e-tests/#disable-leader-routing-works-when-leader-election-was-disabled","title":"[Disable Leader] Routing works when leader election was disabled","text":"
• should create multiple ingress routing rules when leader election is disabled
      "},{"location":"e2e-tests/#endpointslices-long-service-name","title":"[Endpointslices] long service name","text":"
      • should return 200 when service name has max allowed number of characters 63
      "},{"location":"e2e-tests/#topologyhints-topology-aware-routing","title":"[TopologyHints] topology aware routing","text":"
      • should return 200 when service has topology hints
      "},{"location":"e2e-tests/#shutdown-grace-period-shutdown","title":"[Shutdown] Grace period shutdown","text":"
      • /healthz should return status code 500 during shutdown grace period
      "},{"location":"e2e-tests/#shutdown-ingress-controller","title":"[Shutdown] ingress controller","text":"
      • should shutdown in less than 60 seconds without pending connections
      "},{"location":"e2e-tests/#shutdown-graceful-shutdown-with-pending-request","title":"[Shutdown] Graceful shutdown with pending request","text":"
      • should let slow requests finish before shutting down
      "},{"location":"e2e-tests/#ingress-deepinspection","title":"[Ingress] DeepInspection","text":"
      • should drop whole ingress if one path matches invalid regex
      "},{"location":"e2e-tests/#single-ingress-multiple-hosts","title":"single ingress - multiple hosts","text":"
      • should set the correct $service_name NGINX variable
      "},{"location":"e2e-tests/#ingress-pathtype-exact","title":"[Ingress] [PathType] exact","text":"
      • should choose exact location for /exact
      "},{"location":"e2e-tests/#ingress-pathtype-mix-exact-and-prefix-paths","title":"[Ingress] [PathType] mix Exact and Prefix paths","text":"
      • should choose the correct location
      "},{"location":"e2e-tests/#ingress-pathtype-prefix-checks","title":"[Ingress] [PathType] prefix checks","text":"
      • should return 404 when prefix /aaa does not match request /aaaccc
      • should test prefix path using simple regex pattern for /id/{int}
      • should test prefix path using regex pattern for /id/{int} ignoring non-digits characters at end of string
      • should test prefix path using fixed path size regex pattern /id/{int}{3}
      • should correctly route multi-segment path patterns
      "},{"location":"e2e-tests/#ingress-definition-without-host","title":"[Ingress] definition without host","text":"
      • should set ingress details variables for ingresses without a host
      • should set ingress details variables for ingresses with host without IngressRuleValue, only Backend
      "},{"location":"e2e-tests/#memory-leak-dynamic-certificates","title":"[Memory Leak] Dynamic Certificates","text":"
      • should not leak memory from ingress SSL certificates or configuration updates
      "},{"location":"e2e-tests/#load-balancer-load-balance","title":"[Load Balancer] load-balance","text":"
      • should apply the configmap load-balance setting
      "},{"location":"e2e-tests/#load-balancer-ewma","title":"[Load Balancer] EWMA","text":"
      • does not fail requests
      "},{"location":"e2e-tests/#load-balancer-round-robin","title":"[Load Balancer] round-robin","text":"
      • should evenly distribute requests with round-robin (default algorithm)
      "},{"location":"e2e-tests/#lua-dynamic-certificates","title":"[Lua] dynamic certificates","text":"
      • picks up the certificate when we add TLS spec to existing ingress
      • picks up the previously missing secret for a given ingress without reloading
      • supports requests with domain with trailing dot
      • picks up the updated certificate without reloading
      • falls back to using default certificate when secret gets deleted without reloading
      • picks up a non-certificate only change
      • removes HTTPS configuration when we delete TLS spec
      "},{"location":"e2e-tests/#lua-dynamic-configuration","title":"[Lua] dynamic configuration","text":"
      • configures balancer Lua middleware correctly
      • handles endpoints only changes
      • handles endpoints only changes (down scaling of replicas)
      • handles endpoints only changes consistently (down scaling of replicas vs. empty service)
      • handles an annotation change
      "},{"location":"e2e-tests/#metrics-exported-prometheus-metrics","title":"[metrics] exported prometheus metrics","text":"
      • exclude socket request metrics are absent
      • exclude socket request metrics are present
      "},{"location":"e2e-tests/#nginx-configuration","title":"nginx-configuration","text":"
      • start nginx with default configuration
      • fails when using alias directive
      • fails when using root directive
      "},{"location":"e2e-tests/#security-request-smuggling","title":"[Security] request smuggling","text":"
      • should not return body content from error_page
      "},{"location":"e2e-tests/#service-backend-status-code-503","title":"[Service] backend status code 503","text":"
      • should return 503 when backend service does not exist
      • should return 503 when all backend service endpoints are unavailable
      "},{"location":"e2e-tests/#service-type-externalname","title":"[Service] Type ExternalName","text":"
      • works with external name set to incomplete fqdn
      • should return 200 for service type=ExternalName without a port defined
      • should return 200 for service type=ExternalName with a port defined
      • should return status 502 for service type=ExternalName with an invalid host
      • should return 200 for service type=ExternalName using a port name
      • should return 200 for service type=ExternalName using FQDN with trailing dot
      • should update the external name after a service update
      • should sync ingress on external name service addition/deletion
      "},{"location":"e2e-tests/#service-nil-service-backend","title":"[Service] Nil Service Backend","text":"
      • should return 404 when backend service is nil
      "},{"location":"e2e-tests/#access-log","title":"access-log","text":"
      • use the default configuration
      • use the specified configuration
      • use the specified configuration
      • use the specified configuration
      • use the specified configuration
      "},{"location":"e2e-tests/#aio-write","title":"aio-write","text":"
      • should be enabled by default
      • should be enabled when setting is true
      • should be disabled when setting is false
      "},{"location":"e2e-tests/#bad-annotation-values","title":"Bad annotation values","text":"
      • [BAD_ANNOTATIONS] should drop an ingress if there is an invalid character in some annotation
      • [BAD_ANNOTATIONS] should drop an ingress if there is a forbidden word in some annotation
      • [BAD_ANNOTATIONS] should allow an ingress if there is a default blocklist config in place
      • [BAD_ANNOTATIONS] should drop an ingress if there is a custom blocklist config in place and allow others to pass
      "},{"location":"e2e-tests/#brotli","title":"brotli","text":"
      • should only compress responses that meet the brotli-min-length condition
      "},{"location":"e2e-tests/#configmap-change","title":"Configmap change","text":"
      • should reload after an update in the configuration
      "},{"location":"e2e-tests/#add-headers","title":"add-headers","text":"
      • Add a custom header
      • Add multiple custom headers
      "},{"location":"e2e-tests/#ssl-flag-default-ssl-certificate","title":"[SSL] [Flag] default-ssl-certificate","text":"
      • uses default ssl certificate for catch-all ingress
      • uses default ssl certificate for host based ingress when configured certificate does not match host
      "},{"location":"e2e-tests/#flag-disable-catch-all","title":"[Flag] disable-catch-all","text":"
      • should ignore catch all Ingress with backend
      • should ignore catch all Ingress with backend and rules
      • should delete Ingress updated to catch-all
      • should allow Ingress with rules
      "},{"location":"e2e-tests/#flag-disable-service-external-name","title":"[Flag] disable-service-external-name","text":"
      • should ignore services of external-name type
      "},{"location":"e2e-tests/#flag-disable-sync-events","title":"[Flag] disable-sync-events","text":"
      • should create sync events (default)
      • should create sync events
      • should not create sync events
      "},{"location":"e2e-tests/#enable-real-ip","title":"enable-real-ip","text":"
      • trusts X-Forwarded-For header only when setting is true
      • should not trust X-Forwarded-For header when setting is false
      "},{"location":"e2e-tests/#use-forwarded-headers","title":"use-forwarded-headers","text":"
      • should trust X-Forwarded headers when setting is true
      • should not trust X-Forwarded headers when setting is false
      "},{"location":"e2e-tests/#geoip2","title":"Geoip2","text":"
      • should include geoip2 line in config when enabled and db file exists
      • should only allow requests from specific countries
• nginx controller should be up and running when using the autoreload flag
      "},{"location":"e2e-tests/#security-block-","title":"[Security] block-*","text":"
      • should block CIDRs defined in the ConfigMap
      • should block User-Agents defined in the ConfigMap
      • should block Referers defined in the ConfigMap
      "},{"location":"e2e-tests/#security-global-auth-url","title":"[Security] global-auth-url","text":"
• should return status code 401 when requesting any protected service
• should return status code 200 when requesting a whitelisted (via no-auth-locations) service and 401 when requesting a protected service
• should return status code 200 when requesting a whitelisted (via ingress annotation) service and 401 when requesting a protected service
      • should still return status code 200 after auth backend is deleted using cache
      • user retains cookie by default
      • user does not retain cookie if upstream returns error status code
      • user with global-auth-always-set-cookie key in configmap retains cookie if upstream returns error status code
      "},{"location":"e2e-tests/#global-options","title":"global-options","text":"
      • should have worker_rlimit_nofile option
• should have worker_rlimit_nofile option and be independent of the number of worker processes
      "},{"location":"e2e-tests/#grpc","title":"GRPC","text":"
      • should set the correct GRPC Buffer Size
      "},{"location":"e2e-tests/#gzip","title":"gzip","text":"
      • should be disabled by default
      • should be enabled with default settings
      • should set gzip_comp_level to 4
      • should set gzip_disable to msie6
      • should set gzip_min_length to 100
      • should set gzip_types to text/html
      "},{"location":"e2e-tests/#hash-size","title":"hash size","text":"
      • should set server_names_hash_bucket_size
      • should set server_names_hash_max_size
      • should set proxy-headers-hash-bucket-size
      • should set proxy-headers-hash-max-size
      • should set variables-hash-bucket-size
      • should set variables-hash-max-size
      • should set vmap-hash-bucket-size
      "},{"location":"e2e-tests/#flag-ingress-class","title":"[Flag] ingress-class","text":"
      • should ignore Ingress with a different class annotation
      • should ignore Ingress with different controller class
      • should accept both Ingresses with default IngressClassName and IngressClass annotation
      • should ignore Ingress without IngressClass configuration
      • should delete Ingress when class is removed
      • should serve Ingress when class is added
      • should serve Ingress when class is updated between annotation and ingressClassName
      • should ignore Ingress with no class and accept the correctly configured Ingresses
      • should watch Ingress with no class and ignore ingress with a different class
      • should watch Ingress that uses the class name even if spec is different
      • should watch Ingress with correct annotation
      • should ignore Ingress with only IngressClassName
      "},{"location":"e2e-tests/#keep-alive-keep-alive-requests","title":"keep-alive keep-alive-requests","text":"
      • should set keepalive_timeout
      • should set keepalive_requests
      • should set keepalive connection to upstream server
      • should set keep alive connection timeout to upstream server
      • should set keepalive time to upstream server
      • should set the request count to upstream server through one keep alive connection
      "},{"location":"e2e-tests/#configmap-limit-rate","title":"Configmap - limit-rate","text":"
      • Check limit-rate config
      "},{"location":"e2e-tests/#flag-custom-http-and-https-ports","title":"[Flag] custom HTTP and HTTPS ports","text":"
      • should set X-Forwarded-Port headers accordingly when listening on a non-default HTTP port
      • should set X-Forwarded-Port header to 443
      • should set the X-Forwarded-Port header to 443
      "},{"location":"e2e-tests/#log-format-","title":"log-format-*","text":"
      • should not configure log-format escape by default
      • should enable the log-format-escape-json
      • should disable the log-format-escape-json
      • should enable the log-format-escape-none
      • should disable the log-format-escape-none
      • log-format-escape-json enabled
      • log-format default escape
      • log-format-escape-none enabled
      "},{"location":"e2e-tests/#lua-lua-shared-dicts","title":"[Lua] lua-shared-dicts","text":"
      • configures lua shared dicts
      "},{"location":"e2e-tests/#main-snippet","title":"main-snippet","text":"
      • should add value of main-snippet setting to nginx config
      "},{"location":"e2e-tests/#security-modsecurity-snippet","title":"[Security] modsecurity-snippet","text":"
      • should add value of modsecurity-snippet setting to nginx config
      "},{"location":"e2e-tests/#enable-multi-accept","title":"enable-multi-accept","text":"
      • should be enabled by default
      • should be enabled when set to true
      • should be disabled when set to false
      "},{"location":"e2e-tests/#flag-watch-namespace-selector","title":"[Flag] watch namespace selector","text":"
      • should ignore Ingress of namespace without label foo=bar and accept those of namespace with label foo=bar
      "},{"location":"e2e-tests/#security-no-auth-locations","title":"[Security] no-auth-locations","text":"
• should return status code 401 when accessing '/' unauthenticated
• should return status code 200 when accessing '/' authenticated
      • should return status code 200 when accessing '/noauth' unauthenticated
      "},{"location":"e2e-tests/#add-no-tls-redirect-locations","title":"Add no tls redirect locations","text":"
      • Check no tls redirect locations config
      "},{"location":"e2e-tests/#ocsp","title":"OCSP","text":"
      • should enable OCSP and contain stapling information in the connection
      "},{"location":"e2e-tests/#configure-opentelemetry","title":"Configure Opentelemetry","text":"
• should not include the opentelemetry directive
• should include the opentelemetry directive when enabled
• should include the opentelemetry_trust_incoming_spans on directive when enabled
• should not include the opentelemetry_operation_name directive when it is empty
• should include the opentelemetry_operation_name directive when it is configured
      "},{"location":"e2e-tests/#proxy-connect-timeout","title":"proxy-connect-timeout","text":"
      • should set valid proxy timeouts using configmap values
      • should not set invalid proxy timeouts using configmap values
      "},{"location":"e2e-tests/#dynamic-proxy_host","title":"Dynamic $proxy_host","text":"
• a proxy_host should exist
• a proxy_host should exist using the upstream-vhost annotation value
      "},{"location":"e2e-tests/#proxy-next-upstream","title":"proxy-next-upstream","text":"
      • should build proxy next upstream using configmap values
      "},{"location":"e2e-tests/#use-proxy-protocol","title":"use-proxy-protocol","text":"
      • should respect port passed by the PROXY Protocol
      • should respect proto passed by the PROXY Protocol server port
      • should enable PROXY Protocol for HTTPS
      • should enable PROXY Protocol for TCP
      "},{"location":"e2e-tests/#proxy-read-timeout","title":"proxy-read-timeout","text":"
      • should set valid proxy read timeouts using configmap values
      • should not set invalid proxy read timeouts using configmap values
      "},{"location":"e2e-tests/#proxy-send-timeout","title":"proxy-send-timeout","text":"
      • should set valid proxy send timeouts using configmap values
      • should not set invalid proxy send timeouts using configmap values
      "},{"location":"e2e-tests/#reuse-port","title":"reuse-port","text":"
      • reuse port should be enabled by default
      • reuse port should be disabled
      • reuse port should be enabled
      "},{"location":"e2e-tests/#configmap-server-snippet","title":"configmap server-snippet","text":"
      • should add value of server-snippet setting to all ingress config
      • should add global server-snippet and drop annotations per admin config
      "},{"location":"e2e-tests/#server-tokens","title":"server-tokens","text":"
• should not include the Server header in the response
• should include the Server header in the response when enabled
      "},{"location":"e2e-tests/#ssl-ciphers_1","title":"ssl-ciphers","text":"
      • Add ssl ciphers
      "},{"location":"e2e-tests/#flag-enable-ssl-passthrough","title":"[Flag] enable-ssl-passthrough","text":""},{"location":"e2e-tests/#with-enable-ssl-passthrough-enabled","title":"With enable-ssl-passthrough enabled","text":"
      • should enable ssl-passthrough-proxy-port on a different port
      • should pass unknown traffic to default backend and handle known traffic
      "},{"location":"e2e-tests/#configmap-stream-snippet","title":"configmap stream-snippet","text":"
      • should add value of stream-snippet via config map to nginx config
      "},{"location":"e2e-tests/#ssl-tls-protocols-ciphers-and-headers","title":"[SSL] TLS protocols, ciphers and headers)","text":"
      • setting cipher suite
      • setting max-age parameter
      • setting includeSubDomains parameter
      • setting preload parameter
      • overriding what's set from the upstream
      • should not use ports during the HTTP to HTTPS redirection
      • should not use ports or X-Forwarded-Host during the HTTP to HTTPS redirection
      "},{"location":"e2e-tests/#annotation-validations","title":"annotation validations","text":"
      • should allow ingress based on their risk on webhooks
      • should allow ingress based on their risk on webhooks
      "},{"location":"e2e-tests/#ssl-redirect-to-https","title":"[SSL] redirect to HTTPS","text":"
      • should redirect from HTTP to HTTPS when secret is missing
      "},{"location":"e2e-tests/#ssl-secret-update","title":"[SSL] secret update","text":"
• references to secret updates not used in ingress rules should not appear
      • should return the fake SSL certificate if the secret is invalid
      "},{"location":"e2e-tests/#status-status-update","title":"[Status] status update","text":"
      • should update status field after client-go reconnection
      "},{"location":"e2e-tests/#tcp-tcp-services","title":"[TCP] tcp-services","text":"
      • should expose a TCP service
      • should expose an ExternalName TCP service
      • should reload after an update in the configuration
      "},{"location":"faq/","title":"FAQ","text":""},{"location":"faq/#multiple-controller-in-one-cluster","title":"Multiple controller in one cluster","text":"

      Question - How can I easily install multiple instances of the ingress-nginx controller in the same cluster?

      You can install them in different namespaces.

      • Create a new namespace
      kubectl create namespace ingress-nginx-2\n
      • Use Helm to install the additional instance of the ingress controller
      • Ensure you have Helm working (refer to the Helm documentation)
• This assumes that the helm repo for the ingress-nginx controller is already added to your Helm config. If not, you can add it like this:
      helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx\n
• Make sure the helm repo data is up to date:
      helm repo update\n
      • Now, install an additional instance of the ingress-nginx controller like this:
      helm install ingress-nginx-2 ingress-nginx/ingress-nginx  \\\n--namespace ingress-nginx-2 \\\n--set controller.ingressClassResource.name=nginx-two \\\n--set controller.ingressClass=nginx-two \\\n--set controller.ingressClassResource.controllerValue=\"example.com/ingress-nginx-2\" \\\n--set controller.ingressClassResource.enabled=true \\\n--set controller.ingressClassByName=true\n
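
You can then verify that the new IngressClass has been created (a quick check, not part of the official procedure):

kubectl get ingressclass nginx-two\n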

If you need to install yet another instance, repeat the procedure: create a new namespace and change values such as names and namespaces (for example, from "-2" to "-3") as needed.

Note that controller.ingressClassResource.name and controller.ingressClass have to be set correctly. The former creates the IngressClass object, while the latter modifies the deployment of the actual ingress controller pod.
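
Once the second instance is running, an Ingress selects it through the class name set above. A minimal sketch (the Ingress name, host, and Service are hypothetical):

apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: demo-app                      # hypothetical name\n  namespace: default\nspec:\n  ingressClassName: nginx-two         # matches controller.ingressClassResource.name above\n  rules:\n  - host: demo.example.com\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: demo-service        # hypothetical Service\n            port:\n              number: 80\n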

      "},{"location":"faq/#i-cant-use-multiple-namespaces-what-should-i-do","title":"I can't use multiple namespaces, what should I do?","text":"

If you need to install all instances in the same namespace, then you need to specify a different election ID, like this:

      helm install ingress-nginx-2 ingress-nginx/ingress-nginx  \\\n--namespace kube-system \\\n--set controller.electionID=nginx-two-leader \\\n--set controller.ingressClassResource.name=nginx-two \\\n--set controller.ingressClass=nginx-two \\\n--set controller.ingressClassResource.controllerValue=\"example.com/ingress-nginx-2\" \\\n--set controller.ingressClassResource.enabled=true \\\n--set controller.ingressClassByName=true\n
      "},{"location":"faq/#retaining-client-ipaddress","title":"Retaining Client IPAddress","text":"

Question - How do I obtain the real client IP address?

The go-to solution for retaining the real client IP address is to enable the PROXY protocol.

The PROXY protocol has to be enabled on both the Ingress NGINX controller and the L4 load balancer in front of the controller.

By default, the real client IP address is lost when traffic is forwarded over the network. Enabling the PROXY protocol ensures that the connection details are retained, so the real client IP address does not get lost.

Enabling the PROXY protocol on the controller is documented here.

For enabling the PROXY protocol on the load balancer, please refer to the documentation of your infrastructure provider, because that is where the LB is provisioned.

More information is available here.

More information on the PROXY protocol is available here.
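
On the controller side, enabling the PROXY protocol typically comes down to a single ConfigMap key. A minimal sketch, assuming the default ConfigMap name and namespace from a Helm install (adjust both to your setup):

apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: ingress-nginx-controller   # default name from the Helm chart; adjust to your install\n  namespace: ingress-nginx\ndata:\n  use-proxy-protocol: \"true\"       # controller now expects the PROXY protocol header from the LB\n

Remember that the L4 load balancer in front of the controller must send the PROXY protocol header as well, otherwise requests will fail to parse.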

      "},{"location":"faq/#client-ipaddress-on-single-node-cluster","title":"client-ipaddress on single-node cluster","text":"

Single-node clusters are created for dev & test use with tools like \"kind\" or \"minikube\". A trick to simulate a real network with these clusters is to install MetalLB and configure the IP address of the kind container or the minikube VM/container as both the start and the end of the pool for MetalLB in L2 mode. Then the host IP becomes a real client IP address for curl requests sent from the host.

After installing the ingress-nginx controller on a kind or minikube cluster with Helm, you can configure it for the real client IP with a simple change to the service that the ingress-nginx controller creates. The service object of --type LoadBalancer has a field service.spec.externalTrafficPolicy. If you set the value of this field to \"Local\", the real IP address of a client is visible to the controller.

      % kubectl explain service.spec.externalTrafficPolicy\nKIND:       Service\nVERSION:    v1\n\nFIELD: externalTrafficPolicy <string>\n\nDESCRIPTION:\n    externalTrafficPolicy describes how nodes distribute service traffic they\n    receive on one of the Service's \"externally-facing\" addresses (NodePorts,\n    ExternalIPs, and LoadBalancer IPs). If set to \"Local\", the proxy will\n    configure the service in a way that assumes that external load balancers\n    will take care of balancing the service traffic between nodes, and so each\n    node will deliver traffic only to the node-local endpoints of the service,\n    without masquerading the client source IP. (Traffic mistakenly sent to a\n    node with no endpoints will be dropped.) The default value, \"Cluster\", uses\n    the standard behavior of routing to all endpoints evenly (possibly modified\n    by topology and other features). Note that traffic sent to an External IP or\n    LoadBalancer IP from within the cluster will always get \"Cluster\" semantics,\n    but clients sending to a NodePort from within the cluster may need to take\n    traffic policy into account when picking a node.\n\n    Possible enum values:\n     - `\"Cluster\"` routes traffic to all endpoints.\n     - `\"Local\"` preserves the source IP of the traffic by routing only to\n    endpoints on the same node as the traffic was received on (dropping the\n    traffic if there are no local endpoints).\n
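
For example, assuming the default service name and namespace from a Helm install (adjust both to your setup), the field can be set with a one-line patch:

kubectl -n ingress-nginx patch svc ingress-nginx-controller \\\n  -p '{\"spec\": {\"externalTrafficPolicy\": \"Local\"}}'\n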
      "},{"location":"faq/#client-ipaddress-l7","title":"client-ipaddress L7","text":"

The solution is to get the real client IP address from the \"X-Forwarded-For\" HTTP header.

Example: If your application pod behind the Ingress NGINX controller runs the NGINX web server as a reverse proxy, you can do the following to preserve the remote client IP.

• First you need to make sure that the X-Forwarded-For header reaches the backend pod. This is done by using an Ingress NGINX controller ConfigMap key (a sketch follows the nginx.conf example below). It's documented here

      • Next, edit nginx.conf file inside your app pod, to contain the directives shown below:

      set_real_ip_from 0.0.0.0/0; # Trust all IPs (use your VPC CIDR block in production)\nreal_ip_header X-Forwarded-For;\nreal_ip_recursive on;\n\nlog_format main '$remote_addr - $remote_user [$time_local] \"$request\" '\n                '$status $body_bytes_sent \"$http_referer\" '\n                '\"$http_user_agent\" '\n                'host=$host x-forwarded-for=$http_x_forwarded_for';\n\naccess_log /var/log/nginx/access.log main;\n
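
For the first step above, the ConfigMap change might look like this. This is a sketch only; use-forwarded-headers and compute-full-forwarded-for are documented ConfigMap options, and the object name assumes a default Helm install:

apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: ingress-nginx-controller       # adjust to your installation\n  namespace: ingress-nginx\ndata:\n  use-forwarded-headers: \"true\"        # trust and forward incoming X-Forwarded-* headers\n  compute-full-forwarded-for: \"true\"   # append the remote address instead of replacing the header\n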
      "},{"location":"faq/#kubernetes-v122-migration","title":"Kubernetes v1.22 Migration","text":"

      If you are using Ingress objects in your cluster (running Kubernetes older than version 1.22), and you plan to upgrade your Kubernetes version to K8S 1.22 or above, then please read the migration guide here.

      "},{"location":"faq/#validation-of-path","title":"Validation Of path","text":"
• To improve security and follow desired standards in the Kubernetes API spec, the next release, scheduled for v1.8.0, will include a new, optional feature of validating the value of the key ingress.spec.rules.http.paths.path.

      • This behavior will be disabled by default on the 1.8.0 release and enabled by default on the next breaking change release, set for 2.0.0.

      • When \"ingress.spec.rules.http.pathType=Exact\" or \"pathType=Prefix\", this validation will limit the characters accepted on the field \"ingress.spec.rules.http.paths.path\", to \"alphanumeric characters\", and \"/,\" \"_,\" \"-.\" Also, in this case, the path should start with \"/.\"

• When the ingress resource path contains other characters (like on rewrite configurations), the pathType value should be \"ImplementationSpecific\"; see the example after this list.

      • API Spec on pathType is documented here

• When this option is enabled, the validation will happen in the Admission Webhook. So if any new ingress object contains characters other than alphanumeric characters and \"/\", \"_\", \"-\" in the path field, and is not using the pathType value ImplementationSpecific, then the ingress object will be denied admission.

• The cluster admin should establish validation rules using mechanisms like \"Open Policy Agent\" to validate that only authorized users can use the ImplementationSpecific pathType and that only the authorized characters can be used. The configmap value is here

• A complete example of an Open Policy Agent Gatekeeper rule is available here

      • If you have any issues or concerns, please do one of the following:

      • Open a GitHub issue
      • Comment in our Dev Slack Channel
      • Open a thread in our Google Group ingress-nginx-dev@kubernetes.io
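
As an illustration of the ImplementationSpecific case mentioned above, here is a sketch of an Ingress whose path contains regex characters for a rewrite (names and host are hypothetical):

apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: rewrite-demo                 # hypothetical name\n  annotations:\n    nginx.ingress.kubernetes.io/rewrite-target: /$2\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: rewrite.example.com\n    http:\n      paths:\n      - path: /app(/|$)(.*)          # regex characters, so ImplementationSpecific is required\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: demo-service       # hypothetical Service\n            port:\n              number: 80\n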
      "},{"location":"faq/#why-is-chunking-not-working-since-controller-v110","title":"Why is chunking not working since controller v1.10 ?","text":"
• If your code is setting the HTTP header \"Transfer-Encoding: chunked\" and the controller log messages show an error about a duplicate header, it is because of this NGINX change: http://hg.nginx.org/nginx/rev/2bf7792c262e

      • More details are available in this issue https://github.com/kubernetes/ingress-nginx/issues/11162

      "},{"location":"how-it-works/","title":"How it works","text":"

      The objective of this document is to explain how the Ingress-NGINX controller works, in particular how the NGINX model is built and why we need one.

      "},{"location":"how-it-works/#nginx-configuration","title":"NGINX configuration","text":"

The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. However, we don't reload NGINX on changes that impact only upstream configuration (e.g., Endpoints changing when you deploy your app). We use lua-nginx-module to achieve this. Check below to learn more about how it's done.

      "},{"location":"how-it-works/#nginx-model","title":"NGINX model","text":"

Usually, a Kubernetes controller utilizes the synchronization loop pattern to check whether the desired state in the controller is updated or a change is required. For this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and ConfigMaps, to generate a point-in-time configuration file that reflects the state of the cluster.

To get these objects from the cluster, we use Kubernetes Informers, in particular FilteredSharedInformer. These informers allow reacting to changes using callbacks that fire when an object is added, modified, or removed. Unfortunately, there is no way to know whether a particular change is going to affect the final configuration file. Therefore, on every change, we have to rebuild a new model from scratch based on the state of the cluster and compare it to the current model. If the new model equals the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check whether the difference is only about Endpoints. If so, we send the new list of Endpoints to a Lua handler running inside NGINX using an HTTP POST request, and again avoid generating a new NGINX configuration and triggering a reload. If the difference between the running and new models is about more than just Endpoints, we create a new NGINX configuration based on the new model, replace the current model, and trigger a reload.

      One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions.

      The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.

      "},{"location":"how-it-works/#building-the-nginx-model","title":"Building the NGINX model","text":"

Building a model is an expensive operation; for this reason, the use of the synchronization loop is a must. A work queue makes it possible not to lose changes and removes the need for a sync.Mutex to force a single execution of the sync loop. Additionally, it makes it possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller, which is one of the reasons for the work queue.

      Operations to build the model:

      • Order Ingress rules by CreationTimestamp field, i.e., old rules first.

      • If the same path for the same host is defined in more than one Ingress, the oldest rule wins.

      • If more than one Ingress contains a TLS section for the same host, the oldest rule wins.
      • If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins.

      • Create a list of NGINX Servers (per hostname)

      • Create a list of NGINX Upstreams
      • If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions.
      • Annotations are applied to all the paths in the Ingress.
      • Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.
      "},{"location":"how-it-works/#when-a-reload-is-required","title":"When a reload is required","text":"

      The next list describes the scenarios when a reload is required:

      • New Ingress Resource Created.
      • TLS section is added to existing Ingress.
• A change in Ingress annotations that impacts more than just upstream configuration. For instance, the load-balance annotation does not require a reload.
      • A path is added/removed from an Ingress.
• An Ingress, Service, or Secret is removed.
• An object referenced by the Ingress that was missing becomes available, like a Service or Secret.
      • A Secret is updated.
      "},{"location":"how-it-works/#avoiding-reloads","title":"Avoiding reloads","text":"

In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely; this would require an incredible amount of work and at some point makes no sense. This could change only if NGINX changes the way new configurations are read, i.e., if new changes did not replace worker processes.

      "},{"location":"how-it-works/#avoiding-reloads-on-endpoints-changes","title":"Avoiding reloads on Endpoints changes","text":"

On every endpoint change, the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside NGINX. The Lua code in turn stores those backends in a shared memory zone. Then, for every request, Lua code running in the balancer_by_lua context detects which endpoints it should choose the upstream peer from and applies the configured load-balancing algorithm to choose the peer. NGINX then takes care of the rest. This way we avoid reloading NGINX on endpoint changes. Note that this also covers annotation changes that affect only upstream configuration in NGINX.

In a relatively big cluster with frequently deployed apps, this feature saves a significant number of NGINX reloads, which can otherwise affect response latency, load-balancing quality (after every reload, NGINX resets the state of load balancing), and so on.

      "},{"location":"how-it-works/#avoiding-outage-from-wrong-configuration","title":"Avoiding outage from wrong configuration","text":"

Because the ingress controller works using the synchronization loop pattern, it applies the configuration for all matching objects. If some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, is not reloaded, and hence no further ingresses will be taken into account.

To prevent this situation from happening, the Ingress-Nginx Controller optionally exposes a validating admission webhook server to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress objects to the list of ingresses, generates the configuration, and calls NGINX to ensure the configuration has no syntax errors.

      "},{"location":"kubectl-plugin/","title":"kubectl plugin","text":""},{"location":"kubectl-plugin/#the-ingress-nginx-kubectl-plugin","title":"The ingress-nginx kubectl plugin","text":""},{"location":"kubectl-plugin/#installation","title":"Installation","text":"

      Install krew, then run

      kubectl krew install ingress-nginx\n

      to install the plugin. Then run

      kubectl ingress-nginx --help\n

      to make sure the plugin is properly installed and to get a list of commands:

      kubectl ingress-nginx --help\nA kubectl plugin for inspecting your ingress-nginx deployments\n\nUsage:\n  ingress-nginx [command]\n\nAvailable Commands:\n  backends    Inspect the dynamic backend information of an ingress-nginx instance\n  certs       Output the certificate data stored in an ingress-nginx pod\n  conf        Inspect the generated nginx.conf\n  exec        Execute a command inside an ingress-nginx pod\n  general     Inspect the other dynamic ingress-nginx information\n  help        Help about any command\n  info        Show information about the ingress-nginx service\n  ingresses   Provide a short summary of all of the ingress definitions\n  lint        Inspect kubernetes resources for possible issues\n  logs        Get the kubernetes logs for an ingress-nginx pod\n  ssh         ssh into a running ingress-nginx pod\n\nFlags:\n      --as string                      Username to impersonate for the operation\n      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n      --cache-dir string               Default HTTP cache directory (default \"/Users/alexkursell/.kube/http-cache\")\n      --certificate-authority string   Path to a cert file for the certificate authority\n      --client-certificate string      Path to a client certificate file for TLS\n      --client-key string              Path to a client key file for TLS\n      --cluster string                 The name of the kubeconfig cluster to use\n      --context string                 The name of the kubeconfig context to use\n  -h, --help                           help for ingress-nginx\n      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n      --kubeconfig string              Path to the kubeconfig file to use for CLI requests.\n  -n, --namespace string               If present, the namespace scope for this CLI request\n      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\")\n  -s, --server string                  The address and port of the Kubernetes API server\n      --token string                   Bearer token for authentication to the API server\n      --user string                    The name of the kubeconfig user to use\n\nUse \"ingress-nginx [command] --help\" for more information about a command.\n
      "},{"location":"kubectl-plugin/#common-flags","title":"Common Flags","text":"
      • Every subcommand supports the basic kubectl configuration flags like --namespace, --context, --client-key and so on.
• Subcommands that act on a particular ingress-nginx pod (backends, certs, conf, exec, general, logs, ssh) support the --deployment <deployment>, --pod <pod>, and --container <container> flags to select either a pod from a deployment with the given name, or a pod with the given name (and the given container name). The --deployment flag defaults to ingress-nginx-controller, and the --container flag defaults to controller; see the example after this list.
      • Subcommands that inspect resources (ingresses, lint) support the --all-namespaces flag, which causes them to inspect resources in every namespace.
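
For example, combining these flags to read logs from a specific deployment and container (the values shown are the documented defaults):

kubectl ingress-nginx logs -n ingress-nginx \\\n  --deployment ingress-nginx-controller \\\n  --container controller\n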
      "},{"location":"kubectl-plugin/#subcommands","title":"Subcommands","text":"

      Note that backends, general, certs, and conf require ingress-nginx version 0.23.0 or higher.

      "},{"location":"kubectl-plugin/#backends","title":"backends","text":"

      Run kubectl ingress-nginx backends to get a JSON array of the backends that an ingress-nginx controller currently knows about:

      $ kubectl ingress-nginx backends -n ingress-nginx\n[\n  {\n    \"name\": \"default-apple-service-5678\",\n    \"service\": {\n      \"metadata\": {\n        \"creationTimestamp\": null\n      },\n      \"spec\": {\n        \"ports\": [\n          {\n            \"protocol\": \"TCP\",\n            \"port\": 5678,\n            \"targetPort\": 5678\n          }\n        ],\n        \"selector\": {\n          \"app\": \"apple\"\n        },\n        \"clusterIP\": \"10.97.230.121\",\n        \"type\": \"ClusterIP\",\n        \"sessionAffinity\": \"None\"\n      },\n      \"status\": {\n        \"loadBalancer\": {}\n      }\n    },\n    \"port\": 0,\n    \"sslPassthrough\": false,\n    \"endpoints\": [\n      {\n        \"address\": \"10.1.3.86\",\n        \"port\": \"5678\"\n      }\n    ],\n    \"sessionAffinityConfig\": {\n      \"name\": \"\",\n      \"cookieSessionAffinity\": {\n        \"name\": \"\"\n      }\n    },\n    \"upstreamHashByConfig\": {\n      \"upstream-hash-by-subset-size\": 3\n    },\n    \"noServer\": false,\n    \"trafficShapingPolicy\": {\n      \"weight\": 0,\n      \"header\": \"\",\n      \"headerValue\": \"\",\n      \"cookie\": \"\"\n    }\n  },\n  {\n    \"name\": \"default-echo-service-8080\",\n    ...\n  },\n  {\n    \"name\": \"upstream-default-backend\",\n    ...\n  }\n]\n

      Add the --list option to show only the backend names. Add the --backend <backend> option to show only the backend with the given name.

      "},{"location":"kubectl-plugin/#certs","title":"certs","text":"

      Use kubectl ingress-nginx certs --host <hostname> to dump the SSL cert/key information for a given host.

      WARNING: This command will dump sensitive private key information. Don't blindly share the output, and certainly don't log it anywhere.

      $ kubectl ingress-nginx certs -n ingress-nginx --host testaddr.local\n-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n\n-----BEGIN RSA PRIVATE KEY-----\n<REDACTED! DO NOT SHARE THIS!>\n-----END RSA PRIVATE KEY-----\n
      "},{"location":"kubectl-plugin/#conf","title":"conf","text":"

      Use kubectl ingress-nginx conf to dump the generated nginx.conf file. Add the --host <hostname> option to view only the server block for that host:

      kubectl ingress-nginx conf -n ingress-nginx --host testaddr.local\n\n    server {\n        server_name testaddr.local ;\n\n        listen 80;\n\n        set $proxy_upstream_name \"-\";\n        set $pass_access_scheme $scheme;\n        set $pass_server_port $server_port;\n        set $best_http_host $http_host;\n        set $pass_port $pass_server_port;\n\n        location / {\n\n            set $namespace      \"\";\n            set $ingress_name   \"\";\n            set $service_name   \"\";\n            set $service_port   \"0\";\n            set $location_path  \"/\";\n\n...\n
      "},{"location":"kubectl-plugin/#exec","title":"exec","text":"

      kubectl ingress-nginx exec is exactly the same as kubectl exec, with the same command flags. It will automatically choose an ingress-nginx pod to run the command in.

      $ kubectl ingress-nginx exec -i -n ingress-nginx -- ls /etc/nginx\nfastcgi_params\ngeoip\nlua\nmime.types\nmodsecurity\nmodules\nnginx.conf\nopentracing.json\nopentelemetry.toml\nowasp-modsecurity-crs\ntemplate\n
      "},{"location":"kubectl-plugin/#info","title":"info","text":"

Shows the internal and external IPs/CNAMEs for an ingress-nginx service.

      $ kubectl ingress-nginx info -n ingress-nginx\nService cluster IP address: 10.187.253.31\nLoadBalancer IP|CNAME: 35.123.123.123\n

      Use the --service <service> flag if your ingress-nginx LoadBalancer service is not named ingress-nginx.

      "},{"location":"kubectl-plugin/#ingresses","title":"ingresses","text":"

kubectl ingress-nginx ingresses, alternatively kubectl ingress-nginx ing, shows a more detailed view of the ingress definitions in a namespace.

      Compare:

      $ kubectl get ingresses --all-namespaces\nNAMESPACE   NAME               HOSTS                            ADDRESS     PORTS   AGE\ndefault     example-ingress1   testaddr.local,testaddr2.local   localhost   80      5d\ndefault     test-ingress-2     *                                localhost   80      5d\n

      vs.

      $ kubectl ingress-nginx ingresses --all-namespaces\nNAMESPACE   INGRESS NAME       HOST+PATH                        ADDRESSES   TLS   SERVICE         SERVICE PORT   ENDPOINTS\ndefault     example-ingress1   testaddr.local/etameta           localhost   NO    pear-service    5678           5\ndefault     example-ingress1   testaddr2.local/otherpath        localhost   NO    apple-service   5678           1\ndefault     example-ingress1   testaddr2.local/otherotherpath   localhost   NO    pear-service    5678           5\ndefault     test-ingress-2     *                                localhost   NO    echo-service    8080           2\n
      "},{"location":"kubectl-plugin/#lint","title":"lint","text":"

      kubectl ingress-nginx lint can check a namespace or entire cluster for potential configuration issues. This command is especially useful when upgrading between ingress-nginx versions.

$ kubectl ingress-nginx lint --all-namespaces --verbose
Checking ingresses...
✗ anamespace/this-nginx
  - Contains the removed session-cookie-hash annotation.
       Lint added for version 0.24.0
       https://github.com/kubernetes/ingress-nginx/issues/3743
✗ othernamespace/ingress-definition-blah
  - The rewrite-target annotation value does not reference a capture group
      Lint added for version 0.22.0
      https://github.com/kubernetes/ingress-nginx/issues/3174

Checking deployments...
✗ namespace2/ingress-nginx-controller
  - Uses removed config flag --sort-backends
      Lint added for version 0.22.0
      https://github.com/kubernetes/ingress-nginx/issues/3655
  - Uses removed config flag --enable-dynamic-certificates
      Lint added for version 0.24.0
      https://github.com/kubernetes/ingress-nginx/issues/3808

      To show the lints added only for a particular ingress-nginx release, use the --from-version and --to-version flags:

$ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0.24.0 --to-version 0.24.0
Checking ingresses...
✗ anamespace/this-nginx
  - Contains the removed session-cookie-hash annotation.
       Lint added for version 0.24.0
       https://github.com/kubernetes/ingress-nginx/issues/3743

Checking deployments...
✗ namespace2/ingress-nginx-controller
  - Uses removed config flag --enable-dynamic-certificates
      Lint added for version 0.24.0
      https://github.com/kubernetes/ingress-nginx/issues/3808
      "},{"location":"kubectl-plugin/#logs","title":"logs","text":"

      kubectl ingress-nginx logs is almost the same as kubectl logs, with fewer flags. It will automatically choose an ingress-nginx pod to read logs from.

$ kubectl ingress-nginx logs -n ingress-nginx
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:    dev
  Build:      git-48dc3a867
  Repository: git@github.com:kubernetes/ingress-nginx.git
-------------------------------------------------------------------------------

W0405 16:53:46.061589       7 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: nginx/1.15.9
W0405 16:53:46.070093       7 client_config.go:549] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0405 16:53:46.070499       7 main.go:205] Creating API client for https://10.96.0.1:443
I0405 16:53:46.077784       7 main.go:249] Running in Kubernetes cluster version v1.10 (v1.10.11) - git (clean) commit 637c7e288581ee40ab4ca210618a89a555b6e7e9 - platform linux/amd64
I0405 16:53:46.183359       7 nginx.go:265] Starting NGINX Ingress controller
I0405 16:53:46.193913       7 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"82258915-563e-11e9-9c52-025000000001", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
...
      "},{"location":"kubectl-plugin/#ssh","title":"ssh","text":"

      kubectl ingress-nginx ssh is exactly the same as kubectl ingress-nginx exec -it -- /bin/bash. Use it when you want to quickly be dropped into a shell inside a running ingress-nginx container.

$ kubectl ingress-nginx ssh -n ingress-nginx
www-data@ingress-nginx-controller-7cbf77c976-wx5pn:/etc/nginx$
      "},{"location":"lua_tests/","title":"Lua Tests","text":""},{"location":"lua_tests/#running-the-lua-tests","title":"Running the Lua Tests","text":"

      To run the Lua tests you can run the following from the root directory:

make lua-test

This command uses Docker, so no dependencies other than Docker need to be installed.

      "},{"location":"lua_tests/#where-are-the-lua-tests","title":"Where are the Lua Tests?","text":"

The Lua tests can be found in the rootfs/etc/nginx/lua/test directory.

      "},{"location":"troubleshooting/","title":"Troubleshooting","text":""},{"location":"troubleshooting/#troubleshooting","title":"Troubleshooting","text":""},{"location":"troubleshooting/#ingress-controller-logs-and-events","title":"Ingress-Controller Logs and Events","text":"

      There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information.

      "},{"location":"troubleshooting/#check-the-ingress-resource-events","title":"Check the Ingress Resource Events","text":"
      $ kubectl get ing -n <namespace-of-ingress-resource>\nNAME           HOSTS      ADDRESS     PORTS     AGE\ncafe-ingress   cafe.com   10.0.2.15   80        25s\n\n$ kubectl describe ing <ingress-resource-name> -n <namespace-of-ingress-resource>\nName:             cafe-ingress\nNamespace:        default\nAddress:          10.0.2.15\nDefault backend:  default-http-backend:80 (172.17.0.5:8080)\nRules:\n  Host      Path  Backends\n  ----      ----  --------\n  cafe.com\n            /tea      tea-svc:80 (<none>)\n            /coffee   coffee-svc:80 (<none>)\nAnnotations:\n  kubectl.kubernetes.io/last-applied-configuration:  {\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"/apis/networking/v1/namespaces/default/ingresses/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}}\n\nEvents:\n  Type    Reason  Age   From                      Message\n  ----    ------  ----  ----                      -------\n  Normal  CREATE  1m    ingress-nginx-controller  Ingress default/cafe-ingress\n  Normal  UPDATE  58s   ingress-nginx-controller  Ingress default/cafe-ingress\n
      "},{"location":"troubleshooting/#check-the-ingress-controller-logs","title":"Check the Ingress Controller Logs","text":"
      $ kubectl get pods -n <namespace-of-ingress-controller>\nNAME                                        READY     STATUS    RESTARTS   AGE\ningress-nginx-controller-67956bf89d-fv58j   1/1       Running   0          1m\n\n$ kubectl logs -n <namespace> ingress-nginx-controller-67956bf89d-fv58j\n-------------------------------------------------------------------------------\nNGINX Ingress controller\n  Release:    0.14.0\n  Build:      git-734361d\n  Repository: https://github.com/kubernetes/ingress-nginx\n-------------------------------------------------------------------------------\n....\n
      "},{"location":"troubleshooting/#check-the-nginx-configuration","title":"Check the Nginx Configuration","text":"
      $ kubectl get pods -n <namespace-of-ingress-controller>\nNAME                                        READY     STATUS    RESTARTS   AGE\ningress-nginx-controller-67956bf89d-fv58j   1/1       Running   0          1m\n\n$ kubectl exec -it -n <namespace-of-ingress-controller> ingress-nginx-controller-67956bf89d-fv58j -- cat /etc/nginx/nginx.conf\ndaemon off;\nworker_processes 2;\npid /run/nginx.pid;\nworker_rlimit_nofile 523264;\nworker_shutdown_timeout 240s;\nevents {\n    multi_accept        on;\n    worker_connections  16384;\n    use                 epoll;\n}\nhttp {\n....\n
      "},{"location":"troubleshooting/#check-if-used-services-exist","title":"Check if used Services Exist","text":"
      $ kubectl get svc --all-namespaces\nNAMESPACE     NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE\ndefault       coffee-svc             ClusterIP   10.106.154.35    <none>        80/TCP          18m\ndefault       kubernetes             ClusterIP   10.96.0.1        <none>        443/TCP         30m\ndefault       tea-svc                ClusterIP   10.104.172.12    <none>        80/TCP          18m\nkube-system   default-http-backend   NodePort    10.108.189.236   <none>        80:30001/TCP    30m\nkube-system   kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   30m\nkube-system   kubernetes-dashboard   NodePort    10.103.128.17    <none>        80:30000/TCP    30m\n
      "},{"location":"troubleshooting/#debug-logging","title":"Debug Logging","text":"

The logging level can be increased with the flag --v=X. This is done by editing the deployment; a one-line patch alternative is sketched after the list below.

$ kubectl get deploy -n <namespace-of-ingress-controller>
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default-http-backend       1         1         1            1           35m
ingress-nginx-controller   1         1         1            1           35m

$ kubectl edit deploy -n <namespace-of-ingress-controller> ingress-nginx-controller
# Add --v=X to "- args", where X is an integer
      • --v=2 shows details using diff about the changes in the configuration in nginx
• --v=3 shows details about the service, Ingress rule, and endpoint changes, and dumps the nginx configuration in JSON format
      • --v=5 configures NGINX in debug mode
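If you prefer not to open an editor, the same change can be made with a JSON patch. This is a minimal sketch, assuming the default deployment name and namespace and a single container at index 0:

# Append --v=3 to the controller's argument list
kubectl patch deployment ingress-nginx-controller -n ingress-nginx \
  --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--v=3"}]'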
      "},{"location":"troubleshooting/#authentication-to-the-kubernetes-api-server","title":"Authentication to the Kubernetes API Server","text":"

      A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file.

      Both authentications must work:

+-------------+   service          +------------+
|             |   authentication   |            |
+  apiserver  +<-------------------+  ingress   |
|             |                    | controller |
+-------------+                    +------------+

      Service authentication

The Ingress controller needs information from the apiserver. Therefore, authentication is required, which can be achieved in the following ways:

      • Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details.

• Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case, a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. Using --kubeconfig does not require the flag --apiserver-host. The format of the file is identical to ~/.kube/config, which is used by kubectl to connect to the API server. See the 'kubeconfig' section for details.

• Using the flag --apiserver-host: With the flag --apiserver-host=http://localhost:8080, it is possible to specify an unsecured API server or to reach a remote Kubernetes cluster using kubectl proxy. Please do not use this approach in production.

In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left-hand side.

Kubernetes                                                  Workstation
+---------------------------------------------------+     +------------------+
|                                                   |     |                  |
|  +-----------+   apiserver        +------------+  |     |  +------------+  |
|  |           |   proxy            |            |  |     |  |            |  |
|  | apiserver |                    |  ingress   |  |     |  |  ingress   |  |
|  |           |                    | controller |  |     |  | controller |  |
|  |           |                    |            |  |     |  |            |  |
|  |           |                    |            |  |     |  |            |  |
|  |           |  service account/  |            |  |     |  |            |  |
|  |           |  kubeconfig        |            |  |     |  |            |  |
|  |           +<-------------------+            |  |     |  |            |  |
|  |           |                    |            |  |     |  |            |  |
|  +------+----+      kubeconfig    +------+-----+  |     |  +------+-----+  |
|         |<--------------------------------------------------------|        |
|                                                   |     |                  |
+---------------------------------------------------+     +------------------+
      "},{"location":"troubleshooting/#service-account","title":"Service Account","text":"

      If using a service account to connect to the API server, the ingress-controller expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server.

      Verify with the following commands:

# start a container that contains curl
$ kubectl run -it --rm test --image=curlimages/curl --restart=Never -- /bin/sh

# check if secret exists
/ $ ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt     namespace  token
/ $

# check base connectivity from cluster inside
/ $ curl -k https://kubernetes.default.svc.cluster.local
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {

  },
  "code": 403
}/ $

# connect using tokens
/ $ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" https://kubernetes.default.svc.cluster.local && echo
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/",
    ... TRUNCATED
    "/readyz/shutdown",
    "/version"
  ]
}
/ $

# when you type `exit` or `^D` the test pod will be deleted.

      If it is not working, there are two possible reasons:

      1. The contents of the tokens are invalid. Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret <name>. It will automatically be recreated.

      2. You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter.

        Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers.

      More information:

      • User Guide: Service Accounts
      • Cluster Administrator Guide: Managing Service Accounts
      "},{"location":"troubleshooting/#kube-config","title":"Kube-Config","text":"

      If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment.
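As an illustration, a hedged sketch of adding that flag with a JSON patch instead of editing the deployment by hand (assumes the default deployment name and namespace, a single container, and that the kubeconfig file is already mounted at the path shown):

kubectl -n ingress-nginx patch deployment ingress-nginx-controller \
  --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubeconfig=/etc/kubernetes/kubeconfig.yaml"}]'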

      "},{"location":"troubleshooting/#using-gdb-with-nginx","title":"Using GDB with Nginx","text":"

GDB can be used with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations.

      Note: The below is based on the nginx documentation.

      1. SSH into the worker

$ ssh user@workerIP
      2. Obtain the Docker Container Running nginx

$ docker ps | grep ingress-nginx-controller
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
d9e1d243156a        registry.k8s.io/ingress-nginx/controller   "/usr/bin/dumb-init …"   19 minutes ago      Up 19 minutes                                                                            k8s_ingress-nginx-controller_ingress-nginx-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0
      3. Exec into the container

$ docker exec -it --user=0 --privileged d9e1d243156a bash
      4. Make sure nginx is running in --with-debug

$ nginx -V 2>&1 | grep -- '--with-debug'
      5. Get list of processes running on container

$ ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 20:23 ?        00:00:00 /usr/bin/dumb-init /nginx-ingres
root         5     1  0 20:23 ?        00:00:05 /ingress-nginx-controller --defa
root        21     5  0 20:23 ?        00:00:00 nginx: master process /usr/sbin/
nobody     106    21  0 20:23 ?        00:00:00 nginx: worker process
nobody     107    21  0 20:23 ?        00:00:00 nginx: worker process
root       172     0  0 20:43 pts/0    00:00:00 bash
      6. Attach gdb to the nginx master process

$ gdb -p 21
....
Attaching to process 21
Reading symbols from /usr/sbin/nginx...done.
....
(gdb)
      7. Copy and paste the following:

set $cd = ngx_cycle->config_dump
set $nelts = $cd.nelts
set $elts = (ngx_conf_dump_t*)($cd.elts)
while ($nelts-- > 0)
set $name = $elts[$nelts]->name.data
printf "Dumping %s to nginx_conf.txt\n", $name
append memory nginx_conf.txt \
        $elts[$nelts]->buffer.start $elts[$nelts]->buffer.end
end
      8. Quit GDB by pressing CTRL+D

      9. Open nginx_conf.txt

cat nginx_conf.txt
      "},{"location":"troubleshooting/#image-related-issues-faced-on-nginx-425-or-other-versions-helm-chart-versions","title":"Image related issues faced on Nginx 4.2.5 or other versions (Helm chart versions)","text":"
1. In case you face the below error while installing ingress-nginx using the Helm chart (either via helm commands or the helm_release Terraform provider):

Warning  Failed     5m5s (x4 over 6m34s)   kubelet            Failed to pull image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47": rpc error: code = Unknown desc = failed to pull and unpack image "registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47": failed to resolve reference "registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47": failed to do request: Head "https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47": EOF
then please follow the steps below.

2. During troubleshooting, you can also execute the below commands from your local machine to test connectivity to the image repositories:

        a. curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null

(⎈ |myprompt)➜  ~ curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
(⎈ |myprompt)➜  ~
        b. curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
(⎈ |myprompt)➜  ~ curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
HTTP/2 200
docker-distribution-api-version: registry/2.0
content-type: application/vnd.docker.distribution.manifest.list.v2+json
docker-content-digest: sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
content-length: 1384
date: Wed, 28 Sep 2022 16:46:28 GMT
server: Docker Registry
x-xss-protection: 0
x-frame-options: SAMEORIGIN
alt-svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"

(⎈ |myprompt)➜  ~
Redirection is implemented in the registry's proxy to ensure that the images can be pulled.

3. The recommended solution is to whitelist the below image repositories:

*.appspot.com
*.k8s.io
*.pkg.dev
*.gcr.io
More details about the above repos:
a. *.k8s.io -> to ensure you can pull any image from registry.k8s.io
b. *.gcr.io -> GCP services are used for image hosting; this is part of the domains suggested by GCP to allow, so users can pull images from their container registry services
c. *.appspot.com -> this is a Google domain, part of the domains used for GCR

      "},{"location":"troubleshooting/#unable-to-listen-on-port-80443","title":"Unable to listen on port (80/443)","text":"

One possible reason for this error is lack of permission to bind to the port. Ports 80, 443, and any other port < 1024 are Linux privileged ports, which historically could only be bound by root. The ingress-nginx-controller uses the CAP_NET_BIND_SERVICE Linux capability to allow binding these ports as a normal user (www-data / 101). This involves two components:

1. In the image, the /nginx-ingress-controller file has the cap_net_bind_service capability added (e.g. via setcap).
2. The NET_BIND_SERVICE capability is added to the container in the containerSecurityContext of the deployment.

If encountering this on one or some node(s) and not on others, try to purge and pull a fresh copy of the image to the affected node(s), in case the underlying image layers have been corrupted and the executable has lost the capability.

      "},{"location":"troubleshooting/#create-a-test-pod","title":"Create a test pod","text":"

The /nginx-ingress-controller process exits/crashes when encountering this error, making it difficult to troubleshoot what is happening inside the container. To get around this, start an equivalent container running "sleep 3600", and exec into it for further troubleshooting. For example:

apiVersion: v1
kind: Pod
metadata:
  name: ingress-nginx-sleep
  namespace: default
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: ##_CONTROLLER_IMAGE_##
      resources:
        requests:
          memory: "512Mi"
          cpu: "500m"
        limits:
          memory: "1Gi"
          cpu: "1"
      command: ["sleep"]
      args: ["3600"]
      ports:
      - containerPort: 80
        name: http
        protocol: TCP
      - containerPort: 443
        name: https
        protocol: TCP
      securityContext:
        allowPrivilegeEscalation: true
        capabilities:
          add:
          - NET_BIND_SERVICE
          drop:
          - ALL
        runAsUser: 101
  restartPolicy: Never
  nodeSelector:
    kubernetes.io/hostname: ##_NODE_NAME_##
  tolerations:
  - key: "node.kubernetes.io/unschedulable"
    operator: "Exists"
    effect: NoSchedule
• update the namespace if applicable/desired
• replace ##_NODE_NAME_## with the problematic node (or remove the nodeSelector section if the problem is not confined to one node)
• replace ##_CONTROLLER_IMAGE_## with the same image as in use by your ingress-nginx deployment
• confirm the securityContext section matches what is in place for ingress-nginx-controller pods in your cluster

      Apply the YAML and open a shell into the pod. Try to manually run the controller process:

$ /nginx-ingress-controller
      You should get the same error as from the ingress controller pod logs.

      Confirm the capabilities are properly surfacing into the pod:

$ grep CapBnd /proc/1/status
CapBnd: 0000000000000400
The above value has only net_bind_service enabled (per the security context in the YAML, which adds that capability and drops all others). If you get a different value, you can decode it on another Linux box (capsh is not available in this container) as shown below, and then figure out why the specified capabilities are not propagating into the pod/container.
$ capsh --decode=0000000000000400
0x0000000000000400=cap_net_bind_service

      "},{"location":"troubleshooting/#create-a-test-pod-as-root","title":"Create a test pod as root","text":"

(Note: this may be restricted by PodSecurityPolicy, PodSecurityAdmission/Standards, OPA Gatekeeper, etc., in which case you will need an appropriate workaround for testing, e.g. deploying in a new namespace without the restrictions.) To test further, you may want to install additional utilities. Modify the pod YAML by:

• changing runAsUser from 101 to 0
• removing the "drop..ALL" section from the capabilities

      Some things to try after shelling into this container:

      Try running the controller as the www-data (101) user:

$ chmod 4755 /nginx-ingress-controller
$ /nginx-ingress-controller
      Examine the errors to see if there is still an issue listening on the port or if it passed that and moved on to other expected errors due to running out of context.

      Install the libcap package and check capabilities on the file:

$ apk add libcap
(1/1) Installing libcap (2.50-r0)
Executing busybox-1.33.1-r7.trigger
OK: 26 MiB in 41 packages
$ getcap /nginx-ingress-controller
/nginx-ingress-controller cap_net_bind_service=ep
      (if missing, see above about purging image on the server and re-pulling)

      Strace the executable to see what system calls are being executed when it fails:

$ apk add strace
(1/1) Installing strace (5.12-r0)
Executing busybox-1.33.1-r7.trigger
OK: 28 MiB in 42 packages
$ strace /nginx-ingress-controller
execve("/nginx-ingress-controller", ["/nginx-ingress-controller"], 0x7ffeb9eb3240 /* 131 vars */) = 0
arch_prctl(ARCH_SET_FS, 0x29ea690)      = 0
...

      "},{"location":"deploy/","title":"Installation Guide","text":"

      There are multiple ways to install the Ingress-Nginx Controller:

      • with Helm, using the project repository chart;
      • with kubectl apply, using YAML manifests;
      • with specific addons (e.g. for minikube or MicroK8s).

      On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to get started as fast as possible, you can check the quick start instructions. However, in many environments, you can improve the performance or get better logs by enabling extra features. We recommend that you check the environment-specific instructions for details about optimizing the ingress controller for your particular environment or cloud provider.

      "},{"location":"deploy/#contents","title":"Contents","text":"
      • Quick start

      • Environment-specific instructions

      • ... Docker Desktop
      • ... Rancher Desktop
      • ... minikube
      • ... MicroK8s
      • ... AWS
      • ... GCE - GKE
      • ... Azure
      • ... Digital Ocean
      • ... Scaleway
      • ... Exoscale
      • ... Oracle Cloud Infrastructure
      • ... OVHcloud
      • ... Bare-metal
      • Miscellaneous
      "},{"location":"deploy/#quick-start","title":"Quick start","text":"

      If you have Helm, you can deploy the ingress controller with the following command:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

      It will install the controller in the ingress-nginx namespace, creating that namespace if it doesn't already exist.

      Info

      This command is idempotent:

      • if the ingress controller is not installed, it will install it,
      • if the ingress controller is already installed, it will upgrade it.

If you want a full list of values that you can set while installing with Helm, then run:

helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx

      Helm install on AWS/GCP/Azure/Other providers

The ingress-nginx-controller Helm chart is a generic install out of the box. The default set of Helm values is not configured for installation on any particular infrastructure provider, so the annotations applicable to your cloud provider must be customized by you. See AWS LB Controller. Examples of some annotations needed for the service resource of --type LoadBalancer on AWS are below:

  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-manage-backend-security-group-rules: "true"
    service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-security-groups: "sg-something1 sg-something2"
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "somebucket"
    service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "ingress-nginx"
    service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "5"
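When installing through Helm, annotations like these can be supplied as chart values instead of editing the Service afterwards. A minimal sketch, using the chart's controller.service.annotations value and one of the AWS annotations above (note the escaped dots in the key):

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set-string controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"=nlb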

      If you don't have Helm or if you prefer to use a YAML manifest, you can run the following command instead:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/cloud/deploy.yaml

      Info

      The YAML manifest in the command above was generated with helm template, so you will end up with almost the same resources as if you had used Helm to install the controller.

      Attention

If you are running an old version of Kubernetes (1.18 or earlier), please read this paragraph for specific instructions. Because of API deprecations, the default manifest may not work on your cluster. Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider.

      "},{"location":"deploy/#firewall-configuration","title":"Firewall configuration","text":"

      To check which ports are used by your installation of ingress-nginx, look at the output of kubectl -n ingress-nginx get pod -o yaml. In general, you need:

      • Port 8443 open between all hosts on which the kubernetes nodes are running. This is used for the ingress-nginx admission controller.
      • Port 80 (for HTTP) and/or 443 (for HTTPS) open to the public on the kubernetes nodes to which the DNS of your apps are pointing.
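Rather than reading the full pod YAML mentioned above, the container ports can also be listed directly with a jsonpath query. A small sketch, assuming the standard app.kubernetes.io/name=ingress-nginx label used elsewhere in this guide:

# print the containerPort values of all controller pods, space-separated
kubectl -n ingress-nginx get pod -l app.kubernetes.io/name=ingress-nginx \
  -o jsonpath='{.items[*].spec.containers[*].ports[*].containerPort}'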
      "},{"location":"deploy/#pre-flight-check","title":"Pre-flight check","text":"

      A few pods should start in the ingress-nginx namespace:

kubectl get pods --namespace=ingress-nginx

      After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready:

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
      "},{"location":"deploy/#local-testing","title":"Local testing","text":"

      Let's create a simple web server and the associated service:

kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo

      Then create an ingress resource. The following example uses a host that maps to localhost:

kubectl create ingress demo-localhost --class=nginx \
  --rule="demo.localdev.me/*=demo:80"

      Now, forward a local port to the ingress controller:

kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80

      Info

A note on DNS and network connectivity: this documentation assumes the user is aware of the DNS and network-routing aspects involved in using ingress. Port-forwarding, as used above, is the easiest way to demo ingress at work: the kubectl port-forward command forwards port 8080 on the local machine's TCP/IP stack to port 80 of the service created by the ingress-nginx installation, so traffic sent to port 8080 on localhost reaches port 80 of the ingress controller's service. Port-forwarding is not for a production use-case; here it merely simulates an HTTP request originating from outside the cluster reaching the ingress-nginx controller's service, which is exposed to receive traffic from outside the cluster.

      This issue shows a typical DNS problem and its solution.

At this point, you can access your deployment using curl:

curl --resolve demo.localdev.me:8080:127.0.0.1 http://demo.localdev.me:8080

You should see an HTML response containing text like "It works!".

      "},{"location":"deploy/#online-testing","title":"Online testing","text":"

If your Kubernetes cluster is a "real" cluster that supports services of type LoadBalancer, it will have allocated an external IP address or FQDN to the ingress controller.

      You can see that IP address or FQDN with the following command:

kubectl get service ingress-nginx-controller --namespace=ingress-nginx

      It will be the EXTERNAL-IP field. If that field shows <pending>, this means that your Kubernetes cluster wasn't able to provision the load balancer (generally, this is because it doesn't support services of type LoadBalancer).

      Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress resource. The following example assumes that you have set up a DNS record for www.demo.io:

kubectl create ingress demo --class=nginx \
  --rule="www.demo.io/*=demo:80"

Alternatively, the above command can be rewritten with the --rule flag as follows:

kubectl create ingress demo --class=nginx \
  --rule www.demo.io/=demo:80

You should then be able to see the "It works!" page when you connect to http://www.demo.io/. Congratulations, you are serving a public website hosted on a Kubernetes cluster! 🎉

      "},{"location":"deploy/#environment-specific-instructions","title":"Environment-specific instructions","text":""},{"location":"deploy/#local-development-clusters","title":"Local development clusters","text":""},{"location":"deploy/#minikube","title":"minikube","text":"

      The ingress controller can be installed through minikube's addons system:

minikube addons enable ingress
      "},{"location":"deploy/#microk8s","title":"MicroK8s","text":"

      The ingress controller can be installed through MicroK8s's addons system:

microk8s enable ingress

      Please check the MicroK8s documentation page for details.

      "},{"location":"deploy/#docker-desktop","title":"Docker Desktop","text":"

      Kubernetes is available in Docker Desktop:

      • Mac, from version 18.06.0-ce
      • Windows, from version 18.06.0-ce

      First, make sure that Kubernetes is enabled in the Docker settings. The command kubectl get nodes should show a single node called docker-desktop.

      The ingress controller can be installed on Docker Desktop using the default quick start instructions.

      On most systems, if you don't have any other service of type LoadBalancer bound to port 80, the ingress controller will be assigned the EXTERNAL-IP of localhost, which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the kubectl port-forward method described in the local testing section.

      "},{"location":"deploy/#rancher-desktop","title":"Rancher Desktop","text":"

      Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop.

Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use the Ingress-Nginx Controller in place of the default Traefik, disable Traefik from the Preferences > Kubernetes menu.

Once Traefik is disabled, the Ingress-Nginx Controller can be installed on Rancher Desktop using the default quick start instructions. Follow the instructions described in the local testing section to try a sample.

      "},{"location":"deploy/#cloud-deployments","title":"Cloud deployments","text":"

      If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the externalTrafficPolicy of the ingress controller Service to Local (instead of the default Cluster) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local to the helm install or helm upgrade command.

      Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. --set controller.config.use-proxy-protocol=true) and in the cloud provider's load balancer configuration to function correctly.
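As a sketch, both options from this section can be combined with the quick-start Helm command (values exactly as named above; verify that your load balancer actually supports the PROXY protocol before enabling it):

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.externalTrafficPolicy=Local \
  --set controller.config.use-proxy-protocol=true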

      In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.

      "},{"location":"deploy/#aws","title":"AWS","text":"

      In AWS, we use a Network load balancer (NLB) to expose the Ingress-Nginx Controller behind a Service of Type=LoadBalancer.

      Info

      The provided templates illustrate the setup for legacy in-tree service load balancer for AWS NLB. AWS provides the documentation on how to use Network load balancing on Amazon EKS with AWS Load Balancer Controller.

      "},{"location":"deploy/#network-load-balancer-nlb","title":"Network Load Balancer (NLB)","text":"
      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/aws/deploy.yaml\n
      "},{"location":"deploy/#tls-termination-in-aws-load-balancer-nlb","title":"TLS termination in AWS Load Balancer (NLB)","text":"

      By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB.

1. Download the deploy.yaml template:

   wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml

2. Edit the file and change the VPC CIDR in use for the Kubernetes cluster:

   proxy-real-ip-cidr: XXX.XXX.XXX/XX

3. Change the AWS Certificate Manager (ACM) ID as well:

   arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX

4. Deploy the manifest:

   kubectl apply -f deploy.yaml
      "},{"location":"deploy/#nlb-idle-timeouts","title":"NLB Idle Timeouts","text":"

The idle timeout value for NLB TCP flows is 350 seconds and cannot be modified.

For this reason, you need to ensure the keepalive_timeout value is configured to less than 350 seconds to work as expected.

      By default, NGINX keepalive_timeout is set to 75s.
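A hedged sketch of lowering it below the NLB limit via the controller ConfigMap (assumes the default ConfigMap name from a standard installation; keep-alive is the ConfigMap key that maps to keepalive_timeout, and 320 is an arbitrary value under 350):

kubectl -n ingress-nginx patch configmap ingress-nginx-controller \
  --type merge -p '{"data":{"keep-alive":"320"}}'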

More information with regard to timeouts can be found in the official AWS documentation.

      "},{"location":"deploy/#gce-gke","title":"GCE-GKE","text":"

      First, your user needs to have cluster-admin permissions on the cluster. This can be done with the following command:

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)

      Then, the ingress controller can be installed like this:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/cloud/deploy.yaml

      Warning

      For private clusters, you will need to either add a firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to port 80/tcp, 443/tcp and 10254/tcp to also allow access to port 8443/tcp. More information can be found in the Official GCP Documentation.

      See the GKE documentation on adding rules and the Kubernetes issue for more detail.
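For illustration only, a hedged gcloud sketch of such a rule (the network name, master CIDR, and node tag are placeholders you must replace with your cluster's values):

gcloud compute firewall-rules create allow-apiserver-to-webhook \
  --network my-cluster-network \
  --direction INGRESS \
  --source-ranges 172.16.0.0/28 \
  --target-tags my-node-tag \
  --allow tcp:8443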

Proxy protocol is supported in GCE; check the official documentation on how to enable it.

      "},{"location":"deploy/#azure","title":"Azure","text":"
      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/cloud/deploy.yaml\n

      More information with regard to Azure annotations for ingress controller can be found in the official AKS documentation.

      "},{"location":"deploy/#digital-ocean","title":"Digital Ocean","text":"
      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/do/deploy.yaml\n
• By default, the ingress-nginx-controller service object for DigitalOcean configures only one annotation: service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true". While this makes the service functional, it has been reported that the DigitalOcean load-balancer graphs show no data unless a few other annotations are also configured. Some of these annotations require values that cannot be generic, and hence are not forced in an out-of-the-box installation. These annotations, and a discussion of them, are well documented in this issue. Please refer to the issue for the annotations, with values specific to your setup, to get the DO-LB graphs populated with data.
      "},{"location":"deploy/#scaleway","title":"Scaleway","text":"
      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/scw/deploy.yaml\n

      Refer to the dedicated tutorial in the Scaleway documentation for configuring the proxy protocol for ingress-nginx with the Scaleway load balancer.

      "},{"location":"deploy/#exoscale","title":"Exoscale","text":"
      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml\n

      The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager documentation.

      "},{"location":"deploy/#oracle-cloud-infrastructure","title":"Oracle Cloud Infrastructure","text":"
      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/cloud/deploy.yaml\n

      A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation.

      "},{"location":"deploy/#ovhcloud","title":"OVHcloud","text":"
      helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx\nhelm repo update\nhelm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --create-namespace\n

You can find the complete tutorial in the OVHcloud documentation.

      "},{"location":"deploy/#bare-metal-clusters","title":"Bare metal clusters","text":"

      This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as \"raw\" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu...)

      For quick testing, you can use a NodePort. This should work on almost every cluster, but it will typically use a port in the range 30000-32767.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/baremetal/deploy.yaml

      For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see bare-metal considerations.

      "},{"location":"deploy/#miscellaneous","title":"Miscellaneous","text":""},{"location":"deploy/#checking-ingress-controller-version","title":"Checking ingress controller version","text":"

      Run /nginx-ingress-controller --version within the pod, for instance with kubectl exec:

POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name)
kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
      "},{"location":"deploy/#scope","title":"Scope","text":"

      By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior, use the flag --watch-namespace or check the Helm chart value controller.scope to limit the controller to a single namespace. Although the use of this flag is not popular, one important fact to note is that the secret containing the default-ssl-certificate needs to also be present in the watched namespace(s).

See also "How to easily install multiple instances of the Ingress NGINX controller in the same cluster" for more details.
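A minimal sketch of scoping the controller at install time through the chart (controller.scope.enabled and controller.scope.namespace are the chart values; my-apps is a placeholder namespace):

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.scope.enabled=true \
  --set controller.scope.namespace=my-apps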

      "},{"location":"deploy/#webhook-network-access","title":"Webhook network access","text":"

      Warning

      The controller uses an admission webhook to validate Ingress definitions. Make sure that you don't have Network policies or additional firewalls preventing connections from the API server to the ingress-nginx-controller-admission service.

      "},{"location":"deploy/#certificate-generation","title":"Certificate generation","text":"

      Attention

      The first time the ingress controller starts, two Jobs create the SSL Certificate used by the admission webhook.

      This can cause an initial delay of up to two minutes until it is possible to create and validate Ingress definitions.

      You can wait until it is ready to run the next command:

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
      "},{"location":"deploy/#running-on-kubernetes-versions-older-than-119","title":"Running on Kubernetes versions older than 1.19","text":"

      Ingress resources evolved over time. They started with apiVersion: extensions/v1beta1, then moved to apiVersion: networking.k8s.io/v1beta1 and more recently to apiVersion: networking.k8s.io/v1.

      Here is how these Ingress versions are supported in Kubernetes:

      • before Kubernetes 1.19, only v1beta1 Ingress resources are supported
      • from Kubernetes 1.19 to 1.21, both v1beta1 and v1 Ingress resources are supported
      • in Kubernetes 1.22 and above, only v1 Ingress resources are supported

      And here is how these Ingress versions are supported in Ingress-Nginx Controller:

      • before version 1.0, only v1beta1 Ingress resources are supported
• in version 1.0 and above, only v1 Ingress resources are supported

      As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X of the Ingress-Nginx Controller (e.g. version 0.49).

The Helm chart of the Ingress-Nginx Controller switched to controller version 1 in version 4 of the chart. In other words, if you're running Kubernetes 1.18 or earlier, you should use version 3.X of the chart (this can be done by adding --version='<4' to the helm install command).

      "},{"location":"deploy/baremetal/","title":"Bare-metal considerations","text":"

      In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the Ingress-Nginx Controller to external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers.

      The rest of this document describes a few recommended approaches to deploying the Ingress-Nginx Controller inside a Kubernetes cluster running on bare-metal.

      "},{"location":"deploy/baremetal/#a-pure-software-solution-metallb","title":"A pure software solution: MetalLB","text":"

      MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.

      This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes. In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details.

      Note

      The description of other supported configuration modes is off-scope for this document.

      Warning

      MetalLB is currently in beta. Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly.

      MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions, and that the Ingress-Nginx Controller was installed using the steps described in the quickstart section of the installation guide.

MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined through IPAddressPool objects in the same namespace as the MetalLB controller. The pool of IPs must be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.

      Example

      Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

      After creating the following objects, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly.

---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 203.0.113.10-203.0.113.15
  autoAssign: true
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
$ kubectl -n ingress-nginx get svc
NAME                   TYPE          CLUSTER-IP     EXTERNAL-IP   PORT(S)
default-http-backend   ClusterIP     10.0.64.249    <none>        80/TCP
ingress-nginx          LoadBalancer  10.0.220.217   203.0.113.10  80:30100/TCP,443:30101/TCP

      As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service:

$ curl -D- http://203.0.113.10 -H 'Host: myapp.example.com'
HTTP/1.1 200 OK
Server: nginx/1.15.2

      Tip

In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more detail in Traffic policies as well as in the next section.

      "},{"location":"deploy/baremetal/#over-a-nodeport-service","title":"Over a NodePort Service","text":"

      Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide.

      Info

      A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services.

      In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests.

      Example

      Given the NodePort 30100 allocated to the ingress-nginx Service

$ kubectl -n ingress-nginx get svc
NAME                   TYPE        CLUSTER-IP     PORT(S)
default-http-backend   ClusterIP   10.0.64.249    80/TCP
ingress-nginx          NodePort    10.0.220.217   80:30100/TCP,443:30101/TCP

      and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

      a client would reach an Ingress with host: myapp.example.com at http://myapp.example.com:30100, where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address.

      Impact on the host system

      While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require.

      This practice is therefore discouraged. See the other approaches proposed in this page for alternatives.

      This approach has a few other limitations one ought to be aware of:

      • Source IP address

      Services of type NodePort perform source address translation by default. This means the source IP of a HTTP request is always the IP address of the Kubernetes node that received the request from the perspective of NGINX.

      The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local (example).
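A minimal sketch of applying that setting to an existing Service with a merge patch (service name as in the examples on this page):

kubectl -n ingress-nginx patch svc ingress-nginx \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'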

      Warning

      This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the Ingress-Nginx Controller should be scheduled or not scheduled.

      Example

      In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

with an ingress-nginx-controller Deployment composed of 2 replicas

$ kubectl -n ingress-nginx get pod -o wide
NAME                                       READY   STATUS    IP           NODE
default-http-backend-7c5bc89cc9-p86md      1/1     Running   172.17.1.1   host-2
ingress-nginx-controller-cf9ff8c96-8vvf8   1/1     Running   172.17.0.3   host-3
ingress-nginx-controller-cf9ff8c96-pxsds   1/1     Running   172.17.1.4   host-2

      Requests sent to host-2 and host-3 would be forwarded to NGINX and original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node.

      • Ingress status

      Because NodePort Services do not get a LoadBalancerIP assigned by definition, the Ingress-Nginx Controller does not update the status of Ingress objects it manages.

$ kubectl get ingress
NAME           HOSTS               ADDRESS   PORTS
test-ingress   myapp.example.com             80

      Despite the fact there is no load balancer providing a public IP address to the Ingress-Nginx Controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service.

      Warning

There is more to setting externalIPs than just enabling the Ingress-Nginx Controller to update the status of Ingress objects. Please read about this option in the Services page of the official Kubernetes documentation, as well as the section about External IPs in this document, for more information.

      Example

      Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

      one could edit the ingress-nginx Service and add the following field to the object spec

spec:
  externalIPs:
  - 203.0.113.1
  - 203.0.113.2
  - 203.0.113.3

      which would in turn be reflected on Ingress objects as follows:

$ kubectl get ingress -o wide
NAME           HOSTS               ADDRESS                               PORTS
test-ingress   myapp.example.com   203.0.113.1,203.0.113.2,203.0.113.3   80
      • Redirects

As NGINX is not aware of the port translation performed by the NodePort Service, backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort.

      Example

Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain, are generated without the NodePort:

$ curl -D- http://myapp.example.com:30100
HTTP/1.1 308 Permanent Redirect
Server: nginx/1.15.2
Location: https://myapp.example.com/  #-> missing NodePort in HTTPS redirect
      "},{"location":"deploy/baremetal/#via-the-host-network","title":"Via the host network","text":"

      In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the Ingress-Nginx Controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services.

      Note

      This approach does not leverage any Service object to expose the Ingress-Nginx Controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it.

      This can be achieved by enabling the hostNetwork option in the Pods' spec.

template:
  spec:
    hostNetwork: true

      Security considerations

      Enabling this option exposes every system daemon to the Ingress-Nginx Controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully.

      Example

Consider this ingress-nginx-controller Deployment composed of 2 replicas: the NGINX Pods inherit the IP address of their host instead of an internal Pod IP.

$ kubectl -n ingress-nginx get pod -o wide
NAME                                       READY   STATUS    IP            NODE
default-http-backend-7c5bc89cc9-p86md      1/1     Running   172.17.1.1    host-2
ingress-nginx-controller-5b4cf5fc6-7lg6c   1/1     Running   203.0.113.3   host-3
ingress-nginx-controller-5b4cf5fc6-lzrls   1/1     Running   203.0.113.2   host-2

One major limitation of this deployment approach is that only a single Ingress-Nginx Controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to this constraint fail with the following event:

$ kubectl -n ingress-nginx describe pod <unschedulable-ingress-nginx-controller-pod>
...
Events:
  Type     Reason            From               Message
  ----     ------            ----               -------
  Warning  FailedScheduling  default-scheduler  0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.

      One way to ensure only schedulable Pods are created is to deploy the Ingress-Nginx Controller as a DaemonSet instead of a traditional Deployment.

      Info

      A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods. For more information, see DaemonSet.

      Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion.
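For orientation only, a minimal DaemonSet skeleton along those lines might look like the following; this is a sketch, and a real manifest needs the full container spec (args, probes, ServiceAccount, etc.) carried over from the Deployment you already use:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      hostNetwork: true
      containers:
      - name: controller
        image: registry.k8s.io/ingress-nginx/controller:v1.11.2  # use the tag matching your installation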

      Like with NodePorts, this approach has a few quirks it is important to be aware of.

      • DNS resolution

      Pods configured with hostNetwork: true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet. Consider using this setting if NGINX is expected to resolve internal names for any reason.
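A sketch of both settings combined in the Pod template (standard Kubernetes fields):

template:
  spec:
    hostNetwork: true
    dnsPolicy: ClusterFirstWithHostNet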

      • Ingress status

      Because there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank.

$ kubectl get ingress
NAME           HOSTS               ADDRESS   PORTS
test-ingress   myapp.example.com             80

      Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the Ingress-Nginx Controller.
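The flag goes on the controller container's arguments; a hedged fragment (the container name and the rest of the arguments depend on your manifest):

containers:
- name: controller
  args:
  - /nginx-ingress-controller
  - --report-node-internal-ip-address
  # ... keep the rest of your existing arguments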

      Example

Given an ingress-nginx-controller DaemonSet composed of 2 replicas

$ kubectl -n ingress-nginx get pod -o wide
NAME                                       READY   STATUS    IP            NODE
default-http-backend-7c5bc89cc9-p86md      1/1     Running   172.17.1.1    host-2
ingress-nginx-controller-5b4cf5fc6-7lg6c   1/1     Running   203.0.113.3   host-3
ingress-nginx-controller-5b4cf5fc6-lzrls   1/1     Running   203.0.113.2   host-2

      the controller sets the status of all Ingress objects it manages to the following value:

$ kubectl get ingress -o wide
NAME           HOSTS               ADDRESS                   PORTS
test-ingress   myapp.example.com   203.0.113.2,203.0.113.3   80

      Note

      Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments.

      "},{"location":"deploy/baremetal/#using-a-self-provisioned-edge","title":"Using a self-provisioned edge","text":"

Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. a vendor appliance) or software (e.g. HAProxy) and is usually managed outside of the Kubernetes landscape by operations teams.

      Such deployment builds upon the NodePort Service described above in Over a NodePort Service, with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address.

On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes.
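As an illustration only, a minimal HAProxy fragment forwarding ports 80 and 443 to the NodePorts from the earlier examples could look like this. This is a sketch assuming NodePorts 30100/30101 and the node addresses used throughout this page; a production haproxy.cfg also needs global/defaults sections and health checks tuned to your environment:

frontend http
    bind *:80
    mode tcp
    default_backend k8s_http

frontend https
    bind *:443
    mode tcp
    default_backend k8s_https

backend k8s_http
    mode tcp
    server host-2 203.0.113.2:30100 check
    server host-3 203.0.113.3:30100 check

backend k8s_https
    mode tcp
    server host-2 203.0.113.2:30101 check
    server host-3 203.0.113.3:30101 check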

      "},{"location":"deploy/baremetal/#external-ips","title":"External IPs","text":"

      Source IP address

This method does not allow preserving the source IP of HTTP requests in any manner, so it is not recommended despite its apparent simplicity.

      The externalIPs Service option was previously mentioned in the NodePort section.

      As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node.

      Example

      Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

      and the following ingress-nginx NodePort Service

$ kubectl -n ingress-nginx get svc
NAME                   TYPE        CLUSTER-IP     PORT(S)
ingress-nginx          NodePort    10.0.220.217   80:30100/TCP,443:30101/TCP

      One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port:

spec:
  externalIPs:
  - 203.0.113.2
  - 203.0.113.3
$ curl -D- http://myapp.example.com:30100
HTTP/1.1 200 OK
Server: nginx/1.15.2

$ curl -D- http://myapp.example.com
HTTP/1.1 200 OK
Server: nginx/1.15.2

      We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.

      "},{"location":"deploy/hardening-guide/","title":"Hardening Guide","text":""},{"location":"deploy/hardening-guide/#overview","title":"Overview","text":"

There are several ways of hardening and securing nginx. This documentation uses two guides, which overlap in some points:

      • nginx CIS Benchmark
      • cipherlist.eu (one of many forks of the now dead project cipherli.st)

This guide describes which of the configurations from those guides are already implemented as defaults in the nginx implementation of kubernetes ingress, which need to be configured, which are obsolete because nginx runs as a container (the CIS benchmark assumes a non-containerized installation), and which are difficult or impossible.

Be aware that this is only a guide and you are responsible for your own implementation. Some of the configurations may leave specific clients unable to reach your site, or have similar consequences.

This guide refers to chapters in the CIS Benchmark. For a full explanation, refer to the benchmark document itself.

      "},{"location":"deploy/hardening-guide/#configuration-guide","title":"Configuration Guide","text":"Chapter in CIS benchmark Status Default Action to do if not default 1 Initial Setup 1.1 Installation 1.1.1 Ensure NGINX is installed (Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.1.2 Ensure NGINX is installed from source (Not Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.2 Configure Software Updates 1.2.1 Ensure package manager repositories are properly configured (Not Scored) OK done via helm, nginx version could be overwritten, however compatibility is not ensured then 1.2.2 Ensure the latest software package is installed (Not Scored) ACTION NEEDED done via helm, nginx version could be overwritten, however compatibility is not ensured then Plan for periodic updates 2 Basic Configuration 2.1 Minimize NGINX Modules 2.1.1 Ensure only required modules are installed (Not Scored) OK Already only needed modules are installed, however proposals for further reduction are welcome 2.1.2 Ensure HTTP WebDAV module is not installed (Scored) OK 2.1.3 Ensure modules with gzip functionality are disabled (Scored) OK 2.1.4 Ensure the autoindex module is disabled (Scored) OK No autoindex configs so far in ingress defaults 2.2 Account Security 2.2.1 Ensure that NGINX is run using a non-privileged, dedicated service account (Not Scored) OK Pod configured as user www-data: See this line in helm chart values. Compiled with user www-data: See this line in build script 2.2.2 Ensure the NGINX service account is locked (Scored) OK Docker design ensures this 2.2.3 Ensure the NGINX service account has an invalid shell (Scored) OK Shell is nologin: see this line in build script 2.3 Permissions and Ownership 2.3.1 Ensure NGINX directories and files are owned by root (Scored) OK Obsolete through docker-design and ingress controller needs to update the configs dynamically 2.3.2 Ensure access to NGINX directories and files is restricted (Scored) OK See previous answer 2.3.3 Ensure the NGINX process ID (PID) file is secured (Scored) OK No PID-File due to docker design 2.3.4 Ensure the core dump directory is secured (Not Scored) OK No working_directory configured by default 2.4 Network Configuration 2.4.1 Ensure NGINX only listens for network connections on authorized ports (Not Scored) OK Ensured by automatic nginx.conf configuration 2.4.2 Ensure requests for unknown host names are rejected (Not Scored) OK They are not rejected but send to the \"default backend\" delivering appropriate errors (mostly 404) 2.4.3 Ensure keepalive_timeout is 10 seconds or less, but not 0 (Scored) ACTION NEEDED Default is 75s configure keep-alive to 10 seconds according to this documentation 2.4.4 Ensure send_timeout is set to 10 seconds or less, but not 0 (Scored) RISK TO BE ACCEPTED Not configured, however the nginx default is 60s Not configurable 2.5 Information Disclosure 2.5.1 Ensure server_tokens directive is set to off (Scored) OK server_tokens is configured to off by default 2.5.2 Ensure default error and index.html pages do not reference NGINX (Scored) ACTION NEEDED 404 shows no version at all, 503 and 403 show \"nginx\", which is hardcoded see this line in nginx source code configure custom error pages at least for 403, 404 and 503 and 500 2.5.3 Ensure hidden file serving is disabled (Not Scored) ACTION NEEDED config not set configure a config.server-snippet Snippet, but beware of .well-known challenges or similar. 
Refer to the benchmark here please 2.5.4 Ensure the NGINX reverse proxy does not enable information disclosure (Scored) ACTION NEEDED hide not configured configure hide-headers with array of \"X-Powered-By\" and \"Server\": according to this documentation 3 Logging 3.1 Ensure detailed logging is enabled (Not Scored) OK nginx ingress has a very detailed log format by default 3.2 Ensure access logging is enabled (Scored) OK Access log is enabled by default 3.3 Ensure error logging is enabled and set to the info logging level (Scored) OK Error log is configured by default. The log level does not matter, because it is all sent to STDOUT anyway 3.4 Ensure log files are rotated (Scored) OBSOLETE Log file handling is not part of the nginx ingress and should be handled separately 3.5 Ensure error logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.6 Ensure access logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.7 Ensure proxies pass source IP information (Scored) OK Headers are set by default 4 Encryption 4.1 TLS / SSL Configuration 4.1.1 Ensure HTTP is redirected to HTTPS (Scored) OK Redirect to TLS is default 4.1.2 Ensure a trusted certificate and trust chain is installed (Not Scored) ACTION NEEDED For installing certs there are enough manuals in the web. A good way is to use lets encrypt through cert-manager Install proper certificates or use lets encrypt with cert-manager 4.1.3 Ensure private key permissions are restricted (Scored) ACTION NEEDED See previous answer 4.1.4 Ensure only modern TLS protocols are used (Scored) OK/ACTION NEEDED Default is TLS 1.2 + 1.3, while this is okay for CIS Benchmark, cipherlist.eu only recommends 1.3. This may cut off old OS's Set controller.config.ssl-protocols to \"TLSv1.3\" 4.1.5 Disable weak ciphers (Scored) ACTION NEEDED Default ciphers are already good, but cipherlist.eu recommends even stronger ciphers Set controller.config.ssl-ciphers to \"EECDH+AESGCM:EDH+AESGCM\" 4.1.6 Ensure custom Diffie-Hellman parameters are used (Scored) ACTION NEEDED No custom DH parameters are generated Generate dh parameters for each ingress deployment you use - see here for a how to 4.1.7 Ensure Online Certificate Status Protocol (OCSP) stapling is enabled (Scored) ACTION NEEDED Not enabled set via this configuration parameter 4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored) OK HSTS is enabled by default 4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored) ACTION NEEDED / RISK TO BE ACCEPTED HKPK not enabled by default If lets encrypt is not used, set correct HPKP header. There are several ways to implement this - with the helm charts it works via controller.add-headers. 
If lets encrypt is used, this is complicated, a solution here is yet unknown 4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, manual is here 4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, see configuration here 4.1.12 Ensure your domain is preloaded (Not Scored) ACTION NEEDED Preload is not active by default Set controller.config.hsts-preload to true 4.1.13 Ensure session resumption is disabled to enable perfect forward security (Scored) OK Session tickets are disabled by default 4.1.14 Ensure HTTP/2.0 is used (Not Scored) OK http2 is set by default 5 Request Filtering and Restrictions 5.1 Access Control 5.1.1 Ensure allow and deny filters limit access to specific IP addresses (Not Scored) OK/ACTION NEEDED Depends on use case, geo ip module is compiled into Ingress-Nginx Controller, there are several ways to use it If needed set IP restrictions via annotations or work with config snippets (be careful with lets-encrypt-http-challenge!) 5.1.2 Ensure only whitelisted HTTP methods are allowed (Not Scored) OK/ACTION NEEDED Depends on use case If required it can be set via config snippet 5.2 Request Limits 5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) ACTION NEEDED Default timeout is 60s Set via this configuration parameter and respective body equivalent 5.2.2 Ensure the maximum request body size is set correctly (Scored) ACTION NEEDED Default is 1m set via this configuration parameter 5.2.3 Ensure the maximum buffer size for URIs is defined (Scored) ACTION NEEDED Default is 4 8k Set via this configuration parameter 5.2.4 Ensure the number of connections per IP address is limited (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.2.5 Ensure rate limits by IP address are set (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.3 Browser Security 5.3.1 Ensure X-Frame-Options header is configured and enabled (Scored) ACTION NEEDED Header not set by default Several ways to implement this - with the helm charts it works via controller.add-headers 5.3.2 Ensure X-Content-Type-Options header is configured and enabled (Scored) ACTION NEEDED See previous answer See previous answer 5.3.3 Ensure the X-XSS-Protection Header is enabled and configured properly (Scored) ACTION NEEDED See previous answer See previous answer 5.3.4 Ensure that Content Security Policy (CSP) is enabled and configured properly (Not Scored) ACTION NEEDED See previous answer See previous answer 5.3.5 Ensure the Referrer Policy is enabled and configured properly (Not Scored) ACTION NEEDED Depends on application. It should be handled in the applications webserver itself, not in the load balancing ingress check backend webserver 6 Mandatory Access Control n/a too high level, depends on backends"},{"location":"deploy/rbac/","title":"Role Based Access Control (RBAC)","text":""},{"location":"deploy/rbac/#overview","title":"Overview","text":"

      This example applies to ingress-nginx-controllers being deployed in an environment with RBAC enabled.

Role Based Access Control is composed of four layers:

      1. ClusterRole - permissions assigned to a role that apply to an entire cluster
      2. ClusterRoleBinding - binding a ClusterRole to a specific account
      3. Role - permissions assigned to a role that apply to a specific namespace
      4. RoleBinding - binding a Role to a specific account

      In order for RBAC to be applied to an ingress-nginx-controller, that controller should be assigned to a ServiceAccount. That ServiceAccount should be bound to the Roles and ClusterRoles defined for the ingress-nginx-controller.

      "},{"location":"deploy/rbac/#service-accounts-created-in-this-example","title":"Service Accounts created in this example","text":"

      One ServiceAccount is created in this example, ingress-nginx.

      "},{"location":"deploy/rbac/#permissions-granted-in-this-example","title":"Permissions Granted in this example","text":"

There are two sets of permissions defined in this example: cluster-wide permissions defined by the ClusterRole named ingress-nginx, and namespace-specific permissions defined by the Role named ingress-nginx.

      "},{"location":"deploy/rbac/#cluster-permissions","title":"Cluster Permissions","text":"

These permissions are granted so that the ingress-nginx-controller can function as an ingress controller across the cluster. They are granted to the ClusterRole named ingress-nginx:

      • configmaps, endpoints, nodes, pods, secrets: list, watch
      • nodes: get
      • services, ingresses, ingressclasses, endpointslices: get, list, watch
      • events: create, patch
      • ingresses/status: update
      • leases: list, watch
      "},{"location":"deploy/rbac/#namespace-permissions","title":"Namespace Permissions","text":"

These permissions are granted specifically to the ingress-nginx namespace. They are granted to the Role named ingress-nginx:

      • configmaps, pods, secrets: get
      • endpoints: get

Furthermore, to support leader election, the ingress-nginx-controller needs access to the leases resource using the resourceName ingress-nginx-leader.

Note that resourceNames can NOT be used to limit requests using the "create" verb, because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a "create" request are part of the request body).

      • leases: get, update (for resourceName ingress-controller-leader)
      • leases: create

      This resourceName is the election-id defined by the ingress-controller, which defaults to:

      • election-id: ingress-controller-leader
      • resourceName : <election-id>

      Please adapt accordingly if you overwrite either parameter when launching the ingress-nginx-controller.
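Expressed as RBAC rules, this might look like the following sketch (YAML per the Kubernetes RBAC API; adjust the resourceName to your election-id if you override it):

rules:
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  resourceNames: ["ingress-controller-leader"]  # replace with your election-id if overridden
  verbs: ["get", "update"]
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["create"]  # resourceNames cannot restrict "create" requests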

      "},{"location":"deploy/rbac/#bindings","title":"Bindings","text":"

      The ServiceAccount ingress-nginx is bound to the Role ingress-nginx and the ClusterRole ingress-nginx.

The serviceAccountName associated with the containers in the Deployment must match the ServiceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be the ingress-nginx namespace.
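As a sketch, the relevant fragment of the Deployment spec:

spec:
  template:
    spec:
      serviceAccountName: ingress-nginx  # must match the ServiceAccount bound above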

      "},{"location":"deploy/upgrade/","title":"Upgrading","text":"

      Important

      No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx.

      "},{"location":"deploy/upgrade/#without-helm","title":"Without Helm","text":"

      To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment.

For example, if your Deployment resource looks like this (partial example):

kind: Deployment
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector: ...
  template:
    metadata: ...
    spec:
      containers:
        - name: ingress-nginx-controller
          image: registry.k8s.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef
          args: ...

simply change the v1.0.4 tag to the version you wish to upgrade to. The easiest way to do this is with kubectl set image (note that you may need to change the resource name according to your installation):

kubectl set image deployment/ingress-nginx-controller \
  controller=registry.k8s.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d \
  -n ingress-nginx

      For interactive editing, use kubectl edit deployment ingress-nginx-controller -n ingress-nginx.

      "},{"location":"deploy/upgrade/#with-helm","title":"With Helm","text":"

If you installed ingress-nginx using the Helm command in the deployment docs (so its release name is ingress-nginx), you should be able to upgrade using:

helm upgrade --reuse-values ingress-nginx ingress-nginx/ingress-nginx
      "},{"location":"deploy/upgrade/#migrating-from-stablenginx-ingress","title":"Migrating from stable/nginx-ingress","text":"

      See detailed steps in the upgrading section of the ingress-nginx chart README.

      "},{"location":"developer-guide/code-overview/","title":"Ingress NGINX - Code Overview","text":"

      This document provides an overview of Ingress NGINX code.

      "},{"location":"developer-guide/code-overview/#core-golang-code","title":"Core Golang code","text":"

This part of the code is responsible for the main logic of Ingress NGINX. It contains all the logic that parses Ingress objects and annotations, watches Endpoints, and turns them into a usable nginx.conf configuration.

      "},{"location":"developer-guide/code-overview/#core-sync-logics","title":"Core Sync Logics:","text":"

      Ingress-nginx has an internal model of the ingresses, secrets and endpoints in a given cluster. It maintains two copies of that:

      1. One copy is the currently running configuration model
2. The second copy is generated in response to changes in the cluster

      The sync logic diffs the two models and if there's a change it tries to converge the running configuration to the new one.

      There are static and dynamic configuration changes.

      All endpoints and certificate changes are handled dynamically by posting the payload to an internal NGINX endpoint that is handled by Lua.

The code is organized into the following parts:

      "},{"location":"developer-guide/code-overview/#entrypoint","title":"Entrypoint","text":"

The main package is responsible for starting the ingress-nginx program, and can be found in the cmd/nginx directory.

      "},{"location":"developer-guide/code-overview/#version","title":"Version","text":"

This package is responsible for adding the version subcommand, and can be found in the version directory.

      "},{"location":"developer-guide/code-overview/#internal-code","title":"Internal code","text":"

This part of the code contains the internal logic that composes the Ingress NGINX Controller, and it's split into:

      "},{"location":"developer-guide/code-overview/#admission-controller","title":"Admission Controller","text":"

Contains the code of the Kubernetes Admission Controller, which validates the syntax of ingress objects before accepting them.

This code can be found in the internal/admission/controller directory.

      "},{"location":"developer-guide/code-overview/#file-functions","title":"File functions","text":"

Contains auxiliary code that deals with files, such as generating the SHA1 checksum of a file, or creating required directories.

This code can be found in the internal/file directory.

      "},{"location":"developer-guide/code-overview/#ingress-functions","title":"Ingress functions","text":"

Contains the core logic of the Ingress-Nginx Controller, with some examples being:

• Expected Golang structures that will be used in templates and other parts of the code - internal/ingress/types.go.
• Supported annotations and their parsing logic - internal/ingress/annotations.
• Reconciliation loops and logic - internal/ingress/controller.
• Defaults - define the default struct - internal/ingress/defaults.
• Error interface and types implementation - internal/ingress/errors.
• Metrics collectors for Prometheus exporting - internal/ingress/metric.
• Resolver - extracts information from a controller - internal/ingress/resolver.
• Ingress Object status publisher - internal/ingress/status.

Other parts of the code will be added to this document in the future.

      "},{"location":"developer-guide/code-overview/#k8s-functions","title":"K8s functions","text":"

      Contains helper functions for parsing Kubernetes objects.

This part of the code can be found in the internal/k8s directory.

      "},{"location":"developer-guide/code-overview/#networking-functions","title":"Networking functions","text":"

      Contains helper functions for networking, such as IPv4 and IPv6 parsing, SSL certificate parsing, etc.

This part of the code can be found in the internal/net directory.

      "},{"location":"developer-guide/code-overview/#nginx-functions","title":"NGINX functions","text":"

Contains helper functions to deal with NGINX, such as verifying whether it is running and reading parts of its configuration file.

This part of the code can be found in the internal/nginx directory.

      "},{"location":"developer-guide/code-overview/#tasks-queue","title":"Tasks / Queue","text":"

      Contains the functions responsible for the sync queue part of the controller.

This part of the code can be found in the internal/task directory.

      "},{"location":"developer-guide/code-overview/#other-parts-of-internal","title":"Other parts of internal","text":"

Other parts of the internal code, like runtime and watch, might not be covered here, but they may be added in the future.

      "},{"location":"developer-guide/code-overview/#e2e-test","title":"E2E Test","text":"

The e2e test code is in the test directory.

      "},{"location":"developer-guide/code-overview/#other-programs","title":"Other programs","text":"

This section covers the kubectl plugin, dbg, and waitshutdown, as well as the hack scripts.

      "},{"location":"developer-guide/code-overview/#kubectl-plugin","title":"kubectl plugin","text":"

Contains the kubectl plugin for inspecting your ingress-nginx deployments. This part of the code can be found in the cmd/plugin directory. Detailed function flows and available commands can be found in the kubectl-plugin documentation.

      "},{"location":"developer-guide/code-overview/#deploy-files","title":"Deploy files","text":"

      This directory contains the yaml deploy files used as examples or references in the docs to deploy Ingress NGINX and other components.

Those files are in the deploy directory.

      "},{"location":"developer-guide/code-overview/#helm-chart","title":"Helm Chart","text":"

Used to generate the published Helm chart.

      Code is in charts/ingress-nginx.

      "},{"location":"developer-guide/code-overview/#documentationwebsite","title":"Documentation/Website","text":"

The documentation used to generate the website https://kubernetes.github.io/ingress-nginx/.

This code is available in docs, and its main "language" is Markdown, used by mkdocs to generate static pages.

      "},{"location":"developer-guide/code-overview/#container-images","title":"Container Images","text":"

      Container images used to run ingress-nginx, or to build the final image.

      "},{"location":"developer-guide/code-overview/#base-images","title":"Base Images","text":"

Contains the Dockerfiles and scripts used to build base images that are used in other parts of the repo. They are present in the images repo. Some examples:

• nginx - The base NGINX image ingress-nginx uses is not a vanilla NGINX. It bundles many libraries together, and it is a job in itself to maintain that and keep things up to date.
• custom-error-pages - Used in the custom error page examples.

      There are other images inside this directory.

      "},{"location":"developer-guide/code-overview/#ingress-controller-image","title":"Ingress Controller Image","text":"

      The image used to build the final ingress controller, used in deploy scripts and Helm charts.

This is NGINX with some Lua enhancements. Dynamic certificate handling, endpoint handling, canary traffic splitting, custom load balancing, etc. happen in this component. One can also add new functionality using the Lua plugin system.

The files are in the rootfs directory and contain:

      • The Dockerfile
      • nginx config
      "},{"location":"developer-guide/code-overview/#ingress-nginx-lua-scripts","title":"Ingress NGINX Lua Scripts","text":"

      Ingress NGINX uses Lua Scripts to enable features like hot reloading, rate limiting and monitoring. Some are written using the OpenResty helper.

      The directory containing Lua scripts is rootfs/etc/nginx/lua.

      "},{"location":"developer-guide/code-overview/#nginx-go-template-file","title":"Nginx Go template file","text":"

One of the functions of Ingress NGINX is to turn Ingress objects into an nginx.conf file.

To do so, the final step is to apply those configurations to nginx.tmpl, turning it into the final nginx.conf file.

      "},{"location":"developer-guide/getting-started/","title":"Getting Started","text":"

      Developing for Ingress-Nginx Controller

      This document explains how to get started with developing for Ingress-Nginx Controller.

For really new contributors who want to contribute to the INGRESS-NGINX project but need help understanding some basic concepts needed to work with the Kubernetes ingress resource, here is a link to the New Contributors Guide. This guide contains tips on how an http/https request travels from a browser or a curl command to the webserver process running inside a container, in a pod, in a Kubernetes cluster, entering the cluster via an ingress resource. If you are familiar with basic networking concepts such as the routing of a packet for an http request, connection termination, and reverse proxying, you can skip this and move on to the sections below (or read it anyway for context, and provide feedback if you have any).

      "},{"location":"developer-guide/getting-started/#prerequisites","title":"Prerequisites","text":"

      Install Go 1.14 or later.

      Note

      The project uses Go Modules

Install Docker (v19.03.0 or later, with the experimental feature enabled)

      Install kubectl (1.24.0 or higher)

      Install Kind

      Important

      The majority of make tasks run as docker containers

      "},{"location":"developer-guide/getting-started/#quick-start","title":"Quick Start","text":"
      1. Fork the repository
      2. Clone the repository to any location in your work station
      3. Add a GO111MODULE environment variable with export GO111MODULE=on
      4. Run go mod download to install dependencies
      "},{"location":"developer-guide/getting-started/#local-build","title":"Local build","text":"

Start a local Kubernetes cluster using kind, then build and deploy the ingress controller:

make dev-env
- If you are working on the v1.x.x version of this controller and want to create a cluster with Kubernetes version 1.22, please visit the kind documentation and look for how to set a custom image for the kind node (image: kindest/node...) in the kind config file (see the sketch below).
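A minimal kind config sketch along those lines; the node image tag is illustrative, so pick the kindest/node image matching your target Kubernetes version:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.22.17  # illustrative tag; use the image for your target version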

      "},{"location":"developer-guide/getting-started/#testing","title":"Testing","text":"

      Run go unit tests

make test

      Run unit-tests for lua code

make lua-test

      Lua tests are located in the directory rootfs/etc/nginx/lua/test

      Important

Test files must follow the naming convention <mytest>_test.lua or they will be ignored

      Run e2e test suite

make kind-e2e-test

To limit the scope of the tests to execute, use the environment variable FOCUS:

      FOCUS=\"no-auth-locations\" make kind-e2e-test\n

      Note

      The variable FOCUS defines Ginkgo Focused Specs

Valid values are defined in the Describe definitions of the e2e tests, such as Default Backend.

      The complete list of tests can be found here

      "},{"location":"developer-guide/getting-started/#custom-docker-image","title":"Custom docker image","text":"

      In some cases, it can be useful to build a docker image and publish such an image to a private or custom registry location.

This can be done by setting two environment variables, REGISTRY and TAG:

      export TAG=\"dev\"\nexport REGISTRY=\"$USER\"\n\nmake build image\n

and then publish that version with:

docker push $REGISTRY/controller:$TAG
      "},{"location":"enhancements/","title":"Kubernetes Enhancement Proposals (KEPs)","text":"

      A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project. For this reason, the ingress-nginx project is adopting it.

      "},{"location":"enhancements/#quick-start-for-the-kep-process","title":"Quick start for the KEP process","text":"

      Follow the process outlined in the KEP template

      "},{"location":"enhancements/#do-i-have-to-use-the-kep-process","title":"Do I have to use the KEP process?","text":"

      No... but we hope that you will. Over time having a rich set of KEPs in one place will make it easier for people to track what is going on in the community and find a structured historic record.

      KEPs are only required when the changes are wide ranging and impact most of the project.

      "},{"location":"enhancements/#why-would-i-want-to-use-the-kep-process","title":"Why would I want to use the KEP process?","text":"

      Our aim with KEPs is to clearly communicate new efforts to the Kubernetes contributor community. As such, we want to build a well curated set of clear proposals in a common format with useful metadata.

      Benefits to KEP users (in the limit):

• Exposure on a Kubernetes-blessed web site that is findable via web search engines.
      • Cross indexing of KEPs so that users can find connections and the current status of any KEP.
      • A clear process with approvers and reviewers for making decisions. This will lead to more structured decisions that stick as there is a discoverable record around the decisions.

      We are inspired by IETF RFCs, Python PEPs, and Rust RFCs.

      "},{"location":"enhancements/20190724-only-dynamic-ssl/","title":"Remove static SSL configuration mode","text":""},{"location":"enhancements/20190724-only-dynamic-ssl/#table-of-contents","title":"Table of Contents","text":"
      • Summary
      • Motivation
      • Goals
      • Non-Goals
      • Proposal
      • Implementation Details/Notes/Constraints
      • Drawbacks
      • Alternatives
      "},{"location":"enhancements/20190724-only-dynamic-ssl/#summary","title":"Summary","text":"

Since release 0.19.0 it has been possible to configure SSL certificates without NGINX reloads (thanks to Lua), and since release 0.24.0 the dynamic mode is enabled by default.

      "},{"location":"enhancements/20190724-only-dynamic-ssl/#motivation","title":"Motivation","text":"

      The static configuration implies reloads, something that affects the majority of the users.

      "},{"location":"enhancements/20190724-only-dynamic-ssl/#goals","title":"Goals","text":"
      • Deprecation of the flag --enable-dynamic-certificates.
      • Cleanup of the codebase.
      "},{"location":"enhancements/20190724-only-dynamic-ssl/#non-goals","title":"Non-Goals","text":"
      • Features related to certificate authentication are not changed in any way.
      "},{"location":"enhancements/20190724-only-dynamic-ssl/#proposal","title":"Proposal","text":"
      • Remove static SSL configuration
      "},{"location":"enhancements/20190724-only-dynamic-ssl/#implementation-detailsnotesconstraints","title":"Implementation Details/Notes/Constraints","text":"
• Deprecate the flag --enable-dynamic-certificates.
• Move the directives ssl_certificate and ssl_certificate_key from each server block to the http section. These settings are required to avoid NGINX errors in the logs.
• Remove any action of the flag --enable-dynamic-certificates.
      "},{"location":"enhancements/20190724-only-dynamic-ssl/#drawbacks","title":"Drawbacks","text":""},{"location":"enhancements/20190724-only-dynamic-ssl/#alternatives","title":"Alternatives","text":"

      Keep both implementations

      "},{"location":"enhancements/20190815-zone-aware-routing/","title":"Availability zone aware routing","text":""},{"location":"enhancements/20190815-zone-aware-routing/#table-of-contents","title":"Table of Contents","text":"
      • Availability zone aware routing
      • Table of Contents
      • Summary
      • Motivation
        • Goals
        • Non-Goals
      • Proposal
      • Implementation History
      • Drawbacks [optional]
      "},{"location":"enhancements/20190815-zone-aware-routing/#summary","title":"Summary","text":"

Teach ingress-nginx about the availability zones endpoints are running in. This way, the ingress-nginx pod will do its best to proxy to a zone-local endpoint.

      "},{"location":"enhancements/20190815-zone-aware-routing/#motivation","title":"Motivation","text":"

When users run their services across multiple availability zones, they usually pay for egress traffic between zones. Providers such as GCP and Amazon EC2 usually charge extra for this. When picking an endpoint to route a request to, ingress-nginx does not consider whether the endpoint is in a different zone or the same one. That means it is at least equally likely to pick an endpoint from another zone and proxy the request to it. In this situation, the response from the endpoint to the ingress-nginx pod is considered inter-zone traffic and usually costs extra money.

At the time of this writing, GCP charges $0.01 per GB of inter-zone egress traffic according to https://cloud.google.com/compute/network-pricing. According to https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/, Amazon also charges the same amount of money as GCP for cross-zone egress traffic.

This can be a lot of money depending on one's traffic. By teaching ingress-nginx about zones, we can eliminate or at least decrease this cost.

Arguably, intra-zone network latency should also be better than cross-zone latency.

      "},{"location":"enhancements/20190815-zone-aware-routing/#goals","title":"Goals","text":"
      • Given a regional cluster running ingress-nginx, ingress-nginx should do best-effort to pick a zone-local endpoint when proxying
      • This should not impact canary feature
      • ingress-nginx should be able to operate successfully if there are no zonal endpoints
      "},{"location":"enhancements/20190815-zone-aware-routing/#non-goals","title":"Non-Goals","text":"
      • This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress-nginx pod(s) in that zone
      • This feature will be relying on https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone, it is not this KEP's goal to support other cases
      "},{"location":"enhancements/20190815-zone-aware-routing/#proposal","title":"Proposal","text":"

      The idea here is to have the controller part of ingress-nginx (1) detect what zone its current pod is running in and (2) detect the zone for every endpoint it knows about. After that, it will post that data as part of endpoints to Lua land. When picking an endpoint, the Lua balancer will try to pick zone-local endpoint first and if there is no zone-local endpoint then it will fall back to current behavior.

      Initially, this feature should be optional since it is going to make it harder to reason about the load balancing and not everyone might want that.

How does the controller know what zone it runs in? We can have the pod spec pass the node name using the downward API as an environment variable. Upon startup, the controller can get node details from the API based on the node name. Once the node details are obtained, we can extract the zone from the failure-domain.beta.kubernetes.io/zone annotation. Then we can pass that value to Lua land through the Nginx configuration when loading the lua_ingress.lua module in the init_by_lua phase.
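The downward API part of that idea is standard Kubernetes; a sketch of the env entry on the controller container (the variable name is illustrative):

env:
- name: NODE_NAME        # illustrative name; exposes the scheduling node to the controller
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName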

      How do we extract zones for endpoints? We can have the controller watch create and update events on nodes in the entire cluster and based on that keep the map of nodes to zones in the memory. And when we generate endpoints list, we can access node name using .subsets.addresses[i].nodeName and based on that fetch zone from the map in memory and store it as a field on the endpoint. This solution assumes failure-domain.beta.kubernetes.io/zone annotation does not change until the end of the node's life. Otherwise, we have to watch update events as well on the nodes and that'll add even more overhead.

Alternatively, we can get the list of nodes only when there's no node in memory for the given node name. This is probably a better solution because then we would avoid watching for API changes on node resources. We can eagerly fetch all the nodes and build a node-name-to-zone mapping on start. From there on, it will sync during endpoint building in the main event loop if there's no existing entry for the node of an endpoint. This means an extra API call in case the cluster has expanded.

      How do we make sure we do our best to choose zone-local endpoint? This will be done on the Lua side. For every backend, we will initialize two balancer instances: (1) with all endpoints (2) with all endpoints corresponding to the current zone for the backend. Then given the request once we choose what backend needs to serve the request, we will first try to use a zonal balancer for that backend. If a zonal balancer does not exist (i.e. there's no zonal endpoint) then we will use a general balancer. In case of zonal outages, we assume that the readiness probe will fail and the controller will see no endpoints for the backend and therefore we will use a general balancer.

      We can enable the feature using a configmap setting. Doing it this way makes it easier to rollback in case of a problem.

      "},{"location":"enhancements/20190815-zone-aware-routing/#implementation-history","title":"Implementation History","text":"
      • initial version of KEP is shipped
      • proposal and implementation details are done
      "},{"location":"enhancements/20190815-zone-aware-routing/#drawbacks-optional","title":"Drawbacks [optional]","text":"

      More load on the Kubernetes API server.

      "},{"location":"enhancements/20231001-split-containers/","title":"Proposal to split containers","text":"
      • All the NGINX files should live on one container
      • No file other than NGINX files should exist on this container
      • This includes not mounting the service account
      • All the controller files should live on a different container
      • Controller container should have bare minimum to work (just go program)
      • ServiceAccount should be mounted just on controller

• Inside the nginx container, there should be a really small HTTP listener able only to start, stop and reload NGINX

      "},{"location":"enhancements/20231001-split-containers/#roadmap-what-needs-to-be-done","title":"Roadmap (what needs to be done)","text":"
      • Map what needs to be done to mount the SA just on controller container
      • Map all the required files for NGINX to work
      • Map all the required network calls between controller and NGINX
• e.g.: dynamic Lua reconfiguration
      • Map problematic features that will need attention
• SSLPassthrough today happens in the controller process and needs to happen in NGINX
      "},{"location":"enhancements/20231001-split-containers/#ports-and-endpoints-on-nginx-container","title":"Ports and endpoints on NGINX container","text":"
      • Public HTTP/HTTPs port - 80 and 443
      • Lua configuration port - 10246 (HTTP) and 10247 (Stream)
      • 3333 (temp) - Dataplane controller http server
• /reload - (POST) Reloads the configuration.
  • The "config" argument is the location of the temporary file that should be used / moved to nginx.conf
• /test - (POST) Tests the configuration of a given file location.
  • The "config" argument is the location of the temporary file that should be tested
      "},{"location":"enhancements/20231001-split-containers/#mounting-empty-sa-on-controller-container","title":"Mounting empty SA on controller container","text":"
kind: Pod
apiVersion: v1
metadata:
  name: test
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80
  - name: othernginx
    image: alpine:latest
    command: ["/bin/sh"]
    args: ["-c", "while true; do date; sleep 3; done"]
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: emptysecret
  volumes:
  - name: emptysecret
    emptyDir:
      sizeLimit: 1Mi
      "},{"location":"enhancements/20231001-split-containers/#mapped-folders-on-nginx-configuration","title":"Mapped folders on NGINX configuration","text":"

WARNING: We need to be aware of cross-container mounts and inode problems. If we mount a file instead of a directory, it may take time for changes to the file to be reflected in the target container.

      • \"/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;;\"; - Lua scripts
      • \"/var/log/nginx\" - NGINX logs
      • \"/tmp/nginx (nginx.pid)\" - NGINX pid directory / file, fcgi socket, etc
      • \" /etc/nginx/geoip\" - GeoIP database directory - OK - /etc/ingress-controller/geoip
      • /etc/nginx/mime.types - Mime types
      • /etc/ingress-controller/ssl - SSL directory (fake cert, auth cert)
      • /etc/ingress-controller/auth - Authentication files
      • /etc/nginx/modsecurity - Modsecurity configuration
      • /etc/nginx/owasp-modsecurity-crs - Modsecurity rules
      • /etc/nginx/tickets.key - SSL tickets - OK - /etc/ingress-controller/tickets.key
      • /etc/nginx/opentelemetry.toml - OTEL config - OK - /etc/ingress-controller/telemetry
      • /etc/nginx/opentracing.json - Opentracing config - OK - /etc/ingress-controller/telemetry
      • /etc/nginx/modules - NGINX modules
      • /etc/nginx/fastcgi_params (maybe) - fcgi params
      • /etc/nginx/template - Template, may be used by controller only
      "},{"location":"enhancements/20231001-split-containers/#list-of-modules","title":"List of modules","text":"
ngx_http_auth_digest_module.so    ngx_http_modsecurity_module.so
ngx_http_brotli_filter_module.so  ngx_http_opentracing_module.so
ngx_http_brotli_static_module.so  ngx_stream_geoip2_module.so
ngx_http_geoip2_module.so
      "},{"location":"enhancements/20231001-split-containers/#list-of-files-that-may-be-removed","title":"List of files that may be removed","text":"
-rw-r--r--    1 www-data www-data      1077 Jun 23 19:44 fastcgi.conf
-rw-r--r--    1 www-data www-data      1077 Jun 23 19:44 fastcgi.conf.default
-rw-r--r--    1 www-data www-data      1007 Jun 23 19:44 fastcgi_params
-rw-r--r--    1 www-data www-data      1007 Jun 23 19:44 fastcgi_params.default
drwxr-xr-x    2 www-data www-data      4096 Jun 23 19:34 geoip
-rw-r--r--    1 www-data www-data      2837 Jun 23 19:44 koi-utf
-rw-r--r--    1 www-data www-data      2223 Jun 23 19:44 koi-win
drwxr-xr-x    6 www-data www-data      4096 Sep 19 14:13 lua
-rw-r--r--    1 www-data www-data      5349 Jun 23 19:44 mime.types
-rw-r--r--    1 www-data www-data      5349 Jun 23 19:44 mime.types.default
drwxr-xr-x    2 www-data www-data      4096 Jun 23 19:44 modsecurity
drwxr-xr-x    2 www-data www-data      4096 Jun 23 19:44 modules
-rw-r--r--    1 www-data www-data     18275 Oct  1 21:28 nginx.conf
-rw-r--r--    1 www-data www-data      2656 Jun 23 19:44 nginx.conf.default
-rwx------    1 www-data www-data       420 Oct  1 21:28 opentelemetry.toml
-rw-r--r--    1 www-data www-data         2 Oct  1 21:28 opentracing.json
drwxr-xr-x    7 www-data www-data      4096 Jun 23 19:44 owasp-modsecurity-crs
-rw-r--r--    1 www-data www-data       636 Jun 23 19:44 scgi_params
-rw-r--r--    1 www-data www-data       636 Jun 23 19:44 scgi_params.default
drwxr-xr-x    2 www-data www-data      4096 Sep 19 14:13 template
-rw-r--r--    1 www-data www-data       664 Jun 23 19:44 uwsgi_params
-rw-r--r--    1 www-data www-data       664 Jun 23 19:44 uwsgi_params.default
-rw-r--r--    1 www-data www-data      3610 Jun 23 19:44 win-utf
      "},{"location":"enhancements/YYYYMMDD-kep-template/","title":"Title","text":"

      This is the title of the KEP. Keep it simple and descriptive. A good title can help communicate what the KEP is and should be considered as part of any review.

      The title should be lowercased and spaces/punctuation should be replaced with -.

      To get started with this template:

1. Make a copy of this template and name it YYYYMMDD-my-title.md, where YYYYMMDD is the date the KEP was first drafted.
      2. Fill out the \"overview\" sections. This includes the Summary and Motivation sections. These should be easy if you've preflighted the idea of the KEP in an issue.
      3. Create a PR. Assign it to folks that are sponsoring this process.
      4. Create an issue When filing an enhancement tracking issue, please ensure to complete all fields in the template.
5. Merge early. Avoid getting hung up on specific details and instead aim to get the goal of the KEP merged quickly. The best way to do this is to just start with the "Overview" sections and fill out details incrementally in follow-on PRs. View anything marked as provisional as a working document and subject to change. Aim for single-topic PRs to keep discussions focused. If you disagree with what is already in a document, open a new PR with suggested changes.

      The canonical place for the latest set of instructions (and the likely source of this file) is here.

      The Metadata section above is intended to support the creation of tooling around the KEP process. This will be a YAML section that is fenced as a code block. See the KEP process for details on each of these items.

      "},{"location":"enhancements/YYYYMMDD-kep-template/#table-of-contents","title":"Table of Contents","text":"

      A table of contents is helpful for quickly jumping to sections of a KEP and for highlighting any additional information provided beyond the standard KEP template.

Ensure the TOC is wrapped with <!-- toc --><!-- /toc --> tags, and then generate it with hack/update-toc.sh.

      • Summary
      • Motivation
      • Goals
      • Non-Goals
      • Proposal
      • User Stories [optional]
        • Story 1
        • Story 2
      • Implementation Details/Notes/Constraints [optional]
      • Risks and Mitigations
      • Design Details
      • Test Plan
        • Removing a deprecated flag
      • Implementation History
      • Drawbacks [optional]
      • Alternatives [optional]
      "},{"location":"enhancements/YYYYMMDD-kep-template/#summary","title":"Summary","text":"

      The Summary section is incredibly important for producing high quality user-focused documentation such as release notes or a development roadmap. It should be possible to collect this information before implementation begins in order to avoid requiring implementers to split their attention between writing release notes and implementing the feature itself.

      A good summary is probably at least a paragraph in length.

      "},{"location":"enhancements/YYYYMMDD-kep-template/#motivation","title":"Motivation","text":"

      This section is for explicitly listing the motivation, goals and non-goals of this KEP. Describe why the change is important and the benefits to users. The motivation section can optionally provide links to experience reports to demonstrate the interest in a KEP within the wider Kubernetes community.

      "},{"location":"enhancements/YYYYMMDD-kep-template/#goals","title":"Goals","text":"

      List the specific goals of the KEP. How will we know that this has succeeded?

      "},{"location":"enhancements/YYYYMMDD-kep-template/#non-goals","title":"Non-Goals","text":"

      What is out of scope for this KEP? Listing non-goals helps to focus discussion and make progress.

      "},{"location":"enhancements/YYYYMMDD-kep-template/#proposal","title":"Proposal","text":"

      This is where we get down to the nitty gritty of what the proposal actually is.

      "},{"location":"enhancements/YYYYMMDD-kep-template/#user-stories-optional","title":"User Stories [optional]","text":"

      Detail the things that people will be able to do if this KEP is implemented. Include as much detail as possible so that people can understand the \"how\" of the system. The goal here is to make this feel real for users without getting bogged down.

      "},{"location":"enhancements/YYYYMMDD-kep-template/#story-1","title":"Story 1","text":""},{"location":"enhancements/YYYYMMDD-kep-template/#story-2","title":"Story 2","text":""},{"location":"enhancements/YYYYMMDD-kep-template/#implementation-detailsnotesconstraints-optional","title":"Implementation Details/Notes/Constraints [optional]","text":"

What are the caveats to the implementation? What are some important details that didn't come across above? Go into as much detail as necessary here. This might be a good place to talk about core concepts and how they relate.

      "},{"location":"enhancements/YYYYMMDD-kep-template/#risks-and-mitigations","title":"Risks and Mitigations","text":"

What are the risks of this proposal and how do we mitigate them? Think broadly. For example, consider both security and how this will impact the larger Kubernetes ecosystem.

      How will security be reviewed and by whom? How will UX be reviewed and by whom?

Consider including folks who also work outside the project.

      "},{"location":"enhancements/YYYYMMDD-kep-template/#design-details","title":"Design Details","text":""},{"location":"enhancements/YYYYMMDD-kep-template/#test-plan","title":"Test Plan","text":"

      Note: Section not required until targeted at a release.

      Consider the following in developing a test plan for this enhancement:

      • Will there be e2e and integration tests, in addition to unit tests?
      • How will it be tested in isolation vs with other components?

      No need to outline all of the test cases, just the general strategy. Anything that would count as tricky in the implementation and anything particularly challenging to test should be called out.

      All code is expected to have adequate tests (eventually with coverage expectations). Please adhere to the Kubernetes testing guidelines when drafting this test plan.

      "},{"location":"enhancements/YYYYMMDD-kep-template/#removing-a-deprecated-flag","title":"Removing a deprecated flag","text":"
      • Announce deprecation and support policy of the existing flag
      • Two versions passed since introducing the functionality which deprecates the flag (to address version skew)
      • Address feedback on usage/changed behavior, provided on GitHub issues
      • Deprecate the flag
      "},{"location":"enhancements/YYYYMMDD-kep-template/#implementation-history","title":"Implementation History","text":"

      Major milestones in the life cycle of a KEP should be tracked in Implementation History. Major milestones might include

      • the Summary and Motivation sections being merged signaling acceptance
      • the Proposal section being merged signaling agreement on a proposed design
      • the date implementation started
      • the first Kubernetes release where an initial version of the KEP was available
      • the version of Kubernetes where the KEP graduated to general availability
      • when the KEP was retired or superseded
      "},{"location":"enhancements/YYYYMMDD-kep-template/#drawbacks-optional","title":"Drawbacks [optional]","text":"

Why should this KEP not be implemented?

      "},{"location":"enhancements/YYYYMMDD-kep-template/#alternatives-optional","title":"Alternatives [optional]","text":"

Similar to the Drawbacks section, the Alternatives section is used to highlight and record other possible approaches to delivering the value proposed by a KEP.

      "},{"location":"examples/","title":"Ingress examples","text":"

      This directory contains a catalog of examples on how to run, configure and scale Ingress. Please review the prerequisites before trying them.

      The examples on these pages include the spec.ingressClassName field which replaces the deprecated kubernetes.io/ingress.class: nginx annotation. Users of ingress-nginx < 1.0.0 (Helm chart < 4.0.0) should use the legacy documentation.

      For more information, check out the Migration to apiVersion networking.k8s.io/v1 guide.

Category | Name | Description | Complexity Level
Apps | Docker Registry | TODO | TODO
Auth | Basic authentication | password protect your website | Intermediate
Auth | Client certificate authentication | secure your website with client certificate authentication | Intermediate
Auth | External authentication plugin | defer to an external authentication service | Intermediate
Auth | OAuth external auth | TODO | TODO
Customization | Configuration snippets | customize nginx location configuration using annotations | Advanced
Customization | Custom configuration | TODO | TODO
Customization | Custom DH parameters for perfect forward secrecy | TODO | TODO
Customization | Custom errors | serve custom error pages from the default backend | Intermediate
Customization | Custom headers | set custom headers before sending traffic to backends | Advanced
Customization | External authentication with response header propagation | TODO | TODO
Customization | Sysctl tuning | TODO | TODO
Features | Rewrite | TODO | TODO
Features | Session stickiness | route requests consistently to the same endpoint | Advanced
Features | Canary Deployments | weighted canary routing to a separate deployment | Intermediate
Scaling | Static IP | a single ingress gets a single static IP | Intermediate
TLS | Multi TLS certificate termination | TODO | TODO
TLS | TLS termination | TODO | TODO
"},{"location":"examples/PREREQUISITES/","title":"Prerequisites","text":"

      Many of the examples in this directory have common prerequisites.

      "},{"location":"examples/PREREQUISITES/#tls-certificates","title":"TLS certificates","text":"

Unless otherwise mentioned, the TLS secret used in examples is a 2048-bit RSA key/cert pair with an arbitrarily chosen hostname, created as follows:

      $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj \"/CN=nginxsvc/O=nginxsvc\"\nGenerating a 2048 bit RSA private key\n................+++\n................+++\nwriting new private key to 'tls.key'\n-----\n\n$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt\nsecret \"tls-secret\" created\n

      Note: If using CA Authentication, described below, you will need to sign the server certificate with the CA.

      "},{"location":"examples/PREREQUISITES/#client-certificate-authentication","title":"Client Certificate Authentication","text":"

CA Authentication, also known as Mutual Authentication, allows both the server and client to verify each other's identity via a common CA.

      We have a CA Certificate which we usually obtain from a Certificate Authority and use that to sign both our server certificate and client certificate. Then every time we want to access our backend, we must pass the client certificate.

These instructions are based on the following blog post.

      Generate the CA Key and Certificate:

openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 365 -nodes -subj '/CN=My Cert Authority'\n

      Generate the Server Key, and Certificate and Sign with the CA Certificate:

      openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=mydomain.com'\nopenssl x509 -req -sha256 -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt\n

      Generate the Client Key, and Certificate and Sign with the CA Certificate:

      openssl req -new -newkey rsa:4096 -keyout client.key -out client.csr -nodes -subj '/CN=My Client'\nopenssl x509 -req -sha256 -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt\n

Once this is complete, you can continue to follow the instructions here.

      "},{"location":"examples/PREREQUISITES/#test-http-service","title":"Test HTTP Service","text":"

All examples that require a test HTTP Service use the standard http-svc pod, which you can deploy as follows:

      $ kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/http-svc.yaml\nservice \"http-svc\" created\nreplicationcontroller \"http-svc\" created\n\n$ kubectl get po\nNAME             READY     STATUS    RESTARTS   AGE\nhttp-svc-p1t3t   1/1       Running   0          1d\n\n$ kubectl get svc\nNAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE\nhttp-svc         10.0.122.116   <pending>     80:30301/TCP       1d\n

You can test that the HTTP Service works by exposing it temporarily:

      $ kubectl patch svc http-svc -p '{\"spec\":{\"type\": \"LoadBalancer\"}}'\n\"http-svc\" patched\n\n$ kubectl get svc http-svc\nNAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE\nhttp-svc         10.0.122.116   <pending>     80:30301/TCP       1d\n\n$ kubectl describe svc http-svc\nName:                   http-svc\nNamespace:              default\nLabels:                 app=http-svc\nSelector:               app=http-svc\nType:                   LoadBalancer\nIP:                     10.0.122.116\nLoadBalancer Ingress:   108.59.87.136\nPort:                   http    80/TCP\nNodePort:               http    30301/TCP\nEndpoints:              10.180.1.6:8080\nSession Affinity:       None\nEvents:\n  FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason          Message\n  --------- --------    -----   ----            -------------   --------    ------          -------\n  1m        1m      1   {service-controller }           Normal      Type            ClusterIP -> LoadBalancer\n  1m        1m      1   {service-controller }           Normal      CreatingLoadBalancer    Creating load balancer\n  16s       16s     1   {service-controller }           Normal      CreatedLoadBalancer Created load balancer\n\n$ curl 108.59.87.136\nCLIENT VALUES:\nclient_address=10.240.0.3\ncommand=GET\nreal path=/\nquery=nil\nrequest_version=1.1\nrequest_uri=http://108.59.87.136:8080/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*/*\nhost=108.59.87.136\nuser-agent=curl/7.46.0\nBODY:\n-no body in request-\n\n$ kubectl patch svc http-svc -p '{\"spec\":{\"type\": \"NodePort\"}}'\n\"http-svc\" patched\n
      "},{"location":"examples/affinity/cookie/","title":"Sticky sessions","text":"

      This example demonstrates how to achieve session affinity using cookies.

      "},{"location":"examples/affinity/cookie/#deployment","title":"Deployment","text":"

      Session affinity can be configured using the following annotations:

Name | Description | Value
nginx.ingress.kubernetes.io/affinity | Type of the affinity; set this to cookie to enable session affinity | string (NGINX only supports cookie)
nginx.ingress.kubernetes.io/affinity-mode | The affinity mode defines how sticky a session is. Use balanced to redistribute some sessions when scaling pods, or persistent for maximum stickiness. | balanced (default) or persistent
nginx.ingress.kubernetes.io/affinity-canary-behavior | Defines session affinity behavior of canaries. By default the behavior is sticky, and canaries respect session affinity configuration. Set this to legacy to restore the original canary behavior, when session affinity parameters were not respected. | sticky (default) or legacy
nginx.ingress.kubernetes.io/session-cookie-name | Name of the cookie that will be created | string (defaults to INGRESSCOOKIE)
nginx.ingress.kubernetes.io/session-cookie-secure | Set the cookie as secure regardless of the protocol of the incoming request | \"true\" or \"false\"
nginx.ingress.kubernetes.io/session-cookie-path | Path that will be set on the cookie (required if your Ingress paths use regular expressions) | string (defaults to the currently matched path)
nginx.ingress.kubernetes.io/session-cookie-domain | Domain that will be set on the cookie | string
nginx.ingress.kubernetes.io/session-cookie-samesite | SameSite attribute to apply to the cookie | Browser-accepted values are None, Lax, and Strict
nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none | Will omit the SameSite=None attribute for older browsers which reject the more-recently defined SameSite=None value | \"true\" or \"false\"
nginx.ingress.kubernetes.io/session-cookie-max-age | Time until the cookie expires; corresponds to the Max-Age cookie directive | number of seconds
nginx.ingress.kubernetes.io/session-cookie-expires | Legacy version of the previous annotation for compatibility with older browsers; generates an Expires cookie directive by adding the seconds to the current date | number of seconds
nginx.ingress.kubernetes.io/session-cookie-change-on-failure | When set to false, nginx ingress will send requests to the upstream pointed to by the sticky cookie even if a previous attempt failed. When set to true and a previous attempt failed, the sticky cookie will be changed to point to another upstream. | true or false (defaults to false)

      You can create the session affinity example Ingress to test this:

      kubectl create -f ingress.yaml\n
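For reference, a minimal sketch of what that ingress.yaml could contain, consistent with the validation output below (host, service name, and cookie settings are the example's placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    # enable cookie-based session affinity
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  ingressClassName: nginx
  rules:
  - host: stickyingress.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80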
      "},{"location":"examples/affinity/cookie/#validation","title":"Validation","text":"

      You can confirm that the Ingress works:

      $ kubectl describe ing nginx-test\nName:           nginx-test\nNamespace:      default\nAddress:\nDefault backend:    default-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080)\nRules:\n  Host                          Path    Backends\n  ----                          ----    --------\n  stickyingress.example.com\n                                /        nginx-service:80 (<none>)\nAnnotations:\n  affinity: cookie\n  session-cookie-name:      INGRESSCOOKIE\n  session-cookie-expires: 172800\n  session-cookie-max-age: 172800\nEvents:\n  FirstSeen LastSeen    Count   From                SubObjectPath   Type        Reason  Message\n  --------- --------    -----   ----                -------------   --------    ------  -------\n  7s        7s      1   {ingress-nginx-controller }         Normal      CREATE  default/nginx-test\n\n\n$ curl -I http://stickyingress.example.com\nHTTP/1.1 200 OK\nServer: nginx/1.11.9\nDate: Fri, 10 Feb 2017 14:11:12 GMT\nContent-Type: text/html\nContent-Length: 612\nConnection: keep-alive\nSet-Cookie: INGRESSCOOKIE=a9907b79b248140b56bb13723f72b67697baac3d; Expires=Sun, 12-Feb-17 14:11:12 GMT; Max-Age=172800; Path=/; HttpOnly\nLast-Modified: Tue, 24 Jan 2017 14:02:19 GMT\nETag: \"58875e6b-264\"\nAccept-Ranges: bytes\n

In the example above, you can see that the response contains a Set-Cookie header with the settings we have defined. This cookie is created by the Ingress-Nginx Controller; it contains a randomly generated key corresponding to the upstream used for that request (selected using consistent hashing) and has an Expires directive. If a client sends a cookie that doesn't correspond to an upstream, NGINX selects an upstream and creates a corresponding cookie.

If the backend pool grows, NGINX will keep sending requests from the same client to the same server that handled the first request, even if it's overloaded.

      When the backend server is removed, the requests are re-routed to another upstream server. This does not require the cookie to be updated because the key's consistent hash will change.

      "},{"location":"examples/affinity/cookie/#caveats","title":"Caveats","text":"

When more than one Ingress points to the same Service but only one of them contains the affinity configuration, the first Ingress created will be used. This means you can face a situation where you've configured session affinity on one Ingress and it doesn't work, because the Service is also referenced by another Ingress that doesn't configure it.

      "},{"location":"examples/auth/basic/","title":"Basic Authentication","text":"

This example shows how to add authentication in an Ingress rule using a secret that contains a file generated with htpasswd. It's important that the generated file is named auth (actually, that the secret has a key data.auth); otherwise the ingress controller returns a 503.

      "},{"location":"examples/auth/basic/#create-htpasswd-file","title":"Create htpasswd file","text":"
      $ htpasswd -c auth foo\nNew password: <bar>\nNew password:\nRe-type new password:\nAdding password for user foo\n
      "},{"location":"examples/auth/basic/#convert-htpasswd-into-a-secret","title":"Convert htpasswd into a secret","text":"
      $ kubectl create secret generic basic-auth --from-file=auth\nsecret \"basic-auth\" created\n
      "},{"location":"examples/auth/basic/#examine-secret","title":"Examine secret","text":"
      $ kubectl get secret basic-auth -o yaml\napiVersion: v1\ndata:\n  auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK\nkind: Secret\nmetadata:\n  name: basic-auth\n  namespace: default\ntype: Opaque\n
      "},{"location":"examples/auth/basic/#using-kubectl-create-an-ingress-tied-to-the-basic-auth-secret","title":"Using kubectl, create an ingress tied to the basic-auth secret","text":"
      $ echo \"\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-with-auth\n  annotations:\n    # type of authentication\n    nginx.ingress.kubernetes.io/auth-type: basic\n    # name of the secret that contains the user/password definitions\n    nginx.ingress.kubernetes.io/auth-secret: basic-auth\n    # message to display with an appropriate context why the authentication is required\n    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: foo.bar.com\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service: \n            name: http-svc\n            port: \n              number: 80\n\" | kubectl create -f -\n
      "},{"location":"examples/auth/basic/#use-curl-to-confirm-authorization-is-required-by-the-ingress","title":"Use curl to confirm authorization is required by the ingress","text":"
      $ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com'\n*   Trying 10.2.29.4...\n* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)\n> GET / HTTP/1.1\n> Host: foo.bar.com\n> User-Agent: curl/7.43.0\n> Accept: */*\n>\n< HTTP/1.1 401 Unauthorized\n< Server: nginx/1.10.0\n< Date: Wed, 11 May 2016 05:27:23 GMT\n< Content-Type: text/html\n< Content-Length: 195\n< Connection: keep-alive\n< WWW-Authenticate: Basic realm=\"Authentication Required - foo\"\n<\n<html>\n<head><title>401 Authorization Required</title></head>\n<body bgcolor=\"white\">\n<center><h1>401 Authorization Required</h1></center>\n<hr><center>nginx/1.10.0</center>\n</body>\n</html>\n* Connection #0 to host 10.2.29.4 left intact\n
      "},{"location":"examples/auth/basic/#use-curl-with-the-correct-credentials-to-connect-to-the-ingress","title":"Use curl with the correct credentials to connect to the ingress","text":"
      $ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' -u 'foo:bar'\n*   Trying 10.2.29.4...\n* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)\n* Server auth using Basic with user 'foo'\n> GET / HTTP/1.1\n> Host: foo.bar.com\n> Authorization: Basic Zm9vOmJhcg==\n> User-Agent: curl/7.43.0\n> Accept: */*\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.10.0\n< Date: Wed, 11 May 2016 06:05:26 GMT\n< Content-Type: text/plain\n< Transfer-Encoding: chunked\n< Connection: keep-alive\n< Vary: Accept-Encoding\n<\nCLIENT VALUES:\nclient_address=10.2.29.4\ncommand=GET\nreal path=/\nquery=nil\nrequest_version=1.1\nrequest_uri=http://foo.bar.com:8080/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*/*\nconnection=close\nhost=foo.bar.com\nuser-agent=curl/7.43.0\nx-request-id=e426c7829ef9f3b18d40730857c3eddb\nx-forwarded-for=10.2.29.1\nx-forwarded-host=foo.bar.com\nx-forwarded-port=80\nx-forwarded-proto=http\nx-real-ip=10.2.29.1\nx-scheme=http\nBODY:\n* Connection #0 to host 10.2.29.4 left intact\n-no body in request-\n
      "},{"location":"examples/auth/client-certs/","title":"Client Certificate Authentication","text":"

      It is possible to enable Client-Certificate Authentication by adding additional annotations to your Ingress Resource.

      Before getting started you must have the following Certificates configured:

      1. CA certificate and Key (Intermediate Certs need to be in CA)
      2. Server Certificate (Signed by CA) and Key (CN should be equal the hostname you will use)
      3. Client Certificate (Signed by CA) and Key

      For more details on the generation process, checkout the Prerequisite docs.

You can have as many certificates as you want. If they're in the binary DER format, you can convert them to PEM as follows:

      openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem\n

Then, you can concatenate them all into one file named 'ca.crt':

      cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt\n

Note: Make sure that the key size is greater than 1024 bits and the hashing algorithm (digest) is stronger than MD5 for each certificate generated; otherwise you will receive an error.
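You can check both for an existing certificate with openssl, for example:

# expect e.g. "Public-Key: (4096 bit)" and "Signature Algorithm: sha256WithRSAEncryption"
openssl x509 -noout -text -in server.crt | grep -E 'Public-Key|Signature Algorithm'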

      "},{"location":"examples/auth/client-certs/#creating-certificate-secrets","title":"Creating Certificate Secrets","text":"

      There are many different ways of configuring your secrets to enable Client-Certificate Authentication to work properly.

      • You can create a secret containing just the CA certificate and another Secret containing the Server Certificate which is Signed by the CA.

        kubectl create secret generic ca-secret --from-file=ca.crt=ca.crt\nkubectl create secret generic tls-secret --from-file=tls.crt=server.crt --from-file=tls.key=server.key\n
• You can create a secret containing the CA certificate along with the Server Certificate that can be used for both TLS and Client Auth.

        kubectl create secret generic ca-secret --from-file=tls.crt=server.crt --from-file=tls.key=server.key --from-file=ca.crt=ca.crt\n
      • If you want to also enable Certificate Revocation List verification you can create the secret also containing the CRL file in PEM format:

        kubectl create secret generic ca-secret --from-file=ca.crt=ca.crt --from-file=ca.crl=ca.crl\n

      Note: The CA Certificate must contain the trusted certificate authority chain to verify client certificates.

      "},{"location":"examples/auth/client-certs/#setup-instructions","title":"Setup Instructions","text":"
1. Add the annotations as provided in the ingress.yaml example to your own ingress resources as required (a sketch follows this list).
      2. Test by performing a curl against the Ingress Path without the Client Cert and expect a Status Code 400.
      3. Test by performing a curl against the Ingress Path with the Client Cert and expect a Status Code 200.
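For orientation, a sketch of the annotations referred to in step 1, assuming the ca-secret and tls-secret created above and a placeholder host mydomain.com (see the ingress-nginx annotation reference for the authoritative list):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-with-client-auth
  annotations:
    # enable verification of client certificates
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    # namespace/name of the secret containing the ca.crt to verify against
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
    # depth of the client certificate chain to verify
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    # forward the client certificate to the upstream service
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - mydomain.com
    secretName: tls-secret
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80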
      "},{"location":"examples/auth/external-auth/","title":"External Basic Authentication","text":""},{"location":"examples/auth/external-auth/#example-1","title":"Example 1","text":"

Use an external service (Basic Auth) located at https://httpbin.org

      $ kubectl create -f ingress.yaml\ningress \"external-auth\" created\n\n$ kubectl get ing external-auth\nNAME            HOSTS                         ADDRESS       PORTS     AGE\nexternal-auth   external-auth-01.sample.com   172.17.4.99   80        13s\n\n$ kubectl get ing external-auth -o yaml\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/auth-url: https://httpbin.org/basic-auth/user/passwd\n  creationTimestamp: 2016-10-03T13:50:35Z\n  generation: 1\n  name: external-auth\n  namespace: default\n  resourceVersion: \"2068378\"\n  selfLink: /apis/networking/v1/namespaces/default/ingresses/external-auth\n  uid: 5c388f1d-8970-11e6-9004-080027d2dc94\nspec:\n  rules:\n  - host: external-auth-01.sample.com\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service: \n            name: http-svc\n            port: \n              number: 80\nstatus:\n  loadBalancer:\n    ingress:\n    - ip: 172.17.4.99\n$\n
      "},{"location":"examples/auth/external-auth/#test-1-no-usernamepassword-expect-code-401","title":"Test 1: no username/password (expect code 401)","text":"
      $ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com'\n* Rebuilt URL to: http://172.17.4.99/\n*   Trying 172.17.4.99...\n* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)\n> GET / HTTP/1.1\n> Host: external-auth-01.sample.com\n> User-Agent: curl/7.50.1\n> Accept: */*\n>\n< HTTP/1.1 401 Unauthorized\n< Server: nginx/1.11.3\n< Date: Mon, 03 Oct 2016 14:52:08 GMT\n< Content-Type: text/html\n< Content-Length: 195\n< Connection: keep-alive\n< WWW-Authenticate: Basic realm=\"Fake Realm\"\n<\n<html>\n<head><title>401 Authorization Required</title></head>\n<body bgcolor=\"white\">\n<center><h1>401 Authorization Required</h1></center>\n<hr><center>nginx/1.11.3</center>\n</body>\n</html>\n* Connection #0 to host 172.17.4.99 left intact\n
      "},{"location":"examples/auth/external-auth/#test-2-valid-usernamepassword-expect-code-200","title":"Test 2: valid username/password (expect code 200)","text":"
      $ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:passwd'\n* Rebuilt URL to: http://172.17.4.99/\n*   Trying 172.17.4.99...\n* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)\n* Server auth using Basic with user 'user'\n> GET / HTTP/1.1\n> Host: external-auth-01.sample.com\n> Authorization: Basic dXNlcjpwYXNzd2Q=\n> User-Agent: curl/7.50.1\n> Accept: */*\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.11.3\n< Date: Mon, 03 Oct 2016 14:52:50 GMT\n< Content-Type: text/plain\n< Transfer-Encoding: chunked\n< Connection: keep-alive\n<\nCLIENT VALUES:\nclient_address=10.2.60.2\ncommand=GET\nreal path=/\nquery=nil\nrequest_version=1.1\nrequest_uri=http://external-auth-01.sample.com:8080/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*/*\nauthorization=Basic dXNlcjpwYXNzd2Q=\nconnection=close\nhost=external-auth-01.sample.com\nuser-agent=curl/7.50.1\nx-forwarded-for=10.2.60.1\nx-forwarded-host=external-auth-01.sample.com\nx-forwarded-port=80\nx-forwarded-proto=http\nx-real-ip=10.2.60.1\nBODY:\n* Connection #0 to host 172.17.4.99 left intact\n-no body in request-\n
      "},{"location":"examples/auth/external-auth/#test-3-invalid-usernamepassword-expect-code-401","title":"Test 3: invalid username/password (expect code 401)","text":"
      curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:user'\n* Rebuilt URL to: http://172.17.4.99/\n*   Trying 172.17.4.99...\n* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)\n* Server auth using Basic with user 'user'\n> GET / HTTP/1.1\n> Host: external-auth-01.sample.com\n> Authorization: Basic dXNlcjp1c2Vy\n> User-Agent: curl/7.50.1\n> Accept: */*\n>\n< HTTP/1.1 401 Unauthorized\n< Server: nginx/1.11.3\n< Date: Mon, 03 Oct 2016 14:53:04 GMT\n< Content-Type: text/html\n< Content-Length: 195\n< Connection: keep-alive\n* Authentication problem. Ignoring this.\n< WWW-Authenticate: Basic realm=\"Fake Realm\"\n<\n<html>\n<head><title>401 Authorization Required</title></head>\n<body bgcolor=\"white\">\n<center><h1>401 Authorization Required</h1></center>\n<hr><center>nginx/1.11.3</center>\n</body>\n</html>\n* Connection #0 to host 172.17.4.99 left intact\n
      "},{"location":"examples/auth/oauth-external-auth/","title":"External OAUTH Authentication","text":""},{"location":"examples/auth/oauth-external-auth/#overview","title":"Overview","text":"

      The auth-url and auth-signin annotations allow you to use an external authentication provider to protect your Ingress resources.

      Important

      This annotation requires ingress-nginx-controller v0.9.0 or greater.

      "},{"location":"examples/auth/oauth-external-auth/#key-detail","title":"Key Detail","text":"

      This functionality is enabled by deploying multiple Ingress objects for a single host. One Ingress object has no special annotations and handles authentication.

Other Ingress objects can then be annotated in such a way that requires the user to authenticate against the first Ingress's endpoint, and can redirect 401s to the same endpoint.

      Sample:

      ...\nmetadata:\n  name: application\n  annotations:\n    nginx.ingress.kubernetes.io/auth-url: \"https://$host/oauth2/auth\"\n    nginx.ingress.kubernetes.io/auth-signin: \"https://$host/oauth2/start?rd=$escaped_request_uri\"\n...\n
      "},{"location":"examples/auth/oauth-external-auth/#example-oauth2-proxy-kubernetes-dashboard","title":"Example: OAuth2 Proxy + Kubernetes-Dashboard","text":"

      This example will show you how to deploy oauth2_proxy into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider.

      "},{"location":"examples/auth/oauth-external-auth/#prepare","title":"Prepare","text":"
      1. Install the kubernetes dashboard

        kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml\n
      2. Create a custom GitHub OAuth application

        • Homepage URL is the FQDN in the Ingress rule, like https://foo.bar.com
        • Authorization callback URL is the same as the base FQDN plus /oauth2/callback, like https://foo.bar.com/oauth2/callback

3. Configure the following values in the file oauth2-proxy.yaml:

        • OAUTH2_PROXY_CLIENT_ID with the github <Client ID>
        • OAUTH2_PROXY_CLIENT_SECRET with the github <Client Secret>
        • OAUTH2_PROXY_COOKIE_SECRET with value of python -c 'import os,base64; print(base64.b64encode(os.urandom(16)).decode(\"ascii\"))'
• (optional, but recommended) OAUTH2_PROXY_GITHUB_USERS with the GitHub usernames allowed to log in
        • __INGRESS_HOST__ with a valid FQDN (e.g. foo.bar.com)
        • __INGRESS_SECRET__ with a Secret with a valid SSL certificate
      4. Deploy the oauth2 proxy and the ingress rules by running:

        $ kubectl create -f oauth2-proxy.yaml\n
      "},{"location":"examples/auth/oauth-external-auth/#test","title":"Test","text":"

      Test the integration by accessing the configured URL, e.g. https://foo.bar.com

      "},{"location":"examples/auth/oauth-external-auth/#example-vouch-proxy-kubernetes-dashboard","title":"Example: Vouch Proxy + Kubernetes-Dashboard","text":"

      This example will show you how to deploy Vouch Proxy into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider.

      "},{"location":"examples/auth/oauth-external-auth/#prepare_1","title":"Prepare","text":"
      1. Install the kubernetes dashboard

        kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml\n
      2. Create a custom GitHub OAuth application

        • Homepage URL is the FQDN in the Ingress rule, like https://foo.bar.com
        • Authorization callback URL is the same as the base FQDN plus /oauth2/auth, like https://foo.bar.com/oauth2/auth

3. Configure the following Vouch Proxy values in the file vouch-proxy.yaml:

        • VOUCH_COOKIE_DOMAIN with value of <Ingress Host>
        • OAUTH_CLIENT_ID with the github <Client ID>
        • OAUTH_CLIENT_SECRET with the github <Client Secret>
• (optional, but recommended) VOUCH_WHITELIST with the GitHub usernames allowed to log in
        • __INGRESS_HOST__ with a valid FQDN (e.g. foo.bar.com)
        • __INGRESS_SECRET__ with a Secret with a valid SSL certificate
      4. Deploy Vouch Proxy and the ingress rules by running:

        $ kubectl create -f vouch-proxy.yaml\n
      "},{"location":"examples/auth/oauth-external-auth/#test_1","title":"Test","text":"

      Test the integration by accessing the configured URL, e.g. https://foo.bar.com

      "},{"location":"examples/canary/","title":"Canary","text":"

Ingress-Nginx has the ability to handle canary routing by setting specific annotations. The following is an example of how to configure a canary deployment with weighted canary routing.

      "},{"location":"examples/canary/#create-your-main-deployment-and-service","title":"Create your main deployment and service","text":"

This is the main deployment of your application, along with the service that will be used to route to it:

      echo \"\n---\n# Deployment\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: production\n  labels:\n    app: production\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: production\n  template:\n    metadata:\n      labels:\n        app: production\n    spec:\n      containers:\n      - name: production\n        image: registry.k8s.io/ingress-nginx/e2e-test-echo@sha256:6fc5aa2994c86575975bb20a5203651207029a0d28e3f491d8a127d08baadab4\n        ports:\n        - containerPort: 80\n        env:\n          - name: NODE_NAME\n            valueFrom:\n              fieldRef:\n                fieldPath: spec.nodeName\n          - name: POD_NAME\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.name\n          - name: POD_NAMESPACE\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.namespace\n          - name: POD_IP\n            valueFrom:\n              fieldRef:\n                fieldPath: status.podIP\n---\n# Service\napiVersion: v1\nkind: Service\nmetadata:\n  name: production\n  labels:\n    app: production\nspec:\n  ports:\n  - port: 80\n    targetPort: 80\n    protocol: TCP\n    name: http\n  selector:\n    app: production\n\" | kubectl apply -f -\n
      "},{"location":"examples/canary/#create-the-canary-deployment-and-service","title":"Create the canary deployment and service","text":"

This is the canary deployment, which will take a weighted share of requests instead of the main deployment:

      echo \"\n---\n# Deployment\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: canary\n  labels:\n    app: canary\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: canary\n  template:\n    metadata:\n      labels:\n        app: canary\n    spec:\n      containers:\n      - name: canary\n        image: registry.k8s.io/ingress-nginx/e2e-test-echo@sha256:6fc5aa2994c86575975bb20a5203651207029a0d28e3f491d8a127d08baadab4\n        ports:\n        - containerPort: 80\n        env:\n          - name: NODE_NAME\n            valueFrom:\n              fieldRef:\n                fieldPath: spec.nodeName\n          - name: POD_NAME\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.name\n          - name: POD_NAMESPACE\n            valueFrom:\n              fieldRef:\n                fieldPath: metadata.namespace\n          - name: POD_IP\n            valueFrom:\n              fieldRef:\n                fieldPath: status.podIP\n---\n# Service\napiVersion: v1\nkind: Service\nmetadata:\n  name: canary\n  labels:\n    app: canary\nspec:\n  ports:\n  - port: 80\n    targetPort: 80\n    protocol: TCP\n    name: http\n  selector:\n    app: canary\n\" | kubectl apply -f -\n
      "},{"location":"examples/canary/#create-ingress-pointing-to-your-main-deployment","title":"Create Ingress Pointing To Your Main Deployment","text":"

Next you will need to expose your main deployment with an Ingress resource. Note there are no canary-specific annotations on this Ingress:

      echo \"\n---\n# Ingress\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: production\n  annotations:\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: echo.prod.mydomain.com\n    http:\n      paths:\n      - pathType: Prefix\n        path: /\n        backend:\n          service:\n            name: production\n            port:\n              number: 80\n\" | kubectl apply -f -\n
      "},{"location":"examples/canary/#create-ingress-pointing-to-your-canary-deployment","title":"Create Ingress Pointing To Your Canary Deployment","text":"

You will then create an Ingress that has the canary-specific configuration. Please take special note of the following:

      • The host name is identical to the main ingress host name
• The nginx.ingress.kubernetes.io/canary: \"true\" annotation is required and marks this Ingress as a canary (if you do not have this, the Ingresses will clash)
• The nginx.ingress.kubernetes.io/canary-weight: \"50\" annotation dictates the weight of the routing; in this case there is a \"50%\" chance a request will hit the canary deployment instead of the main deployment
        echo \"\n---\n# Ingress\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: canary\n  annotations:\n    nginx.ingress.kubernetes.io/canary: \\\"true\\\"\n    nginx.ingress.kubernetes.io/canary-weight: \\\"50\\\"\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: echo.prod.mydomain.com\n    http:\n      paths:\n      - pathType: Prefix\n        path: /\n        backend:\n          service:\n            name: canary\n            port:\n              number: 80\n\" | kubectl apply -f -\n
      "},{"location":"examples/canary/#testing-your-setup","title":"Testing your setup","text":"

You can use the following command to test your setup (replacing INGRESS_CONTROLLER_IP with your ingress controller's IP address):

      for i in $(seq 1 10); do curl -s --resolve echo.prod.mydomain.com:80:$INGRESS_CONTROLLER_IP echo.prod.mydomain.com  | grep \"Hostname\"; done\n

You should get output similar to the following, showing that your canary setup is working as expected:

      Hostname: production-5c5f65d859-phqzc\nHostname: canary-6697778457-zkfjf\nHostname: canary-6697778457-zkfjf\nHostname: production-5c5f65d859-phqzc\nHostname: canary-6697778457-zkfjf\nHostname: production-5c5f65d859-phqzc\nHostname: production-5c5f65d859-phqzc\nHostname: production-5c5f65d859-phqzc\nHostname: canary-6697778457-zkfjf\nHostname: production-5c5f65d859-phqzc\n
      "},{"location":"examples/customization/configuration-snippets/","title":"Configuration Snippets","text":""},{"location":"examples/customization/configuration-snippets/#ingress","title":"Ingress","text":"

      The Ingress in this example adds a custom header to Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at an example of specifying custom headers.

      kubectl apply -f ingress.yaml\n
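For orientation, a sketch of the kind of annotation that ingress.yaml carries (the snippet content here is illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-configuration-snippet
  annotations:
    # extra NGINX directives injected into this Ingress's location block
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Request-Id: $req_id";
spec:
  ingressClassName: nginx
  rules:
  - host: custom.configuration.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80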
      "},{"location":"examples/customization/configuration-snippets/#test","title":"Test","text":"

      Check if the contents of the annotation are present in the nginx.conf file using:

      kubectl exec ingress-nginx-controller-873061567-4n3k2 -n kube-system -- cat /etc/nginx/nginx.conf\n
      "},{"location":"examples/customization/custom-configuration/","title":"Custom Configuration","text":"

Using a ConfigMap, it is possible to customize the NGINX configuration.

      For example, if we want to change the timeouts we need to create a ConfigMap:

      $ cat configmap.yaml\napiVersion: v1\ndata:\n  proxy-connect-timeout: \"10\"\n  proxy-read-timeout: \"120\"\n  proxy-send-timeout: \"120\"\nkind: ConfigMap\nmetadata:\n  name: ingress-nginx-controller\n
      curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/custom-configuration/configmap.yaml \\\n    | kubectl apply -f -\n

If the ConfigMap is updated, NGINX will be reloaded with the new configuration.

      "},{"location":"examples/customization/custom-errors/","title":"Custom Errors","text":"

      This example demonstrates how to use a custom backend to render custom error pages.

If you are using the Helm chart, look at the example values and don't forget to add the configMap to your deployment. Otherwise, continue with the Customized default backend manual deployment.

      "},{"location":"examples/customization/custom-errors/#customized-default-backend","title":"Customized default backend","text":"

      First, create the custom default-backend. It will be used by the Ingress controller later on. To do that, you can take a look at the example manifest in this project's GitHub repository.

      $ kubectl create -f custom-default-backend.yaml\nservice \"nginx-errors\" created\ndeployment.apps \"nginx-errors\" created\n

      This should have created a Deployment and a Service with the name nginx-errors.

      $ kubectl get deploy,svc\nNAME                           DESIRED   CURRENT   READY     AGE\ndeployment.apps/nginx-errors   1         1         1         10s\n\nNAME                   TYPE        CLUSTER-IP  EXTERNAL-IP   PORT(S)   AGE\nservice/nginx-errors   ClusterIP   10.0.0.12   <none>        80/TCP    10s\n
      "},{"location":"examples/customization/custom-errors/#ingress-controller-configuration","title":"Ingress controller configuration","text":"

      If you do not already have an instance of the Ingress-Nginx Controller running, deploy it according to the deployment guide, then follow these steps:

      1. Edit the ingress-nginx-controller Deployment and set the value of the --default-backend-service flag to the name of the newly created error backend.

      2. Edit the ingress-nginx-controller ConfigMap and create the key custom-http-errors with a value of 404,503.

      3. Take note of the IP address assigned to the Ingress-Nginx Controller Service.

        $ kubectl get svc ingress-nginx\nNAME            TYPE        CLUSTER-IP  EXTERNAL-IP   PORT(S)          AGE\ningress-nginx   ClusterIP   10.0.0.13   <none>        80/TCP,443/TCP   10m\n

      Note

      The ingress-nginx Service is of type ClusterIP in this example. This may vary depending on your environment. Make sure you can use the Service to reach NGINX before proceeding with the rest of this example.
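For orientation, a sketch of what steps 1 and 2 amount to, assuming the nginx-errors Service lives in the default namespace and the controller uses the standard ConfigMap name:

# Step 1: add to the controller Deployment's container args
#   --default-backend-service=default/nginx-errors
# Step 2: add the key to the controller ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  custom-http-errors: "404,503"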

      "},{"location":"examples/customization/custom-errors/#testing-error-pages","title":"Testing error pages","text":"

      Let us send a couple of HTTP requests using cURL and validate everything is working as expected.

      A request to the default backend returns a 404 error with a custom message:

      $ curl -D- http://10.0.0.13/\nHTTP/1.1 404 Not Found\nServer: nginx/1.13.12\nDate: Tue, 12 Jun 2018 19:11:24 GMT\nContent-Type: */*\nTransfer-Encoding: chunked\nConnection: keep-alive\n\n<span>The page you're looking for could not be found.</span>\n

      A request with a custom Accept header returns the corresponding document type (JSON):

      $ curl -D- -H 'Accept: application/json' http://10.0.0.13/\nHTTP/1.1 404 Not Found\nServer: nginx/1.13.12\nDate: Tue, 12 Jun 2018 19:12:36 GMT\nContent-Type: application/json\nTransfer-Encoding: chunked\nConnection: keep-alive\nVary: Accept-Encoding\n\n{ \"message\": \"The page you're looking for could not be found\" }\n

To go further with this example, feel free to deploy your own applications and Ingress objects, and validate that the responses are still in the correct format when a backend returns 503 (e.g. if you scale a Deployment down to 0 replicas).

      "},{"location":"examples/customization/custom-headers/","title":"Custom Headers","text":""},{"location":"examples/customization/custom-headers/#caveats","title":"Caveats","text":"

      Changes to the custom header config maps do not force a reload of the ingress-nginx-controllers.

      "},{"location":"examples/customization/custom-headers/#workaround","title":"Workaround","text":"

      To work around this limitation, perform a rolling restart of the deployment.
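For example, assuming the standard deployment name and namespace:

kubectl rollout restart deployment/ingress-nginx-controller -n ingress-nginx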

      "},{"location":"examples/customization/custom-headers/#example","title":"Example","text":"

      This example demonstrates configuration of the Ingress-Nginx Controller via a ConfigMap to pass a custom list of headers to the upstream server.

      custom-headers.yaml defines a ConfigMap in the ingress-nginx namespace named custom-headers, holding several custom X-prefixed HTTP headers.

      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/custom-headers/custom-headers.yaml\n
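For reference, that ConfigMap looks roughly like this (the exact headers in custom-headers.yaml may differ; these are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  X-Different-Name: "true"
  X-Request-Start: "t=${msec}"
  X-Using-Nginx-Controller: "true"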

configmap.yaml defines a ConfigMap in the ingress-nginx namespace named ingress-nginx-controller. This controls the global configuration of the ingress controller, and already exists in a standard installation. The key proxy-set-headers is set to reference the previously-created ingress-nginx/custom-headers ConfigMap.

      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/custom-headers/configmap.yaml\n
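The key addition in that global ConfigMap is a single entry pointing at the headers ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-set-headers: "ingress-nginx/custom-headers"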

      The Ingress-Nginx Controller will read the ingress-nginx/ingress-nginx-controller ConfigMap, find the proxy-set-headers key, read HTTP headers from the ingress-nginx/custom-headers ConfigMap, and include those HTTP headers in all requests flowing from nginx to the backends.

      The above example was for passing a custom list of headers to the upstream server. To pass the custom headers before sending response traffic to the client, use the add-headers key:

      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/custom-headers/configmap-client-response.yaml\n
      "},{"location":"examples/customization/custom-headers/#test","title":"Test","text":"

Check that the contents of the ConfigMaps are present in the nginx.conf file using: kubectl exec ingress-nginx-controller-873061567-4n3k2 -n ingress-nginx -- cat /etc/nginx/nginx.conf

      "},{"location":"examples/customization/external-auth-headers/","title":"External authentication, authentication service response headers propagation","text":"

      This example demonstrates propagation of selected authentication service response headers to a backend service.

      Sample configuration includes:

      • Sample authentication service producing several response headers
• Authentication logic is based on the HTTP header User: requests whose User header contains the string internal are considered authenticated
• After successful authentication, the service generates the response headers UserID and UserRole
      • Sample echo service displaying header information
      • Two ingress objects pointing to echo service
      • Public, which allows access from unauthenticated users
      • Private, which allows access from authenticated users only

You can deploy the example as follows:

      $ kubectl create -f deploy/\ndeployment \"demo-auth-service\" created\nservice \"demo-auth-service\" created\ningress \"demo-auth-service\" created\ndeployment \"demo-echo-service\" created\nservice \"demo-echo-service\" created\ningress \"public-demo-echo-service\" created\ningress \"secure-demo-echo-service\" created\n\n$ kubectl get po\nNAME                                        READY     STATUS    RESTARTS   AGE\ndemo-auth-service-2769076528-7g9mh          1/1       Running            0          30s\ndemo-echo-service-3636052215-3vw8c          1/1       Running            0          29s\n\nkubectl get ing\nNAME                       HOSTS                                 ADDRESS   PORTS     AGE\npublic-demo-echo-service   public-demo-echo-service.kube.local             80        1m\nsecure-demo-echo-service   secure-demo-echo-service.kube.local             80        1m\n
      "},{"location":"examples/customization/external-auth-headers/#test-1-public-service-with-no-auth-header","title":"Test 1: public service with no auth header","text":"
      $ curl -H 'Host: public-demo-echo-service.kube.local' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n*   Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: public-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:19:21 GMT\n< Content-Type: text/plain; charset=utf-8\n< Content-Length: 20\n< Connection: keep-alive\n<\n* Connection #0 to host 192.168.99.100 left intact\nUserID: , UserRole:\n
      "},{"location":"examples/customization/external-auth-headers/#test-2-secure-service-with-no-auth-header","title":"Test 2: secure service with no auth header","text":"
      $ curl -H 'Host: secure-demo-echo-service.kube.local' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n*   Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: secure-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n>\n< HTTP/1.1 403 Forbidden\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:18:48 GMT\n< Content-Type: text/html\n< Content-Length: 170\n< Connection: keep-alive\n<\n<html>\n<head><title>403 Forbidden</title></head>\n<body bgcolor=\"white\">\n<center><h1>403 Forbidden</h1></center>\n<hr><center>nginx/1.11.10</center>\n</body>\n</html>\n* Connection #0 to host 192.168.99.100 left intact\n
      "},{"location":"examples/customization/external-auth-headers/#test-3-public-service-with-valid-auth-header","title":"Test 3: public service with valid auth header","text":"
      $ curl -H 'Host: public-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n*   Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: public-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n> User:internal\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:19:59 GMT\n< Content-Type: text/plain; charset=utf-8\n< Content-Length: 44\n< Connection: keep-alive\n<\n* Connection #0 to host 192.168.99.100 left intact\nUserID: 1443635317331776148, UserRole: admin\n
      "},{"location":"examples/customization/external-auth-headers/#test-4-secure-service-with-valid-auth-header","title":"Test 4: secure service with valid auth header","text":"
      $ curl -H 'Host: secure-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n*   Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: secure-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n> User:internal\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:17:23 GMT\n< Content-Type: text/plain; charset=utf-8\n< Content-Length: 43\n< Connection: keep-alive\n<\n* Connection #0 to host 192.168.99.100 left intact\nUserID: 605394647632969758, UserRole: admin\n
      "},{"location":"examples/customization/jwt/","title":"Accommodation for JWT","text":"

JWT (short for JSON Web Token) is a widely used authentication method. Basically, an authentication server generates a JWT, and you then send this token with every request you make to a backend service. The JWT can be quite big and is present in the headers of every HTTP request. This means you may have to adapt the maximum header size of your nginx-ingress in order to support it.

      "},{"location":"examples/customization/jwt/#symptoms","title":"Symptoms","text":"

If you use JWT and you get an HTTP 502 error from your ingress, it may be a sign that the buffer size is not big enough.

To be 100% sure, look at the logs of the ingress-nginx-controller pod; you should see something like this:

      upstream sent too big header while reading response header from upstream...\n
      "},{"location":"examples/customization/jwt/#increase-buffer-size-for-headers","title":"Increase buffer size for headers","text":"

In NGINX, we want to modify the property proxy-buffer-size. The size is arbitrary and depends on your needs. Be aware that a high value can lower the performance of your ingress proxy. In general, a value of 16k should have you covered.

      "},{"location":"examples/customization/jwt/#using-helm","title":"Using helm","text":"

If you're using Helm, you can simply use the config properties.

       # -- Will add custom configuration options to Nginx https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/\n  config: \n    proxy-buffer-size: 16k\n
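A sketch of applying that value with Helm, assuming the chart was installed as release ingress-nginx from the official repo (controller.config is the chart's value path for these options):

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.config.proxy-buffer-size=16k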

      "},{"location":"examples/customization/jwt/#manually-in-kubernetes-config-files","title":"Manually in kubernetes config files","text":"

If you use an already-generated config from a provider, you will have to change the controller-configmap.yaml:

      ---\n# Source: ingress-nginx/templates/controller-configmap.yaml\napiVersion: v1\nkind: ConfigMap\n# ...\ndata:\n  #...\n  proxy-buffer-size: \"16k\"\n

References:

• Custom Configuration

      "},{"location":"examples/customization/ssl-dh-param/","title":"Custom DH parameters for perfect forward secrecy","text":"

This example aims to demonstrate the deployment of an Ingress-Nginx Controller and the use of a ConfigMap to configure a custom Diffie-Hellman parameters file to help with \"Perfect Forward Secrecy\".

      "},{"location":"examples/customization/ssl-dh-param/#custom-configuration","title":"Custom configuration","text":"
      $ cat configmap.yaml\napiVersion: v1\ndata:\n  ssl-dh-param: \"ingress-nginx/lb-dhparam\"\nkind: ConfigMap\nmetadata:\n  name: ingress-nginx-controller\n  namespace: ingress-nginx\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\n
      $ kubectl create -f configmap.yaml\n
      "},{"location":"examples/customization/ssl-dh-param/#custom-dh-parameters-secret","title":"Custom DH parameters secret","text":"
      $ openssl dhparam 4096 2> /dev/null | base64\nLS0tLS1CRUdJTiBESCBQQVJBTUVURVJ...\n
      $ cat ssl-dh-param.yaml\napiVersion: v1\ndata:\n  dhparam.pem: \"LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ...\"\nkind: Secret\nmetadata:\n  name: lb-dhparam\n  namespace: ingress-nginx\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\n
      $ kubectl create -f ssl-dh-param.yaml\n
      "},{"location":"examples/customization/ssl-dh-param/#test","title":"Test","text":"

Check that the contents of the ConfigMap are present in the nginx.conf file using:

      $ kubectl exec ingress-nginx-controller-873061567-4n3k2 -n kube-system -- cat /etc/nginx/nginx.conf\n

      "},{"location":"examples/customization/sysctl/","title":"Sysctl tuning","text":"

      This example aims to demonstrate the use of an Init Container to adjust sysctl default values using kubectl patch.

      kubectl patch deployment -n ingress-nginx ingress-nginx-controller \\\n    --patch=\"$(curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/sysctl/patch.json)\"\n

      Changes:

      • Backlog Queue setting net.core.somaxconn from 128 to 32768
      • Ephemeral Ports setting net.ipv4.ip_local_port_range from 32768 60999 to 1024 65000

A post on the NGINX blog explains the reasoning behind these changes.
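The patch itself adds a privileged init container that applies the sysctls before NGINX starts. A sketch of its shape (image and exact JSON layout are illustrative, not the canonical patch.json):

{
  "spec": {
    "template": {
      "spec": {
        "initContainers": [
          {
            "name": "sysctl",
            "image": "busybox:1.36",
            "securityContext": { "privileged": true },
            "command": [
              "sh", "-c",
              "sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range='1024 65000'"
            ]
          }
        ]
      }
    }
  }
}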

      "},{"location":"examples/docker-registry/","title":"Docker registry","text":"

      This example demonstrates how to deploy a docker registry in the cluster and configure Ingress to enable access from the Internet.

      "},{"location":"examples/docker-registry/#deployment","title":"Deployment","text":"

      First we deploy the docker registry in the cluster:

      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/docker-registry/deployment.yaml\n

      Important

      DO NOT RUN THIS IN PRODUCTION

      This deployment uses emptyDir in the volumeMount which means the contents of the registry will be deleted when the pod dies.

      The next required step is creation of the ingress rules. To do this we have two options: with and without TLS

      "},{"location":"examples/docker-registry/#without-tls","title":"Without TLS","text":"

      Download and edit the yaml deployment replacing registry.<your domain> with a valid DNS name pointing to the ingress controller:

      wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/docker-registry/ingress-without-tls.yaml\n

      Important

Running a docker registry without TLS requires that we configure our local docker daemon with the insecure-registry flag.

Please check the Docker guide on deploying a plain HTTP registry.
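On the Docker daemon side, this typically means adding the registry host to /etc/docker/daemon.json and restarting the daemon (the host name is a placeholder):

{
  "insecure-registries": ["registry.<your domain>"]
}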

      "},{"location":"examples/docker-registry/#with-tls","title":"With TLS","text":"

      Download and edit the yaml deployment replacing registry.<your domain> with a valid DNS name pointing to the ingress controller:

      wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/docker-registry/ingress-with-tls.yaml\n

Deploy kube-lego to use Let's Encrypt certificates, or edit the ingress rule to use a secret with an existing SSL certificate.
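
If you already have a certificate and key on disk, a minimal sketch for creating the secret referenced by the TLS ingress rule (the secret name registry-tls and the file names are assumptions):

kubectl create secret tls registry-tls \\\n  --cert=registry.crt \\\n  --key=registry.key\n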

      "},{"location":"examples/docker-registry/#testing","title":"Testing","text":"

To test that the registry is working correctly, we download a known image from Docker Hub, create a tag pointing to the new registry, and upload the image:

docker pull ubuntu:16.04\ndocker tag ubuntu:16.04 registry.<your domain>/ubuntu:16.04\ndocker push registry.<your domain>/ubuntu:16.04\n

      Please replace registry.<your domain> with your domain.

      "},{"location":"examples/grpc/","title":"gRPC","text":"

      This example demonstrates how to route traffic to a gRPC service through the Ingress-NGINX controller.

      "},{"location":"examples/grpc/#prerequisites","title":"Prerequisites","text":"
      1. You have a kubernetes cluster running.
      2. You have a domain name such as example.com that is configured to route traffic to the Ingress-NGINX controller.
      3. You have the ingress-nginx-controller installed as per docs.
      4. You have a backend application running a gRPC server listening for TCP traffic. If you want, you can use https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go as an example.
      5. You're also responsible for provisioning an SSL certificate for the ingress. So you need to have a valid SSL certificate, deployed as a Kubernetes secret of type tls, in the same namespace as the gRPC application.
      "},{"location":"examples/grpc/#step-1-create-a-kubernetes-deployment-for-grpc-app","title":"Step 1: Create a Kubernetes Deployment for gRPC app","text":"
• Make sure your gRPC application pod is running and listening for connections. For example, you can try a kubectl command like the one below:
        $ kubectl get po -A -o wide | grep go-grpc-greeter-server\n
      • If you have a gRPC app deployed in your cluster, then skip further notes in this Step 1, and continue from Step 2 below.

      • As an example gRPC application, we can use this app https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go.

      • To create a container image for this app, you can use this Dockerfile.

• If you use the Dockerfile mentioned above to create an image, then you can use the following example Kubernetes manifest to create a deployment resource that uses that image. If necessary, edit this manifest to suit your needs.

      cat <<EOF | kubectl apply -f -\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n  labels:\n    app: go-grpc-greeter-server\n  name: go-grpc-greeter-server\nspec:\n  replicas: 1\n  selector:\n    matchLabels:\n      app: go-grpc-greeter-server\n  template:\n    metadata:\n      labels:\n        app: go-grpc-greeter-server\n    spec:\n      containers:\n      - image: <reponame>/go-grpc-greeter-server   # Edit this for your reponame\n        resources:\n          limits:\n            cpu: 100m\n            memory: 100Mi\n          requests:\n            cpu: 50m\n            memory: 50Mi\n        name: go-grpc-greeter-server\n        ports:\n        - containerPort: 50051\nEOF\n
      "},{"location":"examples/grpc/#step-2-create-the-kubernetes-service-for-the-grpc-app","title":"Step 2: Create the Kubernetes Service for the gRPC app","text":"
      • You can use the following example manifest to create a service of type ClusterIP. Edit the name/namespace/label/port to match your deployment/pod.
        cat <<EOF | kubectl apply -f -\napiVersion: v1\nkind: Service\nmetadata:\n  labels:\n    app: go-grpc-greeter-server\n  name: go-grpc-greeter-server\nspec:\n  ports:\n  - port: 80\n    protocol: TCP\n    targetPort: 50051\n  selector:\n    app: go-grpc-greeter-server\n  type: ClusterIP\nEOF\n
      • You can save the above example manifest to a file with name service.go-grpc-greeter-server.yaml and edit it to match your deployment/pod, if required. You can create the service resource with a kubectl command like this:
      $ kubectl create -f service.go-grpc-greeter-server.yaml\n
      "},{"location":"examples/grpc/#step-3-create-the-kubernetes-ingress-resource-for-the-grpc-app","title":"Step 3: Create the Kubernetes Ingress resource for the gRPC app","text":"
• Use the following example manifest of an ingress resource to create an ingress for your gRPC app. If required, edit it to match your app's details like name, namespace, service, secret, etc. Make sure you have the required SSL certificate in your Kubernetes cluster, in the same namespace where the gRPC app is. The certificate must be available as a Kubernetes secret resource of type \"kubernetes.io/tls\" (see https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets). This is because we are terminating TLS on the ingress.
      cat <<EOF | kubectl apply -f -\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/ssl-redirect: \"true\"\n    nginx.ingress.kubernetes.io/backend-protocol: \"GRPC\"\n  name: fortune-ingress\n  namespace: default\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: grpctest.dev.mydomain.com\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: go-grpc-greeter-server\n            port:\n              number: 80\n  tls:\n  # This secret must exist beforehand\n  # The cert must also contain the subj-name grpctest.dev.mydomain.com\n  # https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/PREREQUISITES.md#tls-certificates\n  - secretName: wildcard.dev.mydomain.com\n    hosts:\n      - grpctest.dev.mydomain.com\nEOF\n
      • If you save the above example manifest as a file named ingress.go-grpc-greeter-server.yaml and edit it to match your deployment and service, you can create the ingress like this:
      $ kubectl create -f ingress.go-grpc-greeter-server.yaml\n
      • The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, gRPC traffic will travel unencrypted inside the cluster and arrive \"insecure\").

• For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your pod and terminate TLS at the gRPC server itself, add the ingress annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPCS\".

      • A few more things to note:

      • We've tagged the ingress with the annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPC\". This is the magic ingredient that sets up the appropriate nginx configuration to route http/2 traffic to our service.

      • We're terminating TLS at the ingress and have configured an SSL certificate wildcard.dev.mydomain.com. The ingress matches traffic arriving as https://grpctest.dev.mydomain.com:443 and routes unencrypted messages to the backend Kubernetes service.

      "},{"location":"examples/grpc/#step-4-test-the-connection","title":"Step 4: test the connection","text":"
      • Once we've applied our configuration to Kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility:
      $ grpcurl grpctest.dev.mydomain.com:443 helloworld.Greeter/SayHello\n{\n  \"message\": \"Hello \"\n}\n
      "},{"location":"examples/grpc/#debugging-hints","title":"Debugging Hints","text":"
      1. Obviously, watch the logs on your app.
      2. Watch the logs for the ingress-nginx-controller (increasing verbosity as needed).
      3. Double-check your address and ports.
      4. Set the GODEBUG=http2debug=2 environment variable to get detailed http/2 logging on the client and/or server.
      5. Study RFC 7540 (http/2) https://tools.ietf.org/html/rfc7540.
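
When the backend exposes the reflection service (as the example greeter server from Step 1 does), grpcurl can also list the services the server provides, which is a quick way to confirm end-to-end gRPC connectivity. A minimal sketch; the output assumes that example server:

$ grpcurl grpctest.dev.mydomain.com:443 list\ngrpc.reflection.v1alpha.ServerReflection\nhelloworld.Greeter\n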

If you are developing public gRPC endpoints, check out https://proto.stack.build, a protocol buffer / gRPC build service that you can use to make it easier for your users to consume your API.

      See also the specific gRPC settings of NGINX: https://nginx.org/en/docs/http/ngx_http_grpc_module.html

      "},{"location":"examples/grpc/#notes-on-using-responserequest-streams","title":"Notes on using response/request streams","text":"

grpc_read_timeout and grpc_send_timeout will be set as proxy_read_timeout and proxy_send_timeout when you set the backend protocol to GRPC or GRPCS.

      1. If your server only does response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the grpc_read_timeout to accommodate this.
      2. If your service only does request streaming and you expect a stream to be open longer than 60 seconds, you have to change the grpc_send_timeout and the client_body_timeout.
3. If you do both response and request streaming with an open stream longer than 60 seconds, you have to change all three timeouts: grpc_read_timeout, grpc_send_timeout and client_body_timeout (see the sketch after this list).
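
A minimal sketch of raising these timeouts, assuming a stream that stays open for up to one hour. The proxy-read-timeout and proxy-send-timeout annotations feed grpc_read_timeout and grpc_send_timeout as described above; client_body_timeout is global, and is assumed here to be set via the client-body-timeout key of the controller ConfigMap:

# Ingress fragment\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/backend-protocol: \"GRPC\"\n    nginx.ingress.kubernetes.io/proxy-read-timeout: \"3600\"\n    nginx.ingress.kubernetes.io/proxy-send-timeout: \"3600\"\n---\n# Controller ConfigMap (client_body_timeout is global, not per-Ingress)\napiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: ingress-nginx-controller\n  namespace: ingress-nginx\ndata:\n  client-body-timeout: \"3600\"\n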
      "},{"location":"examples/multi-tls/","title":"Multi TLS certificate termination","text":"

      This example uses 2 different certificates to terminate SSL for 2 hostnames.

1. Create tls secrets for foo.bar.com and bar.baz.com as indicated in the yaml (see the sketch below)
2. Create multi-tls.yaml
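
A minimal sketch of step 1, assuming you already have a certificate and key file per hostname; the secret and file names are placeholders:

kubectl create secret tls foobar-tls --cert=foo.bar.com.crt --key=foo.bar.com.key\nkubectl create secret tls barbaz-tls --cert=bar.baz.com.crt --key=bar.baz.com.key\n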

      This should generate a segment like:

      $ kubectl exec -it ingress-nginx-controller-6vwd1 -- cat /etc/nginx/nginx.conf | grep \"foo.bar.com\" -B 7 -A 35\n    server {\n        listen 80;\n        listen 443 ssl http2;\n        ssl_certificate /etc/nginx-ssl/default-foobar.pem;\n        ssl_certificate_key /etc/nginx-ssl/default-foobar.pem;\n\n\n        server_name foo.bar.com;\n\n\n        if ($scheme = http) {\n            return 301 https://$host$request_uri;\n        }\n\n\n\n        location / {\n            proxy_set_header Host                   $host;\n\n            # Pass Real IP\n            proxy_set_header X-Real-IP              $remote_addr;\n\n            # Allow websocket connections\n            proxy_set_header                        Upgrade           $http_upgrade;\n            proxy_set_header                        Connection        $connection_upgrade;\n\n            proxy_set_header X-Forwarded-For        $proxy_add_x_forwarded_for;\n            proxy_set_header X-Forwarded-Host       $host;\n            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;\n\n            proxy_connect_timeout                   5s;\n            proxy_send_timeout                      60s;\n            proxy_read_timeout                      60s;\n\n            proxy_redirect                          off;\n            proxy_buffering                         off;\n\n            proxy_http_version                      1.1;\n\n            proxy_pass http://default-http-svc-80;\n        }\n

      And you should be able to reach your nginx service or http-svc service using a hostname switch:

      $  kubectl get ing\nNAME      RULE          BACKEND   ADDRESS                         AGE\nfoo-tls   -                       104.154.30.67                   13m\n          foo.bar.com\n          /             http-svc:80\n          bar.baz.com\n          /             nginx:80\n\n$ curl https://104.154.30.67 -H 'Host:foo.bar.com' -k\nCLIENT VALUES:\nclient_address=10.245.0.6\ncommand=GET\nreal path=/\nquery=nil\nrequest_version=1.1\nrequest_uri=http://foo.bar.com:8080/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*/*\nconnection=close\nhost=foo.bar.com\nuser-agent=curl/7.35.0\nx-forwarded-for=10.245.0.1\nx-forwarded-host=foo.bar.com\nx-forwarded-proto=https\n\n$ curl https://104.154.30.67 -H 'Host:bar.baz.com' -k\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx on Debian!</title>\n\n$ curl 104.154.30.67\ndefault backend - 404\n

      "},{"location":"examples/openpolicyagent/","title":"OpenPolicyAgent and pathType enforcing","text":"

The Ingress API allows users to specify different pathType values on an Ingress object.

While pathType Exact and Prefix should allow only a small set of characters, pathType ImplementationSpecific allows any characters, as it may contain regexes, variables and other features that may be specific to the Ingress Controller being used.

This means that the Ingress Admins (the persona who deployed the Ingress Controller) should trust the users allowed to use pathType: ImplementationSpecific, as this may allow arbitrary configuration, and this configuration may end up in the proxy (i.e., NGINX) configuration.

      "},{"location":"examples/openpolicyagent/#example","title":"Example","text":"

      The example in this repo uses Gatekeeper to block the usage of pathType: ImplementationSpecific, allowing just a specific list of namespaces to use it.

It is recommended that the admin modify these rules to enforce a specific set of characters when the usage of ImplementationSpecific is allowed, or in whatever way best suits their needs.

First, the ConstraintTemplate from template.yaml will define a rule that validates whether the Ingress object is being created in an exempted namespace and, if not, will validate its pathType.

      Then, the rule K8sBlockIngressPathType contained in rule.yaml will define the parameters: what kind of object should be verified (Ingress), what are the exempted namespaces, and what kinds of pathType are blocked.
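
For illustration, a Constraint built from that template might look like the sketch below. The Gatekeeper match block is standard, but the parameter names (exemptNamespaces, blockedPathTypes) are hypothetical; the actual schema is the one defined in rule.yaml:

apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sBlockIngressPathType\nmetadata:\n  name: block-implementation-specific-pathtype\nspec:\n  match:\n    kinds:\n    - apiGroups: [\"networking.k8s.io\"]\n      kinds: [\"Ingress\"]\n  parameters:\n    # Hypothetical parameter names - see rule.yaml for the real schema\n    exemptNamespaces: [\"trusted-namespace\"]\n    blockedPathTypes: [\"ImplementationSpecific\"]\n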

      "},{"location":"examples/psp/","title":"Pod Security Policy (PSP)","text":"

In most clusters today, by default, all resources (e.g. Deployments and ReplicaSets) have permissions to create pods. Kubernetes however provides a more fine-grained authorization policy called Pod Security Policy (PSP).

      PSP allows the cluster owner to define the permission of each object, for example creating a pod. If you have PSP enabled on the cluster, and you deploy ingress-nginx, you will need to provide the Deployment with the permissions to create pods.

      Before applying any objects, first apply the PSP permissions by running:

      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/psp/psp.yaml\n

      Note: PSP permissions must be granted before the creation of the Deployment and the ReplicaSet.

      "},{"location":"examples/rewrite/","title":"Rewrite","text":"

      This example demonstrates how to use Rewrite annotations.

      "},{"location":"examples/rewrite/#prerequisites","title":"Prerequisites","text":"

      You will need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, and that you have an ingress controller running in your cluster.

      "},{"location":"examples/rewrite/#deployment","title":"Deployment","text":"

      Rewriting can be controlled using the following annotations:

| Name | Description | Values |
| --- | --- | --- |
| nginx.ingress.kubernetes.io/rewrite-target | Target URI where the traffic must be redirected | string |
| nginx.ingress.kubernetes.io/ssl-redirect | Indicates if the location section is only accessible via SSL (defaults to True when Ingress contains a Certificate) | bool |
| nginx.ingress.kubernetes.io/force-ssl-redirect | Forces the redirection to HTTPS even if the Ingress is not TLS Enabled | bool |
| nginx.ingress.kubernetes.io/app-root | Defines the Application Root that the Controller must redirect if it's in / context | string |
| nginx.ingress.kubernetes.io/use-regex | Indicates if the paths defined on an Ingress use regular expressions | bool |
"},{"location":"examples/rewrite/#examples","title":"Examples","text":""},{"location":"examples/rewrite/#rewrite-target","title":"Rewrite Target","text":"

      Attention

      Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.

      Note

      Captured groups are saved in numbered placeholders, chronologically, in the form $1, $2 ... $n. These placeholders can be used as parameters in the rewrite-target annotation.

      Note

      Please see the FAQ for Validation Of path

      Create an Ingress rule with a rewrite annotation:

      $ echo '\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/use-regex: \"true\"\n    nginx.ingress.kubernetes.io/rewrite-target: /$2\n  name: rewrite\n  namespace: default\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: rewrite.bar.com\n    http:\n      paths:\n      - path: /something(/|$)(.*)\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: http-svc\n            port: \n              number: 80\n' | kubectl create -f -\n

      In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $2, which is then used as a parameter in the rewrite-target annotation.

      For example, the ingress definition above will result in the following rewrites:

      • rewrite.bar.com/something rewrites to rewrite.bar.com/
      • rewrite.bar.com/something/ rewrites to rewrite.bar.com/
      • rewrite.bar.com/something/new rewrites to rewrite.bar.com/new
      "},{"location":"examples/rewrite/#app-root","title":"App Root","text":"

      Create an Ingress rule with an app-root annotation:

      $ echo \"\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/app-root: /app1\n  name: approot\n  namespace: default\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: approot.bar.com\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: http-svc\n            port: \n              number: 80\n\" | kubectl create -f -\n

Check that the rewrite is working:

      $ curl -I -k http://approot.bar.com/\nHTTP/1.1 302 Moved Temporarily\nServer: nginx/1.11.10\nDate: Mon, 13 Mar 2017 14:57:15 GMT\nContent-Type: text/html\nContent-Length: 162\nLocation: http://approot.bar.com/app1\nConnection: keep-alive\n
      "},{"location":"examples/static-ip/","title":"Static IPs","text":"

This example demonstrates how to assign a static IP to an Ingress through the Ingress-NGINX controller.

      "},{"location":"examples/static-ip/#prerequisites","title":"Prerequisites","text":"

      You need a TLS cert and a test HTTP service for this example. You will also need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, and that you have an ingress controller running in your cluster.

      "},{"location":"examples/static-ip/#acquiring-an-ip","title":"Acquiring an IP","text":"

Since instances of the ingress-nginx controller actually run on nodes in your cluster, by default nginx Ingresses will only get static IPs if your cloud provider supports static IP assignments to nodes. On GKE/GCE for example, even though nodes get static IPs, the IPs are not retained across upgrades.

      To acquire a static IP for the ingress-nginx-controller, simply put it behind a Service of Type=LoadBalancer.

      First, create a loadbalancer Service and wait for it to acquire an IP:

      $ kubectl create -f static-ip-svc.yaml\nservice \"ingress-nginx-lb\" created\n\n$ kubectl get svc ingress-nginx-lb\nNAME               CLUSTER-IP     EXTERNAL-IP       PORT(S)                      AGE\ningress-nginx-lb   10.0.138.113   104.154.109.191   80:31457/TCP,443:32240/TCP   15m\n

      Then, update the ingress controller so it adopts the static IP of the Service by passing the --publish-service flag (the example yaml used in the next step already has it set to \"ingress-nginx-lb\").

      $ kubectl create -f ingress-nginx-controller.yaml\ndeployment \"ingress-nginx-controller\" created\n
      "},{"location":"examples/static-ip/#assigning-the-ip-to-an-ingress","title":"Assigning the IP to an Ingress","text":"

      From here on every Ingress created with the ingress.class annotation set to nginx will get the IP allocated in the previous step.

      $ kubectl create -f ingress-nginx.yaml\ningress \"ingress-nginx\" created\n\n$ kubectl get ing ingress-nginx\nNAME            HOSTS     ADDRESS           PORTS     AGE\ningress-nginx   *         104.154.109.191   80, 443   13m\n\n$ curl 104.154.109.191 -kL\nCLIENT VALUES:\nclient_address=10.180.1.25\ncommand=GET\nreal path=/\nquery=nil\nrequest_version=1.1\nrequest_uri=http://104.154.109.191:8080/\n...\n
      "},{"location":"examples/static-ip/#retaining-the-ip","title":"Retaining the IP","text":"

      You can test retention by deleting the Ingress:

      $ kubectl delete ing ingress-nginx\ningress \"ingress-nginx\" deleted\n\n$ kubectl create -f ingress-nginx.yaml\ningress \"ingress-nginx\" created\n\n$ kubectl get ing ingress-nginx\nNAME            HOSTS     ADDRESS           PORTS     AGE\ningress-nginx   *         104.154.109.191   80, 443   13m\n

      Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all Ingresses, because all requests are proxied through the same set of nginx controllers.

      "},{"location":"examples/static-ip/#promote-ephemeral-to-static-ip","title":"Promote ephemeral to static IP","text":"

      To promote the allocated IP to static, you can update the Service manifest:

      $ kubectl patch svc ingress-nginx-lb -p '{\"spec\": {\"loadBalancerIP\": \"104.154.109.191\"}}'\n\"ingress-nginx-lb\" patched\n

... and promote the IP to static (promotion works differently across cloud providers; the provided example is for GKE/GCE):

      $ gcloud compute addresses create ingress-nginx-lb --addresses 104.154.109.191 --region us-central1\nCreated [https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/ingress-nginx-lb].\n---\naddress: 104.154.109.191\ncreationTimestamp: '2017-01-31T16:34:50.089-08:00'\ndescription: ''\nid: '5208037144487826373'\nkind: compute#address\nname: ingress-nginx-lb\nregion: us-central1\nselfLink: https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/ingress-nginx-lb\nstatus: IN_USE\nusers:\n- us-central1/forwardingRules/a09f6913ae80e11e6a8c542010af0000\n

      Now even if the Service is deleted, the IP will persist, so you can recreate the Service with spec.loadBalancerIP set to 104.154.109.191.

      "},{"location":"examples/tls-termination/","title":"TLS termination","text":"

      This example demonstrates how to terminate TLS through the Ingress-Nginx Controller.

      "},{"location":"examples/tls-termination/#prerequisites","title":"Prerequisites","text":"

      You need a TLS cert and a test HTTP service for this example.

      "},{"location":"examples/tls-termination/#deployment","title":"Deployment","text":"

Create an ingress.yaml file.

      apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: nginx-test\nspec:\n  tls:\n    - hosts:\n      - foo.bar.com\n      # This assumes tls-secret exists and the SSL\n      # certificate contains a CN for foo.bar.com\n      secretName: tls-secret\n  ingressClassName: nginx\n  rules:\n    - host: foo.bar.com\n      http:\n        paths:\n        - path: /\n          pathType: Prefix\n          backend:\n            # This assumes http-svc exists and routes to healthy endpoints\n            service:\n              name: http-svc\n              port:\n                number: 80\n

The following command instructs the controller to terminate traffic using the provided TLS cert, and forward unencrypted HTTP traffic to the test HTTP service.

      kubectl apply -f ingress.yaml\n
      "},{"location":"examples/tls-termination/#validation","title":"Validation","text":"

      You can confirm that the Ingress works.

      $ kubectl describe ing nginx-test\nName:           nginx-test\nNamespace:      default\nAddress:        104.198.183.6\nDefault backend:    default-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080)\nTLS:\n  tls-secret terminates\nRules:\n  Host  Path    Backends\n  ----  ----    --------\n  *\n            http-svc:80 (<none>)\nAnnotations:\nEvents:\n  FirstSeen LastSeen    Count   From                SubObjectPath   Type        Reason  Message\n  --------- --------    -----   ----                -------------   --------    ------  -------\n  7s        7s      1   {ingress-nginx-controller }         Normal      CREATE  default/nginx-test\n  7s        7s      1   {ingress-nginx-controller }         Normal      UPDATE  default/nginx-test\n  7s        7s      1   {ingress-nginx-controller }         Normal      CREATE  ip: 104.198.183.6\n  7s        7s      1   {ingress-nginx-controller }         Warning     MAPPING Ingress rule 'default/nginx-test' contains no path definition. Assuming /\n\n$ curl 104.198.183.6 -L\ncurl: (60) SSL certificate problem: self signed certificate\nMore details here: http://curl.haxx.se/docs/sslcerts.html\n\n$ curl 104.198.183.6 -Lk\nCLIENT VALUES:\nclient_address=10.240.0.4\ncommand=GET\nreal path=/\nquery=nil\nrequest_version=1.1\nrequest_uri=http://35.186.221.137:8080/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*/*\nconnection=Keep-Alive\nhost=35.186.221.137\nuser-agent=curl/7.46.0\nvia=1.1 google\nx-cloud-trace-context=f708ea7e369d4514fc90d51d7e27e91d/13322322294276298106\nx-forwarded-for=104.132.0.80, 35.186.221.137\nx-forwarded-proto=https\nBODY:\n
      "},{"location":"user-guide/basic-usage/","title":"Basic usage - host based routing","text":"

ingress-nginx can be used for many use cases, inside various cloud providers, and supports a lot of configurations. In this section you can find a common usage scenario where a single load balancer powered by ingress-nginx will route traffic to 2 different HTTP backend services based on the host name.

First of all, follow the instructions to install ingress-nginx. Then imagine that you need to expose 2 HTTP services already installed, myServiceA and myServiceB, both configured as type: ClusterIP.

      Let's say that you want to expose the first at myServiceA.foo.org and the second at myServiceB.foo.org.

      If the cluster version is < 1.19, you can create two ingress resources like this:

      apiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n  name: ingress-myservicea\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: myservicea.foo.org\n    http:\n      paths:\n      - path: /\n        backend:\n          serviceName: myservicea\n          servicePort: 80\n---\napiVersion: networking.k8s.io/v1beta1\nkind: Ingress\nmetadata:\n  name: ingress-myserviceb\n  annotations:\n    # use the shared ingress-nginx\n    kubernetes.io/ingress.class: \"nginx\"\nspec:\n  rules:\n  - host: myserviceb.foo.org\n    http:\n      paths:\n      - path: /\n        backend:\n          serviceName: myserviceb\n          servicePort: 80\n

If the cluster uses Kubernetes version >= 1.19.x, then it's suggested to create 2 ingress resources, using the yaml examples shown below. These examples conform to the networking.k8s.io/v1 API.

      apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-myservicea\nspec:\n  rules:\n  - host: myservicea.foo.org\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: myservicea\n            port:\n              number: 80\n  ingressClassName: nginx\n---\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-myserviceb\nspec:\n  rules:\n  - host: myserviceb.foo.org\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: myserviceb\n            port:\n              number: 80\n  ingressClassName: nginx\n

When you apply this yaml, 2 ingress resources will be created and managed by the ingress-nginx instance. NGINX is configured to automatically discover all ingresses with the kubernetes.io/ingress.class: \"nginx\" annotation or where ingressClassName: nginx is present. Please note that the ingress resource should be placed inside the same namespace as the backend resource.

On many cloud providers ingress-nginx will also create the corresponding Load Balancer resource. All you have to do is get the external IP and add a DNS A record inside your DNS provider that points myservicea.foo.org and myserviceb.foo.org to the nginx external IP. Get the external IP by running:

      kubectl get services -n ingress-nginx\n

To test inside minikube, refer to this documentation: Set up Ingress on Minikube with the NGINX Ingress Controller

      "},{"location":"user-guide/cli-arguments/","title":"Command line arguments","text":"

      The following command line arguments are accepted by the Ingress controller executable.

They are set in the container spec of the ingress-nginx-controller Deployment manifest.
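
A minimal sketch of where these flags live in the Deployment, with illustrative values; only the args list matters here:

spec:\n  containers:\n  - name: controller\n    image: registry.k8s.io/ingress-nginx/controller:v1.11.2\n    args:\n    - /nginx-ingress-controller\n    - --ingress-class=nginx\n    - --configmap=ingress-nginx/ingress-nginx-controller\n    - --election-id=ingress-nginx-leader\n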

| Argument | Description |
| --- | --- |
| --annotations-prefix | Prefix of the Ingress annotations specific to the NGINX controller. (default "nginx.ingress.kubernetes.io") |
| --apiserver-host | Address of the Kubernetes API server. Takes the form "protocol://address:port". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted. |
| --bucket-factor | Bucket factor for native histograms. Value must be > 1 for enabling native histograms. (default 0) |
| --certificate-authority | Path to a cert file for the certificate authority. This certificate is used only when the flag --apiserver-host is specified. |
| --configmap | Name of the ConfigMap containing custom global configurations for the controller. |
| --controller-class | Ingress Class Controller value this Ingress satisfies. The class of an Ingress object is set using the field IngressClassName in Kubernetes clusters version v1.19.0 or higher. The .spec.controller value of the IngressClass referenced in an Ingress Object should be the same value specified here to make this object be watched. |
| --deep-inspect | Enables ingress object security deep inspector. (default true) |
| --default-backend-service | Service used to serve HTTP requests not matching any known server name (catch-all). Takes the form "namespace/name". The controller configures NGINX to forward requests to the first port of this Service. |
| --default-server-port | Port to use for exposing the default server (catch-all). (default 8181) |
| --default-ssl-certificate | Secret containing a SSL certificate to be used by the default HTTPS server (catch-all). Takes the form "namespace/name". |
| --enable-annotation-validation | If true, will enable the annotation validation feature. (default true) |
| --disable-catch-all | Disable support for catch-all Ingresses. (default false) |
| --disable-full-test | Disable full test of all merged ingresses at the admission stage and tests the template of the ingress being created or updated (full test of all ingresses is enabled by default). |
| --disable-svc-external-name | Disable support for Services of type ExternalName. (default false) |
| --disable-sync-events | Disables the creation of 'Sync' Event resources, but still logs them |
| --dynamic-configuration-retries | Number of times to retry failed dynamic configuration before failing to sync an ingress. (default 15) |
| --election-id | Election id to use for Ingress status updates. (default "ingress-controller-leader") |
| --election-ttl | Duration a leader election is valid before it's getting re-elected, e.g. 15s, 10m or 1h. (default 30s) |
| --enable-metrics | Enables the collection of NGINX metrics. (default true) |
| --enable-ssl-chain-completion | Autocomplete SSL certificate chains with missing intermediate CA certificates. Certificates uploaded to Kubernetes must have the "Authority Information Access" X.509 v3 extension for this to succeed. (default false) |
| --enable-ssl-passthrough | Enable SSL Passthrough. (default false) |
| --disable-leader-election | Disable Leader Election on Nginx Controller. (default false) |
| --enable-topology-aware-routing | Enable topology aware routing feature; needs the service object annotation service.kubernetes.io/topology-mode set to auto. (default false) |
| --exclude-socket-metrics | Set of socket request metrics to exclude which won't be exported nor calculated. The possible socket request metrics to exclude are documented in the monitoring guide, e.g. 'nginx_ingress_controller_request_duration_seconds,nginx_ingress_controller_response_size' |
| --health-check-path | URL path of the health check endpoint. Configured inside the NGINX status server. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default "/healthz") |
| --health-check-timeout | Time limit, in seconds, for a probe to health-check-path to succeed. (default 10) |
| --healthz-port | Port to use for the healthz endpoint. (default 10254) |
| --healthz-host | Address to bind the healthz endpoint. |
| --http-port | Port to use for servicing HTTP traffic. (default 80) |
| --https-port | Port to use for servicing HTTPS traffic. (default 443) |
| --ingress-class | Name of the ingress class this controller satisfies. The class of an Ingress object is set using the field IngressClassName in Kubernetes clusters version v1.18.0 or higher or the annotation "kubernetes.io/ingress.class" (deprecated). If this parameter is not set, or set to the default value of "nginx", it will handle ingresses with either an empty or "nginx" class name. |
| --ingress-class-by-name | Define if Ingress Controller should watch for Ingress Class by Name together with Controller Class. (default false) |
| --internal-logger-address | Address to be used when binding internal syslogger. (default 127.0.0.1:11514) |
| --kubeconfig | Path to a kubeconfig file containing authorization and API server information. |
| --length-buckets | Set of buckets which will be used for prometheus histogram metrics such as RequestLength, ResponseLength. (default [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]) |
| --max-buckets | Maximum number of buckets for native histograms. (default 100) |
| --maxmind-edition-ids | Maxmind edition ids to download GeoLite2 Databases. (default "GeoLite2-City,GeoLite2-ASN") |
| --maxmind-retries-timeout | Maxmind downloading delay between 1st and 2nd attempt; 0s means do not retry to download if something went wrong. (default 0s) |
| --maxmind-retries-count | Number of attempts to download the GeoIP DB. (default 1) |
| --maxmind-license-key | Maxmind license key to download GeoLite2 Databases. https://blog.maxmind.com/2019/12/significant-changes-to-accessing-and-using-geolite2-databases/ |
| --maxmind-mirror | Maxmind mirror url (example: http://geoip.local/databases). |
| --metrics-per-host | Export metrics per-host. (default true) |
| --metrics-per-undefined-host | Export metrics per-host even if the host is not defined in an ingress. Requires --metrics-per-host to be set to true. (default false) |
| --monitor-max-batch-size | Max batch size of NGINX metrics. (default 10000) |
| --post-shutdown-grace-period | Additional delay in seconds before the controller container exits. (default 10) |
| --profiler-port | Port used to expose the ingress controller Go profiler when it is enabled. (default 10245) |
| --profiling | Enable profiling via web interface host:port/debug/pprof/ . (default true) |
| --publish-service | Service fronting the Ingress controller. Takes the form "namespace/name". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies. |
| --publish-status-address | Customized address (or addresses, separated by comma) to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter. |
| --report-node-internal-ip-address | Set the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter. (default false) |
| --report-status-classes | If true, report status classes in metrics (2xx, 3xx, 4xx and 5xx) instead of full status codes. (default false) |
| --ssl-passthrough-proxy-port | Port to use internally for SSL Passthrough. (default 442) |
| --status-port | Port to use for the lua HTTP endpoint configuration. (default 10246) |
| --status-update-interval | Time interval in seconds in which the status should check if an update is required. (default 60) |
| --stream-port | Port to use for the lua TCP/UDP endpoint configuration. (default 10247) |
| --sync-period | Period at which the controller forces the repopulation of its local object stores. Disabled by default. |
| --sync-rate-limit | Define the sync frequency upper limit. (default 0.3) |
| --tcp-services-configmap | Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form "namespace/name:port", where "port" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic. |
| --time-buckets | Set of buckets which will be used for prometheus histogram metrics such as RequestTime, ResponseTime. (default [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]) |
| --udp-services-configmap | Name of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form "namespace/name:port", where "port" can either be a port name or number. |
| --update-status | Update the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true) |
| --update-status-on-shutdown | Update the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. (default true) |
| --shutdown-grace-period | Seconds to wait after receiving the shutdown signal, before stopping the nginx process. (default 0) |
| --size-buckets | Set of buckets which will be used for prometheus histogram metrics such as BytesSent. (default [10, 100, 1000, 10000, 100000, 1e+06, 1e+07]) |
| -v, --v | Level number for the log level verbosity |
| --validating-webhook | The address to start an admission controller on to validate incoming ingresses. Takes the form ":port". If not provided, no admission controller is started. |
| --validating-webhook-certificate | The path of the validating webhook certificate PEM. |
| --validating-webhook-key | The path of the validating webhook key PEM. |
| --version | Show release information about the Ingress-Nginx Controller and exit. |
| --watch-ingress-without-class | Define if Ingress Controller should also watch for Ingresses without an IngressClass or the annotation specified. (default false) |
| --watch-namespace | Namespace the controller watches for updates to Kubernetes objects. This includes Ingresses, Services and all configuration resources. All namespaces are watched if this parameter is left empty. |
| --watch-namespace-selector | The controller will watch namespaces whose labels match the given selector. This flag only takes effect when --watch-namespace is empty. |
"},{"location":"user-guide/custom-errors/","title":"Custom errors","text":"

      When the custom-http-errors option is enabled, the Ingress controller configures NGINX so that it passes several HTTP headers down to its default-backend in case of error:

| Header | Value |
| --- | --- |
| X-Code | HTTP status code returned by the request |
| X-Format | Value of the Accept header sent by the client |
| X-Original-URI | URI that caused the error |
| X-Namespace | Namespace where the backend Service is located |
| X-Ingress-Name | Name of the Ingress where the backend is defined |
| X-Service-Name | Name of the Service backing the backend |
| X-Service-Port | Port number of the Service backing the backend |
| X-Request-ID | Unique ID that identifies the request - same as for backend service |

A custom error backend can use this information to return the best possible representation of an error page. For example, if the value of the Accept header sent by the client was application/json, a carefully crafted backend could decide to return the error payload as a JSON document instead of HTML.
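
The custom-http-errors option itself is enabled in the controller ConfigMap. A minimal sketch, assuming the default ConfigMap name and namespace; the value is the comma-separated list of status codes to hand off to the default backend:

apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: ingress-nginx-controller\n  namespace: ingress-nginx\ndata:\n  custom-http-errors: \"404,500,502,503\"\n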

      Important

      The custom backend is expected to return the correct HTTP status code instead of 200. NGINX does not change the response from the custom default backend.

An example of such a custom backend is available inside the source repository at images/custom-error-pages.

      See also the Custom errors example.

      "},{"location":"user-guide/default-backend/","title":"Default backend","text":"

      The default backend is a service which handles all URL paths and hosts the Ingress-NGINX controller doesn't understand (i.e., all the requests that are not mapped with an Ingress).

      Basically a default backend exposes two URLs:

      • /healthz that returns 200
      • / that returns 404

      Example

      The sub-directory /images/custom-error-pages provides an additional service for the purpose of customizing the error pages served via the default backend.
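
A minimal sketch for checking both endpoints through the controller, where $CONTROLLER_IP is a placeholder for the controller Service address and it is assumed no Ingress matches the request:

$ curl -is http://$CONTROLLER_IP/healthz | head -1\nHTTP/1.1 200 OK\n$ curl -is http://$CONTROLLER_IP/does-not-exist | head -1\nHTTP/1.1 404 Not Found\n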

      "},{"location":"user-guide/exposing-tcp-udp-services/","title":"Exposing TCP and UDP services","text":"

While the Kubernetes Ingress resource only officially supports routing external HTTP(S) traffic to services, ingress-nginx can be configured to receive external TCP/UDP traffic from non-HTTP protocols and route it to internal services using TCP/UDP port mappings that are specified within a ConfigMap.

To support this, the --tcp-services-configmap and --udp-services-configmap flags can be used to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY]

It is also possible to use a number or the name of the port. The last two fields are optional. By adding PROXY in either or both of the last two fields, we can use proxy protocol decoding (listen) and/or encoding (proxy_pass) in a TCP service. The first PROXY controls the decoding of the proxy protocol and the second PROXY controls the encoding using the proxy protocol. This allows an incoming connection to be decoded, or an outgoing connection to be encoded. It is also possible to arbitrate between two different proxies by turning on both decoding and encoding on a TCP service.
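
For example, a minimal sketch of a mapping that both decodes and encodes the proxy protocol, reusing the example-go service from the next example:

data:\n  9000: \"default/example-go:8080:PROXY:PROXY\"\n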

The next example shows how to expose the service example-go running in the namespace default on port 8080, using the external port 9000:

      apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: tcp-services\n  namespace: ingress-nginx\ndata:\n  9000: \"default/example-go:8080\"\n

Since 1.9.13, NGINX provides UDP load balancing. The next example shows how to expose the service kube-dns running in the namespace kube-system on port 53, using the external port 53:

      apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: udp-services\n  namespace: ingress-nginx\ndata:\n  53: \"kube-system/kube-dns:53\"\n

      If TCP/UDP proxy support is used, then those ports need to be exposed in the Service defined for the Ingress.

      apiVersion: v1\nkind: Service\nmetadata:\n  name: ingress-nginx\n  namespace: ingress-nginx\n  labels:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\nspec:\n  type: LoadBalancer\n  ports:\n    - name: http\n      port: 80\n      targetPort: 80\n      protocol: TCP\n    - name: https\n      port: 443\n      targetPort: 443\n      protocol: TCP\n    - name: proxied-tcp-9000\n      port: 9000\n      targetPort: 9000\n      protocol: TCP\n  selector:\n    app.kubernetes.io/name: ingress-nginx\n    app.kubernetes.io/part-of: ingress-nginx\n
Then, the ConfigMap should be added to the ingress controller's deployment args:
       args:\n    - /nginx-ingress-controller\n    - --tcp-services-configmap=ingress-nginx/tcp-services\n

      "},{"location":"user-guide/external-articles/","title":"External Articles","text":"
      • Pain(less) NGINX Ingress
      • Accessing Kubernetes Pods from Outside of the Cluster
      • Kubernetes - Redirect HTTP to HTTPS with ELB and the Ingress-Nginx Controller
      • Configure Nginx Ingress Controller for TLS termination on Kubernetes on Azure
      • Secure your Nginx Ingress controller behind Google Cloud Armor or Identity-Aware Proxy (IAP)
      "},{"location":"user-guide/fcgi-services/","title":"Exposing FastCGI Servers","text":"

      FastCGI is a binary protocol for interfacing interactive programs with a web server. [...] (It's) aim is to reduce the overhead related to interfacing between web server and CGI programs, allowing a server to handle more web page requests per unit of time.

— Wikipedia

      The ingress-nginx ingress controller can be used to directly expose FastCGI servers. Enabling FastCGI in your Ingress only requires setting the backend-protocol annotation to FCGI, and with a couple more annotations you can customize the way ingress-nginx handles the communication with your FastCGI server.

For most practical use cases, PHP applications are a good example. PHP is not HTML, so a FastCGI server like php-fpm processes an index.php script to produce the response to a request. See a working example below.

This post in a FastCGI feature issue describes a test for the FastCGI feature. The same test is described below.

      "},{"location":"user-guide/fcgi-services/#example-objects-to-expose-a-fastcgi-server-pod","title":"Example Objects to expose a FastCGI server pod","text":""},{"location":"user-guide/fcgi-services/#the-fasctcgi-server-pod","title":"The FasctCGI server pod","text":"

      The Pod object example below exposes port 9000, which is the conventional FastCGI port.

      apiVersion: v1\nkind: Pod\nmetadata:\n  name: example-app\n  labels:\n    app: example-app\nspec:\n  containers:\n  - name: example-app\n    image: php:fpm-alpine\n    ports:\n    - containerPort: 9000\n      name: fastcgi\n
• For this example to work, an HTML response should be received from the FastCGI server being exposed
• An HTTP request to the FastCGI server pod should be sent
• The response should be generated by a PHP script, as that is what we are demonstrating here

The image we are using here, php:fpm-alpine, does not ship with a ready-to-use PHP script inside it. So we need to provide the image with a simple PHP script for this example to work.

• Use kubectl exec to get into the example-app pod (the steps are sketched as shell commands after this list)
• You will land at the path /var/www/html
• Create a simple PHP script there, called index.php
• Make the index.php file look like this
      <!DOCTYPE html>\n<html>\n    <head>\n        <title>PHP Test</title>\n    </head>\n    <body>\n        <?php echo '<p>FastCGI Test Worked!</p>'; ?>\n    </body>\n</html>\n
      • Save and exit from the shell in the pod
      • If you delete the pod, then you will have to recreate the file as this method is not persistent
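
A minimal sketch of these steps, assuming the pod runs in the default namespace:

kubectl exec -it example-app -- /bin/sh\n# now inside the pod, in /var/www/html\ncat > index.php <<'EOF'\n<!DOCTYPE html>\n<html>\n    <head>\n        <title>PHP Test</title>\n    </head>\n    <body>\n        <?php echo '<p>FastCGI Test Worked!</p>'; ?>\n    </body>\n</html>\nEOF\nexit\n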
      "},{"location":"user-guide/fcgi-services/#the-fastcgi-service","title":"The FastCGI service","text":"

      The Service object example below matches port 9000 from the Pod object above.

      apiVersion: v1\nkind: Service\nmetadata:\n  name: example-service\nspec:\n  selector:\n    app: example-app\n  ports:\n  - port: 9000\n    targetPort: 9000\n    name: fastcgi\n
      "},{"location":"user-guide/fcgi-services/#the-configmap-object-and-the-ingress-object","title":"The configMap object and the ingress object","text":"

      The Ingress and ConfigMap objects below demonstrate the supported FastCGI specific annotations.

      Important

NGINX actually has 50 FastCGI directives. Not all of the NGINX directives have been exposed in the ingress yet.

      "},{"location":"user-guide/fcgi-services/#the-configmap-object","title":"The ConfigMap object","text":"

This ConfigMap object is required to set the parameters of the FastCGI directives.

      Attention

      • The ConfigMap must be created before creating the ingress object
      • The Ingress Controller needs to find the configMap when the Ingress object with the FastCGI annotations is created
      • So create the configMap before the ingress
      • If the configMap is created after the ingress is created, then you will need to restart the Ingress Controller pods.
      apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: example-cm\ndata:\n  SCRIPT_FILENAME: \"/var/www/html/index.php\"\n
      "},{"location":"user-guide/fcgi-services/#the-ingress-object","title":"The ingress object","text":"
      • Do not create the ingress shown below until you have created the configMap seen above.
      • You can see that this ingress matches the service example-service, and the port named fastcgi from above.
      apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/backend-protocol: \"FCGI\"\n    nginx.ingress.kubernetes.io/fastcgi-index: \"index.php\"\n    nginx.ingress.kubernetes.io/fastcgi-params-configmap: \"example-cm\"\n  name: example-app\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: app.example.com\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: example-service\n            port:\n              name: fastcgi\n
      "},{"location":"user-guide/fcgi-services/#send-a-request-to-the-exposed-fastcgi-server","title":"Send a request to the exposed FastCGI server","text":"

You will have to look at the external IP of the ingress, or send the HTTP request to the ClusterIP address of the ingress-nginx controller pod.

      % curl 172.19.0.2 -H \"Host: app.example.com\" -vik\n*   Trying 172.19.0.2:80...\n* Connected to 172.19.0.2 (172.19.0.2) port 80\n> GET / HTTP/1.1\n> Host: app.example.com\n> User-Agent: curl/8.6.0\n> Accept: */*\n> \n< HTTP/1.1 200 OK\nHTTP/1.1 200 OK\n< Date: Wed, 12 Jun 2024 07:11:59 GMT\nDate: Wed, 12 Jun 2024 07:11:59 GMT\n< Content-Type: text/html; charset=UTF-8\nContent-Type: text/html; charset=UTF-8\n< Transfer-Encoding: chunked\nTransfer-Encoding: chunked\n< Connection: keep-alive\nConnection: keep-alive\n< X-Powered-By: PHP/8.3.8\nX-Powered-By: PHP/8.3.8\n\n< \n<!DOCTYPE html>\n<html>\n    <head>\n        <title>PHP Test</title>\n    </head>\n    <body>\n        <p>FastCGI Test Worked</p>    </body>\n</html>\n
      "},{"location":"user-guide/fcgi-services/#fastcgi-ingress-annotations","title":"FastCGI Ingress Annotations","text":"

      To enable FastCGI, the nginx.ingress.kubernetes.io/backend-protocol annotation needs to be set to FCGI, which overrides the default HTTP value.

      nginx.ingress.kubernetes.io/backend-protocol: \"FCGI\"

This enables the FastCGI mode for all paths defined in the Ingress object.

      "},{"location":"user-guide/fcgi-services/#the-nginxingresskubernetesiofastcgi-index-annotation","title":"The nginx.ingress.kubernetes.io/fastcgi-index Annotation","text":"

      To specify an index file, the fastcgi-index annotation value can optionally be set. In the example below, the value is set to index.php. This annotation corresponds to the NGINX fastcgi_index directive.

      nginx.ingress.kubernetes.io/fastcgi-index: \"index.php\"

      "},{"location":"user-guide/fcgi-services/#the-nginxingresskubernetesiofastcgi-params-configmap-annotation","title":"The nginx.ingress.kubernetes.io/fastcgi-params-configmap Annotation","text":"

      To specify NGINX fastcgi_param directives, the fastcgi-params-configmap annotation is used, which in turn must lead to a ConfigMap object containing the NGINX fastcgi_param directives as key/values.

      nginx.ingress.kubernetes.io/fastcgi-params-configmap: \"example-configmap\"

And the ConfigMap object specifying the SCRIPT_FILENAME and HTTP_PROXY NGINX fastcgi_param directives will look like the following:

      apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: example-configmap\ndata:\n  SCRIPT_FILENAME: \"/example/index.php\"\n  HTTP_PROXY: \"\"\n

      Using the namespace/ prefix is also supported, for example:

      nginx.ingress.kubernetes.io/fastcgi-params-configmap: \"example-namespace/example-configmap\"

      "},{"location":"user-guide/ingress-path-matching/","title":"Ingress Path Matching","text":""},{"location":"user-guide/ingress-path-matching/#regular-expression-support","title":"Regular Expression Support","text":"

      Important

Regular expressions are not supported in the spec.rules.host field. The wildcard character '*' must appear by itself as the first DNS label and matches only a single label. You cannot have a wildcard label by itself (e.g. Host == \"*\").

      Note

      Please see the FAQ for Validation Of path

      The ingress controller supports case insensitive regular expressions in the spec.rules.http.paths.path field. This can be enabled by setting the nginx.ingress.kubernetes.io/use-regex annotation to true (the default is false).

      Hint

Kubernetes only accepts expressions that comply with the RE2 engine syntax. It is possible that valid expressions accepted by NGINX cannot be used with ingress-nginx, because the PCRE library (used in NGINX) supports a wider syntax than RE2. See the RE2 Syntax documentation for differences.

      See the description of the use-regex annotation for more details.

      apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: test-ingress\n  annotations:\n    nginx.ingress.kubernetes.io/use-regex: \"true\"\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: test.com\n    http:\n      paths:\n      - path: /foo/.*\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: test\n            port:\n              number: 80\n

      The preceding ingress definition would translate to the following location block within the NGINX configuration for the test.com server:

      location ~* \"^/foo/.*\" {\n  ...\n}\n
      "},{"location":"user-guide/ingress-path-matching/#path-priority","title":"Path Priority","text":"

      In NGINX, regular expressions follow a first match policy. In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks.

      Please read the warning before using regular expressions in your ingress definitions.

      "},{"location":"user-guide/ingress-path-matching/#example","title":"Example","text":"

      Let the following two ingress definitions be created:

      apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: test-ingress-1\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: test.com\n    http:\n      paths:\n      - path: /foo/bar\n        pathType: Prefix\n        backend:\n          service:\n            name: service1\n            port:\n              number: 80\n      - path: /foo/bar/\n        pathType: Prefix\n        backend:\n          service:\n            name: service2\n            port:\n              number: 80\n
      apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: test-ingress-2\n  annotations:\n    nginx.ingress.kubernetes.io/rewrite-target: /$1\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: test.com\n    http:\n      paths:\n      - path: /foo/bar/(.+)\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: service3\n            port: \n              number: 80\n

      The ingress controller would define the following location blocks, in order of descending length, within the NGINX template for the test.com server:

      location ~* ^/foo/bar/.+ {\n  ...\n}\n\nlocation ~* \"^/foo/bar/\" {\n  ...\n}\n\nlocation ~* \"^/foo/bar\" {\n  ...\n}\n

The following request URIs would match the corresponding location blocks:

      • test.com/foo/bar/1 matches ~* ^/foo/bar/.+ and will go to service 3.
      • test.com/foo/bar/ matches ~* ^/foo/bar/ and will go to service 2.
      • test.com/foo/bar matches ~* ^/foo/bar and will go to service 1.

      IMPORTANT NOTES:

      • If the use-regex OR rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.
      "},{"location":"user-guide/ingress-path-matching/#warning","title":"Warning","text":"

The following example describes a case that may cause unwanted path matching behavior.

This case is expected, and a result of NGINX's first-match policy for paths that use the regular expression location modifier. For more information about how a path is chosen, please read the following article: "Understanding Nginx Server and Location Block Selection Algorithms".

      "},{"location":"user-guide/ingress-path-matching/#example_1","title":"Example","text":"

      Let the following ingress be defined:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress-3
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: test.com
    http:
      paths:
      - path: /foo/bar/bar
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
      - path: /foo/bar/[A-Z0-9]{3}
        pathType: ImplementationSpecific
        backend:
          service:
            name: test
            port:
              number: 80

      The ingress controller would define the following location blocks (in this order) within the NGINX template for the test.com server:

location ~* "^/foo/bar/[A-Z0-9]{3}" {
  ...
}

location ~* "^/foo/bar/bar" {
  ...
}

      A request to test.com/foo/bar/bar would match the ^/foo/bar/[A-Z0-9]{3} location block instead of the longest EXACT matching path.

      "},{"location":"user-guide/k8s-122-migration/","title":"FAQ - Migration to Kubernetes 1.22 and apiVersion networking.k8s.io/v1","text":"

      If you are using Ingress objects in your cluster (running Kubernetes older than v1.22), and you plan to upgrade to Kubernetes v1.22, this page is relevant to you.

      • Please read this official blog on deprecated Ingress API versions
      • Please read this official documentation on the IngressClass object
      "},{"location":"user-guide/k8s-122-migration/#what-is-an-ingressclass-and-why-is-it-important-for-users-of-ingress-nginx-controller-now","title":"What is an IngressClass and why is it important for users of ingress-nginx controller now?","text":"

IngressClass is a Kubernetes resource. See the description below. It's important because until now, a default install of the ingress-nginx controller did not require an IngressClass object. From version 1.0.0 of the ingress-nginx controller, an IngressClass object is required.

      On clusters with more than one instance of the ingress-nginx controller, all instances of the controllers must be aware of which Ingress objects they serve. The ingressClassName field of an Ingress is the way to let the controller know about that.

kubectl explain ingressclass

KIND:     IngressClass
VERSION:  networking.k8s.io/v1
DESCRIPTION:
     IngressClass represents the class of the Ingress, referenced by the Ingress
     Spec. The `ingressclass.kubernetes.io/is-default-class` annotation can be
     used to indicate that an IngressClass should be considered default. When a
     single IngressClass resource has this annotation set to true, new Ingress
     resources without a class specified will be assigned this default class.
FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
   spec <Object>
     Spec is the desired state of the IngressClass. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

What has caused this change in behavior?

      There are 2 primary reasons.

      "},{"location":"user-guide/k8s-122-migration/#reason-1","title":"Reason 1","text":"

Until K8s version 1.21, it was possible to create an Ingress resource using deprecated versions of the Ingress API, such as:

• extensions/v1beta1
• networking.k8s.io/v1beta1

You would get a message about deprecation, but the Ingress resource would get created.

From K8s version 1.22 onwards, you can only access the Ingress API via the stable networking.k8s.io/v1 API. The reason is explained in the official blog on deprecated Ingress API versions.

      "},{"location":"user-guide/k8s-122-migration/#reason-2","title":"Reason #2","text":"

If you are already using the ingress-nginx controller and then upgrade to Kubernetes 1.22, there are several scenarios where your existing Ingress objects will not work as you expect.

      Read this FAQ to check which scenario matches your use case.

      "},{"location":"user-guide/k8s-122-migration/#what-is-the-ingressclassname-field","title":"What is the ingressClassName field?","text":"

      ingressClassName is a field in the spec of an Ingress object.

kubectl explain ingress.spec.ingressClassName

KIND:     Ingress
VERSION:  networking.k8s.io/v1
FIELD:    ingressClassName <string>
DESCRIPTION:
     IngressClassName is the name of the IngressClass cluster resource. The
     associated IngressClass defines which controller will implement the
     resource. This replaces the deprecated `kubernetes.io/ingress.class`
     annotation. For backwards compatibility, when that annotation is set, it
     must be given precedence over this field. The controller may emit a warning
     if the field and annotation have different values. Implementations of this
     API should ignore Ingresses without a class specified. An IngressClass
     resource may be marked as default, which can be used to set a default value
     for this field. For more information, refer to the IngressClass
     documentation.

      The .spec.ingressClassName behavior has precedence over the deprecated kubernetes.io/ingress.class annotation.

      "},{"location":"user-guide/k8s-122-migration/#i-have-only-one-ingress-controller-in-my-cluster-what-should-i-do","title":"I have only one ingress controller in my cluster. What should I do?","text":"

If a single instance of the ingress-nginx controller is the sole Ingress controller running in your cluster, you should set the annotation ingressclass.kubernetes.io/is-default-class to "true" on your IngressClass, so that any new Ingress objects will have this one as their default IngressClass.

      When using Helm, you can enable this annotation by setting .controller.ingressClassResource.default: true in your Helm chart installation's values file.

      If you have any old Ingress objects remaining without an IngressClass set, you can do one or more of the following to make the ingress-nginx controller aware of the old objects:

• You can manually set the .spec.ingressClassName field in the manifest of your own Ingress resources.
• You can re-create them after setting the ingressclass.kubernetes.io/is-default-class annotation to true on the IngressClass.
• Alternatively you can make the ingress-nginx controller watch Ingress objects without the ingressClassName field set by starting your ingress-nginx with the flag --watch-ingress-without-class=true. When using Helm, you can configure your Helm chart installation's values file with .controller.watchIngressWithoutClass: true.

      We recommend that you create the IngressClass as shown below:

---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
  name: nginx
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: k8s.io/ingress-nginx

and set spec.ingressClassName: nginx in your Ingress objects.

      "},{"location":"user-guide/k8s-122-migration/#i-have-many-ingress-objects-in-my-cluster-what-should-i-do","title":"I have many ingress objects in my cluster. What should I do?","text":"

If you have a lot of Ingress objects without an ingressClass configured, you can run the ingress controller with the flag --watch-ingress-without-class=true.

      "},{"location":"user-guide/k8s-122-migration/#what-is-the-flag-watch-ingress-without-class","title":"What is the flag --watch-ingress-without-class?","text":"

      It's a flag that is passed, as an argument, to the nginx-ingress-controller executable. In the configuration, it looks like this:

# ...
args:
  - /nginx-ingress-controller
  - --watch-ingress-without-class=true
  - --controller-class=k8s.io/ingress-nginx
  # ...
# ...

I have more than one controller in my cluster, and I'm already using the annotation

No problem. This should still keep working, but we highly recommend you test! Even though kubernetes.io/ingress.class is deprecated, the ingress-nginx controller still understands that annotation. If you want to follow good practice, you should consider migrating to IngressClass and .spec.ingressClassName.

      "},{"location":"user-guide/k8s-122-migration/#i-have-more-than-one-controller-running-in-my-cluster-and-i-want-to-use-the-new-api","title":"I have more than one controller running in my cluster, and I want to use the new API","text":"

      In this scenario, you need to create multiple IngressClasses (see the example above).

      Be aware that IngressClass works in a very specific way: you will need to change the .spec.controller value in your IngressClass and configure the controller to expect the exact same value.

      Let's see an example, supposing that you have three IngressClasses:

      • IngressClass ingress-nginx-one, with .spec.controller equal to example.com/ingress-nginx1
      • IngressClass ingress-nginx-two, with .spec.controller equal to example.com/ingress-nginx2
      • IngressClass ingress-nginx-three, with .spec.controller equal to example.com/ingress-nginx1

      For private use, you can also use a controller name that doesn't contain a /, e.g. ingress-nginx1.

      When deploying your ingress controllers, you will have to change the --controller-class field as follows:

      • Ingress-Nginx A, configured to use controller class name example.com/ingress-nginx1
      • Ingress-Nginx B, configured to use controller class name example.com/ingress-nginx2

      When you create an Ingress object with its ingressClassName set to ingress-nginx-two, only controllers looking for the example.com/ingress-nginx2 controller class pay attention to the new object.

      Given that Ingress-Nginx B is set up that way, it will serve that object, whereas Ingress-Nginx A ignores the new Ingress.

      Bear in mind that if you start Ingress-Nginx B with the command line argument --watch-ingress-without-class=true, it will serve:

      1. Ingresses without any ingressClassName set
      2. Ingresses where the deprecated annotation (kubernetes.io/ingress.class) matches the value set in the command line argument --ingress-class
      3. Ingresses that refer to any IngressClass that has the same spec.controller as configured in --controller-class

Running Ingress-Nginx B with the command line argument --watch-ingress-without-class=true and Ingress-Nginx A with --watch-ingress-without-class=false is a supported configuration. If you have two ingress-nginx controllers for the same cluster, both running with --watch-ingress-without-class=true, there is likely to be a conflict.
      "},{"location":"user-guide/k8s-122-migration/#why-am-i-seeing-ingress-class-annotation-is-not-equal-to-the-expected-by-ingress-controller-in-my-controller-logs","title":"Why am I seeing \"ingress class annotation is not equal to the expected by Ingress Controller\" in my controller logs?","text":"

It is highly likely that you will also see the name of the Ingress resource in the same error message. This error message has been observed when the deprecated annotation (kubernetes.io/ingress.class) is used in an Ingress resource manifest. Use the .spec.ingressClassName field of the Ingress resource instead, to specify the name of the IngressClass of the Ingress you are defining.

      "},{"location":"user-guide/miscellaneous/","title":"Miscellaneous","text":""},{"location":"user-guide/miscellaneous/#source-ip-address","title":"Source IP address","text":"

By default NGINX uses the content of the header X-Forwarded-For as the source of truth to get information about the client IP address. This works without issues at L7 if we configure the setting proxy-real-ip-cidr with the IP/network address of the trusted external load balancer.

      If the ingress controller is running in AWS we need to use the VPC IPv4 CIDR.

Another option is to enable proxy protocol by setting use-proxy-protocol: "true".

In this mode NGINX does not use the content of the header to get the source IP address of the connection.
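As a rough sketch of where both settings live, here is a ConfigMap fragment; the ConfigMap name is a placeholder (it must match the one your controller reads), and the CIDR is illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller  # placeholder: use the ConfigMap your controller reads
  namespace: ingress-nginx
data:
  # Trust X-Forwarded-For only when it comes from this network (e.g. a VPC CIDR)
  proxy-real-ip-cidr: "10.0.0.0/16"
  # Alternatively, take the client address from the PROXY protocol header instead
  use-proxy-protocol: "true"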

      "},{"location":"user-guide/miscellaneous/#path-types","title":"Path types","text":"

Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. By default the NGINX path type is Prefix, so as not to break existing definitions.

      "},{"location":"user-guide/miscellaneous/#proxy-protocol","title":"Proxy Protocol","text":"

If you are using an L4 proxy to forward the traffic to the Ingress NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you could use the PROXY protocol for forwarding traffic, which sends the connection details before forwarding the actual TCP connection itself.

Amongst others, AWS ELBs and HAProxy support the PROXY protocol.
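For illustration only, a trimmed sketch of how this might be wired up with an AWS Classic ELB: the Service annotation service.beta.kubernetes.io/aws-load-balancer-proxy-protocol asks the cloud provider to enable PROXY protocol on the ELB, paired with use-proxy-protocol: "true" in the controller ConfigMap (names and fields are illustrative, not a complete manifest):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller  # placeholder name
  namespace: ingress-nginx
  annotations:
    # Enable PROXY protocol on all ELB listener ports
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https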

      "},{"location":"user-guide/miscellaneous/#websockets","title":"Websockets","text":"

Support for websockets is provided by NGINX out of the box. No special configuration is required.

The only requirement to avoid closed connections is to increase the values of proxy-read-timeout and proxy-send-timeout.

The default value of these settings is 60 seconds.

A more adequate value to support websockets is a value higher than one hour (3600 seconds).
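For example, the timeouts can be raised per Ingress with annotations; this is a minimal sketch with a hypothetical host and service (the equivalent ConfigMap keys proxy-read-timeout and proxy-send-timeout apply globally):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: websocket-ingress  # hypothetical name
  annotations:
    # Keep idle websocket connections open for up to one hour
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  rules:
  - host: ws.example.com  # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: websocket-service  # hypothetical service
            port:
              number: 80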

      Important

      If the Ingress-Nginx Controller is exposed with a service type=LoadBalancer make sure the protocol between the loadbalancer and NGINX is TCP.

      "},{"location":"user-guide/miscellaneous/#optimizing-tls-time-to-first-byte-tttfb","title":"Optimizing TLS Time To First Byte (TTTFB)","text":"

      NGINX provides the configuration option ssl_buffer_size to allow the optimization of the TLS record size.

      This improves the TLS Time To First Byte (TTTFB). The default value in the Ingress controller is 4k (NGINX default is 16k).
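A minimal ConfigMap fragment, assuming the ssl-buffer-size ConfigMap key and a placeholder ConfigMap name:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller  # placeholder name
  namespace: ingress-nginx
data:
  # Smaller TLS records lower time to first byte; larger ones lower framing overhead
  ssl-buffer-size: "4k"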

      "},{"location":"user-guide/miscellaneous/#retries-in-non-idempotent-methods","title":"Retries in non-idempotent methods","text":"

      Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error. The previous behavior can be restored using retry-non-idempotent=true in the configuration ConfigMap.
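A sketch of the corresponding ConfigMap entry (the ConfigMap name is a placeholder):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller  # placeholder name
  namespace: ingress-nginx
data:
  # Restore the pre-1.9.13 behavior: retry POST, LOCK and PATCH on upstream errors
  retry-non-idempotent: "true"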

      "},{"location":"user-guide/miscellaneous/#limitations","title":"Limitations","text":"
      • Ingress rules for TLS require the definition of the field host
      "},{"location":"user-guide/miscellaneous/#why-endpoints-and-not-services","title":"Why endpoints and not services","text":"

      The Ingress-Nginx Controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.

      "},{"location":"user-guide/monitoring/","title":"Monitoring","text":"

Two different methods to install and configure Prometheus and Grafana are described in this doc:

• Prometheus and Grafana installation using Pod Annotations. This installs Prometheus and Grafana in the same namespace as NGINX Ingress.
• Prometheus and Grafana installation using Service Monitors. This installs Prometheus and Grafana in two different namespaces. This is the preferred method, and the Helm chart supports this by default.

      "},{"location":"user-guide/monitoring/#prometheus-and-grafana-installation-using-pod-annotations","title":"Prometheus and Grafana installation using Pod Annotations","text":"

      This tutorial will show you how to install Prometheus and Grafana for scraping the metrics of the Ingress-Nginx Controller.

      Important

      This example uses emptyDir volumes for Prometheus and Grafana. This means once the pod gets terminated you will lose all the data.

      "},{"location":"user-guide/monitoring/#before-you-begin","title":"Before You Begin","text":"
      • The Ingress-Nginx Controller should already be deployed according to the deployment instructions here.

• The controller should be configured for exporting metrics. This requires three configurations to the controller. These configurations are:

      • controller.metrics.enabled=true
      • controller.podAnnotations.\"prometheus.io/scrape\"=\"true\"
      • controller.podAnnotations.\"prometheus.io/port\"=\"10254\"

• The easiest way to configure the controller for metrics is via helm upgrade. Assuming you have installed the ingress-nginx controller as a Helm release named ingress-nginx, you can simply run the command shown below:

helm upgrade ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx \
--set controller.metrics.enabled=true \
--set-string controller.podAnnotations."prometheus\.io/scrape"="true" \
--set-string controller.podAnnotations."prometheus\.io/port"="10254"

      • You can validate that the controller is configured for metrics by looking at the values of the installed release, like this:
helm get values ingress-nginx --namespace ingress-nginx
• You should be able to see the values shown below:
..
controller:
  metrics:
    enabled: true
  podAnnotations:
    prometheus.io/port: "10254"
    prometheus.io/scrape: "true"
..
• If you are not using Helm, you will have to edit your manifests like this:
  • Service manifest:
apiVersion: v1
kind: Service
..
spec:
  ports:
    - name: prometheus
      port: 10254
      targetPort: prometheus
      ..
  • Deployment manifest:
apiVersion: apps/v1
kind: Deployment
..
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "10254"
    spec:
      containers:
        - name: controller
          ports:
            - name: prometheus
              containerPort: 10254
            ..
      "},{"location":"user-guide/monitoring/#deploy-and-configure-prometheus-server","title":"Deploy and configure Prometheus Server","text":"

      Note that the kustomize bases used in this tutorial are stored in the deploy folder of the GitHub repository kubernetes/ingress-nginx.

      • The Prometheus server must be configured so that it can discover endpoints of services. If a Prometheus server is already running in the cluster and if it is configured in a way that it can find the ingress controller pods, no extra configuration is needed.

      • If there is no existing Prometheus server running, the rest of this tutorial will guide you through the steps needed to deploy a properly configured Prometheus server.

• Running the following command deploys Prometheus in Kubernetes:

kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/
      "},{"location":"user-guide/monitoring/#prometheus-dashboard","title":"Prometheus Dashboard","text":"
      • Open Prometheus dashboard in a web browser:
kubectl get svc -n ingress-nginx
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE
default-http-backend   ClusterIP   10.103.59.201   <none>        80/TCP                                       3d
ingress-nginx          NodePort    10.97.44.72     <none>        80:30100/TCP,443:30154/TCP,10254:32049/TCP   5h
prometheus-server      NodePort    10.98.233.86    <none>        9090:32630/TCP                               1m
      • Obtain the IP address of the nodes in the running cluster:
kubectl get nodes -o wide
• In some cases where the nodes only have internal IP addresses, we need to execute:

kubectl get nodes --selector=kubernetes.io/role!=master -o jsonpath={.items[*].status.addresses[?\(@.type==\"InternalIP\"\)].address}
10.192.0.2 10.192.0.3 10.192.0.4
      • Open your browser and visit the following URL: http://{node IP address}:{prometheus-svc-nodeport} to load the Prometheus Dashboard.

      • According to the above example, this URL will be http://10.192.0.3:32630

      "},{"location":"user-guide/monitoring/#grafana","title":"Grafana","text":"
• Install Grafana using the command below:

kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/grafana/
• Look at the services:

kubectl get svc -n ingress-nginx
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE
default-http-backend   ClusterIP   10.103.59.201   <none>        80/TCP                                       3d
ingress-nginx          NodePort    10.97.44.72     <none>        80:30100/TCP,443:30154/TCP,10254:32049/TCP   5h
prometheus-server      NodePort    10.98.233.86    <none>        9090:32630/TCP                               10m
grafana                NodePort    10.98.233.87    <none>        3000:31086/TCP                               10m

      • Open your browser and visit the following URL: http://{node IP address}:{grafana-svc-nodeport} to load the Grafana Dashboard. According to the above example, this URL will be http://10.192.0.3:31086

The default username and password are both admin.

• After the login you can import the Grafana dashboard from official dashboards, by following the steps given below:

  • Navigate to the left-hand panel of Grafana
  • Hover over the gearwheel icon for Configuration and click "Data Sources"
  • Click "Add data source"
  • Select "Prometheus"
  • Enter the details (note: I used http://CLUSTER_IP_PROMETHEUS_SVC:9090)
  • Left menu (hover over +) -> Dashboard
  • Click "Import"
  • Paste the JSON from https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
  • Click "Import JSON"
  • Select the Prometheus data source
  • Click "Import"

      "},{"location":"user-guide/monitoring/#caveats","title":"Caveats","text":""},{"location":"user-guide/monitoring/#wildcard-ingresses","title":"Wildcard ingresses","text":"
      • By default request metrics are labeled with the hostname. When you have a wildcard domain ingress, then there will be no metrics for that ingress (to prevent the metrics from exploding in cardinality). To get metrics in this case you have two options:
• Run the ingress controller with --metrics-per-host=false. You will lose labeling by hostname, but still have labeling by ingress (see the sketch after this list).
        • Run the ingress controller with --metrics-per-undefined-host=true --metrics-per-host=true. You will get labeling by hostname even if the hostname is not explicitly defined on an ingress. Be warned that cardinality could explode due to many hostnames.
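As an illustrative sketch, these flags are ordinary container arguments on the controller (the surrounding Deployment fields are omitted):

# Deployment spec.template.spec.containers[0], trimmed for brevity
args:
  - /nginx-ingress-controller
  # Drop the per-host label so wildcard hosts still produce (ingress-labeled) metrics
  - --metrics-per-host=false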
      "},{"location":"user-guide/monitoring/#grafana-dashboard-using-ingress-resource","title":"Grafana dashboard using ingress resource","text":"
• If you want to expose the dashboard for Grafana using an Ingress resource, you can:
  • change the service type of the prometheus-server service and the grafana service to "ClusterIP" like this:
kubectl -n ingress-nginx edit svc grafana
  • This will open the currently deployed service grafana in the default editor configured in your shell (vi/nvim/nano/other)
  • scroll down to the line that looks like "type: NodePort"
  • change it to look like "type: ClusterIP", then save and exit
  • create an Ingress resource with the backend set to "grafana" and the port set to "3000" (see the sketch after this list)
• Similarly, you can edit the service "prometheus-server" and add an Ingress resource.
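A minimal sketch of such an Ingress, assuming a hypothetical hostname grafana.example.com and the nginx IngressClass:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: ingress-nginx
spec:
  ingressClassName: nginx
  rules:
  - host: grafana.example.com  # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000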
      "},{"location":"user-guide/monitoring/#prometheus-and-grafana-installation-using-service-monitors","title":"Prometheus and Grafana installation using Service Monitors","text":"

This document assumes you're using Helm and the kube-prometheus-stack package to install Prometheus and Grafana.

      "},{"location":"user-guide/monitoring/#verify-ingress-nginx-controller-is-installed","title":"Verify Ingress-Nginx Controller is installed","text":"
      • The Ingress-Nginx Controller should already be deployed according to the deployment instructions here.

• To check that the Ingress controller is deployed, run:

kubectl get pods -n ingress-nginx

• The result should look something like:
NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-7c489dc7b7-ccrf6   1/1     Running   0          19h
      "},{"location":"user-guide/monitoring/#verify-prometheus-is-installed","title":"Verify Prometheus is installed","text":"
      • To check if Prometheus is already deployed, run the following command:

helm ls -A
NAME          NAMESPACE       REVISION    UPDATED                                 STATUS      CHART                           APP VERSION
ingress-nginx ingress-nginx   10          2022-01-20 18:08:55.267373 -0800 PST    deployed    ingress-nginx-4.0.16            1.1.1
prometheus    prometheus      1           2022-01-20 16:07:25.086828 -0800 PST    deployed    kube-prometheus-stack-30.1.0    0.53.1
• Notice that Prometheus is installed in a different namespace than ingress-nginx

• If Prometheus is not installed, you can install it from here
      "},{"location":"user-guide/monitoring/#re-configure-ingress-nginx-controller","title":"Re-configure Ingress-Nginx Controller","text":"
• The Ingress NGINX controller needs to be reconfigured to export metrics. This requires three additional configurations to the controller. These configurations are:
controller.metrics.enabled=true
controller.metrics.serviceMonitor.enabled=true
controller.metrics.serviceMonitor.additionalLabels.release="prometheus"
• The easiest way of doing this is to run helm upgrade:
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--set controller.metrics.enabled=true \
--set controller.metrics.serviceMonitor.enabled=true \
--set controller.metrics.serviceMonitor.additionalLabels.release="prometheus"
• Here controller.metrics.serviceMonitor.additionalLabels.release="prometheus" should match the name of the Helm release of the kube-prometheus-stack.

      • You can validate that the controller has been successfully reconfigured to export metrics by looking at the values of the installed release, like this:

helm get values ingress-nginx --namespace ingress-nginx
controller:
  metrics:
    enabled: true
    serviceMonitor:
      additionalLabels:
        release: prometheus
      enabled: true

      "},{"location":"user-guide/monitoring/#configure-prometheus","title":"Configure Prometheus","text":"
• Since Prometheus is running in a different namespace than ingress-nginx, by default it cannot discover ServiceMonitors in other namespaces. Reconfigure your kube-prometheus-stack Helm installation to set the serviceMonitorSelectorNilUsesHelmValues flag to false. Likewise, Prometheus by default only discovers PodMonitors within its own namespace; disable this restriction by setting podMonitorSelectorNilUsesHelmValues to false.
      • The configurations required are:
prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false
prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
• The easiest way of doing this is to run helm upgrade:
helm upgrade prometheus prometheus-community/kube-prometheus-stack \
--namespace prometheus \
--set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false \
--set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false
      • You can validate that Prometheus has been reconfigured by looking at the values of the installed release, like this:
helm get values prometheus --namespace prometheus
      • You should be able to see the values shown below:
prometheus:
  prometheusSpec:
    podMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelectorNilUsesHelmValues: false
      "},{"location":"user-guide/monitoring/#connect-and-view-prometheus-dashboard","title":"Connect and view Prometheus dashboard","text":"
      • Port forward to Prometheus service. Find out the name of the prometheus service by using the following command:
kubectl get svc -n prometheus

      The result of this command would look like:

NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                     ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   7h46m
prometheus-grafana                        ClusterIP   10.106.28.162    <none>        80/TCP                       7h46m
prometheus-kube-prometheus-alertmanager   ClusterIP   10.108.125.245   <none>        9093/TCP                     7h46m
prometheus-kube-prometheus-operator       ClusterIP   10.110.220.1     <none>        443/TCP                      7h46m
prometheus-kube-prometheus-prometheus     ClusterIP   10.102.72.134    <none>        9090/TCP                     7h46m
prometheus-kube-state-metrics             ClusterIP   10.104.231.181   <none>        8080/TCP                     7h46m
prometheus-operated                       ClusterIP   None             <none>        9090/TCP                     7h46m
prometheus-prometheus-node-exporter       ClusterIP   10.96.247.128    <none>        9100/TCP                     7h46m
      prometheus-kube-prometheus-prometheus is the service we want to port forward to. We can do so using the following command:
kubectl port-forward svc/prometheus-kube-prometheus-prometheus -n prometheus 9090:9090
      When you run the above command, you should see something like:
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
• Open your browser and visit http://localhost:{port-forwarded-port}. According to the above example, that would be http://localhost:9090.

      "},{"location":"user-guide/monitoring/#connect-and-view-grafana-dashboard","title":"Connect and view Grafana dashboard","text":"
      • Port forward to Grafana service. Find out the name of the Grafana service by using the following command:
kubectl get svc -n prometheus

      The result of this command would look like:

NAME                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
alertmanager-operated                     ClusterIP   None             <none>        9093/TCP,9094/TCP,9094/UDP   7h46m
prometheus-grafana                        ClusterIP   10.106.28.162    <none>        80/TCP                       7h46m
prometheus-kube-prometheus-alertmanager   ClusterIP   10.108.125.245   <none>        9093/TCP                     7h46m
prometheus-kube-prometheus-operator       ClusterIP   10.110.220.1     <none>        443/TCP                      7h46m
prometheus-kube-prometheus-prometheus     ClusterIP   10.102.72.134    <none>        9090/TCP                     7h46m
prometheus-kube-state-metrics             ClusterIP   10.104.231.181   <none>        8080/TCP                     7h46m
prometheus-operated                       ClusterIP   None             <none>        9090/TCP                     7h46m
prometheus-prometheus-node-exporter       ClusterIP   10.96.247.128    <none>        9100/TCP                     7h46m
      prometheus-grafana is the service we want to port forward to. We can do so using the following command:
kubectl port-forward svc/prometheus-grafana 3000:80 -n prometheus
      When you run the above command, you should see something like:
Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000
• Open your browser and visit http://localhost:{port-forwarded-port}. According to the above example, that would be http://localhost:3000.
• The default username/password is admin/prom-operator.
• After the login you can import the Grafana dashboard from official dashboards, by following the steps given below:

• Navigate to the left-hand panel of Grafana
• Hover over the gearwheel icon for Configuration and click "Data Sources"
• Click "Add data source"
• Select "Prometheus"
• Enter the details (note: I used http://10.102.72.134:9090, which is the CLUSTER-IP of the Prometheus service)
• Left menu (hover over +) -> Dashboard
• Click "Import"
• Paste the JSON from https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
• Click "Import JSON"
• Select the Prometheus data source
• Click "Import"

      "},{"location":"user-guide/monitoring/#exposed-metrics","title":"Exposed metrics","text":"

      Prometheus metrics are exposed on port 10254.

      "},{"location":"user-guide/monitoring/#request-metrics","title":"Request metrics","text":"
• nginx_ingress_controller_request_duration_seconds (Histogram): The request processing time in seconds (time elapsed between the first bytes read from the client and the log write after the last bytes were sent to the client); affected by client speed. nginx var: request_time

• nginx_ingress_controller_response_duration_seconds (Histogram): The time spent on receiving the response from the upstream server in seconds (affected by client speed when the response is bigger than the proxy buffers). Note: can be up to several milliseconds bigger than nginx_ingress_controller_request_duration_seconds because of the different measuring method. nginx var: upstream_response_time

• nginx_ingress_controller_header_duration_seconds (Histogram): The time spent on receiving the first header from the upstream server. nginx var: upstream_header_time

• nginx_ingress_controller_connect_duration_seconds (Histogram): The time spent on establishing a connection with the upstream server. nginx var: upstream_connect_time

• nginx_ingress_controller_response_size (Histogram): The response length (including request line, header, and request body). nginx var: bytes_sent

• nginx_ingress_controller_request_size (Histogram): The request length (including request line, header, and request body). nginx var: request_length

• nginx_ingress_controller_requests (Counter): The total number of client requests.

• nginx_ingress_controller_bytes_sent (Histogram): The number of bytes sent to a client. Deprecated, use nginx_ingress_controller_response_size. nginx var: bytes_sent

# HELP nginx_ingress_controller_bytes_sent The number of bytes sent to a client. DEPRECATED! Use nginx_ingress_controller_response_size
# TYPE nginx_ingress_controller_bytes_sent histogram
# HELP nginx_ingress_controller_connect_duration_seconds The time spent on establishing a connection with the upstream server
# TYPE nginx_ingress_controller_connect_duration_seconds histogram
# HELP nginx_ingress_controller_header_duration_seconds The time spent on receiving first header from the upstream server
# TYPE nginx_ingress_controller_header_duration_seconds histogram
# HELP nginx_ingress_controller_request_duration_seconds The request processing time in seconds
# TYPE nginx_ingress_controller_request_duration_seconds histogram
# HELP nginx_ingress_controller_request_size The request length (including request line, header, and request body)
# TYPE nginx_ingress_controller_request_size histogram
# HELP nginx_ingress_controller_requests The total number of client requests.
# TYPE nginx_ingress_controller_requests counter
# HELP nginx_ingress_controller_response_duration_seconds The time spent on receiving the response from the upstream server
# TYPE nginx_ingress_controller_response_duration_seconds histogram
# HELP nginx_ingress_controller_response_size The response length (including request line, header, and request body)
# TYPE nginx_ingress_controller_response_size histogram
      "},{"location":"user-guide/monitoring/#nginx-process-metrics","title":"Nginx process metrics","text":"
      # HELP nginx_ingress_controller_nginx_process_connections current number of client connections with state {active, reading, writing, waiting}\n# TYPE nginx_ingress_controller_nginx_process_connections gauge\n# HELP nginx_ingress_controller_nginx_process_connections_total total number of connections with state {accepted, handled}\n# TYPE nginx_ingress_controller_nginx_process_connections_total counter\n# HELP nginx_ingress_controller_nginx_process_cpu_seconds_total Cpu usage in seconds\n# TYPE nginx_ingress_controller_nginx_process_cpu_seconds_total counter\n# HELP nginx_ingress_controller_nginx_process_num_procs number of processes\n# TYPE nginx_ingress_controller_nginx_process_num_procs gauge\n# HELP nginx_ingress_controller_nginx_process_oldest_start_time_seconds start time in seconds since 1970/01/01\n# TYPE nginx_ingress_controller_nginx_process_oldest_start_time_seconds gauge\n# HELP nginx_ingress_controller_nginx_process_read_bytes_total number of bytes read\n# TYPE nginx_ingress_controller_nginx_process_read_bytes_total counter\n# HELP nginx_ingress_controller_nginx_process_requests_total total number of client requests\n# TYPE nginx_ingress_controller_nginx_process_requests_total counter\n# HELP nginx_ingress_controller_nginx_process_resident_memory_bytes number of bytes of memory in use\n# TYPE nginx_ingress_controller_nginx_process_resident_memory_bytes gauge\n# HELP nginx_ingress_controller_nginx_process_virtual_memory_bytes number of bytes of memory in use\n# TYPE nginx_ingress_controller_nginx_process_virtual_memory_bytes gauge\n# HELP nginx_ingress_controller_nginx_process_write_bytes_total number of bytes written\n# TYPE nginx_ingress_controller_nginx_process_write_bytes_total counter\n
      "},{"location":"user-guide/monitoring/#controller-metrics","title":"Controller metrics","text":"
      # HELP nginx_ingress_controller_build_info A metric with a constant '1' labeled with information about the build.\n# TYPE nginx_ingress_controller_build_info gauge\n# HELP nginx_ingress_controller_check_success Cumulative number of Ingress controller syntax check operations\n# TYPE nginx_ingress_controller_check_success counter\n# HELP nginx_ingress_controller_config_hash Running configuration hash actually running\n# TYPE nginx_ingress_controller_config_hash gauge\n# HELP nginx_ingress_controller_config_last_reload_successful Whether the last configuration reload attempt was successful\n# TYPE nginx_ingress_controller_config_last_reload_successful gauge\n# HELP nginx_ingress_controller_config_last_reload_successful_timestamp_seconds Timestamp of the last successful configuration reload.\n# TYPE nginx_ingress_controller_config_last_reload_successful_timestamp_seconds gauge\n# HELP nginx_ingress_controller_ssl_certificate_info Hold all labels associated to a certificate\n# TYPE nginx_ingress_controller_ssl_certificate_info gauge\n# HELP nginx_ingress_controller_success Cumulative number of Ingress controller reload operations\n# TYPE nginx_ingress_controller_success counter\n# HELP nginx_ingress_controller_orphan_ingress Gauge reporting status of ingress orphanity, 1 indicates orphaned ingress. 'namespace' is the string used to identify namespace of ingress, 'ingress' for ingress name and 'type' for 'no-service' or 'no-endpoint' of orphanity\n# TYPE nginx_ingress_controller_orphan_ingress gauge\n
      "},{"location":"user-guide/monitoring/#admission-metrics","title":"Admission metrics","text":"
      # HELP nginx_ingress_controller_admission_config_size The size of the tested configuration\n# TYPE nginx_ingress_controller_admission_config_size gauge\n# HELP nginx_ingress_controller_admission_render_duration The processing duration of ingresses rendering by the admission controller (float seconds)\n# TYPE nginx_ingress_controller_admission_render_duration gauge\n# HELP nginx_ingress_controller_admission_render_ingresses The length of ingresses rendered by the admission controller\n# TYPE nginx_ingress_controller_admission_render_ingresses gauge\n# HELP nginx_ingress_controller_admission_roundtrip_duration The complete duration of the admission controller at the time to process a new event (float seconds)\n# TYPE nginx_ingress_controller_admission_roundtrip_duration gauge\n# HELP nginx_ingress_controller_admission_tested_duration The processing duration of the admission controller tests (float seconds)\n# TYPE nginx_ingress_controller_admission_tested_duration gauge\n# HELP nginx_ingress_controller_admission_tested_ingresses The length of ingresses processed by the admission controller\n# TYPE nginx_ingress_controller_admission_tested_ingresses gauge\n
      "},{"location":"user-guide/monitoring/#histogram-buckets","title":"Histogram buckets","text":"

      You can configure buckets for histogram metrics using these command line options (here are their default values): * --time-buckets=[0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10] * --length-buckets=[10, 20, 30, 40, 50, 60, 70, 80, 90, 100] * --size-buckets=[10, 100, 1000, 10000, 100000, 1e+06, 1e+07]

      "},{"location":"user-guide/multiple-ingress/","title":"Multiple Ingress controllers","text":"

      By default, deploying multiple Ingress controllers (e.g., ingress-nginx & gce) will result in all controllers simultaneously racing to update Ingress status fields in confusing ways.

To fix this problem, use IngressClasses. Using the kubernetes.io/ingress.class annotation is neither preferred nor suggested, as it may be deprecated in the future; use the field ingress.spec.ingressClassName instead. Note that when the controller is deployed with scope.enabled, the IngressClass resource field is not used.

      "},{"location":"user-guide/multiple-ingress/#using-ingressclasses","title":"Using IngressClasses","text":"

      If all ingress controllers respect IngressClasses (e.g. multiple instances of ingress-nginx v1.0), you can deploy two Ingress controllers by granting them control over two different IngressClasses, then selecting one of the two IngressClasses with ingressClassName.

First, ensure that --controller-class= and --ingress-class are set to something different on each ingress controller. If your additional ingress controller is to be installed in a namespace where one or more ingress-nginx controllers are already installed, you also need to specify a different, unique --election-id for the new instance of the controller.

# ingress-nginx Deployment/Statefulset
spec:
  template:
     spec:
       containers:
         - name: ingress-nginx-internal-controller
           args:
             - /nginx-ingress-controller
             - '--election-id=ingress-controller-leader'
             - '--controller-class=k8s.io/internal-ingress-nginx'
             - '--ingress-class=k8s.io/internal-nginx'
            ...

      Then use the same value in the IngressClass:

# ingress-nginx IngressClass
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: internal-nginx
spec:
  controller: k8s.io/internal-ingress-nginx
  ...

      And refer to that IngressClass in your Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: internal-nginx
  ...

      or if installing with Helm:

controller:
  electionID: ingress-controller-leader
  ingressClass: internal-nginx  # default: nginx
  ingressClassResource:
    name: internal-nginx  # default: nginx
    enabled: true
    default: false
    controllerValue: "k8s.io/internal-ingress-nginx"  # default: k8s.io/ingress-nginx

      Important

When running multiple ingress-nginx controllers, an Ingress with an unset class annotation will only be processed if one of the controllers uses the default --controller-class value (see the IsValid method in internal/ingress/annotations/class/main.go); otherwise the class annotation becomes required.

If --controller-class is set to the default value of k8s.io/ingress-nginx, the controller will monitor Ingresses with no class annotation and Ingresses with the annotation class set to nginx. Use a non-default value for --controller-class to ensure that the controller only satisfies the specific class of Ingresses.

      "},{"location":"user-guide/multiple-ingress/#using-the-kubernetesioingressclass-annotation-in-deprecation","title":"Using the kubernetes.io/ingress.class annotation (in deprecation)","text":"

If you're running multiple ingress controllers where one or more do not support IngressClasses, you must specify the annotation kubernetes.io/ingress.class: "nginx" in all ingresses that you would like ingress-nginx to claim.

      For instance,

metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "gce"

      will target the GCE controller, forcing the Ingress-NGINX controller to ignore it, while an annotation like:

metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"

      will target the Ingress-NGINX controller, forcing the GCE controller to ignore it.

You can change the value "nginx" to something else by setting the --ingress-class flag:

spec:
  template:
     spec:
       containers:
         - name: ingress-nginx-internal-controller
           args:
             - /nginx-ingress-controller
             - --ingress-class=internal-nginx

then setting the corresponding kubernetes.io/ingress.class: "internal-nginx" annotation on your Ingresses.

To reiterate, setting the annotation to any value which does not match a valid ingress class will force the Ingress-Nginx Controller to ignore your Ingress. If you are only running a single Ingress-Nginx Controller, this can be achieved by setting the annotation to any value except "nginx" or an empty string.

      Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.

      "},{"location":"user-guide/tls/","title":"TLS/HTTPS","text":""},{"location":"user-guide/tls/#tls-secrets","title":"TLS Secrets","text":"

      Anytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret.

      Warning

Ensure that the certificate order is leaf->intermediate->root, otherwise the controller will not be able to import the certificate, and you'll see this error in the logs:
W1012 09:15:45.920000       6 backend_ssl.go:46] Error obtaining X.509 certificate: unexpected error creating SSL Cert: certificate and private key does not have a matching public key: tls: private key does not match public key

      You can generate a self-signed certificate and private key with:

$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj "/CN=${HOST}/O=${HOST}" -addext "subjectAltName = DNS:${HOST}"

      Then create the secret in the cluster via:

kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}

      The resulting secret will be of type kubernetes.io/tls.

      "},{"location":"user-guide/tls/#host-names","title":"Host names","text":"

      Ensure that the relevant ingress rules specify a matching hostname.

      "},{"location":"user-guide/tls/#default-ssl-certificate","title":"Default SSL Certificate","text":"

      NGINX provides the option to configure a server as a catch-all with server_name for requests that do not match any of the configured server names. This configuration works out-of-the-box for HTTP traffic. For HTTPS, a certificate is naturally required.

      For this reason the Ingress controller provides the flag --default-ssl-certificate. The secret referred to by this flag contains the default certificate to be used when accessing the catch-all server. If this flag is not provided NGINX will use a self-signed certificate.

      For instance, if you have a TLS secret foo-tls in the default namespace, add --default-ssl-certificate=default/foo-tls in the nginx-controller deployment.
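A sketch of where that flag goes, as a container argument on the controller (the surrounding Deployment fields are omitted):

# Deployment spec.template.spec.containers[0], trimmed for brevity
args:
  - /nginx-ingress-controller
  # <namespace>/<secret name> of the fallback certificate for the catch-all server
  - --default-ssl-certificate=default/foo-tls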

      If the tls: section is not set, NGINX will provide the default certificate but will not force HTTPS redirect.

      On the other hand, if the tls: section is set - even without specifying a secretName option - NGINX will force HTTPS redirect.

      To force redirects for Ingresses that do not specify a TLS-block at all, take a look at force-ssl-redirect in ConfigMap.

      "},{"location":"user-guide/tls/#ssl-passthrough","title":"SSL Passthrough","text":"

      The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by default. This is required to enable passthrough backends in Ingress objects.
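A trimmed sketch of enabling the flag on the controller; individual Ingresses then opt in with the nginx.ingress.kubernetes.io/ssl-passthrough annotation:

# Deployment spec.template.spec.containers[0], trimmed for brevity
args:
  - /nginx-ingress-controller
  # Required before any Ingress can use ssl-passthrough
  - --enable-ssl-passthrough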

      Warning

      This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.

      SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation, which requires compatible clients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back and forth between the backend and the client.

      If there is no hostname matching the requested host name, the request is handed over to NGINX on the configured passthrough proxy port (default: 442), which proxies the request to the default backend.

      Note

      Unlike HTTP backends, traffic to Passthrough backends is sent to the clusterIP of the backing Service instead of individual Endpoints.

      "},{"location":"user-guide/tls/#http-strict-transport-security","title":"HTTP Strict Transport Security","text":"

      HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.

      HSTS is enabled by default.

To disable this behavior use hsts: "false" in the configuration ConfigMap.
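A minimal ConfigMap fragment (the ConfigMap name is a placeholder):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller  # placeholder name
  namespace: ingress-nginx
data:
  # Stop sending the Strict-Transport-Security header
  hsts: "false"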

      "},{"location":"user-guide/tls/#server-side-https-enforcement-through-redirect","title":"Server-side HTTPS enforcement through redirect","text":"

      By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.

This can be disabled globally using ssl-redirect: "false" in the NGINX config map, or per-Ingress with the nginx.ingress.kubernetes.io/ssl-redirect: "false" annotation in the particular resource.

      Tip

When using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: "true" annotation in the particular resource.
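For illustration, a hypothetical Ingress without a TLS block, sitting behind an external TLS terminator:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app  # hypothetical name
  annotations:
    # Redirect HTTP to HTTPS even though TLS is terminated before NGINX
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com  # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app  # hypothetical service
            port:
              number: 80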

      "},{"location":"user-guide/tls/#automated-certificate-management-with-cert-manager","title":"Automated Certificate Management with cert-manager","text":"

      cert-manager automatically requests missing or expired certificates from a range of supported issuers (including Let's Encrypt) by monitoring ingress resources.

      To set up cert-manager you should take a look at this full example.

To enable it for an Ingress resource you have to deploy cert-manager, configure a certificate issuer, and update the manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-demo
  annotations:
    cert-manager.io/issuer: "letsencrypt-staging" # Replace this with a production issuer once you've tested it
    [..]
spec:
  tls:
    - hosts:
        - ingress-demo.example.com
      secretName: ingress-demo-tls
    [...]

Default TLS Version and Ciphers

To provide the most secure baseline configuration possible, ingress-nginx defaults to using TLS 1.2 and 1.3 only, with a secure set of TLS ciphers.

      "},{"location":"user-guide/tls/#legacy-tls","title":"Legacy TLS","text":"

      The default configuration, though secure, does not support some older browsers and operating systems.

      For instance, TLS 1.1+ is only enabled by default from Android 5.0 on. At the time of writing, May 2018, approximately 15% of Android devices are not compatible with ingress-nginx's default configuration.

      To change this default behavior, use a ConfigMap.

A sample ConfigMap fragment to allow these older clients to connect could look something like the following (generated using the Mozilla SSL Configuration Generator):

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
data:
  ssl-ciphers: "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA"
  ssl-protocols: "TLSv1.2 TLSv1.3"

NGINX Configuration

      There are three ways to customize NGINX:

      1. ConfigMap: using a Configmap to set global configurations in NGINX.
      2. Annotations: use this if you want a specific configuration for a particular Ingress rule.
3. Custom template: when more specific settings are required, like open_file_cache, adjusting listen options such as rcvbuf, or when it is not possible to change the configuration through the ConfigMap.
      "},{"location":"user-guide/nginx-configuration/annotations-risk/","title":"Annotations Scope and Risk","text":"Group Annotation Risk Scope Aliases server-alias High ingress Allowlist allowlist-source-range Medium location BackendProtocol backend-protocol Low location BasicDigestAuth auth-realm Medium location BasicDigestAuth auth-secret Medium location BasicDigestAuth auth-secret-type Low location BasicDigestAuth auth-type Low location Canary canary Low ingress Canary canary-by-cookie Medium ingress Canary canary-by-header Medium ingress Canary canary-by-header-pattern Medium ingress Canary canary-by-header-value Medium ingress Canary canary-weight Low ingress Canary canary-weight-total Low ingress CertificateAuth auth-tls-error-page High location CertificateAuth auth-tls-match-cn High location CertificateAuth auth-tls-pass-certificate-to-upstream Low location CertificateAuth auth-tls-secret Medium location CertificateAuth auth-tls-verify-client Medium location CertificateAuth auth-tls-verify-depth Low location ClientBodyBufferSize client-body-buffer-size Low location ConfigurationSnippet configuration-snippet Critical location Connection connection-proxy-header Low location CorsConfig cors-allow-credentials Low ingress CorsConfig cors-allow-headers Medium ingress CorsConfig cors-allow-methods Medium ingress CorsConfig cors-allow-origin Medium ingress CorsConfig cors-expose-headers Medium ingress CorsConfig cors-max-age Low ingress CorsConfig enable-cors Low ingress CustomHTTPErrors custom-http-errors Low location CustomHeaders custom-headers Medium location DefaultBackend default-backend Low location Denylist denylist-source-range Medium location DisableProxyInterceptErrors disable-proxy-intercept-errors Low location EnableGlobalAuth enable-global-auth Low location ExternalAuth auth-always-set-cookie Low location ExternalAuth auth-cache-duration Medium location ExternalAuth auth-cache-key Medium location ExternalAuth auth-keepalive Low location ExternalAuth auth-keepalive-requests Low location ExternalAuth auth-keepalive-share-vars Low location ExternalAuth auth-keepalive-timeout Low location ExternalAuth auth-method Low location ExternalAuth auth-proxy-set-headers Medium location ExternalAuth auth-request-redirect Medium location ExternalAuth auth-response-headers Medium location ExternalAuth auth-signin High location ExternalAuth auth-signin-redirect-param Medium location ExternalAuth auth-snippet Critical location ExternalAuth auth-url High location FastCGI fastcgi-index Medium location FastCGI fastcgi-params-configmap Medium location HTTP2PushPreload http2-push-preload Low location LoadBalancing load-balance Low location Logs enable-access-log Low location Logs enable-rewrite-log Low location Mirror mirror-host High ingress Mirror mirror-request-body Low ingress Mirror mirror-target High ingress ModSecurity enable-modsecurity Low ingress ModSecurity enable-owasp-core-rules Low ingress ModSecurity modsecurity-snippet Critical ingress ModSecurity modsecurity-transaction-id High ingress Opentelemetry enable-opentelemetry Low location Opentelemetry opentelemetry-operation-name Medium location Opentelemetry opentelemetry-trust-incoming-span Low location Proxy proxy-body-size Medium location Proxy proxy-buffer-size Low location Proxy proxy-buffering Low location Proxy proxy-buffers-number Low location Proxy proxy-connect-timeout Low location Proxy proxy-cookie-domain Medium location Proxy proxy-cookie-path Medium location Proxy proxy-http-version Low location Proxy 
proxy-max-temp-file-size Low location Proxy proxy-next-upstream Medium location Proxy proxy-next-upstream-timeout Low location Proxy proxy-next-upstream-tries Low location Proxy proxy-read-timeout Low location Proxy proxy-redirect-from Medium location Proxy proxy-redirect-to Medium location Proxy proxy-request-buffering Low location Proxy proxy-send-timeout Low location ProxySSL proxy-ssl-ciphers Medium ingress ProxySSL proxy-ssl-name High ingress ProxySSL proxy-ssl-protocols Low ingress ProxySSL proxy-ssl-secret Medium ingress ProxySSL proxy-ssl-server-name Low ingress ProxySSL proxy-ssl-verify Low ingress ProxySSL proxy-ssl-verify-depth Low ingress RateLimit limit-allowlist Low location RateLimit limit-burst-multiplier Low location RateLimit limit-connections Low location RateLimit limit-rate Low location RateLimit limit-rate-after Low location RateLimit limit-rpm Low location RateLimit limit-rps Low location Redirect from-to-www-redirect Low location Redirect permanent-redirect Medium location Redirect permanent-redirect-code Low location Redirect temporal-redirect Medium location Redirect temporal-redirect-code Low location Rewrite app-root Medium location Rewrite force-ssl-redirect Medium location Rewrite preserve-trailing-slash Medium location Rewrite rewrite-target Medium ingress Rewrite ssl-redirect Low location Rewrite use-regex Low location SSLCipher ssl-ciphers Low ingress SSLCipher ssl-prefer-server-ciphers Low ingress SSLPassthrough ssl-passthrough Low ingress Satisfy satisfy Low location ServerSnippet server-snippet Critical ingress ServiceUpstream service-upstream Low ingress SessionAffinity affinity Low ingress SessionAffinity affinity-canary-behavior Low ingress SessionAffinity affinity-mode Medium ingress SessionAffinity session-cookie-change-on-failure Low ingress SessionAffinity session-cookie-conditional-samesite-none Low ingress SessionAffinity session-cookie-domain Medium ingress SessionAffinity session-cookie-expires Medium ingress SessionAffinity session-cookie-max-age Medium ingress SessionAffinity session-cookie-name Medium ingress SessionAffinity session-cookie-path Medium ingress SessionAffinity session-cookie-samesite Low ingress SessionAffinity session-cookie-secure Low ingress StreamSnippet stream-snippet Critical ingress UpstreamHashBy upstream-hash-by High location UpstreamHashBy upstream-hash-by-subset Low location UpstreamHashBy upstream-hash-by-subset-size Low location UpstreamVhost upstream-vhost Low location UsePortInRedirects use-port-in-redirects Low location XForwardedPrefix x-forwarded-prefix Medium location"},{"location":"user-guide/nginx-configuration/annotations/","title":"Annotations","text":"

      You can add these Kubernetes annotations to specific Ingress objects to customize their behavior.

      Tip

      Annotation keys and values can only be strings. Other types, such as boolean or numeric values, must be quoted, e.g. \"true\", \"false\", \"100\".

      Note

      The annotation prefix can be changed using the --annotations-prefix command line argument, but the default is nginx.ingress.kubernetes.io, as described in the table below.

      Name type nginx.ingress.kubernetes.io/app-root string nginx.ingress.kubernetes.io/affinity cookie nginx.ingress.kubernetes.io/affinity-mode \"balanced\" or \"persistent\" nginx.ingress.kubernetes.io/affinity-canary-behavior \"sticky\" or \"legacy\" nginx.ingress.kubernetes.io/auth-realm string nginx.ingress.kubernetes.io/auth-secret string nginx.ingress.kubernetes.io/auth-secret-type string nginx.ingress.kubernetes.io/auth-type \"basic\" or \"digest\" nginx.ingress.kubernetes.io/auth-tls-secret string nginx.ingress.kubernetes.io/auth-tls-verify-depth number nginx.ingress.kubernetes.io/auth-tls-verify-client string nginx.ingress.kubernetes.io/auth-tls-error-page string nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream \"true\" or \"false\" nginx.ingress.kubernetes.io/auth-tls-match-cn string nginx.ingress.kubernetes.io/auth-url string nginx.ingress.kubernetes.io/auth-cache-key string nginx.ingress.kubernetes.io/auth-cache-duration string nginx.ingress.kubernetes.io/auth-keepalive number nginx.ingress.kubernetes.io/auth-keepalive-share-vars \"true\" or \"false\" nginx.ingress.kubernetes.io/auth-keepalive-requests number nginx.ingress.kubernetes.io/auth-keepalive-timeout number nginx.ingress.kubernetes.io/auth-proxy-set-headers string nginx.ingress.kubernetes.io/auth-snippet string nginx.ingress.kubernetes.io/enable-global-auth \"true\" or \"false\" nginx.ingress.kubernetes.io/backend-protocol string nginx.ingress.kubernetes.io/canary \"true\" or \"false\" nginx.ingress.kubernetes.io/canary-by-header string nginx.ingress.kubernetes.io/canary-by-header-value string nginx.ingress.kubernetes.io/canary-by-header-pattern string nginx.ingress.kubernetes.io/canary-by-cookie string nginx.ingress.kubernetes.io/canary-weight number nginx.ingress.kubernetes.io/canary-weight-total number nginx.ingress.kubernetes.io/client-body-buffer-size string nginx.ingress.kubernetes.io/configuration-snippet string nginx.ingress.kubernetes.io/custom-http-errors []int nginx.ingress.kubernetes.io/custom-headers string nginx.ingress.kubernetes.io/default-backend string nginx.ingress.kubernetes.io/enable-cors \"true\" or \"false\" nginx.ingress.kubernetes.io/cors-allow-origin string nginx.ingress.kubernetes.io/cors-allow-methods string nginx.ingress.kubernetes.io/cors-allow-headers string nginx.ingress.kubernetes.io/cors-expose-headers string nginx.ingress.kubernetes.io/cors-allow-credentials \"true\" or \"false\" nginx.ingress.kubernetes.io/cors-max-age number nginx.ingress.kubernetes.io/force-ssl-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/from-to-www-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/http2-push-preload \"true\" or \"false\" nginx.ingress.kubernetes.io/limit-connections number nginx.ingress.kubernetes.io/limit-rps number nginx.ingress.kubernetes.io/permanent-redirect string nginx.ingress.kubernetes.io/permanent-redirect-code number nginx.ingress.kubernetes.io/temporal-redirect string nginx.ingress.kubernetes.io/temporal-redirect-code number nginx.ingress.kubernetes.io/preserve-trailing-slash \"true\" or \"false\" nginx.ingress.kubernetes.io/proxy-body-size string nginx.ingress.kubernetes.io/proxy-cookie-domain string nginx.ingress.kubernetes.io/proxy-cookie-path string nginx.ingress.kubernetes.io/proxy-connect-timeout number nginx.ingress.kubernetes.io/proxy-send-timeout number nginx.ingress.kubernetes.io/proxy-read-timeout number nginx.ingress.kubernetes.io/proxy-next-upstream string nginx.ingress.kubernetes.io/proxy-next-upstream-timeout number 
nginx.ingress.kubernetes.io/proxy-next-upstream-tries number nginx.ingress.kubernetes.io/proxy-request-buffering string nginx.ingress.kubernetes.io/proxy-redirect-from string nginx.ingress.kubernetes.io/proxy-redirect-to string nginx.ingress.kubernetes.io/proxy-http-version \"1.0\" or \"1.1\" nginx.ingress.kubernetes.io/proxy-ssl-secret string nginx.ingress.kubernetes.io/proxy-ssl-ciphers string nginx.ingress.kubernetes.io/proxy-ssl-name string nginx.ingress.kubernetes.io/proxy-ssl-protocols string nginx.ingress.kubernetes.io/proxy-ssl-verify string nginx.ingress.kubernetes.io/proxy-ssl-verify-depth number nginx.ingress.kubernetes.io/proxy-ssl-server-name string nginx.ingress.kubernetes.io/enable-rewrite-log \"true\" or \"false\" nginx.ingress.kubernetes.io/rewrite-target URI nginx.ingress.kubernetes.io/satisfy string nginx.ingress.kubernetes.io/server-alias string nginx.ingress.kubernetes.io/server-snippet string nginx.ingress.kubernetes.io/service-upstream \"true\" or \"false\" nginx.ingress.kubernetes.io/session-cookie-change-on-failure \"true\" or \"false\" nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none \"true\" or \"false\" nginx.ingress.kubernetes.io/session-cookie-domain string nginx.ingress.kubernetes.io/session-cookie-expires string nginx.ingress.kubernetes.io/session-cookie-max-age string nginx.ingress.kubernetes.io/session-cookie-name string nginx.ingress.kubernetes.io/session-cookie-path string nginx.ingress.kubernetes.io/session-cookie-samesite string nginx.ingress.kubernetes.io/session-cookie-secure string nginx.ingress.kubernetes.io/ssl-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/ssl-passthrough \"true\" or \"false\" nginx.ingress.kubernetes.io/stream-snippet string nginx.ingress.kubernetes.io/upstream-hash-by string nginx.ingress.kubernetes.io/x-forwarded-prefix string nginx.ingress.kubernetes.io/load-balance string nginx.ingress.kubernetes.io/upstream-vhost string nginx.ingress.kubernetes.io/denylist-source-range CIDR nginx.ingress.kubernetes.io/whitelist-source-range CIDR nginx.ingress.kubernetes.io/proxy-buffering string nginx.ingress.kubernetes.io/proxy-buffers-number number nginx.ingress.kubernetes.io/proxy-buffer-size string nginx.ingress.kubernetes.io/proxy-max-temp-file-size string nginx.ingress.kubernetes.io/ssl-ciphers string nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers \"true\" or \"false\" nginx.ingress.kubernetes.io/connection-proxy-header string nginx.ingress.kubernetes.io/enable-access-log \"true\" or \"false\" nginx.ingress.kubernetes.io/enable-opentelemetry \"true\" or \"false\" nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span \"true\" or \"false\" nginx.ingress.kubernetes.io/use-regex bool nginx.ingress.kubernetes.io/enable-modsecurity bool nginx.ingress.kubernetes.io/enable-owasp-core-rules bool nginx.ingress.kubernetes.io/modsecurity-transaction-id string nginx.ingress.kubernetes.io/modsecurity-snippet string nginx.ingress.kubernetes.io/mirror-request-body string nginx.ingress.kubernetes.io/mirror-target string nginx.ingress.kubernetes.io/mirror-host string"},{"location":"user-guide/nginx-configuration/annotations/#canary","title":"Canary","text":"

      In some cases, you may want to \"canary\" a new set of changes by sending a small number of requests to a different service than the production service. The canary annotation enables the Ingress spec to act as an alternative service to which requests are routed, depending on the rules applied. The following annotations can be used to configure canary behavior after nginx.ingress.kubernetes.io/canary: \"true\" is set:

      • nginx.ingress.kubernetes.io/canary-by-header: The header to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to always, it will be routed to the canary. When the header is set to never, it will never be routed to the canary. For any other value, the header will be ignored and the request compared against the other canary rules by precedence.

      • nginx.ingress.kubernetes.io/canary-by-header-value: The header value to match for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to this value, it will be routed to the canary. For any other header value, the header will be ignored and the request compared against the other canary rules by precedence. This annotation has to be used together with nginx.ingress.kubernetes.io/canary-by-header. The annotation is an extension of the nginx.ingress.kubernetes.io/canary-by-header to allow customizing the header value instead of using hardcoded values. It doesn't have any effect if the nginx.ingress.kubernetes.io/canary-by-header annotation is not defined.

      • nginx.ingress.kubernetes.io/canary-by-header-pattern: This works the same way as canary-by-header-value, except it does PCRE regex matching. Note that when canary-by-header-value is set, this annotation will be ignored. If the given regex causes an error during request processing, the request will be considered as not matching.

      • nginx.ingress.kubernetes.io/canary-by-cookie: The cookie to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the cookie value is set to always, it will be routed to the canary. When the cookie is set to never, it will never be routed to the canary. For any other value, the cookie will be ignored and the request compared against the other canary rules by precedence.

      • nginx.ingress.kubernetes.io/canary-weight: The integer-based (0 - <weight-total>) percent of random requests that should be routed to the service specified in the canary Ingress. A weight of 0 implies that no requests will be sent to the service in the Canary ingress by this canary rule. A weight of <weight-total> implies that all requests will be sent to the alternative service specified in the Ingress. <weight-total> defaults to 100, and can be increased via nginx.ingress.kubernetes.io/canary-weight-total.

      • nginx.ingress.kubernetes.io/canary-weight-total: The total weight of traffic. If unspecified, it defaults to 100.

      • Canary rules are evaluated in order of precedence. Precedence is as follows: canary-by-header -> canary-by-cookie -> canary-weight

        Note that when you mark an ingress as canary, then all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except nginx.ingress.kubernetes.io/load-balance, nginx.ingress.kubernetes.io/upstream-hash-by, and annotations related to session affinity. If you want to restore the original behavior of canaries when session affinity was ignored, set nginx.ingress.kubernetes.io/affinity-canary-behavior annotation with value legacy on the canary ingress definition.

        Known Limitations

        Currently a maximum of one canary ingress can be applied per Ingress rule.
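
        As an illustrative sketch (host, service name, and weight are hypothetical), a canary Ingress combining these annotations might look like this:

        apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: echo-canary\n  annotations:\n    # route 10% of traffic to the canary service (values are illustrative)\n    nginx.ingress.kubernetes.io/canary: \"true\"\n    nginx.ingress.kubernetes.io/canary-weight: \"10\"\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: echo.example.com\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: echo-canary # hypothetical canary service\n            port:\n              number: 80\n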

        "},{"location":"user-guide/nginx-configuration/annotations/#rewrite","title":"Rewrite","text":"

        In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. Set the annotation nginx.ingress.kubernetes.io/rewrite-target to the path expected by the service.

        If the Application Root is exposed in a different path and needs to be redirected, set the annotation nginx.ingress.kubernetes.io/app-root to redirect requests for /.
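
        As a minimal sketch (the path and capture group are illustrative), an Ingress whose rule uses the path /something(/|$)(.*) together with use-regex could rewrite requests to the captured group:

        nginx.ingress.kubernetes.io/use-regex: \"true\"\nnginx.ingress.kubernetes.io/rewrite-target: /$2\n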

        Example

        Please check the rewrite example.

        "},{"location":"user-guide/nginx-configuration/annotations/#session-affinity","title":"Session Affinity","text":"

        The annotation nginx.ingress.kubernetes.io/affinity enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server. The only affinity type available for NGINX is cookie.

        The annotation nginx.ingress.kubernetes.io/affinity-mode defines the stickiness of a session. Setting this to balanced (default) will redistribute some sessions if a deployment gets scaled up, therefore rebalancing the load on the servers. Setting this to persistent will not rebalance sessions to new servers, therefore providing maximum stickiness.

        The annotation nginx.ingress.kubernetes.io/affinity-canary-behavior defines the behavior of canaries when session affinity is enabled. Setting this to sticky (default) will ensure that users that were served by canaries will continue to be served by canaries. Setting this to legacy will restore the original canary behavior, when session affinity was ignored.

        Attention

        If more than one Ingress is defined for a host and at least one Ingress uses nginx.ingress.kubernetes.io/affinity: cookie, then only paths on the Ingress using nginx.ingress.kubernetes.io/affinity will use session cookie affinity. All paths defined on other Ingresses for the host will be load balanced through the random selection of a backend server.

        Example

        Please check the affinity example.

        "},{"location":"user-guide/nginx-configuration/annotations/#cookie-affinity","title":"Cookie affinity","text":"

        If you use the cookie affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation nginx.ingress.kubernetes.io/session-cookie-name. The default is to create a cookie named 'INGRESSCOOKIE'.

        The NGINX annotation nginx.ingress.kubernetes.io/session-cookie-path defines the path that will be set on the cookie. This is optional unless the annotation nginx.ingress.kubernetes.io/use-regex is set to true; session cookie paths do not support regex.

        Use nginx.ingress.kubernetes.io/session-cookie-domain to set the Domain attribute of the sticky cookie.

        Use nginx.ingress.kubernetes.io/session-cookie-samesite to apply a SameSite attribute to the sticky cookie. Browser accepted values are None, Lax, and Strict. Some browsers reject cookies with SameSite=None, including those created before the SameSite=None specification (e.g. Chrome 5X). Other browsers mistakenly treat SameSite=None cookies as SameSite=Strict (e.g. Safari running on OSX 14). To omit SameSite=None from browsers with these incompatibilities, add the annotation nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none: \"true\".

        Use nginx.ingress.kubernetes.io/session-cookie-expires to control when the cookie expires; its value is the number of seconds until the cookie expires.

        Use nginx.ingress.kubernetes.io/session-cookie-path to control the cookie path when use-regex is set to true.

        Use nginx.ingress.kubernetes.io/session-cookie-change-on-failure to control whether the cookie is changed after a request failure.
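
        Putting these together, a sketch of common sticky-cookie annotations (the cookie name and max-age are illustrative):

        nginx.ingress.kubernetes.io/affinity: \"cookie\"\nnginx.ingress.kubernetes.io/session-cookie-name: \"route\" # illustrative cookie name\nnginx.ingress.kubernetes.io/session-cookie-max-age: \"172800\" # 48 hours, illustrative\n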

        "},{"location":"user-guide/nginx-configuration/annotations/#authentication","title":"Authentication","text":"

        It is possible to add authentication by adding additional annotations in the Ingress rule. The source of the authentication is a secret that contains usernames and passwords.

        The annotations are:

        nginx.ingress.kubernetes.io/auth-type: [basic|digest]\n

        Indicates the HTTP Authentication Type: Basic or Digest Access Authentication.

        nginx.ingress.kubernetes.io/auth-secret: secretName\n

        The name of the Secret that contains the usernames and passwords which are granted access to the paths defined in the Ingress rules. This annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.

        nginx.ingress.kubernetes.io/auth-secret-type: [auth-file|auth-map]\n

        The auth-secret can have two forms:

        • auth-file - default, an htpasswd file in the key auth within the secret
        • auth-map - the keys of the secret are the usernames, and the values are the hashed passwords
        nginx.ingress.kubernetes.io/auth-realm: \"realm string\"\n
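
        Combined, a minimal basic-auth sketch might look like this (the secret name basic-auth and realm text are hypothetical; the secret is assumed to contain an htpasswd file under the key auth):

        nginx.ingress.kubernetes.io/auth-type: basic\nnginx.ingress.kubernetes.io/auth-secret: basic-auth # hypothetical secret name\nnginx.ingress.kubernetes.io/auth-realm: \"Authentication Required\"\n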

        Example

        Please check the auth example.

        "},{"location":"user-guide/nginx-configuration/annotations/#custom-nginx-upstream-hashing","title":"Custom NGINX upstream hashing","text":"

        NGINX supports load balancing by client-server mapping based on consistent hashing for a given key. The key can contain text, variables, or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The ketama consistent hashing method is used, which ensures that only a few keys are remapped to different servers on upstream group changes.

        There is a special mode of upstream hashing called subset. In this mode, upstream servers are grouped into subsets, and stickiness works by mapping keys to a subset instead of to individual upstream servers. A specific server is then chosen uniformly at random from the selected sticky subset. This provides a balance between stickiness and load distribution.

        To enable consistent hashing for a backend:

        nginx.ingress.kubernetes.io/upstream-hash-by: the nginx variable, text value or any combination thereof to use for consistent hashing. For example: nginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri\" or nginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri$host\" or nginx.ingress.kubernetes.io/upstream-hash-by: \"${request_uri}-text-value\" to consistently hash upstream requests by the current request URI.

        \"subset\" hashing can be enabled setting nginx.ingress.kubernetes.io/upstream-hash-by-subset: \"true\". This maps requests to subset of nodes instead of a single one. nginx.ingress.kubernetes.io/upstream-hash-by-subset-size determines the size of each subset (default 3).

        Please check the chashsubset example.

        "},{"location":"user-guide/nginx-configuration/annotations/#custom-nginx-load-balancing","title":"Custom NGINX load balancing","text":"

        This is similar to load-balance in ConfigMap, but configures load balancing algorithm per ingress.

        Note that nginx.ingress.kubernetes.io/upstream-hash-by takes precedence over this. If neither this annotation nor nginx.ingress.kubernetes.io/upstream-hash-by is set, the controller falls back to the globally configured load-balancing algorithm.

        "},{"location":"user-guide/nginx-configuration/annotations/#custom-nginx-upstream-vhost","title":"Custom NGINX upstream vhost","text":"

        This configuration setting allows you to control the value for host in the following statement: proxy_set_header Host $host, which forms part of the location block. This is useful if you need to call the upstream server by something other than $host.
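
        For example (the hostname is purely illustrative):

        nginx.ingress.kubernetes.io/upstream-vhost: \"internal.example.com\" # illustrative upstream host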

        "},{"location":"user-guide/nginx-configuration/annotations/#client-certificate-authentication","title":"Client Certificate Authentication","text":"

        It is possible to enable Client Certificate Authentication using additional annotations in the Ingress rule.

        Client Certificate Authentication is applied per host and it is not possible to specify rules that differ for individual paths.

        To enable, add the annotation nginx.ingress.kubernetes.io/auth-tls-secret: namespace/secretName. This secret must contain a key named ca.crt holding the full Certificate Authority chain that is enabled to authenticate against this Ingress.

        You can further customize client certificate authentication and behavior with these annotations:

        • nginx.ingress.kubernetes.io/auth-tls-verify-depth: The validation depth between the provided client certificate and the Certification Authority chain. (default: 1)
        • nginx.ingress.kubernetes.io/auth-tls-verify-client: Enables verification of client certificates. Possible values are:
          • on: Request a client certificate that must be signed by a certificate that is included in the secret key ca.crt of the secret specified by nginx.ingress.kubernetes.io/auth-tls-secret: namespace/secretName. Failed certificate verification will result in a status code 400 (Bad Request) (default)
          • off: Don't request client certificates and don't do client certificate verification.
          • optional: Do optional client certificate validation against the CAs from auth-tls-secret. The request fails with status code 400 (Bad Request) when a certificate is provided that is not signed by the CA. When no or an otherwise invalid certificate is provided, the request does not fail, but instead the verification result is sent to the upstream service.
          • optional_no_ca: Do optional client certificate validation, but do not fail the request when the client certificate is not signed by the CAs from auth-tls-secret. Certificate verification result is sent to the upstream service.
        • nginx.ingress.kubernetes.io/auth-tls-error-page: The URL/page the user should be redirected to in case of a certificate authentication error.
        • nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: Indicates if the received certificates should be passed or not to the upstream server in the header ssl-client-cert. Possible values are \"true\" or \"false\" (default).
        • nginx.ingress.kubernetes.io/auth-tls-match-cn: Adds a sanity check for the CN of the client certificate that is sent over using a string / regex starting with \"CN=\", example: \"CN=myvalidclient\". If the certificate CN sent during mTLS does not match your string / regex it will fail with status code 403. Another way of using this is by adding multiple options in your regex, example: \"CN=(option1|option2|myvalidclient)\". In this case, as long as one of the options in the brackets matches the certificate CN then you will receive a 200 status code.
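
        Combining these, a sketch of a typical mTLS setup (namespace and secret name are hypothetical):

        nginx.ingress.kubernetes.io/auth-tls-secret: \"default/ca-secret\" # hypothetical secret holding ca.crt\nnginx.ingress.kubernetes.io/auth-tls-verify-client: \"on\"\nnginx.ingress.kubernetes.io/auth-tls-verify-depth: \"1\"\n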

        The following headers are sent to the upstream service according to the auth-tls-* annotations:

        • ssl-client-issuer-dn: The issuer information of the client certificate. Example: \"CN=My CA\"
        • ssl-client-subject-dn: The subject information of the client certificate. Example: \"CN=My Client\"
        • ssl-client-verify: The result of the client verification. Possible values: \"SUCCESS\", \"FAILED: \"
        • ssl-client-cert: The full client certificate in PEM format. Will only be sent when nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream is set to \"true\". Example: -----BEGIN%20CERTIFICATE-----%0A...---END%20CERTIFICATE-----%0A
        Example

          Please check the client-certs example.

          Attention

          TLS with Client Authentication is not possible in Cloudflare and might result in unexpected behavior.

          Cloudflare only allows Authenticated Origin Pulls and requires the use of their own certificate: https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/

          Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial: https://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls

          "},{"location":"user-guide/nginx-configuration/annotations/#backend-certificate-authentication","title":"Backend Certificate Authentication","text":"

          It is possible to authenticate to a proxied HTTPS backend with a certificate using additional annotations in the Ingress rule.

          • nginx.ingress.kubernetes.io/proxy-ssl-secret: secretName: Specifies a Secret with the certificate tls.crt, key tls.key in PEM format used for authentication to a proxied HTTPS server. It should also contain trusted CA certificates ca.crt in PEM format used to verify the certificate of the proxied HTTPS server. This annotation expects the Secret name in the form \"namespace/secretName\".
          • nginx.ingress.kubernetes.io/proxy-ssl-verify: Enables or disables verification of the proxied HTTPS server certificate. (default: off)
          • nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: Sets the verification depth in the proxied HTTPS server certificates chain. (default: 1)
          • nginx.ingress.kubernetes.io/proxy-ssl-ciphers: Specifies the enabled ciphers for requests to a proxied HTTPS server. The ciphers are specified in the format understood by the OpenSSL library.
          • nginx.ingress.kubernetes.io/proxy-ssl-name: Allows setting proxy_ssl_name. This allows overriding the server name used to verify the certificate of the proxied HTTPS server. This value is also passed through SNI when a connection is established to the proxied HTTPS server.
          • nginx.ingress.kubernetes.io/proxy-ssl-protocols: Enables the specified protocols for requests to a proxied HTTPS server.
          • nginx.ingress.kubernetes.io/proxy-ssl-server-name: Enables passing of the server name through TLS Server Name Indication extension (SNI, RFC 6066) when establishing a connection with the proxied HTTPS server.
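
          A sketch combining these annotations (namespace, secret name, and server name are hypothetical):

          nginx.ingress.kubernetes.io/proxy-ssl-secret: \"default/proxy-ssl-secret\" # hypothetical secret with tls.crt, tls.key, ca.crt\nnginx.ingress.kubernetes.io/proxy-ssl-verify: \"on\"\nnginx.ingress.kubernetes.io/proxy-ssl-name: \"backend.example.com\" # hypothetical backend server name\n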
          "},{"location":"user-guide/nginx-configuration/annotations/#configuration-snippet","title":"Configuration snippet","text":"

          Using this annotation you can add additional configuration to the NGINX location. For example:

          nginx.ingress.kubernetes.io/configuration-snippet: |\n  more_set_headers \"Request-Id: $req_id\";\n

          Be aware this can be dangerous in multi-tenant clusters, as it can lead to people with otherwise limited permissions being able to retrieve all secrets on the cluster. The recommended mitigation for this threat is to disable this feature, so it may not work for you. See CVE-2021-25742 and the related issue on github for more information.

          "},{"location":"user-guide/nginx-configuration/annotations/#custom-http-errors","title":"Custom HTTP Errors","text":"

          Like the custom-http-errors value in the ConfigMap, this annotation will set NGINX proxy-intercept-errors, but only for the NGINX location associated with this ingress. If a default backend annotation is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend). Different ingresses can specify different sets of error codes. Even if multiple ingress objects share the same hostname, this annotation can be used to intercept different error codes for each ingress (for example, different error codes to be intercepted for different paths on the same hostname, if each path is on a different ingress). If custom-http-errors is also specified globally, the error values specified in this annotation will override the global value for the given ingress' hostname and path.

          Example usage:

          nginx.ingress.kubernetes.io/custom-http-errors: \"404,415\"\n

          "},{"location":"user-guide/nginx-configuration/annotations/#custom-headers","title":"Custom Headers","text":"

          This annotation is of the form nginx.ingress.kubernetes.io/custom-headers: custom-headers-configmap to specify a ConfigMap name that contains custom headers. This annotation uses the more_set_headers NGINX directive.

          Example configmap:

          apiVersion: v1\ndata:\n  Content-Type: application/json\nkind: ConfigMap\nmetadata:\n  name: custom-headers-configmap\n

          Attention

          First define the allowed response headers in global-allowed-response-headers.
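
          With the ConfigMap above, the annotation then references it by name, as in this sketch:

          nginx.ingress.kubernetes.io/custom-headers: custom-headers-configmap\n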

          "},{"location":"user-guide/nginx-configuration/annotations/#default-backend","title":"Default Backend","text":"

          This annotation is of the form nginx.ingress.kubernetes.io/default-backend: <svc name> to specify a custom default backend. This <svc name> is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. If the service has multiple ports, the first one will receive the backend traffic.

          This service will be used to handle the response when the configured service in the Ingress rule does not have any active endpoints. It will also be used to handle the error responses if both this annotation and the custom-http-errors annotation are set.
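
          For example (the service name error-pages is hypothetical):

          nginx.ingress.kubernetes.io/default-backend: error-pages # hypothetical service in the same namespace\nnginx.ingress.kubernetes.io/custom-http-errors: \"404,503\"\n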

          "},{"location":"user-guide/nginx-configuration/annotations/#enable-cors","title":"Enable CORS","text":"

          To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation nginx.ingress.kubernetes.io/enable-cors: \"true\". This will add a section in the server location enabling this functionality.

          CORS can be controlled with the following annotations:

          • nginx.ingress.kubernetes.io/cors-allow-methods: Controls which methods are accepted.

            This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case).

            • Default: GET, PUT, POST, DELETE, PATCH, OPTIONS
            • Example: nginx.ingress.kubernetes.io/cors-allow-methods: \"PUT, GET, POST, OPTIONS\"
          • nginx.ingress.kubernetes.io/cors-allow-headers: Controls which headers are accepted.

            This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -.

            • Default: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization
            • Example: nginx.ingress.kubernetes.io/cors-allow-headers: \"X-Forwarded-For, X-app123-XPTO\"
          • nginx.ingress.kubernetes.io/cors-expose-headers: Controls which headers are exposed in the response.

            This is a multi-valued field, separated by ',' and accepts letters, numbers, _, - and *.

            • Default: empty
            • Example: nginx.ingress.kubernetes.io/cors-expose-headers: \"*, X-CustomResponseHeader\"
          • nginx.ingress.kubernetes.io/cors-allow-origin: Controls which origins are accepted for CORS.

            This is a multi-valued field, separated by ','. It must follow this format: protocol://origin-site.com or protocol://origin-site.com:port

            • Default: *
            • Example: nginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com:4443, http://origin-site.com, myprotocol://example.org:1199\"

            It also supports single level wildcard subdomains and follows this format: protocol://*.foo.bar, protocol://*.bar.foo:8080 or protocol://*.abc.bar.foo:9000 - Example: nginx.ingress.kubernetes.io/cors-allow-origin: \"https://*.origin-site.com:4443, http://*.origin-site.com, myprotocol://example.org:1199\"

          • nginx.ingress.kubernetes.io/cors-allow-credentials: Controls if credentials can be passed during CORS operations.

            • Default: true
            • Example: nginx.ingress.kubernetes.io/cors-allow-credentials: \"false\"
          • nginx.ingress.kubernetes.io/cors-max-age: Controls how long preflight requests can be cached.

            • Default: 1728000
            • Example: nginx.ingress.kubernetes.io/cors-max-age: 600
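
          Taken together, a sketch enabling CORS for a single origin (the origin is illustrative):

          nginx.ingress.kubernetes.io/enable-cors: \"true\"\nnginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com\" # illustrative origin\nnginx.ingress.kubernetes.io/cors-allow-methods: \"GET, POST, OPTIONS\"\nnginx.ingress.kubernetes.io/cors-max-age: \"600\"\n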

          Note

          For more information please see https://enable-cors.org

          "},{"location":"user-guide/nginx-configuration/annotations/#http2-push-preload","title":"HTTP2 Push Preload.","text":"

          Enables automatic conversion of preload links specified in the \u201cLink\u201d response header fields into push requests.

          Example

          • nginx.ingress.kubernetes.io/http2-push-preload: \"true\"
          "},{"location":"user-guide/nginx-configuration/annotations/#server-alias","title":"Server Alias","text":"

          Allows the definition of one or more aliases in the server definition of the NGINX configuration using the annotation nginx.ingress.kubernetes.io/server-alias: \"<alias 1>,<alias 2>\". This will create a server with the same configuration, but adding new values to the server_name directive.

          Note

          A server-alias name cannot conflict with the hostname of an existing server. If it does, the server-alias annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take precedence over the alias configuration.

          For more information please see the server_name documentation.

          "},{"location":"user-guide/nginx-configuration/annotations/#server-snippet","title":"Server snippet","text":"

          Using the annotation nginx.ingress.kubernetes.io/server-snippet it is possible to add custom configuration in the server configuration block.

          apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/server-snippet: |\n        set $agentflag 0;\n\n        if ($http_user_agent ~* \"(Mobile)\" ){\n          set $agentflag 1;\n        }\n\n        if ( $agentflag = 1 ) {\n          return 301 https://m.example.com;\n        }\n

          Attention

          This annotation can be used only once per host.

          "},{"location":"user-guide/nginx-configuration/annotations/#client-body-buffer-size","title":"Client Body Buffer Size","text":"

          Sets buffer size for reading client request body per location. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is applied to each location provided in the ingress rule.

          Note

          The annotation value must be given in a format understood by Nginx.

          Example

          • nginx.ingress.kubernetes.io/client-body-buffer-size: \"1000\" # 1000 bytes
          • nginx.ingress.kubernetes.io/client-body-buffer-size: 1k # 1 kilobyte
          • nginx.ingress.kubernetes.io/client-body-buffer-size: 1K # 1 kilobyte
          • nginx.ingress.kubernetes.io/client-body-buffer-size: 1m # 1 megabyte
          • nginx.ingress.kubernetes.io/client-body-buffer-size: 1M # 1 megabyte

          For more information please see https://nginx.org

          "},{"location":"user-guide/nginx-configuration/annotations/#external-authentication","title":"External Authentication","text":"

          To use an existing service that provides authentication, the Ingress rule can be annotated with nginx.ingress.kubernetes.io/auth-url to indicate the URL where the HTTP request should be sent.

          nginx.ingress.kubernetes.io/auth-url: \"URL to the authentication service\"\n

          Additionally it is possible to set:

          • nginx.ingress.kubernetes.io/auth-keepalive: <Connections> to specify the maximum number of keepalive connections to auth-url. Only takes effect when no variables are used in the host part of the URL. Defaults to 0 (keepalive disabled).

          Note: this does not work with an HTTP/2 listener because of a limitation in Lua subrequests. The UseHTTP2 configuration should be disabled!

          • nginx.ingress.kubernetes.io/auth-keepalive-share-vars: Whether to share Nginx variables between the current request and the auth request. An example use case is request tracking: when set to \"true\", the X-Request-ID HTTP header will be the same for the backend and the auth request. Defaults to \"false\".
          • nginx.ingress.kubernetes.io/auth-keepalive-requests: <Requests> to specify the maximum number of requests that can be served through one keepalive connection. Defaults to 1000 and only applied if auth-keepalive is set to higher than 0.
          • nginx.ingress.kubernetes.io/auth-keepalive-timeout: <Timeout> to specify a duration in seconds which an idle keepalive connection to an upstream server will stay open. Defaults to 60 and only applied if auth-keepalive is set to higher than 0.
          • nginx.ingress.kubernetes.io/auth-method: <Method> to specify the HTTP method to use.
          • nginx.ingress.kubernetes.io/auth-signin: <SignIn_URL> to specify the location of the error page.
          • nginx.ingress.kubernetes.io/auth-signin-redirect-param: <SignIn_URL> to specify the URL parameter in the error page which should contain the original URL for a failed signin request.
          • nginx.ingress.kubernetes.io/auth-response-headers: <Response_Header_1, ..., Response_Header_n> to specify headers to pass to backend once authentication request completes.
          • nginx.ingress.kubernetes.io/auth-proxy-set-headers: <ConfigMap> the name of a ConfigMap that specifies headers to pass to the authentication service
          • nginx.ingress.kubernetes.io/auth-request-redirect: <Request_Redirect_URL> to specify the X-Auth-Request-Redirect header value.
          • nginx.ingress.kubernetes.io/auth-cache-key: <Cache_Key> this enables caching for auth requests. Specify a lookup key for auth responses, e.g. $remote_user$http_authorization. Each server and location has its own keyspace. Hence, a cached response is only valid on a per-server and per-location basis.
          • nginx.ingress.kubernetes.io/auth-cache-duration: <Cache_duration> to specify a caching time for auth responses based on their response codes, e.g. 200 202 30m. See proxy_cache_valid for details. You may specify multiple, comma-separated values: 200 202 10m, 401 5m. Defaults to 200 202 401 5m.
          • nginx.ingress.kubernetes.io/auth-always-set-cookie: <Boolean_Flag> to set a cookie returned by auth request. By default, the cookie will be set only if an upstream reports with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308.
          • nginx.ingress.kubernetes.io/auth-snippet: <Auth_Snippet> to specify a custom snippet to use with external authentication, e.g.
          nginx.ingress.kubernetes.io/auth-url: http://foo.com/external-auth\nnginx.ingress.kubernetes.io/auth-snippet: |\n    proxy_set_header Foo-Header 42;\n

          Note: nginx.ingress.kubernetes.io/auth-snippet is an optional annotation. However, it may only be used in conjunction with nginx.ingress.kubernetes.io/auth-url and will be ignored if nginx.ingress.kubernetes.io/auth-url is not set.
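
          As a sketch of a common pattern (the auth endpoint URLs and header names are hypothetical), auth-url and auth-signin are often combined so unauthenticated users are redirected to a sign-in page:

          nginx.ingress.kubernetes.io/auth-url: \"https://auth.example.com/oauth2/auth\" # hypothetical auth endpoint\nnginx.ingress.kubernetes.io/auth-signin: \"https://auth.example.com/oauth2/start?rd=$escaped_request_uri\" # hypothetical sign-in page\nnginx.ingress.kubernetes.io/auth-response-headers: \"X-Auth-Request-User, X-Auth-Request-Email\" # hypothetical headers\n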

          Example

          Please check the external-auth example.

          "},{"location":"user-guide/nginx-configuration/annotations/#global-external-authentication","title":"Global External Authentication","text":"

          By default, the controller redirects all requests to an existing service that provides authentication if global-auth-url is set in the NGINX ConfigMap. To disable this behavior globally, you can set enable-global-auth: \"false\" in the NGINX ConfigMap. The annotation nginx.ingress.kubernetes.io/enable-global-auth indicates whether the GlobalExternalAuth configuration should be applied to this Ingress rule; the default value is \"true\".

          Note

          For more information please see global-auth-url.

          "},{"location":"user-guide/nginx-configuration/annotations/#rate-limiting","title":"Rate Limiting","text":"

          These annotations define limits on connections and transmission rates. These can be used to mitigate DDoS Attacks.

          • nginx.ingress.kubernetes.io/limit-connections: number of concurrent connections allowed from a single IP address. A 503 error is returned when exceeding this limit.
          • nginx.ingress.kubernetes.io/limit-rps: number of requests accepted from a given IP each second. The burst limit is set to this limit multiplied by the burst multiplier; the default multiplier is 5. When clients exceed this limit, limit-req-status-code (default: 503) is returned.
          • nginx.ingress.kubernetes.io/limit-rpm: number of requests accepted from a given IP each minute. The burst limit is set to this limit multiplied by the burst multiplier; the default multiplier is 5. When clients exceed this limit, limit-req-status-code (default: 503) is returned.
          • nginx.ingress.kubernetes.io/limit-burst-multiplier: multiplier of the limit rate for burst size. The default burst multiplier is 5; this annotation overrides the default multiplier. When clients exceed this limit, limit-req-status-code (default: 503) is returned.
          • nginx.ingress.kubernetes.io/limit-rate-after: initial number of kilobytes after which the further transmission of a response to a given connection will be rate limited. This feature must be used with proxy-buffering enabled.
          • nginx.ingress.kubernetes.io/limit-rate: number of kilobytes per second allowed to send to a given connection. The zero value disables rate limiting. This feature must be used with proxy-buffering enabled.
          • nginx.ingress.kubernetes.io/limit-whitelist: client IP source ranges to be excluded from rate-limiting. The value is a comma separated list of CIDRs.

          If you specify multiple annotations in a single Ingress rule, limits are applied in the order limit-connections, limit-rpm, limit-rps.

          To configure settings globally for all Ingress rules, the limit-rate-after and limit-rate values may be set in the NGINX ConfigMap. The value set in an Ingress annotation will override the global setting.

          The client IP address will be set based on the use of PROXY protocol or from the X-Forwarded-For header value when use-forwarded-headers is enabled.
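
          For instance, a sketch limiting each client IP (the numbers are illustrative):

          nginx.ingress.kubernetes.io/limit-rps: \"5\"\nnginx.ingress.kubernetes.io/limit-burst-multiplier: \"3\"\nnginx.ingress.kubernetes.io/limit-connections: \"10\"\n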

          "},{"location":"user-guide/nginx-configuration/annotations/#permanent-redirect","title":"Permanent Redirect","text":"

          This annotation allows you to return a permanent redirect (Return Code 301) instead of sending data to the upstream. For example, nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com would redirect everything to Google.

          "},{"location":"user-guide/nginx-configuration/annotations/#permanent-redirect-code","title":"Permanent Redirect Code","text":"

          This annotation allows you to modify the status code used for permanent redirects. For example nginx.ingress.kubernetes.io/permanent-redirect-code: '308' would return your permanent-redirect with a 308.

          "},{"location":"user-guide/nginx-configuration/annotations/#temporal-redirect","title":"Temporal Redirect","text":"

          This annotation allows you to return a temporal redirect (Return Code 302) instead of sending data to the upstream. For example, nginx.ingress.kubernetes.io/temporal-redirect: https://www.google.com would redirect everything to Google with a Return Code of 302 (Moved Temporarily).

          "},{"location":"user-guide/nginx-configuration/annotations/#temporal-redirect-code","title":"Temporal Redirect Code","text":"

          This annotation allows you to modify the status code used for temporal redirects. For example nginx.ingress.kubernetes.io/temporal-redirect-code: '307' would return your temporal-redirect with a 307.

          "},{"location":"user-guide/nginx-configuration/annotations/#ssl-passthrough","title":"SSL Passthrough","text":"

          The annotation nginx.ingress.kubernetes.io/ssl-passthrough instructs the controller to send TLS connections directly to the backend instead of letting NGINX decrypt the communication. See also TLS/HTTPS in the User guide.

          Note

          SSL Passthrough is disabled by default and requires starting the controller with the --enable-ssl-passthrough flag.

          Attention

          Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on the layer 7 (HTTP), using SSL Passthrough invalidates all the other annotations set on an Ingress object.

          "},{"location":"user-guide/nginx-configuration/annotations/#service-upstream","title":"Service Upstream","text":"

          By default the Ingress-Nginx Controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration.

          The nginx.ingress.kubernetes.io/service-upstream annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port.

          This can be desirable for things like zero-downtime deployments. See issue #257.

          "},{"location":"user-guide/nginx-configuration/annotations/#known-issues","title":"Known Issues","text":"

          If the service-upstream annotation is specified, the following things should be taken into consideration:

          • Sticky Sessions will not work as only round-robin load balancing is supported.
          • The proxy_next_upstream directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.
          "},{"location":"user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect","title":"Server-side HTTPS enforcement through redirect","text":"

          By default the controller redirects (308) to HTTPS if TLS is enabled for that ingress. If you want to disable this behavior globally, you can use ssl-redirect: \"false\" in the NGINX ConfigMap.

          To configure this feature for specific ingress resources, you can use the nginx.ingress.kubernetes.io/ssl-redirect: \"false\" annotation in the particular resource.

          When using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: \"true\" annotation in the particular resource.

          To preserve the trailing slash in the URI with ssl-redirect, set nginx.ingress.kubernetes.io/preserve-trailing-slash: \"true\" annotation for that particular resource.

          "},{"location":"user-guide/nginx-configuration/annotations/#redirect-fromto-www","title":"Redirect from/to www","text":"

          In some scenarios, it is required to redirect from www.domain.com to domain.com, or vice versa; which way the redirect is performed depends on the configured host value in the Ingress object.

          For example, if .spec.rules.host is configured with a value like www.example.com, then this annotation will redirect from example.com to www.example.com. If .spec.rules.host is configured with a value like example.com, i.e. without a www, then this annotation will redirect from www.example.com to example.com instead.

          To enable this feature use the annotation nginx.ingress.kubernetes.io/from-to-www-redirect: \"true\"

          Attention

          If at some point a new Ingress is created with a host equal to one of the options (like domain.com) the annotation will be omitted.

          Attention

          For HTTPS to HTTPS redirects, it is mandatory that the SSL certificate defined in the Secret, located in the TLS section of the Ingress, contains both FQDNs in the common name of the certificate.

          "},{"location":"user-guide/nginx-configuration/annotations/#denylist-source-range","title":"Denylist source range","text":"

          You can specify blocked client IP source ranges through the nginx.ingress.kubernetes.io/denylist-source-range annotation. The value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.

          To configure this setting globally for all Ingress rules, the denylist-source-range value may be set in the NGINX ConfigMap.

          Note

          Adding an annotation to an Ingress rule overrides any global restriction.

          "},{"location":"user-guide/nginx-configuration/annotations/#whitelist-source-range","title":"Whitelist source range","text":"

          You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.

          To configure this setting globally for all Ingress rules, the whitelist-source-range value may be set in the NGINX ConfigMap.

          Note

          Adding an annotation to an Ingress rule overrides any global restriction.

          "},{"location":"user-guide/nginx-configuration/annotations/#custom-timeouts","title":"Custom timeouts","text":"

          Using the configuration ConfigMap it is possible to set the default global timeout for connections to the upstream servers. In some scenarios different values are required. To allow this customization, the following annotations are provided:

          • nginx.ingress.kubernetes.io/proxy-connect-timeout
          • nginx.ingress.kubernetes.io/proxy-send-timeout
          • nginx.ingress.kubernetes.io/proxy-read-timeout
          • nginx.ingress.kubernetes.io/proxy-next-upstream
          • nginx.ingress.kubernetes.io/proxy-next-upstream-timeout
          • nginx.ingress.kubernetes.io/proxy-next-upstream-tries
          • nginx.ingress.kubernetes.io/proxy-request-buffering

          If you indicate Backend Protocol as GRPC or GRPCS, the following grpc values will be set and inherited from proxy timeouts:

          • grpc_connect_timeout=5s, from nginx.ingress.kubernetes.io/proxy-connect-timeout
          • grpc_send_timeout=60s, from nginx.ingress.kubernetes.io/proxy-send-timeout
          • grpc_read_timeout=60s, from nginx.ingress.kubernetes.io/proxy-read-timeout

          Note: All timeout values are unitless and in seconds, e.g. nginx.ingress.kubernetes.io/proxy-read-timeout: \"120\" sets a valid 120-second proxy read timeout.
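
          For example, a sketch with illustrative values:

          nginx.ingress.kubernetes.io/proxy-connect-timeout: \"10\"\nnginx.ingress.kubernetes.io/proxy-send-timeout: \"60\"\nnginx.ingress.kubernetes.io/proxy-read-timeout: \"120\"\n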

          "},{"location":"user-guide/nginx-configuration/annotations/#proxy-redirect","title":"Proxy redirect","text":"

          The annotations nginx.ingress.kubernetes.io/proxy-redirect-from and nginx.ingress.kubernetes.io/proxy-redirect-to will set the first and second parameters of NGINX's proxy_redirect directive respectively. It is possible to set the text that should be changed in the Location and Refresh header fields of a proxied server response.

          Setting \"off\" or \"default\" in the annotation nginx.ingress.kubernetes.io/proxy-redirect-from disables nginx.ingress.kubernetes.io/proxy-redirect-to, otherwise, both annotations must be used in unison. Note that each annotation must be a string without spaces.

          By default the value of each annotation is \"off\".
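
          For example (both URLs are hypothetical; note that the values contain no spaces):

          nginx.ingress.kubernetes.io/proxy-redirect-from: \"http://backend.example.com/\"\nnginx.ingress.kubernetes.io/proxy-redirect-to: \"https://app.example.com/\"\n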

          "},{"location":"user-guide/nginx-configuration/annotations/#custom-max-body-size","title":"Custom max body size","text":"

          For NGINX, a 413 error will be returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter client_max_body_size.

          To configure this setting globally for all Ingress rules, the proxy-body-size value may be set in the NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:

          nginx.ingress.kubernetes.io/proxy-body-size: 8m\n
          "},{"location":"user-guide/nginx-configuration/annotations/#proxy-cookie-domain","title":"Proxy cookie domain","text":"

          Sets a text that should be changed in the domain attribute of the \"Set-Cookie\" header fields of a proxied server response.

          To configure this setting globally for all Ingress rules, the proxy-cookie-domain value may be set in the NGINX ConfigMap.

          "},{"location":"user-guide/nginx-configuration/annotations/#proxy-cookie-path","title":"Proxy cookie path","text":"

          Sets a text that should be changed in the path attribute of the \"Set-Cookie\" header fields of a proxied server response.

          To configure this setting globally for all Ingress rules, the proxy-cookie-path value may be set in the NGINX ConfigMap.

          "},{"location":"user-guide/nginx-configuration/annotations/#proxy-buffering","title":"Proxy buffering","text":"

          Enable or disable proxy buffering proxy_buffering. By default proxy buffering is disabled in the NGINX config.

          To configure this setting globally for all Ingress rules, the proxy-buffering value may be set in the NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:

          nginx.ingress.kubernetes.io/proxy-buffering: \"on\"\n
          "},{"location":"user-guide/nginx-configuration/annotations/#proxy-buffers-number","title":"Proxy buffers Number","text":"

          Sets the number of the buffers in proxy_buffers used for reading the first part of the response received from the proxied server. By default the proxy buffers number is set to 4.

          To configure this setting globally, set proxy-buffers-number in NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:

          nginx.ingress.kubernetes.io/proxy-buffers-number: \"4\"\n

          "},{"location":"user-guide/nginx-configuration/annotations/#proxy-buffer-size","title":"Proxy buffer size","text":"

          Sets the size of the buffer proxy_buffer_size used for reading the first part of the response received from the proxied server. By default the proxy buffer size is set to \"4k\".

          To configure this setting globally, set proxy-buffer-size in NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:

          nginx.ingress.kubernetes.io/proxy-buffer-size: \"8k\"\n

          "},{"location":"user-guide/nginx-configuration/annotations/#proxy-max-temp-file-size","title":"Proxy max temp file size","text":"

          When buffering of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the proxy_buffer_size and proxy_buffers directives, a part of the response can be saved to a temporary file. This directive sets the maximum size of the temporary file setting the proxy_max_temp_file_size. The size of data written to the temporary file at a time is set by the proxy_temp_file_write_size directive.

          The zero value disables buffering of responses to temporary files.

          To use custom values in an Ingress rule, define this annotation:

          nginx.ingress.kubernetes.io/proxy-max-temp-file-size: \"1024m\"\n

          "},{"location":"user-guide/nginx-configuration/annotations/#proxy-http-version","title":"Proxy HTTP version","text":"

          Using this annotation sets the proxy_http_version that the Nginx reverse proxy will use to communicate with the backend. By default this is set to \"1.1\".

          nginx.ingress.kubernetes.io/proxy-http-version: \"1.0\"\n
          "},{"location":"user-guide/nginx-configuration/annotations/#ssl-ciphers","title":"SSL ciphers","text":"

          Specifies the enabled ciphers.

          Using this annotation will set the ssl_ciphers directive at the server level. This configuration is active for all the paths in the host.

          nginx.ingress.kubernetes.io/ssl-ciphers: \"ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP\"\n

          The following annotation will set the ssl_prefer_server_ciphers directive at the server level. This configuration specifies that server ciphers should be preferred over client ciphers when using the SSLv3 and TLS protocols.

          nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers: \"true\"\n
          "},{"location":"user-guide/nginx-configuration/annotations/#connection-proxy-header","title":"Connection proxy header","text":"

          Using this annotation will override the default connection header set by NGINX. To use custom values in an Ingress rule, define the annotation:

          nginx.ingress.kubernetes.io/connection-proxy-header: \"keep-alive\"\n
          "},{"location":"user-guide/nginx-configuration/annotations/#enable-access-log","title":"Enable Access Log","text":"

Access logs are enabled by default, but in some scenarios access logs might need to be disabled for a given ingress. To do this, use the annotation:

          nginx.ingress.kubernetes.io/enable-access-log: \"false\"\n
          "},{"location":"user-guide/nginx-configuration/annotations/#enable-rewrite-log","title":"Enable Rewrite Log","text":"

Rewrite logs are not enabled by default. In some scenarios it may be necessary to enable NGINX rewrite logs. Note that rewrite logs are sent to the error_log file at the notice level. To enable this feature use the annotation:

          nginx.ingress.kubernetes.io/enable-rewrite-log: \"true\"\n
          "},{"location":"user-guide/nginx-configuration/annotations/#enable-opentelemetry","title":"Enable Opentelemetry","text":"

OpenTelemetry can be enabled or disabled globally through the ConfigMap, but this will sometimes need to be overridden to enable or disable it for a specific ingress (e.g. to turn off telemetry for external health check endpoints).

          nginx.ingress.kubernetes.io/enable-opentelemetry: \"true\"\n
          "},{"location":"user-guide/nginx-configuration/annotations/#opentelemetry-trust-incoming-span","title":"Opentelemetry Trust Incoming Span","text":"

The option to trust incoming trace spans can be enabled or disabled globally through the ConfigMap, but this will sometimes need to be overridden to enable or disable it for a specific ingress (e.g. to enable it only on a private endpoint).

          nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-spans: \"true\"\n
          "},{"location":"user-guide/nginx-configuration/annotations/#x-forwarded-prefix-header","title":"X-Forwarded-Prefix Header","text":"

          To add the non-standard X-Forwarded-Prefix header to the upstream request with a string value, the following annotation can be used:

          nginx.ingress.kubernetes.io/x-forwarded-prefix: \"/path\"\n
          "},{"location":"user-guide/nginx-configuration/annotations/#modsecurity","title":"ModSecurity","text":"

ModSecurity is an open-source web application firewall. It can be enabled for a particular set of ingress locations. The ModSecurity module must first be enabled by enabling ModSecurity in the ConfigMap. Note that this will enable ModSecurity for all paths; each path must then be disabled manually.

          It can be enabled using the following annotation:

          nginx.ingress.kubernetes.io/enable-modsecurity: \"true\"\n
          ModSecurity will run in \"Detection-Only\" mode using the recommended configuration.

          You can enable the OWASP Core Rule Set by setting the following annotation:

          nginx.ingress.kubernetes.io/enable-owasp-core-rules: \"true\"\n

You can pass transaction IDs from nginx by setting the following:

          nginx.ingress.kubernetes.io/modsecurity-transaction-id: \"$request_id\"\n

You can also add your own set of ModSecurity rules via a snippet:

          nginx.ingress.kubernetes.io/modsecurity-snippet: |\nSecRuleEngine On\nSecDebugLog /tmp/modsec_debug.log\n

Note: If you use both the enable-owasp-core-rules and modsecurity-snippet annotations together, only the modsecurity-snippet will take effect. If you wish to include the OWASP Core Rule Set or the recommended configuration, simply use the Include statement:

          nginx 0.24.1 and below

          nginx.ingress.kubernetes.io/modsecurity-snippet: |\nInclude /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf\nInclude /etc/nginx/modsecurity/modsecurity.conf\n
          nginx 0.25.0 and above
          nginx.ingress.kubernetes.io/modsecurity-snippet: |\nInclude /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf\n

          "},{"location":"user-guide/nginx-configuration/annotations/#backend-protocol","title":"Backend Protocol","text":"

Using the backend-protocol annotation, it is possible to indicate how NGINX should communicate with the backend service. (This replaces secure-backends in older versions.) Valid values: HTTP, HTTPS, AUTO_HTTP, GRPC, GRPCS and FCGI.

          By default NGINX uses HTTP.

          Example:

          nginx.ingress.kubernetes.io/backend-protocol: \"HTTPS\"\n
          "},{"location":"user-guide/nginx-configuration/annotations/#use-regex","title":"Use Regex","text":"

          Attention

When using this annotation with the NGINX annotation nginx.ingress.kubernetes.io/affinity of type cookie, nginx.ingress.kubernetes.io/session-cookie-path must also be set; session cookie paths do not support regex.

          Using the nginx.ingress.kubernetes.io/use-regex annotation will indicate whether or not the paths defined on an Ingress use regular expressions. The default value is false.

          The following will indicate that regular expression paths are being used:

          nginx.ingress.kubernetes.io/use-regex: \"true\"\n

          The following will indicate that regular expression paths are not being used:

          nginx.ingress.kubernetes.io/use-regex: \"false\"\n

          When this annotation is set to true, the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.

          Additionally, if the rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.

          Please read about ingress path matching before using this modifier.

          "},{"location":"user-guide/nginx-configuration/annotations/#satisfy","title":"Satisfy","text":"

          By default, a request would need to satisfy all authentication requirements in order to be allowed. By using this annotation, requests that satisfy either any or all authentication requirements are allowed, based on the configuration value.

          nginx.ingress.kubernetes.io/satisfy: \"any\"\n
          "},{"location":"user-guide/nginx-configuration/annotations/#mirror","title":"Mirror","text":"

Enables a request to be mirrored to a mirror backend. Responses from mirror backends are ignored. This feature is useful for seeing how requests behave in \"test\" backends.

          The mirror backend can be set by applying:

          nginx.ingress.kubernetes.io/mirror-target: https://test.env.com$request_uri\n

By default the request body is sent to the mirror backend, but this can be turned off by applying:

          nginx.ingress.kubernetes.io/mirror-request-body: \"off\"\n

By default, the Host header for mirrored requests will be set to the host part of the URI in the \"mirror-target\" annotation. You can override it with the \"mirror-host\" annotation:

          nginx.ingress.kubernetes.io/mirror-target: https://1.2.3.4$request_uri\nnginx.ingress.kubernetes.io/mirror-host: \"test.env.com\"\n

          Note: The mirror directive will be applied to all paths within the ingress resource.

The request sent to the mirror is linked to the original request. If you have a slow mirror backend, the original request will be throttled.

For more information on the mirror module, see ngx_http_mirror_module.

          "},{"location":"user-guide/nginx-configuration/annotations/#stream-snippet","title":"Stream snippet","text":"

          Using the annotation nginx.ingress.kubernetes.io/stream-snippet it is possible to add custom stream configuration.

          apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  annotations:\n    nginx.ingress.kubernetes.io/stream-snippet: |\n      server {\n        listen 8000;\n        proxy_pass 127.0.0.1:80;\n      }\n
          "},{"location":"user-guide/nginx-configuration/configmap/","title":"ConfigMaps","text":"

          ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.

The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configuration for the nginx-controller's system components.

In order to overwrite nginx-controller configuration values as seen in config.go, you can add key-value pairs to the data section of the config-map. For example:

          data:\n  map-hash-bucket-size: \"128\"\n  ssl-protocols: SSLv2\n

          Important

The keys and values in a ConfigMap can only be strings. This means that if we want a boolean value, we need to quote it, like \"true\" or \"false\". The same applies to numbers, like \"100\".

          \"Slice\" types (defined below as []string or []int) can be provided as a comma-delimited string.

          "},{"location":"user-guide/nginx-configuration/configmap/#configuration-options","title":"Configuration options","text":"

          The following table shows a configuration option's name, type, and the default value:

| name | type | default | notes |
|------|------|---------|-------|
| add-headers | string | \"\" | |
| allow-backend-server-header | bool | \"false\" | |
| allow-cross-namespace-resources | bool | \"false\" | |
| allow-snippet-annotations | bool | \"false\" | |
| annotations-risk-level | string | High | |
| annotation-value-word-blocklist | string array | \"\" | |
| hide-headers | string array | empty | |
| access-log-params | string | \"\" | |
| access-log-path | string | \"/var/log/nginx/access.log\" | |
| http-access-log-path | string | \"\" | |
| stream-access-log-path | string | \"\" | |
| enable-access-log-for-default-backend | bool | \"false\" | |
| error-log-path | string | \"/var/log/nginx/error.log\" | |
| enable-modsecurity | bool | \"false\" | |
| modsecurity-snippet | string | \"\" | |
| enable-owasp-modsecurity-crs | bool | \"false\" | |
| client-header-buffer-size | string | \"1k\" | |
| client-header-timeout | int | 60 | |
| client-body-buffer-size | string | \"8k\" | |
| client-body-timeout | int | 60 | |
| disable-access-log | bool | \"false\" | |
| disable-ipv6 | bool | \"false\" | |
| disable-ipv6-dns | bool | \"false\" | |
| enable-underscores-in-headers | bool | \"false\" | |
| enable-ocsp | bool | \"false\" | |
| ignore-invalid-headers | bool | \"true\" | |
| retry-non-idempotent | bool | \"false\" | |
| error-log-level | string | \"notice\" | |
| http2-max-field-size | string | \"\" | DEPRECATED in favour of large_client_header_buffers |
| http2-max-header-size | string | \"\" | DEPRECATED in favour of large_client_header_buffers |
| http2-max-requests | int | 0 | DEPRECATED in favour of keepalive_requests |
| http2-max-concurrent-streams | int | 128 | |
| hsts | bool | \"true\" | |
| hsts-include-subdomains | bool | \"true\" | |
| hsts-max-age | string | \"31536000\" | |
| hsts-preload | bool | \"false\" | |
| keep-alive | int | 75 | |
| keep-alive-requests | int | 1000 | |
| large-client-header-buffers | string | \"4 8k\" | |
| log-format-escape-none | bool | \"false\" | |
| log-format-escape-json | bool | \"false\" | |
| log-format-upstream | string | $remote_addr - $remote_user [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id | |
| log-format-stream | string | [$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time | |
| enable-multi-accept | bool | \"true\" | |
| max-worker-connections | int | 16384 | |
| max-worker-open-files | int | 0 | |
| map-hash-bucket-size | int | 64 | |
| nginx-status-ipv4-whitelist | []string | \"127.0.0.1\" | |
| nginx-status-ipv6-whitelist | []string | \"::1\" | |
| proxy-real-ip-cidr | []string | \"0.0.0.0/0\" | |
| proxy-set-headers | string | \"\" | |
| server-name-hash-max-size | int | 1024 | |
| server-name-hash-bucket-size | int | <size of the processor's cache line> | |
| proxy-headers-hash-max-size | int | 512 | |
| proxy-headers-hash-bucket-size | int | 64 | |
| reuse-port | bool | \"true\" | |
| server-tokens | bool | \"false\" | |
| ssl-ciphers | string | \"ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384\" | |
| ssl-ecdh-curve | string | \"auto\" | |
| ssl-dh-param | string | \"\" | |
| ssl-protocols | string | \"TLSv1.2 TLSv1.3\" | |
| ssl-session-cache | bool | \"true\" | |
| ssl-session-cache-size | string | \"10m\" | |
| ssl-session-tickets | bool | \"false\" | |
| ssl-session-ticket-key | string | <Randomly Generated> | |
| ssl-session-timeout | string | \"10m\" | |
| ssl-buffer-size | string | \"4k\" | |
| use-proxy-protocol | bool | \"false\" | |
| proxy-protocol-header-timeout | string | \"5s\" | |
| enable-aio-write | bool | \"true\" | |
| use-gzip | bool | \"false\" | |
| use-geoip | bool | \"true\" | |
| use-geoip2 | bool | \"false\" | |
| geoip2-autoreload-in-minutes | int | \"0\" | |
| enable-brotli | bool | \"false\" | |
| brotli-level | int | 4 | |
| brotli-min-length | int | 20 | |
| brotli-types | string | \"application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component\" | |
| use-http2 | bool | \"true\" | |
| gzip-disable | string | \"\" | |
| gzip-level | int | 1 | |
| gzip-min-length | int | 256 | |
| gzip-types | string | \"application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component\" | |
| worker-processes | string | <Number of CPUs> | |
| worker-cpu-affinity | string | \"\" | |
| worker-shutdown-timeout | string | \"240s\" | |
| enable-serial-reloads | bool | \"false\" | |
| load-balance | string | \"round_robin\" | |
| variables-hash-bucket-size | int | 128 | |
| variables-hash-max-size | int | 2048 | |
| upstream-keepalive-connections | int | 320 | |
| upstream-keepalive-time | string | \"1h\" | |
| upstream-keepalive-timeout | int | 60 | |
| upstream-keepalive-requests | int | 10000 | |
| limit-conn-zone-variable | string | \"$binary_remote_addr\" | |
| proxy-stream-timeout | string | \"600s\" | |
| proxy-stream-next-upstream | bool | \"true\" | |
| proxy-stream-next-upstream-timeout | string | \"600s\" | |
| proxy-stream-next-upstream-tries | int | 3 | |
| proxy-stream-responses | int | 1 | |
| bind-address | []string | \"\" | |
| use-forwarded-headers | bool | \"false\" | |
| enable-real-ip | bool | \"false\" | |
| forwarded-for-header | string | \"X-Forwarded-For\" | |
| compute-full-forwarded-for | bool | \"false\" | |
| proxy-add-original-uri-header | bool | \"false\" | |
| generate-request-id | bool | \"true\" | |
| jaeger-collector-host | string | \"\" | |
| jaeger-collector-port | int | 6831 | |
| jaeger-endpoint | string | \"\" | |
| jaeger-service-name | string | \"nginx\" | |
| jaeger-propagation-format | string | \"jaeger\" | |
| jaeger-sampler-type | string | \"const\" | |
| jaeger-sampler-param | string | \"1\" | |
| jaeger-sampler-host | string | \"http://127.0.0.1\" | |
| jaeger-sampler-port | int | 5778 | |
| jaeger-trace-context-header-name | string | uber-trace-id | |
| jaeger-debug-header | string | uber-debug-id | |
| jaeger-baggage-header | string | jaeger-baggage | |
| jaeger-trace-baggage-header-prefix | string | uberctx- | |
| datadog-collector-host | string | \"\" | |
| datadog-collector-port | int | 8126 | |
| datadog-service-name | string | \"nginx\" | |
| datadog-environment | string | \"prod\" | |
| datadog-operation-name-override | string | \"nginx.handle\" | |
| datadog-priority-sampling | bool | \"true\" | |
| datadog-sample-rate | float | 1.0 | |
| enable-opentelemetry | bool | \"false\" | |
| opentelemetry-trust-incoming-span | bool | \"true\" | |
| opentelemetry-operation-name | string | \"\" | |
| opentelemetry-config | string | \"/etc/nginx/opentelemetry.toml\" | |
| otlp-collector-host | string | \"\" | |
| otlp-collector-port | int | 4317 | |
| otel-max-queuesize | int | | |
| otel-schedule-delay-millis | int | | |
| otel-max-export-batch-size | int | | |
| otel-service-name | string | \"nginx\" | |
| otel-sampler | string | \"AlwaysOff\" | |
| otel-sampler-parent-based | bool | \"false\" | |
| otel-sampler-ratio | float | 0.01 | |
| main-snippet | string | \"\" | |
| http-snippet | string | \"\" | |
| server-snippet | string | \"\" | |
| stream-snippet | string | \"\" | |
| location-snippet | string | \"\" | |
| custom-http-errors | []int | []int{} | |
| proxy-body-size | string | \"1m\" | |
| proxy-connect-timeout | int | 5 | |
| proxy-read-timeout | int | 60 | |
| proxy-send-timeout | int | 60 | |
| proxy-buffers-number | int | 4 | |
| proxy-buffer-size | string | \"4k\" | |
| proxy-cookie-path | string | \"off\" | |
| proxy-cookie-domain | string | \"off\" | |
| proxy-next-upstream | string | \"error timeout\" | |
| proxy-next-upstream-timeout | int | 0 | |
| proxy-next-upstream-tries | int | 3 | |
| proxy-redirect-from | string | \"off\" | |
| proxy-request-buffering | string | \"on\" | |
| ssl-redirect | bool | \"true\" | |
| force-ssl-redirect | bool | \"false\" | |
| denylist-source-range | []string | []string{} | |
| whitelist-source-range | []string | []string{} | |
| skip-access-log-urls | []string | []string{} | |
| limit-rate | int | 0 | |
| limit-rate-after | int | 0 | |
| lua-shared-dicts | string | \"\" | |
| http-redirect-code | int | 308 | |
| proxy-buffering | string | \"off\" | |
| limit-req-status-code | int | 503 | |
| limit-conn-status-code | int | 503 | |
| enable-syslog | bool | \"false\" | |
| syslog-host | string | \"\" | |
| syslog-port | int | 514 | |
| no-tls-redirect-locations | string | \"/.well-known/acme-challenge\" | |
| global-allowed-response-headers | string | \"\" | |
| global-auth-url | string | \"\" | |
| global-auth-method | string | \"\" | |
| global-auth-signin | string | \"\" | |
| global-auth-signin-redirect-param | string | \"rd\" | |
| global-auth-response-headers | string | \"\" | |
| global-auth-request-redirect | string | \"\" | |
| global-auth-snippet | string | \"\" | |
| global-auth-cache-key | string | \"\" | |
| global-auth-cache-duration | string | \"200 202 401 5m\" | |
| no-auth-locations | string | \"/.well-known/acme-challenge\" | |
| block-cidrs | []string | \"\" | |
| block-user-agents | []string | \"\" | |
| block-referers | []string | \"\" | |
| proxy-ssl-location-only | bool | \"false\" | |
| default-type | string | \"text/html\" | |
| service-upstream | bool | \"false\" | |
| ssl-reject-handshake | bool | \"false\" | |
| debug-connections | []string | \"127.0.0.1,1.1.1.1/24\" | |
| strict-validate-path-type | bool | \"true\" | |
| grpc-buffer-size-kb | int | 0 | |

"},{"location":"user-guide/nginx-configuration/configmap/#add-headers","title":"add-headers","text":"

Sets custom headers from a named ConfigMap before sending traffic to the client. See proxy-set-headers for the value format and an example.
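
A hedged ConfigMap sketch, assuming a headers ConfigMap named custom-headers in the ingress-nginx namespace (both names are hypothetical):

data:\n  add-headers: \"ingress-nginx/custom-headers\"\n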

          "},{"location":"user-guide/nginx-configuration/configmap/#allow-backend-server-header","title":"allow-backend-server-header","text":"

          Enables the return of the header Server from the backend instead of the generic nginx string. default: is disabled

          "},{"location":"user-guide/nginx-configuration/configmap/#allow-cross-namespace-resources","title":"allow-cross-namespace-resources","text":"

Enables users to consume cross-namespace resources in annotations, where this was previously allowed. default: false

          Annotations that may be impacted with this change:

          • auth-secret
          • auth-proxy-set-header
          • auth-tls-secret
          • fastcgi-params-configmap
          • proxy-ssl-secret
          "},{"location":"user-guide/nginx-configuration/configmap/#allow-snippet-annotations","title":"allow-snippet-annotations","text":"

Enables Ingress to parse and add *-snippet annotations/directives created by the user. default: false

Warning: We recommend enabling this option only if you TRUST users with permission to create Ingress objects, as this may allow a user to add restricted configurations to the final nginx.conf file.
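
A minimal ConfigMap sketch enabling the option:

data:\n  allow-snippet-annotations: \"true\"\n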

          "},{"location":"user-guide/nginx-configuration/configmap/#annotations-risk-level","title":"annotations-risk-level","text":"

Represents the risk accepted on an annotation. If the accepted risk is, for instance, Medium, annotations with risk High or Critical will not be accepted.

          Accepted values are Critical, High, Medium and Low.

          default: High
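
For example, a ConfigMap sketch lowering the accepted risk to Medium:

data:\n  annotations-risk-level: \"Medium\"\n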

          "},{"location":"user-guide/nginx-configuration/configmap/#annotation-value-word-blocklist","title":"annotation-value-word-blocklist","text":"

Contains a comma-separated list of characters/words that are well known to be used to abuse Ingress configuration and must be blocked. Related to CVE-2021-25742.

          When an annotation is detected with a value that matches one of the blocked bad words, the whole Ingress won't be configured.

          default: \"\"

When setting this, the default blocklist is overridden, which means that the Ingress admin should add all the words that should be blocked; here is a suggested blocklist.

          suggested: \"load_module,lua_package,_by_lua,location,root,proxy_pass,serviceaccount,{,},',\\\"\"

          "},{"location":"user-guide/nginx-configuration/configmap/#hide-headers","title":"hide-headers","text":"

Sets additional headers that will not be passed from the upstream server to the client response. default: empty
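
A hedged ConfigMap sketch, assuming the comma-delimited slice format described earlier (the header names are illustrative):

data:\n  hide-headers: \"X-Powered-By,Server\"\n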

          References: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header

          "},{"location":"user-guide/nginx-configuration/configmap/#access-log-params","title":"access-log-params","text":"

          Additional params for access_log. For example, buffer=16k, gzip, flush=1m
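
A hedged ConfigMap sketch, assuming the value is passed through to the access_log directive as space-separated parameters (values illustrative):

data:\n  access-log-params: \"buffer=16k gzip flush=1m\"\n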

          References: https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log

          "},{"location":"user-guide/nginx-configuration/configmap/#access-log-path","title":"access-log-path","text":"

          Access log path for both http and stream context. Goes to /var/log/nginx/access.log by default.

          Note: the file /var/log/nginx/access.log is a symlink to /dev/stdout

          "},{"location":"user-guide/nginx-configuration/configmap/#http-access-log-path","title":"http-access-log-path","text":"

          Access log path for http context globally. default: \"\"

          Note: If not specified, the access-log-path will be used.

          "},{"location":"user-guide/nginx-configuration/configmap/#stream-access-log-path","title":"stream-access-log-path","text":"

          Access log path for stream context globally. default: \"\"

          Note: If not specified, the access-log-path will be used.

          "},{"location":"user-guide/nginx-configuration/configmap/#enable-access-log-for-default-backend","title":"enable-access-log-for-default-backend","text":"

          Enables logging access to default backend. default: is disabled.

          "},{"location":"user-guide/nginx-configuration/configmap/#error-log-path","title":"error-log-path","text":"

          Error log path. Goes to /var/log/nginx/error.log by default.

          Note: the file /var/log/nginx/error.log is a symlink to /dev/stderr

          References: https://nginx.org/en/docs/ngx_core_module.html#error_log

          "},{"location":"user-guide/nginx-configuration/configmap/#enable-modsecurity","title":"enable-modsecurity","text":"

          Enables the modsecurity module for NGINX. default: is disabled

          "},{"location":"user-guide/nginx-configuration/configmap/#enable-owasp-modsecurity-crs","title":"enable-owasp-modsecurity-crs","text":"

          Enables the OWASP ModSecurity Core Rule Set (CRS). default: is disabled

          "},{"location":"user-guide/nginx-configuration/configmap/#modsecurity-snippet","title":"modsecurity-snippet","text":"

          Adds custom rules to modsecurity section of nginx configuration

          "},{"location":"user-guide/nginx-configuration/configmap/#client-header-buffer-size","title":"client-header-buffer-size","text":"

Allows configuring a custom buffer size for reading the client request header.

          References: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size

          "},{"location":"user-guide/nginx-configuration/configmap/#client-header-timeout","title":"client-header-timeout","text":"

          Defines a timeout for reading client request header, in seconds.

          References: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout

          "},{"location":"user-guide/nginx-configuration/configmap/#client-body-buffer-size","title":"client-body-buffer-size","text":"

          Sets buffer size for reading client request body.

          References: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size

          "},{"location":"user-guide/nginx-configuration/configmap/#client-body-timeout","title":"client-body-timeout","text":"

          Defines a timeout for reading client request body, in seconds.

          References: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout

          "},{"location":"user-guide/nginx-configuration/configmap/#disable-access-log","title":"disable-access-log","text":"

          Disables the Access Log from the entire Ingress Controller. default: false

          References: https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log

          "},{"location":"user-guide/nginx-configuration/configmap/#disable-ipv6","title":"disable-ipv6","text":"

          Disable listening on IPV6. default: false; IPv6 listening is enabled

          "},{"location":"user-guide/nginx-configuration/configmap/#disable-ipv6-dns","title":"disable-ipv6-dns","text":"

          Disable IPV6 for nginx DNS resolver. default: false; IPv6 resolving enabled.

          "},{"location":"user-guide/nginx-configuration/configmap/#enable-underscores-in-headers","title":"enable-underscores-in-headers","text":"

          Enables underscores in header names. default: is disabled

          "},{"location":"user-guide/nginx-configuration/configmap/#enable-ocsp","title":"enable-ocsp","text":"

          Enables Online Certificate Status Protocol stapling (OCSP) support. default: is disabled

          "},{"location":"user-guide/nginx-configuration/configmap/#ignore-invalid-headers","title":"ignore-invalid-headers","text":"

Sets whether header fields with invalid names should be ignored. default: is enabled

          "},{"location":"user-guide/nginx-configuration/configmap/#retry-non-idempotent","title":"retry-non-idempotent","text":"

          Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value \"true\".
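
A minimal ConfigMap sketch restoring the previous behavior:

data:\n  retry-non-idempotent: \"true\"\n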

          "},{"location":"user-guide/nginx-configuration/configmap/#error-log-level","title":"error-log-level","text":"

Configures the logging level of errors. The available log levels are listed in the referenced nginx documentation in order of increasing severity.

          References: https://nginx.org/en/docs/ngx_core_module.html#error_log

          "},{"location":"user-guide/nginx-configuration/configmap/#http2-max-field-size","title":"http2-max-field-size","text":"

          Warning

          This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use large-client-header-buffers instead.

          Limits the maximum size of an HPACK-compressed request header field.

          References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size

          "},{"location":"user-guide/nginx-configuration/configmap/#http2-max-header-size","title":"http2-max-header-size","text":"

          Warning

          This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use large-client-header-buffers instead.

          Limits the maximum size of the entire request header list after HPACK decompression.

          References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size

          "},{"location":"user-guide/nginx-configuration/configmap/#http2-max-requests","title":"http2-max-requests","text":"

          Warning

          This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use upstream-keepalive-requests instead.

          Sets the maximum number of requests (including push requests) that can be served through one HTTP/2 connection, after which the next client request will lead to connection closing and the need of establishing a new connection.

          References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests

          "},{"location":"user-guide/nginx-configuration/configmap/#http2-max-concurrent-streams","title":"http2-max-concurrent-streams","text":"

          Sets the maximum number of concurrent HTTP/2 streams in a connection.

          References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_concurrent_streams

          "},{"location":"user-guide/nginx-configuration/configmap/#hsts","title":"hsts","text":"

Enables or disables the HSTS header in servers running SSL. HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be accessed using HTTPS instead of HTTP. It provides protection against protocol downgrade attacks and cookie theft.

          References:

          • https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
          • https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server
          "},{"location":"user-guide/nginx-configuration/configmap/#hsts-include-subdomains","title":"hsts-include-subdomains","text":"

          Enables or disables the use of HSTS in all the subdomains of the server-name.

          "},{"location":"user-guide/nginx-configuration/configmap/#hsts-max-age","title":"hsts-max-age","text":"

          Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.

          "},{"location":"user-guide/nginx-configuration/configmap/#hsts-preload","title":"hsts-preload","text":"

          Enables or disables the preload attribute in the HSTS feature (when it is enabled).

          "},{"location":"user-guide/nginx-configuration/configmap/#keep-alive","title":"keep-alive","text":"

          Sets the time, in seconds, during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.

          References: https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout

          Important

          Setting keep-alive: '0' will most likely break concurrent http/2 requests due to changes introduced with nginx 1.19.7

          Changes with nginx 1.19.7                                        16 Feb 2021\n\n    *) Change: connections handling in HTTP/2 has been changed to better\n       match HTTP/1.x; the \"http2_recv_timeout\", \"http2_idle_timeout\", and\n       \"http2_max_requests\" directives have been removed, the\n       \"keepalive_timeout\" and \"keepalive_requests\" directives should be\n       used instead.\n

          References: nginx change log nginx issue tracker nginx mailing list

          "},{"location":"user-guide/nginx-configuration/configmap/#keep-alive-requests","title":"keep-alive-requests","text":"

          Sets the maximum number of requests that can be served through one keep-alive connection.

          References: https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests

          "},{"location":"user-guide/nginx-configuration/configmap/#large-client-header-buffers","title":"large-client-header-buffers","text":"

          Sets the maximum number and size of buffers used for reading large client request header. default: 4 8k
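
For example, a ConfigMap sketch raising the buffer size (the values are illustrative):

data:\n  large-client-header-buffers: \"4 16k\"\n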

          References: https://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers

          "},{"location":"user-guide/nginx-configuration/configmap/#log-format-escape-none","title":"log-format-escape-none","text":"

Sets whether the escape parameter is disabled entirely for character escaping in variables (\"true\") or controlled by log-format-escape-json (\"false\").

          "},{"location":"user-guide/nginx-configuration/configmap/#log-format-escape-json","title":"log-format-escape-json","text":"

Sets whether the escape parameter allows JSON (\"true\") or default character escaping in variables (\"false\").

          "},{"location":"user-guide/nginx-configuration/configmap/#log-format-upstream","title":"log-format-upstream","text":"

          Sets the nginx log format. Example for json output:

          log-format-upstream: '{\"time\": \"$time_iso8601\", \"remote_addr\": \"$proxy_protocol_addr\", \"x_forwarded_for\": \"$proxy_add_x_forwarded_for\", \"request_id\": \"$req_id\",\n  \"remote_user\": \"$remote_user\", \"bytes_sent\": $bytes_sent, \"request_time\": $request_time, \"status\": $status, \"vhost\": \"$host\", \"request_proto\": \"$server_protocol\",\n  \"path\": \"$uri\", \"request_query\": \"$args\", \"request_length\": $request_length, \"duration\": $request_time,\"method\": \"$request_method\", \"http_referrer\": \"$http_referer\",\n  \"http_user_agent\": \"$http_user_agent\" }'\n

          Please check the log-format for definition of each field.

          "},{"location":"user-guide/nginx-configuration/configmap/#log-format-stream","title":"log-format-stream","text":"

          Sets the nginx stream format.

          "},{"location":"user-guide/nginx-configuration/configmap/#enable-multi-accept","title":"enable-multi-accept","text":"

If disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections at once. default: true

          References: https://nginx.org/en/docs/ngx_core_module.html#multi_accept

          "},{"location":"user-guide/nginx-configuration/configmap/#max-worker-connections","title":"max-worker-connections","text":"

          Sets the maximum number of simultaneous connections that can be opened by each worker process. 0 will use the value of max-worker-open-files. default: 16384

          Tip

Using 0 in scenarios of high load improves performance at the cost of increased RAM utilization (even when idle).

          "},{"location":"user-guide/nginx-configuration/configmap/#max-worker-open-files","title":"max-worker-open-files","text":"

          Sets the maximum number of files that can be opened by each worker process. The default of 0 means \"max open files (system's limit) - 1024\". default: 0

          "},{"location":"user-guide/nginx-configuration/configmap/#map-hash-bucket-size","title":"map-hash-bucket-size","text":"

          Sets the bucket size for the map variables hash tables. The details of setting up hash tables are provided in a separate document.

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-real-ip-cidr","title":"proxy-real-ip-cidr","text":"

          If use-forwarded-headers or use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default IP/network address of your external load balancer. Can be a comma-separated list of CIDR blocks. default: \"0.0.0.0/0\"
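
A hedged ConfigMap sketch with a comma-separated list (the CIDR blocks are illustrative):

data:\n  proxy-real-ip-cidr: \"192.168.0.0/16,10.0.0.0/8\"\n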

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-set-headers","title":"proxy-set-headers","text":"

          Sets custom headers from named configmap before sending traffic to backends. The value format is namespace/name. See example
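
A hedged sketch using the namespace/name format, assuming a ConfigMap named custom-headers in the ingress-nginx namespace (both names are hypothetical):

data:\n  proxy-set-headers: \"ingress-nginx/custom-headers\"\n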

          "},{"location":"user-guide/nginx-configuration/configmap/#server-name-hash-max-size","title":"server-name-hash-max-size","text":"

Sets the maximum size of the server names hash tables used in server names, map directive's values, MIME types, names of request header strings, etc.

          References: https://nginx.org/en/docs/hash.html

          "},{"location":"user-guide/nginx-configuration/configmap/#server-name-hash-bucket-size","title":"server-name-hash-bucket-size","text":"

          Sets the size of the bucket for the server names hash tables.

          References:

          • https://nginx.org/en/docs/hash.html
          • https://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size
          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-headers-hash-max-size","title":"proxy-headers-hash-max-size","text":"

          Sets the maximum size of the proxy headers hash tables.

          References:

          • https://nginx.org/en/docs/hash.html
          • https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size
          "},{"location":"user-guide/nginx-configuration/configmap/#reuse-port","title":"reuse-port","text":"

Instructs NGINX to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option), allowing a kernel to distribute incoming connections between worker processes. default: true

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-headers-hash-bucket-size","title":"proxy-headers-hash-bucket-size","text":"

          Sets the size of the bucket for the proxy headers hash tables.

          References:

          • https://nginx.org/en/docs/hash.html
          • https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size
          "},{"location":"user-guide/nginx-configuration/configmap/#server-tokens","title":"server-tokens","text":"

          Send NGINX Server header in responses and display NGINX version in error pages. default: is disabled

          "},{"location":"user-guide/nginx-configuration/configmap/#ssl-ciphers","title":"ssl-ciphers","text":"

          Sets the ciphers list to enable. The ciphers are specified in the format understood by the OpenSSL library.

          The default cipher list is: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384.

          The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.

DHE-based ciphers will not be available until a DH parameter is configured; see Custom DH parameters for perfect forward secrecy.

          Please check the Mozilla SSL Configuration Generator.

          Note: ssl_prefer_server_ciphers directive will be enabled by default for http context.

          "},{"location":"user-guide/nginx-configuration/configmap/#ssl-ecdh-curve","title":"ssl-ecdh-curve","text":"

          Specifies a curve for ECDHE ciphers.

          References: https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve

          "},{"location":"user-guide/nginx-configuration/configmap/#ssl-dh-param","title":"ssl-dh-param","text":"

          Sets the name of the secret that contains Diffie-Hellman key to help with \"Perfect Forward Secrecy\".

          References:

          • https://wiki.openssl.org/index.php/Diffie-Hellman_parameters
          • https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam
          • https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam
          "},{"location":"user-guide/nginx-configuration/configmap/#ssl-protocols","title":"ssl-protocols","text":"

          Sets the SSL protocols to use. The default is: TLSv1.2 TLSv1.3.

          Please check the result of the configuration using https://ssllabs.com/ssltest/analyze.html or https://testssl.sh.

          "},{"location":"user-guide/nginx-configuration/configmap/#ssl-early-data","title":"ssl-early-data","text":"

          Enables or disables TLS 1.3 early data, also known as Zero Round Trip Time Resumption (0-RTT).

          This requires ssl-protocols to have TLSv1.3 enabled. Enable this with caution, because requests sent within early data are subject to replay attacks.

See ssl_early_data. The default is: false.
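
A hedged ConfigMap sketch enabling 0-RTT alongside TLSv1.3:

data:\n  ssl-protocols: \"TLSv1.2 TLSv1.3\"\n  ssl-early-data: \"true\"\n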

          "},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-cache","title":"ssl-session-cache","text":"

          Enables or disables the use of shared SSL cache among worker processes.

          "},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-cache-size","title":"ssl-session-cache-size","text":"

          Sets the size of the SSL shared session cache between all worker processes.

          "},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-tickets","title":"ssl-session-tickets","text":"

          Enables or disables session resumption through TLS session tickets.

          "},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-ticket-key","title":"ssl-session-ticket-key","text":"

          Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string. To create a ticket: openssl rand 80 | openssl enc -A -base64

By default, a randomly generated key is used.

          "},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-timeout","title":"ssl-session-timeout","text":"

          Sets the time during which a client may reuse the session parameters stored in a cache.

          "},{"location":"user-guide/nginx-configuration/configmap/#ssl-buffer-size","title":"ssl-buffer-size","text":"

          Sets the size of the SSL buffer used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).

          References: https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/

          "},{"location":"user-guide/nginx-configuration/configmap/#use-proxy-protocol","title":"use-proxy-protocol","text":"

          Enables or disables the PROXY protocol to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-protocol-header-timeout","title":"proxy-protocol-header-timeout","text":"

          Sets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection. default: 5s

          "},{"location":"user-guide/nginx-configuration/configmap/#enable-aio-write","title":"enable-aio-write","text":"

          Enables or disables the directive aio_write that writes files asynchronously. default: true

          "},{"location":"user-guide/nginx-configuration/configmap/#use-gzip","title":"use-gzip","text":"

          Enables or disables compression of HTTP responses using the \"gzip\" module. MIME types to compress are controlled by gzip-types. default: false

          "},{"location":"user-guide/nginx-configuration/configmap/#use-geoip","title":"use-geoip","text":"

          Enables or disables \"geoip\" module that creates variables with values depending on the client IP address, using the precompiled MaxMind databases. default: true

          Note: MaxMind legacy databases are discontinued and will not receive updates after 2019-01-02, cf. discontinuation notice. Consider use-geoip2 below.

          "},{"location":"user-guide/nginx-configuration/configmap/#use-geoip2","title":"use-geoip2","text":"

          Enables the geoip2 module for NGINX. Since 0.27.0 and due to a change in the MaxMind databases a license is required to have access to the databases. For this reason, it is required to define a new flag --maxmind-license-key in the ingress controller deployment to download the databases needed during the initialization of the ingress controller. Alternatively, it is possible to use a volume to mount the files /etc/ingress-controller/geoip/GeoLite2-City.mmdb and /etc/ingress-controller/geoip/GeoLite2-ASN.mmdb, avoiding the overhead of the download.

          Important

          If the feature is enabled but the files are missing, GeoIP2 will not be enabled.

          default: false
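
A minimal ConfigMap sketch enabling the module (the databases must be available as described above):

data:\n  use-geoip2: \"true\"\n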

          "},{"location":"user-guide/nginx-configuration/configmap/#geoip2-autoreload-in-minutes","title":"geoip2-autoreload-in-minutes","text":"

          Enables the geoip2 module autoreload in MaxMind databases setting the interval in minutes.

          default: 0

          "},{"location":"user-guide/nginx-configuration/configmap/#enable-brotli","title":"enable-brotli","text":"

          Enables or disables compression of HTTP responses using the \"brotli\" module. The default mime type list to compress is: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component. default: false

Note: Brotli does not work in Safari < 11. For more information see https://caniuse.com/#feat=brotli

          "},{"location":"user-guide/nginx-configuration/configmap/#brotli-level","title":"brotli-level","text":"

          Sets the Brotli Compression Level that will be used. default: 4

          "},{"location":"user-guide/nginx-configuration/configmap/#brotli-min-length","title":"brotli-min-length","text":"

          Minimum length of responses, in bytes, that will be eligible for brotli compression. default: 20

          "},{"location":"user-guide/nginx-configuration/configmap/#brotli-types","title":"brotli-types","text":"

          Sets the MIME Types that will be compressed on-the-fly by brotli. default: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component

          "},{"location":"user-guide/nginx-configuration/configmap/#use-http2","title":"use-http2","text":"

          Enables or disables HTTP/2 support in secure connections.

          "},{"location":"user-guide/nginx-configuration/configmap/#gzip-disable","title":"gzip-disable","text":"

          Disables gzipping of responses for requests with \"User-Agent\" header fields matching any of the specified regular expressions.

          "},{"location":"user-guide/nginx-configuration/configmap/#gzip-level","title":"gzip-level","text":"

          Sets the gzip Compression Level that will be used. default: 1

          "},{"location":"user-guide/nginx-configuration/configmap/#gzip-min-length","title":"gzip-min-length","text":"

Minimum length of responses, in bytes, that will be eligible for gzip compression before being returned to the client. default: 256

          "},{"location":"user-guide/nginx-configuration/configmap/#gzip-types","title":"gzip-types","text":"

          Sets the MIME types in addition to \"text/html\" to compress. The special value \"*\" matches any MIME type. Responses with the \"text/html\" type are always compressed if use-gzip is enabled. default: application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component.

          "},{"location":"user-guide/nginx-configuration/configmap/#worker-processes","title":"worker-processes","text":"

Sets the number of worker processes. The default of \"auto\" means the number of available CPU cores.

          "},{"location":"user-guide/nginx-configuration/configmap/#worker-cpu-affinity","title":"worker-cpu-affinity","text":"

Binds worker processes to the sets of CPUs (worker_cpu_affinity). By default, worker processes are not bound to any specific CPUs. The value can be one of the following (a sketch follows this list):

• \"\": empty string indicates that no affinity is applied.
• cpumask: e.g. 0001 0010 0100 1000 to bind processes to specific CPUs.
• auto: binds worker processes automatically to available CPUs.
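
A minimal ConfigMap sketch using automatic binding:

data:\n  worker-cpu-affinity: \"auto\"\n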
          "},{"location":"user-guide/nginx-configuration/configmap/#worker-shutdown-timeout","title":"worker-shutdown-timeout","text":"

Sets a timeout for NGINX to wait for workers to gracefully shut down. default: \"240s\"

          "},{"location":"user-guide/nginx-configuration/configmap/#load-balance","title":"load-balance","text":"

          Sets the algorithm to use for load balancing. The value can either be:

• round_robin: to use the default round robin load balancer
          • ewma: to use the Peak EWMA method for routing (implementation)

The default is round_robin; see the sketch after the list below.

          • To load balance using consistent hashing of IP or other variables, consider the nginx.ingress.kubernetes.io/upstream-hash-by annotation.
          • To load balance using session cookies, consider the nginx.ingress.kubernetes.io/affinity annotation.
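
A minimal ConfigMap sketch selecting the EWMA balancer:

data:\n  load-balance: \"ewma\"\n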

          References: https://nginx.org/en/docs/http/load_balancing.html

          "},{"location":"user-guide/nginx-configuration/configmap/#variables-hash-bucket-size","title":"variables-hash-bucket-size","text":"

          Sets the bucket size for the variables hash table.

          References: https://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size

          "},{"location":"user-guide/nginx-configuration/configmap/#variables-hash-max-size","title":"variables-hash-max-size","text":"

          Sets the maximum size of the variables hash table.

          References: https://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size

          "},{"location":"user-guide/nginx-configuration/configmap/#upstream-keepalive-connections","title":"upstream-keepalive-connections","text":"

          Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. default: 320

          References: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive

          "},{"location":"user-guide/nginx-configuration/configmap/#upstream-keepalive-time","title":"upstream-keepalive-time","text":"

          Sets the maximum time during which requests can be processed through one keepalive connection. default: \"1h\"

          References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_time

          "},{"location":"user-guide/nginx-configuration/configmap/#upstream-keepalive-timeout","title":"upstream-keepalive-timeout","text":"

          Sets a timeout during which an idle keepalive connection to an upstream server will stay open. default: 60

          References: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout

          "},{"location":"user-guide/nginx-configuration/configmap/#upstream-keepalive-requests","title":"upstream-keepalive-requests","text":"

          Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed. default: 10000

          References: https://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_requests

          "},{"location":"user-guide/nginx-configuration/configmap/#limit-conn-zone-variable","title":"limit-conn-zone-variable","text":"

Sets parameters for a shared memory zone that will keep states for various keys of limit_conn_zone. The default is \"$binary_remote_addr\", whose size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-stream-timeout","title":"proxy-stream-timeout","text":"

          Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.

          References: https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-stream-next-upstream","title":"proxy-stream-next-upstream","text":"

          When a connection to the proxied server cannot be established, determines whether a client connection will be passed to the next server.

          References: https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-stream-next-upstream-timeout","title":"proxy-stream-next-upstream-timeout","text":"

          Limits the time allowed to pass a connection to the next server. The 0 value turns off this limitation.

          References: https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream_timeout

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-stream-next-upstream-tries","title":"proxy-stream-next-upstream-tries","text":"

Limits the number of possible tries for passing a connection to the next server. The 0 value turns off this limitation.

          References: https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_next_upstream_tries

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-stream-responses","title":"proxy-stream-responses","text":"

          Sets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used.

          References: https://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses

          "},{"location":"user-guide/nginx-configuration/configmap/#bind-address","title":"bind-address","text":"

Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash-loop.
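
A hedged ConfigMap sketch with a comma-delimited list (the addresses are illustrative and must exist on the node):

data:\n  bind-address: \"10.0.0.4,10.0.0.5\"\n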

          "},{"location":"user-guide/nginx-configuration/configmap/#use-forwarded-headers","title":"use-forwarded-headers","text":"

          If true, NGINX passes the incoming X-Forwarded-* headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers.

          If false, NGINX ignores incoming X-Forwarded-* headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or it's behind a L3/packet-based load balancer that doesn't alter the source IP in the packets.
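
A minimal ConfigMap sketch for the case where NGINX sits behind a trusted L7 proxy:

data:\n  use-forwarded-headers: \"true\"\n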

          "},{"location":"user-guide/nginx-configuration/configmap/#enable-real-ip","title":"enable-real-ip","text":"

          enable-real-ip enables the configuration of https://nginx.org/en/docs/http/ngx_http_realip_module.html. Specific attributes of the module can be configured further by using forwarded-for-header and proxy-real-ip-cidr settings.

          "},{"location":"user-guide/nginx-configuration/configmap/#forwarded-for-header","title":"forwarded-for-header","text":"

          Sets the header field for identifying the originating IP address of a client. default: X-Forwarded-For

          "},{"location":"user-guide/nginx-configuration/configmap/#compute-full-forwarded-for","title":"compute-full-forwarded-for","text":"

          Append the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.
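
A minimal ConfigMap sketch enabling the append behavior:

data:\n  compute-full-forwarded-for: \"true\"\n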

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-add-original-uri-header","title":"proxy-add-original-uri-header","text":"

Adds an X-Original-Uri header with the original request URI to the backend request.

          "},{"location":"user-guide/nginx-configuration/configmap/#generate-request-id","title":"generate-request-id","text":"

Ensures that X-Request-ID is defaulted to a random value, if no X-Request-ID is present in the request.

          "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-collector-host","title":"jaeger-collector-host","text":"

          Specifies the host to use when uploading traces. It must be a valid URL.

          "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-collector-port","title":"jaeger-collector-port","text":"

          Specifies the port to use when uploading traces. default: 6831

          "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-endpoint","title":"jaeger-endpoint","text":"

          Specifies the endpoint to use when uploading traces to a collector. This takes priority over jaeger-collector-host if both are specified.

          "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-service-name","title":"jaeger-service-name","text":"

          Specifies the service name to use for any traces created. default: nginx

          "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-propagation-format","title":"jaeger-propagation-format","text":"

          Specifies the traceparent/tracestate propagation format. default: jaeger

          "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-sampler-type","title":"jaeger-sampler-type","text":"

          Specifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. default: const

          "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-sampler-param","title":"jaeger-sampler-param","text":"

          Specifies the argument to be passed to the sampler constructor. Must be a number. For const this should be 0 to never sample and 1 to always sample. default: 1
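
          As an illustrative sketch only, several of the Jaeger keys above can be combined in the ConfigMap like this (the host address and sampling rate are placeholders):

          data:
            # placeholder address of a Jaeger agent Service
            jaeger-collector-host: "jaeger-agent.observability.svc"
            jaeger-collector-port: "6831"
            jaeger-service-name: "nginx"
            # sample roughly 10% of traces
            jaeger-sampler-type: "probabilistic"
            jaeger-sampler-param: "0.1"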

          "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-sampler-host","title":"jaeger-sampler-host","text":"

          Specifies the custom remote sampler host to be passed to the sampler constructor. Must be a valid URL. Leave blank to use default value (localhost). default: http://127.0.0.1

          "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-sampler-port","title":"jaeger-sampler-port","text":"

          Specifies the custom remote sampler port to be passed to the sampler constructor. Must be a number. default: 5778

          "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-trace-context-header-name","title":"jaeger-trace-context-header-name","text":"

          Specifies the header name used for passing trace context. default: uber-trace-id

          "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-debug-header","title":"jaeger-debug-header","text":"

          Specifies the header name used for force sampling. default: jaeger-debug-id

          "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-baggage-header","title":"jaeger-baggage-header","text":"

          Specifies the header name used to submit baggage if there is no root span. default: jaeger-baggage

          "},{"location":"user-guide/nginx-configuration/configmap/#jaeger-tracer-baggage-header-prefix","title":"jaeger-tracer-baggage-header-prefix","text":"

          Specifies the header prefix used to propagate baggage. default: uberctx-

          "},{"location":"user-guide/nginx-configuration/configmap/#datadog-collector-host","title":"datadog-collector-host","text":"

          Specifies the datadog agent host to use when uploading traces. It must be a valid URL.

          "},{"location":"user-guide/nginx-configuration/configmap/#datadog-collector-port","title":"datadog-collector-port","text":"

          Specifies the port to use when uploading traces. default: 8126

          "},{"location":"user-guide/nginx-configuration/configmap/#datadog-service-name","title":"datadog-service-name","text":"

          Specifies the service name to use for any traces created. default: nginx

          "},{"location":"user-guide/nginx-configuration/configmap/#datadog-environment","title":"datadog-environment","text":"

          Specifies the environment this trace belongs to. default: prod

          "},{"location":"user-guide/nginx-configuration/configmap/#datadog-operation-name-override","title":"datadog-operation-name-override","text":"

          Overrides the operation name to use for any traces created. default: nginx.handle

          "},{"location":"user-guide/nginx-configuration/configmap/#datadog-priority-sampling","title":"datadog-priority-sampling","text":"

          Controls client-side sampling. If true, client-side sampling is disabled (thus ignoring sample_rate) and distributed priority sampling is enabled, where traces are sampled based on a combination of user-assigned priorities and configuration from the agent. default: true

          "},{"location":"user-guide/nginx-configuration/configmap/#datadog-sample-rate","title":"datadog-sample-rate","text":"

          Specifies the sample rate for any traces created. This is effective only when datadog-priority-sampling is false. default: 1.0
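
          A hedged sketch combining the Datadog keys above; the agent address, environment and rate are placeholders:

          data:
            datadog-collector-host: "datadog-agent.monitoring.svc"  # placeholder agent address
            datadog-collector-port: "8126"
            datadog-service-name: "nginx"
            datadog-environment: "staging"       # illustrative environment tag
            datadog-priority-sampling: "false"   # disable priority sampling so the rate below applies
            datadog-sample-rate: "0.5"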

          "},{"location":"user-guide/nginx-configuration/configmap/#enable-opentelemetry","title":"enable-opentelemetry","text":"

          Enables the nginx OpenTelemetry extension. default: is disabled

          References: https://github.com/open-telemetry/opentelemetry-cpp-contrib

          "},{"location":"user-guide/nginx-configuration/configmap/#opentelemetry-operation-name","title":"opentelemetry-operation-name","text":"

          Specifies a custom name for the server span. default: is empty

          For example, set to "HTTP $request_method $uri".

          "},{"location":"user-guide/nginx-configuration/configmap/#otlp-collector-host","title":"otlp-collector-host","text":"

          Specifies the host to use when uploading traces. It must be a valid URL.

          "},{"location":"user-guide/nginx-configuration/configmap/#otlp-collector-port","title":"otlp-collector-port","text":"

          Specifies the port to use when uploading traces. default: 4317

          "},{"location":"user-guide/nginx-configuration/configmap/#otel-service-name","title":"otel-service-name","text":"

          Specifies the service name to use for any traces created. default: nginx

          "},{"location":"user-guide/nginx-configuration/configmap/#opentelemetry-trust-incoming-span-true","title":"opentelemetry-trust-incoming-span: \"true\"","text":"

          Enables or disables using spans from incoming requests as parent for created ones. default: true

          "},{"location":"user-guide/nginx-configuration/configmap/#otel-sampler-parent-based","title":"otel-sampler-parent-based","text":"

          Uses a parent-based sampler implementation, which by default takes a sample if the parent span is sampled. default: false

          "},{"location":"user-guide/nginx-configuration/configmap/#otel-sampler-ratio","title":"otel-sampler-ratio","text":"

          Specifies sample rate for any traces created. default: 0.01

          "},{"location":"user-guide/nginx-configuration/configmap/#otel-sampler","title":"otel-sampler","text":"

          Specifies the sampler to be used when sampling traces. The available samplers are: AlwaysOff, AlwaysOn, TraceIdRatioBased, remote. default: AlwaysOff
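
          Putting the sampler keys above together, a minimal sketch (the ratio is chosen only for illustration):

          data:
            enable-opentelemetry: "true"
            otel-sampler: "TraceIdRatioBased"
            otel-sampler-ratio: "0.25"         # keep roughly a quarter of new traces
            otel-sampler-parent-based: "true"  # follow the parent span's decision when one exists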

          "},{"location":"user-guide/nginx-configuration/configmap/#main-snippet","title":"main-snippet","text":"

          Adds custom configuration to the main section of the nginx configuration.

          "},{"location":"user-guide/nginx-configuration/configmap/#http-snippet","title":"http-snippet","text":"

          Adds custom configuration to the http section of the nginx configuration.
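
          A sketch of how a snippet key can carry multi-line NGINX configuration via a YAML block scalar; the map directive shown is an arbitrary example, not something the controller requires:

          data:
            http-snippet: |
              # example only: define an extra variable in the http{} block
              map $http_x_custom_header $custom_flag {
                default 0;
                "on"    1;
              }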

          "},{"location":"user-guide/nginx-configuration/configmap/#server-snippet","title":"server-snippet","text":"

          Adds custom configuration to all the servers in the nginx configuration.

          "},{"location":"user-guide/nginx-configuration/configmap/#stream-snippet","title":"stream-snippet","text":"

          Adds custom configuration to the stream section of the nginx configuration.

          "},{"location":"user-guide/nginx-configuration/configmap/#location-snippet","title":"location-snippet","text":"

          Adds custom configuration to all the locations in the nginx configuration.

          You cannot use this to add new locations that proxy to the Kubernetes pods, as the snippet does not have access to the Go template functions. If you want to add custom locations you will have to provide your own nginx.tmpl.

          "},{"location":"user-guide/nginx-configuration/configmap/#custom-http-errors","title":"custom-http-errors","text":"

          Sets which HTTP status codes should be passed for processing with the error_page directive.

          Setting at least one code also enables proxy_intercept_errors, which is required to process error_page.

          Example usage: custom-http-errors: 404,415

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-body-size","title":"proxy-body-size","text":"

          Sets the maximum allowed size of the client request body. See NGINX client_max_body_size.

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-connect-timeout","title":"proxy-connect-timeout","text":"

          Sets the timeout for establishing a connection with a proxied server. It should be noted that this timeout cannot usually exceed 75 seconds.

          It will also set the grpc_connect_timeout for gRPC connections.

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-read-timeout","title":"proxy-read-timeout","text":"

          Sets the timeout in seconds for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response.

          It will also set the grpc_read_timeout for gRPC connections.

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-send-timeout","title":"proxy-send-timeout","text":"

          Sets the timeout in seconds for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request.

          It will also set the grpc_send_timeout for gRPC connections.
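
          The three proxy timeouts are often tuned together. A sketch with illustrative values, in seconds:

          data:
            proxy-connect-timeout: "15"
            proxy-read-timeout: "120"
            proxy-send-timeout: "120"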

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-buffers-number","title":"proxy-buffers-number","text":"

          Sets the number of buffers used for reading the first part of the response received from the proxied server. This part usually contains a small response header.

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-buffer-size","title":"proxy-buffer-size","text":"

          Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header.
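
          A sketch combining both buffer keys (the values are illustrative, not recommendations):

          data:
            proxy-buffers-number: "8"
            proxy-buffer-size: "16k"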

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-cookie-path","title":"proxy-cookie-path","text":"

          Sets a text that should be changed in the path attribute of the "Set-Cookie" header fields of a proxied server response.

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-cookie-domain","title":"proxy-cookie-domain","text":"

          Sets a text that should be changed in the domain attribute of the "Set-Cookie" header fields of a proxied server response.

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-next-upstream","title":"proxy-next-upstream","text":"

          Specifies in which cases a request should be passed to the next server.

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-next-upstream-timeout","title":"proxy-next-upstream-timeout","text":"

          Limits the time in seconds during which a request can be passed to the next server.

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-next-upstream-tries","title":"proxy-next-upstream-tries","text":"

          Limits the number of possible tries for passing a request to the next server.

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-redirect-from","title":"proxy-redirect-from","text":"

          Sets the original text that should be changed in the \"Location\" and \"Refresh\" header fields of a proxied server response. default: off

          References: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-request-buffering","title":"proxy-request-buffering","text":"

          Enables or disables buffering of a client request body.

          "},{"location":"user-guide/nginx-configuration/configmap/#ssl-redirect","title":"ssl-redirect","text":"

          Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule). default: "true"

          "},{"location":"user-guide/nginx-configuration/configmap/#force-ssl-redirect","title":"force-ssl-redirect","text":"

          Sets the global value of redirects (308) to HTTPS if the server has a default TLS certificate (defined in extra-args). default: "false"

          "},{"location":"user-guide/nginx-configuration/configmap/#denylist-source-range","title":"denylist-source-range","text":"

          Sets the default denylisted IPs for each server block. This can be overwritten by an annotation on an Ingress rule. See ngx_http_access_module.

          "},{"location":"user-guide/nginx-configuration/configmap/#whitelist-source-range","title":"whitelist-source-range","text":"

          Sets the default whitelisted IPs for each server block. This can be overwritten by an annotation on an Ingress rule. See ngx_http_access_module.

          "},{"location":"user-guide/nginx-configuration/configmap/#skip-access-log-urls","title":"skip-access-log-urls","text":"

          Sets a list of URLs that should not appear in the NGINX access log. This is useful for URLs like /health or /health-check that clutter the logs. default: is empty

          "},{"location":"user-guide/nginx-configuration/configmap/#limit-rate","title":"limit-rate","text":"

          Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per a request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.

          References: https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate

          "},{"location":"user-guide/nginx-configuration/configmap/#limit-rate-after","title":"limit-rate-after","text":"

          Sets the initial amount after which the further transmission of a response to a client will be rate limited.

          References: https://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after
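
          A sketch combining both keys; the numbers are placeholders and follow the semantics of the underlying nginx limit_rate / limit_rate_after directives described above:

          data:
            limit-rate: "102400"        # illustrative per-request rate cap
            limit-rate-after: "512000"  # illustrative amount sent before the cap kicks in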

          "},{"location":"user-guide/nginx-configuration/configmap/#lua-shared-dicts","title":"lua-shared-dicts","text":"

          Customize default Lua shared dictionaries or define more. You can use the following syntax to do so:

          lua-shared-dicts: \"<my dict name>: <my dict size>, [<my dict name>: <my dict size>], ...\"\n

          For example, the following will set the default certificate_data dictionary to 100M and will introduce a new dictionary called my_custom_plugin:

          lua-shared-dicts: \"certificate_data: 100, my_custom_plugin: 5\"\n

          You can optionally set a size unit to allow for kilobyte-granularity. Allowed units are 'm' or 'k' (case-insensitive), and it defaults to MB if no unit is provided. Here is a similar example, but the my_custom_plugin dict is only 512KB.

          lua-shared-dicts: \"certificate_data: 100, my_custom_plugin: 512k\"\n


          "},{"location":"user-guide/nginx-configuration/configmap/#http-redirect-code","title":"http-redirect-code","text":"

          Sets the HTTP status code to be used in redirects. Supported codes are 301, 302, 307 and 308. default: 308

          Why is the default code 308?

          RFC 7238 was created to define the 308 (Permanent Redirect) status code that is similar to 301 (Moved Permanently) but it keeps the payload in the redirect. This is important if we send a redirect in methods like POST.

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-buffering","title":"proxy-buffering","text":"

          Enables or disables buffering of responses from the proxied server.

          "},{"location":"user-guide/nginx-configuration/configmap/#limit-req-status-code","title":"limit-req-status-code","text":"

          Sets the status code to return in response to rejected requests. default: 503

          "},{"location":"user-guide/nginx-configuration/configmap/#limit-conn-status-code","title":"limit-conn-status-code","text":"

          Sets the status code to return in response to rejected connections. default: 503

          "},{"location":"user-guide/nginx-configuration/configmap/#enable-syslog","title":"enable-syslog","text":"

          Enable syslog feature for access log and error log. default: false

          "},{"location":"user-guide/nginx-configuration/configmap/#syslog-host","title":"syslog-host","text":"

          Sets the address of syslog server. The address can be specified as a domain name or IP address.

          "},{"location":"user-guide/nginx-configuration/configmap/#syslog-port","title":"syslog-port","text":"

          Sets the port of syslog server. default: 514

          "},{"location":"user-guide/nginx-configuration/configmap/#no-tls-redirect-locations","title":"no-tls-redirect-locations","text":"

          A comma-separated list of locations on which http requests will never get redirected to their https counterpart. default: "/.well-known/acme-challenge"

          "},{"location":"user-guide/nginx-configuration/configmap/#global-allowed-response-headers","title":"global-allowed-response-headers","text":"

          A comma-separated list of response headers that are allowed inside the custom headers annotations.

          "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-url","title":"global-auth-url","text":"

          A URL to an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-url. Locations that should not get authenticated can be listed using no-auth-locations (see no-auth-locations). In addition, each service can be excluded from authentication via the annotation enable-global-auth set to "false". default: ""

          References: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#external-authentication
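
          As a hedged sketch, a hypothetical external auth service could be wired up globally like this (all URLs and paths are placeholders):

          data:
            global-auth-url: "https://auth.example.com/validate"
            global-auth-signin: "https://auth.example.com/signin"
            # exempt ACME challenges and a health endpoint from authentication
            no-auth-locations: "/.well-known/acme-challenge,/healthz"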

          "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-method","title":"global-auth-method","text":"

          An HTTP method to use for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-method. default: ""

          "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-signin","title":"global-auth-signin","text":"

          Sets the location of the error page for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-signin. default: ""

          "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-signin-redirect-param","title":"global-auth-signin-redirect-param","text":"

          Sets the query parameter in the error page signin URL which contains the original URL of the request that failed authentication. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-signin-redirect-param. default: "rd"

          "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-response-headers","title":"global-auth-response-headers","text":"

          Sets the headers to pass to the backend once the authentication request completes. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-response-headers. default: ""

          "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-request-redirect","title":"global-auth-request-redirect","text":"

          Sets the X-Auth-Request-Redirect header value. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-request-redirect. default: ""

          "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-snippet","title":"global-auth-snippet","text":"

          Sets a custom snippet to use with external authentication. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-snippet. default: ""

          "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-cache-key","title":"global-auth-cache-key","text":"

          Enables caching for global auth requests. Specify a lookup key for auth responses, e.g. $remote_user$http_authorization.

          "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-cache-duration","title":"global-auth-cache-duration","text":"

          Set a caching time for auth responses based on their response codes, e.g. 200 202 30m. See proxy_cache_valid for details. You may specify multiple, comma-separated values: 200 202 10m, 401 5m. defaults to 200 202 401 5m.
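
          A sketch combining the two caching keys (the durations are illustrative):

          data:
            # key cache entries by user and Authorization header
            global-auth-cache-key: "$remote_user$http_authorization"
            # cache 200/202 responses for 10m and 401 responses for 1m
            global-auth-cache-duration: "200 202 10m, 401 1m"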

          "},{"location":"user-guide/nginx-configuration/configmap/#global-auth-always-set-cookie","title":"global-auth-always-set-cookie","text":"

          Always set a cookie returned by auth request. By default, the cookie will be set only if an upstream reports with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308. default: false

          "},{"location":"user-guide/nginx-configuration/configmap/#no-auth-locations","title":"no-auth-locations","text":"

          A comma-separated list of locations that should not get authenticated. default: "/.well-known/acme-challenge"

          "},{"location":"user-guide/nginx-configuration/configmap/#block-cidrs","title":"block-cidrs","text":"

          A comma-separated list of IP addresses (or subnets), requests from which have to be blocked globally.

          References: https://nginx.org/en/docs/http/ngx_http_access_module.html#deny

          "},{"location":"user-guide/nginx-configuration/configmap/#block-user-agents","title":"block-user-agents","text":"

          A comma-separated list of User-Agent values, requests from which have to be blocked globally. Both full strings and regular expressions can be used. More details about valid patterns can be found in the map Nginx directive documentation.

          References: https://nginx.org/en/docs/http/ngx_http_map_module.html#map

          "},{"location":"user-guide/nginx-configuration/configmap/#block-referers","title":"block-referers","text":"

          A comma-separated list of Referer values, requests from which have to be blocked globally. Both full strings and regular expressions can be used. More details about valid patterns can be found in the map Nginx directive documentation.

          References: https://nginx.org/en/docs/http/ngx_http_map_module.html#map
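
          A sketch of the three block-* keys together; the CIDRs use documentation ranges and the patterns are placeholders:

          data:
            block-cidrs: "192.0.2.0/24,198.51.100.7"
            # a literal string and a case-insensitive regex (map syntax)
            block-user-agents: "BadBot/1.0,~*scraper"
            block-referers: "~*spam\\.example\\.com"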

          "},{"location":"user-guide/nginx-configuration/configmap/#proxy-ssl-location-only","title":"proxy-ssl-location-only","text":"

          Set if proxy-ssl parameters should be applied only on locations and not on servers. default: is disabled

          "},{"location":"user-guide/nginx-configuration/configmap/#default-type","title":"default-type","text":"

          Sets the default MIME type of a response. default: text/html

          References: https://nginx.org/en/docs/http/ngx_http_core_module.html#default_type

          "},{"location":"user-guide/nginx-configuration/configmap/#service-upstream","title":"service-upstream","text":"

          Set if the service's Cluster IP and port should be used instead of a list of all endpoints. This can be overwritten by an annotation on an Ingress rule. default: "false"

          "},{"location":"user-guide/nginx-configuration/configmap/#ssl-reject-handshake","title":"ssl-reject-handshake","text":"

          Set to reject the SSL handshake for an unknown virtual host. This parameter helps to mitigate fingerprinting via the ingress default certificate. default: "false"

          References: https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_reject_handshake

          "},{"location":"user-guide/nginx-configuration/configmap/#debug-connections","title":"debug-connections","text":"

          Enables debugging log for selected client connections. default: ""

          References: http://nginx.org/en/docs/ngx_core_module.html#debug_connection

          "},{"location":"user-guide/nginx-configuration/configmap/#strict-validate-path-type","title":"strict-validate-path-type","text":"

          Ingress objects contain a field called pathType that defines the proxy behavior. It can be Exact, Prefix or ImplementationSpecific.

          When pathType is configured as Exact or Prefix, a stricter validation applies, allowing only paths starting with "/" and containing only alphanumeric characters, "-", "_" and additional "/".

          When this option is enabled, the validation happens in the Admission Webhook, and any Ingress that does not use pathType ImplementationSpecific and contains invalid characters is denied.

          This means that Ingress objects that rely on paths containing regex characters should use ImplementationSpecific pathType.

          The cluster admin should establish validation rules using mechanisms like Open Policy Agent to validate that only authorized users can use ImplementationSpecific pathType and that only the authorized characters can be used.

          default: \"true\"

          "},{"location":"user-guide/nginx-configuration/configmap/#grpc-buffer-size-kb","title":"grpc-buffer-size-kb","text":"

          Sets the configuration for the gRPC Buffer Size parameter. If not set, the NGINX default is used.

          References: https://nginx.org/en/docs/http/ngx_http_grpc_module.html#grpc_buffer_size

          "},{"location":"user-guide/nginx-configuration/custom-template/","title":"Custom NGINX template","text":"

          The NGINX template is located in the file /etc/nginx/template/nginx.tmpl.

          Using a volume, it is possible to use a custom template. This includes using a ConfigMap as the source of the template:

                  volumeMounts:
                    - mountPath: /etc/nginx/template
                      name: nginx-template-volume
                      readOnly: true
                volumes:
                  - name: nginx-template-volume
                    configMap:
                      name: nginx-template
                      items:
                        - key: nginx.tmpl
                          path: nginx.tmpl

          Please note the template is tied to the Go code. Do not change names in the variable $cfg.

          For more information about the template syntax please check the Go template package. In addition to the built-in functions provided by the Go package the following functions are also available:

          • empty: returns true if the specified parameter (string) is empty
          • contains: strings.Contains
          • hasPrefix: strings.HasPrefix
          • hasSuffix: strings.HasSuffix
          • toUpper: strings.ToUpper
          • toLower: strings.ToLower
          • split: strings.Split
          • quote: wraps a string in double quotes
          • buildLocation: helps to build the NGINX Location section in each server
          • buildProxyPass: builds the reverse proxy configuration
          • buildRateLimit: helps to build a limit zone inside a location if it contains a rate limit annotation

          TODO:

          • buildAuthLocation:
          • buildAuthResponseHeaders:
          • buildResolvers:
          • buildDenyVariable:
          • buildUpstreamName:
          • buildForwardedFor:
          • buildAuthSignURL:
          • buildNextUpstream:
          • filterRateLimits:
          • formatIP:
          • getenv:
          • getIngressInformation:
          • serverConfig:
          • isLocationAllowed:
          • isValidClientBodyBufferSize:
          "},{"location":"user-guide/nginx-configuration/log-format/","title":"Log format","text":"

          The default configuration uses a custom logging format to add additional information about upstreams, response time and status.

          log_format upstreaminfo
              '$remote_addr - $remote_user [$time_local] "$request" '
              '$status $body_bytes_sent "$http_referer" "$http_user_agent" '
              '$request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr '
              '$upstream_response_length $upstream_response_time $upstream_status $req_id';
          • $proxy_protocol_addr: remote address if proxy protocol is enabled
          • $remote_addr: the source IP address of the client
          • $remote_user: user name supplied with the Basic authentication
          • $time_local: local time in the Common Log Format
          • $request: full original request line
          • $status: response status
          • $body_bytes_sent: number of bytes sent to a client, not counting the response header
          • $http_referer: value of the Referer header
          • $http_user_agent: value of the User-Agent header
          • $request_length: request length (including request line, header, and request body)
          • $request_time: time elapsed since the first bytes were read from the client
          • $proxy_upstream_name: name of the upstream. The format is upstream-<namespace>-<service name>-<service port>
          • $proxy_alternative_upstream_name: name of the alternative upstream. The format is upstream-<namespace>-<service name>-<service port>
          • $upstream_addr: the IP address and port (or the path to the domain socket) of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas.
          • $upstream_response_length: the length of the response obtained from the upstream server
          • $upstream_response_time: time spent on receiving the response from the upstream server, in seconds with millisecond resolution
          • $upstream_status: status code of the response obtained from the upstream server
          • $req_id: value of the X-Request-ID HTTP header. If the header is not set, a randomly generated ID.

          Additional available variables:

          • $namespace: namespace of the ingress
          • $ingress_name: name of the ingress
          • $service_name: name of the service
          • $service_port: port of the service

          Sources:

          • Upstream variables
          • Embedded variables
          "},{"location":"user-guide/third-party-addons/modsecurity/","title":"ModSecurity Web Application Firewall","text":"

          ModSecurity is an open-source, cross-platform web application firewall (WAF) engine for Apache, IIS and Nginx, developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - https://www.modsecurity.org

          The ModSecurity-nginx connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).

          The default ModSecurity configuration file is located in /etc/nginx/modsecurity/modsecurity.conf. This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration. To enable the ModSecurity feature we need to specify enable-modsecurity: "true" in the configuration configmap.

          Note: the default configuration uses detection only, because that minimizes the chances of post-installation disruption. Due to the value of the setting SecAuditLogType=Concurrent, the ModSecurity log is stored in multiple files inside the directory /var/log/audit. The default Serial value in SecAuditLogType can impact performance.

          The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts. The directory /etc/nginx/owasp-modsecurity-crs contains the OWASP ModSecurity Core Rule Set repository. Using enable-owasp-modsecurity-crs: "true" we enable the use of the rules.
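
          Put together, a minimal ConfigMap data sketch enabling both the engine and the rule set:

          data:
            enable-modsecurity: "true"
            enable-owasp-modsecurity-crs: "true"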

          "},{"location":"user-guide/third-party-addons/modsecurity/#supported-annotations","title":"Supported annotations","text":"

          For more info on supported annotations, please see annotations/#modsecurity

          "},{"location":"user-guide/third-party-addons/modsecurity/#example-of-using-modsecurity-with-plugins-via-the-helm-chart","title":"Example of using ModSecurity with plugins via the helm chart","text":"

          Suppose you have a ConfigMap that contains the contents of the nextcloud-rule-exclusions plugin like this:

          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: modsecurity-plugins
          data:
            empty-after.conf: |
              # no data
            empty-before.conf: |
              # no data
            empty-config.conf: |
              # no data
            nextcloud-rule-exclusions-before.conf: |
              # this is just a snippet
              # find the full file at https://github.com/coreruleset/nextcloud-rule-exclusions-plugin
              #
              # [ File Manager ]
              # The web interface uploads files, and interacts with the user.
              SecRule REQUEST_FILENAME "@contains /remote.php/webdav" \
                  "id:9508102,\
                  phase:1,\
                  pass,\
                  t:none,\
                  nolog,\
                  ver:'nextcloud-rule-exclusions-plugin/1.2.0',\
                  ctl:ruleRemoveById=920420,\
                  ctl:ruleRemoveById=920440,\
                  ctl:ruleRemoveById=941000-942999,\
                  ctl:ruleRemoveById=951000-951999,\
                  ctl:ruleRemoveById=953100-953130,\
                  ctl:ruleRemoveByTag=attack-injection-php"

          If you're using the helm chart, you can pass in the following parameters in your values.yaml:

          controller:
            config:
              # Enables Modsecurity
              enable-modsecurity: "true"

              # Update ModSecurity config and rules
              modsecurity-snippet: |
                # this enables the mod security nextcloud plugin
                Include /etc/nginx/owasp-modsecurity-crs/plugins/nextcloud-rule-exclusions-before.conf

                # this enables the default OWASP Core Rule Set
                Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf

                # Enable prevention mode. Options: DetectionOnly,On,Off (default is DetectionOnly)
                SecRuleEngine On

                # Enable scanning of the request body
                SecRequestBodyAccess On

                # Enable XML and JSON parsing
                SecRule REQUEST_HEADERS:Content-Type "(?:text|application(?:/soap\+|/)|application/xml)/" \
                  "id:200000,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=XML"

                SecRule REQUEST_HEADERS:Content-Type "application/json" \
                  "id:200001,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=JSON"

                # Reject if larger (we could also let it pass with ProcessPartial)
                SecRequestBodyLimitAction Reject

                # Send ModSecurity audit logs to the stdout (only for rejected requests)
                SecAuditLog /dev/stdout

                # format the logs in JSON
                SecAuditLogFormat JSON

                # could be On/Off/RelevantOnly
                SecAuditEngine RelevantOnly

            # Add a volume for the plugins directory
            extraVolumes:
              - name: plugins
                configMap:
                  name: modsecurity-plugins

            # override the /etc/nginx/owasp-modsecurity-crs/plugins with your ConfigMap
            extraVolumeMounts:
              - name: plugins
                mountPath: /etc/nginx/owasp-modsecurity-crs/plugins
          "},{"location":"user-guide/third-party-addons/opentelemetry/","title":"OpenTelemetry","text":"

          Enables distributed tracing of requests served by NGINX via The OpenTelemetry Project.

          Using the third-party module opentelemetry-cpp-contrib/nginx, the Ingress-Nginx Controller can configure NGINX to enable OpenTelemetry instrumentation. By default this feature is disabled.

          Check out this demo showcasing OpenTelemetry in Ingress NGINX. The video provides an overview and practical demonstration of how OpenTelemetry can be utilized in Ingress NGINX for observability and monitoring purposes.

          Demo: OpenTelemetry in Ingress NGINX.

          "},{"location":"user-guide/third-party-addons/opentelemetry/#usage","title":"Usage","text":"

          To enable the instrumentation we must enable OpenTelemetry in the configuration ConfigMap:

          data:
            enable-opentelemetry: "true"

          To enable or disable instrumentation for a single Ingress, use the enable-opentelemetry annotation:

          kind: Ingress
          metadata:
            annotations:
              nginx.ingress.kubernetes.io/enable-opentelemetry: "true"

          We must also set the host to use when uploading traces:

          otlp-collector-host: \"otel-coll-collector.otel.svc\"\n
          NOTE: While the option is called otlp-collector-host, you can point it at any backend that receives OTLP over gRPC.

          Next you will need to deploy a distributed telemetry system which uses OpenTelemetry. opentelemetry-collector, Jaeger, Tempo, and Zipkin have been tested.

          Other optional configuration options:

          # specifies the name to use for the server span
          opentelemetry-operation-name

          # sets whether or not to trust incoming telemetry spans
          opentelemetry-trust-incoming-span

          # specifies the port to use when uploading traces, Default: 4317
          otlp-collector-port

          # specifies the service name to use for any traces created, Default: nginx
          otel-service-name

          # The maximum queue size. After the size is reached data are dropped.
          otel-max-queuesize

          # The delay interval in milliseconds between two consecutive exports.
          otel-schedule-delay-millis

          # How long the export can run before it is cancelled.
          otel-schedule-delay-millis

          # The maximum batch size of every export. It must be smaller or equal to maxQueueSize.
          otel-max-export-batch-size

          # specifies sample rate for any traces created, Default: 0.01
          otel-sampler-ratio

          # specifies the sampler to be used when sampling traces.
          # The available samplers are: AlwaysOn, AlwaysOff, TraceIdRatioBased, Default: AlwaysOff
          otel-sampler

          # Uses sampler implementation which by default will take a sample if parent Activity is sampled, Default: false
          otel-sampler-parent-based

          Note that you can also set whether to trust incoming spans (global default is true) per-location using annotations like the following:

          kind: Ingress
          metadata:
            annotations:
              nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span: "true"

          "},{"location":"user-guide/third-party-addons/opentelemetry/#examples","title":"Examples","text":"

          The following examples show how to deploy and test different distributed telemetry systems. These examples can be performed using Docker Desktop.

          The esigo/nginx-example GitHub repository contains an example of a simple hello service:

          graph TB
              subgraph Browser
              start["http://esigo.dev/hello/nginx"]
              end

              subgraph app
                  sa[service-a]
                  sb[service-b]
                  sa --> |name: nginx| sb
                  sb --> |hello nginx!| sa
              end

              subgraph otel
                  otc["Otel Collector"]
              end

              subgraph observability
                  tempo["Tempo"]
                  grafana["Grafana"]
                  backend["Jaeger"]
                  zipkin["Zipkin"]
              end

              subgraph ingress-nginx
                  ngx[nginx]
              end

              subgraph ngx[nginx]
                  ng[nginx]
                  om[OpenTelemetry module]
              end

              subgraph Node
                  app
                  otel
                  observability
                  ingress-nginx
                  om --> |otlp-gRPC| otc --> |jaeger| backend
                  otc --> |zipkin| zipkin
                  otc --> |otlp-gRPC| tempo --> grafana
                  sa --> |otlp-gRPC| otc
                  sb --> |otlp-gRPC| otc
                  start --> ng --> sa
              end

          To install the example and collectors run:

          1. Enable Ingress addon with:

              opentelemetry:
                enabled: true
                image: registry.k8s.io/ingress-nginx/opentelemetry-1.25.3:v20240813-b933310d@sha256:f7604ac0547ed64d79b98d92133234e66c2c8aade3c1f4809fed5eec1fb7f922
                containerSecurityContext:
                  allowPrivilegeEscalation: false
          2. Enable OpenTelemetry and set the otlp-collector-host:

            $ echo '
              apiVersion: v1
              kind: ConfigMap
              data:
                enable-opentelemetry: "true"
                opentelemetry-config: "/etc/nginx/opentelemetry.toml"
                opentelemetry-operation-name: "HTTP $request_method $service_name $uri"
                opentelemetry-trust-incoming-span: "true"
                otlp-collector-host: "otel-coll-collector.otel.svc"
                otlp-collector-port: "4317"
                otel-max-queuesize: "2048"
                otel-schedule-delay-millis: "5000"
                otel-max-export-batch-size: "512"
                otel-service-name: "nginx-proxy" # Opentelemetry resource name
                otel-sampler: "AlwaysOn" # Also: AlwaysOff, TraceIdRatioBased
                otel-sampler-ratio: "1.0"
                otel-sampler-parent-based: "false"
              metadata:
                name: ingress-nginx-controller
                namespace: ingress-nginx
              ' | kubectl replace -f -
          3. Deploy otel-collector, grafana and Jaeger backend:

            # add helm charts needed for grafana and OpenTelemetry collector
            helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
            helm repo add grafana https://grafana.github.io/helm-charts
            helm repo update
            # deploy cert-manager needed for OpenTelemetry collector operator
            kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.yaml
            # create observability namespace
            kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/namespace.yaml
            # install OpenTelemetry collector operator
            helm upgrade --install otel-collector-operator -n otel --create-namespace open-telemetry/opentelemetry-operator
            # deploy OpenTelemetry collector
            kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/collector.yaml
            # deploy Jaeger all-in-one
            kubectl apply -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.37.0/jaeger-operator.yaml -n observability
            kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/jaeger.yaml -n observability
            # deploy zipkin
            kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/zipkin.yaml -n observability
            # deploy tempo and grafana
            helm upgrade --install tempo grafana/tempo --create-namespace -n observability
            helm upgrade -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/grafana/grafana-values.yaml --install grafana grafana/grafana --create-namespace -n observability
          4. Build and deploy demo app:

            # build images
            make images

            # deploy demo app:
            make deploy-app
          5. Make a few requests to the Service:

            kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8090:80
            curl http://esigo.dev:8090/hello/nginx


            StatusCode        : 200
            StatusDescription : OK
            Content           : {"v":"hello nginx!"}

            RawContent        : HTTP/1.1 200 OK
                                Connection: keep-alive
                                Content-Length: 21
                                Content-Type: text/plain; charset=utf-8
                                Date: Mon, 10 Oct 2022 17:43:33 GMT

                                {"v":"hello nginx!"}

            Forms             : {}
            Headers           : {[Connection, keep-alive], [Content-Length, 21], [Content-Type, text/plain; charset=utf-8], [Date,
                                Mon, 10 Oct 2022 17:43:33 GMT]}
            Images            : {}
            InputFields       : {}
            Links             : {}
            ParsedHtml        : System.__ComObject
            RawContentLength  : 21
          6. View the Grafana UI:

            kubectl port-forward --namespace=observability service/grafana 3000:80
            In the Grafana interface we can see the details:

          7. View the Jaeger UI:

            kubectl port-forward --namespace=observability service/jaeger-all-in-one-query 16686:16686
            In the Jaeger interface we can see the details:

          8. View the Zipkin UI:

            kubectl port-forward --namespace=observability service/zipkin 9411:9411
            In the Zipkin interface we can see the details:

          "},{"location":"user-guide/third-party-addons/opentelemetry/#migration-from-opentracing-jaeger-zipkin-and-datadog","title":"Migration from OpenTracing, Jaeger, Zipkin and Datadog","text":"

          If you are migrating from OpenTracing, Jaeger, Zipkin, or Datadog to OpenTelemetry, you may need to update various annotations and configurations. Here are the mappings for common annotations and configurations:

          "},{"location":"user-guide/third-party-addons/opentelemetry/#annotations","title":"Annotations","text":"Legacy OpenTelemetry nginx.ingress.kubernetes.io/enable-opentracing nginx.ingress.kubernetes.io/enable-opentelemetry nginx.ingress.kubernetes.io/opentracing-trust-incoming-span nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span"},{"location":"user-guide/third-party-addons/opentelemetry/#configs","title":"Configs","text":"Legacy OpenTelemetry opentracing-operation-name opentelemetry-operation-name opentracing-location-operation-name opentelemetry-operation-name opentracing-trust-incoming-span opentelemetry-trust-incoming-span zipkin-collector-port otlp-collector-port zipkin-service-name otel-service-name zipkin-sample-rate otel-sampler-ratio jaeger-collector-port otlp-collector-port jaeger-endpoint otlp-collector-port, otlp-collector-host jaeger-service-name otel-service-name jaeger-propagation-format N/A jaeger-sampler-type otel-sampler jaeger-sampler-param otel-sampler jaeger-sampler-host N/A jaeger-sampler-port N/A jaeger-trace-context-header-name N/A jaeger-debug-header N/A jaeger-baggage-header N/A jaeger-tracer-baggage-header-prefix N/A datadog-collector-port otlp-collector-port datadog-service-name otel-service-name datadog-environment N/A datadog-operation-name-override N/A datadog-priority-sampling otel-sampler datadog-sample-rate otel-sampler-ratio"}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index 1a1c88d5d..f1c157624 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -2,322 +2,322 @@ https://kubernetes.github.io/ingress-nginx/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/e2e-tests/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/faq/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/how-it-works/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/kubectl-plugin/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/lua_tests/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/troubleshooting/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/deploy/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/deploy/baremetal/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/deploy/hardening-guide/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/deploy/rbac/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/deploy/upgrade/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/developer-guide/code-overview/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/developer-guide/getting-started/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/enhancements/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/enhancements/20190724-only-dynamic-ssl/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/enhancements/20190815-zone-aware-routing/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/enhancements/20231001-split-containers/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/enhancements/YYYYMMDD-kep-template/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/examples/ - 2024-09-07 + 2024-09-08 daily https://kubernetes.github.io/ingress-nginx/examples/PREREQUISITES/ - 2024-09-07 + 