diff --git a/examples/affinity/cookie/index.html b/examples/affinity/cookie/index.html
index 1ba30f097..99d7c0d15 100644
--- a/examples/affinity/cookie/index.html
+++ b/examples/affinity/cookie/index.html
@@ -129,7 +129,7 @@
@@ -1172,48 +1172,53 @@

-Sticky Session
-This example demonstrates how to achieve session affinity using cookies
+Sticky sessions
+This example demonstrates how to achieve session affinity using cookies.
Deployment

-Session stickiness is achieved through 3 annotations on the Ingress, as shown in the example.
+Session affinity can be configured using the following annotations:
-Name | Description | Values
+Name | Description | Value
-nginx.ingress.kubernetes.io/affinity | Sets the affinity type | string (in NGINX only cookie is possible)
+nginx.ingress.kubernetes.io/affinity | Type of the affinity, set this to cookie to enable session affinity | string (NGINX only supports cookie)
-nginx.ingress.kubernetes.io/session-cookie-name | Name of the cookie that will be used | string (default to INGRESSCOOKIE)
+nginx.ingress.kubernetes.io/session-cookie-name | Name of the cookie that will be created | string (defaults to INGRESSCOOKIE)
-nginx.ingress.kubernetes.io/session-cookie-expires | The value is a date as UNIX timestamp that the cookie will expire on, it corresponds to cookie Expires directive | number of seconds
+nginx.ingress.kubernetes.io/session-cookie-path | Path that will be set on the cookie (required if your Ingress paths use regular expressions) | string (defaults to the currently matched path)
-nginx.ingress.kubernetes.io/session-cookie-max-age | Number of seconds until the cookie expires that will correspond to cookie Max-Age directive | number of seconds
+nginx.ingress.kubernetes.io/session-cookie-max-age | Time until the cookie expires, corresponds to the Max-Age cookie directive | number of seconds
+nginx.ingress.kubernetes.io/session-cookie-expires | Legacy version of the previous annotation for compatibility with older browsers, generates an Expires cookie directive by adding the seconds to the current date | number of seconds
-You can create the ingress to test this
+You can create the example Ingress to test this:
kubectl create -f ingress.yaml
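The ingress.yaml used above is not reproduced in this example. As a rough sketch, an Ingress combining the annotations from the table might look like the following; the host, backend service, and cookie settings are illustrative stand-ins, not the actual contents of the file:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: nginx-test
      annotations:
        # enable cookie-based session affinity
        nginx.ingress.kubernetes.io/affinity: "cookie"
        # name of the cookie NGINX creates (defaults to INGRESSCOOKIE)
        nginx.ingress.kubernetes.io/session-cookie-name: "route"
        # keep the affinity cookie for 48 hours
        nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    spec:
      rules:
      - host: stickyingress.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: http-svc
              servicePort: 80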
 

Validation

-You can confirm that the Ingress works.
-$ kubectl describe ing nginx-test
+You can confirm that the Ingress works:
+$ kubectl describe ing nginx-test
 Name:           nginx-test
 Namespace:      default
 Address:        
@@ -1246,13 +1251,14 @@
 ETag: "58875e6b-264"
 Accept-Ranges: bytes
 
-In the example above, you can see a line containing the 'Set-Cookie: INGRESSCOOKIE' setting the right defined stickiness cookie.
-This cookie is created by NGINX, it contains the hash of the used upstream in that request and has an expires.
-If the user changes this cookie, NGINX creates a new one and redirect the user to another upstream.

-If the backend pool grows up NGINX will keep sending the requests through the same server of the first request, even if it's overloaded.
-When the backend server is removed, the requests are then re-routed to another upstream server and NGINX creates a new cookie, as the previous hash became invalid.
-When you have more than one Ingress Object pointing to the same Service, but one containing affinity configuration and other don't, the first created Ingress will be used.
-This means that you can face the situation that you've configured Session Affinity in one Ingress and it doesn't reflects in NGINX configuration, because there is another Ingress Object pointing to the same service that doesn't configure this.

+In the example above, you can see that the response contains a Set-Cookie header with the settings we have defined.
+This cookie is created by NGINX and contains a randomly generated key corresponding to the upstream used for that request (selected using consistent hashing); it also carries an Expires directive.
+If the user changes this cookie, NGINX creates a new one and redirects the user to another upstream.
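To watch the affinity from a client, one option is to replay a request with and without the cookie; the hostname, address, and cookie value below are placeholders rather than output from this example:

    # First request: NGINX picks an upstream and returns the affinity cookie
    curl -I -H "Host: stickyingress.example.com" http://<ingress-address>/

    # Follow-up request: send the cookie back, and NGINX routes to the same
    # upstream until the cookie expires or its value is changed
    curl -I -H "Host: stickyingress.example.com" -b "INGRESSCOOKIE=<value from first response>" http://<ingress-address>/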

+If the backend pool grows, NGINX will keep sending a client's requests to the server chosen for that client's first request, even if that server is overloaded.
+When a backend server is removed, its requests are re-routed to another upstream server. This does not require the cookie to be updated, because consistent hashing of the cookie's key now simply selects one of the remaining upstreams.
+When more than one Ingress points to the same Service, but only one of them carries the affinity configuration, the first created Ingress will be used.
+This means you can configure session affinity on one Ingress and see no effect in the NGINX configuration, because another, older Ingress pointing to the same Service does not configure it.
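Because of this, when affinity seems to be ignored it is worth checking whether an older Ingress references the same Service. Creation order can be listed with, for example:

    kubectl get ingress --all-namespaces --sort-by=.metadata.creationTimestamp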

diff --git a/examples/customization/custom-errors/custom-default-backend.yaml b/examples/customization/custom-errors/custom-default-backend.yaml
index 0d6f2cd7a..70096bdbe 100644
--- a/examples/customization/custom-errors/custom-default-backend.yaml
+++ b/examples/customization/custom-errors/custom-default-backend.yaml
@@ -15,9 +15,8 @@ spec:
     targetPort: 8080
     name: http
 ---
-apiVersion: apps/v1beta2
+apiVersion: apps/v1
 kind: Deployment
-apiVersion: apps/v1beta2
 metadata:
   name: nginx-errors
   labels:
diff --git a/search/search_index.json b/search/search_index.json
index 329387721..05f093d67 100644
$ cd $GOPATH /src/k8s.io/ingress-nginx $ dep ensure $ dep ensure -update $ dep prune Building \u00b6 All ingress controllers are built through a Makefile. Depending on your requirements you can build a raw server binary, a local container image, or push an image to a remote repository. In order to use your local Docker, you may need to set the following environment variables: # \"gcloud docker\" ( default ) or \"docker\" $ export DOCKER = # \"quay.io/kubernetes-ingress-controller\" ( default ) , \"index.docker.io\" , or your own registry $ export REGISTRY = To find the registry simply run: docker system info | grep Registry Nginx Controller \u00b6 Build a raw server binary $ make build TODO : add more specific instructions needed for raw server binary. Build a local container image $ TAG = REGISTRY = $USER /ingress-controller make docker-build Push the container image to a remote repository $ TAG = REGISTRY = $USER /ingress-controller make docker-push Deploying \u00b6 There are several ways to deploy the ingress controller onto a cluster. Please check the deployment guide Testing \u00b6 To run unit-tests, just run $ cd $GOPATH /src/k8s.io/ingress-nginx $ make test If you have access to a Kubernetes cluster, you can also run e2e tests using ginkgo. $ cd $GOPATH /src/k8s.io/ingress-nginx $ make e2e-test To run unit-tests for lua code locally, run: $ cd $GOPATH /src/k8s.io/ingress-nginx $ ./rootfs/etc/nginx/lua/test/up.sh $ make lua-test Lua tests are located in $GOPATH/src/k8s.io/ingress-nginx/rootfs/etc/nginx/lua/test . When creating a new test file it must follow the naming convention _test.lua or it will be ignored. Releasing \u00b6 All Makefiles will produce a release binary, as shown above. To publish this to a wider Kubernetes user base, push the image to a container registry, like gcr.io . All release images are hosted under gcr.io/google_containers and tagged according to a semver scheme. An example release might look like: $ make release Please follow these guidelines to cut a release: Update the release page with a short description of the major changes that correspond to a given image tag. Cut a release branch, if appropriate. Release branches follow the format of controller-release-version . Typically, pre-releases are cut from HEAD. All major feature work is done in HEAD. Specific bug fixes are cherry-picked into a release branch. If you're not confident about the stability of the code, tag it as alpha or beta. Typically, a release branch should have stable code.","title":"Development"},{"location":"development/#developing-for-nginx-ingress-controller","text":"This document explains how to get started with developing for NGINX Ingress controller. It includes how to build, test, and release ingress controllers.","title":"Developing for NGINX Ingress Controller"},{"location":"development/#quick-start","text":"","title":"Quick Start"},{"location":"development/#getting-the-code","text":"The code must be checked out as a subdirectory of k8s.io, and not github.com. mkdir -p $GOPATH/src/k8s.io cd $GOPATH/src/k8s.io # Replace \"$YOUR_GITHUB_USERNAME\" below with your github username git clone https://github.com/$YOUR_GITHUB_USERNAME/ingress-nginx.git cd ingress-nginx","title":"Getting the code"},{"location":"development/#initial-developer-environment-build","text":"Prequisites : Minikube must be installed. See releases for installation instructions. 
If you are using MacOS and deploying to minikube , the following command will build the local nginx controller container image and deploy the ingress controller onto a minikube cluster with RBAC enabled in the namespace ingress-nginx : $ make dev-env","title":"Initial developer environment build"},{"location":"development/#updating-the-deployment","text":"The nginx controller container image can be rebuilt using: $ ARCH = amd64 TAG = dev REGISTRY = $USER /ingress-controller make build container The image will only be used by pods created after the rebuild. To delete old pods which will cause new ones to spin up: $ kubectl get pods -n ingress-nginx $ kubectl delete pod -n ingress-nginx nginx-ingress-controller-","title":"Updating the deployment"},{"location":"development/#dependencies","text":"The build uses dependencies in the vendor directory, which must be installed before building a binary/image. Occasionally, you might need to update the dependencies. This guide requires you to install the dep dependency tool. Check the version of dep you are using and make sure it is up to date. $ dep version dep: version : devel build date : git hash : go version : go1.9 go compiler : gc platform : linux/amd64 If you have an older version of dep , you can update it as follows: $ go get -u github.com/golang/dep This will automatically save the dependencies to the vendor/ directory. $ cd $GOPATH /src/k8s.io/ingress-nginx $ dep ensure $ dep ensure -update $ dep prune","title":"Dependencies"},{"location":"development/#building","text":"All ingress controllers are built through a Makefile. Depending on your requirements you can build a raw server binary, a local container image, or push an image to a remote repository. In order to use your local Docker, you may need to set the following environment variables: # \"gcloud docker\" ( default ) or \"docker\" $ export DOCKER = # \"quay.io/kubernetes-ingress-controller\" ( default ) , \"index.docker.io\" , or your own registry $ export REGISTRY = To find the registry simply run: docker system info | grep Registry","title":"Building"},{"location":"development/#nginx-controller","text":"Build a raw server binary $ make build TODO : add more specific instructions needed for raw server binary. Build a local container image $ TAG = REGISTRY = $USER /ingress-controller make docker-build Push the container image to a remote repository $ TAG = REGISTRY = $USER /ingress-controller make docker-push","title":"Nginx Controller"},{"location":"development/#deploying","text":"There are several ways to deploy the ingress controller onto a cluster. Please check the deployment guide","title":"Deploying"},{"location":"development/#testing","text":"To run unit-tests, just run $ cd $GOPATH /src/k8s.io/ingress-nginx $ make test If you have access to a Kubernetes cluster, you can also run e2e tests using ginkgo. $ cd $GOPATH /src/k8s.io/ingress-nginx $ make e2e-test To run unit-tests for lua code locally, run: $ cd $GOPATH /src/k8s.io/ingress-nginx $ ./rootfs/etc/nginx/lua/test/up.sh $ make lua-test Lua tests are located in $GOPATH/src/k8s.io/ingress-nginx/rootfs/etc/nginx/lua/test . When creating a new test file it must follow the naming convention _test.lua or it will be ignored.","title":"Testing"},{"location":"development/#releasing","text":"All Makefiles will produce a release binary, as shown above. To publish this to a wider Kubernetes user base, push the image to a container registry, like gcr.io . 
All release images are hosted under gcr.io/google_containers and tagged according to a semver scheme. An example release might look like: $ make release Please follow these guidelines to cut a release: Update the release page with a short description of the major changes that correspond to a given image tag. Cut a release branch, if appropriate. Release branches follow the format of controller-release-version . Typically, pre-releases are cut from HEAD. All major feature work is done in HEAD. Specific bug fixes are cherry-picked into a release branch. If you're not confident about the stability of the code, tag it as alpha or beta. Typically, a release branch should have stable code.","title":"Releasing"},{"location":"how-it-works/","text":"How it works \u00b6 The objective of this document is to explain how the NGINX Ingress controller works, in particular how the NGINX model is built and why we need one. NGINX configuration \u00b6 The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app) . We use https://github.com/openresty/lua-nginx-module to achieve this. Check below to learn more about how it's done. NGINX model \u00b6 Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. To this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps to generate a point in time configuration file that reflects the state of the cluster. To get this object from the cluster, we use Kubernetes Informers , in particular, FilteredSharedInformer . This informers allows reacting to changes in using callbacks to individual changes when a new object is added, modified or removed. Unfortunately, there is no way to know if a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of cluster and compare it to the current model. If the new model equals to the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so we then send the new list of Endpoints to a Lua handler running inside Nginx using HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between running and new model is about more than just Endpoints we create a new NGINX configuration based on the new model, replace the current model and trigger a reload. One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions. The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template. Building the NGINX model \u00b6 Building a model is an expensive operation, for this reason, the use of the synchronization loop is a must. 
By using a work queue it is possible to not lose changes and remove the use of sync.Mutex to force a single execution of the sync loop and additionally it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller and one of the reasons for the work queue . Operations to build the model: Order Ingress rules by CreationTimestamp field, i.e., old rules first. If the same path for the same host is defined in more than one Ingress, the oldest rule wins. If more than one Ingress contains a TLS section for the same host, the oldest rule wins. If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins. Create a list of NGINX Servers (per hostname) Create a list of NGINX Upstreams If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. Annotations are applied to all the paths in the Ingress. Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses. When a reload is required \u00b6 The next list describes the scenarios when a reload is required: New Ingress Resource Created. TLS section is added to existing Ingress. Change in Ingress annotations that impacts more than just upstream configuration. For instance load-balance annotation does not require a reload. A path is added/removed from an Ingress. An Ingress, Service, Secret is removed. Some missing referenced object from the Ingress is available, like a Service or Secret. A Secret is updated. Avoiding reloads \u00b6 In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, basically, new changes do not replace worker processes. Avoiding reloads on Endpoints changes \u00b6 On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request Lua code running in balancer_by_lua context detects what endpoints it should choose upstream peer from and applies the configured load balancing algorithm to choose the peer. Then Nginx takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this includes annotation changes that affects only upstream configuration in Nginx as well. In a relatively big clusters with frequently deploying apps this feature saves significant number of Nginx reloads which can otherwise affect response latency, load balancing quality (after every reload Nginx resets the state of load balancing) and so on.","title":"How it works"},{"location":"how-it-works/#how-it-works","text":"The objective of this document is to explain how the NGINX Ingress controller works, in particular how the NGINX model is built and why we need one.","title":"How it works"},{"location":"how-it-works/#nginx-configuration","text":"The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). 
The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app) . We use https://github.com/openresty/lua-nginx-module to achieve this. Check below to learn more about how it's done.","title":"NGINX configuration"},{"location":"how-it-works/#nginx-model","text":"Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. To this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps to generate a point in time configuration file that reflects the state of the cluster. To get this object from the cluster, we use Kubernetes Informers , in particular, FilteredSharedInformer . This informers allows reacting to changes in using callbacks to individual changes when a new object is added, modified or removed. Unfortunately, there is no way to know if a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of cluster and compare it to the current model. If the new model equals to the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so we then send the new list of Endpoints to a Lua handler running inside Nginx using HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between running and new model is about more than just Endpoints we create a new NGINX configuration based on the new model, replace the current model and trigger a reload. One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions. The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.","title":"NGINX model"},{"location":"how-it-works/#building-the-nginx-model","text":"Building a model is an expensive operation, for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to not lose changes and remove the use of sync.Mutex to force a single execution of the sync loop and additionally it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller and one of the reasons for the work queue . Operations to build the model: Order Ingress rules by CreationTimestamp field, i.e., old rules first. If the same path for the same host is defined in more than one Ingress, the oldest rule wins. If more than one Ingress contains a TLS section for the same host, the oldest rule wins. If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins. Create a list of NGINX Servers (per hostname) Create a list of NGINX Upstreams If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. Annotations are applied to all the paths in the Ingress. 
Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.","title":"Building the NGINX model"},{"location":"how-it-works/#when-a-reload-is-required","text":"The next list describes the scenarios when a reload is required: New Ingress Resource Created. TLS section is added to existing Ingress. Change in Ingress annotations that impacts more than just upstream configuration. For instance load-balance annotation does not require a reload. A path is added/removed from an Ingress. An Ingress, Service, Secret is removed. Some missing referenced object from the Ingress is available, like a Service or Secret. A Secret is updated.","title":"When a reload is required"},{"location":"how-it-works/#avoiding-reloads","text":"In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, basically, new changes do not replace worker processes.","title":"Avoiding reloads"},{"location":"how-it-works/#avoiding-reloads-on-endpoints-changes","text":"On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request Lua code running in balancer_by_lua context detects what endpoints it should choose upstream peer from and applies the configured load balancing algorithm to choose the peer. Then Nginx takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this includes annotation changes that affects only upstream configuration in Nginx as well. In a relatively big clusters with frequently deploying apps this feature saves significant number of Nginx reloads which can otherwise affect response latency, load balancing quality (after every reload Nginx resets the state of load balancing) and so on.","title":"Avoiding reloads on Endpoints changes"},{"location":"troubleshooting/","text":"Troubleshooting \u00b6 Ingress-Controller Logs and Events \u00b6 There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information. 
Check the Ingress Resource Events $ kubectl get ing -n NAME HOSTS ADDRESS PORTS AGE cafe-ingress cafe.com 10.0.2.15 80 25s $ kubectl describe ing -n Name: cafe-ingress Namespace: default Address: 10.0.2.15 Default backend: default-http-backend:80 (172.17.0.5:8080) Rules: Host Path Backends ---- ---- -------- cafe.com /tea tea-svc:80 () /coffee coffee-svc:80 () Annotations: kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"/apis/extensions/v1beta1/namespaces/default/ingresses/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}} Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 1m nginx-ingress-controller Ingress default/cafe-ingress Normal UPDATE 58s nginx-ingress-controller Ingress default/cafe-ingress Check the Ingress Controller Logs $ kubectl get pods -n NAME READY STATUS RESTARTS AGE nginx-ingress-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl logs -n nginx-ingress-controller-67956bf89d-fv58j ------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.14.0 Build: git-734361d Repository: https://github.com/kubernetes/ingress-nginx ------------------------------------------------------------------------------- .... Check the Nginx Configuration $ kubectl get pods -n NAME READY STATUS RESTARTS AGE nginx-ingress-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl exec -it -n nginx-ingress-controller-67956bf89d-fv58j cat /etc/nginx/nginx.conf daemon off; worker_processes 2; pid /run/nginx.pid; worker_rlimit_nofile 523264; worker_shutdown_timeout 10s; events { multi_accept on; worker_connections 16384; use epoll; } http { .... Check if used Services Exist $ kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default coffee-svc ClusterIP 10.106.154.35 80/TCP 18m default kubernetes ClusterIP 10.96.0.1 443/TCP 30m default tea-svc ClusterIP 10.104.172.12 80/TCP 18m kube-system default-http-backend NodePort 10.108.189.236 80:30001/TCP 30m kube-system kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 30m kube-system kubernetes-dashboard NodePort 10.103.128.17 80:30000/TCP 30m Use the ingress-nginx kubectl plugin Install krew , then run $ ( set -x; cd \"$(mktemp -d)\" && curl -fsSLO \"https://github.com/kubernetes/ingress-nginx/releases/download/nginx-0.23.0/{ingress-nginx.yaml,kubectl-ingress_nginx-$(uname | tr '[:upper:]' '[:lower:]')-amd64.tar.gz}\" && kubectl krew install \\ --manifest=ingress-nginx.yaml --archive=kubectl-ingress_nginx-$(uname | tr '[:upper:]' '[:lower:]')-amd64.tar.gz ) to install the plugin. Then run $ kubectl ingress-nginx --help to make sure the plugin is properly installed and to get a list of commands. The plugin includes all of the commands present in the /dbg tool, plus a more detailed version of kubectl get ingresses available by runnning kubectl ingress-nginx ingresses . 
Use the /dbg Tool to Check Dynamic Configuration $ kubectl exec -n nginx-ingress-controller-67956bf89d-fv58j /dbg dbg is a tool for quickly inspecting the state of the nginx instance Usage: dbg [command] Available Commands: backends Inspect the dynamically-loaded backends information conf Dump the contents of /etc/nginx/nginx.conf general Output the general dynamic lua state help Help about any command Flags: -h, --help help for dbg Use \"dbg [command] --help\" for more information about a command. $ kubectl exec -n nginx-ingress-controller-67956bf89d-fv58j /dbg backends Inspect the dynamically-loaded backends information. Usage: dbg backends [command] Available Commands: all Output the all dynamic backend information as a JSON array get Output the backend information only for the backend that has this name list Output a newline-separated list of the backend names Flags: -h, --help help for backends Use \"dbg backends [command] --help\" for more information about a command. $ kubectl exec -n nginx-ingress-controller-67956bf89d-fv58j /dbg backends list coffee-svc-80 tea-svc-80 upstream-default-backend $ kubectl exec -n nginx-ingress-controller-67956bf89d-fv58j /dbg backends get coffee-svc-80 { \"endpoints\": [ { \"address\": \"10.1.1.112\", \"port\": \"8080\" }, { \"address\": \"10.1.1.119\", \"port\": \"8080\" }, { \"address\": \"10.1.1.121\", \"port\": \"8080\" } ], \"load-balance\": \"ewma\", \"name\": \"coffee-svc-80\", \"noServer\": false, \"port\": 0, \"secureCACert\": { \"caFilename\": \"\", \"pemSha\": \"\", \"secret\": \"\" }, \"service\": { \"metadata\": { \"creationTimestamp\": null }, \"spec\": { .... Debug Logging \u00b6 Using the flag --v=XX it is possible to increase the level of logging. This is performed by editing the deployment. $ kubectl get deploy -n NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE default-http-backend 1 1 1 1 35m nginx-ingress-controller 1 1 1 1 35m $ kubectl edit deploy -n nginx-ingress-controller # Add --v = X to \"- args\" , where X is an integer --v=2 shows details using diff about the changes in the configuration in nginx --v=3 shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format --v=5 configures NGINX in debug mode Authentication to the Kubernetes API Server \u00b6 A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file. Both authentications must work: +-------------+ service +------------+ | | authentication | | + apiserver +<-------------------+ ingress | | | | controller | +-------------+ +------------+ Service authentication The Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in two different ways: Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details. Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the --kubeconfig does not requires the flag --apiserver-host . 
The format of the file is identical to ~/.kube/config which is used by kubectl to connect to the API server. See 'kubeconfig' section for details. Using the flag --apiserver-host : Using this flag --apiserver-host=http://localhost:8080 it is possible to specify an unsecured API server or reach a remote kubernetes cluster using kubectl proxy . Please do not use this approach in production. In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left hand side. Kubernetes Workstation +---------------------------------------------------+ +------------------+ | | | | | +-----------+ apiserver +------------+ | | +------------+ | | | | proxy | | | | | | | | | apiserver | | ingress | | | | ingress | | | | | | controller | | | | controller | | | | | | | | | | | | | | | | | | | | | | | | | service account/ | | | | | | | | | | kubeconfig | | | | | | | | | +<-------------------+ | | | | | | | | | | | | | | | | | +------+----+ kubeconfig +------+-----+ | | +------+-----+ | | |<--------------------------------------------------------| | | | | | +---------------------------------------------------+ +------------------+ Service Account \u00b6 If using a service account to connect to the API server, Dashboard expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server. Verify with the following commands: # start a container that contains curl $ kubectl run test --image = tutum/curl -- sleep 10000 # check that container is running $ kubectl get pods NAME READY STATUS RESTARTS AGE test-701078429-s5kca 1/1 Running 0 16s # check if secret exists $ kubectl exec test-701078429-s5kca ls /var/run/secrets/kubernetes.io/serviceaccount/ ca.crt namespace token # get service IP of master $ kubectl get services NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes 10.0.0.1 443/TCP 1d # check base connectivity from cluster inside $ kubectl exec test-701078429-s5kca -- curl -k https://10.0.0.1 Unauthorized # connect using tokens $ TOKEN_VALUE = $( kubectl exec test-701078429-s5kca -- cat /var/run/secrets/kubernetes.io/serviceaccount/token ) $ echo $TOKEN_VALUE eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3Mi....9A $ kubectl exec test-701078429-s5kca -- curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H \"Authorization: Bearer $TOKEN_VALUE \" https://10.0.0.1 { \"paths\": [ \"/api\", \"/api/v1\", \"/apis\", \"/apis/apps\", \"/apis/apps/v1alpha1\", \"/apis/authentication.k8s.io\", \"/apis/authentication.k8s.io/v1beta1\", \"/apis/authorization.k8s.io\", \"/apis/authorization.k8s.io/v1beta1\", \"/apis/autoscaling\", \"/apis/autoscaling/v1\", \"/apis/batch\", \"/apis/batch/v1\", \"/apis/batch/v2alpha1\", \"/apis/certificates.k8s.io\", \"/apis/certificates.k8s.io/v1alpha1\", \"/apis/extensions\", \"/apis/extensions/v1beta1\", \"/apis/policy\", \"/apis/policy/v1alpha1\", \"/apis/rbac.authorization.k8s.io\", \"/apis/rbac.authorization.k8s.io/v1alpha1\", \"/apis/storage.k8s.io\", \"/apis/storage.k8s.io/v1beta1\", \"/healthz\", \"/healthz/ping\", \"/logs\", \"/metrics\", \"/swaggerapi/\", \"/ui/\", \"/version\" ] } If it is not working, there are two possible reasons: The contents of the tokens are invalid. Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret . It will automatically be recreated. You have a non-standard Kubernetes installation and the file containing the token may not be present. 
The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter. Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers. More information: User Guide: Service Accounts Cluster Administrator Guide: Managing Service Accounts Kube-Config \u00b6 If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment. Using GDB with Nginx \u00b6 Gdb can be used to with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations. Note: The below is based on the nginx documentation . SSH into the worker $ ssh user@workerIP Obtain the Docker Container Running nginx $ docker ps | grep nginx-ingress-controller CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d9e1d243156a quay.io/kubernetes-ingress-controller/nginx-ingress-controller \"/usr/bin/dumb-init \u2026\" 19 minutes ago Up 19 minutes k8s_nginx-ingress-controller_nginx-ingress-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0 Exec into the container $ docker exec -it --user = 0 --privileged d9e1d243156a bash Make sure nginx is running in --with-debug $ nginx -V 2 > & 1 | grep -- '--with-debug' Get list of processes running on container $ ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 20:23 ? 00:00:00 /usr/bin/dumb-init /nginx-ingres root 5 1 0 20:23 ? 00:00:05 /nginx-ingress-controller --defa root 21 5 0 20:23 ? 00:00:00 nginx: master process /usr/sbin/ nobody 106 21 0 20:23 ? 00:00:00 nginx: worker process nobody 107 21 0 20:23 ? 00:00:00 nginx: worker process root 172 0 0 20:43 pts/0 00:00:00 bash Attach gdb to the nginx master process $ gdb -p 21 .... Attaching to process 21 Reading symbols from /usr/sbin/nginx...done. .... (gdb) Copy and paste the following: set $cd = ngx_cycle->config_dump set $nelts = $cd.nelts set $elts = (ngx_conf_dump_t*)($cd.elts) while ($nelts-- > 0) set $name = $elts[$nelts]->name.data printf \"Dumping %s to nginx_conf.txt\\n\", $name append memory nginx_conf.txt \\ $ elts [ $nelts ] ->buffer.start $elts [ $nelts ] ->buffer.end end Quit GDB by pressing CTRL+D Open nginx_conf.txt cat nginx_conf.txt","title":"Troubleshooting"},{"location":"troubleshooting/#troubleshooting","text":"","title":"Troubleshooting"},{"location":"troubleshooting/#ingress-controller-logs-and-events","text":"There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information. 
Check the Ingress Resource Events $ kubectl get ing -n NAME HOSTS ADDRESS PORTS AGE cafe-ingress cafe.com 10.0.2.15 80 25s $ kubectl describe ing -n Name: cafe-ingress Namespace: default Address: 10.0.2.15 Default backend: default-http-backend:80 (172.17.0.5:8080) Rules: Host Path Backends ---- ---- -------- cafe.com /tea tea-svc:80 () /coffee coffee-svc:80 () Annotations: kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"/apis/extensions/v1beta1/namespaces/default/ingresses/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}} Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 1m nginx-ingress-controller Ingress default/cafe-ingress Normal UPDATE 58s nginx-ingress-controller Ingress default/cafe-ingress Check the Ingress Controller Logs $ kubectl get pods -n NAME READY STATUS RESTARTS AGE nginx-ingress-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl logs -n nginx-ingress-controller-67956bf89d-fv58j ------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.14.0 Build: git-734361d Repository: https://github.com/kubernetes/ingress-nginx ------------------------------------------------------------------------------- .... Check the Nginx Configuration $ kubectl get pods -n NAME READY STATUS RESTARTS AGE nginx-ingress-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl exec -it -n nginx-ingress-controller-67956bf89d-fv58j cat /etc/nginx/nginx.conf daemon off; worker_processes 2; pid /run/nginx.pid; worker_rlimit_nofile 523264; worker_shutdown_timeout 10s; events { multi_accept on; worker_connections 16384; use epoll; } http { .... Check if used Services Exist $ kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default coffee-svc ClusterIP 10.106.154.35 80/TCP 18m default kubernetes ClusterIP 10.96.0.1 443/TCP 30m default tea-svc ClusterIP 10.104.172.12 80/TCP 18m kube-system default-http-backend NodePort 10.108.189.236 80:30001/TCP 30m kube-system kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 30m kube-system kubernetes-dashboard NodePort 10.103.128.17 80:30000/TCP 30m Use the ingress-nginx kubectl plugin Install krew , then run $ ( set -x; cd \"$(mktemp -d)\" && curl -fsSLO \"https://github.com/kubernetes/ingress-nginx/releases/download/nginx-0.23.0/{ingress-nginx.yaml,kubectl-ingress_nginx-$(uname | tr '[:upper:]' '[:lower:]')-amd64.tar.gz}\" && kubectl krew install \\ --manifest=ingress-nginx.yaml --archive=kubectl-ingress_nginx-$(uname | tr '[:upper:]' '[:lower:]')-amd64.tar.gz ) to install the plugin. Then run $ kubectl ingress-nginx --help to make sure the plugin is properly installed and to get a list of commands. The plugin includes all of the commands present in the /dbg tool, plus a more detailed version of kubectl get ingresses available by runnning kubectl ingress-nginx ingresses . 
Use the /dbg Tool to Check Dynamic Configuration $ kubectl exec -n nginx-ingress-controller-67956bf89d-fv58j /dbg dbg is a tool for quickly inspecting the state of the nginx instance Usage: dbg [command] Available Commands: backends Inspect the dynamically-loaded backends information conf Dump the contents of /etc/nginx/nginx.conf general Output the general dynamic lua state help Help about any command Flags: -h, --help help for dbg Use \"dbg [command] --help\" for more information about a command. $ kubectl exec -n nginx-ingress-controller-67956bf89d-fv58j /dbg backends Inspect the dynamically-loaded backends information. Usage: dbg backends [command] Available Commands: all Output the all dynamic backend information as a JSON array get Output the backend information only for the backend that has this name list Output a newline-separated list of the backend names Flags: -h, --help help for backends Use \"dbg backends [command] --help\" for more information about a command. $ kubectl exec -n nginx-ingress-controller-67956bf89d-fv58j /dbg backends list coffee-svc-80 tea-svc-80 upstream-default-backend $ kubectl exec -n nginx-ingress-controller-67956bf89d-fv58j /dbg backends get coffee-svc-80 { \"endpoints\": [ { \"address\": \"10.1.1.112\", \"port\": \"8080\" }, { \"address\": \"10.1.1.119\", \"port\": \"8080\" }, { \"address\": \"10.1.1.121\", \"port\": \"8080\" } ], \"load-balance\": \"ewma\", \"name\": \"coffee-svc-80\", \"noServer\": false, \"port\": 0, \"secureCACert\": { \"caFilename\": \"\", \"pemSha\": \"\", \"secret\": \"\" }, \"service\": { \"metadata\": { \"creationTimestamp\": null }, \"spec\": { ....","title":"Ingress-Controller Logs and Events"},{"location":"troubleshooting/#debug-logging","text":"Using the flag --v=XX it is possible to increase the level of logging. This is performed by editing the deployment. $ kubectl get deploy -n NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE default-http-backend 1 1 1 1 35m nginx-ingress-controller 1 1 1 1 35m $ kubectl edit deploy -n nginx-ingress-controller # Add --v = X to \"- args\" , where X is an integer --v=2 shows details using diff about the changes in the configuration in nginx --v=3 shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format --v=5 configures NGINX in debug mode","title":"Debug Logging"},{"location":"troubleshooting/#authentication-to-the-kubernetes-api-server","text":"A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file. Both authentications must work: +-------------+ service +------------+ | | authentication | | + apiserver +<-------------------+ ingress | | | | controller | +-------------+ +------------+ Service authentication The Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in two different ways: Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details. Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. 
Using the --kubeconfig does not requires the flag --apiserver-host . The format of the file is identical to ~/.kube/config which is used by kubectl to connect to the API server. See 'kubeconfig' section for details. Using the flag --apiserver-host : Using this flag --apiserver-host=http://localhost:8080 it is possible to specify an unsecured API server or reach a remote kubernetes cluster using kubectl proxy . Please do not use this approach in production. In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left hand side. Kubernetes Workstation +---------------------------------------------------+ +------------------+ | | | | | +-----------+ apiserver +------------+ | | +------------+ | | | | proxy | | | | | | | | | apiserver | | ingress | | | | ingress | | | | | | controller | | | | controller | | | | | | | | | | | | | | | | | | | | | | | | | service account/ | | | | | | | | | | kubeconfig | | | | | | | | | +<-------------------+ | | | | | | | | | | | | | | | | | +------+----+ kubeconfig +------+-----+ | | +------+-----+ | | |<--------------------------------------------------------| | | | | | +---------------------------------------------------+ +------------------+","title":"Authentication to the Kubernetes API Server"},{"location":"troubleshooting/#service-account","text":"If using a service account to connect to the API server, Dashboard expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server. Verify with the following commands: # start a container that contains curl $ kubectl run test --image = tutum/curl -- sleep 10000 # check that container is running $ kubectl get pods NAME READY STATUS RESTARTS AGE test-701078429-s5kca 1/1 Running 0 16s # check if secret exists $ kubectl exec test-701078429-s5kca ls /var/run/secrets/kubernetes.io/serviceaccount/ ca.crt namespace token # get service IP of master $ kubectl get services NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes 10.0.0.1 443/TCP 1d # check base connectivity from cluster inside $ kubectl exec test-701078429-s5kca -- curl -k https://10.0.0.1 Unauthorized # connect using tokens $ TOKEN_VALUE = $( kubectl exec test-701078429-s5kca -- cat /var/run/secrets/kubernetes.io/serviceaccount/token ) $ echo $TOKEN_VALUE eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3Mi....9A $ kubectl exec test-701078429-s5kca -- curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H \"Authorization: Bearer $TOKEN_VALUE \" https://10.0.0.1 { \"paths\": [ \"/api\", \"/api/v1\", \"/apis\", \"/apis/apps\", \"/apis/apps/v1alpha1\", \"/apis/authentication.k8s.io\", \"/apis/authentication.k8s.io/v1beta1\", \"/apis/authorization.k8s.io\", \"/apis/authorization.k8s.io/v1beta1\", \"/apis/autoscaling\", \"/apis/autoscaling/v1\", \"/apis/batch\", \"/apis/batch/v1\", \"/apis/batch/v2alpha1\", \"/apis/certificates.k8s.io\", \"/apis/certificates.k8s.io/v1alpha1\", \"/apis/extensions\", \"/apis/extensions/v1beta1\", \"/apis/policy\", \"/apis/policy/v1alpha1\", \"/apis/rbac.authorization.k8s.io\", \"/apis/rbac.authorization.k8s.io/v1alpha1\", \"/apis/storage.k8s.io\", \"/apis/storage.k8s.io/v1beta1\", \"/healthz\", \"/healthz/ping\", \"/logs\", \"/metrics\", \"/swaggerapi/\", \"/ui/\", \"/version\" ] } If it is not working, there are two possible reasons: The contents of the tokens are invalid. 
Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret . It will automatically be recreated. You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter. Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers. More information: User Guide: Service Accounts Cluster Administrator Guide: Managing Service Accounts","title":"Service Account"},{"location":"troubleshooting/#kube-config","text":"If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment.","title":"Kube-Config"},{"location":"troubleshooting/#using-gdb-with-nginx","text":"Gdb can be used to with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations. Note: The below is based on the nginx documentation . SSH into the worker $ ssh user@workerIP Obtain the Docker Container Running nginx $ docker ps | grep nginx-ingress-controller CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d9e1d243156a quay.io/kubernetes-ingress-controller/nginx-ingress-controller \"/usr/bin/dumb-init \u2026\" 19 minutes ago Up 19 minutes k8s_nginx-ingress-controller_nginx-ingress-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0 Exec into the container $ docker exec -it --user = 0 --privileged d9e1d243156a bash Make sure nginx is running in --with-debug $ nginx -V 2 > & 1 | grep -- '--with-debug' Get list of processes running on container $ ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 20:23 ? 00:00:00 /usr/bin/dumb-init /nginx-ingres root 5 1 0 20:23 ? 00:00:05 /nginx-ingress-controller --defa root 21 5 0 20:23 ? 00:00:00 nginx: master process /usr/sbin/ nobody 106 21 0 20:23 ? 00:00:00 nginx: worker process nobody 107 21 0 20:23 ? 00:00:00 nginx: worker process root 172 0 0 20:43 pts/0 00:00:00 bash Attach gdb to the nginx master process $ gdb -p 21 .... Attaching to process 21 Reading symbols from /usr/sbin/nginx...done. .... (gdb) Copy and paste the following: set $cd = ngx_cycle->config_dump set $nelts = $cd.nelts set $elts = (ngx_conf_dump_t*)($cd.elts) while ($nelts-- > 0) set $name = $elts[$nelts]->name.data printf \"Dumping %s to nginx_conf.txt\\n\", $name append memory nginx_conf.txt \\ $ elts [ $nelts ] ->buffer.start $elts [ $nelts ] ->buffer.end end Quit GDB by pressing CTRL+D Open nginx_conf.txt cat nginx_conf.txt","title":"Using GDB with Nginx"},{"location":"deploy/","text":"Installation Guide \u00b6 Contents \u00b6 Prerequisite Generic Deployment Command Provider Specific Steps Docker for Mac minikube AWS GCE - GKE Azure Bare-metal Verify installation Detect installed version Using Helm Prerequisite Generic Deployment Command \u00b6 The following Mandatory Command is required for all deployments. Attention The default configuration watches Ingress object from all the namespaces. To change this behavior use the flag --watch-namespace to limit the scope to a particular namespace. 
Warning If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. Attention If you're using GKE you need to initialize your user as a cluster-admin with the following command: kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account) kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml Provider Specific Steps \u00b6 There are cloud provider specific yaml files. Docker for Mac \u00b6 Kubernetes is available in Docker for Mac (from version 18.06.0-ce ) Create a service kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml minikube \u00b6 For standard usage: minikube addons enable ingress For development: Disable the ingress addon: $ minikube addons disable ingress Execute make dev-env Confirm the nginx-ingress-controller deployment exists: $ kubectl get pods -n ingress-nginx NAME READY STATUS RESTARTS AGE default-http-backend-66b447d9cf-rrlf9 1/1 Running 0 12s nginx-ingress-controller-fdcdcd6dd-vvpgs 1/1 Running 0 11s AWS \u00b6 In AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer . Since Kubernetes v1.9.0 it is possible to use a classic load balancer (ELB) or network load balancer (NLB) Please check the elastic load balancing AWS details page Elastic Load Balancer - ELB \u00b6 This setup requires to choose in which layer (L4 or L7) we want to configure the ELB: Layer 4 : use TCP as the listener protocol for ports 80 and 443. Layer 7 : use HTTP as the listener protocol for port 80 and terminate TLS in the ELB For L4: Check that no change is necessary with regards to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the ELB Idle Timeouts section for additional information. If a change is required, users will need to update the value of service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout in provider/aws/service-l4.yaml Then execute: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l4.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l4.yaml For L7: Change line of the file provider/aws/service-l7.yaml replacing the dummy id with a valid one \"arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX\" Check that no change is necessary with regards to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the ELB Idle Timeouts section for additional information. If a change is required, users will need to update the value of service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout in provider/aws/service-l7.yaml Then execute: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l7.yaml This example creates an ELB with just two listeners, one in port 80 and another in port 443 ELB Idle Timeouts \u00b6 In some scenarios users will need to modify the value of the ELB idle timeout. Users need to ensure the idle timeout is less than the keepalive_timeout that is configured for NGINX. By default NGINX keepalive_timeout is set to 75s . 
The default ELB idle timeout will work for most scenarios, unless the NGINX keepalive_timeout has been modified, in which case service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout will need to be modified to ensure it is less than the keepalive_timeout the user has configured. Please note: an idle timeout of 3600s is recommended when using WebSockets. More information about idle timeouts for your Load Balancer can be found in the official AWS documentation . Network Load Balancer (NLB) \u00b6 This type of load balancer is supported since v1.10.0 as an ALPHA feature. kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-nlb.yaml GCE-GKE \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml Important Note: proxy protocol is not supported in GCE/GKE. Azure \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml Bare-metal \u00b6 Using NodePort : kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml Tip For extended notes regarding deployments on bare-metal, see Bare-metal considerations . Verify installation \u00b6 To check if the ingress controller pods have started, run the following command: kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch Once the ingress controller pods are running, you can cancel the above command by typing Ctrl+C . Now, you are ready to create your first ingress. Detect installed version \u00b6 To detect which version of the ingress controller is running, exec into the pod and run /nginx-ingress-controller --version . POD_NAMESPACE=ingress-nginx POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}') kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version Using Helm \u00b6 NGINX Ingress controller can be installed via Helm using the chart stable/nginx-ingress from the official charts repository. To install the chart with the release name my-nginx : helm install stable/nginx-ingress --name my-nginx If the Kubernetes cluster has RBAC enabled, then run: helm install stable/nginx-ingress --name my-nginx --set rbac.create=true Detect installed version: POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}') kubectl exec -it $POD_NAME -- /nginx-ingress-controller --version","title":"Installation Guide"},{"location":"deploy/#installation-guide","text":"","title":"Installation Guide"},{"location":"deploy/#contents","text":"Prerequisite Generic Deployment Command Provider Specific Steps Docker for Mac minikube AWS GCE - GKE Azure Bare-metal Verify installation Detect installed version Using Helm","title":"Contents"},{"location":"deploy/#prerequisite-generic-deployment-command","text":"The following Mandatory Command is required for all deployments. Attention The default configuration watches Ingress objects in all namespaces. To change this behavior, use the flag --watch-namespace to limit the scope to a particular namespace. Warning If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. 
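To illustrate the merging behavior, a hedged sketch of two Ingresses (the names, host, and backends are hypothetical) that would be combined into a single virtual host for foo.example.com , each contributing its own path:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: foo-app          # hypothetical
    spec:
      rules:
      - host: foo.example.com
        http:
          paths:
          - path: /app
            backend:
              serviceName: app-svc
              servicePort: 80
    ---
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: foo-api          # hypothetical
    spec:
      rules:
      - host: foo.example.com
        http:
          paths:
          - path: /api
            backend:
              serviceName: api-svc
              servicePort: 80
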
Attention If you're using GKE, you need to initialize your user as a cluster-admin with the following command: kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account) kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml","title":"Prerequisite Generic Deployment Command"},{"location":"deploy/#provider-specific-steps","text":"There are provider-specific YAML files for several cloud providers.","title":"Provider Specific Steps"},{"location":"deploy/#docker-for-mac","text":"Kubernetes is available in Docker for Mac (from version 18.06.0-ce ). Create a service: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml","title":"Docker for Mac"},{"location":"deploy/#minikube","text":"For standard usage: minikube addons enable ingress For development: Disable the ingress addon: $ minikube addons disable ingress Execute make dev-env Confirm the nginx-ingress-controller deployment exists: $ kubectl get pods -n ingress-nginx NAME READY STATUS RESTARTS AGE default-http-backend-66b447d9cf-rrlf9 1/1 Running 0 12s nginx-ingress-controller-fdcdcd6dd-vvpgs 1/1 Running 0 11s","title":"minikube"},{"location":"deploy/#aws","text":"In AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer . Since Kubernetes v1.9.0 it is possible to use either a classic load balancer (ELB) or a network load balancer (NLB). Please check the elastic load balancing AWS details page .","title":"AWS"},{"location":"deploy/#elastic-load-balancer-elb","text":"This setup requires choosing in which layer (L4 or L7) we want to configure the ELB: Layer 4 : use TCP as the listener protocol for ports 80 and 443. Layer 7 : use HTTP as the listener protocol for port 80 and terminate TLS in the ELB. For L4: Check that no change is necessary with regards to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the ELB Idle Timeouts section for additional information. If a change is required, users will need to update the value of service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout in provider/aws/service-l4.yaml Then execute: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l4.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l4.yaml For L7: Edit the file provider/aws/service-l7.yaml , replacing the dummy certificate ID with a valid one: \"arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX\" Check that no change is necessary with regards to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the ELB Idle Timeouts section for additional information. 
If a change is required, users will need to update the value of service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout in provider/aws/service-l7.yaml Then execute: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l7.yaml This example creates an ELB with just two listeners, one on port 80 and another on port 443.","title":"Elastic Load Balancer - ELB"},{"location":"deploy/#elb-idle-timeouts","text":"In some scenarios, users will need to modify the value of the ELB idle timeout. Users need to ensure the idle timeout is less than the keepalive_timeout that is configured for NGINX. By default, the NGINX keepalive_timeout is set to 75s . The default ELB idle timeout will work for most scenarios, unless the NGINX keepalive_timeout has been modified, in which case service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout will need to be modified to ensure it is less than the keepalive_timeout the user has configured. Please note: an idle timeout of 3600s is recommended when using WebSockets. More information about idle timeouts for your Load Balancer can be found in the official AWS documentation .","title":"ELB Idle Timeouts"},{"location":"deploy/#network-load-balancer-nlb","text":"This type of load balancer is supported since v1.10.0 as an ALPHA feature. kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-nlb.yaml","title":"Network Load Balancer (NLB)"},{"location":"deploy/#gce-gke","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml Important Note: proxy protocol is not supported in GCE/GKE.","title":"GCE-GKE"},{"location":"deploy/#azure","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml","title":"Azure"},{"location":"deploy/#bare-metal","text":"Using NodePort : kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml Tip For extended notes regarding deployments on bare-metal, see Bare-metal considerations .","title":"Bare-metal"},{"location":"deploy/#verify-installation","text":"To check if the ingress controller pods have started, run the following command: kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch Once the ingress controller pods are running, you can cancel the above command by typing Ctrl+C . Now, you are ready to create your first ingress.","title":"Verify installation"},{"location":"deploy/#detect-installed-version","text":"To detect which version of the ingress controller is running, exec into the pod and run /nginx-ingress-controller --version . POD_NAMESPACE=ingress-nginx POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}') kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version","title":"Detect installed version"},{"location":"deploy/#using-helm","text":"NGINX Ingress controller can be installed via Helm using the chart stable/nginx-ingress from the official charts repository. 
To install the chart with the release name my-nginx : helm install stable/nginx-ingress --name my-nginx If the Kubernetes cluster has RBAC enabled, then run: helm install stable/nginx-ingress --name my-nginx --set rbac.create=true Detect installed version: POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}') kubectl exec -it $POD_NAME -- /nginx-ingress-controller --version","title":"Using Helm"},{"location":"deploy/baremetal/","text":"Bare-metal considerations \u00b6 In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers. The rest of this document describes a few recommended approaches to deploying the NGINX Ingress controller inside a Kubernetes cluster running on bare-metal. A pure software solution: MetalLB \u00b6 MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster. This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes . In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details. Note The description of other supported configuration modes is out of scope for this document. Warning MetalLB is currently in beta . Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly. MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions. MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. In the simplest possible scenario, the pool is composed of the IP addresses of Kubernetes nodes, but IP addresses can also be handed out by a DHCP server. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None> ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 After creating the following ConfigMap, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly. 
apiVersion: v1 kind: ConfigMap metadata: namespace: metallb-system name: config data: config: | address-pools: - name: default protocol: layer2 addresses: - 203.0.113.2-203.0.113.3 $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx LoadBalancer 10.0.220.217 203.0.113.3 80:30100/TCP,443:30101/TCP As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service: $ curl -D- http://203.0.113.3 -H 'Host: myapp.example.com' HTTP/1.1 200 OK Server: nginx/1.15.2 Tip In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more detail in Traffic policies as well as in the next section. Over a NodePort Service \u00b6 Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide . Info A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services . In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests. Example Given the NodePort 30100 allocated to the ingress-nginx Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is <None> ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 a client would reach an Ingress with host: myapp.example.com at http://myapp.example.com:30100 , where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address. Impact on the host system While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require. This practice is therefore discouraged . See the other approaches proposed on this page for alternatives. This approach has a few other limitations one ought to be aware of: Source IP address Services of type NodePort perform source address translation by default. This means that, from the perspective of NGINX, the source IP of an HTTP request is always the IP address of the Kubernetes node that received the request. 
The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local ( example ). Warning This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on which nodes the NGINX Ingress controller should or should not be scheduled. Example In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is <None> ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 with a nginx-ingress-controller Deployment composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3 nginx-ingress-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2 Requests sent to host-2 and host-3 would be forwarded to NGINX and the original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node. Ingress status Because NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller does not update the status of Ingress objects it manages . $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Despite the fact that there is no load balancer providing a public IP address to the NGINX Ingress controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service. Warning There is more to setting externalIPs than just enabling the NGINX Ingress controller to update the status of Ingress objects. Please read about this option in the Services page of the official Kubernetes documentation as well as the section about External IPs in this document for more information. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None> ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 one could edit the ingress-nginx Service and add the following field to the object spec spec: externalIPs: - 203.0.113.1 - 203.0.113.2 - 203.0.113.3 which would in turn be reflected on Ingress objects as follows: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80 Redirects As NGINX is not aware of the port translation operated by the NodePort Service , backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort. Example Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain , are generated without NodePort: $ curl -D- http://myapp.example.com:30100 HTTP/1.1 308 Permanent Redirect Server: nginx/1.15.2 Location: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect Via the host network \u00b6 In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. 
The benefit of this approach is that the NGINX Ingress controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services. Note This approach does not leverage any Service object to expose the NGINX Ingress controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it . This can be achieved by enabling the hostNetwork option in the Pods' spec. template: spec: hostNetwork: true Security considerations Enabling this option exposes every system daemon to the NGINX Ingress controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully. Example Consider this nginx-ingress-controller Deployment composed of 2 replicas: NGINX Pods inherit the IP address of their host instead of an internal Pod IP. $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 nginx-ingress-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 One major limitation of this deployment approach is that only a single NGINX Ingress controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to this situation fail with the following event: $ kubectl -n ingress-nginx describe pod ... Events: Type Reason From Message ---- ------ ---- ------- Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports. One way to ensure only schedulable Pods are created is to deploy the NGINX Ingress controller as a DaemonSet instead of a traditional Deployment. Info A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods . For more information, see DaemonSet . Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion. As with NodePorts, this approach has a few quirks it is important to be aware of. DNS resolution Pods configured with hostNetwork: true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS ), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet . Consider using this setting if NGINX is expected to resolve internal names for any reason. Ingress status Because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank. $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the NGINX Ingress controller. 
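Since the DaemonSet manifest is left at the user's discretion, here is a minimal, hedged sketch combining the pieces discussed in this section (labels, image tag, and extra args are assumptions adapted from the standard Deployment, not an official manifest):

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: nginx-ingress-controller
      namespace: ingress-nginx
    spec:
      selector:
        matchLabels:
          app.kubernetes.io/name: ingress-nginx
      template:
        metadata:
          labels:
            app.kubernetes.io/name: ingress-nginx
        spec:
          hostNetwork: true                    # bind ports 80/443 on the node's interfaces
          dnsPolicy: ClusterFirstWithHostNet   # keep cluster-internal DNS resolution (see above)
          serviceAccountName: nginx-ingress-serviceaccount
          containers:
          - name: nginx-ingress-controller
            image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.18.0  # illustrative tag
            args:
            - /nginx-ingress-controller
            - --report-node-internal-ip-address   # publish node-internal IPs to Ingress status
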
Example Given a nginx-ingress-controller DaemonSet composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 nginx-ingress-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 the controller sets the status of all Ingress objects it manages to the following value: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.2,203.0.113.3 80 Note Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments . Using a self-provisioned edge \u00b6 Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. HAproxy ) and is usually managed outside of the Kubernetes landscape by operations teams. Such deployment builds upon the NodePort Service described above in Over a NodePort Service , with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address. On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below: External IPs \u00b6 Source IP address This method does not allow preserving the source IP of HTTP requests in any manner, it is therefore not recommended to use it despite its apparent simplicity. The externalIPs Service option was previously mentioned in the NodePort section. As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node . Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 and the following ingress-nginx NodePort Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port: spec : externalIPs : - 203.0.113.2 - 203.0.113.3 $ curl -D- http://myapp.example.com:30100 HTTP/1.1 200 OK Server: nginx/1.15.2 $ curl -D- http://myapp.example.com HTTP/1.1 200 OK Server: nginx/1.15.2 We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.","title":"Bare-metal considerations"},{"location":"deploy/baremetal/#bare-metal-considerations","text":"In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to any application running inside the cluster. 
Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers. The rest of this document describes a few recommended approaches to deploying the NGINX Ingress controller inside a Kubernetes cluster running on bare-metal.","title":"Bare-metal considerations"},{"location":"deploy/baremetal/#a-pure-software-solution-metallb","text":"MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster. This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes . In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details. Note The description of other supported configuration modes is off-scope for this document. Warning MetalLB is currently in beta . Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly. MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions. MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. In the simplest possible scenario, the pool is composed of the IP addresses of Kubernetes nodes, but IP addresses can also be handed out by a DHCP server. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 After creating the following ConfigMap, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly. apiVersion : v1 kind : ConfigMap metadata : namespace : metallb-system name : config data : config : | address-pools: - name: default protocol: layer2 addresses: - 203.0.113.2-203.0.113.3 $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx LoadBalancer 10.0.220.217 203.0.113.3 80:30100/TCP,443:30101/TCP As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service: $ curl -D- http://203.0.113.3 -H 'Host: myapp.example.com' HTTP/1.1 200 OK Server: nginx/1.15.2 Tip In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more details in Traffic policies as well as in the next section.","title":"A pure software solution: MetalLB"},{"location":"deploy/baremetal/#over-a-nodeport-service","text":"Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide . 
Info A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services . In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests. Example Given the NodePort 30100 allocated to the ingress-nginx Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 a client would reach an Ingress with host : myapp . example . com at http://myapp.example.com:30100 , where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address. Impact on the host system While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require. This practice is therefore discouraged . See the other approaches proposed in this page for alternatives. This approach has a few other limitations one ought to be aware of: Source IP address Services of type NodePort perform source address translation by default. This means the source IP of a HTTP request is always the IP address of the Kubernetes node that received the request from the perspective of NGINX. The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local ( example ). Warning This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the NGINX Ingress controller should be scheduled or not scheduled. Example In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 with a nginx-ingress-controller Deployment composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3 nginx-ingress-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2 Requests sent to host-2 and host-3 would be forwarded to NGINX and original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node. 
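A minimal sketch of the Service change described above, applied with kubectl patch (the field is real; whether to patch or edit the manifest is your choice): $ kubectl -n ingress-nginx patch svc ingress-nginx -p '{\"spec\":{\"externalTrafficPolicy\":\"Local\"}}' 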
Ingress status Because NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller does not update the status of Ingress objects it manages . $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Despite the fact there is no load balancer providing a public IP address to the NGINX Ingress controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service. Warning There is more to setting externalIPs than just enabling the NGINX Ingress controller to update the status of Ingress objects. Please read about this option in the Services page of official Kubernetes documentation as well as the section about External IPs in this document for more information. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 one could edit the ingress-nginx Service and add the following field to the object spec spec : externalIPs : - 203.0.113.1 - 203.0.113.2 - 203.0.113.3 which would in turn be reflected on Ingress objects as follows: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80 Redirects As NGINX is not aware of the port translation operated by the NodePort Service , backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort. Example Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain , are generated without NodePort: $ curl -D- http://myapp.example.com:30100 ` HTTP/1.1 308 Permanent Redirect Server: nginx/1.15.2 Location: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect","title":"Over a NodePort Service"},{"location":"deploy/baremetal/#via-the-host-network","text":"In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the NGINX Ingress controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services. Note This approach does not leverage any Service object to expose the NGINX Ingress controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it . This can be achieved by enabling the hostNetwork option in the Pods' spec. template : spec : hostNetwork : true Security considerations Enabling this option exposes every system daemon to the NGINX Ingress controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully. Example Consider this nginx-ingress-controller Deployment composed of 2 replicas, NGINX Pods inherit from the IP address of their host instead of an internal Pod IP. 
$ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 nginx-ingress-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 One major limitation of this deployment approach is that only a single NGINX Ingress controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to such situation fail with the following event: $ kubectl -n ingress-nginx describe pod ... Events: Type Reason From Message ---- ------ ---- ------- Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports. One way to ensure only schedulable Pods are created is to deploy the NGINX Ingress controller as a DaemonSet instead of a traditional Deployment. Info A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods . For more information, see DaemonSet . Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion. Like with NodePorts, this approach has a few quirks it is important to be aware of. DNS resolution Pods configured with hostNetwork : true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS ), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet . Consider using this setting if NGINX is expected to resolve internal names for any reason. Ingress status Because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank. $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the NGINX Ingress controller. Example Given a nginx-ingress-controller DaemonSet composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 nginx-ingress-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 the controller sets the status of all Ingress objects it manages to the following value: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.2,203.0.113.3 80 Note Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments .","title":"Via the host network"},{"location":"deploy/baremetal/#using-a-self-provisioned-edge","text":"Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. HAproxy ) and is usually managed outside of the Kubernetes landscape by operations teams. 
Such deployment builds upon the NodePort Service described above in Over a NodePort Service , with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address. On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below:","title":"Using a self-provisioned edge"},{"location":"deploy/baremetal/#external-ips","text":"Source IP address This method does not allow preserving the source IP of HTTP requests in any manner, it is therefore not recommended to use it despite its apparent simplicity. The externalIPs Service option was previously mentioned in the NodePort section. As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node . Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 and the following ingress-nginx NodePort Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port: spec : externalIPs : - 203.0.113.2 - 203.0.113.3 $ curl -D- http://myapp.example.com:30100 HTTP/1.1 200 OK Server: nginx/1.15.2 $ curl -D- http://myapp.example.com HTTP/1.1 200 OK Server: nginx/1.15.2 We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.","title":"External IPs"},{"location":"deploy/rbac/","text":"Role Based Access Control (RBAC) \u00b6 Overview \u00b6 This example applies to nginx-ingress-controllers being deployed in an environment with RBAC enabled. Role Based Access Control is comprised of four layers: ClusterRole - permissions assigned to a role that apply to an entire cluster ClusterRoleBinding - binding a ClusterRole to a specific account Role - permissions assigned to a role that apply to a specific namespace RoleBinding - binding a Role to a specific account In order for RBAC to be applied to an nginx-ingress-controller, that controller should be assigned to a ServiceAccount . That ServiceAccount should be bound to the Role s and ClusterRole s defined for the nginx-ingress-controller. Service Accounts created in this example \u00b6 One ServiceAccount is created in this example, nginx-ingress-serviceaccount . Permissions Granted in this example \u00b6 There are two sets of permissions defined in this example. Cluster-wide permissions defined by the ClusterRole named nginx-ingress-clusterrole , and namespace specific permissions defined by the Role named nginx-ingress-role . Cluster Permissions \u00b6 These permissions are granted in order for the nginx-ingress-controller to be able to function as an ingress across the cluster. 
These permissions are granted to the ClusterRole named nginx-ingress-clusterrole configmaps, endpoints, nodes, pods, secrets: list, watch nodes: get services, ingresses: get, list, watch events: create, patch ingresses/status: update Namespace Permissions \u00b6 These permissions are specific to the nginx-ingress namespace. These permissions are granted to the Role named nginx-ingress-role configmaps, pods, secrets: get endpoints: get Furthermore, to support leader election, the nginx-ingress-controller needs to have access to a configmap using the resourceName ingress-controller-leader-nginx . Note that resourceNames can NOT be used to limit requests using the \u201ccreate\u201d verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a \u201ccreate\u201d request are part of the request body). configmaps: get, update (for resourceName ingress-controller-leader-nginx ) configmaps: create This resourceName is the concatenation of the election-id and the ingress-class as defined by the ingress-controller, which defaults to: election-id: ingress-controller-leader ingress-class: nginx resourceName: <election-id>-<ingress-class> Please adapt accordingly if you override either parameter when launching the nginx-ingress-controller. Bindings \u00b6 The ServiceAccount nginx-ingress-serviceaccount is bound to the Role nginx-ingress-role and the ClusterRole nginx-ingress-clusterrole . The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the nginx-ingress namespace.","title":"Role Based Access Control (RBAC)"},{"location":"deploy/rbac/#role-based-access-control-rbac","text":"","title":"Role Based Access Control (RBAC)"},{"location":"deploy/rbac/#overview","text":"This example applies to nginx-ingress-controllers being deployed in an environment with RBAC enabled. Role Based Access Control is comprised of four layers: ClusterRole - permissions assigned to a role that apply to an entire cluster ClusterRoleBinding - binding a ClusterRole to a specific account Role - permissions assigned to a role that apply to a specific namespace RoleBinding - binding a Role to a specific account In order for RBAC to be applied to an nginx-ingress-controller, that controller should be assigned to a ServiceAccount . That ServiceAccount should be bound to the Role s and ClusterRole s defined for the nginx-ingress-controller.","title":"Overview"},{"location":"deploy/rbac/#service-accounts-created-in-this-example","text":"One ServiceAccount is created in this example, nginx-ingress-serviceaccount .","title":"Service Accounts created in this example"},{"location":"deploy/rbac/#permissions-granted-in-this-example","text":"There are two sets of permissions defined in this example. Cluster-wide permissions defined by the ClusterRole named nginx-ingress-clusterrole , and namespace specific permissions defined by the Role named nginx-ingress-role .","title":"Permissions Granted in this example"},{"location":"deploy/rbac/#cluster-permissions","text":"These permissions are granted in order for the nginx-ingress-controller to be able to function as an ingress across the cluster. 
These permissions are granted to the ClusterRole named nginx-ingress-clusterrole configmaps, endpoints, nodes, pods, secrets: list, watch nodes: get services, ingresses: get, list, watch events: create, patch ingresses/status: update","title":"Cluster Permissions"},{"location":"deploy/rbac/#namespace-permissions","text":"These permissions are specific to the nginx-ingress namespace. These permissions are granted to the Role named nginx-ingress-role configmaps, pods, secrets: get endpoints: get Furthermore, to support leader election, the nginx-ingress-controller needs to have access to a configmap using the resourceName ingress-controller-leader-nginx . Note that resourceNames can NOT be used to limit requests using the \u201ccreate\u201d verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a \u201ccreate\u201d request are part of the request body). configmaps: get, update (for resourceName ingress-controller-leader-nginx ) configmaps: create This resourceName is the concatenation of the election-id and the ingress-class as defined by the ingress-controller, which defaults to: election-id: ingress-controller-leader ingress-class: nginx resourceName: <election-id>-<ingress-class> Please adapt accordingly if you override either parameter when launching the nginx-ingress-controller.","title":"Namespace Permissions"},{"location":"deploy/rbac/#bindings","text":"The ServiceAccount nginx-ingress-serviceaccount is bound to the Role nginx-ingress-role and the ClusterRole nginx-ingress-clusterrole . The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the nginx-ingress namespace.","title":"Bindings"},{"location":"deploy/upgrade/","text":"Upgrading \u00b6 Important No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx . Without Helm \u00b6 To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment. For example, if your Deployment resource looks like this (partial example): kind: Deployment metadata: name: nginx-ingress-controller namespace: ingress-nginx spec: replicas: 1 selector: ... template: metadata: ... spec: containers: - name: nginx-ingress-controller image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 args: ... simply change the 0.9.0 tag to the version you wish to upgrade to. The easiest way to do this is, e.g. (do note you may need to change the deployment name according to your installation): kubectl set image deployment/nginx-ingress-controller \\ nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.18.0 For interactive editing, use kubectl edit deployment nginx-ingress-controller . 
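Either way, you can watch the rollout complete afterwards with a standard kubectl command (names as in the partial example above): kubectl -n ingress-nginx rollout status deployment/nginx-ingress-controller 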
With Helm \u00b6 If you installed ingress-nginx using the Helm command in the deployment docs so that its release name is ngx-ingress , you should be able to upgrade using: helm upgrade --reuse-values ngx-ingress stable/nginx-ingress","title":"Upgrade"},{"location":"deploy/upgrade/#upgrading","text":"Important No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx .","title":"Upgrading"},{"location":"deploy/upgrade/#without-helm","text":"To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment. For example, if your Deployment resource looks like this (partial example): kind: Deployment metadata: name: nginx-ingress-controller namespace: ingress-nginx spec: replicas: 1 selector: ... template: metadata: ... spec: containers: - name: nginx-ingress-controller image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 args: ... simply change the 0.9.0 tag to the version you wish to upgrade to. The easiest way to do this is, e.g. (do note you may need to change the deployment name according to your installation): kubectl set image deployment/nginx-ingress-controller \\ nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.18.0 For interactive editing, use kubectl edit deployment nginx-ingress-controller .","title":"Without Helm"},{"location":"deploy/upgrade/#with-helm","text":"If you installed ingress-nginx using the Helm command in the deployment docs so that its release name is ngx-ingress , you should be able to upgrade using: helm upgrade --reuse-values ngx-ingress stable/nginx-ingress","title":"With Helm"},{"location":"examples/","text":"Ingress examples \u00b6 This directory contains a catalog of examples on how to run, configure and scale Ingress. Please review the prerequisites before trying them. Category Name Description Complexity Level Apps Docker Registry TODO TODO Auth Basic authentication password protect your website Intermediate Auth Client certificate authentication secure your website with client certificate authentication Intermediate Auth External authentication plugin defer to an external authentication service Intermediate Auth OAuth external auth TODO TODO Customization Configuration snippets customize nginx location configuration using annotations Advanced Customization Custom configuration TODO TODO Customization Custom DH parameters for perfect forward secrecy TODO TODO Customization Custom errors serve custom error pages from the default backend Intermediate Customization Custom headers set custom headers before sending traffic to backends Advanced Customization External authentication with response header propagation TODO TODO Customization Sysctl tuning TODO TODO Features Rewrite TODO TODO Features Session stickiness route requests consistently to the same endpoint Advanced Scaling Static IP a single ingress gets a single static IP Intermediate TLS Multi TLS certificate termination TODO TODO TLS TLS termination TODO TODO","title":"Introduction"},{"location":"examples/#ingress-examples","text":"This directory contains a catalog of examples on how to run, configure and scale Ingress. Please review the prerequisites before trying them. 
Category Name Description Complexity Level Apps Docker Registry TODO TODO Auth Basic authentication password protect your website Intermediate Auth Client certificate authentication secure your website with client certificate authentication Intermediate Auth External authentication plugin defer to an external authentication service Intermediate Auth OAuth external auth TODO TODO Customization Configuration snippets customize nginx location configuration using annotations Advanced Customization Custom configuration TODO TODO Customization Custom DH parameters for perfect forward secrecy TODO TODO Customization Custom errors serve custom error pages from the default backend Intermediate Customization Custom headers set custom headers before sending traffic to backends Advanced Customization External authentication with response header propagation TODO TODO Customization Sysctl tuning TODO TODO Features Rewrite TODO TODO Features Session stickiness route requests consistently to the same endpoint Advanced Scaling Static IP a single ingress gets a single static IP Intermediate TLS Multi TLS certificate termination TODO TODO TLS TLS termination TODO TODO","title":"Ingress examples"},{"location":"examples/PREREQUISITES/","text":"Prerequisites \u00b6 Many of the examples in this directory have common prerequisites. TLS certificates \u00b6 Unless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA key/cert pair with an arbitrarily chosen hostname, created as follows: $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj \"/CN=nginxsvc/O=nginxsvc\" Generating a 2048 bit RSA private key ................+++ ................+++ writing new private key to 'tls.key' ----- $ kubectl create secret tls tls-secret --key tls.key --cert tls.crt secret \"tls-secret\" created Note: If using CA Authentication, described below, you will need to sign the server certificate with the CA. Client Certificate Authentication \u00b6 CA Authentication, also known as Mutual Authentication, allows both the server and client to verify each other's identity via a common CA. We have a CA certificate, usually obtained from a Certificate Authority, and we use it to sign both our server certificate and client certificate. Then every time we want to access our backend, we must pass the client certificate. 
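For instance, once the certificates below have been generated and configured on the Ingress, a request might be issued like this (a hedged sketch; the host name is illustrative and matches the server certificate's CN below): $ curl --cacert ca.crt --cert client.crt --key client.key https://mydomain.com 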
These instructions are based on the following blog . Generate the CA Key and Certificate: $ openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 365 -nodes -subj '/CN=My Cert Authority' Generate the Server Key and Certificate, and Sign with the CA Certificate: $ openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=mydomain.com' $ openssl x509 -req -sha256 -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt Generate the Client Key and Certificate, and Sign with the CA Certificate: $ openssl req -new -newkey rsa:4096 -keyout client.key -out client.csr -nodes -subj '/CN=My Client' $ openssl x509 -req -sha256 -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt Once this is complete, you can continue to follow the instructions here . Test HTTP Service \u00b6 All examples that require a test HTTP Service use the standard http-svc pod, which you can deploy as follows: $ kubectl create -f http-svc.yaml service \"http-svc\" created replicationcontroller \"http-svc\" created $ kubectl get po NAME READY STATUS RESTARTS AGE http-svc-p1t3t 1/1 Running 0 1d $ kubectl get svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-svc 10.0.122.116 80:30301/TCP 1d You can test that the HTTP Service works by exposing it temporarily: $ kubectl patch svc http-svc -p '{\"spec\":{\"type\": \"LoadBalancer\"}}' \"http-svc\" patched $ kubectl get svc http-svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-svc 10.0.122.116 80:30301/TCP 1d $ kubectl describe svc http-svc Name: http-svc Namespace: default Labels: app=http-svc Selector: app=http-svc Type: LoadBalancer IP: 10.0.122.116 LoadBalancer Ingress: 108.59.87.136 Port: http 80/TCP NodePort: http 30301/TCP Endpoints: 10.180.1.6:8080 Session Affinity: None Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1m 1m 1 {service-controller } Normal Type ClusterIP -> LoadBalancer 1m 1m 1 {service-controller } Normal CreatingLoadBalancer Creating load balancer 16s 16s 1 {service-controller } Normal CreatedLoadBalancer Created load balancer $ curl 108.59.87.136 CLIENT VALUES: client_address=10.240.0.3 command=GET real path=/ query=nil request_version=1.1 request_uri=http://108.59.87.136:8080/ SERVER VALUES: server_version=nginx: 1.9.11 - lua: 10001 HEADERS RECEIVED: accept=*/* host=108.59.87.136 user-agent=curl/7.46.0 BODY: -no body in request- $ kubectl patch svc http-svc -p '{\"spec\":{\"type\": \"NodePort\"}}' \"http-svc\" patched","title":"Prerequisites"},{"location":"examples/PREREQUISITES/#prerequisites","text":"Many of the examples in this directory have common prerequisites.","title":"Prerequisites"},{"location":"examples/PREREQUISITES/#tls-certificates","text":"Unless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA key/cert pair with an arbitrarily chosen hostname, created as follows: $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj \"/CN=nginxsvc/O=nginxsvc\" Generating a 2048 bit RSA private key ................+++ ................+++ writing new private key to 'tls.key' ----- $ kubectl create secret tls tls-secret --key tls.key --cert tls.crt secret \"tls-secret\" created Note: If using CA Authentication, described below, you will need to sign the server certificate with the CA.","title":"TLS certificates"},{"location":"examples/PREREQUISITES/#client-certificate-authentication","text":"CA Authentication, also 
Test HTTP Service

All examples that require a test HTTP Service use the standard http-svc pod, which you can deploy as follows:

$ kubectl create -f http-svc.yaml
service "http-svc" created
replicationcontroller "http-svc" created

$ kubectl get po
NAME             READY     STATUS    RESTARTS   AGE
http-svc-p1t3t   1/1       Running   0          1d

$ kubectl get svc
NAME       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
http-svc   10.0.122.116   <pending>     80:30301/TCP   1d

You can test that the HTTP Service works by exposing it temporarily:

$ kubectl patch svc http-svc -p '{"spec":{"type": "LoadBalancer"}}'
"http-svc" patched

$ kubectl get svc http-svc
NAME       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
http-svc   10.0.122.116   <pending>     80:30301/TCP   1d

$ kubectl describe svc http-svc
Name:                   http-svc
Namespace:              default
Labels:                 app=http-svc
Selector:               app=http-svc
Type:                   LoadBalancer
IP:                     10.0.122.116
LoadBalancer Ingress:   108.59.87.136
Port:                   http    80/TCP
NodePort:               http    30301/TCP
Endpoints:              10.180.1.6:8080
Session Affinity:       None
Events:
  FirstSeen  LastSeen  Count  From                  SubObjectPath  Type    Reason                 Message
  ---------  --------  -----  ----                  -------------  ----    ------                 -------
  1m         1m        1      {service-controller }                Normal  Type                   ClusterIP -> LoadBalancer
  1m         1m        1      {service-controller }                Normal  CreatingLoadBalancer   Creating load balancer
  16s        16s       1      {service-controller }                Normal  CreatedLoadBalancer    Created load balancer

$ curl 108.59.87.136
CLIENT VALUES:
client_address=10.240.0.3
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://108.59.87.136:8080/

SERVER VALUES:
server_version=nginx: 1.9.11 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=108.59.87.136
user-agent=curl/7.46.0
BODY:
-no body in request-

$ kubectl patch svc http-svc -p '{"spec":{"type": "NodePort"}}'
"http-svc" patched

Sticky Session

This example demonstrates how to achieve session affinity using cookies.

Deployment

Session stickiness is achieved through annotations on the Ingress, as shown in the example.
nginx.ingress.kubernetes.io/affinity
  Sets the affinity type. Value: string (in NGINX only "cookie" is possible).

nginx.ingress.kubernetes.io/session-cookie-name
  Name of the cookie that will be used. Value: string (defaults to INGRESSCOOKIE).

nginx.ingress.kubernetes.io/session-cookie-expires
  Date as a UNIX timestamp on which the cookie will expire; corresponds to the cookie Expires directive. Value: number of seconds.

nginx.ingress.kubernetes.io/session-cookie-max-age
  Number of seconds until the cookie expires; corresponds to the cookie Max-Age directive. Value: number of seconds.

You can create the ingress to test this:

kubectl create -f ingress.yaml

Validation

You can confirm that the Ingress works:

$ kubectl describe ing nginx-test
Name:           nginx-test
Namespace:      default
Address:
Default backend:    default-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080)
Rules:
  Host                         Path   Backends
  ----                         ----   --------
  stickyingress.example.com    /      nginx-service:80 ()
Annotations:
  affinity:                 cookie
  session-cookie-name:      INGRESSCOOKIE
  session-cookie-expires:   172800
  session-cookie-max-age:   172800
Events:
  FirstSeen  LastSeen  Count  From                          SubObjectPath  Type    Reason  Message
  ---------  --------  -----  ----                          -------------  ----    ------  -------
  7s         7s        1      {nginx-ingress-controller }                  Normal  CREATE  default/nginx-test

$ curl -I http://stickyingress.example.com
HTTP/1.1 200 OK
Server: nginx/1.11.9
Date: Fri, 10 Feb 2017 14:11:12 GMT
Content-Type: text/html
Content-Length: 612
Connection: keep-alive
Set-Cookie: INGRESSCOOKIE=a9907b79b248140b56bb13723f72b67697baac3d; Expires=Sun, 12-Feb-17 14:11:12 GMT; Max-Age=172800; Path=/; HttpOnly
Last-Modified: Tue, 24 Jan 2017 14:02:19 GMT
ETag: "58875e6b-264"
Accept-Ranges: bytes

In the example above, you can see the Set-Cookie: INGRESSCOOKIE line setting the stickiness cookie as defined. This cookie is created by NGINX; it contains the hash of the upstream used for that request and carries an Expires directive. If the user changes this cookie, NGINX creates a new one and redirects the user to another upstream.

If the backend pool grows, NGINX will keep sending the requests through the same server as the first request, even if it is overloaded.

When the backend server is removed, the requests are re-routed to another upstream server and NGINX creates a new cookie, as the previous hash became invalid.

When you have more than one Ingress object pointing to the same Service, but only one of them contains the affinity configuration, the first Ingress created will be used. This means you can end up in a situation where you have configured session affinity in one Ingress and it is not reflected in the NGINX configuration, because another Ingress object pointing to the same Service does not configure it.
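For reference, an Ingress carrying these annotations could look like the following sketch; the host, service name, and cookie settings are taken from the validation output above, and the exact ingress.yaml shipped with the example may differ:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    # enable cookie-based session affinity
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: stickyingress.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80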
Basic Authentication

This example shows how to add authentication to an Ingress rule using a secret that contains a file generated with htpasswd. It is important that the generated file is named auth (more precisely, that the secret has a key data.auth), otherwise the ingress controller returns a 503.
$ htpasswd -c auth foo
New password:
Re-type new password:
Adding password for user foo

$ kubectl create secret generic basic-auth --from-file=auth
secret "basic-auth" created

$ kubectl get secret basic-auth -o yaml
apiVersion: v1
data:
  auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK
kind: Secret
metadata:
  name: basic-auth
  namespace: default
type: Opaque
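If the ingress controller answers with a 503, the first thing worth checking is that the secret really carries the auth key; one way to do that, assuming the secret created above:

$ kubectl get secret basic-auth -o jsonpath='{.data.auth}' | base64 --decode
foo:$apr1$OFG3Xybp$ckL0FHDAkoXYIlH9.cysT0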
echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-auth
  annotations:
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
" | kubectl create -f -

$ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com'
*   Trying 10.2.29.4...
* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)
> GET / HTTP/1.1
> Host: foo.bar.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Server: nginx/1.10.0
< Date: Wed, 11 May 2016 05:27:23 GMT
< Content-Type: text/html
< Content-Length: 195
< Connection: keep-alive
< WWW-Authenticate: Basic realm="Authentication Required - foo"
<
401 Authorization Required
nginx/1.10.0
* Connection #0 to host 10.2.29.4 left intact

$ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' -u 'foo:bar'
*   Trying 10.2.29.4...
* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)
* Server auth using Basic with user 'foo'
> GET / HTTP/1.1
> Host: foo.bar.com
> Authorization: Basic Zm9vOmJhcg==
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.10.0
< Date: Wed, 11 May 2016 06:05:26 GMT
< Content-Type: text/plain
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
<
CLIENT VALUES:
client_address=10.2.29.4
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://foo.bar.com:8080/

SERVER VALUES:
server_version=nginx: 1.9.11 - lua: 10001

HEADERS RECEIVED:
accept=*/*
authorization=Basic Zm9vOmJhcg==
connection=close
host=foo.bar.com
user-agent=curl/7.43.0
x-forwarded-for=10.2.29.1
x-forwarded-host=foo.bar.com
x-forwarded-port=80
x-forwarded-proto=http
x-real-ip=10.2.29.1
BODY:
* Connection #0 to host 10.2.29.4 left intact
-no body in request-

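To add or update users later, regenerate the auth file and recreate the secret; a minimal sketch (the user name bob is illustrative):

$ htpasswd auth bob
$ kubectl delete secret basic-auth
$ kubectl create secret generic basic-auth --from-file=auth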
Client Certificate Authentication

It is possible to enable client-certificate authentication by adding additional annotations to your Ingress resource. Before getting started you must have the following certificates set up:

- CA certificate and key (intermediate certs need to be in the CA bundle)
- Server certificate (signed by the CA) and key (the CN should be equal to the hostname you will use)
- Client certificate (signed by the CA) and key

For more details on the generation process, check out the Prerequisite docs.

You can have as many certificates as you want. If they're in the binary DER format, you can convert them as follows:

$ openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem

Then you can concatenate them all into a single file, named 'ca.crt', as follows:

$ cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt

Note: Make sure that the key size is greater than 1024 bits and the hashing algorithm (digest) is something better than MD5 for each certificate generated, otherwise you will receive an error.

Creating Certificate Secrets

There are many different ways of configuring your secrets to enable client-certificate authentication to work properly.

You can create a secret containing just the CA certificate and another secret containing the server certificate, which is signed by the CA:

$ kubectl create secret generic ca-secret --from-file=ca.crt=ca.crt
$ kubectl create secret generic tls-secret --from-file=tls.crt=server.crt --from-file=tls.key=server.key

You can also create a secret containing the CA certificate along with the server certificate, which can be used for both TLS and client auth:

$ kubectl create secret generic ca-secret --from-file=tls.crt=server.crt --from-file=tls.key=server.key --from-file=ca.crt=ca.crt

Note: The CA certificate must contain the trusted certificate authority chain to verify client certificates.

Setup Instructions

- Add the annotations as provided in the ingress.yaml example to your own Ingress resources as required.
- Test by performing a curl against the Ingress path without the client cert and expect a status code 400.
- Test by performing a curl against the Ingress path with the client cert and expect a status code 200.
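For reference, the client-certificate annotations used by such an ingress.yaml look roughly like the following sketch; the secret names match the ones created above, while the host and backend service are assumptions:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-client-auth
  annotations:
    # enable verification of client certificates
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    # secret containing the trusted ca.crt
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
    # depth of the chain of trust to verify
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
spec:
  tls:
  - hosts:
    - mydomain.com
    secretName: tls-secret
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80

A matching curl test with the client certificate could then look like:

$ curl -v --cacert ca.crt --cert client.crt --key client.key https://mydomain.com/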
External Basic Authentication

Example 1:

Use an external service (Basic Auth) located at https://httpbin.org

$ kubectl create -f ingress.yaml
ingress "external-auth" created

$ kubectl get ing external-auth
NAME            HOSTS                         ADDRESS       PORTS     AGE
external-auth   external-auth-01.sample.com   172.17.4.99   80        13s

$ kubectl get ing external-auth -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: https://httpbin.org/basic-auth/user/passwd
  creationTimestamp: 2016-10-03T13:50:35Z
  generation: 1
  name: external-auth
  namespace: default
  resourceVersion: "2068378"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/external-auth
  uid: 5c388f1d-8970-11e6-9004-080027d2dc94
spec:
  rules:
  - host: external-auth-01.sample.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
        path: /
status:
  loadBalancer:
    ingress:
    - ip: 172.17.4.99

Test 1: no username/password (expect code 401)

$ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com'
* Rebuilt URL to: http://172.17.4.99/
*   Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
> GET / HTTP/1.1
> Host: external-auth-01.sample.com
> User-Agent: curl/7.50.1
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Server: nginx/1.11.3
< Date: Mon, 03 Oct 2016 14:52:08 GMT
< Content-Type: text/html
< Content-Length: 195
< Connection: keep-alive
< WWW-Authenticate: Basic realm="Fake Realm"
<
401 Authorization Required
nginx/1.11.3
* Connection #0 to host 172.17.4.99 left intact

Test 2: valid username/password (expect code 200)

$ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:passwd'
* Rebuilt URL to: http://172.17.4.99/
*   Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
* Server auth using Basic with user 'user'
> GET / HTTP/1.1
> Host: external-auth-01.sample.com
> Authorization: Basic dXNlcjpwYXNzd2Q=
> User-Agent: curl/7.50.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.11.3
< Date: Mon, 03 Oct 2016 14:52:50 GMT
< Content-Type: text/plain
< Transfer-Encoding: chunked
< Connection: keep-alive
<
CLIENT VALUES:
client_address=10.2.60.2
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://external-auth-01.sample.com:8080/

SERVER VALUES:
server_version=nginx: 1.9.11 - lua: 10001

HEADERS RECEIVED:
accept=*/*
authorization=Basic dXNlcjpwYXNzd2Q=
connection=close
host=external-auth-01.sample.com
user-agent=curl/7.50.1
x-forwarded-for=10.2.60.1
x-forwarded-host=external-auth-01.sample.com
x-forwarded-port=80
x-forwarded-proto=http
x-real-ip=10.2.60.1
BODY:
* Connection #0 to host 172.17.4.99 left intact
-no body in request-

Test 3: invalid username/password (expect code 401)

$ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:user'
* Rebuilt URL to: http://172.17.4.99/
*   Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
* Server auth using Basic with user 'user'
> GET / HTTP/1.1
> Host: external-auth-01.sample.com
> Authorization: Basic dXNlcjp1c2Vy
> User-Agent: curl/7.50.1
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Server: nginx/1.11.3
< Date: Mon, 03 Oct 2016 14:53:04 GMT
< Content-Type: text/html
< Content-Length: 195
< Connection: keep-alive
* Authentication problem. Ignoring this.
< WWW-Authenticate: Basic realm="Fake Realm"
<
401 Authorization Required
nginx/1.11.3
* Connection #0 to host 172.17.4.99 left intact
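You can also check the external service on its own, independently of the Ingress; assuming httpbin.org is reachable from your machine:

$ curl -u 'user:passwd' https://httpbin.org/basic-auth/user/passwd
{
  "authenticated": true,
  "user": "user"
}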

External OAUTH Authentication

Overview

The auth-url and auth-signin annotations allow you to use an external authentication provider to protect your Ingress resources.

Important: This annotation requires nginx-ingress-controller v0.9.0 or greater.

Key Detail

This functionality is enabled by deploying multiple Ingress objects for a single host. One Ingress object has no special annotations and handles authentication. Other Ingress objects can then be annotated in such a way that requires the user to authenticate against the first Ingress's endpoint, and can redirect 401s to the same endpoint.

Sample:

...
metadata:
  name: application
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
...

Example: OAuth2 Proxy + Kubernetes-Dashboard

This example will show you how to deploy oauth2_proxy into a Kubernetes cluster and use it to protect the Kubernetes Dashboard, using GitHub as the OAuth2 provider.

Prepare

- Install the kubernetes dashboard:

  kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml

- Create a custom Github OAuth application. The Homepage URL is the FQDN in the Ingress rule, like https://foo.bar.com. The Authorization callback URL is the same as the base FQDN plus /oauth2, like https://foo.bar.com/oauth2.
- Configure the oauth2_proxy values in the file oauth2-proxy.yaml: OAUTH2_PROXY_CLIENT_ID with the GitHub Client ID, OAUTH2_PROXY_CLIENT_SECRET with the GitHub Client Secret, and OAUTH2_PROXY_COOKIE_SECRET with the value of python -c 'import os,base64; print base64.b64encode(os.urandom(16))' (a Python 3 variant is shown after this list).
- Customize the contents of the file dashboard-ingress.yaml: replace __INGRESS_HOST__ with a valid FQDN and __INGRESS_SECRET__ with a Secret containing a valid SSL certificate.
- Deploy the oauth2 proxy and the ingress rules by running:

  $ kubectl create -f oauth2-proxy.yaml,dashboard-ingress.yaml

- Test the OAuth integration by accessing the configured URL, like https://foo.bar.com
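On systems where python points at Python 3, the cookie-secret one-liner above needs print as a function; an equivalent sketch:

python3 -c 'import os, base64; print(base64.b64encode(os.urandom(16)).decode())'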
Configuration Snippets

Ingress

The Ingress in this example adds a custom header to the NGINX configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at this example.

$ kubectl apply -f ingress.yaml

Test

Check if the contents of the annotation are present in the nginx.conf file using:

kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf
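The annotation involved is nginx.ingress.kubernetes.io/configuration-snippet; a minimal sketch of an Ingress using it (the header name, host, and backend are illustrative, and the example's own ingress.yaml may differ):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-configuration-snippet
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      # inject an extra response header for this Ingress only
      more_set_headers "Request-Id: $req_id";
spec:
  rules:
  - host: custom.configuration.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80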
Custom Configuration

Using a ConfigMap it is possible to customize the NGINX configuration. For example, if we want to change the timeouts we need to create a ConfigMap:

$ cat configmap.yaml
apiVersion: v1
data:
  proxy-connect-timeout: "10"
  proxy-read-timeout: "120"
  proxy-send-timeout: "120"
kind: ConfigMap
metadata:
  name: nginx-configuration

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/custom-configuration/configmap.yaml \
| kubectl apply -f -

If the ConfigMap is updated, NGINX will be reloaded with the new configuration.

Custom Errors

This example demonstrates how to use a custom backend to render custom error pages.

Customized default backend

First, create the custom default-backend. It will be used by the Ingress controller later on.

$ kubectl create -f custom-default-backend.yaml
service "nginx-errors" created
deployment.apps "nginx-errors" created

This should have created a Deployment and a Service with the name nginx-errors.

$ kubectl get deploy,svc
NAME                           DESIRED   CURRENT   READY     AGE
deployment.apps/nginx-errors   1         1         1         10s

NAME                   TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/nginx-errors   ClusterIP   10.0.0.12    <none>        80/TCP    10s

Ingress controller configuration

If you do not already have an instance of the NGINX Ingress controller running, deploy it according to the deployment guide, then follow these steps:

- Edit the nginx-ingress-controller Deployment and set the value of the --default-backend flag to the name of the newly created error backend.
- Edit the nginx-configuration ConfigMap and create the key custom-http-errors with a value of 404,503 (a sketch of one way to do this appears after these steps).
- Take note of the IP address assigned to the NGINX Ingress controller Service.

$ kubectl get svc ingress-nginx
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
ingress-nginx   ClusterIP   10.0.0.13    <none>        80/TCP,443/TCP   10m

Note: The ingress-nginx Service is of type ClusterIP in this example. This may vary depending on your environment. Make sure you can use the Service to reach NGINX before proceeding with the rest of this example.
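One way to set that ConfigMap key from the command line; this is a sketch assuming the controller was deployed in the ingress-nginx namespace with a ConfigMap named nginx-configuration:

kubectl patch configmap nginx-configuration -n ingress-nginx \
  --type merge -p '{"data":{"custom-http-errors":"404,503"}}'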
Testing error pages

Let us send a couple of HTTP requests using cURL and validate everything is working as expected.

A request to the default backend returns a 404 error with a custom message:

$ curl -D- http://10.0.0.13/
HTTP/1.1 404 Not Found
Server: nginx/1.13.12
Date: Tue, 12 Jun 2018 19:11:24 GMT
Content-Type: */*
Transfer-Encoding: chunked
Connection: keep-alive

The page you're looking for could not be found.

A request with a custom Accept header returns the corresponding document type (JSON):

$ curl -D- -H 'Accept: application/json' http://10.0.0.13/
HTTP/1.1 404 Not Found
Server: nginx/1.13.12
Date: Tue, 12 Jun 2018 19:12:36 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
Vary: Accept-Encoding

{"message":"The page you're looking for could not be found"}

To go further with this example, feel free to deploy your own applications and Ingress objects, and validate that the responses are still in the correct format when a backend returns 503 (e.g. if you scale a Deployment down to 0 replicas).
Custom Headers

This example aims to demonstrate the deployment of an nginx ingress controller and the use of a ConfigMap to configure a custom list of headers to be passed to the upstream server:

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/custom-headers/configmap.yaml \
| kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/custom-headers/custom-headers.yaml \
| kubectl apply -f -

Test

Check that the contents of the ConfigMap are present in the nginx.conf file using:

kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf
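For orientation, the two manifests fetched above amount to something like the following sketch: a ConfigMap holding the headers, referenced through the proxy-set-headers key of the controller's ConfigMap. The names and header entries here are assumptions based on this example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  X-Different-Name: "true"
  X-Using-Nginx-Controller: "true"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # points at the ConfigMap above as namespace/name
  proxy-set-headers: "ingress-nginx/custom-headers"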
External authentication, authentication service response headers propagation

This example demonstrates propagation of selected authentication service response headers to a backend service.

The sample configuration includes:

- A sample authentication service producing several response headers. Authentication logic is based on an HTTP header: requests with the header User containing the string internal are considered authenticated. After successful authentication the service generates the response headers UserID and UserRole.
- A sample echo service displaying header information.
- Two Ingress objects pointing to the echo service: Public, which allows access from unauthenticated users, and Private, which allows access from authenticated users only.

You can deploy the controller as follows:

$ kubectl create -f deploy/
deployment "demo-auth-service" created
service "demo-auth-service" created
ingress "demo-auth-service" created
deployment "demo-echo-service" created
service "demo-echo-service" created
ingress "public-demo-echo-service" created
ingress "secure-demo-echo-service" created

$ kubectl get po
NAME                                 READY     STATUS    RESTARTS   AGE
demo-auth-service-2769076528-7g9mh   1/1       Running   0          30s
demo-echo-service-3636052215-3vw8c   1/1       Running   0          29s

$ kubectl get ing
NAME                       HOSTS                                 ADDRESS   PORTS     AGE
public-demo-echo-service   public-demo-echo-service.kube.local             80        1m
secure-demo-echo-service   secure-demo-echo-service.kube.local             80        1m

Test 1: public service with no auth header

$ curl -H 'Host: public-demo-echo-service.kube.local' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
*   Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
> GET / HTTP/1.1
> Host: public-demo-echo-service.kube.local
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.11.10
< Date: Mon, 13 Mar 2017 20:19:21 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 20
< Connection: keep-alive
<
* Connection #0 to host 192.168.99.100 left intact
UserID: , UserRole:

Test 2: secure service with no auth header

$ curl -H 'Host: secure-demo-echo-service.kube.local' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
*   Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
> GET / HTTP/1.1
> Host: secure-demo-echo-service.kube.local
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< Server: nginx/1.11.10
< Date: Mon, 13 Mar 2017 20:18:48 GMT
< Content-Type: text/html
< Content-Length: 170
< Connection: keep-alive
<
403 Forbidden
nginx/1.11.10
* Connection #0 to host 192.168.99.100 left intact

Test 3: public service with valid auth header

$ curl -H 'Host: public-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
*   Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
> GET / HTTP/1.1
> Host: public-demo-echo-service.kube.local
> User-Agent: curl/7.43.0
> Accept: */*
> User:internal
>
< HTTP/1.1 200 OK
< Server: nginx/1.11.10
< Date: Mon, 13 Mar 2017 20:19:59 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 44
< Connection: keep-alive
<
* Connection #0 to host 192.168.99.100 left intact
UserID: 1443635317331776148, UserRole: admin

Test 4: secure service with valid auth header

$ curl -H 'Host: secure-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100
* Rebuilt URL to: 192.168.99.100/
*   Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)
> GET / HTTP/1.1
> Host: secure-demo-echo-service.kube.local
> User-Agent: curl/7.43.0
> Accept: */*
> User:internal
>
< HTTP/1.1 200 OK
< Server: nginx/1.11.10
< Date: Mon, 13 Mar 2017 20:17:23 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 43
< Connection: keep-alive
<
* Connection #0 to host 192.168.99.100 left intact
UserID: 605394647632969758, UserRole: admin
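The header propagation itself is driven by annotations on the secure Ingress; a minimal sketch of what such an Ingress can look like (service and host names are taken from the output above, and the exact manifest in deploy/ may differ):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: secure-demo-echo-service
  annotations:
    # external service that decides whether the request is authenticated
    nginx.ingress.kubernetes.io/auth-url: http://demo-auth-service.default.svc.cluster.local
    # response headers from the auth service to copy onto the upstream request
    nginx.ingress.kubernetes.io/auth-response-headers: UserID, UserRole
spec:
  rules:
  - host: secure-demo-echo-service.kube.local
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-echo-service
          servicePort: 80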
Custom DH parameters for perfect forward secrecy

This example aims to demonstrate the deployment of an nginx ingress controller and the use of a ConfigMap to configure a custom Diffie-Hellman parameters file to help with "Perfect Forward Secrecy".

Custom configuration

$ cat configmap.yaml
apiVersion: v1
data:
  ssl-dh-param: "ingress-nginx/lb-dhparam"
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

$ kubectl create -f configmap.yaml

Custom DH parameters secret

$ openssl dhparam 1024 2> /dev/null | base64
LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ...
$ cat ssl-dh-param.yaml
apiVersion: v1
data:
  dhparam.pem: "LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ..."
kind: Secret
metadata:
  name: lb-dhparam
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

$ kubectl create -f ssl-dh-param.yaml

Test

Check that the contents of the ConfigMap are present in the nginx.conf file using:

kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf

Sysctl tuning

This example aims to demonstrate the use of an Init Container to adjust sysctl default values using kubectl patch:

kubectl patch deployment -n ingress-nginx nginx-ingress-controller --patch="$(cat patch.json)"
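The patch.json file ships with the example; a sketch of what such a patch can contain, with the sysctl key, value, and image chosen here purely for illustration:

{
  "spec": {
    "template": {
      "spec": {
        "initContainers": [
          {
            "name": "sysctl",
            "image": "alpine:3.6",
            "securityContext": { "privileged": true },
            "command": ["sh", "-c", "sysctl -w net.core.somaxconn=32768"]
          }
        ]
      }
    }
  }
}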
Docker registry

This example demonstrates how to deploy a docker registry in the cluster and configure Ingress to enable access from the Internet.

Deployment

First we deploy the docker registry in the cluster:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/deployment.yaml

Important: DO NOT RUN THIS IN PRODUCTION. This deployment uses emptyDir in the volumeMount, which means the contents of the registry will be deleted when the pod dies.

The next required step is the creation of the ingress rules. To do this we have two options: with and without TLS.

Without TLS

Download and edit the yaml deployment, replacing registry.<your domain> with a valid DNS name pointing to the ingress controller:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-without-tls.yaml

Important: Running a docker registry without TLS requires us to configure our local docker daemon with the insecure registry flag. Please check how to deploy a plain http registry.

With TLS

Download and edit the yaml deployment, replacing registry.<your domain> with a valid DNS name pointing to the ingress controller:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-with-tls.yaml

Deploy kube-lego to use Let's Encrypt certificates, or edit the ingress rule to use a secret with an existing SSL certificate.

Testing

To test that the registry is working correctly we download a known image from docker hub, create a tag pointing to the new registry, and upload the image:

docker pull ubuntu:16.04
docker tag ubuntu:16.04 registry.<your domain>/ubuntu:16.04
docker push registry.<your domain>/ubuntu:16.04

Please replace registry.<your domain> with your domain.

gRPC

This example demonstrates how to route traffic to a gRPC service through the nginx controller.

Prerequisites

- You have a kubernetes cluster running.
- You have a domain name such as example.com that is configured to route traffic to the ingress controller.
Replace references to fortune-teller.stack.build (the domain name used in this example) with your own domain name (you're also responsible for provisioning an SSL certificate for the ingress). You have the nginx-ingress controller installed in typical fashion (must be at least quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.13.0 for grpc support). You have a backend application running a gRPC server and listening for TCP traffic. If you prefer, you can use the fortune-teller application provided here as an example. Step 1: kubernetes Deployment \u00b6 $ kubectl create -f app.yaml This is a standard kubernetes deployment object. It is running a grpc service listening on port 50051 . The sample application fortune-teller-app is a grpc server implemented in go. Here's the stripped-down implementation: func main () { grpcServer := grpc . NewServer () fortune . RegisterFortuneTellerServer ( grpcServer , & FortuneTeller {}) lis , _ := net . Listen ( \"tcp\" , \":50051\" ) grpcServer . Serve ( lis ) } The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, grpc traffic will travel unencrypted inside the cluster and arrive \"insecure\"). For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your POD and terminate TLS at the gRPC server itself, add the ingress annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPCS\" . Step 2: the kubernetes Service \u00b6 $ kubectl create -f svc.yaml Here we have a typical service. Nothing special, just routing traffic to the backend application on port 50051 . Step 3: the kubernetes Ingress \u00b6 $ kubectl create -f ingress.yaml A few things to note: We've tagged the ingress with the annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPC\" . This is the magic ingredient that sets up the appropriate nginx configuration to route http/2 traffic to our service. We're terminating TLS at the ingress and have configured an SSL certificate for fortune-teller.stack.build . The ingress matches traffic arriving as https://fortune-teller.stack.build:443 and routes unencrypted messages to our kubernetes service. Step 4: test the connection \u00b6 Once we've applied our configuration to kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility: $ grpcurl fortune-teller.stack.build:443 build.stack.fortune.FortuneTeller/Predict { \"message\" : \"Let us endeavor so to live that when we come to die even the undertaker will be sorry.\\n\\t\\t-- Mark Twain, \\\"Pudd'nhead Wilson's Calendar\\\"\" } Debugging Hints \u00b6 Obviously, watch the logs on your app. Watch the logs for the nginx-ingress-controller (increasing verbosity as needed). Double-check your address and ports. Set the GODEBUG=http2debug=2 environment variable to get detailed http/2 logging on the client and/or server. Study RFC 7540 (http/2) https://tools.ietf.org/html/rfc7540 . If you are developing public gRPC endpoints, check out https://proto.stack.build, a protocol buffer / gRPC build service that you can use to help make it easier for your users to consume your API.","title":"gRPC"},{"location":"examples/grpc/#grpc","text":"This example demonstrates how to route traffic to a gRPC service through the nginx controller.","title":"gRPC"},{"location":"examples/grpc/#prerequisites","text":"You have a kubernetes cluster running. 
You have a domain name such as example.com that is configured to route traffic to the ingress controller. Replace references to fortune-teller.stack.build (the domain name used in this example) to your own domain name (you're also responsible for provisioning an SSL certificate for the ingress). You have the nginx-ingress controller installed in typical fashion (must be at least quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.13.0 for grpc support. You have a backend application running a gRPC server and listening for TCP traffic. If you prefer, you can use the fortune-teller application provided here as an example.","title":"Prerequisites"},{"location":"examples/grpc/#step-1-kubernetes-deployment","text":"$ kubectl create -f app.yaml This is a standard kubernetes deployment object. It is running a grpc service listening on port 50051 . The sample application fortune-teller-app is a grpc server implemented in go. Here's the stripped-down implementation: func main () { grpcServer := grpc . NewServer () fortune . RegisterFortuneTellerServer ( grpcServer , & FortuneTeller {}) lis , _ := net . Listen ( \"tcp\" , \":50051\" ) grpcServer . Serve ( lis ) } The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, grpc traffic will travel unencrypted inside the cluster and arrive \"insecure\"). For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your POD and terminate TLS at the gRPC server itself, add the ingress annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPCS\" .","title":"Step 1: kubernetes Deployment"},{"location":"examples/grpc/#step-2-the-kubernetes-service","text":"$ kubectl create -f svc.yaml Here we have a typical service. Nothing special, just routing traffic to the backend application on port 50051 .","title":"Step 2: the kubernetes Service"},{"location":"examples/grpc/#step-3-the-kubernetes-ingress","text":"$ kubectl create -f ingress.yaml A few things to note: We've tagged the ingress with the annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPC\" . This is the magic ingredient that sets up the appropriate nginx configuration to route http/2 traffic to our service. We're terminating TLS at the ingress and have configured an SSL certificate fortune-teller.stack.build . The ingress matches traffic arriving as https://fortune-teller.stack.build:443 and routes unencrypted messages to our kubernetes service.","title":"Step 3: the kubernetes Ingress"},{"location":"examples/grpc/#step-4-test-the-connection","text":"Once we've applied our configuration to kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility: $ grpcurl fortune-teller.stack.build:443 build.stack.fortune.FortuneTeller/Predict { \"message\" : \"Let us endeavor so to live that when we come to die even the undertaker will be sorry.\\n\\t\\t-- Mark Twain, \\\"Pudd'nhead Wilson's Calendar\\\"\" }","title":"Step 4: test the connection"},{"location":"examples/grpc/#debugging-hints","text":"Obviously, watch the logs on your app. Watch the logs for the nginx-ingress-controller (increasing verbosity as needed). Double-check your address and ports. Set the GODEBUG=http2debug=2 environment variable to get detailed http/2 logging on the client and/or server. Study RFC 7540 (http/2) https://tools.ietf.org/html/rfc7540 . 
If you are developing public gRPC endpoints, check out https://proto.stack.build, a protocol buffer / gRPC build service that can use to help make it easier for your users to consume your API.","title":"Debugging Hints"},{"location":"examples/multi-tls/","text":"Multi TLS certificate termination \u00b6 This example uses 2 different certificates to terminate SSL for 2 hostnames. Deploy the controller by creating the rc in the parent dir Create tls secrets for foo.bar.com and bar.baz.com as indicated in the yaml Create multi-tls.yaml This should generate a segment like: $ kubectl exec -it nginx-ingress-controller-6vwd1 -- cat /etc/nginx/nginx.conf | grep \"foo.bar.com\" -B 7 -A 35 server { listen 80; listen 443 ssl http2; ssl_certificate /etc/nginx-ssl/default-foobar.pem; ssl_certificate_key /etc/nginx-ssl/default-foobar.pem; server_name foo.bar.com; if ($scheme = http) { return 301 https://$host$request_uri; } location / { proxy_set_header Host $host; # Pass Real IP proxy_set_header X-Real-IP $remote_addr; # Allow websocket connections proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Proto $pass_access_scheme; proxy_connect_timeout 5s; proxy_send_timeout 60s; proxy_read_timeout 60s; proxy_redirect off; proxy_buffering off; proxy_http_version 1.1; proxy_pass http://default-http-svc-80; } And you should be able to reach your nginx service or http-svc service using a hostname switch: $ kubectl get ing NAME RULE BACKEND ADDRESS AGE foo-tls - 104.154.30.67 13m foo.bar.com / http-svc:80 bar.baz.com / nginx:80 $ curl https://104.154.30.67 -H 'Host:foo.bar.com' -k CLIENT VALUES: client_address=10.245.0.6 command=GET real path=/ query=nil request_version=1.1 request_uri=http://foo.bar.com:8080/ SERVER VALUES: server_version=nginx: 1.9.11 - lua: 10001 HEADERS RECEIVED: accept=*/* connection=close host=foo.bar.com user-agent=curl/7.35.0 x-forwarded-for=10.245.0.1 x-forwarded-host=foo.bar.com x-forwarded-proto=https $ curl https://104.154.30.67 -H 'Host:bar.baz.com' -k Welcome to nginx on Debian! $ curl 104 .154.30.67 default backend - 404","title":"Multi TLS certificate termination"},{"location":"examples/multi-tls/#multi-tls-certificate-termination","text":"This example uses 2 different certificates to terminate SSL for 2 hostnames. 
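The two TLS secrets referenced in the steps below can be created with kubectl from existing certificate and key files; a minimal sketch (secret and file names are illustrative, not taken from the example yaml):
# one tls secret per hostname, from a cert/key pair issued for that hostname
kubectl create secret tls foobar-ssl --cert=foo.bar.com.crt --key=foo.bar.com.key
kubectl create secret tls barbaz-ssl --cert=bar.baz.com.crt --key=bar.baz.com.key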
Deploy the controller by creating the rc in the parent dir Create tls secrets for foo.bar.com and bar.baz.com as indicated in the yaml Create multi-tls.yaml This should generate a segment like: $ kubectl exec -it nginx-ingress-controller-6vwd1 -- cat /etc/nginx/nginx.conf | grep \"foo.bar.com\" -B 7 -A 35 server { listen 80; listen 443 ssl http2; ssl_certificate /etc/nginx-ssl/default-foobar.pem; ssl_certificate_key /etc/nginx-ssl/default-foobar.pem; server_name foo.bar.com; if ($scheme = http) { return 301 https://$host$request_uri; } location / { proxy_set_header Host $host; # Pass Real IP proxy_set_header X-Real-IP $remote_addr; # Allow websocket connections proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Proto $pass_access_scheme; proxy_connect_timeout 5s; proxy_send_timeout 60s; proxy_read_timeout 60s; proxy_redirect off; proxy_buffering off; proxy_http_version 1.1; proxy_pass http://default-http-svc-80; } And you should be able to reach your nginx service or http-svc service using a hostname switch: $ kubectl get ing NAME RULE BACKEND ADDRESS AGE foo-tls - 104.154.30.67 13m foo.bar.com / http-svc:80 bar.baz.com / nginx:80 $ curl https://104.154.30.67 -H 'Host:foo.bar.com' -k CLIENT VALUES: client_address=10.245.0.6 command=GET real path=/ query=nil request_version=1.1 request_uri=http://foo.bar.com:8080/ SERVER VALUES: server_version=nginx: 1.9.11 - lua: 10001 HEADERS RECEIVED: accept=*/* connection=close host=foo.bar.com user-agent=curl/7.35.0 x-forwarded-for=10.245.0.1 x-forwarded-host=foo.bar.com x-forwarded-proto=https $ curl https://104.154.30.67 -H 'Host:bar.baz.com' -k Welcome to nginx on Debian! $ curl 104 .154.30.67 default backend - 404","title":"Multi TLS certificate termination"},{"location":"examples/rewrite/","text":"Rewrite \u00b6 This example demonstrates how to use the Rewrite annotations Prerequisites \u00b6 You will need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation , and that you have an ingress controller running in your cluster. Deployment \u00b6 Rewriting can be controlled using the following annotations: Name Description Values nginx.ingress.kubernetes.io/rewrite-target Target URI where the traffic must be redirected string nginx.ingress.kubernetes.io/ssl-redirect Indicates if the location section is accessible SSL only (defaults to True when Ingress contains a Certificate) bool nginx.ingress.kubernetes.io/force-ssl-redirect Forces the redirection to HTTPS even if the Ingress is not TLS Enabled bool nginx.ingress.kubernetes.io/app-root Defines the Application Root that the Controller must redirect if it's in '/' context string nginx.ingress.kubernetes.io/use-regex Indicates if the paths defined on an Ingress use regular expressions bool Examples \u00b6 Rewrite Target \u00b6 Attention Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group . Note Captured groups are saved in numbered placeholders, chronologically, in the form $1 , $2 ... $n . These placeholders can be used as parameters in the rewrite-target annotation. 
Create an Ingress rule with a rewrite annotation: $ echo \" apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 name: rewrite namespace: default spec: rules: - host: rewrite.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: /something/?(.*) \" | kubectl create -f - In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $1 , which is then used as a parameter in the rewrite-target annotation. For example, the ingress definition above will result in the following rewrites: - rewrite.bar.com/something rewrites to rewrite.bar.com/ - rewrite.bar.com/something/ rewrites to rewrite.bar.com/ - rewrite.bar.com/something/new rewrites to rewrite.bar.com/new App Root \u00b6 Create an Ingress rule with an app-root annotation: $ echo \" apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/app-root: /app1 name: approot namespace: default spec: rules: - host: approot.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: / \" | kubectl create -f - Check that the rewrite is working: $ curl -I -k http://approot.bar.com/ HTTP/1.1 302 Moved Temporarily Server: nginx/1.11.10 Date: Mon, 13 Mar 2017 14 :57:15 GMT Content-Type: text/html Content-Length: 162 Location: http://approot.bar.com/app1 Connection: keep-alive","title":"Rewrite"},{"location":"examples/rewrite/#rewrite","text":"This example demonstrates how to use the Rewrite annotations.","title":"Rewrite"},{"location":"examples/rewrite/#prerequisites","text":"You will need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation , and that you have an ingress controller running in your cluster.","title":"Prerequisites"},{"location":"examples/rewrite/#deployment","text":"Rewriting can be controlled using the following annotations: Name Description Values nginx.ingress.kubernetes.io/rewrite-target Target URI where the traffic must be redirected string nginx.ingress.kubernetes.io/ssl-redirect Indicates if the location section is accessible SSL only (defaults to True when Ingress contains a Certificate) bool nginx.ingress.kubernetes.io/force-ssl-redirect Forces the redirection to HTTPS even if the Ingress is not TLS Enabled bool nginx.ingress.kubernetes.io/app-root Defines the Application Root that the Controller must redirect if it's in '/' context string nginx.ingress.kubernetes.io/use-regex Indicates if the paths defined on an Ingress use regular expressions bool","title":"Deployment"},{"location":"examples/rewrite/#examples","text":"","title":"Examples"},{"location":"examples/rewrite/#rewrite-target","text":"Attention Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group . Note Captured groups are saved in numbered placeholders, chronologically, in the form $1 , $2 ... $n . These placeholders can be used as parameters in the rewrite-target annotation. 
Create an Ingress rule with a rewrite annotation: $ echo \" apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 name: rewrite namespace: default spec: rules: - host: rewrite.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: /something/?(.*) \" | kubectl create -f - In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $1 , which is then used as a parameter in the rewrite-target annotation. For example, the ingress definition above will result in the following rewrites: - rewrite.bar.com/something rewrites to rewrite.bar.com/ - rewrite.bar.com/something/ rewrites to rewrite.bar.com/ - rewrite.bar.com/something/new rewrites to rewrite.bar.com/new","title":"Rewrite Target"},{"location":"examples/rewrite/#app-root","text":"Create an Ingress rule with an app-root annotation: $ echo \" apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/app-root: /app1 name: approot namespace: default spec: rules: - host: approot.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: / \" | kubectl create -f - Check that the rewrite is working: $ curl -I -k http://approot.bar.com/ HTTP/1.1 302 Moved Temporarily Server: nginx/1.11.10 Date: Mon, 13 Mar 2017 14 :57:15 GMT Content-Type: text/html Content-Length: 162 Location: http://approot.bar.com/app1 Connection: keep-alive","title":"App Root"},{"location":"examples/static-ip/","text":"Static IPs \u00b6 This example demonstrates how to assign a static IP to an Ingress through the Nginx controller. Prerequisites \u00b6 You need a TLS cert and a test HTTP service for this example. You will also need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation , and that you have an ingress controller running in your cluster. Acquiring an IP \u00b6 Since instances of the nginx controller actually run on nodes in your cluster, by default nginx Ingresses will only get static IPs if your cloud provider supports static IP assignments to nodes. On GKE/GCE for example, even though nodes get static IPs, the IPs are not retained across upgrades. To acquire a static IP for the nginx ingress controller, simply put it behind a Service of Type=LoadBalancer . First, create a loadbalancer Service and wait for it to acquire an IP: $ kubectl create -f static-ip-svc.yaml service \"nginx-ingress-lb\" created $ kubectl get svc nginx-ingress-lb NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-ingress-lb 10.0.138.113 104.154.109.191 80:31457/TCP,443:32240/TCP 15m then, update the ingress controller so it adopts the static IP of the Service by passing the --publish-service flag (the example yaml used in the next step already has it set to \"nginx-ingress-lb\"). $ kubectl create -f nginx-ingress-controller.yaml deployment \"nginx-ingress-controller\" created Assigning the IP to an Ingress \u00b6 From here on every Ingress created with the ingress.class annotation set to nginx will get the IP allocated in the previous step: $ kubectl create -f nginx-ingress.yaml ingress \"nginx-ingress\" created $ kubectl get ing nginx-ingress NAME HOSTS ADDRESS PORTS AGE nginx-ingress * 104.154.109.191 80, 443 13m $ curl 104 .154.109.191 -kL CLIENT VALUES: client_address=10.180.1.25 command=GET real path=/ query=nil request_version=1.1 request_uri=http://104.154.109.191:8080/ ... 
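The Ingress created above relies only on the ingress.class annotation; a minimal sketch of such an Ingress (host and backend names are illustrative, not necessarily the contents of nginx-ingress.yaml):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # adopt the shared nginx controller and, with it, the IP acquired above
    kubernetes.io/ingress.class: \"nginx\"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80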
Retaining the IP \u00b6 You can test retention by deleting the Ingress: $ kubectl delete ing nginx-ingress ingress \"nginx-ingress\" deleted $ kubectl create -f nginx-ingress.yaml ingress \"nginx-ingress\" created $ kubectl get ing nginx-ingress NAME HOSTS ADDRESS PORTS AGE nginx-ingress * 104.154.109.191 80, 443 13m Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all Ingresses, because all requests are proxied through the same set of nginx controllers. Promote ephemeral to static IP \u00b6 To promote the allocated IP to static, you can update the Service manifest: $ kubectl patch svc nginx-ingress-lb -p '{\"spec\": {\"loadBalancerIP\": \"104.154.109.191\"}}' \"nginx-ingress-lb\" patched and promote the IP to static (promotion works differently across cloud providers; the example provided is for GKE/GCE): $ gcloud compute addresses create nginx-ingress-lb --addresses 104 .154.109.191 --region us-central1 Created [https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/nginx-ingress-lb]. --- address: 104.154.109.191 creationTimestamp: '2017-01-31T16:34:50.089-08:00' description: '' id: '5208037144487826373' kind: compute#address name: nginx-ingress-lb region: us-central1 selfLink: https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/nginx-ingress-lb status: IN_USE users: - us-central1/forwardingRules/a09f6913ae80e11e6a8c542010af0000 Now even if the Service is deleted, the IP will persist, so you can recreate the Service with spec.loadBalancerIP set to 104.154.109.191 .","title":"Static IPs"},{"location":"examples/static-ip/#static-ips","text":"This example demonstrates how to assign a static IP to an Ingress through the Nginx controller.","title":"Static IPs"},{"location":"examples/static-ip/#prerequisites","text":"You need a TLS cert and a test HTTP service for this example. You will also need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation , and that you have an ingress controller running in your cluster.","title":"Prerequisites"},{"location":"examples/static-ip/#acquiring-an-ip","text":"Since instances of the nginx controller actually run on nodes in your cluster, by default nginx Ingresses will only get static IPs if your cloud provider supports static IP assignments to nodes. On GKE/GCE for example, even though nodes get static IPs, the IPs are not retained across upgrades. To acquire a static IP for the nginx ingress controller, simply put it behind a Service of Type=LoadBalancer . First, create a loadbalancer Service and wait for it to acquire an IP: $ kubectl create -f static-ip-svc.yaml service \"nginx-ingress-lb\" created $ kubectl get svc nginx-ingress-lb NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-ingress-lb 10.0.138.113 104.154.109.191 80:31457/TCP,443:32240/TCP 15m then, update the ingress controller so it adopts the static IP of the Service by passing the --publish-service flag (the example yaml used in the next step already has it set to \"nginx-ingress-lb\"). 
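In the controller manifest this flag is just another container argument; a minimal sketch of the relevant fragment (assuming the Service created above lives in the default namespace, all other arguments elided):
containers:
- name: nginx-ingress-controller
  args:
  - /nginx-ingress-controller
  # mirror the address of this Service's endpoints into Ingress status
  - --publish-service=default/nginx-ingress-lb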
$ kubectl create -f nginx-ingress-controller.yaml deployment \"nginx-ingress-controller\" created","title":"Acquiring an IP"},{"location":"examples/static-ip/#assigning-the-ip-to-an-ingress","text":"From here on every Ingress created with the ingress.class annotation set to nginx will get the IP allocated in the previous step: $ kubectl create -f nginx-ingress.yaml ingress \"nginx-ingress\" created $ kubectl get ing nginx-ingress NAME HOSTS ADDRESS PORTS AGE nginx-ingress * 104.154.109.191 80, 443 13m $ curl 104 .154.109.191 -kL CLIENT VALUES: client_address=10.180.1.25 command=GET real path=/ query=nil request_version=1.1 request_uri=http://104.154.109.191:8080/ ...","title":"Assigning the IP to an Ingress"},{"location":"examples/static-ip/#retaining-the-ip","text":"You can test retention by deleting the Ingress: $ kubectl delete ing nginx-ingress ingress \"nginx-ingress\" deleted $ kubectl create -f nginx-ingress.yaml ingress \"nginx-ingress\" created $ kubectl get ing nginx-ingress NAME HOSTS ADDRESS PORTS AGE nginx-ingress * 104.154.109.191 80, 443 13m Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all Ingresses, because all requests are proxied through the same set of nginx controllers.","title":"Retaining the IP"},{"location":"examples/static-ip/#promote-ephemeral-to-static-ip","text":"To promote the allocated IP to static, you can update the Service manifest: $ kubectl patch svc nginx-ingress-lb -p '{\"spec\": {\"loadBalancerIP\": \"104.154.109.191\"}}' \"nginx-ingress-lb\" patched and promote the IP to static (promotion works differently across cloud providers; the example provided is for GKE/GCE): $ gcloud compute addresses create nginx-ingress-lb --addresses 104 .154.109.191 --region us-central1 Created [https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/nginx-ingress-lb]. --- address: 104.154.109.191 creationTimestamp: '2017-01-31T16:34:50.089-08:00' description: '' id: '5208037144487826373' kind: compute#address name: nginx-ingress-lb region: us-central1 selfLink: https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/nginx-ingress-lb status: IN_USE users: - us-central1/forwardingRules/a09f6913ae80e11e6a8c542010af0000 Now even if the Service is deleted, the IP will persist, so you can recreate the Service with spec.loadBalancerIP set to 104.154.109.191 .","title":"Promote ephemeral to static IP"},{"location":"examples/tls-termination/","text":"TLS termination \u00b6 This example demonstrates how to terminate TLS through the nginx Ingress controller. Prerequisites \u00b6 You need a TLS cert and a test HTTP service for this example. Deployment \u00b6 Create an ingress.yaml file: apiVersion : extensions/v1beta1 kind : Ingress metadata : name : nginx-test spec : tls : - hosts : - foo.bar.com # This assumes tls-secret exists and the SSL # certificate contains a CN for foo.bar.com secretName : tls-secret rules : - host : foo.bar.com http : paths : - path : / backend : # This assumes http-svc exists and routes to healthy endpoints serviceName : http-svc servicePort : 80 The following command instructs the controller to terminate traffic using the provided TLS cert, and forward unencrypted HTTP traffic to the test HTTP service: kubectl apply -f ingress.yaml Validation \u00b6 You can confirm that the Ingress works. 
$ kubectl describe ing nginx-test Name: nginx-test Namespace: default Address: 104.198.183.6 Default backend: default-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080) TLS: tls-secret terminates Rules: Host Path Backends ---- ---- -------- * http-svc:80 () Annotations: Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 7s 7s 1 {nginx-ingress-controller } Normal CREATE default/nginx-test 7s 7s 1 {nginx-ingress-controller } Normal UPDATE default/nginx-test 7s 7s 1 {nginx-ingress-controller } Normal CREATE ip: 104.198.183.6 7s 7s 1 {nginx-ingress-controller } Warning MAPPING Ingress rule 'default/nginx-test' contains no path definition. Assuming / $ curl 104 .198.183.6 -L curl: (60) SSL certificate problem: self signed certificate More details here: http://curl.haxx.se/docs/sslcerts.html $ curl 104 .198.183.6 -Lk CLIENT VALUES: client_address=10.240.0.4 command=GET real path=/ query=nil request_version=1.1 request_uri=http://35.186.221.137:8080/ SERVER VALUES: server_version=nginx: 1.9.11 - lua: 10001 HEADERS RECEIVED: accept=*/* connection=Keep-Alive host=35.186.221.137 user-agent=curl/7.46.0 via=1.1 google x-cloud-trace-context=f708ea7e369d4514fc90d51d7e27e91d/13322322294276298106 x-forwarded-for=104.132.0.80, 35.186.221.137 x-forwarded-proto=https BODY:","title":"TLS termination"},{"location":"examples/tls-termination/#tls-termination","text":"This example demonstrates how to terminate TLS through the nginx Ingress controller.","title":"TLS termination"},{"location":"examples/tls-termination/#prerequisites","text":"You need a TLS cert and a test HTTP service for this example.","title":"Prerequisites"},{"location":"examples/tls-termination/#deployment","text":"Create a values.yaml file. apiVersion : extensions/v1beta1 kind : Ingress metadata : name : nginx-test spec : tls : - hosts : - foo.bar.com # This assumes tls-secret exists and the SSL # certificate contains a CN for foo.bar.com secretName : tls-secret rules : - host : foo.bar.com http : paths : - path : / backend : # This assumes http-svc exists and routes to healthy endpoints serviceName : http-svc servicePort : 80 The following command instructs the controller to terminate traffic using the provided TLS cert, and forward un-encrypted HTTP traffic to the test HTTP service. kubectl apply -f ingress.yaml","title":"Deployment"},{"location":"examples/tls-termination/#validation","text":"You can confirm that the Ingress works. $ kubectl describe ing nginx-test Name: nginx-test Namespace: default Address: 104.198.183.6 Default backend: default-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080) TLS: tls-secret terminates Rules: Host Path Backends ---- ---- -------- * http-svc:80 () Annotations: Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 7s 7s 1 {nginx-ingress-controller } Normal CREATE default/nginx-test 7s 7s 1 {nginx-ingress-controller } Normal UPDATE default/nginx-test 7s 7s 1 {nginx-ingress-controller } Normal CREATE ip: 104.198.183.6 7s 7s 1 {nginx-ingress-controller } Warning MAPPING Ingress rule 'default/nginx-test' contains no path definition. 
Assuming / $ curl 104 .198.183.6 -L curl: (60) SSL certificate problem: self signed certificate More details here: http://curl.haxx.se/docs/sslcerts.html $ curl 104 .198.183.6 -Lk CLIENT VALUES: client_address=10.240.0.4 command=GET real path=/ query=nil request_version=1.1 request_uri=http://35.186.221.137:8080/ SERVER VALUES: server_version=nginx: 1.9.11 - lua: 10001 HEADERS RECEIVED: accept=*/* connection=Keep-Alive host=35.186.221.137 user-agent=curl/7.46.0 via=1.1 google x-cloud-trace-context=f708ea7e369d4514fc90d51d7e27e91d/13322322294276298106 x-forwarded-for=104.132.0.80, 35.186.221.137 x-forwarded-proto=https BODY:","title":"Validation"},{"location":"user-guide/basic-usage/","text":"Basic usage - host based routing \u00b6 ingress-nginx can be used for many use cases, with various cloud providers, and supports a lot of configurations. In this section you can find a common usage scenario where a single load balancer powered by ingress-nginx will route traffic to 2 different HTTP backend services based on the host name. First of all, follow the instructions to install ingress-nginx. Then imagine that you need to expose 2 HTTP services already installed: myServiceA , myServiceB . Let's say that you want to expose the first at myServiceA.foo.org and the second at myServiceB.foo.org . One possible solution is to create two ingress resources: apiVersion : extensions / v1beta1 kind : Ingress metadata : name : ingress - myServiceA annotations : # use the shared ingress - nginx kubernetes . io / ingress . class : \"nginx\" spec : rules : - host : myServiceA . foo . org http : paths : - path : / backend : serviceName : myServiceA servicePort : 80 --- apiVersion : extensions / v1beta1 kind : Ingress metadata : name : ingress - myServiceB annotations : # use the shared ingress - nginx kubernetes . io / ingress . class : \"nginx\" spec : rules : - host : myServiceB . foo . org http : paths : - path : / backend : serviceName : myServiceB servicePort : 80 When you apply this yaml, 2 ingress resources will be created, managed by the ingress-nginx instance. Nginx is configured to automatically discover all Ingresses with the kubernetes.io/ingress.class: \"nginx\" annotation. Please note that the ingress resource should be placed in the same namespace as the backend resource. On many cloud providers ingress-nginx will also create the corresponding Load Balancer resource. All you have to do is get the external IP and add a DNS A record inside your DNS provider that points myServiceA.foo.org and myServiceB.foo.org to the nginx external IP. Get the external IP by running: kubectl get services -n ingress-nginx","title":"Basic usage"},{"location":"user-guide/basic-usage/#basic-usage-host-based-routing","text":"ingress-nginx can be used for many use cases, with various cloud providers, and supports a lot of configurations. In this section you can find a common usage scenario where a single load balancer powered by ingress-nginx will route traffic to 2 different HTTP backend services based on the host name. First of all, follow the instructions to install ingress-nginx. Then imagine that you need to expose 2 HTTP services already installed: myServiceA , myServiceB . Let's say that you want to expose the first at myServiceA.foo.org and the second at myServiceB.foo.org . One possible solution is to create two ingress resources: apiVersion : extensions / v1beta1 kind : Ingress metadata : name : ingress - myServiceA annotations : # use the shared ingress - nginx kubernetes . io / ingress . 
class : \"nginx\" spec : rules : - host : myServiceA . foo . org http : paths : - path : / backend : serviceName : myServiceA servicePort : 80 --- apiVersion : extensions / v1beta1 kind : Ingress metadata : name : ingress - myServiceB annotations : # use the shared ingress - nginx kubernetes . io / ingress . class : \"nginx\" spec : rules : - host : myServiceB . foo . org http : paths : - path : / backend : serviceName : myServiceB servicePort : 80 When you apply this yaml, 2 ingress resources will be created managed by the ingress-nginx instance. Nginx is configured to automatically discover all ingress with the kubernetes.io/ingress.class: \"nginx\" annotation. Please note that the ingress resource should be placed inside the same namespace of the backend resource. On many cloud providers ingress-nginx will also create the corresponding Load Balancer resource. All you have to do is get the external IP and add a DNS A record inside your DNS provider that point myServiceA.foo.org and myServiceB.foo.org to the nginx external IP. Get the external IP by running: kubectl get services -n ingress-nginx","title":"Basic usage - host based routing"},{"location":"user-guide/cli-arguments/","text":"Command line arguments \u00b6 The following command line arguments are accepted by the Ingress controller executable. They are set in the container spec of the nginx-ingress-controller Deployment manifest Argument Description --alsologtostderr log to standard error as well as files --annotations-prefix string Prefix of the Ingress annotations specific to the NGINX controller. (default \"nginx.ingress.kubernetes.io\") --apiserver-host string Address of the Kubernetes API server. Takes the form \"protocol://address:port\". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted. --configmap string Name of the ConfigMap containing custom global configurations for the controller. --default-backend-service string Service used to serve HTTP requests not matching any known server name (catch-all). Takes the form \"namespace/name\". The controller configures NGINX to forward requests to the first port of this Service. If not specified, a 404 page will be returned directly from NGINX. --default-server-port int When default-backend-service is not specified or specified service does not have any endpoint, a local endpoint with this port will be used to serve 404 page from inside Nginx. --default-ssl-certificate string Secret containing a SSL certificate to be used by the default HTTPS server (catch-all). Takes the form \"namespace/name\". --disable-catch-all Disable support for catch-all Ingresses. --election-id string Election id to use for Ingress status updates. (default \"ingress-controller-leader\") --enable-dynamic-certificates Dynamically serves certificates instead of reloading NGINX when certificates are created, updated, or deleted. Currently does not support OCSP stapling, so --enable-ssl-chain-completion must be turned off (default behaviour). Assuming the certificate is generated with a 2048 bit RSA key/cert pair, this feature can store roughly 5000 certificates. (enabled by default) --enable-ssl-chain-completion Autocomplete SSL certificate chains with missing intermediate CA certificates. A valid certificate chain is required to enable OCSP stapling. Certificates uploaded to Kubernetes must have the \"Authority Information Access\" X.509 v3 extension for this to succeed. (default true) --enable-ssl-passthrough Enable SSL Passthrough. 
--health-check-path string URL path of the health check endpoint. Configured inside the NGINX status server. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default \"/healthz\") --health-check-timeout duration Time limit, in seconds, for a probe to health-check-path to succeed. (default 10) --healthz-port int Port to use for the healthz endpoint. (default 10254) --http-port int Port to use for servicing HTTP traffic. (default 80) --https-port int Port to use for servicing HTTPS traffic. (default 443) --ingress-class string Name of the ingress class this controller satisfies. The class of an Ingress object is set using the annotation \"kubernetes.io/ingress.class\". All ingress classes are satisfied if this parameter is left empty. --kubeconfig string Path to a kubeconfig file containing authorization and API server information. --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files (default true) --profiling Enable profiling via web interface host:port/debug/pprof/ (default true) --publish-service string Service fronting the Ingress controller. Takes the form \"namespace/name\". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies. --publish-status-address string Customized address to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter. --report-node-internal-ip-address Set the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter. --ssl-passthrough-proxy-port int Port to use internally for SSL Passthrough. (default 442) --stderrthreshold severity logs at or above this threshold go to stderr (default 2) --sync-period duration Period at which the controller forces the repopulation of its local object stores. Disabled by default. --sync-rate-limit float32 Define the sync frequency upper limit (default 0.3) --tcp-services-configmap string Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic. --udp-services-configmap string Name of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port name or number. --update-status Update the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true) --update-status-on-shutdown Update the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. (default true) -v , --v Level log level for V logs --version Show release information about the NGINX Ingress controller and exit. --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging --watch-namespace string Namespace the controller watches for updates to Kubernetes objects. 
This includes Ingresses, Services and all configuration resources. All namespaces are watched if this parameter is left empty.","title":"Command line arguments"},{"location":"user-guide/cli-arguments/#command-line-arguments","text":"The following command line arguments are accepted by the Ingress controller executable. They are set in the container spec of the nginx-ingress-controller Deployment manifest Argument Description --alsologtostderr log to standard error as well as files --annotations-prefix string Prefix of the Ingress annotations specific to the NGINX controller. (default \"nginx.ingress.kubernetes.io\") --apiserver-host string Address of the Kubernetes API server. Takes the form \"protocol://address:port\". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted. --configmap string Name of the ConfigMap containing custom global configurations for the controller. --default-backend-service string Service used to serve HTTP requests not matching any known server name (catch-all). Takes the form \"namespace/name\". The controller configures NGINX to forward requests to the first port of this Service. If not specified, a 404 page will be returned directly from NGINX. --default-server-port int When default-backend-service is not specified or specified service does not have any endpoint, a local endpoint with this port will be used to serve 404 page from inside Nginx. --default-ssl-certificate string Secret containing a SSL certificate to be used by the default HTTPS server (catch-all). Takes the form \"namespace/name\". --disable-catch-all Disable support for catch-all Ingresses. --election-id string Election id to use for Ingress status updates. (default \"ingress-controller-leader\") --enable-dynamic-certificates Dynamically serves certificates instead of reloading NGINX when certificates are created, updated, or deleted. Currently does not support OCSP stapling, so --enable-ssl-chain-completion must be turned off (default behaviour). Assuming the certificate is generated with a 2048 bit RSA key/cert pair, this feature can store roughly 5000 certificates. (enabled by default) --enable-ssl-chain-completion Autocomplete SSL certificate chains with missing intermediate CA certificates. A valid certificate chain is required to enable OCSP stapling. Certificates uploaded to Kubernetes must have the \"Authority Information Access\" X.509 v3 extension for this to succeed. (default true) --enable-ssl-passthrough Enable SSL Passthrough. --health-check-path string URL path of the health check endpoint. Configured inside the NGINX status server. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default \"/healthz\") --health-check-timeout duration Time limit, in seconds, for a probe to health-check-path to succeed. (default 10) --healthz-port int Port to use for the healthz endpoint. (default 10254) --http-port int Port to use for servicing HTTP traffic. (default 80) --https-port int Port to use for servicing HTTPS traffic. (default 443) --ingress-class string Name of the ingress class this controller satisfies. The class of an Ingress object is set using the annotation \"kubernetes.io/ingress.class\". All ingress classes are satisfied if this parameter is left empty. --kubeconfig string Path to a kubeconfig file containing authorization and API server information. 
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files (default true) --profiling Enable profiling via web interface host:port/debug/pprof/ (default true) --publish-service string Service fronting the Ingress controller. Takes the form \"namespace/name\". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies. --publish-status-address string Customized address to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter. --report-node-internal-ip-address Set the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter. --ssl-passthrough-proxy-port int Port to use internally for SSL Passthrough. (default 442) --stderrthreshold severity logs at or above this threshold go to stderr (default 2) --sync-period duration Period at which the controller forces the repopulation of its local object stores. Disabled by default. --sync-rate-limit float32 Define the sync frequency upper limit (default 0.3) --tcp-services-configmap string Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic. --udp-services-configmap string Name of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port name or number. --update-status Update the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true) --update-status-on-shutdown Update the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. (default true) -v , --v Level log level for V logs --version Show release information about the NGINX Ingress controller and exit. --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging --watch-namespace string Namespace the controller watches for updates to Kubernetes objects. This includes Ingresses, Services and all configuration resources. All namespaces are watched if this parameter is left empty.","title":"Command line arguments"},{"location":"user-guide/custom-errors/","text":"Custom errors \u00b6 When the custom-http-errors option is enabled, the Ingress controller configures NGINX so that it passes several HTTP headers down to its default-backend in case of error: Header Value X-Code HTTP status code returned by the request X-Format Value of the Accept header sent by the client X-Original-URI URI that caused the error X-Namespace Namespace where the backend Service is located X-Ingress-Name Name of the Ingress where the backend is defined X-Service-Name Name of the Service backing the backend X-Service-Port Port number of the Service backing the backend A custom error backend can use this information to return the best possible representation of an error page. 
For example, if the value of the Accept header sent by the client was application/json , a carefully crafted backend could decide to return the error payload as a JSON document instead of HTML. Important The custom backend is expected to return the correct HTTP status code instead of 200 . NGINX does not change the response from the custom default backend. An example of such a custom backend is available inside the source repository at images/custom-error-pages . See also the Custom errors example.","title":"Custom errors"},{"location":"user-guide/custom-errors/#custom-errors","text":"When the custom-http-errors option is enabled, the Ingress controller configures NGINX so that it passes several HTTP headers down to its default-backend in case of error: Header Value X-Code HTTP status code returned by the request X-Format Value of the Accept header sent by the client X-Original-URI URI that caused the error X-Namespace Namespace where the backend Service is located X-Ingress-Name Name of the Ingress where the backend is defined X-Service-Name Name of the Service backing the backend X-Service-Port Port number of the Service backing the backend A custom error backend can use this information to return the best possible representation of an error page. For example, if the value of the Accept header sent by the client was application/json , a carefully crafted backend could decide to return the error payload as a JSON document instead of HTML. Important The custom backend is expected to return the correct HTTP status code instead of 200 . NGINX does not change the response from the custom default backend. An example of such a custom backend is available inside the source repository at images/custom-error-pages . See also the Custom errors example.","title":"Custom errors"},{"location":"user-guide/default-backend/","text":"Default backend \u00b6 The default backend is a service which handles all URL paths and hosts the nginx controller doesn't understand (i.e., all the requests that are not mapped with an Ingress). Basically, a default backend exposes two URLs: /healthz that returns 200 / that returns 404 Example The sub-directory /images/404-server provides a service which satisfies the requirements for a default backend. Example The sub-directory /images/custom-error-pages provides an additional service for the purpose of customizing the error pages served via the default backend.","title":"Default backend"},{"location":"user-guide/default-backend/#default-backend","text":"The default backend is a service which handles all URL paths and hosts the nginx controller doesn't understand (i.e., all the requests that are not mapped with an Ingress). Basically, a default backend exposes two URLs: /healthz that returns 200 / that returns 404 Example The sub-directory /images/404-server provides a service which satisfies the requirements for a default backend. Example The sub-directory /images/custom-error-pages provides an additional service for the purpose of customizing the error pages served via the default backend.","title":"Default backend"},{"location":"user-guide/exposing-tcp-udp-services/","text":"Exposing TCP and UDP services \u00b6 Ingress does not support TCP or UDP services. 
For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY] It is also possible to use a number or the name of the port. The last two fields are optional. By adding PROXY in either or both of the last two fields, we can use Proxy Protocol decoding (listen) and/or encoding (proxy_pass) in a TCP service: https://www.nginx.com/resources/admin-guide/proxy-protocol The next example shows how to expose the service example-go running in the namespace default on port 8080 , using port 9000 : apiVersion : v1 kind : ConfigMap metadata : name : tcp-services namespace : ingress-nginx data : 9000 : \"default/example-go:8080\" Since 1.9.13 NGINX provides UDP Load Balancing . The next example shows how to expose the service kube-dns running in the namespace kube-system on port 53 , using port 53 : apiVersion : v1 kind : ConfigMap metadata : name : udp-services namespace : ingress-nginx data : 53 : \"kube-system/kube-dns:53\" If TCP/UDP proxy support is used, then those ports need to be exposed in the Service defined for the Ingress. apiVersion : v1 kind : Service metadata : name : ingress-nginx namespace : ingress-nginx labels : app.kubernetes.io/name : ingress-nginx app.kubernetes.io/part-of : ingress-nginx spec : type : LoadBalancer ports : - name : http port : 80 targetPort : 80 protocol : TCP - name : https port : 443 targetPort : 443 protocol : TCP - name : proxied-tcp-9000 port : 9000 targetPort : 9000 protocol : TCP selector : app.kubernetes.io/name : ingress-nginx app.kubernetes.io/part-of : ingress-nginx","title":"Exposing TCP and UDP services"},{"location":"user-guide/exposing-tcp-udp-services/#exposing-tcp-and-udp-services","text":"Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY] It is also possible to use a number or the name of the port. The last two fields are optional. By adding PROXY in either or both of the last two fields, we can use Proxy Protocol decoding (listen) and/or encoding (proxy_pass) in a TCP service: https://www.nginx.com/resources/admin-guide/proxy-protocol The next example shows how to expose the service example-go running in the namespace default on port 8080 , using port 9000 : apiVersion : v1 kind : ConfigMap metadata : name : tcp-services namespace : ingress-nginx data : 9000 : \"default/example-go:8080\" Since 1.9.13 NGINX provides UDP Load Balancing . The next example shows how to expose the service kube-dns running in the namespace kube-system on port 53 , using port 53 : apiVersion : v1 kind : ConfigMap metadata : name : udp-services namespace : ingress-nginx data : 53 : \"kube-system/kube-dns:53\" If TCP/UDP proxy support is used, then those ports need to be exposed in the Service defined for the Ingress. 
apiVersion : v1 kind : Service metadata : name : ingress-nginx namespace : ingress-nginx labels : app.kubernetes.io/name : ingress-nginx app.kubernetes.io/part-of : ingress-nginx spec : type : LoadBalancer ports : - name : http port : 80 targetPort : 80 protocol : TCP - name : https port : 443 targetPort : 443 protocol : TCP - name : proxied-tcp-9000 port : 9000 targetPort : 9000 protocol : TCP selector : app.kubernetes.io/name : ingress-nginx app.kubernetes.io/part-of : ingress-nginx","title":"Exposing TCP and UDP services"},{"location":"user-guide/external-articles/","text":"External Articles \u00b6 Pain(less) NGINX Ingress Accessing Kubernetes Pods from Outside of the Cluster Kubernetes - Redirect HTTP to HTTPS with ELB and the nginx ingress controller Configure Nginx Ingress Controller for TLS termination on Kubernetes on Azure","title":"External Articles"},{"location":"user-guide/external-articles/#external-articles","text":"Pain(less) NGINX Ingress Accessing Kubernetes Pods from Outside of the Cluster Kubernetes - Redirect HTTP to HTTPS with ELB and the nginx ingress controller Configure Nginx Ingress Controller for TLS termination on Kubernetes on Azure","title":"External Articles"},{"location":"user-guide/ingress-path-matching/","text":"Ingress Path Matching \u00b6 Regular Expression Support \u00b6 Important Regular expressions and wild cards are not supported in the spec.rules.host field. Full hostnames must be used. The ingress controller supports case insensitive regular expressions in the spec.rules.http.paths.path field. See the description of the use-regex annotation for more details. apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress annotations : nginx.ingress.kubernetes.io/use-regex : \"true\" spec : rules : - host : test.com http : paths : - path : /foo/.* backend : serviceName : test servicePort : 80 The preceding ingress definition would translate to the following location block within the NGINX configuration for the test.com server: location ~* \"^/foo/.*\" { ... } Path Priority \u00b6 In NGINX, regular expressions follow a first match policy. In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks. Please read the warning before using regular expressions in your ingress definitions. Example \u00b6 Let the following two ingress definitions be created: apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress-1 spec : rules : - host : test.com http : paths : - path : /foo/bar backend : serviceName : service1 servicePort : 80 - path : /foo/bar/ backend : serviceName : service2 servicePort : 80 apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress-2 annotations : nginx.ingress.kubernetes.io/rewrite-target : /$1 spec : rules : - host : test.com http : paths : - path : /foo/bar/(.+) backend : serviceName : service3 servicePort : 80 The ingress controller would define the following location blocks, in order of descending length, within the NGINX template for the test.com server: location ~* ^/foo/bar/.+ { ... } location ~* \"^/foo/bar/\" { ... } location ~* \"^/foo/bar\" { ... } The following request URI's would match the corresponding location blocks: test.com/foo/bar/1 matches ~* ^/foo/bar/.+ and will go to service 3. test.com/foo/bar/ matches ~* ^/foo/bar/ and will go to service 2. test.com/foo/bar matches ~* ^/foo/bar and will go to service 1. 
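To check which location blocks the controller actually generated for a host, you can dump the rendered configuration from a running controller pod (pod name and namespace are illustrative):
kubectl exec -n ingress-nginx nginx-ingress-controller-873061567-4n3k2 -- cat /etc/nginx/nginx.conf | grep location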
IMPORTANT NOTES : If the use-regex OR rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on. Warning \u00b6 The following example describes a case that may cause unwanted path matching behaviour. This case is expected and a result of NGINX's first match policy for paths that use the regular expression location modifier . For more information about how a path is chosen, please read the following article: \"Understanding Nginx Server and Location Block Selection Algorithms\" . Example \u00b6 Let the following ingress be defined: apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress-3 annotations : nginx.ingress.kubernetes.io/use-regex : \"true\" spec : rules : - host : test.com http : paths : - path : /foo/bar/bar backend : serviceName : test servicePort : 80 - path : /foo/bar/[A-Z0-9]{3} backend : serviceName : test servicePort : 80 The ingress controller would define the following location blocks (in this order) within the NGINX template for the test.com server: location ~* \"^/foo/bar/[A-Z0-9]{3}\" { ... } location ~* \"^/foo/bar/bar\" { ... } A request to test.com/foo/bar/bar would match the ^/foo/bar/[A-Z0-9]{3} location block instead of the longest EXACT matching path.","title":"Regular expressions in paths"},{"location":"user-guide/ingress-path-matching/#ingress-path-matching","text":"","title":"Ingress Path Matching"},{"location":"user-guide/ingress-path-matching/#regular-expression-support","text":"Important Regular expressions and wild cards are not supported in the spec.rules.host field. Full hostnames must be used. The ingress controller supports case insensitive regular expressions in the spec.rules.http.paths.path field. See the description of the use-regex annotation for more details. apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress annotations : nginx.ingress.kubernetes.io/use-regex : \"true\" spec : rules : - host : test.com http : paths : - path : /foo/.* backend : serviceName : test servicePort : 80 The preceding ingress definition would translate to the following location block within the NGINX configuration for the test.com server: location ~* \"^/foo/.*\" { ... }","title":"Regular Expression Support"},{"location":"user-guide/ingress-path-matching/#path-priority","text":"In NGINX, regular expressions follow a first match policy. In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks. 
Please read the warning before using regular expressions in your ingress definitions.","title":"Path Priority"},{"location":"user-guide/ingress-path-matching/#example","text":"Let the following two ingress definitions be created: apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress-1 spec : rules : - host : test.com http : paths : - path : /foo/bar backend : serviceName : service1 servicePort : 80 - path : /foo/bar/ backend : serviceName : service2 servicePort : 80 apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress-2 annotations : nginx.ingress.kubernetes.io/rewrite-target : /$1 spec : rules : - host : test.com http : paths : - path : /foo/bar/(.+) backend : serviceName : service3 servicePort : 80 The ingress controller would define the following location blocks, in order of descending length, within the NGINX template for the test.com server: location ~* ^/foo/bar/.+ { ... } location ~* \"^/foo/bar/\" { ... } location ~* \"^/foo/bar\" { ... } The following request URIs would match the corresponding location blocks: test.com/foo/bar/1 matches ~* ^/foo/bar/.+ and will go to service 3. test.com/foo/bar/ matches ~* ^/foo/bar/ and will go to service 2. test.com/foo/bar matches ~* ^/foo/bar and will go to service 1. IMPORTANT NOTES : If the use-regex OR rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.","title":"Example"},{"location":"user-guide/ingress-path-matching/#warning","text":"The following example describes a case that may cause unwanted path matching behaviour. This case is expected and is a result of NGINX's first match policy for paths that use the regular expression location modifier . For more information about how a path is chosen, please read the following article: \"Understanding Nginx Server and Location Block Selection Algorithms\" .","title":"Warning"},{"location":"user-guide/ingress-path-matching/#example_1","text":"Let the following ingress be defined: apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress-3 annotations : nginx.ingress.kubernetes.io/use-regex : \"true\" spec : rules : - host : test.com http : paths : - path : /foo/bar/bar backend : serviceName : test servicePort : 80 - path : /foo/bar/[A-Z0-9]{3} backend : serviceName : test servicePort : 80 The ingress controller would define the following location blocks (in this order) within the NGINX template for the test.com server: location ~* \"^/foo/bar/[A-Z0-9]{3}\" { ... } location ~* \"^/foo/bar/bar\" { ... } A request to test.com/foo/bar/bar would match the ^/foo/bar/[A-Z0-9]{3} location block instead of the longest EXACT matching path.","title":"Example"},{"location":"user-guide/miscellaneous/","text":"Miscellaneous \u00b6 Source IP address \u00b6 By default NGINX uses the content of the header X-Forwarded-For as the source of truth to get information about the client IP address. This works without issues in L7 if we configure the setting proxy-real-ip-cidr with the correct information of the IP/network address of the trusted external load balancer. If the ingress controller is running in AWS, we need to use the VPC IPv4 CIDR. Another option is to enable proxy protocol using use-proxy-protocol: \"true\" . In this mode NGINX does not use the content of the header to get the source IP address of the connection. 
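Both settings mentioned above are ConfigMap keys. A minimal sketch of the controller ConfigMap (the name, namespace, and CIDR are illustrative and must match your deployment):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration        # must match the controller's --configmap flag
  namespace: ingress-nginx
data:
  # trust X-Forwarded-For only when it comes from this network,
  # e.g. the VPC IPv4 CIDR when running on AWS
  proxy-real-ip-cidr: "10.0.0.0/16"
  # alternatively, rely on the Proxy Protocol instead of the header:
  # use-proxy-protocol: "true"
```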
Proxy Protocol \u00b6 If you are using an L4 proxy to forward the traffic to the NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you can use the Proxy Protocol for forwarding traffic; this sends the connection details before forwarding the actual TCP connection itself. Amongst others, ELBs in AWS and HAProxy support the Proxy Protocol. Websockets \u00b6 Support for websockets is provided by NGINX out of the box. No special configuration is required. The only requirement to avoid connections being closed is to increase the values of proxy-read-timeout and proxy-send-timeout . The default value of these settings is 60 seconds . A value higher than one hour ( 3600 ) is more adequate to support websockets. Important If the NGINX ingress controller is exposed with a service of type=LoadBalancer , make sure the protocol between the loadbalancer and NGINX is TCP. Optimizing TLS Time To First Byte (TTTFB) \u00b6 NGINX provides the configuration option ssl_buffer_size to allow the optimization of the TLS record size. This improves the TLS Time To First Byte (TTTFB). The default value in the Ingress controller is 4k (the NGINX default is 16k ). Retries in non-idempotent methods \u00b6 Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error. The previous behavior can be restored using retry-non-idempotent=true in the configuration ConfigMap. Limitations \u00b6 Ingress rules for TLS require the definition of the field host . Why endpoints and not services \u00b6 The NGINX ingress controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy and allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.","title":"Miscellaneous"},{"location":"user-guide/miscellaneous/#miscellaneous","text":"","title":"Miscellaneous"},{"location":"user-guide/miscellaneous/#source-ip-address","text":"By default NGINX uses the content of the header X-Forwarded-For as the source of truth to get information about the client IP address. This works without issues in L7 if we configure the setting proxy-real-ip-cidr with the correct information of the IP/network address of the trusted external load balancer. If the ingress controller is running in AWS, we need to use the VPC IPv4 CIDR. Another option is to enable proxy protocol using use-proxy-protocol: \"true\" . In this mode NGINX does not use the content of the header to get the source IP address of the connection.","title":"Source IP address"},{"location":"user-guide/miscellaneous/#proxy-protocol","text":"If you are using an L4 proxy to forward the traffic to the NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you can use the Proxy Protocol for forwarding traffic; this sends the connection details before forwarding the actual TCP connection itself. Amongst others, ELBs in AWS and HAProxy support the Proxy Protocol.","title":"Proxy Protocol"},{"location":"user-guide/miscellaneous/#websockets","text":"Support for websockets is provided by NGINX out of the box. No special configuration is required. The only requirement to avoid connections being closed is to increase the values of proxy-read-timeout and proxy-send-timeout . The default value of these settings is 60 seconds . A value higher than one hour ( 3600 ) is more adequate to support websockets. 
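For example, a minimal sketch of an Ingress raising both timeouts to one hour for a websocket backend (host and service names are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: websocket-example          # illustrative name
  annotations:
    # keep idle websocket connections open for up to one hour
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  rules:
  - host: ws.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: websocket-service
          servicePort: 80
```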
Important If the NGINX ingress controller is exposed with a service type=LoadBalancer make sure the protocol between the loadbalancer and NGINX is TCP.","title":"Websockets"},{"location":"user-guide/miscellaneous/#optimizing-tls-time-to-first-byte-tttfb","text":"NGINX provides the configuration option ssl_buffer_size to allow the optimization of the TLS record size. This improves the TLS Time To First Byte (TTTFB). The default value in the Ingress controller is 4k (NGINX default is 16k ).","title":"Optimizing TLS Time To First Byte (TTTFB)"},{"location":"user-guide/miscellaneous/#retries-in-non-idempotent-methods","text":"Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error. The previous behavior can be restored using retry-non-idempotent=true in the configuration ConfigMap.","title":"Retries in non-idempotent methods"},{"location":"user-guide/miscellaneous/#limitations","text":"Ingress rules for TLS require the definition of the field host","title":"Limitations"},{"location":"user-guide/miscellaneous/#why-endpoints-and-not-services","text":"The NGINX ingress controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.","title":"Why endpoints and not services"},{"location":"user-guide/monitoring/","text":"Prometheus and Grafana installation \u00b6 This tutorial will show you how to install Prometheus and Grafana for scraping the metrics of the NGINX Ingress controller. Important This example uses emptyDir volumes for Prometheus and Grafana. This means once the pod gets terminated you will lose all the data. Before You Begin \u00b6 The NGINX Ingress controller should already be deployed according to the deployment instructions here . Note that the yaml files used in this tutorial are stored in the deploy/monitoring folder of the GitHub repository kubernetes/ingress-nginx . Deploy and configure Prometheus Server \u00b6 The Prometheus server must be configured so that it can discover endpoints of services. If a Prometheus server is already running in the cluster and if it is configured in a way that it can find the ingress controller pods, no extra configuration is needed. If there is no existing Prometheus server running, the rest of this tutorial will guide you through the steps needed to deploy a properly configured Prometheus server. 
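The configuration.yaml applied in the next step provides such a configuration. Purely as an illustration of what service-discovery-based scraping looks like, a minimal Prometheus scrape block might resemble the following sketch (the job name and annotation convention are assumptions, not the exact shipped file):

```yaml
scrape_configs:
- job_name: kubernetes-pods        # illustrative job name
  kubernetes_sd_configs:
  - role: pod                      # discover every pod via the Kubernetes API
  relabel_configs:
  # keep only pods that opt in through the prometheus.io/scrape annotation
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"
```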
Running the following command deploys the Prometheus configuration in Kubernetes: kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/monitoring/configuration.yaml configmap \"prometheus-configuration\" created Running the following command deploys Prometheus in Kubernetes: kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/monitoring/prometheus.yaml clusterrole \"prometheus-server\" created serviceaccount \"prometheus-server\" created clusterrolebinding \"prometheus-server\" created deployment \"prometheus-server\" created service \"prometheus-server\" created Prometheus Dashboard \u00b6 Open the Prometheus dashboard in a web browser: kubectl get svc -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default-http-backend ClusterIP 10.103.59.201 80/TCP 3d ingress-nginx NodePort 10.97.44.72 80:30100/TCP,443:30154/TCP,10254:32049/TCP 5h prometheus-server NodePort 10.98.233.86 9090:32630/TCP 1m Obtain the IP address of the nodes in the running cluster: kubectl get nodes -o wide In some cases where the nodes only have internal IP addresses, we need to execute: kubectl get nodes --selector=kubernetes.io/role!=master -o jsonpath={.items[*].status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address} 10.192.0.2 10.192.0.3 10.192.0.4 Open your browser and visit the following URL: http://{node IP address}:{prometheus-svc-nodeport} to load the Prometheus Dashboard. According to the above example, this URL will be http://10.192.0.3:32630 Grafana \u00b6 kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/monitoring/grafana.yaml kubectl get svc -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default-http-backend ClusterIP 10.103.59.201 80/TCP 3d ingress-nginx NodePort 10.97.44.72 80:30100/TCP,443:30154/TCP,10254:32049/TCP 5h prometheus-server NodePort 10.98.233.86 9090:32630/TCP 10m grafana NodePort 10.98.233.87 3000:31086/TCP 10m Open your browser and visit the following URL: http://{node IP address}:{grafana-svc-nodeport} to load the Grafana Dashboard. According to the above example, this URL will be http://10.192.0.3:31086 The username and password are admin . After logging in you can import the Grafana dashboard from https://github.com/kubernetes/ingress-nginx/tree/master/deploy/grafana/dashboards","title":"Prometheus and Grafana installation"},{"location":"user-guide/monitoring/#prometheus-and-grafana-installation","text":"This tutorial will show you how to install Prometheus and Grafana for scraping the metrics of the NGINX Ingress controller. Important This example uses emptyDir volumes for Prometheus and Grafana. This means that once the pod gets terminated you will lose all the data.","title":"Prometheus and Grafana installation"},{"location":"user-guide/monitoring/#before-you-begin","text":"The NGINX Ingress controller should already be deployed according to the deployment instructions here . Note that the yaml files used in this tutorial are stored in the deploy/monitoring folder of the GitHub repository kubernetes/ingress-nginx .","title":"Before You Begin"},{"location":"user-guide/monitoring/#deploy-and-configure-prometheus-server","text":"The Prometheus server must be configured so that it can discover endpoints of services. If a Prometheus server is already running in the cluster and if it is configured in a way that it can find the ingress controller pods, no extra configuration is needed. 
If there is no existing Prometheus server running, the rest of this tutorial will guide you through the steps needed to deploy a properly configured Prometheus server. Running the following command deploys the prometheus configuration in Kubernetes: kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/monitoring/configuration.yaml configmap \"prometheus-configuration\" created Running the following command deploys prometheus in Kubernetes: kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/monitoring/prometheus.yaml clusterrole \"prometheus-server\" created serviceaccount \"prometheus-server\" created clusterrolebinding \"prometheus-server\" created deployment \"prometheus-server\" created service \"prometheus-server\" created","title":"Deploy and configure Prometheus Server"},{"location":"user-guide/monitoring/#prometheus-dashboard","text":"Open Prometheus dashboard in a web browser: kubectl get svc -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default-http-backend ClusterIP 10.103.59.201 80/TCP 3d ingress-nginx NodePort 10.97.44.72 80:30100/TCP,443:30154/TCP,10254:32049/TCP 5h prometheus-server NodePort 10.98.233.86 9090:32630/TCP 1m Obtain the IP address of the nodes in the running cluster: kubectl get nodes -o wide In some cases where the node only have internal IP addresses we need to execute: kubectl get nodes --selector=kubernetes.io/role!=master -o jsonpath={.items[*].status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address} 10.192.0.2 10.192.0.3 10.192.0.4 Open your browser and visit the following URL: http://{node IP address}:{prometheus-svc-nodeport} to load the Prometheus Dashboard. According to the above example, this URL will be http://10.192.0.3:32630","title":"Prometheus Dashboard"},{"location":"user-guide/monitoring/#grafana","text":"kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/monitoring/grafana.yaml kubectl get svc -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default-http-backend ClusterIP 10.103.59.201 80/TCP 3d ingress-nginx NodePort 10.97.44.72 80:30100/TCP,443:30154/TCP,10254:32049/TCP 5h prometheus-server NodePort 10.98.233.86 9090:32630/TCP 10m grafana NodePort 10.98.233.87 3000:31086/TCP 10m Open your browser and visit the following URL: http://{node IP address}:{grafana-svc-nodeport} to load the Grafana Dashboard. According to the above example, this URL will be http://10.192.0.3:31086 The username and password is admin After the login you can import the Grafana dashboard from https://github.com/kubernetes/ingress-nginx/tree/master/deploy/grafana/dashboards","title":"Grafana"},{"location":"user-guide/multiple-ingress/","text":"Multiple Ingress controllers \u00b6 If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress such as GKE, you need to specify the annotation kubernetes.io/ingress.class: \"nginx\" in all ingresses that you would like the ingress-nginx controller to claim. For instance, metadata : name : foo annotations : kubernetes.io/ingress.class : \"gce\" will target the GCE controller, forcing the nginx controller to ignore it, while an annotation like metadata : name : foo annotations : kubernetes.io/ingress.class : \"nginx\" will target the nginx controller, forcing the GCE controller to ignore it. 
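Put together, a complete (illustrative) Ingress claimed by the NGINX controller looks like this; the host and backend service are assumptions for the sake of the example:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo
  annotations:
    # claimed by ingress-nginx; the GCE controller ignores it
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: foo.example.com          # illustrative host
    http:
      paths:
      - path: /
        backend:
          serviceName: foo-service # illustrative backend
          servicePort: 80
```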
To reiterate, setting the annotation to any value which does not match a valid ingress class will force the NGINX Ingress controller to ignore your Ingress. If you are only running a single NGINX ingress controller, this can be achieved by setting the annotation to any value except \"nginx\" or an empty string. Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller. Multiple ingress-nginx controllers \u00b6 This mechanism also provides users the ability to run multiple NGINX ingress controllers (e.g. one which serves public traffic, one which serves \"internal\" traffic). To do this, the option --ingress-class must be changed to a value unique for the cluster within the definition of the replication controller. Here is a partial example: spec : template : spec : containers : - name : nginx-ingress-internal-controller args : - /nginx-ingress-controller - '--election-id=ingress-controller-leader-internal' - '--ingress-class=nginx-internal' - '--configmap=ingress/nginx-ingress-internal-controller' Important Deploying multiple Ingress controllers of different types (e.g., ingress-nginx & gce ) and not specifying a class annotation will result in both or all controllers fighting to satisfy the Ingress, and all of them racing to update the Ingress status field in confusing ways. When running multiple ingress-nginx controllers, they will only process an unset class annotation if one of the controllers uses the default --ingress-class value (see the IsValid method in internal/ingress/annotations/class/main.go ); otherwise the class annotation becomes required.","title":"Multiple Ingress controllers"},{"location":"user-guide/multiple-ingress/#multiple-ingress-controllers","text":"If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress such as GKE, you need to specify the annotation kubernetes.io/ingress.class: \"nginx\" in all ingresses that you would like the ingress-nginx controller to claim. For instance, metadata : name : foo annotations : kubernetes.io/ingress.class : \"gce\" will target the GCE controller, forcing the nginx controller to ignore it, while an annotation like metadata : name : foo annotations : kubernetes.io/ingress.class : \"nginx\" will target the nginx controller, forcing the GCE controller to ignore it. To reiterate, setting the annotation to any value which does not match a valid ingress class will force the NGINX Ingress controller to ignore your Ingress. If you are only running a single NGINX ingress controller, this can be achieved by setting the annotation to any value except \"nginx\" or an empty string. Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.","title":"Multiple Ingress controllers"},{"location":"user-guide/multiple-ingress/#multiple-ingress-nginx-controllers","text":"This mechanism also provides users the ability to run multiple NGINX ingress controllers (e.g. one which serves public traffic, one which serves \"internal\" traffic). To do this, the option --ingress-class must be changed to a value unique for the cluster within the definition of the replication controller. 
Here is a partial example: spec : template : spec : containers : - name : nginx-ingress-internal-controller args : - /nginx-ingress-controller - '--election-id=ingress-controller-leader-internal' - '--ingress-class=nginx-internal' - '--configmap=ingress/nginx-ingress-internal-controller' Important Deploying multiple Ingress controllers of different types (e.g., ingress-nginx & gce ) and not specifying a class annotation will result in both or all controllers fighting to satisfy the Ingress, and all of them racing to update the Ingress status field in confusing ways. When running multiple ingress-nginx controllers, they will only process an unset class annotation if one of the controllers uses the default --ingress-class value (see the IsValid method in internal/ingress/annotations/class/main.go ); otherwise the class annotation becomes required.","title":"Multiple ingress-nginx controllers"},{"location":"user-guide/tls/","text":"TLS/HTTPS \u00b6 TLS Secrets \u00b6 Anytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret. You can generate a self-signed certificate and private key with: $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${ KEY_FILE } -out ${ CERT_FILE } -subj \"/CN= ${ HOST } /O= ${ HOST } \" Then create the secret in the cluster via: kubectl create secret tls ${ CERT_NAME } --key ${ KEY_FILE } --cert ${ CERT_FILE } The resulting secret will be of type kubernetes.io/tls . Default SSL Certificate \u00b6 NGINX provides the option to configure a server as a catch-all with server_name for requests that do not match any of the configured server names. This configuration works out-of-the-box for HTTP traffic. For HTTPS, a certificate is naturally required. For this reason the Ingress controller provides the flag --default-ssl-certificate . The secret referred to by this flag contains the default certificate to be used when accessing the catch-all server. If this flag is not provided NGINX will use a self-signed certificate. For instance, if you have a TLS secret foo-tls in the default namespace, add --default-ssl-certificate=default/foo-tls in the nginx-controller deployment. SSL Passthrough \u00b6 The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by default. This is required to enable passthrough backends in Ingress objects. Warning This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty. SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation, which requires compatible clients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back and forth between the backend and the client. If there is no hostname matching the requested host name, the request is handed over to NGINX on the configured passthrough proxy port (default: 442), which proxies the request to the default backend. Note Unlike HTTP backends, traffic to Passthrough backends is sent to the clusterIP of the backing Service instead of individual Endpoints. HTTP Strict Transport Security \u00b6 HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. 
Once a supported browser receives this header that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS. HSTS is enabled by default. To disable this behavior use hsts: \"false\" in the configuration ConfigMap . Server-side HTTPS enforcement through redirect \u00b6 By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress. This can be disabled globally using ssl-redirect: \"false\" in the NGINX config map , or per-Ingress with the nginx.ingress.kubernetes.io/ssl-redirect: \"false\" annotation in the particular resource. Tip When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: \"true\" annotation in the particular resource. Automated Certificate Management with Kube-Lego \u00b6 Tip Kube-Lego has reached end-of-life and is being replaced by cert-manager . Kube-Lego automatically requests missing or expired certificates from Let's Encrypt by monitoring ingress resources and their referenced secrets. To enable this for an ingress resource you have to add an annotation: kubectl annotate ing ingress-demo kubernetes.io/tls-acme=\"true\" To setup Kube-Lego you can take a look at this full example . The first version to fully support Kube-Lego is Nginx Ingress controller 0.8. Default TLS Version and Ciphers \u00b6 To provide the most secure baseline configuration possible, nginx-ingress defaults to using TLS 1.2 only and a secure set of TLS ciphers . Legacy TLS \u00b6 The default configuration, though secure, does not support some older browsers and operating systems. For instance, TLS 1.1+ is only enabled by default from Android 5.0 on. At the time of writing, May 2018, approximately 15% of Android devices are not compatible with nginx-ingress's default configuration. To change this default behavior, use a ConfigMap . A sample ConfigMap fragment to allow these older clients to connect could look something like the following: kind : ConfigMap apiVersion : v1 metadata : name : nginx - config data : ssl - ciphers : \"ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA\" ssl - protocols : \"TLSv1 TLSv1.1 TLSv1.2\"","title":"TLS/HTTPS"},{"location":"user-guide/tls/#tlshttps","text":"","title":"TLS/HTTPS"},{"location":"user-guide/tls/#tls-secrets","text":"Anytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret. 
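Such a secret is consumed by an Ingress through its spec.tls section; a minimal sketch (host, secret, and service names are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-example                # illustrative name
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls        # a secret of type kubernetes.io/tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-service # illustrative backend
          servicePort: 80
```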
You can generate a self-signed certificate and private key with: $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${ KEY_FILE } -out ${ CERT_FILE } -subj \"/CN= ${ HOST } /O= ${ HOST } \" Then create the secret in the cluster via: kubectl create secret tls ${ CERT_NAME } --key ${ KEY_FILE } --cert ${ CERT_FILE } The resulting secret will be of type kubernetes.io/tls .","title":"TLS Secrets"},{"location":"user-guide/tls/#default-ssl-certificate","text":"NGINX provides the option to configure a server as a catch-all with server_name for requests that do not match any of the configured server names. This configuration works out-of-the-box for HTTP traffic. For HTTPS, a certificate is naturally required. For this reason the Ingress controller provides the flag --default-ssl-certificate . The secret referred to by this flag contains the default certificate to be used when accessing the catch-all server. If this flag is not provided NGINX will use a self-signed certificate. For instance, if you have a TLS secret foo-tls in the default namespace, add --default-ssl-certificate=default/foo-tls in the nginx-controller deployment.","title":"Default SSL Certificate"},{"location":"user-guide/tls/#ssl-passthrough","text":"The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by default. This is required to enable passthrough backends in Ingress objects. Warning This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty. SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation, which requires compatible clients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back and forth between the backend and the client. If there is no hostname matching the requested host name, the request is handed over to NGINX on the configured passthrough proxy port (default: 442), which proxies the request to the default backend. Note Unlike HTTP backends, traffic to Passthrough backends is sent to the clusterIP of the backing Service instead of individual Endpoints.","title":"SSL Passthrough"},{"location":"user-guide/tls/#http-strict-transport-security","text":"HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header, that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS. HSTS is enabled by default. To disable this behavior use hsts: \"false\" in the configuration ConfigMap .","title":"HTTP Strict Transport Security"},{"location":"user-guide/tls/#server-side-https-enforcement-through-redirect","text":"By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress. This can be disabled globally using ssl-redirect: \"false\" in the NGINX config map , or per-Ingress with the nginx.ingress.kubernetes.io/ssl-redirect: \"false\" annotation in the particular resource. Tip When using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. 
This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: \"true\" annotation in the particular resource.","title":"Server-side HTTPS enforcement through redirect"},{"location":"user-guide/tls/#automated-certificate-management-with-kube-lego","text":"Tip Kube-Lego has reached end-of-life and is being replaced by cert-manager . Kube-Lego automatically requests missing or expired certificates from Let's Encrypt by monitoring ingress resources and their referenced secrets. To enable this for an ingress resource you have to add an annotation: kubectl annotate ing ingress-demo kubernetes.io/tls-acme=\"true\" To setup Kube-Lego you can take a look at this full example . The first version to fully support Kube-Lego is Nginx Ingress controller 0.8.","title":"Automated Certificate Management with Kube-Lego"},{"location":"user-guide/tls/#default-tls-version-and-ciphers","text":"To provide the most secure baseline configuration possible, nginx-ingress defaults to using TLS 1.2 only and a secure set of TLS ciphers .","title":"Default TLS Version and Ciphers"},{"location":"user-guide/tls/#legacy-tls","text":"The default configuration, though secure, does not support some older browsers and operating systems. For instance, TLS 1.1+ is only enabled by default from Android 5.0 on. At the time of writing, May 2018, approximately 15% of Android devices are not compatible with nginx-ingress's default configuration. To change this default behavior, use a ConfigMap . A sample ConfigMap fragment to allow these older clients to connect could look something like the following: kind : ConfigMap apiVersion : v1 metadata : name : nginx - config data : ssl - ciphers : \"ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA\" ssl - protocols : \"TLSv1 TLSv1.1 TLSv1.2\"","title":"Legacy TLS"},{"location":"user-guide/nginx-configuration/","text":"NGINX Configuration \u00b6 There are three ways to customize NGINX: ConfigMap : using a Configmap to set global configurations in NGINX. Annotations : use this if you want a specific configuration for a particular Ingress rule. Custom template : when more specific settings are required, like open_file_cache , adjust listen options as rcvbuf or when is not possible to change the configuration through the ConfigMap.","title":"Introduction"},{"location":"user-guide/nginx-configuration/#nginx-configuration","text":"There are three ways to customize NGINX: ConfigMap : using a Configmap to set global configurations in NGINX. Annotations : use this if you want a specific configuration for a particular Ingress rule. 
Custom template : when more specific settings are required, like open_file_cache , adjust listen options as rcvbuf or when is not possible to change the configuration through the ConfigMap.","title":"NGINX Configuration"},{"location":"user-guide/nginx-configuration/annotations/","text":"Annotations \u00b6 You can add these Kubernetes annotations to specific Ingress objects to customize their behavior. Tip Annotation keys and values can only be strings. Other types, such as boolean or numeric values must be quoted, i.e. \"true\" , \"false\" , \"100\" . Note The annotation prefix can be changed using the --annotations-prefix command line argument , but the default is nginx.ingress.kubernetes.io , as described in the table below. Name type nginx.ingress.kubernetes.io/app-root string nginx.ingress.kubernetes.io/affinity cookie nginx.ingress.kubernetes.io/auth-realm string nginx.ingress.kubernetes.io/auth-secret string nginx.ingress.kubernetes.io/auth-type basic or digest nginx.ingress.kubernetes.io/auth-tls-secret string nginx.ingress.kubernetes.io/auth-tls-verify-depth number nginx.ingress.kubernetes.io/auth-tls-verify-client string nginx.ingress.kubernetes.io/auth-tls-error-page string nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream \"true\" or \"false\" nginx.ingress.kubernetes.io/auth-url string nginx.ingress.kubernetes.io/auth-snippet string nginx.ingress.kubernetes.io/backend-protocol string nginx.ingress.kubernetes.io/canary \"true\" or \"false\" nginx.ingress.kubernetes.io/canary-by-header string nginx.ingress.kubernetes.io/canary-by-header-value string nginx.ingress.kubernetes.io/canary-by-cookie string nginx.ingress.kubernetes.io/canary-weight number nginx.ingress.kubernetes.io/client-body-buffer-size string nginx.ingress.kubernetes.io/configuration-snippet string nginx.ingress.kubernetes.io/custom-http-errors []int nginx.ingress.kubernetes.io/default-backend string nginx.ingress.kubernetes.io/enable-cors \"true\" or \"false\" nginx.ingress.kubernetes.io/cors-allow-origin string nginx.ingress.kubernetes.io/cors-allow-methods string nginx.ingress.kubernetes.io/cors-allow-headers string nginx.ingress.kubernetes.io/cors-allow-credentials \"true\" or \"false\" nginx.ingress.kubernetes.io/cors-max-age number nginx.ingress.kubernetes.io/force-ssl-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/from-to-www-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/http2-push-preload \"true\" or \"false\" nginx.ingress.kubernetes.io/limit-connections number nginx.ingress.kubernetes.io/limit-rps number nginx.ingress.kubernetes.io/permanent-redirect string nginx.ingress.kubernetes.io/permanent-redirect-code number nginx.ingress.kubernetes.io/temporal-redirect string nginx.ingress.kubernetes.io/proxy-body-size string nginx.ingress.kubernetes.io/proxy-cookie-domain string nginx.ingress.kubernetes.io/proxy-cookie-path string nginx.ingress.kubernetes.io/proxy-connect-timeout number nginx.ingress.kubernetes.io/proxy-send-timeout number nginx.ingress.kubernetes.io/proxy-read-timeout number nginx.ingress.kubernetes.io/proxy-next-upstream string nginx.ingress.kubernetes.io/proxy-next-upstream-tries number nginx.ingress.kubernetes.io/proxy-request-buffering string nginx.ingress.kubernetes.io/proxy-redirect-from string nginx.ingress.kubernetes.io/proxy-redirect-to string nginx.ingress.kubernetes.io/enable-rewrite-log \"true\" or \"false\" nginx.ingress.kubernetes.io/rewrite-target URI nginx.ingress.kubernetes.io/satisfy string 
nginx.ingress.kubernetes.io/secure-verify-ca-secret string nginx.ingress.kubernetes.io/server-alias string nginx.ingress.kubernetes.io/server-snippet string nginx.ingress.kubernetes.io/service-upstream \"true\" or \"false\" nginx.ingress.kubernetes.io/session-cookie-name string nginx.ingress.kubernetes.io/session-cookie-path string nginx.ingress.kubernetes.io/ssl-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/ssl-passthrough \"true\" or \"false\" nginx.ingress.kubernetes.io/upstream-hash-by string nginx.ingress.kubernetes.io/x-forwarded-prefix string nginx.ingress.kubernetes.io/load-balance string nginx.ingress.kubernetes.io/upstream-vhost string nginx.ingress.kubernetes.io/whitelist-source-range CIDR nginx.ingress.kubernetes.io/proxy-buffering string nginx.ingress.kubernetes.io/proxy-buffers-number number nginx.ingress.kubernetes.io/proxy-buffer-size string nginx.ingress.kubernetes.io/ssl-ciphers string nginx.ingress.kubernetes.io/connection-proxy-header string nginx.ingress.kubernetes.io/enable-access-log \"true\" or \"false\" nginx.ingress.kubernetes.io/lua-resty-waf string nginx.ingress.kubernetes.io/lua-resty-waf-debug \"true\" or \"false\" nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets string nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules string nginx.ingress.kubernetes.io/lua-resty-waf-allow-unknown-content-types \"true\" or \"false\" nginx.ingress.kubernetes.io/lua-resty-waf-score-threshold number nginx.ingress.kubernetes.io/lua-resty-waf-process-multipart-body \"true\" or \"false\" nginx.ingress.kubernetes.io/enable-influxdb \"true\" or \"false\" nginx.ingress.kubernetes.io/influxdb-measurement string nginx.ingress.kubernetes.io/influxdb-port string nginx.ingress.kubernetes.io/influxdb-host string nginx.ingress.kubernetes.io/influxdb-server-name string nginx.ingress.kubernetes.io/use-regex bool nginx.ingress.kubernetes.io/enable-modsecurity bool nginx.ingress.kubernetes.io/enable-owasp-core-rules bool nginx.ingress.kubernetes.io/modsecurity-transaction-id string nginx.ingress.kubernetes.io/modsecurity-snippet string Canary \u00b6 In some cases, you may want to \"canary\" a new set of changes by sending a small number of requests to a different service than the production service. The canary annotation enables the Ingress spec to act as an alternative service for requests to route to depending on the rules applied. The following annotations to configure canary can be enabled after nginx.ingress.kubernetes.io/canary: \"true\" is set: nginx.ingress.kubernetes.io/canary-by-header : The header to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to always , it will be routed to the canary. When the header is set to never , it will never be routed to the canary. For any other value, the header will be ignored and the request compared against the other canary rules by precedence. nginx.ingress.kubernetes.io/canary-by-header-value : The header value to match for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to this value, it will be routed to the canary. For any other header value, the header will be ignored and the request compared against the other canary rules by precedence. This annotation has to be used together with . The annotation is an extension of the nginx.ingress.kubernetes.io/canary-by-header to allow customizing the header value instead of using hardcoded values. 
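As a sketch, a canary Ingress combining the header and weight rules described here could look like the following (names, host, and values are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-canary                # illustrative name
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    # requests carrying "X-Canary: always" are always routed to the canary
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
    # otherwise, roughly 10% of the remaining traffic is sent to it
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
  - host: echo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echo-v2     # illustrative canary backend
          servicePort: 80
```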
The nginx.ingress.kubernetes.io/canary-by-header-value annotation doesn't have any effect if the nginx.ingress.kubernetes.io/canary-by-header annotation is not defined. nginx.ingress.kubernetes.io/canary-by-cookie : The cookie to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the cookie value is set to always , it will be routed to the canary. When the cookie is set to never , it will never be routed to the canary. For any other value, the cookie will be ignored and the request compared against the other canary rules by precedence. nginx.ingress.kubernetes.io/canary-weight : The integer-based (0 - 100) percent of random requests that should be routed to the service specified in the canary Ingress. A weight of 0 implies that no requests will be sent to the service in the Canary ingress by this canary rule. A weight of 100 implies that all requests will be sent to the alternative service specified in the Ingress. Canary rules are evaluated in order of precedence. Precedence is as follows: canary-by-header -> canary-by-cookie -> canary-weight Note that when you mark an ingress as canary, then all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except nginx.ingress.kubernetes.io/load-balance and nginx.ingress.kubernetes.io/upstream-hash-by . Known Limitations Currently a maximum of one canary ingress can be applied per Ingress rule. Rewrite \u00b6 In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. Set the annotation nginx.ingress.kubernetes.io/rewrite-target to the path expected by the service. If the Application Root is exposed in a different path and needs to be redirected, set the annotation nginx.ingress.kubernetes.io/app-root to redirect requests for / . Example Please check the rewrite example. Session Affinity \u00b6 The annotation nginx.ingress.kubernetes.io/affinity enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server. The only affinity type available for NGINX is cookie . Attention If more than one Ingress is defined for a host and at least one Ingress uses nginx.ingress.kubernetes.io/affinity: cookie , then only paths on the Ingress using nginx.ingress.kubernetes.io/affinity will use session cookie affinity. All paths defined on other Ingresses for the host will be load balanced through the random selection of a backend server. Example Please check the affinity example. Cookie affinity \u00b6 If you use the cookie affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation nginx.ingress.kubernetes.io/session-cookie-name . The default is to create a cookie named 'INGRESSCOOKIE'. The NGINX annotation nginx.ingress.kubernetes.io/session-cookie-path defines the path that will be set on the cookie. This is optional unless the annotation nginx.ingress.kubernetes.io/use-regex is set to true; session cookie paths do not support regex. Authentication \u00b6 It is possible to add authentication by adding additional annotations in the Ingress rule. The source of the authentication is a secret that contains usernames and passwords inside the key auth . The annotations are: nginx.ingress.kubernetes.io/auth-type: [basic|digest] Indicates the HTTP Authentication Type: Basic or Digest Access Authentication . 
nginx.ingress.kubernetes.io/auth-secret: secretName The name of the Secret that contains the usernames and passwords which are granted access to the path s defined in the Ingress rules. This annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace. nginx.ingress.kubernetes.io/auth-realm: \"realm string\" Example Please check the auth example. Custom NGINX upstream hashing \u00b6 NGINX supports load balancing by client-server mapping based on consistent hashing for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The ketama consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes. There is a special mode of upstream hashing called subset. In this mode, upstream servers are grouped into subsets, and stickiness works by mapping keys to a subset instead of individual upstream servers. Specific server is chosen uniformly at random from the selected sticky subset. It provides a balance between stickiness and load distribution. To enable consistent hashing for a backend: nginx.ingress.kubernetes.io/upstream-hash-by : the nginx variable, text value or any combination thereof to use for consistent hashing. For example nginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri\" to consistently hash upstream requests by the current request URI. \"subset\" hashing can be enabled setting nginx.ingress.kubernetes.io/upstream-hash-by-subset : \"true\". This maps requests to subset of nodes instead of a single one. upstream-hash-by-subset-size determines the size of each subset (default 3). Please check the chashsubset example. Custom NGINX load balancing \u00b6 This is similar to load-balance in ConfigMap , but configures load balancing algorithm per ingress. Note that nginx.ingress.kubernetes.io/upstream-hash-by takes preference over this. If this and nginx.ingress.kubernetes.io/upstream-hash-by are not set then we fallback to using globally configured load balancing algorithm. Custom NGINX upstream vhost \u00b6 This configuration setting allows you to control the value for host in the following statement: proxy_set_header Host $host , which forms part of the location block. This is useful if you need to call the upstream server by something other than $host . Client Certificate Authentication \u00b6 It is possible to enable Client Certificate Authentication using additional annotations in Ingress Rule. The annotations are: nginx.ingress.kubernetes.io/auth-tls-secret: secretName : The name of the Secret that contains the full Certificate Authority chain ca.crt that is enabled to authenticate against this Ingress. This annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace. nginx.ingress.kubernetes.io/auth-tls-verify-depth : The validation depth between the provided client certificate and the Certification Authority chain. nginx.ingress.kubernetes.io/auth-tls-verify-client : Enables verification of client certificates. 
nginx.ingress.kubernetes.io/auth-tls-error-page : The URL/page to which the user should be redirected in case of a Certificate Authentication Error nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream : Indicates if the received certificates should be passed to the upstream server or not. By default this is disabled. Example Please check the client-certs example. Attention TLS with Client Authentication is not possible in Cloudflare and might result in unexpected behavior. Cloudflare only allows Authenticated Origin Pulls and requires the use of their own certificate: https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/ Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial: https://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls Configuration snippet \u00b6 Using this annotation you can add additional configuration to the NGINX location. For example: nginx.ingress.kubernetes.io/configuration-snippet : | more_set_headers \"Request-Id: $req_id\"; Custom HTTP Errors \u00b6 Like the custom-http-errors value in the ConfigMap, this annotation will set NGINX proxy-intercept-errors , but only for the NGINX location associated with this ingress. If a default backend annotation is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend). Different ingresses can specify different sets of error codes. Even if multiple ingress objects share the same hostname, this annotation can be used to intercept different error codes for each ingress (for example, different error codes to be intercepted for different paths on the same hostname, if each path is on a different ingress). If custom-http-errors is also specified globally, the error values specified in this annotation will override the global value for the given ingress' hostname and path. Example usage: nginx.ingress.kubernetes.io/custom-http-errors: \"404,415\" Default Backend \u00b6 This annotation is of the form nginx.ingress.kubernetes.io/default-backend: to specify a custom default backend. This is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. This service will handle the response when the service in the Ingress rule does not have active endpoints. It will also handle the error responses if both this annotation and the custom-http-errors annotation are set. Enable CORS \u00b6 To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation nginx.ingress.kubernetes.io/enable-cors: \"true\" . This will add a section in the server location enabling this functionality. CORS can be controlled with the following annotations: nginx.ingress.kubernetes.io/cors-allow-methods controls which methods are accepted. This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case). Default: GET, PUT, POST, DELETE, PATCH, OPTIONS Example: nginx.ingress.kubernetes.io/cors-allow-methods: \"PUT, GET, POST, OPTIONS\" nginx.ingress.kubernetes.io/cors-allow-headers controls which headers are accepted. This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -. 
Default: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization Example: nginx.ingress.kubernetes.io/cors-allow-headers: \"X-Forwarded-For, X-app123-XPTO\" nginx.ingress.kubernetes.io/cors-allow-origin controls what's the accepted Origin for CORS. This is a single field value, with the following format: http(s)://origin-site.com or http(s)://origin-site.com:port Default: * Example: nginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com:4443\" nginx.ingress.kubernetes.io/cors-allow-credentials controls if credentials can be passed during CORS operations. Default: true Example: nginx.ingress.kubernetes.io/cors-allow-credentials: \"false\" nginx.ingress.kubernetes.io/cors-max-age controls how long preflight requests can be cached. Default: 1728000 Example: nginx.ingress.kubernetes.io/cors-max-age: 600 Note For more information please see https://enable-cors.org HTTP2 Push Preload. \u00b6 Enables automatic conversion of preload links specified in the \u201cLink\u201d response header fields into push requests. Example nginx.ingress.kubernetes.io/http2-push-preload: \"true\" Server Alias \u00b6 To add Server Aliases to an Ingress rule add the annotation nginx.ingress.kubernetes.io/server-alias: \"\" . This will create a server with the same configuration, but a different server_name as the provided host. Note A server-alias name cannot conflict with the hostname of an existing server. If it does the server-alias annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take place over the alias configuration. For more information please see the server_name documentation . Server snippet \u00b6 Using the annotation nginx.ingress.kubernetes.io/server-snippet it is possible to add custom configuration in the server configuration block. apiVersion : extensions/v1beta1 kind : Ingress metadata : annotations : nginx.ingress.kubernetes.io/server-snippet : | set $agentflag 0; if ($http_user_agent ~* \"(Mobile)\" ){ set $agentflag 1; } if ( $agentflag = 1 ) { return 301 https://m.example.com; } Attention This annotation can be used only once per host. Client Body Buffer Size \u00b6 Sets buffer size for reading client request body per location. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is applied to each location provided in the ingress rule. Note The annotation value must be given in a format understood by Nginx. Example nginx.ingress.kubernetes.io/client-body-buffer-size: \"1000\" # 1000 bytes nginx.ingress.kubernetes.io/client-body-buffer-size: 1k # 1 kilobyte nginx.ingress.kubernetes.io/client-body-buffer-size: 1K # 1 kilobyte nginx.ingress.kubernetes.io/client-body-buffer-size: 1m # 1 megabyte nginx.ingress.kubernetes.io/client-body-buffer-size: 1M # 1 megabyte For more information please see http://nginx.org External Authentication \u00b6 To use an existing service that provides authentication the Ingress rule can be annotated with nginx.ingress.kubernetes.io/auth-url to indicate the URL where the HTTP request should be sent. 
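For instance, a minimal sketch delegating authentication to an external endpoint (the URL, host, and service names are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-auth-example      # illustrative name
  annotations:
    # every request is first validated against this endpoint;
    # a 2xx response lets the request through to the backend
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/validate"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app-service # illustrative backend
          servicePort: 80
```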
nginx.ingress.kubernetes.io/auth-url : \"URL to the authentication service\" Additionally it is possible to set: nginx.ingress.kubernetes.io/auth-method : to specify the HTTP method to use. nginx.ingress.kubernetes.io/auth-signin : to specify the location of the error page. nginx.ingress.kubernetes.io/auth-response-headers : to specify headers to pass to the backend once the authentication request completes. nginx.ingress.kubernetes.io/auth-request-redirect : to specify the X-Auth-Request-Redirect header value. nginx.ingress.kubernetes.io/auth-snippet : to specify a custom snippet to use with external authentication, e.g. nginx.ingress.kubernetes.io/auth-url : http://foo.com/external-auth nginx.ingress.kubernetes.io/auth-snippet : | proxy_set_header Foo-Header 42; Note: nginx.ingress.kubernetes.io/auth-snippet is an optional annotation. However, it may only be used in conjunction with nginx.ingress.kubernetes.io/auth-url and will be ignored if nginx.ingress.kubernetes.io/auth-url is not set. Example Please check the external-auth example. Rate limiting \u00b6 These annotations define a limit on the connections that can be opened by a single client IP address. This can be used to mitigate DDoS Attacks . nginx.ingress.kubernetes.io/limit-connections : number of concurrent connections allowed from a single IP address. nginx.ingress.kubernetes.io/limit-rps : number of connections that may be accepted from a given IP each second. nginx.ingress.kubernetes.io/limit-rpm : number of connections that may be accepted from a given IP each minute. nginx.ingress.kubernetes.io/limit-rate-after : sets the initial amount after which the further transmission of a response to a client will be rate limited. nginx.ingress.kubernetes.io/limit-rate : rate, in bytes per second, at which the response is transmitted to a client. You can specify the client IP source ranges to be excluded from rate-limiting through the nginx.ingress.kubernetes.io/limit-whitelist annotation. The value is a comma separated list of CIDRs. If you specify multiple annotations in a single Ingress rule, limit-rpm takes precedence, and then limit-rps . The annotations nginx.ingress.kubernetes.io/limit-rate and nginx.ingress.kubernetes.io/limit-rate-after define a limit on the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit. To configure this setting globally for all Ingress rules, the limit-rate-after and limit-rate values may be set in the NGINX ConfigMap . If you set the value in an Ingress annotation, it will override the global setting. Permanent Redirect \u00b6 This annotation allows you to return a permanent redirect instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com would redirect everything to Google. Permanent Redirect Code \u00b6 This annotation allows you to modify the status code used for permanent redirects. For example nginx.ingress.kubernetes.io/permanent-redirect-code: '308' would return your permanent-redirect with a 308. Temporal Redirect \u00b6 This annotation allows you to return a temporal redirect (Return Code 302) instead of sending data to the upstream. 
For example nginx.ingress.kubernetes.io/temporal-redirect: https://www.google.com would redirect everything to Google with a Return Code of 302 (Moved Temporarily). SSL Passthrough \u00b6 The annotation nginx.ingress.kubernetes.io/ssl-passthrough instructs the controller to send TLS connections directly to the backend instead of letting NGINX decrypt the communication. See also TLS/HTTPS in the User guide. Note SSL Passthrough is disabled by default and requires starting the controller with the --enable-ssl-passthrough flag. Attention Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on layer 7 (HTTP), using SSL Passthrough invalidates all the other annotations set on an Ingress object. Service Upstream \u00b6 By default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration. The nginx.ingress.kubernetes.io/service-upstream annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port. This can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. See issue #257 . Known Issues \u00b6 If the service-upstream annotation is specified the following things should be taken into consideration: Sticky Sessions will not work as only round-robin load balancing is supported. The proxy_next_upstream directive will not have any effect, meaning that on error the request will not be dispatched to another upstream. Server-side HTTPS enforcement through redirect \u00b6 By default the controller redirects (308) to HTTPS if TLS is enabled for that ingress. If you want to disable this behavior globally, you can use ssl-redirect: \"false\" in the NGINX ConfigMap . To configure this feature for specific ingress resources, you can use the nginx.ingress.kubernetes.io/ssl-redirect: \"false\" annotation in the particular resource. When using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: \"true\" annotation in the particular resource. Redirect from/to www \u00b6 In some scenarios it is required to redirect from www.domain.com to domain.com or vice versa. To enable this feature use the annotation nginx.ingress.kubernetes.io/from-to-www-redirect: \"true\" Attention If at some point a new Ingress is created with a host equal to one of the options (like domain.com ) the annotation will be omitted. Attention For HTTPS to HTTPS redirects it is mandatory that the SSL certificate defined in the Secret, located in the TLS section of the Ingress, contains both FQDNs in the common name of the certificate. Whitelist source range \u00b6 You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma separated list of CIDRs , e.g. 10.0.0.0/24,172.10.0.1 . To configure this setting globally for all Ingress rules, the whitelist-source-range value may be set in the NGINX ConfigMap . Note Adding an annotation to an Ingress rule overrides any global restriction. Custom timeouts \u00b6 Using the configuration configmap it is possible to set the default global timeout for connections to the upstream servers. In some scenarios it is required to have different values. 
Custom timeouts \u00b6 Using the configuration configmap it is possible to set the default global timeout for connections to the upstream servers. In some scenarios it is required to have different values. To allow this, we provide annotations that allow this customization: nginx.ingress.kubernetes.io/proxy-connect-timeout nginx.ingress.kubernetes.io/proxy-send-timeout nginx.ingress.kubernetes.io/proxy-read-timeout nginx.ingress.kubernetes.io/proxy-next-upstream nginx.ingress.kubernetes.io/proxy-next-upstream-tries nginx.ingress.kubernetes.io/proxy-request-buffering Proxy redirect \u00b6 With the annotations nginx.ingress.kubernetes.io/proxy-redirect-from and nginx.ingress.kubernetes.io/proxy-redirect-to it is possible to set the text that should be changed in the Location and Refresh header fields of a proxied server response. Setting \"off\" or \"default\" in the annotation nginx.ingress.kubernetes.io/proxy-redirect-from disables nginx.ingress.kubernetes.io/proxy-redirect-to , otherwise, both annotations must be used in unison. Note that each annotation must be a string without spaces. By default the value of each annotation is \"off\". Custom max body size \u00b6 For NGINX, a 413 error will be returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter client_max_body_size . To configure this setting globally for all Ingress rules, the proxy-body-size value may be set in the NGINX ConfigMap . To use custom values in an Ingress rule define this annotation: nginx.ingress.kubernetes.io/proxy-body-size : 8m Proxy cookie domain \u00b6 Sets a text that should be changed in the domain attribute of the \"Set-Cookie\" header fields of a proxied server response. To configure this setting globally for all Ingress rules, the proxy-cookie-domain value may be set in the NGINX ConfigMap . Proxy cookie path \u00b6 Sets a text that should be changed in the path attribute of the \"Set-Cookie\" header fields of a proxied server response. To configure this setting globally for all Ingress rules, the proxy-cookie-path value may be set in the NGINX ConfigMap . Proxy buffering \u00b6 Enable or disable proxy buffering proxy_buffering . By default proxy buffering is disabled in the NGINX config. To configure this setting globally for all Ingress rules, the proxy-buffering value may be set in the NGINX ConfigMap . To use custom values in an Ingress rule define this annotation: nginx.ingress.kubernetes.io/proxy-buffering : \"on\" Proxy buffers Number \u00b6 Sets the number of the buffers in proxy_buffers used for reading the first part of the response received from the proxied server. By default the number of proxy buffers is set to 4. To configure this setting globally, set proxy-buffers-number in the NGINX ConfigMap . To use custom values in an Ingress rule, define this annotation: nginx.ingress.kubernetes.io/proxy-buffers-number : \"4\" Proxy buffer size \u00b6 Sets the size of the buffer proxy_buffer_size used for reading the first part of the response received from the proxied server. By default the proxy buffer size is set to \"4k\". To configure this setting globally, set proxy-buffer-size in the NGINX ConfigMap . To use custom values in an Ingress rule, define this annotation: nginx.ingress.kubernetes.io/proxy-buffer-size : \"8k\" SSL ciphers \u00b6 Specifies the enabled ciphers . Using this annotation will set the ssl_ciphers directive at the server level. This configuration is active for all the paths in the host. nginx.ingress.kubernetes.io/ssl-ciphers : \"ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP\"
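Putting the timeout and buffering annotations from the sections above together, a sketch (all values here are illustrative assumptions, not recommendations):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tuned-proxy-ingress
  annotations:
    # Fail the upstream connection attempt after 10 seconds (assumed value).
    nginx.ingress.kubernetes.io/proxy-connect-timeout: \"10\"
    # Allow slow responses for up to 120 seconds (assumed value).
    nginx.ingress.kubernetes.io/proxy-read-timeout: \"120\"
    # Buffer responses using four 8k buffers.
    nginx.ingress.kubernetes.io/proxy-buffering: \"on\"
    nginx.ingress.kubernetes.io/proxy-buffers-number: \"4\"
    nginx.ingress.kubernetes.io/proxy-buffer-size: \"8k\"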
Connection proxy header \u00b6 Using this annotation will override the default connection header set by NGINX. To use custom values in an Ingress rule, define the annotation: nginx.ingress.kubernetes.io/connection-proxy-header : \"keep-alive\" Enable Access Log \u00b6 Access logs are enabled by default, but in some scenarios it might be required to disable them for a given ingress. To do this, use the annotation: nginx.ingress.kubernetes.io/enable-access-log : \"false\" Enable Rewrite Log \u00b6 Rewrite logs are not enabled by default. In some scenarios it could be required to enable NGINX rewrite logs. Note that rewrite logs are sent to the error_log file at the notice level. To enable this feature use the annotation: nginx.ingress.kubernetes.io/enable-rewrite-log : \"true\" X-Forwarded-Prefix Header \u00b6 To add the non-standard X-Forwarded-Prefix header to the upstream request with a string value, the following annotation can be used: nginx.ingress.kubernetes.io/x-forwarded-prefix : \"/path\" Lua Resty WAF \u00b6 Using lua-resty-waf-* annotations we can enable and control the lua-resty-waf Web Application Firewall per location. The following configuration will enable the WAF for the paths defined in the corresponding ingress: nginx.ingress.kubernetes.io/lua-resty-waf : \"active\" In order to run it in debugging mode you can set nginx.ingress.kubernetes.io/lua-resty-waf-debug to \"true\" in addition to the above configuration. The other possible values for nginx.ingress.kubernetes.io/lua-resty-waf are inactive and simulate . In inactive mode the WAF won't do anything, whereas in simulate mode it will log a warning message if there's a matching WAF rule for a given request. This is useful to debug a rule and eliminate possible false positives before fully deploying it. lua-resty-waf comes with a predefined set of rules https://github.com/p0pr0ck5/lua-resty-waf/tree/84b4f40362500dd0cb98b9e71b5875cb1a40f1ad/rules that covers the ModSecurity CRS. You can use nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets to ignore a subset of those rulesets. For example: nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets : \"41000_sqli, 42000_xss\" will ignore the two mentioned rulesets.
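A sketch combining these annotations on one Ingress (the chosen mode and ignored rulesets are illustrative, reusing the values mentioned above):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: waf-ingress
  annotations:
    # Enforce the WAF for the paths of this Ingress.
    nginx.ingress.kubernetes.io/lua-resty-waf: \"active\"
    # Log extra debugging information while tuning rules.
    nginx.ingress.kubernetes.io/lua-resty-waf-debug: \"true\"
    # Skip rulesets that produce false positives for this application (assumed).
    nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets: \"41000_sqli, 42000_xss\"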
It is also possible to configure custom WAF rules per ingress using the nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules annotation. For example, the following snippet will configure a WAF rule to deny requests whose query string contains the word foo : nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules : '[=[ { \"access\": [ { \"actions\": { \"disrupt\" : \"DENY\" }, \"id\": 10001, \"msg\": \"my custom rule\", \"operator\": \"STR_CONTAINS\", \"pattern\": \"foo\", \"vars\": [ { \"parse\": [ \"values\", 1 ], \"type\": \"REQUEST_ARGS\" } ] } ], \"body_filter\": [], \"header_filter\":[] } ]=]' The default allowed content types are \"text/html\", \"text/json\" and \"application/json\". To allow all content types, enable the following annotation: nginx.ingress.kubernetes.io/lua-resty-waf-allow-unknown-content-types : \"true\" The default score threshold of lua-resty-waf is 5, which is usually triggered by hitting 2 default rules. You can modify the score threshold with the following annotation: nginx.ingress.kubernetes.io/lua-resty-waf-score-threshold : \"10\" When HTTPS is enabled on the endpoint, resty-lua will return a 500 error while processing \"multipart\" contents ( Reference for this issue ). This option defaults to \"true\"; you may set the following annotation as a workaround: nginx.ingress.kubernetes.io/lua-resty-waf-process-multipart-body : \"false\" For details on how to write WAF rules, please refer to https://github.com/p0pr0ck5/lua-resty-waf . ModSecurity \u00b6 ModSecurity is an open source web application firewall. It can be enabled for a particular set of ingress locations. The ModSecurity module must first be enabled by enabling ModSecurity in the ConfigMap . Note that this will enable ModSecurity for all paths, and each path must be disabled manually. It can be enabled using the following annotation: nginx.ingress.kubernetes.io/enable-modsecurity : \"true\" ModSecurity will run in \"Detection-Only\" mode using the recommended configuration . You can enable the OWASP Core Rule Set by setting the following annotation: nginx.ingress.kubernetes.io/enable-owasp-core-rules : \"true\" You can pass transactionIDs from nginx by setting up the following: nginx.ingress.kubernetes.io/modsecurity-transaction-id : \"$request_id\" You can also add your own set of modsecurity rules via a snippet: nginx.ingress.kubernetes.io/modsecurity-snippet : | SecRuleEngine On SecDebugLog /tmp/modsec_debug.log Note: If you use both enable-owasp-core-rules and modsecurity-snippet annotations together, only the modsecurity-snippet will take effect. If you wish to include the OWASP Core Rule Set or recommended configuration simply use the include statement: nginx.ingress.kubernetes.io/modsecurity-snippet : | Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf Include /etc/nginx/modsecurity/modsecurity.conf
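For instance, a sketch of an Ingress that enables ModSecurity together with the OWASP Core Rule Set, using the annotations documented above (only the metadata block is shown):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: modsecurity-ingress
  annotations:
    # Turn the ModSecurity module on for this Ingress.
    nginx.ingress.kubernetes.io/enable-modsecurity: \"true\"
    # Load the OWASP Core Rule Set.
    nginx.ingress.kubernetes.io/enable-owasp-core-rules: \"true\"
    # Correlate ModSecurity transactions with NGINX request IDs.
    nginx.ingress.kubernetes.io/modsecurity-transaction-id: \"$request_id\"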
InfluxDB \u00b6 Using influxdb-* annotations we can monitor requests passing through a Location by sending them to an InfluxDB backend exposing the UDP socket using the nginx-influxdb-module . nginx.ingress.kubernetes.io/enable-influxdb : \"true\" nginx.ingress.kubernetes.io/influxdb-measurement : \"nginx-reqs\" nginx.ingress.kubernetes.io/influxdb-port : \"8089\" nginx.ingress.kubernetes.io/influxdb-host : \"127.0.0.1\" nginx.ingress.kubernetes.io/influxdb-server-name : \"nginx-ingress\" For the influxdb-host parameter you have two options: Use an InfluxDB server configured with the UDP protocol enabled. Deploy Telegraf as a sidecar proxy to the Ingress controller, configured to listen on UDP with the socket listener input and to write using any of the output plugins like InfluxDB, Apache Kafka, Prometheus, etc. (recommended) It's important to remember that there's no DNS resolver at this stage so you will have to configure an IP address for nginx.ingress.kubernetes.io/influxdb-host . If you deploy Influx or Telegraf as a sidecar (another container in the same pod) this becomes straightforward since you can directly use 127.0.0.1 . Backend Protocol \u00b6 Using backend-protocol annotations it is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid values: HTTP, HTTPS, GRPC, GRPCS and AJP. By default NGINX uses HTTP . Example: nginx.ingress.kubernetes.io/backend-protocol : \"HTTPS\" Use Regex \u00b6 Attention When using this annotation with the NGINX annotation nginx.ingress.kubernetes.io/affinity of type cookie , nginx.ingress.kubernetes.io/session-cookie-path must also be set; session cookie paths do not support regex. Using the nginx.ingress.kubernetes.io/use-regex annotation will indicate whether or not the paths defined on an Ingress use regular expressions. The default value is false . The following will indicate that regular expression paths are being used: nginx.ingress.kubernetes.io/use-regex : \"true\" The following will indicate that regular expression paths are not being used: nginx.ingress.kubernetes.io/use-regex : \"false\" When this annotation is set to true , the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on. Additionally, if the rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on. Please read about ingress path matching before using this modifier. Satisfy \u00b6 By default, a request would need to satisfy all authentication requirements in order to be allowed. By using this annotation, requests that satisfy either any or all authentication requirements are allowed, based on the configuration value. nginx.ingress.kubernetes.io/satisfy : \"any\"","title":"Annotations"},{"location":"user-guide/nginx-configuration/annotations/#annotations","text":"You can add these Kubernetes annotations to specific Ingress objects to customize their behavior. Tip Annotation keys and values can only be strings. Other types, such as boolean or numeric values, must be quoted, i.e. \"true\" , \"false\" , \"100\" . Note The annotation prefix can be changed using the --annotations-prefix command line argument , but the default is nginx.ingress.kubernetes.io , as described in the table below.
Name type
nginx.ingress.kubernetes.io/app-root string
nginx.ingress.kubernetes.io/affinity cookie
nginx.ingress.kubernetes.io/auth-realm string
nginx.ingress.kubernetes.io/auth-secret string
nginx.ingress.kubernetes.io/auth-type basic or digest
nginx.ingress.kubernetes.io/auth-tls-secret string
nginx.ingress.kubernetes.io/auth-tls-verify-depth number
nginx.ingress.kubernetes.io/auth-tls-verify-client string
nginx.ingress.kubernetes.io/auth-tls-error-page string
nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream \"true\" or \"false\"
nginx.ingress.kubernetes.io/auth-url string
nginx.ingress.kubernetes.io/auth-snippet string
nginx.ingress.kubernetes.io/backend-protocol string
nginx.ingress.kubernetes.io/canary \"true\" or \"false\"
nginx.ingress.kubernetes.io/canary-by-header string
nginx.ingress.kubernetes.io/canary-by-header-value string
nginx.ingress.kubernetes.io/canary-by-cookie string
nginx.ingress.kubernetes.io/canary-weight number
nginx.ingress.kubernetes.io/client-body-buffer-size string
nginx.ingress.kubernetes.io/configuration-snippet string
nginx.ingress.kubernetes.io/custom-http-errors []int
nginx.ingress.kubernetes.io/default-backend string
nginx.ingress.kubernetes.io/enable-cors \"true\" or \"false\"
nginx.ingress.kubernetes.io/cors-allow-origin string
nginx.ingress.kubernetes.io/cors-allow-methods string
nginx.ingress.kubernetes.io/cors-allow-headers string
nginx.ingress.kubernetes.io/cors-allow-credentials \"true\" or \"false\"
nginx.ingress.kubernetes.io/cors-max-age number
nginx.ingress.kubernetes.io/force-ssl-redirect \"true\" or \"false\"
nginx.ingress.kubernetes.io/from-to-www-redirect \"true\" or \"false\"
nginx.ingress.kubernetes.io/http2-push-preload \"true\" or \"false\"
nginx.ingress.kubernetes.io/limit-connections number
nginx.ingress.kubernetes.io/limit-rps number
nginx.ingress.kubernetes.io/permanent-redirect string
nginx.ingress.kubernetes.io/permanent-redirect-code number
nginx.ingress.kubernetes.io/temporal-redirect string
nginx.ingress.kubernetes.io/proxy-body-size string
nginx.ingress.kubernetes.io/proxy-cookie-domain string
nginx.ingress.kubernetes.io/proxy-cookie-path string
nginx.ingress.kubernetes.io/proxy-connect-timeout number
nginx.ingress.kubernetes.io/proxy-send-timeout number
nginx.ingress.kubernetes.io/proxy-read-timeout number
nginx.ingress.kubernetes.io/proxy-next-upstream string
nginx.ingress.kubernetes.io/proxy-next-upstream-tries number
nginx.ingress.kubernetes.io/proxy-request-buffering string
nginx.ingress.kubernetes.io/proxy-redirect-from string
nginx.ingress.kubernetes.io/proxy-redirect-to string
nginx.ingress.kubernetes.io/enable-rewrite-log \"true\" or \"false\"
nginx.ingress.kubernetes.io/rewrite-target URI
nginx.ingress.kubernetes.io/satisfy string
nginx.ingress.kubernetes.io/secure-verify-ca-secret string
nginx.ingress.kubernetes.io/server-alias string
nginx.ingress.kubernetes.io/server-snippet string
nginx.ingress.kubernetes.io/service-upstream \"true\" or \"false\"
nginx.ingress.kubernetes.io/session-cookie-name string
nginx.ingress.kubernetes.io/session-cookie-path string
nginx.ingress.kubernetes.io/ssl-redirect \"true\" or \"false\"
nginx.ingress.kubernetes.io/ssl-passthrough \"true\" or \"false\"
nginx.ingress.kubernetes.io/upstream-hash-by string
nginx.ingress.kubernetes.io/x-forwarded-prefix string
nginx.ingress.kubernetes.io/load-balance string
nginx.ingress.kubernetes.io/upstream-vhost string
nginx.ingress.kubernetes.io/whitelist-source-range CIDR
nginx.ingress.kubernetes.io/proxy-buffering string
nginx.ingress.kubernetes.io/proxy-buffers-number number
nginx.ingress.kubernetes.io/proxy-buffer-size string
nginx.ingress.kubernetes.io/ssl-ciphers string
nginx.ingress.kubernetes.io/connection-proxy-header string
nginx.ingress.kubernetes.io/enable-access-log \"true\" or \"false\"
nginx.ingress.kubernetes.io/lua-resty-waf string
nginx.ingress.kubernetes.io/lua-resty-waf-debug \"true\" or \"false\"
nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets string
nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules string
nginx.ingress.kubernetes.io/lua-resty-waf-allow-unknown-content-types \"true\" or \"false\"
nginx.ingress.kubernetes.io/lua-resty-waf-score-threshold number
nginx.ingress.kubernetes.io/lua-resty-waf-process-multipart-body \"true\" or \"false\"
nginx.ingress.kubernetes.io/enable-influxdb \"true\" or \"false\"
nginx.ingress.kubernetes.io/influxdb-measurement string
nginx.ingress.kubernetes.io/influxdb-port string
nginx.ingress.kubernetes.io/influxdb-host string
nginx.ingress.kubernetes.io/influxdb-server-name string
nginx.ingress.kubernetes.io/use-regex bool
nginx.ingress.kubernetes.io/enable-modsecurity bool
nginx.ingress.kubernetes.io/enable-owasp-core-rules bool
nginx.ingress.kubernetes.io/modsecurity-transaction-id string
nginx.ingress.kubernetes.io/modsecurity-snippet string","title":"Annotations"},{"location":"user-guide/nginx-configuration/annotations/#canary","text":"In some cases, you may want to \"canary\" a new set of changes by sending a small number of requests to a different service than the production service. The canary annotation enables the Ingress spec to act as an alternative service for requests to route to depending on the rules applied. The following annotations can be used to configure canary, after nginx.ingress.kubernetes.io/canary: \"true\" is set: nginx.ingress.kubernetes.io/canary-by-header : The header to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to always , it will be routed to the canary. When the header is set to never , it will never be routed to the canary. For any other value, the header will be ignored and the request compared against the other canary rules by precedence. nginx.ingress.kubernetes.io/canary-by-header-value : The header value to match for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to this value, it will be routed to the canary. For any other header value, the header will be ignored and the request compared against the other canary rules by precedence. This annotation has to be used together with nginx.ingress.kubernetes.io/canary-by-header . The annotation is an extension of the nginx.ingress.kubernetes.io/canary-by-header to allow customizing the header value instead of using hardcoded values. It doesn't have any effect if the nginx.ingress.kubernetes.io/canary-by-header annotation is not defined. nginx.ingress.kubernetes.io/canary-by-cookie : The cookie to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the cookie value is set to always , it will be routed to the canary. When the cookie is set to never , it will never be routed to the canary. For any other value, the cookie will be ignored and the request compared against the other canary rules by precedence. nginx.ingress.kubernetes.io/canary-weight : The integer-based (0 - 100) percentage of random requests that should be routed to the service specified in the canary Ingress. A weight of 0 implies that no requests will be sent to the service in the Canary ingress by this canary rule. A weight of 100 implies that all requests will be sent to the alternative service specified in the Ingress. Canary rules are evaluated in order of precedence. Precedence is as follows: canary-by-header -> canary-by-cookie -> canary-weight Note that when you mark an ingress as canary, all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except nginx.ingress.kubernetes.io/load-balance and nginx.ingress.kubernetes.io/upstream-hash-by . Known Limitations Currently a maximum of one canary ingress can be applied per Ingress rule.","title":"Canary"},
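As a sketch, a canary Ingress that receives roughly 30% of traffic for an existing production host; the weight, host and service names are assumptions for illustration:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: canary-ingress
  annotations:
    # Mark this Ingress as the canary for the matching production Ingress.
    nginx.ingress.kubernetes.io/canary: \"true\"
    # Route roughly 30% of requests to the canary service (assumed weight).
    nginx.ingress.kubernetes.io/canary-weight: \"30\"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc-canary
          servicePort: 80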
{"location":"user-guide/nginx-configuration/annotations/#rewrite","text":"In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. Set the annotation nginx.ingress.kubernetes.io/rewrite-target to the path expected by the service. If the Application Root is exposed in a different path and needs to be redirected, set the annotation nginx.ingress.kubernetes.io/app-root to redirect requests for / . Example Please check the rewrite example.","title":"Rewrite"},{"location":"user-guide/nginx-configuration/annotations/#session-affinity","text":"The annotation nginx.ingress.kubernetes.io/affinity enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server. The only affinity type available for NGINX is cookie . Attention If more than one Ingress is defined for a host and at least one Ingress uses nginx.ingress.kubernetes.io/affinity: cookie , then only paths on the Ingress using nginx.ingress.kubernetes.io/affinity will use session cookie affinity. All paths defined on other Ingresses for the host will be load balanced through the random selection of a backend server. Example Please check the affinity example.","title":"Session Affinity"},{"location":"user-guide/nginx-configuration/annotations/#cookie-affinity","text":"If you use the cookie affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation nginx.ingress.kubernetes.io/session-cookie-name . The default is to create a cookie named 'INGRESSCOOKIE'. The NGINX annotation nginx.ingress.kubernetes.io/session-cookie-path defines the path that will be set on the cookie. This is optional unless the annotation nginx.ingress.kubernetes.io/use-regex is set to true; session cookie paths do not support regex.","title":"Cookie affinity"},{"location":"user-guide/nginx-configuration/annotations/#authentication","text":"It is possible to add authentication by adding additional annotations in the Ingress rule. The source of the authentication is a secret that contains usernames and passwords inside the key auth . The annotations are: nginx.ingress.kubernetes.io/auth-type: [basic|digest] Indicates the HTTP Authentication Type: Basic or Digest Access Authentication . nginx.ingress.kubernetes.io/auth-secret: secretName The name of the Secret that contains the usernames and passwords which are granted access to the paths defined in the Ingress rules. This annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace. nginx.ingress.kubernetes.io/auth-realm: \"realm string\" Example Please check the auth example.","title":"Authentication"},
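A sketch of basic authentication wired up through these annotations; it assumes a Secret named basic-auth already exists in the same namespace with an htpasswd file under the key auth:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: auth-ingress
  annotations:
    # Use HTTP Basic Authentication.
    nginx.ingress.kubernetes.io/auth-type: basic
    # Secret (assumed name, same namespace) holding the usernames and passwords under the key auth.
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # Message shown in the browser's authentication prompt.
    nginx.ingress.kubernetes.io/auth-realm: \"Authentication Required\"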
nginx.ingress.kubernetes.io/auth-realm: \"realm string\" Example Please check the auth example.","title":"Authentication"},{"location":"user-guide/nginx-configuration/annotations/#custom-nginx-upstream-hashing","text":"NGINX supports load balancing by client-server mapping based on consistent hashing for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The ketama consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes. There is a special mode of upstream hashing called subset. In this mode, upstream servers are grouped into subsets, and stickiness works by mapping keys to a subset instead of individual upstream servers. Specific server is chosen uniformly at random from the selected sticky subset. It provides a balance between stickiness and load distribution. To enable consistent hashing for a backend: nginx.ingress.kubernetes.io/upstream-hash-by : the nginx variable, text value or any combination thereof to use for consistent hashing. For example nginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri\" to consistently hash upstream requests by the current request URI. \"subset\" hashing can be enabled setting nginx.ingress.kubernetes.io/upstream-hash-by-subset : \"true\". This maps requests to subset of nodes instead of a single one. upstream-hash-by-subset-size determines the size of each subset (default 3). Please check the chashsubset example.","title":"Custom NGINX upstream hashing"},{"location":"user-guide/nginx-configuration/annotations/#custom-nginx-load-balancing","text":"This is similar to load-balance in ConfigMap , but configures load balancing algorithm per ingress. Note that nginx.ingress.kubernetes.io/upstream-hash-by takes preference over this. If this and nginx.ingress.kubernetes.io/upstream-hash-by are not set then we fallback to using globally configured load balancing algorithm.","title":"Custom NGINX load balancing"},{"location":"user-guide/nginx-configuration/annotations/#custom-nginx-upstream-vhost","text":"This configuration setting allows you to control the value for host in the following statement: proxy_set_header Host $host , which forms part of the location block. This is useful if you need to call the upstream server by something other than $host .","title":"Custom NGINX upstream vhost"},{"location":"user-guide/nginx-configuration/annotations/#client-certificate-authentication","text":"It is possible to enable Client Certificate Authentication using additional annotations in Ingress Rule. The annotations are: nginx.ingress.kubernetes.io/auth-tls-secret: secretName : The name of the Secret that contains the full Certificate Authority chain ca.crt that is enabled to authenticate against this Ingress. This annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace. nginx.ingress.kubernetes.io/auth-tls-verify-depth : The validation depth between the provided client certificate and the Certification Authority chain. nginx.ingress.kubernetes.io/auth-tls-verify-client : Enables verification of client certificates. 
nginx.ingress.kubernetes.io/auth-tls-error-page : The URL/page the user should be redirected to in case of a certificate authentication error. nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream : Indicates if the received certificates should be passed or not to the upstream server. By default this is disabled. Example Please check the client-certs example. Attention TLS with Client Authentication is not possible in Cloudflare and might result in unexpected behavior. Cloudflare only allows Authenticated Origin Pulls and is required to use their own certificate: https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/ Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial: https://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls","title":"Client Certificate Authentication"},{"location":"user-guide/nginx-configuration/annotations/#configuration-snippet","text":"Using this annotation you can add additional configuration to the NGINX location. For example: nginx.ingress.kubernetes.io/configuration-snippet : | more_set_headers \"Request-Id: $req_id\";","title":"Configuration snippet"},{"location":"user-guide/nginx-configuration/annotations/#custom-http-errors","text":"Like the custom-http-errors value in the ConfigMap, this annotation will set NGINX proxy-intercept-errors , but only for the NGINX location associated with this ingress. If a default backend annotation is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend). Different ingresses can specify different sets of error codes. Even if multiple ingress objects share the same hostname, this annotation can be used to intercept different error codes for each ingress (for example, different error codes to be intercepted for different paths on the same hostname, if each path is on a different ingress). If custom-http-errors is also specified globally, the error values specified in this annotation will override the global value for the given ingress' hostname and path. Example usage: nginx.ingress.kubernetes.io/custom-http-errors: \"404,415\"","title":"Custom HTTP Errors"},{"location":"user-guide/nginx-configuration/annotations/#default-backend","text":"This annotation is of the form nginx.ingress.kubernetes.io/default-backend: <svc name> to specify a custom default backend. This is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. This service will handle the response when the service in the Ingress rule does not have active endpoints. It will also handle the error responses if both this annotation and the custom-http-errors annotation are set.","title":"Default Backend"},{"location":"user-guide/nginx-configuration/annotations/#enable-cors","text":"To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation nginx.ingress.kubernetes.io/enable-cors: \"true\" . This will add a section in the server location enabling this functionality. CORS can be controlled with the following annotations: nginx.ingress.kubernetes.io/cors-allow-methods controls which methods are accepted. This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case). Default: GET, PUT, POST, DELETE, PATCH, OPTIONS Example: nginx.ingress.kubernetes.io/cors-allow-methods: \"PUT, GET, POST, OPTIONS\" nginx.ingress.kubernetes.io/cors-allow-headers controls which headers are accepted. This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -. Default: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization Example: nginx.ingress.kubernetes.io/cors-allow-headers: \"X-Forwarded-For, X-app123-XPTO\" nginx.ingress.kubernetes.io/cors-allow-origin controls the accepted origin for CORS. This is a single field value, with the following format: http(s)://origin-site.com or http(s)://origin-site.com:port Default: * Example: nginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com:4443\" nginx.ingress.kubernetes.io/cors-allow-credentials controls if credentials can be passed during CORS operations. Default: true Example: nginx.ingress.kubernetes.io/cors-allow-credentials: \"false\" nginx.ingress.kubernetes.io/cors-max-age controls how long preflight requests can be cached. Default: 1728000 Example: nginx.ingress.kubernetes.io/cors-max-age: 600 Note For more information please see https://enable-cors.org","title":"Enable CORS"},
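For instance, a sketch enabling CORS with a restricted origin, reusing the example values documented above (only the metadata block is shown):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cors-ingress
  annotations:
    # Turn on CORS handling for this Ingress.
    nginx.ingress.kubernetes.io/enable-cors: \"true\"
    # Only accept cross-origin requests from this site (assumed origin).
    nginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com:4443\"
    nginx.ingress.kubernetes.io/cors-allow-methods: \"PUT, GET, POST, OPTIONS\"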
{"location":"user-guide/nginx-configuration/annotations/#http2-push-preload","text":"Enables automatic conversion of preload links specified in the \u201cLink\u201d response header fields into push requests. Example nginx.ingress.kubernetes.io/http2-push-preload: \"true\"","title":"HTTP2 Push Preload."},{"location":"user-guide/nginx-configuration/annotations/#server-alias","text":"To add Server Aliases to an Ingress rule add the annotation nginx.ingress.kubernetes.io/server-alias: \"<alias>\" . This will create a server with the same configuration, but a different server_name , than the provided host. Note A server-alias name cannot conflict with the hostname of an existing server. If it does, the server-alias annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take precedence over the alias configuration. For more information please see the server_name documentation .","title":"Server Alias"},{"location":"user-guide/nginx-configuration/annotations/#server-snippet","text":"Using the annotation nginx.ingress.kubernetes.io/server-snippet it is possible to add custom configuration in the server configuration block. apiVersion : extensions/v1beta1 kind : Ingress metadata : annotations : nginx.ingress.kubernetes.io/server-snippet : | set $agentflag 0; if ($http_user_agent ~* \"(Mobile)\" ){ set $agentflag 1; } if ( $agentflag = 1 ) { return 301 https://m.example.com; } Attention This annotation can be used only once per host.","title":"Server snippet"},{"location":"user-guide/nginx-configuration/annotations/#client-body-buffer-size","text":"Sets the buffer size for reading the client request body per location. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, the buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is applied to each location provided in the ingress rule. Note The annotation value must be given in a format understood by Nginx. Example nginx.ingress.kubernetes.io/client-body-buffer-size: \"1000\" # 1000 bytes nginx.ingress.kubernetes.io/client-body-buffer-size: 1k # 1 kilobyte nginx.ingress.kubernetes.io/client-body-buffer-size: 1K # 1 kilobyte nginx.ingress.kubernetes.io/client-body-buffer-size: 1m # 1 megabyte nginx.ingress.kubernetes.io/client-body-buffer-size: 1M # 1 megabyte For more information please see http://nginx.org","title":"Client Body Buffer Size"},
{"location":"user-guide/nginx-configuration/annotations/#external-authentication","text":"To use an existing service that provides authentication, the Ingress rule can be annotated with nginx.ingress.kubernetes.io/auth-url to indicate the URL where the HTTP request should be sent. nginx.ingress.kubernetes.io/auth-url : \"URL to the authentication service\" Additionally it is possible to set: nginx.ingress.kubernetes.io/auth-method : to specify the HTTP method to use. nginx.ingress.kubernetes.io/auth-signin : to specify the location of the error page. nginx.ingress.kubernetes.io/auth-response-headers : to specify headers to pass to the backend once the authentication request completes. nginx.ingress.kubernetes.io/auth-request-redirect : to specify the X-Auth-Request-Redirect header value. nginx.ingress.kubernetes.io/auth-snippet : to specify a custom snippet to use with external authentication, e.g. nginx.ingress.kubernetes.io/auth-url : http://foo.com/external-auth nginx.ingress.kubernetes.io/auth-snippet : | proxy_set_header Foo-Header 42; Note: nginx.ingress.kubernetes.io/auth-snippet is an optional annotation. However, it may only be used in conjunction with nginx.ingress.kubernetes.io/auth-url and will be ignored if nginx.ingress.kubernetes.io/auth-url is not set. Example Please check the external-auth example.","title":"External Authentication"},{"location":"user-guide/nginx-configuration/annotations/#rate-limiting","text":"These annotations define limits on the connections and requests that can be made by a single client IP address. This can be used to mitigate DDoS attacks . nginx.ingress.kubernetes.io/limit-connections : number of concurrent connections allowed from a single IP address. nginx.ingress.kubernetes.io/limit-rps : number of requests that may be accepted from a given IP each second. nginx.ingress.kubernetes.io/limit-rpm : number of requests that may be accepted from a given IP each minute. nginx.ingress.kubernetes.io/limit-rate-after : sets the initial amount after which the further transmission of a response to a client will be rate limited. nginx.ingress.kubernetes.io/limit-rate : rate, in bytes per second, at which responses are transmitted to a client. You can specify the client IP source ranges to be excluded from rate-limiting through the nginx.ingress.kubernetes.io/limit-whitelist annotation. The value is a comma separated list of CIDRs. If you specify multiple annotations in a single Ingress rule, limit-rpm takes precedence, followed by limit-rps . The annotations nginx.ingress.kubernetes.io/limit-rate and nginx.ingress.kubernetes.io/limit-rate-after define a limit on the rate of response transmission to a client. The rate is specified in bytes per second. A value of zero disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice the specified limit. To configure this setting globally for all Ingress rules, the limit-rate-after and limit-rate values may be set in the NGINX ConfigMap .
A value set in an Ingress annotation overrides the global setting.","title":"Rate limiting"},{"location":"user-guide/nginx-configuration/annotations/#permanent-redirect","text":"This annotation allows you to return a permanent redirect instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com would redirect everything to Google.","title":"Permanent Redirect"},{"location":"user-guide/nginx-configuration/annotations/#permanent-redirect-code","text":"This annotation allows you to modify the status code used for permanent redirects. For example nginx.ingress.kubernetes.io/permanent-redirect-code: '308' would return your permanent redirect with a 308.","title":"Permanent Redirect Code"},{"location":"user-guide/nginx-configuration/annotations/#temporal-redirect","text":"This annotation allows you to return a temporal redirect (Return Code 302) instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/temporal-redirect: https://www.google.com would redirect everything to Google with a Return Code of 302 (Moved Temporarily).","title":"Temporal Redirect"},{"location":"user-guide/nginx-configuration/annotations/#ssl-passthrough","text":"The annotation nginx.ingress.kubernetes.io/ssl-passthrough instructs the controller to send TLS connections directly to the backend instead of letting NGINX decrypt the communication. See also TLS/HTTPS in the User guide. Note SSL Passthrough is disabled by default and requires starting the controller with the --enable-ssl-passthrough flag. Attention Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on layer 7 (HTTP), using SSL Passthrough invalidates all the other annotations set on an Ingress object.","title":"SSL Passthrough"},{"location":"user-guide/nginx-configuration/annotations/#service-upstream","text":"By default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration. The nginx.ingress.kubernetes.io/service-upstream annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port. This can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. See issue #257 .","title":"Service Upstream"},{"location":"user-guide/nginx-configuration/annotations/#known-issues","text":"If the service-upstream annotation is specified the following things should be taken into consideration: Sticky Sessions will not work as only round-robin load balancing is supported. The proxy_next_upstream directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.","title":"Known Issues"},{"location":"user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect","text":"By default the controller redirects (308) to HTTPS if TLS is enabled for that ingress. If you want to disable this behavior globally, you can use ssl-redirect: \"false\" in the NGINX ConfigMap . To configure this feature for specific ingress resources, you can use the nginx.ingress.kubernetes.io/ssl-redirect: \"false\" annotation in the particular resource. When using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available.
This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: \"true\" annotation in the particular resource.","title":"Server-side HTTPS enforcement through redirect"},{"location":"user-guide/nginx-configuration/annotations/#redirect-fromto-www","text":"In some scenarios it is required to redirect from www.domain.com to domain.com or vice versa. To enable this feature use the annotation nginx.ingress.kubernetes.io/from-to-www-redirect: \"true\" . Attention If at some point a new Ingress is created with a host equal to one of the options (like domain.com ) the annotation will be omitted. Attention For HTTPS to HTTPS redirects it is mandatory that the SSL certificate defined in the Secret, located in the TLS section of the Ingress, contains both FQDNs in the common name of the certificate.","title":"Redirect from/to www"},{"location":"user-guide/nginx-configuration/annotations/#whitelist-source-range","text":"You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma separated list of CIDRs , e.g. 10.0.0.0/24,172.10.0.1 . To configure this setting globally for all Ingress rules, the whitelist-source-range value may be set in the NGINX ConfigMap . Note Adding an annotation to an Ingress rule overrides any global restriction.","title":"Whitelist source range"},{"location":"user-guide/nginx-configuration/annotations/#custom-timeouts","text":"Using the configuration configmap it is possible to set the default global timeout for connections to the upstream servers. In some scenarios it is required to have different values. To allow this, we provide annotations that allow this customization: nginx.ingress.kubernetes.io/proxy-connect-timeout nginx.ingress.kubernetes.io/proxy-send-timeout nginx.ingress.kubernetes.io/proxy-read-timeout nginx.ingress.kubernetes.io/proxy-next-upstream nginx.ingress.kubernetes.io/proxy-next-upstream-tries nginx.ingress.kubernetes.io/proxy-request-buffering","title":"Custom timeouts"},{"location":"user-guide/nginx-configuration/annotations/#proxy-redirect","text":"With the annotations nginx.ingress.kubernetes.io/proxy-redirect-from and nginx.ingress.kubernetes.io/proxy-redirect-to it is possible to set the text that should be changed in the Location and Refresh header fields of a proxied server response. Setting \"off\" or \"default\" in the annotation nginx.ingress.kubernetes.io/proxy-redirect-from disables nginx.ingress.kubernetes.io/proxy-redirect-to , otherwise, both annotations must be used in unison. Note that each annotation must be a string without spaces. By default the value of each annotation is \"off\".","title":"Proxy redirect"},{"location":"user-guide/nginx-configuration/annotations/#custom-max-body-size","text":"For NGINX, a 413 error will be returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter client_max_body_size . To configure this setting globally for all Ingress rules, the proxy-body-size value may be set in the NGINX ConfigMap . To use custom values in an Ingress rule define this annotation: nginx.ingress.kubernetes.io/proxy-body-size : 8m","title":"Custom max body size"},{"location":"user-guide/nginx-configuration/annotations/#proxy-cookie-domain","text":"Sets a text that should be changed in the domain attribute of the \"Set-Cookie\" header fields of a proxied server response.
To configure this setting globally for all Ingress rules, the proxy-cookie-domain value may be set in the NGINX ConfigMap .","title":"Proxy cookie domain"},{"location":"user-guide/nginx-configuration/annotations/#proxy-cookie-path","text":"Sets a text that should be changed in the path attribute of the \"Set-Cookie\" header fields of a proxied server response. To configure this setting globally for all Ingress rules, the proxy-cookie-path value may be set in the NGINX ConfigMap .","title":"Proxy cookie path"},{"location":"user-guide/nginx-configuration/annotations/#proxy-buffering","text":"Enable or disable proxy buffering proxy_buffering . By default proxy buffering is disabled in the NGINX config. To configure this setting globally for all Ingress rules, the proxy-buffering value may be set in the NGINX ConfigMap . To use custom values in an Ingress rule define this annotation: nginx.ingress.kubernetes.io/proxy-buffering : \"on\"","title":"Proxy buffering"},{"location":"user-guide/nginx-configuration/annotations/#proxy-buffers-number","text":"Sets the number of the buffers in proxy_buffers used for reading the first part of the response received from the proxied server. By default the number of proxy buffers is set to 4. To configure this setting globally, set proxy-buffers-number in the NGINX ConfigMap . To use custom values in an Ingress rule, define this annotation: nginx.ingress.kubernetes.io/proxy-buffers-number : \"4\"","title":"Proxy buffers Number"},{"location":"user-guide/nginx-configuration/annotations/#proxy-buffer-size","text":"Sets the size of the buffer proxy_buffer_size used for reading the first part of the response received from the proxied server. By default the proxy buffer size is set to \"4k\". To configure this setting globally, set proxy-buffer-size in the NGINX ConfigMap . To use custom values in an Ingress rule, define this annotation: nginx.ingress.kubernetes.io/proxy-buffer-size : \"8k\"","title":"Proxy buffer size"},{"location":"user-guide/nginx-configuration/annotations/#ssl-ciphers","text":"Specifies the enabled ciphers . Using this annotation will set the ssl_ciphers directive at the server level. This configuration is active for all the paths in the host. nginx.ingress.kubernetes.io/ssl-ciphers : \"ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP\"","title":"SSL ciphers"},{"location":"user-guide/nginx-configuration/annotations/#connection-proxy-header","text":"Using this annotation will override the default connection header set by NGINX. To use custom values in an Ingress rule, define the annotation: nginx.ingress.kubernetes.io/connection-proxy-header : \"keep-alive\"","title":"Connection proxy header"},{"location":"user-guide/nginx-configuration/annotations/#enable-access-log","text":"Access logs are enabled by default, but in some scenarios it might be required to disable them for a given ingress. To do this, use the annotation: nginx.ingress.kubernetes.io/enable-access-log : \"false\"","title":"Enable Access Log"},{"location":"user-guide/nginx-configuration/annotations/#enable-rewrite-log","text":"Rewrite logs are not enabled by default. In some scenarios it could be required to enable NGINX rewrite logs. Note that rewrite logs are sent to the error_log file at the notice level.
To enable this feature use the annotation: nginx.ingress.kubernetes.io/enable-rewrite-log : \"true\"","title":"Enable Rewrite Log"},{"location":"user-guide/nginx-configuration/annotations/#x-forwarded-prefix-header","text":"To add the non-standard X-Forwarded-Prefix header to the upstream request with a string value, the following annotation can be used: nginx.ingress.kubernetes.io/x-forwarded-prefix : \"/path\"","title":"X-Forwarded-Prefix Header"},{"location":"user-guide/nginx-configuration/annotations/#lua-resty-waf","text":"Using lua-resty-waf-* annotations we can enable and control the lua-resty-waf Web Application Firewall per location. The following configuration will enable the WAF for the paths defined in the corresponding ingress: nginx.ingress.kubernetes.io/lua-resty-waf : \"active\" In order to run it in debugging mode you can set nginx.ingress.kubernetes.io/lua-resty-waf-debug to \"true\" in addition to the above configuration. The other possible values for nginx.ingress.kubernetes.io/lua-resty-waf are inactive and simulate . In inactive mode the WAF won't do anything, whereas in simulate mode it will log a warning message if there's a matching WAF rule for a given request. This is useful to debug a rule and eliminate possible false positives before fully deploying it. lua-resty-waf comes with a predefined set of rules https://github.com/p0pr0ck5/lua-resty-waf/tree/84b4f40362500dd0cb98b9e71b5875cb1a40f1ad/rules that covers the ModSecurity CRS. You can use nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets to ignore a subset of those rulesets. For example: nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets : \"41000_sqli, 42000_xss\" will ignore the two mentioned rulesets. It is also possible to configure custom WAF rules per ingress using the nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules annotation. For example, the following snippet will configure a WAF rule to deny requests whose query string contains the word foo : nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules : '[=[ { \"access\": [ { \"actions\": { \"disrupt\" : \"DENY\" }, \"id\": 10001, \"msg\": \"my custom rule\", \"operator\": \"STR_CONTAINS\", \"pattern\": \"foo\", \"vars\": [ { \"parse\": [ \"values\", 1 ], \"type\": \"REQUEST_ARGS\" } ] } ], \"body_filter\": [], \"header_filter\":[] } ]=]' The default allowed content types are \"text/html\", \"text/json\" and \"application/json\". To allow all content types, enable the following annotation: nginx.ingress.kubernetes.io/lua-resty-waf-allow-unknown-content-types : \"true\" The default score threshold of lua-resty-waf is 5, which is usually triggered by hitting 2 default rules. You can modify the score threshold with the following annotation: nginx.ingress.kubernetes.io/lua-resty-waf-score-threshold : \"10\" When HTTPS is enabled on the endpoint, resty-lua will return a 500 error while processing \"multipart\" contents ( Reference for this issue ). This option defaults to \"true\"; you may set the following annotation as a workaround: nginx.ingress.kubernetes.io/lua-resty-waf-process-multipart-body : \"false\" For details on how to write WAF rules, please refer to https://github.com/p0pr0ck5/lua-resty-waf .","title":"Lua Resty WAF"},{"location":"user-guide/nginx-configuration/annotations/#modsecurity","text":"ModSecurity is an open source web application firewall. It can be enabled for a particular set of ingress locations. The ModSecurity module must first be enabled by enabling ModSecurity in the ConfigMap .
Note that this will enable ModSecurity for all paths, and each path must be disabled manually. It can be enabled using the following annotation: nginx.ingress.kubernetes.io/enable-modsecurity : \"true\" ModSecurity will run in \"Detection-Only\" mode using the recommended configuration . You can enable the OWASP Core Rule Set by setting the following annotation: nginx.ingress.kubernetes.io/enable-owasp-core-rules : \"true\" You can pass transactionIDs from nginx by setting up the following: nginx.ingress.kubernetes.io/modsecurity-transaction-id : \"$request_id\" You can also add your own set of modsecurity rules via a snippet: nginx.ingress.kubernetes.io/modsecurity-snippet : | SecRuleEngine On SecDebugLog /tmp/modsec_debug.log Note: If you use both enable-owasp-core-rules and modsecurity-snippet annotations together, only the modsecurity-snippet will take effect. If you wish to include the OWASP Core Rule Set or recommended configuration simply use the include statement: nginx.ingress.kubernetes.io/modsecurity-snippet : | Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf Include /etc/nginx/modsecurity/modsecurity.conf","title":"ModSecurity"},{"location":"user-guide/nginx-configuration/annotations/#influxdb","text":"Using influxdb-* annotations we can monitor requests passing through a Location by sending them to an InfluxDB backend exposing the UDP socket using the nginx-influxdb-module . nginx.ingress.kubernetes.io/enable-influxdb : \"true\" nginx.ingress.kubernetes.io/influxdb-measurement : \"nginx-reqs\" nginx.ingress.kubernetes.io/influxdb-port : \"8089\" nginx.ingress.kubernetes.io/influxdb-host : \"127.0.0.1\" nginx.ingress.kubernetes.io/influxdb-server-name : \"nginx-ingress\" For the influxdb-host parameter you have two options: Use an InfluxDB server configured with the UDP protocol enabled. Deploy Telegraf as a sidecar proxy to the Ingress controller, configured to listen on UDP with the socket listener input and to write using any of the output plugins like InfluxDB, Apache Kafka, Prometheus, etc. (recommended) It's important to remember that there's no DNS resolver at this stage so you will have to configure an IP address for nginx.ingress.kubernetes.io/influxdb-host . If you deploy Influx or Telegraf as a sidecar (another container in the same pod) this becomes straightforward since you can directly use 127.0.0.1 .","title":"InfluxDB"},{"location":"user-guide/nginx-configuration/annotations/#backend-protocol","text":"Using backend-protocol annotations it is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid values: HTTP, HTTPS, GRPC, GRPCS and AJP. By default NGINX uses HTTP . Example: nginx.ingress.kubernetes.io/backend-protocol : \"HTTPS\"","title":"Backend Protocol"},{"location":"user-guide/nginx-configuration/annotations/#use-regex","text":"Attention When using this annotation with the NGINX annotation nginx.ingress.kubernetes.io/affinity of type cookie , nginx.ingress.kubernetes.io/session-cookie-path must also be set; session cookie paths do not support regex. Using the nginx.ingress.kubernetes.io/use-regex annotation will indicate whether or not the paths defined on an Ingress use regular expressions. The default value is false .
The following will indicate that regular expression paths are being used: nginx.ingress.kubernetes.io/use-regex : \"true\" The following will indicate that regular expression paths are not being used: nginx.ingress.kubernetes.io/use-regex : \"false\" When this annotation is set to true , the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on. Additionally, if the rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on. Please read about ingress path matching before using this modifier.","title":"Use Regex"},{"location":"user-guide/nginx-configuration/annotations/#satisfy","text":"By default, a request would need to satisfy all authentication requirements in order to be allowed. By using this annotation, requests that satisfy either any or all authentication requirements are allowed, based on the configuration value. nginx.ingress.kubernetes.io/satisfy : \"any\"","title":"Satisfy"},{"location":"user-guide/nginx-configuration/configmap/","text":"ConfigMaps \u00b6 ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system components for the nginx-controller. In order to overwrite nginx-controller configuration values as seen in config.go , you can add key-value pairs to the data section of the config-map. For example: data : map-hash-bucket-size : \"128\" ssl-protocols : SSLv2 Important The keys and values in a ConfigMap can only be strings. This means that if we want a value with boolean or numeric meaning we need to quote the values, like \"true\" or \"false\" and \"100\". \"Slice\" types (defined below as []string or []int ) can be provided as a comma-delimited string.
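For instance, a minimal sketch of a complete ConfigMap manifest; the metadata values are assumptions and must match the --configmap argument passed to the controller:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed name
  namespace: ingress-nginx    # assumed namespace
data:
  # All values are strings, so booleans and numbers are quoted.
  map-hash-bucket-size: \"128\"
  use-gzip: \"true\"
  proxy-body-size: \"8m\"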
Configuration options \u00b6 The following table shows each configuration option's name, type, and default value: name type default
add-headers string \"\"
allow-backend-server-header bool \"false\"
hide-headers string array empty
access-log-params string \"\"
access-log-path string \"/var/log/nginx/access.log\"
enable-access-log-for-default-backend bool \"false\"
error-log-path string \"/var/log/nginx/error.log\"
enable-dynamic-tls-records bool \"true\"
enable-modsecurity bool \"false\"
enable-owasp-modsecurity-crs bool \"false\"
client-header-buffer-size string \"1k\"
client-header-timeout int 60
client-body-buffer-size string \"8k\"
client-body-timeout int 60
disable-access-log bool false
disable-ipv6 bool false
disable-ipv6-dns bool false
enable-underscores-in-headers bool false
ignore-invalid-headers bool true
retry-non-idempotent bool \"false\"
error-log-level string \"notice\"
http2-max-field-size string \"4k\"
http2-max-header-size string \"16k\"
http2-max-requests int 1000
hsts bool \"true\"
hsts-include-subdomains bool \"true\"
hsts-max-age string \"15724800\"
hsts-preload bool \"false\"
keep-alive int 75
keep-alive-requests int 100
large-client-header-buffers string \"4 8k\"
log-format-escape-json bool \"false\"
log-format-upstream string %v - [ $the_real_ip ] - $remote_user [ $time_local ] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $request_length $request_time [ $proxy_upstream_name ] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id
log-format-stream string [$time_local] $protocol $status $bytes_sent $bytes_received $session_time
enable-multi-accept bool \"true\"
max-worker-connections int 16384
max-worker-open-files int 0
map-hash-bucket-size int 64
nginx-status-ipv4-whitelist []string \"127.0.0.1\"
nginx-status-ipv6-whitelist []string \"::1\"
proxy-real-ip-cidr []string \"0.0.0.0/0\"
proxy-set-headers string \"\"
server-name-hash-max-size int 1024
server-name-hash-bucket-size int
proxy-headers-hash-max-size int 512
proxy-headers-hash-bucket-size int 64
reuse-port bool \"true\"
server-tokens bool \"true\"
ssl-ciphers string \"ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\"
ssl-ecdh-curve string \"auto\"
ssl-dh-param string \"\"
ssl-protocols string \"TLSv1.2\"
ssl-session-cache bool \"true\"
ssl-session-cache-size string \"10m\"
ssl-session-tickets bool \"true\"
ssl-session-ticket-key string
ssl-session-timeout string \"10m\"
ssl-buffer-size string \"4k\"
use-proxy-protocol bool \"false\"
proxy-protocol-header-timeout string \"5s\"
use-gzip bool \"true\"
use-geoip bool \"true\"
use-geoip2 bool \"false\"
enable-brotli bool \"false\"
brotli-level int 4
brotli-types string \"application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\"
use-http2 bool \"true\"
gzip-level int 5
gzip-types string \"application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\"
application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\" worker-processes string worker-cpu-affinity string \"\" worker-shutdown-timeout string \"10s\" load-balance string \"round_robin\" variables-hash-bucket-size int 128 variables-hash-max-size int 2048 upstream-keepalive-connections int 32 upstream-keepalive-timeout int 60 upstream-keepalive-requests int 100 limit-conn-zone-variable string \"$binary_remote_addr\" proxy-stream-timeout string \"600s\" proxy-stream-responses int 1 bind-address []string \"\" use-forwarded-headers bool \"false\" forwarded-for-header string \"X-Forwarded-For\" compute-full-forwarded-for bool \"false\" proxy-add-original-uri-header bool \"true\" generate-request-id bool \"true\" enable-opentracing bool \"false\" zipkin-collector-host string \"\" zipkin-collector-port int 9411 zipkin-service-name string \"nginx\" zipkin-sample-rate float 1.0 jaeger-collector-host string \"\" jaeger-collector-port int 6831 jaeger-service-name string \"nginx\" jaeger-sampler-type string \"const\" jaeger-sampler-param string \"1\" main-snippet string \"\" http-snippet string \"\" server-snippet string \"\" location-snippet string \"\" custom-http-errors []int []int{} proxy-body-size string \"1m\" proxy-connect-timeout int 5 proxy-read-timeout int 60 proxy-send-timeout int 60 proxy-buffers-number int 4 proxy-buffer-size string \"4k\" proxy-cookie-path string \"off\" proxy-cookie-domain string \"off\" proxy-next-upstream string \"error timeout\" proxy-next-upstream-tries int 3 proxy-redirect-from string \"off\" proxy-request-buffering string \"on\" ssl-redirect bool \"true\" whitelist-source-range []string []string{} skip-access-log-urls []string []string{} limit-rate int 0 limit-rate-after int 0 http-redirect-code int 308 proxy-buffering string \"off\" limit-req-status-code int 503 limit-conn-status-code int 503 no-tls-redirect-locations string \"/.well-known/acme-challenge\" no-auth-locations string \"/.well-known/acme-challenge\" block-cidrs []string \"\" block-user-agents []string \"\" block-referers []string \"\" add-headers \u00b6 Sets custom headers from a named configmap before sending traffic to the client. See proxy-set-headers . example allow-backend-server-header \u00b6 Enables the return of the header Server from the backend instead of the generic nginx string. default: is disabled hide-headers \u00b6 Sets additional headers that will not be passed from the upstream server to the client response. default: empty References: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header access-log-params \u00b6 Additional params for access_log. For example, buffer=16k, gzip, flush=1m References: http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log access-log-path \u00b6 Access log path. Goes to /var/log/nginx/access.log by default. Note: the file /var/log/nginx/access.log is a symlink to /dev/stdout enable-access-log-for-default-backend \u00b6 Enables access logging for the default backend. default: is disabled. error-log-path \u00b6 Error log path. Goes to /var/log/nginx/error.log by default. Note: the file /var/log/nginx/error.log is a symlink to /dev/stderr References: http://nginx.org/en/docs/ngx_core_module.html#error_log enable-dynamic-tls-records \u00b6 Enables dynamically sized TLS records to improve time-to-first-byte. default: is enabled References: https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency enable-modsecurity \u00b6 Enables the modsecurity module for NGINX. 
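Since add-headers (like proxy-set-headers below) points at a separate configmap in namespace/name format, a sketch might look like this; the custom-headers name and the header itself are illustrative:

```yaml
# Illustrative ConfigMap holding the headers to add to every client response.
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  X-Using-Nginx-Controller: "true"
---
# The controller ConfigMap references it in namespace/name format.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  add-headers: "ingress-nginx/custom-headers"
```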
default: is disabled enable-owasp-modsecurity-crs \u00b6 Enables the OWASP ModSecurity Core Rule Set (CRS). default: is disabled client-header-buffer-size \u00b6 Allows configuring a custom buffer size for reading client request headers. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size client-header-timeout \u00b6 Defines a timeout for reading client request headers, in seconds. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout client-body-buffer-size \u00b6 Sets the buffer size for reading the client request body. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size client-body-timeout \u00b6 Defines a timeout for reading the client request body, in seconds. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout disable-access-log \u00b6 Disables the Access Log from the entire Ingress Controller. default: '\"false\"' References: http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log disable-ipv6 \u00b6 Disables listening on IPv6. default: is disabled disable-ipv6-dns \u00b6 Disables IPv6 for the nginx DNS resolver. default: is disabled enable-underscores-in-headers \u00b6 Enables underscores in header names. default: is disabled ignore-invalid-headers \u00b6 Sets whether header fields with invalid names should be ignored. default: is enabled retry-non-idempotent \u00b6 Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value \"true\". error-log-level \u00b6 Configures the logging level of errors. Possible values are debug , info , notice , warn , error , crit , alert and emerg , listed in order of increasing severity. References: http://nginx.org/en/docs/ngx_core_module.html#error_log http2-max-field-size \u00b6 Limits the maximum size of an HPACK-compressed request header field. References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size http2-max-header-size \u00b6 Limits the maximum size of the entire request header list after HPACK decompression. References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size http2-max-requests \u00b6 Sets the maximum number of requests (including push requests) that can be served through one HTTP/2 connection, after which the next client request will lead to connection closing and the need of establishing a new connection. References: http://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests hsts \u00b6 Enables or disables the HSTS header in servers running SSL. HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be accessed using HTTPS, instead of HTTP. It provides protection against protocol downgrade attacks and cookie theft. References: https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server hsts-include-subdomains \u00b6 Enables or disables the use of HSTS in all the subdomains of the server-name. hsts-max-age \u00b6 Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS. 
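As a sketch of how these HSTS keys look in the controller ConfigMap's data section, with every value quoted as required; the one-year max-age is an illustrative choice, not the documented default of "15724800":

```yaml
data:
  hsts: "true"
  hsts-include-subdomains: "true"
  # 31536000 seconds = one year (illustrative; the default is "15724800").
  hsts-max-age: "31536000"
  hsts-preload: "false"
```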
hsts-preload \u00b6 Enables or disables the preload attribute in the HSTS feature (when it is enabled). keep-alive \u00b6 Sets the time during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout keep-alive-requests \u00b6 Sets the maximum number of requests that can be served through one keep-alive connection. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests large-client-header-buffers \u00b6 Sets the maximum number and size of buffers used for reading large client request headers. default: 4 8k References: http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers log-format-escape-json \u00b6 Sets whether the escape parameter uses JSON (\"true\") or default character escaping in variables (\"false\"). log-format-upstream \u00b6 Sets the nginx log format . Example for JSON output: log-format-upstream: '{ \"time\": \"$time_iso8601\", \"remote_addr\": \"$proxy_protocol_addr\",\"x-forward-for\": \"$proxy_add_x_forwarded_for\", \"request_id\": \"$req_id\", \"remote_user\":\"$remote_user\", \"bytes_sent\": $bytes_sent, \"request_time\": $request_time, \"status\":$status, \"vhost\": \"$host\", \"request_proto\": \"$server_protocol\", \"path\": \"$uri\",\"request_query\": \"$args\", \"request_length\": $request_length, \"duration\": $request_time,\"method\": \"$request_method\", \"http_referrer\": \"$http_referer\", \"http_user_agent\":\"$http_user_agent\" }' Please check the log-format for the definition of each field. log-format-stream \u00b6 Sets the nginx stream format . enable-multi-accept \u00b6 If disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections at once. default: true References: http://nginx.org/en/docs/ngx_core_module.html#multi_accept max-worker-connections \u00b6 Sets the maximum number of simultaneous connections that can be opened by each worker process. 0 will use the value of max-worker-open-files . default: 16384 Tip Using 0 in scenarios of high load improves performance at the cost of increasing RAM utilization (even on idle). max-worker-open-files \u00b6 Sets the maximum number of files that can be opened by each worker process. The default of 0 means \"max open files (system's limit) / worker-processes - 1024\". default: 0 map-hash-bucket-size \u00b6 Sets the bucket size for the map variables hash tables . The details of setting up hash tables are provided in a separate document . proxy-real-ip-cidr \u00b6 If use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default IP/network address of your external load balancer. proxy-set-headers \u00b6 Sets custom headers from a named configmap before sending traffic to backends. The value format is namespace/name. See example server-name-hash-max-size \u00b6 Sets the maximum size of the server names hash tables used in server names, map directive\u2019s values, MIME types, names of request header strings, etc. References: http://nginx.org/en/docs/hash.html server-name-hash-bucket-size \u00b6 Sets the size of the bucket for the server names hash tables. References: http://nginx.org/en/docs/hash.html http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size proxy-headers-hash-max-size \u00b6 Sets the maximum size of the proxy headers hash tables. 
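Combining log-format-escape-json with log-format-upstream, a trimmed sketch of the JSON example above as ConfigMap data (the field selection is illustrative):

```yaml
data:
  log-format-escape-json: "true"
  # Subset of the documented example; nginx expands the $variables at log time.
  log-format-upstream: '{"time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr",
    "request_id": "$req_id", "status": $status, "vhost": "$host",
    "path": "$uri", "method": "$request_method"}'
```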
References: http://nginx.org/en/docs/hash.html https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size reuse-port \u00b6 Instructs NGINX to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option), allowing a kernel to distribute incoming connections between worker processes. default: true proxy-headers-hash-bucket-size \u00b6 Sets the size of the bucket for the proxy headers hash tables. References: http://nginx.org/en/docs/hash.html https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size server-tokens \u00b6 Sends the NGINX Server header in responses and displays the NGINX version in error pages. default: is enabled ssl-ciphers \u00b6 Sets the ciphers list to enable. The ciphers are specified in the format understood by the OpenSSL library. The default cipher list is: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256 . The ordering of the cipher suites is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy . Please check the Mozilla SSL Configuration Generator . ssl-ecdh-curve \u00b6 Specifies a curve for ECDHE ciphers. References: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve ssl-dh-param \u00b6 Sets the name of the secret that contains a Diffie-Hellman key to help with \"Perfect Forward Secrecy\". References: https://wiki.openssl.org/index.php/Diffie-Hellman_parameters https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam ssl-protocols \u00b6 Sets the SSL protocols to use. The default is: TLSv1.2 . Please check the result of the configuration using https://ssllabs.com/ssltest/analyze.html or https://testssl.sh . ssl-session-cache \u00b6 Enables or disables the use of a shared SSL cache among worker processes. ssl-session-cache-size \u00b6 Sets the size of the SSL shared session cache between all worker processes. ssl-session-tickets \u00b6 Enables or disables session resumption through TLS session tickets . ssl-session-ticket-key \u00b6 Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string. To create a ticket: openssl rand 80 | openssl enc -A -base64 . By default, a randomly generated key is used. ssl-session-timeout \u00b6 Sets the time during which a client may reuse the session parameters stored in a cache. ssl-buffer-size \u00b6 Sets the size of the SSL buffer used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB). References: https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/ use-proxy-protocol \u00b6 Enables or disables the PROXY protocol to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB). proxy-protocol-header-timeout \u00b6 Sets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection. 
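A sketch of pinning the session ticket key, which may be useful when several controller replicas should resume each other's sessions; the placeholder must be replaced with the output of the command above:

```yaml
data:
  # Generate the value with: openssl rand 80 | openssl enc -A -base64
  ssl-session-ticket-key: "<base64 output of the command above>"
  ssl-session-tickets: "true"
  ssl-session-timeout: "10m"
```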
default: 5s use-gzip \u00b6 Enables or disables compression of HTTP responses using the \"gzip\" module . The default mime type list to compress is: application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component . use-geoip \u00b6 Enables or disables the \"geoip\" module that creates variables with values depending on the client IP address, using the precompiled MaxMind databases. default: true Note: MaxMind legacy databases are discontinued and will not receive updates after 2019-01-02, cf. discontinuation notice . Consider use-geoip2 below. use-geoip2 \u00b6 Enables the geoip2 module for NGINX. default: false enable-brotli \u00b6 Enables or disables compression of HTTP responses using the \"brotli\" module . The default mime type list to compress is: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component . default: is disabled Note: Brotli does not work in Safari < 11. For more information see https://caniuse.com/#feat=brotli brotli-level \u00b6 Sets the Brotli Compression Level that will be used. default: 4 brotli-types \u00b6 Sets the MIME Types that will be compressed on-the-fly by brotli. default: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component use-http2 \u00b6 Enables or disables HTTP/2 support in secure connections. gzip-level \u00b6 Sets the gzip Compression Level that will be used. default: 5 gzip-types \u00b6 Sets the MIME types in addition to \"text/html\" to compress. The special value \"*\" matches any MIME type. Responses with the \"text/html\" type are always compressed if use-gzip is enabled. worker-processes \u00b6 Sets the number of worker processes . The default of \"auto\" means the number of available CPU cores. worker-cpu-affinity \u00b6 Binds worker processes to the sets of CPUs. worker_cpu_affinity . By default worker processes are not bound to any specific CPUs. The value can be: \"\": an empty string indicates that no affinity is applied. cpumask: e.g. 0001 0010 0100 1000 to bind processes to specific CPUs. auto: automatically binds worker processes to available CPUs. worker-shutdown-timeout \u00b6 Sets a timeout for Nginx to wait for workers to gracefully shut down . default: \"10s\" load-balance \u00b6 Sets the algorithm to use for load balancing. The value can either be: round_robin: to use the default round robin load balancer least_conn: to use the least connected method ( note that this is available only in non-dynamic mode: --enable-dynamic-configuration=false ) ip_hash: to use a hash of the server for routing ( note that this is available only in non-dynamic mode: --enable-dynamic-configuration=false , but alternatively you can consider using nginx.ingress.kubernetes.io/upstream-hash-by ) ewma: to use the Peak EWMA method for routing ( implementation ) The default is round_robin . 
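For example, a sketch enabling Brotli and switching the balancing algorithm, using values from the lists above:

```yaml
data:
  enable-brotli: "true"
  brotli-level: "4"
  # Unlike least_conn and ip_hash, ewma does not require
  # --enable-dynamic-configuration=false.
  load-balance: "ewma"
```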
References: http://nginx.org/en/docs/http/load_balancing.html variables-hash-bucket-size \u00b6 Sets the bucket size for the variables hash table. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size variables-hash-max-size \u00b6 Sets the maximum size of the variables hash table. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size upstream-keepalive-connections \u00b6 Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. default: 32 References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive upstream-keepalive-timeout \u00b6 Sets a timeout during which an idle keepalive connection to an upstream server will stay open. default: 60 References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout upstream-keepalive-requests \u00b6 Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed. default: 100 References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_requests limit-conn-zone-variable \u00b6 Sets parameters for a shared memory zone that will keep states for various keys of limit_conn_zone . The default is \"$binary_remote_addr\" , whose size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses. proxy-stream-timeout \u00b6 Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed. References: http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout proxy-stream-responses \u00b6 Sets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used. References: http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses bind-address \u00b6 Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop. use-forwarded-headers \u00b6 If true, NGINX passes the incoming X-Forwarded-* headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers. If false, NGINX ignores incoming X-Forwarded-* headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or it's behind a L3/packet-based load balancer that doesn't alter the source IP in the packets. forwarded-for-header \u00b6 Sets the header field for identifying the originating IP address of a client. default: X-Forwarded-For compute-full-forwarded-for \u00b6 Appends the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies. proxy-add-original-uri-header \u00b6 Adds an X-Original-Uri header with the original request URI to the backend request. generate-request-id \u00b6 Ensures that X-Request-ID is defaulted to a random value, if no X-Request-ID is present in the request. enable-opentracing \u00b6 Enables the nginx Opentracing extension. 
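A sketch for the common case of running behind a trusted L7 load balancer; the CIDR is a placeholder for the address range your balancer actually uses:

```yaml
data:
  # Trust X-Forwarded-* headers set by the upstream L7 load balancer.
  use-forwarded-headers: "true"
  compute-full-forwarded-for: "true"
  # Placeholder: restrict this to the address range of your load balancer.
  proxy-real-ip-cidr: "10.0.0.0/8"
```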
default: is disabled References: https://github.com/opentracing-contrib/nginx-opentracing zipkin-collector-host \u00b6 Specifies the host to use when uploading traces. It must be a valid URL. zipkin-collector-port \u00b6 Specifies the port to use when uploading traces. default: 9411 zipkin-service-name \u00b6 Specifies the service name to use for any traces created. default: nginx zipkin-sample-rate \u00b6 Specifies the sample rate for any traces created. default: 1.0 jaeger-collector-host \u00b6 Specifies the host to use when uploading traces. It must be a valid URL. jaeger-collector-port \u00b6 Specifies the port to use when uploading traces. default: 6831 jaeger-service-name \u00b6 Specifies the service name to use for any traces created. default: nginx jaeger-sampler-type \u00b6 Specifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. default: const jaeger-sampler-param \u00b6 Specifies the argument to be passed to the sampler constructor. Must be a number. For const this should be 0 to never sample and 1 to always sample. default: 1 main-snippet \u00b6 Adds custom configuration to the main section of the nginx configuration. http-snippet \u00b6 Adds custom configuration to the http section of the nginx configuration. server-snippet \u00b6 Adds custom configuration to all the servers in the nginx configuration. location-snippet \u00b6 Adds custom configuration to all the locations in the nginx configuration. custom-http-errors \u00b6 Sets which HTTP codes should be passed for processing with the error_page directive. Setting at least one code also enables proxy_intercept_errors , which is required to process error_page. Example usage: custom-http-errors: 404,415 proxy-body-size \u00b6 Sets the maximum allowed size of the client request body. See NGINX client_max_body_size . proxy-connect-timeout \u00b6 Sets the timeout for establishing a connection with a proxied server . It should be noted that this timeout cannot usually exceed 75 seconds. proxy-read-timeout \u00b6 Sets the timeout in seconds for reading a response from the proxied server . The timeout is set only between two successive read operations, not for the transmission of the whole response. proxy-send-timeout \u00b6 Sets the timeout in seconds for transmitting a request to the proxied server . The timeout is set only between two successive write operations, not for the transmission of the whole request. proxy-buffers-number \u00b6 Sets the number of buffers used for reading the first part of the response received from the proxied server. This part usually contains a small response header. proxy-buffer-size \u00b6 Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header. proxy-cookie-path \u00b6 Sets a text that should be changed in the path attribute of the \u201cSet-Cookie\u201d header fields of a proxied server response. proxy-cookie-domain \u00b6 Sets a text that should be changed in the domain attribute of the \u201cSet-Cookie\u201d header fields of a proxied server response. proxy-next-upstream \u00b6 Specifies in which cases a request should be passed to the next server. proxy-next-upstream-tries \u00b6 Limits the number of possible tries for passing a request to the next server. proxy-redirect-from \u00b6 Sets the original text that should be changed in the \"Location\" and \"Refresh\" header fields of a proxied server response. 
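A sketch wiring the tracing options to a Jaeger agent; the collector host is a placeholder for your own agent's address:

```yaml
data:
  enable-opentracing: "true"
  # Placeholder address; point this at your Jaeger agent.
  jaeger-collector-host: "jaeger-agent.observability.svc.cluster.local"
  jaeger-collector-port: "6831"
  # const sampler with param 1 samples every request.
  jaeger-sampler-type: "const"
  jaeger-sampler-param: "1"
```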
default: off References: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect proxy-request-buffering \u00b6 Enables or disables buffering of a client request body . ssl-redirect \u00b6 Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule). default: \"true\" whitelist-source-range \u00b6 Sets the default whitelisted IPs for each server block. This can be overwritten by an annotation on an Ingress rule. See ngx_http_access_module . skip-access-log-urls \u00b6 Sets a list of URLs that should not appear in the NGINX access log. This is useful with URLs like /health or /health-check that make reading the logs more difficult. default: is empty limit-rate \u00b6 Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate limit-rate-after \u00b6 Sets the initial amount after which the further transmission of a response to a client will be rate limited. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after http-redirect-code \u00b6 Sets the HTTP status code to be used in redirects. Supported codes are 301 , 302 , 307 and 308 . default: 308 Why is the default code 308? RFC 7238 was created to define the 308 (Permanent Redirect) status code that is similar to 301 (Moved Permanently) but keeps the payload in the redirect. This is important if we send a redirect for methods like POST. proxy-buffering \u00b6 Enables or disables buffering of responses from the proxied server . limit-req-status-code \u00b6 Sets the status code to return in response to rejected requests . default: 503 limit-conn-status-code \u00b6 Sets the status code to return in response to rejected connections . default: 503 no-tls-redirect-locations \u00b6 A comma-separated list of locations on which http requests will never get redirected to their https counterpart. default: \"/.well-known/acme-challenge\" no-auth-locations \u00b6 A comma-separated list of locations that should not get authenticated. default: \"/.well-known/acme-challenge\" block-cidrs \u00b6 A comma-separated list of IP addresses (or subnets), requests from which have to be blocked globally. References: http://nginx.org/en/docs/http/ngx_http_access_module.html#deny block-user-agents \u00b6 A comma-separated list of User-Agents, requests from which have to be blocked globally. Both full strings and regular expressions can be used here. More details about valid patterns can be found in the map Nginx directive documentation. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#map block-referers \u00b6 A comma-separated list of Referers, requests from which have to be blocked globally. Both full strings and regular expressions can be used here. More details about valid patterns can be found in the map Nginx directive documentation. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#map","title":"ConfigMap"}
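Finally, a sketch of the global blocking options; all addresses and patterns below are illustrative:

```yaml
data:
  # Example values only; adjust to your own block lists.
  block-cidrs: "192.0.2.0/24,203.0.113.7"
  # map-style patterns: ~* starts a case-insensitive regular expression.
  block-user-agents: "~*evilbot.*"
  block-referers: "~*spam.example.com"
```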
The data provides the configurations for system components for the nginx-controller. In order to overwrite nginx-controller configuration values as seen in config.go , you can add key-value pairs to the data section of the config-map. For Example: data : map-hash-bucket-size : \"128\" ssl-protocols : SSLv2 Important The key and values in a ConfigMap can only be strings. This means that we want a value with boolean values we need to quote the values, like \"true\" or \"false\". Same for numbers, like \"100\". \"Slice\" types (defined below as []string or []int can be provided as a comma-delimited string.","title":"ConfigMaps"},{"location":"user-guide/nginx-configuration/configmap/#configuration-options","text":"The following table shows a configuration option's name, type, and the default value: name type default add-headers string \"\" allow-backend-server-header bool \"false\" hide-headers string array empty access-log-params string \"\" access-log-path string \"/var/log/nginx/access.log\" enable-access-log-for-default-backend bool \"false\" error-log-path string \"/var/log/nginx/error.log\" enable-dynamic-tls-records bool \"true\" enable-modsecurity bool \"false\" enable-owasp-modsecurity-crs bool \"false\" client-header-buffer-size string \"1k\" client-header-timeout int 60 client-body-buffer-size string \"8k\" client-body-timeout int 60 disable-access-log bool false disable-ipv6 bool false disable-ipv6-dns bool false enable-underscores-in-headers bool false ignore-invalid-headers bool true retry-non-idempotent bool \"false\" error-log-level string \"notice\" http2-max-field-size string \"4k\" http2-max-header-size string \"16k\" http2-max-requests int 1000 hsts bool \"true\" hsts-include-subdomains bool \"true\" hsts-max-age string \"15724800\" hsts-preload bool \"false\" keep-alive int 75 keep-alive-requests int 100 large-client-header-buffers string \"4 8k\" log-format-escape-json bool \"false\" log-format-upstream string %v - [ $the_real_ip ] - $remote_user [ $time_local ] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $request_length $request_time [ $proxy_upstream_name ] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id log-format-stream string [$time_local] $protocol $status $bytes_sent $bytes_received $session_time enable-multi-accept bool \"true\" max-worker-connections int 16384 max-worker-open-files int 0 map-hash-bucket-size int 64 nginx-status-ipv4-whitelist []string \"127.0.0.1\" nginx-status-ipv6-whitelist []string \"::1\" proxy-real-ip-cidr []string \"0.0.0.0/0\" proxy-set-headers string \"\" server-name-hash-max-size int 1024 server-name-hash-bucket-size int proxy-headers-hash-max-size int 512 proxy-headers-hash-bucket-size int 64 reuse-port bool \"true\" server-tokens bool \"true\" ssl-ciphers string \"ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\" ssl-ecdh-curve string \"auto\" ssl-dh-param string \"\" ssl-protocols string \"TLSv1.2\" ssl-session-cache bool \"true\" ssl-session-cache-size string \"10m\" ssl-session-tickets bool \"true\" ssl-session-ticket-key string ssl-session-timeout string \"10m\" ssl-buffer-size string \"4k\" use-proxy-protocol bool \"false\" proxy-protocol-header-timeout string \"5s\" use-gzip bool \"true\" use-geoip bool \"true\" use-geoip2 bool 
\"false\" enable-brotli bool \"false\" brotli-level int 4 brotli-types string \"application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\" use-http2 bool \"true\" gzip-level int 5 gzip-types string \"application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\" worker-processes string worker-cpu-affinity string \"\" worker-shutdown-timeout string \"10s\" load-balance string \"round_robin\" variables-hash-bucket-size int 128 variables-hash-max-size int 2048 upstream-keepalive-connections int 32 upstream-keepalive-timeout int 60 upstream-keepalive-requests int 100 limit-conn-zone-variable string \"$binary_remote_addr\" proxy-stream-timeout string \"600s\" proxy-stream-responses int 1 bind-address []string \"\" use-forwarded-headers bool \"false\" forwarded-for-header string \"X-Forwarded-For\" compute-full-forwarded-for bool \"false\" proxy-add-original-uri-header bool \"true\" generate-request-id bool \"true\" enable-opentracing bool \"false\" zipkin-collector-host string \"\" zipkin-collector-port int 9411 zipkin-service-name string \"nginx\" zipkin-sample-rate float 1.0 jaeger-collector-host string \"\" jaeger-collector-port int 6831 jaeger-service-name string \"nginx\" jaeger-sampler-type string \"const\" jaeger-sampler-param string \"1\" main-snippet string \"\" http-snippet string \"\" server-snippet string \"\" location-snippet string \"\" custom-http-errors []int []int{} proxy-body-size string \"1m\" proxy-connect-timeout int 5 proxy-read-timeout int 60 proxy-send-timeout int 60 proxy-buffers-number int 4 proxy-buffer-size string \"4k\" proxy-cookie-path string \"off\" proxy-cookie-domain string \"off\" proxy-next-upstream string \"error timeout\" proxy-next-upstream-tries int 3 proxy-redirect-from string \"off\" proxy-request-buffering string \"on\" ssl-redirect bool \"true\" whitelist-source-range []string []string{} skip-access-log-urls []string []string{} limit-rate int 0 limit-rate-after int 0 http-redirect-code int 308 proxy-buffering string \"off\" limit-req-status-code int 503 limit-conn-status-code int 503 no-tls-redirect-locations string \"/.well-known/acme-challenge\" no-auth-locations string \"/.well-known/acme-challenge\" block-cidrs []string \"\" block-user-agents []string \"\" block-referers []string \"\"","title":"Configuration options"},{"location":"user-guide/nginx-configuration/configmap/#add-headers","text":"Sets custom headers from named configmap before sending traffic to the client. See proxy-set-headers . example","title":"add-headers"},{"location":"user-guide/nginx-configuration/configmap/#allow-backend-server-header","text":"Enables the return of the header Server from the backend instead of the generic nginx string. default: is disabled","title":"allow-backend-server-header"},{"location":"user-guide/nginx-configuration/configmap/#hide-headers","text":"Sets additional header that will not be passed from the upstream server to the client response. 
default: empty References: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header","title":"hide-headers"},{"location":"user-guide/nginx-configuration/configmap/#access-log-params","text":"Additional params for access_log. For example, buffer=16k, gzip, flush=1m References: http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log","title":"access-log-params"},{"location":"user-guide/nginx-configuration/configmap/#access-log-path","text":"Access log path. Goes to /var/log/nginx/access.log by default. Note: the file /var/log/nginx/access.log is a symlink to /dev/stdout","title":"access-log-path"},{"location":"user-guide/nginx-configuration/configmap/#enable-access-log-for-default-backend","text":"Enables logging access to default backend. default: is disabled.","title":"enable-access-log-for-default-backend"},{"location":"user-guide/nginx-configuration/configmap/#error-log-path","text":"Error log path. Goes to /var/log/nginx/error.log by default. Note: the file /var/log/nginx/error.log is a symlink to /dev/stderr References: http://nginx.org/en/docs/ngx_core_module.html#error_log","title":"error-log-path"},{"location":"user-guide/nginx-configuration/configmap/#enable-dynamic-tls-records","text":"Enables dynamically sized TLS records to improve time-to-first-byte. default: is enabled References: https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency","title":"enable-dynamic-tls-records"},{"location":"user-guide/nginx-configuration/configmap/#enable-modsecurity","text":"Enables the modsecurity module for NGINX. default: is disabled","title":"enable-modsecurity"},{"location":"user-guide/nginx-configuration/configmap/#enable-owasp-modsecurity-crs","text":"Enables the OWASP ModSecurity Core Rule Set (CRS). default: is disabled","title":"enable-owasp-modsecurity-crs"},{"location":"user-guide/nginx-configuration/configmap/#client-header-buffer-size","text":"Allows to configure a custom buffer size for reading client request header. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size","title":"client-header-buffer-size"},{"location":"user-guide/nginx-configuration/configmap/#client-header-timeout","text":"Defines a timeout for reading client request header, in seconds. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout","title":"client-header-timeout"},{"location":"user-guide/nginx-configuration/configmap/#client-body-buffer-size","text":"Sets buffer size for reading client request body. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size","title":"client-body-buffer-size"},{"location":"user-guide/nginx-configuration/configmap/#client-body-timeout","text":"Defines a timeout for reading client request body, in seconds. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout","title":"client-body-timeout"},{"location":"user-guide/nginx-configuration/configmap/#disable-access-log","text":"Disables the Access Log from the entire Ingress Controller. default: '\"false\"' References: http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log","title":"disable-access-log"},{"location":"user-guide/nginx-configuration/configmap/#disable-ipv6","text":"Disable listening on IPV6. default: is disabled","title":"disable-ipv6"},{"location":"user-guide/nginx-configuration/configmap/#disable-ipv6-dns","text":"Disable IPV6 for nginx DNS resolver. 
default: is disabled","title":"disable-ipv6-dns"},{"location":"user-guide/nginx-configuration/configmap/#enable-underscores-in-headers","text":"Enables underscores in header names. default: is disabled","title":"enable-underscores-in-headers"},{"location":"user-guide/nginx-configuration/configmap/#ignore-invalid-headers","text":"Set if header fields with invalid names should be ignored. default: is enabled","title":"ignore-invalid-headers"},{"location":"user-guide/nginx-configuration/configmap/#retry-non-idempotent","text":"Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value \"true\".","title":"retry-non-idempotent"},{"location":"user-guide/nginx-configuration/configmap/#error-log-level","text":"Configures the logging level of errors. Log levels above are listed in the order of increasing severity. References: http://nginx.org/en/docs/ngx_core_module.html#error_log","title":"error-log-level"},{"location":"user-guide/nginx-configuration/configmap/#http2-max-field-size","text":"Limits the maximum size of an HPACK-compressed request header field. References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size","title":"http2-max-field-size"},{"location":"user-guide/nginx-configuration/configmap/#http2-max-header-size","text":"Limits the maximum size of the entire request header list after HPACK decompression. References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size","title":"http2-max-header-size"},{"location":"user-guide/nginx-configuration/configmap/#http2-max-requests","text":"Sets the maximum number of requests (including push requests) that can be served through one HTTP/2 connection, after which the next client request will lead to connection closing and the need of establishing a new connection. References: http://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests","title":"http2-max-requests"},{"location":"user-guide/nginx-configuration/configmap/#hsts","text":"Enables or disables the header HSTS in servers running SSL. HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tell browsers that it should only be communicated with using HTTPS, instead of using HTTP. It provides protection against protocol downgrade attacks and cookie theft. References: https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server","title":"hsts"},{"location":"user-guide/nginx-configuration/configmap/#hsts-include-subdomains","text":"Enables or disables the use of HSTS in all the subdomains of the server-name.","title":"hsts-include-subdomains"},{"location":"user-guide/nginx-configuration/configmap/#hsts-max-age","text":"Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.","title":"hsts-max-age"},{"location":"user-guide/nginx-configuration/configmap/#hsts-preload","text":"Enables or disables the preload attribute in the HSTS feature (when it is enabled) dd","title":"hsts-preload"},{"location":"user-guide/nginx-configuration/configmap/#keep-alive","text":"Sets the time during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections. 
References: http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout","title":"keep-alive"},{"location":"user-guide/nginx-configuration/configmap/#keep-alive-requests","text":"Sets the maximum number of requests that can be served through one keep-alive connection. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests","title":"keep-alive-requests"},{"location":"user-guide/nginx-configuration/configmap/#large-client-header-buffers","text":"Sets the maximum number and size of buffers used for reading large client request header. default: 4 8k References: http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers","title":"large-client-header-buffers"},{"location":"user-guide/nginx-configuration/configmap/#log-format-escape-json","text":"Sets if the escape parameter allows JSON (\"true\") or default characters escaping in variables (\"false\") Sets the nginx log format .","title":"log-format-escape-json"},{"location":"user-guide/nginx-configuration/configmap/#log-format-upstream","text":"Sets the nginx log format . Example for json output: consolelog-format-upstream: '{ \"time\": \"$time_iso8601\", \"remote_addr\": \"$proxy_protocol_addr\",\"x-forward-for\": \"$proxy_add_x_forwarded_for\", \"request_id\": \"$req_id\", \"remote_user\":\"$remote_user\", \"bytes_sent\": $bytes_sent, \"request_time\": $request_time, \"status\":$status, \"vhost\": \"$host\", \"request_proto\": \"$server_protocol\", \"path\": \"$uri\",\"request_query\": \"$args\", \"request_length\": $request_length, \"duration\": $request_time,\"method\": \"$request_method\", \"http_referrer\": \"$http_referer\", \"http_user_agent\":\"$http_user_agent\" }' Please check the log-format for definition of each field.","title":"log-format-upstream"},{"location":"user-guide/nginx-configuration/configmap/#log-format-stream","text":"Sets the nginx stream format .","title":"log-format-stream"},{"location":"user-guide/nginx-configuration/configmap/#enable-multi-accept","text":"If disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections at a time. default: true References: http://nginx.org/en/docs/ngx_core_module.html#multi_accept","title":"enable-multi-accept"},{"location":"user-guide/nginx-configuration/configmap/#max-worker-connections","text":"Sets the maximum number of simultaneous connections that can be opened by each worker process. 0 will use the value of max-worker-open-files . default: 16384 Tip Using 0 in scenarios of high load improves performance at the cost of increasing RAM utilization (even on idle).","title":"max-worker-connections"},{"location":"user-guide/nginx-configuration/configmap/#max-worker-open-files","text":"Sets the maximum number of files that can be opened by each worker process. The default of 0 means \"max open files (system's limit) / worker-processes - 1024\". default: 0","title":"max-worker-open-files"},{"location":"user-guide/nginx-configuration/configmap/#map-hash-bucket-size","text":"Sets the bucket size for the map variables hash tables . 
The details of setting up hash tables are provided in a separate document .","title":"map-hash-bucket-size"},{"location":"user-guide/nginx-configuration/configmap/#proxy-real-ip-cidr","text":"If use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default the IP/network address of your external load balancer.","title":"proxy-real-ip-cidr"},{"location":"user-guide/nginx-configuration/configmap/#proxy-set-headers","text":"Sets custom headers from named configmap before sending traffic to backends. The value format is namespace/name. See example","title":"proxy-set-headers"},{"location":"user-guide/nginx-configuration/configmap/#server-name-hash-max-size","text":"Sets the maximum size of the server names hash tables used in server names,map directive\u2019s values, MIME types, names of request header strings, etc. References: http://nginx.org/en/docs/hash.html","title":"server-name-hash-max-size"},{"location":"user-guide/nginx-configuration/configmap/#server-name-hash-bucket-size","text":"Sets the size of the bucket for the server names hash tables. References: http://nginx.org/en/docs/hash.html http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size","title":"server-name-hash-bucket-size"},{"location":"user-guide/nginx-configuration/configmap/#proxy-headers-hash-max-size","text":"Sets the maximum size of the proxy headers hash tables. References: http://nginx.org/en/docs/hash.html https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size","title":"proxy-headers-hash-max-size"},{"location":"user-guide/nginx-configuration/configmap/#reuse-port","text":"Instructs NGINX to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option), allowing a kernel to distribute incoming connections between worker processes default: true","title":"reuse-port"},{"location":"user-guide/nginx-configuration/configmap/#proxy-headers-hash-bucket-size","text":"Sets the size of the bucket for the proxy headers hash tables. References: http://nginx.org/en/docs/hash.html https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size","title":"proxy-headers-hash-bucket-size"},{"location":"user-guide/nginx-configuration/configmap/#server-tokens","text":"Send NGINX Server header in responses and display NGINX version in error pages. default: is enabled","title":"server-tokens"},{"location":"user-guide/nginx-configuration/configmap/#ssl-ciphers","text":"Sets the ciphers list to enable. The ciphers are specified in the format understood by the OpenSSL library. The default cipher list is: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256 . The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy . Please check the Mozilla SSL Configuration Generator .","title":"ssl-ciphers"},{"location":"user-guide/nginx-configuration/configmap/#ssl-ecdh-curve","text":"Specifies a curve for ECDHE ciphers. 
References: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve","title":"ssl-ecdh-curve"},{"location":"user-guide/nginx-configuration/configmap/#ssl-dh-param","text":"Sets the name of the secret that contains Diffie-Hellman key to help with \"Perfect Forward Secrecy\". References: https://wiki.openssl.org/index.php/Diffie-Hellman_parameters https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam","title":"ssl-dh-param"},{"location":"user-guide/nginx-configuration/configmap/#ssl-protocols","text":"Sets the SSL protocols to use. The default is: TLSv1.2 . Please check the result of the configuration using https://ssllabs.com/ssltest/analyze.html or https://testssl.sh .","title":"ssl-protocols"},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-cache","text":"Enables or disables the use of shared SSL cache among worker processes.","title":"ssl-session-cache"},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-cache-size","text":"Sets the size of the SSL shared session cache between all worker processes.","title":"ssl-session-cache-size"},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-tickets","text":"Enables or disables session resumption through TLS session tickets .","title":"ssl-session-tickets"},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-ticket-key","text":"Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string. To create a ticket: openssl rand 80 | openssl enc -A -base64 TLS session ticket-key , by default, a randomly generated key is used.","title":"ssl-session-ticket-key"},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-timeout","text":"Sets the time during which a client may reuse the session parameters stored in a cache.","title":"ssl-session-timeout"},{"location":"user-guide/nginx-configuration/configmap/#ssl-buffer-size","text":"Sets the size of the SSL buffer used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB). References: https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/","title":"ssl-buffer-size"},{"location":"user-guide/nginx-configuration/configmap/#use-proxy-protocol","text":"Enables or disables the PROXY protocol to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).","title":"use-proxy-protocol"},{"location":"user-guide/nginx-configuration/configmap/#proxy-protocol-header-timeout","text":"Sets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection. default: 5s","title":"proxy-protocol-header-timeout"},{"location":"user-guide/nginx-configuration/configmap/#use-gzip","text":"Enables or disables compression of HTTP responses using the \"gzip\" module . 
The default mime type list to compress is: application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component .","title":"use-gzip"},{"location":"user-guide/nginx-configuration/configmap/#use-geoip","text":"Enables or disables \"geoip\" module that creates variables with values depending on the client IP address, using the precompiled MaxMind databases. default: true Note: MaxMind legacy databases are discontinued and will not receive updates after 2019-01-02, cf. discontinuation notice . Consider use-geoip2 below.","title":"use-geoip"},{"location":"user-guide/nginx-configuration/configmap/#use-geoip2","text":"Enables the geoip2 module for NGINX. default: false","title":"use-geoip2"},{"location":"user-guide/nginx-configuration/configmap/#enable-brotli","text":"Enables or disables compression of HTTP responses using the \"brotli\" module . The default mime type list to compress is: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component . default: is disabled Note: Brotli does not works in Safari < 11. For more information see https://caniuse.com/#feat=brotli","title":"enable-brotli"},{"location":"user-guide/nginx-configuration/configmap/#brotli-level","text":"Sets the Brotli Compression Level that will be used. default: 4","title":"brotli-level"},{"location":"user-guide/nginx-configuration/configmap/#brotli-types","text":"Sets the MIME Types that will be compressed on-the-fly by brotli. default: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component","title":"brotli-types"},{"location":"user-guide/nginx-configuration/configmap/#use-http2","text":"Enables or disables HTTP/2 support in secure connections.","title":"use-http2"},{"location":"user-guide/nginx-configuration/configmap/#gzip-level","text":"Sets the gzip Compression Level that will be used. default: 5","title":"gzip-level"},{"location":"user-guide/nginx-configuration/configmap/#gzip-types","text":"Sets the MIME types in addition to \"text/html\" to compress. The special value \"*\" matches any MIME type. Responses with the \"text/html\" type are always compressed if use-gzip is enabled.","title":"gzip-types"},{"location":"user-guide/nginx-configuration/configmap/#worker-processes","text":"Sets the number of worker processes . The default of \"auto\" means number of available CPU cores.","title":"worker-processes"},{"location":"user-guide/nginx-configuration/configmap/#worker-cpu-affinity","text":"Binds worker processes to the sets of CPUs. worker_cpu_affinity . By default worker processes are not bound to any specific CPUs. The value can be: \"\": empty string indicate no affinity is applied. cpumask: e.g. 0001 0010 0100 1000 to bind processes to specific cpus. 
auto: binding worker processes automatically to available CPUs.","title":"worker-cpu-affinity"},{"location":"user-guide/nginx-configuration/configmap/#worker-shutdown-timeout","text":"Sets a timeout for Nginx to wait for worker to gracefully shutdown . default: \"10s\"","title":"worker-shutdown-timeout"},{"location":"user-guide/nginx-configuration/configmap/#load-balance","text":"Sets the algorithm to use for load balancing. The value can either be: round_robin: to use the default round robin loadbalancer least_conn: to use the least connected method ( note that this is available only in non-dynamic mode: --enable-dynamic-configuration=false ) ip_hash: to use a hash of the server for routing ( note that this is available only in non-dynamic mode: --enable-dynamic-configuration=false , but alternatively you can consider using nginx.ingress.kubernetes.io/upstream-hash-by ) ewma: to use the Peak EWMA method for routing ( implementation ) The default is round_robin . References: http://nginx.org/en/docs/http/load_balancing.html","title":"load-balance"},{"location":"user-guide/nginx-configuration/configmap/#variables-hash-bucket-size","text":"Sets the bucket size for the variables hash table. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size","title":"variables-hash-bucket-size"},{"location":"user-guide/nginx-configuration/configmap/#variables-hash-max-size","text":"Sets the maximum size of the variables hash table. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size","title":"variables-hash-max-size"},{"location":"user-guide/nginx-configuration/configmap/#upstream-keepalive-connections","text":"Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. default: 32 References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive","title":"upstream-keepalive-connections"},{"location":"user-guide/nginx-configuration/configmap/#upstream-keepalive-timeout","text":"Sets a timeout during which an idle keepalive connection to an upstream server will stay open. default: 60 References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout","title":"upstream-keepalive-timeout"},{"location":"user-guide/nginx-configuration/configmap/#upstream-keepalive-requests","text":"Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed. default: 100 References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_requests","title":"upstream-keepalive-requests"},{"location":"user-guide/nginx-configuration/configmap/#limit-conn-zone-variable","text":"Sets parameters for a shared memory zone that will keep states for various keys of limit_conn_zone . The default of \"$binary_remote_addr\" variable\u2019s size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.","title":"limit-conn-zone-variable"},{"location":"user-guide/nginx-configuration/configmap/#proxy-stream-timeout","text":"Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed. 
References: http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout","title":"proxy-stream-timeout"},{"location":"user-guide/nginx-configuration/configmap/#proxy-stream-responses","text":"Sets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used. References: http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses","title":"proxy-stream-responses"},{"location":"user-guide/nginx-configuration/configmap/#bind-address","text":"Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.","title":"bind-address"},{"location":"user-guide/nginx-configuration/configmap/#use-forwarded-headers","text":"If true, NGINX passes the incoming X-Forwarded-* headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers. If false, NGINX ignores incoming X-Forwarded-* headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or it's behind a L3/packet-based load balancer that doesn't alter the source IP in the packets.","title":"use-forwarded-headers"},{"location":"user-guide/nginx-configuration/configmap/#forwarded-for-header","text":"Sets the header field for identifying the originating IP address of a client. default: X-Forwarded-For","title":"forwarded-for-header"},{"location":"user-guide/nginx-configuration/configmap/#compute-full-forwarded-for","text":"Append the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.","title":"compute-full-forwarded-for"},{"location":"user-guide/nginx-configuration/configmap/#proxy-add-original-uri-header","text":"Adds an X-Original-Uri header with the original request URI to the backend request","title":"proxy-add-original-uri-header"},{"location":"user-guide/nginx-configuration/configmap/#generate-request-id","text":"Ensures that X-Request-ID is defaulted to a random value, if no X-Request-ID is present in the request","title":"generate-request-id"},{"location":"user-guide/nginx-configuration/configmap/#enable-opentracing","text":"Enables the nginx Opentracing extension. default: is disabled References: https://github.com/opentracing-contrib/nginx-opentracing","title":"enable-opentracing"},{"location":"user-guide/nginx-configuration/configmap/#zipkin-collector-host","text":"Specifies the host to use when uploading traces. It must be a valid URL.","title":"zipkin-collector-host"},{"location":"user-guide/nginx-configuration/configmap/#zipkin-collector-port","text":"Specifies the port to use when uploading traces. default: 9411","title":"zipkin-collector-port"},{"location":"user-guide/nginx-configuration/configmap/#zipkin-service-name","text":"Specifies the service name to use for any traces created. default: nginx","title":"zipkin-service-name"},{"location":"user-guide/nginx-configuration/configmap/#zipkin-sample-rate","text":"Specifies sample rate for any traces created. default: 1.0","title":"zipkin-sample-rate"},{"location":"user-guide/nginx-configuration/configmap/#jaeger-collector-host","text":"Specifies the host to use when uploading traces. 
It must be a valid URL.","title":"jaeger-collector-host"},{"location":"user-guide/nginx-configuration/configmap/#jaeger-collector-port","text":"Specifies the port to use when uploading traces. default: 6831","title":"jaeger-collector-port"},{"location":"user-guide/nginx-configuration/configmap/#jaeger-service-name","text":"Specifies the service name to use for any traces created. default: nginx","title":"jaeger-service-name"},{"location":"user-guide/nginx-configuration/configmap/#jaeger-sampler-type","text":"Specifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. default: const","title":"jaeger-sampler-type"},{"location":"user-guide/nginx-configuration/configmap/#jaeger-sampler-param","text":"Specifies the argument to be passed to the sampler constructor. Must be a number. For const this should be 0 to never sample and 1 to always sample. default: 1","title":"jaeger-sampler-param"},{"location":"user-guide/nginx-configuration/configmap/#main-snippet","text":"Adds custom configuration to the main section of the nginx configuration.","title":"main-snippet"},{"location":"user-guide/nginx-configuration/configmap/#http-snippet","text":"Adds custom configuration to the http section of the nginx configuration.","title":"http-snippet"},{"location":"user-guide/nginx-configuration/configmap/#server-snippet","text":"Adds custom configuration to all the servers in the nginx configuration.","title":"server-snippet"},{"location":"user-guide/nginx-configuration/configmap/#location-snippet","text":"Adds custom configuration to all the locations in the nginx configuration.","title":"location-snippet"},{"location":"user-guide/nginx-configuration/configmap/#custom-http-errors","text":"Sets which HTTP codes should be passed for processing with the error_page directive. Setting at least one code also enables proxy_intercept_errors , which is required to process error_page. Example usage: custom-http-errors: 404,415","title":"custom-http-errors"},{"location":"user-guide/nginx-configuration/configmap/#proxy-body-size","text":"Sets the maximum allowed size of the client request body. See NGINX client_max_body_size .","title":"proxy-body-size"},{"location":"user-guide/nginx-configuration/configmap/#proxy-connect-timeout","text":"Sets the timeout for establishing a connection with a proxied server . It should be noted that this timeout cannot usually exceed 75 seconds.","title":"proxy-connect-timeout"},{"location":"user-guide/nginx-configuration/configmap/#proxy-read-timeout","text":"Sets the timeout in seconds for reading a response from the proxied server . The timeout is set only between two successive read operations, not for the transmission of the whole response.","title":"proxy-read-timeout"},{"location":"user-guide/nginx-configuration/configmap/#proxy-send-timeout","text":"Sets the timeout in seconds for transmitting a request to the proxied server . The timeout is set only between two successive write operations, not for the transmission of the whole request.","title":"proxy-send-timeout"},{"location":"user-guide/nginx-configuration/configmap/#proxy-buffers-number","text":"Sets the number of buffers used for reading the first part of the response received from the proxied server. 
This part usually contains a small response header.","title":"proxy-buffers-number"},{"location":"user-guide/nginx-configuration/configmap/#proxy-buffer-size","text":"Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header.","title":"proxy-buffer-size"},{"location":"user-guide/nginx-configuration/configmap/#proxy-cookie-path","text":"Sets a text that should be changed in the path attribute of the \u201cSet-Cookie\u201d header fields of a proxied server response.","title":"proxy-cookie-path"},{"location":"user-guide/nginx-configuration/configmap/#proxy-cookie-domain","text":"Sets a text that should be changed in the domain attribute of the \u201cSet-Cookie\u201d header fields of a proxied server response.","title":"proxy-cookie-domain"},{"location":"user-guide/nginx-configuration/configmap/#proxy-next-upstream","text":"Specifies in which cases a request should be passed to the next server.","title":"proxy-next-upstream"},{"location":"user-guide/nginx-configuration/configmap/#proxy-next-upstream-tries","text":"Limits the number of possible tries for passing a request to the next server.","title":"proxy-next-upstream-tries"},{"location":"user-guide/nginx-configuration/configmap/#proxy-redirect-from","text":"Sets the original text that should be changed in the \"Location\" and \"Refresh\" header fields of a proxied server response. default: off References: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect","title":"proxy-redirect-from"},{"location":"user-guide/nginx-configuration/configmap/#proxy-request-buffering","text":"Enables or disables buffering of a client request body .","title":"proxy-request-buffering"},{"location":"user-guide/nginx-configuration/configmap/#ssl-redirect","text":"Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule). default: \"true\"","title":"ssl-redirect"},{"location":"user-guide/nginx-configuration/configmap/#whitelist-source-range","text":"Sets the default whitelisted IPs for each server block. This can be overwritten by an annotation on an Ingress rule. See ngx_http_access_module .","title":"whitelist-source-range"},{"location":"user-guide/nginx-configuration/configmap/#skip-access-log-urls","text":"Sets a list of URLs that should not appear in the NGINX access log. This is useful with URLs like /health or health-check that make the logs harder to read. default: is empty","title":"skip-access-log-urls"},{"location":"user-guide/nginx-configuration/configmap/#limit-rate","text":"Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate","title":"limit-rate"},{"location":"user-guide/nginx-configuration/configmap/#limit-rate-after","text":"Sets the initial amount after which the further transmission of a response to a client will be rate limited. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after","title":"limit-rate-after"},{"location":"user-guide/nginx-configuration/configmap/#http-redirect-code","text":"Sets the HTTP status code to be used in redirects. 
Supported codes are 301 , 302 , 307 and 308 . default: 308 Why is the default code 308? RFC 7238 was created to define the 308 (Permanent Redirect) status code, which is similar to 301 (Moved Permanently) but keeps the payload in the redirect. This is important if we send a redirect for methods like POST.","title":"http-redirect-code"},{"location":"user-guide/nginx-configuration/configmap/#proxy-buffering","text":"Enables or disables buffering of responses from the proxied server .","title":"proxy-buffering"},{"location":"user-guide/nginx-configuration/configmap/#limit-req-status-code","text":"Sets the status code to return in response to rejected requests . default: 503","title":"limit-req-status-code"},{"location":"user-guide/nginx-configuration/configmap/#limit-conn-status-code","text":"Sets the status code to return in response to rejected connections . default: 503","title":"limit-conn-status-code"},{"location":"user-guide/nginx-configuration/configmap/#no-tls-redirect-locations","text":"A comma-separated list of locations on which http requests will never get redirected to their https counterpart. default: \"/.well-known/acme-challenge\"","title":"no-tls-redirect-locations"},{"location":"user-guide/nginx-configuration/configmap/#no-auth-locations","text":"A comma-separated list of locations that should not get authenticated. default: \"/.well-known/acme-challenge\"","title":"no-auth-locations"},{"location":"user-guide/nginx-configuration/configmap/#block-cidrs","text":"A comma-separated list of IP addresses (or subnets), requests from which have to be blocked globally. References: http://nginx.org/en/docs/http/ngx_http_access_module.html#deny","title":"block-cidrs"},{"location":"user-guide/nginx-configuration/configmap/#block-user-agents","text":"A comma-separated list of User-Agents, requests from which have to be blocked globally. It's possible to use full strings and regular expressions here. More details about valid patterns can be found in the map Nginx directive documentation. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#map","title":"block-user-agents"},{"location":"user-guide/nginx-configuration/configmap/#block-referers","text":"A comma-separated list of Referers, requests from which have to be blocked globally. It's possible to use full strings and regular expressions here. More details about valid patterns can be found in the map Nginx directive documentation. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#map","title":"block-referers"},{"location":"user-guide/nginx-configuration/custom-template/","text":"Custom NGINX template \u00b6 The NGINX template is located in the file /etc/nginx/template/nginx.tmpl . Using a Volume it is possible to use a custom template. This includes using a Configmap as the source of the template: volumeMounts : - mountPath : /etc/nginx/template name : nginx-template-volume readOnly : true volumes : - name : nginx-template-volume configMap : name : nginx-template items : - key : nginx.tmpl path : nginx.tmpl Please note the template is tied to the Go code. Do not change names in the variable $cfg . For more information about the template syntax please check the Go template package . 
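A convenient way to obtain a starting point is to copy the stock template out of a running controller pod and load it into the Configmap referenced above. This is only a sketch; the pod name below is a placeholder and the namespace may differ in your deployment:
$ kubectl exec -n ingress-nginx <nginx-ingress-controller-pod> -- cat /etc/nginx/template/nginx.tmpl > nginx.tmpl
$ kubectl create configmap nginx-template --from-file=nginx.tmpl
You can then edit the local nginx.tmpl and update the Configmap after each change.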
In addition to the built-in functions provided by the Go package, the following functions are also available: empty: returns true if the specified parameter (string) is empty contains: strings.Contains hasPrefix: strings.HasPrefix hasSuffix: strings.HasSuffix toUpper: strings.ToUpper toLower: strings.ToLower buildLocation: helps to build the NGINX Location section in each server buildProxyPass: builds the reverse proxy configuration buildRateLimit: helps to build a limit zone inside a location if it contains a rate limit annotation TODO: buildAuthLocation: buildAuthResponseHeaders: buildResolvers: buildLogFormatUpstream: buildDenyVariable: buildUpstreamName: buildForwardedFor: buildAuthSignURL: buildNextUpstream: filterRateLimits: formatIP: getenv: getIngressInformation: serverConfig: isLocationAllowed: isValidClientBodyBufferSize:","title":"Custom NGINX template"},{"location":"user-guide/nginx-configuration/custom-template/#custom-nginx-template","text":"The NGINX template is located in the file /etc/nginx/template/nginx.tmpl . Using a Volume it is possible to use a custom template. This includes using a Configmap as the source of the template: volumeMounts : - mountPath : /etc/nginx/template name : nginx-template-volume readOnly : true volumes : - name : nginx-template-volume configMap : name : nginx-template items : - key : nginx.tmpl path : nginx.tmpl Please note the template is tied to the Go code. Do not change names in the variable $cfg . For more information about the template syntax please check the Go template package . In addition to the built-in functions provided by the Go package, the following functions are also available: empty: returns true if the specified parameter (string) is empty contains: strings.Contains hasPrefix: strings.HasPrefix hasSuffix: strings.HasSuffix toUpper: strings.ToUpper toLower: strings.ToLower buildLocation: helps to build the NGINX Location section in each server buildProxyPass: builds the reverse proxy configuration buildRateLimit: helps to build a limit zone inside a location if it contains a rate limit annotation TODO: buildAuthLocation: buildAuthResponseHeaders: buildResolvers: buildLogFormatUpstream: buildDenyVariable: buildUpstreamName: buildForwardedFor: buildAuthSignURL: buildNextUpstream: filterRateLimits: formatIP: getenv: getIngressInformation: serverConfig: isLocationAllowed: isValidClientBodyBufferSize:","title":"Custom NGINX template"},{"location":"user-guide/nginx-configuration/log-format/","text":"Log format \u00b6 The default configuration uses a custom logging format to add additional information about upstreams, response time and status. 
log_format upstreaminfo ' {{ if $ cfg.useProxyProtocol }} $proxy_protocol_addr {{ else }} $remote_addr {{ end }} - ' '[$the_real_ip] - $remote_user [$time_local] \"$request\" ' '$status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" ' '$request_length $request_time [$proxy_upstream_name] $upstream_addr ' '$upstream_response_length $upstream_response_time $upstream_status $req_id'; Placeholder Description $proxy_protocol_addr remote address if proxy protocol is enabled $remote_addr remote address if proxy protocol is disabled (default) $the_real_ip the source IP address of the client $remote_user user name supplied with the Basic authentication $time_local local time in the Common Log Format $request full original request line $status response status $body_bytes_sent number of bytes sent to a client, not counting the response header $http_referer value of the Referer header $http_user_agent value of User-Agent header $request_length request length (including request line, header, and request body) $request_time time elapsed since the first bytes were read from the client $proxy_upstream_name name of the upstream. The format is upstream--- $upstream_addr the IP address and port (or the path to the domain socket) of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas. $upstream_response_length the length of the response obtained from the upstream server $upstream_response_time time spent on receiving the response from the upstream server as seconds with millisecond resolution $upstream_status status code of the response obtained from the upstream server $req_id the randomly generated ID of the request Additional available variables: Placeholder Description $namespace namespace of the ingress $ingress_name name of the ingress $service_name name of the service $service_port port of the service Sources: Upstream variables Embedded variables","title":"Log format"},{"location":"user-guide/nginx-configuration/log-format/#log-format","text":"The default configuration uses a custom logging format to add additional information about upstreams, response time and status. log_format upstreaminfo ' {{ if $ cfg.useProxyProtocol }} $proxy_protocol_addr {{ else }} $remote_addr {{ end }} - ' '[$the_real_ip] - $remote_user [$time_local] \"$request\" ' '$status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" ' '$request_length $request_time [$proxy_upstream_name] $upstream_addr ' '$upstream_response_length $upstream_response_time $upstream_status $req_id'; Placeholder Description $proxy_protocol_addr remote address if proxy protocol is enabled $remote_addr remote address if proxy protocol is disabled (default) $the_real_ip the source IP address of the client $remote_user user name supplied with the Basic authentication $time_local local time in the Common Log Format $request full original request line $status response status $body_bytes_sent number of bytes sent to a client, not counting the response header $http_referer value of the Referer header $http_user_agent value of User-Agent header $request_length request length (including request line, header, and request body) $request_time time elapsed since the first bytes were read from the client $proxy_upstream_name name of the upstream. The format is upstream--- $upstream_addr the IP address and port (or the path to the domain socket) of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas. 
$upstream_response_length the length of the response obtained from the upstream server $upstream_response_time time spent on receiving the response from the upstream server as seconds with millisecond resolution $upstream_status status code of the response obtained from the upstream server $req_id the randomly generated ID of the request Additional available variables: Placeholder Description $namespace namespace of the ingress $ingress_name name of the ingress $service_name name of the service $service_port port of the service Sources: Upstream variables Embedded variables","title":"Log format"},{"location":"user-guide/third-party-addons/modsecurity/","text":"ModSecurity Web Application Firewall \u00b6 ModSecurity is an open-source, cross-platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - https://www.modsecurity.org The ModSecurity-nginx connector is the connection point between NGINX and libmodsecurity (ModSecurity v3). The default ModSecurity configuration file is located in /etc/nginx/modsecurity/modsecurity.conf . This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration. To enable the ModSecurity feature we need to specify enable-modsecurity: \"true\" in the configuration configmap. Note: the default configuration uses detection only, because that minimizes the chances of post-installation disruption. The file /var/log/modsec_audit.log contains the log of ModSecurity. The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts. The directory /etc/nginx/owasp-modsecurity-crs contains the owasp-modsecurity-crs repository . Using enable-owasp-modsecurity-crs: \"true\" we enable the use of the rules.","title":"ModSecurity Web Application Firewall"},{"location":"user-guide/third-party-addons/modsecurity/#modsecurity-web-application-firewall","text":"ModSecurity is an open-source, cross-platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - https://www.modsecurity.org The ModSecurity-nginx connector is the connection point between NGINX and libmodsecurity (ModSecurity v3). The default ModSecurity configuration file is located in /etc/nginx/modsecurity/modsecurity.conf . This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration. To enable the ModSecurity feature we need to specify enable-modsecurity: \"true\" in the configuration configmap. Note: the default configuration uses detection only, because that minimizes the chances of post-installation disruption. The file /var/log/modsec_audit.log contains the log of ModSecurity. 
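For illustration, a minimal Configmap sketch that turns the feature on, assuming the controller reads its configuration from a Configmap named nginx-configuration in the kube-system namespace as in the examples elsewhere in this documentation (adjust the name and namespace to your deployment):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: kube-system
data:
  enable-modsecurity: \"true\"
The enable-owasp-modsecurity-crs option described next can be added to the same data section in the same way.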
The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts. The directory /etc/nginx/owasp-modsecurity-crs contains the owasp-modsecurity-crs repository . Using enable-owasp-modsecurity-crs: \"true\" we enable the use of the rules.","title":"ModSecurity Web Application Firewall"},{"location":"user-guide/third-party-addons/opentracing/","text":"OpenTracing \u00b6 Enables distributed tracing of requests served by NGINX via The OpenTracing Project. Using the third-party module opentracing-contrib/nginx-opentracing , the NGINX ingress controller can configure NGINX to enable OpenTracing instrumentation. By default this feature is disabled. Usage \u00b6 To enable the instrumentation we must enable OpenTracing in the configuration ConfigMap: data : enable-opentracing : \"true\" We must also set the host to use when uploading traces: zipkin-collector-host: zipkin.default.svc.cluster.local jaeger-collector-host: jaeger-agent.default.svc.cluster.local datadog-collector-host: datadog-agent.default.svc.cluster.local NOTE: While the option is called jaeger-collector-host , you will need to point this to a jaeger-agent , and not the jaeger-collector component. Next you will need to deploy a distributed tracing system which uses OpenTracing. Zipkin, Jaeger, and Datadog have been tested. Other optional configuration options: # specifies the port to use when uploading traces, Default: 9411 zipkin-collector-port # specifies the service name to use for any traces created, Default: nginx zipkin-service-name # specifies sample rate for any traces created, Default: 1.0 zipkin-sample-rate # specifies the port to use when uploading traces, Default: 6831 jaeger-collector-port # specifies the service name to use for any traces created, Default: nginx jaeger-service-name # specifies the sampler to be used when sampling traces. # The available samplers are: const, probabilistic, ratelimiting, remote, Default: const jaeger-sampler-type # specifies the argument to be passed to the sampler constructor, Default: 1 jaeger-sampler-param # specifies the port to use when uploading traces, Default 8126 datadog-collector-port # specifies the service name to use for any traces created, Default: nginx datadog-service-name # specifies the operation name to use for any traces collected, Default: nginx.handle datadog-operation-name-override All these options (including host) allow environment variables, such as $HOSTNAME or $HOST_IP . In the case of Jaeger, if you have a Jaeger agent running on each machine in your cluster, you can use something like $HOST_IP (which can be 'mounted' with the status.hostIP fieldpath, as described here ) to make sure traces will be sent to the local agent. Examples \u00b6 The following examples show how to deploy and test different distributed tracing systems. These examples can be performed using Minikube. Zipkin \u00b6 The rnburn/zipkin-date-server GitHub repository contains an example of a dockerized date service. 
To install the example and Zipkin collector run: kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/zipkin.yaml kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/deployment.yaml Also we need to configure the NGINX controller ConfigMap with the required values: $ echo ' apiVersion: v1 kind: ConfigMap data: enable-opentracing: \"true\" zipkin-collector-host: zipkin.default.svc.cluster.local metadata: name: nginx-configuration namespace: kube-system ' | kubectl replace -f - In the Zipkin interface we can see the details: Jaeger \u00b6 Enable Ingress addon in Minikube: $ minikube addons enable ingress Add Minikube IP to /etc/hosts: $ echo \" $( minikube ip ) example.com\" | sudo tee -a /etc/hosts Apply a basic Service and Ingress Resource: # Create Echoheaders Deployment $ kubectl run echoheaders --image=k8s.gcr.io/echoserver:1.4 --replicas=1 --port=8080 # Expose as a Cluster-IP $ kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x # Apply the Ingress Resource $ echo ' apiVersion: extensions/v1beta1 kind: Ingress metadata: name: echo-ingress spec: rules: - host: example.com http: paths: - backend: serviceName: echoheaders-x servicePort: 80 path: /echo ' | kubectl apply -f - Enable OpenTracing and set the jaeger-collector-host: $ echo ' apiVersion: v1 kind: ConfigMap data: enable-opentracing: \"true\" jaeger-collector-host: jaeger-agent.default.svc.cluster.local metadata: name: nginx-configuration namespace: kube-system ' | kubectl replace -f - Apply the Jaeger All-In-One Template: $ kubectl apply -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/all-in-one/jaeger-all-in-one-template.yml Make a few requests to the Service: $ curl example.com/echo -d \"meow\" CLIENT VALUES: client_address = 172 .17.0.5 command = POST real path = /echo query = nil request_version = 1 .1 request_uri = http://example.com:8080/echo SERVER VALUES: server_version = nginx: 1 .10.0 - lua: 10001 HEADERS RECEIVED: accept = */* connection = close content-length = 4 content-type = application/x-www-form-urlencoded host = example.com user-agent = curl/7.54.0 x-forwarded-for = 192 .168.99.1 x-forwarded-host = example.com x-forwarded-port = 80 x-forwarded-proto = http x-original-uri = /echo x-real-ip = 192 .168.99.1 x-scheme = http BODY: meow View the Jaeger UI: $ minikube service jaeger-query --url http://192.168.99.100:30183 In the Jaeger interface we can see the details:","title":"OpenTracing"},{"location":"user-guide/third-party-addons/opentracing/#opentracing","text":"Enables requests served by NGINX for distributed tracing via The OpenTracing Project. Using the third party module opentracing-contrib/nginx-opentracing the NGINX ingress controller can configure NGINX to enable OpenTracing instrumentation. By default this feature is disabled.","title":"OpenTracing"},{"location":"user-guide/third-party-addons/opentracing/#usage","text":"To enable the instrumentation we must enable OpenTracing in the configuration ConfigMap: data : enable - opentracing : \"true\" We must also set the host to use when uploading traces: zipkin-collector-host: zipkin.default.svc.cluster.local jaeger-collector-host: jaeger-agent.default.svc.cluster.local datadog-collector-host: datadog-agent.default.svc.cluster.local NOTE: While the option is called jaeger-collector-host , you will need to point this to a jaeger-agent , and not the jaeger-collector component. 
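If you run a Jaeger agent on every node as a DaemonSet, one way to reach the node-local agent is to expose the node IP to the controller pod through the downward API and reference that variable in the setting. A sketch; the variable name HOST_IP is arbitrary:
env:
- name: HOST_IP
  valueFrom:
    fieldRef:
      fieldPath: status.hostIP
With this in the controller Deployment, jaeger-collector-host: $HOST_IP resolves to the IP of the node the controller runs on, as the note about environment variables below explains.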
Next you will need to deploy a distributed tracing system which uses OpenTracing. Zipkin and Jaeger and Datadog have been tested. Other optional configuration options: # specifies the port to use when uploading traces, Default: 9411 zipkin-collector-port # specifies the service name to use for any traces created, Default: nginx zipkin-service-name # specifies sample rate for any traces created, Default: 1.0 zipkin-sample-rate # specifies the port to use when uploading traces, Default: 6831 jaeger-collector-port # specifies the service name to use for any traces created, Default: nginx jaeger-service-name # specifies the sampler to be used when sampling traces. # The available samplers are: const, probabilistic, ratelimiting, remote, Default: const jaeger-sampler-type # specifies the argument to be passed to the sampler constructor, Default: 1 jaeger-sampler-param # specifies the port to use when uploading traces, Default 8126 datadog-collector-port # specifies the service name to use for any traces created, Default: nginx datadog-service-name # specifies the operation name to use for any traces collected, Default: nginx.handle datadog-operation-name-override All these options (including host) allow environment variables, such as $HOSTNAME or $HOST_IP . In the case of Jaeger, if you have a Jaeger agent running on each machine in your cluster, you can use something like $HOST_IP (which can be 'mounted' with the status.hostIP fieldpath, as described here ) to make sure traces will be sent to the local agent.","title":"Usage"},{"location":"user-guide/third-party-addons/opentracing/#examples","text":"The following examples show how to deploy and test different distributed tracing systems. These example can be performed using Minikube.","title":"Examples"},{"location":"user-guide/third-party-addons/opentracing/#zipkin","text":"In the rnburn/zipkin-date-server GitHub repository is an example of a dockerized date service. 
To install the example and Zipkin collector run: kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/zipkin.yaml kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/deployment.yaml Also we need to configure the NGINX controller ConfigMap with the required values: $ echo ' apiVersion: v1 kind: ConfigMap data: enable-opentracing: \"true\" zipkin-collector-host: zipkin.default.svc.cluster.local metadata: name: nginx-configuration namespace: kube-system ' | kubectl replace -f - In the Zipkin interface we can see the details:","title":"Zipkin"},{"location":"user-guide/third-party-addons/opentracing/#jaeger","text":"Enable Ingress addon in Minikube: $ minikube addons enable ingress Add Minikube IP to /etc/hosts: $ echo \" $( minikube ip ) example.com\" | sudo tee -a /etc/hosts Apply a basic Service and Ingress Resource: # Create Echoheaders Deployment $ kubectl run echoheaders --image=k8s.gcr.io/echoserver:1.4 --replicas=1 --port=8080 # Expose as a Cluster-IP $ kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x # Apply the Ingress Resource $ echo ' apiVersion: extensions/v1beta1 kind: Ingress metadata: name: echo-ingress spec: rules: - host: example.com http: paths: - backend: serviceName: echoheaders-x servicePort: 80 path: /echo ' | kubectl apply -f - Enable OpenTracing and set the jaeger-collector-host: $ echo ' apiVersion: v1 kind: ConfigMap data: enable-opentracing: \"true\" jaeger-collector-host: jaeger-agent.default.svc.cluster.local metadata: name: nginx-configuration namespace: kube-system ' | kubectl replace -f - Apply the Jaeger All-In-One Template: $ kubectl apply -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/all-in-one/jaeger-all-in-one-template.yml Make a few requests to the Service: $ curl example.com/echo -d \"meow\" CLIENT VALUES: client_address = 172 .17.0.5 command = POST real path = /echo query = nil request_version = 1 .1 request_uri = http://example.com:8080/echo SERVER VALUES: server_version = nginx: 1 .10.0 - lua: 10001 HEADERS RECEIVED: accept = */* connection = close content-length = 4 content-type = application/x-www-form-urlencoded host = example.com user-agent = curl/7.54.0 x-forwarded-for = 192 .168.99.1 x-forwarded-host = example.com x-forwarded-port = 80 x-forwarded-proto = http x-original-uri = /echo x-real-ip = 192 .168.99.1 x-scheme = http BODY: meow View the Jaeger UI: $ minikube service jaeger-query --url http://192.168.99.100:30183 In the Jaeger interface we can see the details:","title":"Jaeger"}]} \ No newline at end of file +{"config":{"lang":["en"],"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Welcome \u00b6 This is the documentation for the NGINX Ingress Controller. It is built around the Kubernetes Ingress resource , using a ConfigMap to store the NGINX configuration. Learn more about using Ingress on k8s.io . Getting Started \u00b6 See Deployment for a whirlwind tour that will get you started.","title":"Welcome"},{"location":"#welcome","text":"This is the documentation for the NGINX Ingress Controller. It is built around the Kubernetes Ingress resource , using a ConfigMap to store the NGINX configuration. 
Learn more about using Ingress on k8s.io .","title":"Welcome"},{"location":"#getting-started","text":"See Deployment for a whirlwind tour that will get you started.","title":"Getting Started"},{"location":"development/","text":"Developing for NGINX Ingress Controller \u00b6 This document explains how to get started with developing for NGINX Ingress controller. It includes how to build, test, and release ingress controllers. Quick Start \u00b6 Getting the code \u00b6 The code must be checked out as a subdirectory of k8s.io, and not github.com. mkdir -p $GOPATH/src/k8s.io cd $GOPATH/src/k8s.io # Replace \"$YOUR_GITHUB_USERNAME\" below with your github username git clone https://github.com/$YOUR_GITHUB_USERNAME/ingress-nginx.git cd ingress-nginx Initial developer environment build \u00b6 Prerequisites : Minikube must be installed. See releases for installation instructions. If you are using macOS and deploying to minikube , the following command will build the local nginx controller container image and deploy the ingress controller onto a minikube cluster with RBAC enabled in the namespace ingress-nginx : $ make dev-env Updating the deployment \u00b6 The nginx controller container image can be rebuilt using: $ ARCH = amd64 TAG = dev REGISTRY = $USER /ingress-controller make build container The image will only be used by pods created after the rebuild. To delete old pods, which will cause new ones to spin up: $ kubectl get pods -n ingress-nginx $ kubectl delete pod -n ingress-nginx nginx-ingress-controller- Dependencies \u00b6 The build uses dependencies in the vendor directory, which must be installed before building a binary/image. Occasionally, you might need to update the dependencies. This guide requires you to install the dep dependency tool. Check the version of dep you are using and make sure it is up to date. $ dep version dep: version : devel build date : git hash : go version : go1.9 go compiler : gc platform : linux/amd64 If you have an older version of dep , you can update it as follows: $ go get -u github.com/golang/dep This will automatically save the dependencies to the vendor/ directory. 
$ cd $GOPATH /src/k8s.io/ingress-nginx $ make e2e-test To run unit-tests for lua code locally, run: $ cd $GOPATH /src/k8s.io/ingress-nginx $ ./rootfs/etc/nginx/lua/test/up.sh $ make lua-test Lua tests are located in $GOPATH/src/k8s.io/ingress-nginx/rootfs/etc/nginx/lua/test . When creating a new test file it must follow the naming convention _test.lua or it will be ignored. Releasing \u00b6 All Makefiles will produce a release binary, as shown above. To publish this to a wider Kubernetes user base, push the image to a container registry, like gcr.io . All release images are hosted under gcr.io/google_containers and tagged according to a semver scheme. An example release might look like: $ make release Please follow these guidelines to cut a release: Update the release page with a short description of the major changes that correspond to a given image tag. Cut a release branch, if appropriate. Release branches follow the format of controller-release-version . Typically, pre-releases are cut from HEAD. All major feature work is done in HEAD. Specific bug fixes are cherry-picked into a release branch. If you're not confident about the stability of the code, tag it as alpha or beta. Typically, a release branch should have stable code.","title":"Development"},{"location":"development/#developing-for-nginx-ingress-controller","text":"This document explains how to get started with developing for NGINX Ingress controller. It includes how to build, test, and release ingress controllers.","title":"Developing for NGINX Ingress Controller"},{"location":"development/#quick-start","text":"","title":"Quick Start"},{"location":"development/#getting-the-code","text":"The code must be checked out as a subdirectory of k8s.io, and not github.com. mkdir -p $GOPATH/src/k8s.io cd $GOPATH/src/k8s.io # Replace \"$YOUR_GITHUB_USERNAME\" below with your github username git clone https://github.com/$YOUR_GITHUB_USERNAME/ingress-nginx.git cd ingress-nginx","title":"Getting the code"},{"location":"development/#initial-developer-environment-build","text":"Prequisites : Minikube must be installed. See releases for installation instructions. If you are using MacOS and deploying to minikube , the following command will build the local nginx controller container image and deploy the ingress controller onto a minikube cluster with RBAC enabled in the namespace ingress-nginx : $ make dev-env","title":"Initial developer environment build"},{"location":"development/#updating-the-deployment","text":"The nginx controller container image can be rebuilt using: $ ARCH = amd64 TAG = dev REGISTRY = $USER /ingress-controller make build container The image will only be used by pods created after the rebuild. To delete old pods which will cause new ones to spin up: $ kubectl get pods -n ingress-nginx $ kubectl delete pod -n ingress-nginx nginx-ingress-controller-","title":"Updating the deployment"},{"location":"development/#dependencies","text":"The build uses dependencies in the vendor directory, which must be installed before building a binary/image. Occasionally, you might need to update the dependencies. This guide requires you to install the dep dependency tool. Check the version of dep you are using and make sure it is up to date. $ dep version dep: version : devel build date : git hash : go version : go1.9 go compiler : gc platform : linux/amd64 If you have an older version of dep , you can update it as follows: $ go get -u github.com/golang/dep This will automatically save the dependencies to the vendor/ directory. 
$ cd $GOPATH /src/k8s.io/ingress-nginx $ dep ensure $ dep ensure -update $ dep prune","title":"Dependencies"},{"location":"development/#building","text":"All ingress controllers are built through a Makefile. Depending on your requirements you can build a raw server binary, a local container image, or push an image to a remote repository. In order to use your local Docker, you may need to set the following environment variables: # \"gcloud docker\" ( default ) or \"docker\" $ export DOCKER = # \"quay.io/kubernetes-ingress-controller\" ( default ) , \"index.docker.io\" , or your own registry $ export REGISTRY = To find the registry simply run: docker system info | grep Registry","title":"Building"},{"location":"development/#nginx-controller","text":"Build a raw server binary $ make build TODO : add more specific instructions needed for raw server binary. Build a local container image $ TAG = REGISTRY = $USER /ingress-controller make docker-build Push the container image to a remote repository $ TAG = REGISTRY = $USER /ingress-controller make docker-push","title":"Nginx Controller"},{"location":"development/#deploying","text":"There are several ways to deploy the ingress controller onto a cluster. Please check the deployment guide","title":"Deploying"},{"location":"development/#testing","text":"To run unit-tests, just run $ cd $GOPATH /src/k8s.io/ingress-nginx $ make test If you have access to a Kubernetes cluster, you can also run e2e tests using ginkgo. $ cd $GOPATH /src/k8s.io/ingress-nginx $ make e2e-test To run unit-tests for lua code locally, run: $ cd $GOPATH /src/k8s.io/ingress-nginx $ ./rootfs/etc/nginx/lua/test/up.sh $ make lua-test Lua tests are located in $GOPATH/src/k8s.io/ingress-nginx/rootfs/etc/nginx/lua/test . When creating a new test file it must follow the naming convention _test.lua or it will be ignored.","title":"Testing"},{"location":"development/#releasing","text":"All Makefiles will produce a release binary, as shown above. To publish this to a wider Kubernetes user base, push the image to a container registry, like gcr.io . All release images are hosted under gcr.io/google_containers and tagged according to a semver scheme. An example release might look like: $ make release Please follow these guidelines to cut a release: Update the release page with a short description of the major changes that correspond to a given image tag. Cut a release branch, if appropriate. Release branches follow the format of controller-release-version . Typically, pre-releases are cut from HEAD. All major feature work is done in HEAD. Specific bug fixes are cherry-picked into a release branch. If you're not confident about the stability of the code, tag it as alpha or beta. Typically, a release branch should have stable code.","title":"Releasing"},{"location":"how-it-works/","text":"How it works \u00b6 The objective of this document is to explain how the NGINX Ingress controller works, in particular how the NGINX model is built and why we need one. NGINX configuration \u00b6 The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app) . We use https://github.com/openresty/lua-nginx-module to achieve this. Check below to learn more about how it's done. 
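One way to observe this behavior is to scale a backend Deployment and then inspect the dynamically-loaded backends with the /dbg tool described in the Troubleshooting section, instead of watching for a configuration reload. A sketch; all names are placeholders:
$ kubectl scale deployment <your-app> --replicas=3
$ kubectl exec -n <namespace> <controller-pod> -- /dbg backends get <backend-name>
The endpoint list reflects the new replicas while nginx.conf remains unchanged.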
NGINX model \u00b6 Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. For this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps to generate a point-in-time configuration file that reflects the state of the cluster. To get these objects from the cluster, we use Kubernetes Informers , in particular, FilteredSharedInformer . These informers allow reacting to changes, using callbacks, when a new object is added, modified or removed. Unfortunately, there is no way to know if a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of the cluster and compare it to the current model. If the new model equals the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so, we then send the new list of Endpoints to a Lua handler running inside Nginx using an HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between the running and new models is about more than just Endpoints, we create a new NGINX configuration based on the new model, replace the current model and trigger a reload. One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions. The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template. Building the NGINX model \u00b6 Building a model is an expensive operation, for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to not lose changes and remove the use of sync.Mutex to force a single execution of the sync loop and additionally it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller; this is one of the reasons for the work queue . Operations to build the model: Order Ingress rules by CreationTimestamp field, i.e., old rules first. If the same path for the same host is defined in more than one Ingress, the oldest rule wins. If more than one Ingress contains a TLS section for the same host, the oldest rule wins. If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins. Create a list of NGINX Servers (per hostname) Create a list of NGINX Upstreams If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. Annotations are applied to all the paths in the Ingress. Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses. When a reload is required \u00b6 The next list describes the scenarios when a reload is required: New Ingress Resource Created. A TLS section is added to an existing Ingress. Change in Ingress annotations that impacts more than just upstream configuration. For instance load-balance annotation does not require a reload. A path is added/removed from an Ingress. An Ingress, Service, Secret is removed. 
Some missing referenced object from the Ingress is available, like a Service or Secret. A Secret is updated. Avoiding reloads \u00b6 In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, basically, new changes do not replace worker processes. Avoiding reloads on Endpoints changes \u00b6 On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request Lua code running in balancer_by_lua context detects what endpoints it should choose upstream peer from and applies the configured load balancing algorithm to choose the peer. Then Nginx takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this includes annotation changes that affects only upstream configuration in Nginx as well. In a relatively big clusters with frequently deploying apps this feature saves significant number of Nginx reloads which can otherwise affect response latency, load balancing quality (after every reload Nginx resets the state of load balancing) and so on.","title":"How it works"},{"location":"how-it-works/#how-it-works","text":"The objective of this document is to explain how the NGINX Ingress controller works, in particular how the NGINX model is built and why we need one.","title":"How it works"},{"location":"how-it-works/#nginx-configuration","text":"The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app) . We use https://github.com/openresty/lua-nginx-module to achieve this. Check below to learn more about how it's done.","title":"NGINX configuration"},{"location":"how-it-works/#nginx-model","text":"Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. To this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps to generate a point in time configuration file that reflects the state of the cluster. To get this object from the cluster, we use Kubernetes Informers , in particular, FilteredSharedInformer . This informers allows reacting to changes in using callbacks to individual changes when a new object is added, modified or removed. Unfortunately, there is no way to know if a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of cluster and compare it to the current model. If the new model equals to the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. 
If so we then send the new list of Endpoints to a Lua handler running inside Nginx using HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between running and new model is about more than just Endpoints we create a new NGINX configuration based on the new model, replace the current model and trigger a reload. One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions. The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.","title":"NGINX model"},{"location":"how-it-works/#building-the-nginx-model","text":"Building a model is an expensive operation, for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to not lose changes and remove the use of sync.Mutex to force a single execution of the sync loop and additionally it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller and one of the reasons for the work queue . Operations to build the model: Order Ingress rules by CreationTimestamp field, i.e., old rules first. If the same path for the same host is defined in more than one Ingress, the oldest rule wins. If more than one Ingress contains a TLS section for the same host, the oldest rule wins. If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins. Create a list of NGINX Servers (per hostname) Create a list of NGINX Upstreams If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. Annotations are applied to all the paths in the Ingress. Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.","title":"Building the NGINX model"},{"location":"how-it-works/#when-a-reload-is-required","text":"The next list describes the scenarios when a reload is required: New Ingress Resource Created. TLS section is added to existing Ingress. Change in Ingress annotations that impacts more than just upstream configuration. For instance load-balance annotation does not require a reload. A path is added/removed from an Ingress. An Ingress, Service, Secret is removed. Some missing referenced object from the Ingress is available, like a Service or Secret. A Secret is updated.","title":"When a reload is required"},{"location":"how-it-works/#avoiding-reloads","text":"In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, basically, new changes do not replace worker processes.","title":"Avoiding reloads"},{"location":"how-it-works/#avoiding-reloads-on-endpoints-changes","text":"On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. 
Then for every request Lua code running in balancer_by_lua context detects what endpoints it should choose upstream peer from and applies the configured load balancing algorithm to choose the peer. Then Nginx takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this includes annotation changes that affects only upstream configuration in Nginx as well. In a relatively big clusters with frequently deploying apps this feature saves significant number of Nginx reloads which can otherwise affect response latency, load balancing quality (after every reload Nginx resets the state of load balancing) and so on.","title":"Avoiding reloads on Endpoints changes"},{"location":"troubleshooting/","text":"Troubleshooting \u00b6 Ingress-Controller Logs and Events \u00b6 There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information. Check the Ingress Resource Events $ kubectl get ing -n NAME HOSTS ADDRESS PORTS AGE cafe-ingress cafe.com 10.0.2.15 80 25s $ kubectl describe ing -n Name: cafe-ingress Namespace: default Address: 10.0.2.15 Default backend: default-http-backend:80 (172.17.0.5:8080) Rules: Host Path Backends ---- ---- -------- cafe.com /tea tea-svc:80 () /coffee coffee-svc:80 () Annotations: kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"/apis/extensions/v1beta1/namespaces/default/ingresses/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}} Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 1m nginx-ingress-controller Ingress default/cafe-ingress Normal UPDATE 58s nginx-ingress-controller Ingress default/cafe-ingress Check the Ingress Controller Logs $ kubectl get pods -n NAME READY STATUS RESTARTS AGE nginx-ingress-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl logs -n nginx-ingress-controller-67956bf89d-fv58j ------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.14.0 Build: git-734361d Repository: https://github.com/kubernetes/ingress-nginx ------------------------------------------------------------------------------- .... Check the Nginx Configuration $ kubectl get pods -n NAME READY STATUS RESTARTS AGE nginx-ingress-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl exec -it -n nginx-ingress-controller-67956bf89d-fv58j cat /etc/nginx/nginx.conf daemon off; worker_processes 2; pid /run/nginx.pid; worker_rlimit_nofile 523264; worker_shutdown_timeout 10s; events { multi_accept on; worker_connections 16384; use epoll; } http { .... 
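The full configuration dump can be very long. To narrow it down to a single server block you can, for example, pipe it through grep; the host name below is illustrative:
$ kubectl exec -it -n <namespace> nginx-ingress-controller-67956bf89d-fv58j -- cat /etc/nginx/nginx.conf | grep -A 30 'server_name cafe.com'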
Check if the used Services Exist

$ kubectl get svc --all-namespaces
NAMESPACE     NAME                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
default       coffee-svc             ClusterIP   10.106.154.35    <none>        80/TCP          18m
default       kubernetes             ClusterIP   10.96.0.1        <none>        443/TCP         30m
default       tea-svc                ClusterIP   10.104.172.12    <none>        80/TCP          18m
kube-system   default-http-backend   NodePort    10.108.189.236   <none>        80:30001/TCP    30m
kube-system   kube-dns               ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP   30m
kube-system   kubernetes-dashboard   NodePort    10.103.128.17    <none>        80:30000/TCP    30m

Use the ingress-nginx kubectl plugin

Install krew, then run

$ ( set -x; cd "$(mktemp -d)" \
  && curl -fsSLO "https://github.com/kubernetes/ingress-nginx/releases/download/nginx-0.23.0/{ingress-nginx.yaml,kubectl-ingress_nginx-$(uname | tr '[:upper:]' '[:lower:]')-amd64.tar.gz}" \
  && kubectl krew install \
     --manifest=ingress-nginx.yaml --archive=kubectl-ingress_nginx-$(uname | tr '[:upper:]' '[:lower:]')-amd64.tar.gz )

to install the plugin. Then run

$ kubectl ingress-nginx --help

to make sure the plugin is properly installed and to get a list of commands. The plugin includes all of the commands present in the /dbg tool, plus a more detailed version of kubectl get ingresses, available by running kubectl ingress-nginx ingresses.

Use the /dbg Tool to Check Dynamic Configuration

$ kubectl exec -n <namespace> nginx-ingress-controller-67956bf89d-fv58j /dbg
dbg is a tool for quickly inspecting the state of the nginx instance

Usage:
  dbg [command]

Available Commands:
  backends  Inspect the dynamically-loaded backends information
  conf      Dump the contents of /etc/nginx/nginx.conf
  general   Output the general dynamic lua state
  help      Help about any command

Flags:
  -h, --help   help for dbg

Use "dbg [command] --help" for more information about a command.

$ kubectl exec -n <namespace> nginx-ingress-controller-67956bf89d-fv58j /dbg backends
Inspect the dynamically-loaded backends information.

Usage:
  dbg backends [command]

Available Commands:
  all   Output all dynamic backend information as a JSON array
  get   Output the backend information only for the backend that has this name
  list  Output a newline-separated list of the backend names

Flags:
  -h, --help   help for backends

Use "dbg backends [command] --help" for more information about a command.

$ kubectl exec -n <namespace> nginx-ingress-controller-67956bf89d-fv58j /dbg backends list
coffee-svc-80
tea-svc-80
upstream-default-backend

$ kubectl exec -n <namespace> nginx-ingress-controller-67956bf89d-fv58j /dbg backends get coffee-svc-80
{
  "endpoints": [
    { "address": "10.1.1.112", "port": "8080" },
    { "address": "10.1.1.119", "port": "8080" },
    { "address": "10.1.1.121", "port": "8080" }
  ],
  "load-balance": "ewma",
  "name": "coffee-svc-80",
  "noServer": false,
  "port": 0,
  "secureCACert": {
    "caFilename": "",
    "pemSha": "",
    "secret": ""
  },
  "service": {
    "metadata": {
      "creationTimestamp": null
    },
    "spec": {
....

Debug Logging

Using the flag --v=XX it is possible to increase the level of logging. This is performed by editing the deployment.
$ kubectl get deploy -n <namespace>
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default-http-backend       1         1         1            1           35m
nginx-ingress-controller   1         1         1            1           35m

$ kubectl edit deploy -n <namespace> nginx-ingress-controller
# Add --v=X to "- args", where X is an integer

- --v=2 shows details, using diff, about the changes in the configuration in nginx
- --v=3 shows details about the service, Ingress rule, and endpoint changes, and dumps the nginx configuration in JSON format
- --v=5 configures NGINX in debug mode
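For illustration, the resulting container args could look like the following; a minimal sketch in which the surrounding fields and the other flag are assumptions, not part of the original text:

containers:
- name: nginx-ingress-controller
  args:
  - /nginx-ingress-controller
  - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
  - --v=3   # increased log verbosity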
Authentication to the Kubernetes API Server

A number of components are involved in the authentication process, and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file. Both authentications must work:

+-------------+   service          +------------+
|             |   authentication   |            |
|  apiserver  +<-------------------+  ingress   |
|             |                    | controller |
+-------------+                    +------------+

Service authentication

The Ingress controller needs information from the apiserver, so authentication is required, which can be achieved in two different ways:

Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See the 'Service Account' section for details.

Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. Using --kubeconfig does not require the flag --apiserver-host. The format of the file is identical to ~/.kube/config, which is used by kubectl to connect to the API server. See the 'kubeconfig' section for details.

Using the flag --apiserver-host: Using the flag --apiserver-host=http://localhost:8080 it is possible to specify an unsecured API server or reach a remote Kubernetes cluster using kubectl proxy. Please do not use this approach in production.

In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left hand side.

Kubernetes                                                  Workstation
+---------------------------------------------------+      +------------------+
|                                                   |      |                  |
|  +-----------+   apiserver        +------------+  |      |  +------------+  |
|  |           |   proxy            |            |  |      |  |            |  |
|  | apiserver |                    |  ingress   |  |      |  |  ingress   |  |
|  |           |                    | controller |  |      |  | controller |  |
|  |           |                    |            |  |      |  |            |  |
|  |           |  service account/  |            |  |      |  |            |  |
|  |           |  kubeconfig        |            |  |      |  |            |  |
|  |           +<-------------------+            |  |      |  |            |  |
|  |           |                    |            |  |      |  |            |  |
|  +------+----+    kubeconfig      +------+-----+  |      |  +------+-----+  |
|         |<--------------------------------------------------------|        |
|         |                                         |      |                  |
+---------------------------------------------------+      +------------------+

Service Account

If using a service account to connect to the API server, the Ingress controller expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server.

Verify with the following commands:

# start a container that contains curl
$ kubectl run test --image=tutum/curl -- sleep 10000

# check that the container is running
$ kubectl get pods
NAME                   READY     STATUS    RESTARTS   AGE
test-701078429-s5kca   1/1       Running   0          16s

# check if the secret exists
$ kubectl exec test-701078429-s5kca ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt
namespace
token

# get the service IP of the master
$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.0.0.1     <none>        443/TCP   1d

# check base connectivity from inside the cluster
$ kubectl exec test-701078429-s5kca -- curl -k https://10.0.0.1
Unauthorized

# connect using tokens
$ TOKEN_VALUE=$(kubectl exec test-701078429-s5kca -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ echo $TOKEN_VALUE
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3Mi....9A
$ kubectl exec test-701078429-s5kca -- curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $TOKEN_VALUE" https://10.0.0.1
{
  "paths": [
    "/api", "/api/v1", "/apis", "/apis/apps", "/apis/apps/v1alpha1",
    "/apis/authentication.k8s.io", "/apis/authentication.k8s.io/v1beta1",
    "/apis/authorization.k8s.io", "/apis/authorization.k8s.io/v1beta1",
    "/apis/autoscaling", "/apis/autoscaling/v1",
    "/apis/batch", "/apis/batch/v1", "/apis/batch/v2alpha1",
    "/apis/certificates.k8s.io", "/apis/certificates.k8s.io/v1alpha1",
    "/apis/extensions", "/apis/extensions/v1beta1",
    "/apis/policy", "/apis/policy/v1alpha1",
    "/apis/rbac.authorization.k8s.io", "/apis/rbac.authorization.k8s.io/v1alpha1",
    "/apis/storage.k8s.io", "/apis/storage.k8s.io/v1beta1",
    "/healthz", "/healthz/ping", "/logs", "/metrics",
    "/swaggerapi/", "/ui/", "/version"
  ]
}

If it is not working, there are two possible reasons:

- The contents of the tokens are invalid. Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret <name>. It will automatically be recreated.
- You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter. Note that you should use other admission controllers as well; before configuring this option, you should read about admission controllers.

More information: User Guide: Service Accounts; Cluster Administrator Guide: Managing Service Accounts

Kube-Config

If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment.
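For illustration, the resulting args could look like the following; a minimal sketch that assumes the kubeconfig file is already mounted into the pod at /etc/kubernetes:

containers:
- name: nginx-ingress-controller
  args:
  - /nginx-ingress-controller
  - --kubeconfig=/etc/kubernetes/kubeconfig.yaml   # path inside the container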
Using GDB with Nginx

GDB can be used with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations. Note: the steps below are based on the nginx documentation.

SSH into the worker:

$ ssh user@workerIP

Obtain the Docker container running nginx:

$ docker ps | grep nginx-ingress-controller
CONTAINER ID   IMAGE                                                             COMMAND                  CREATED          STATUS          PORTS   NAMES
d9e1d243156a   quay.io/kubernetes-ingress-controller/nginx-ingress-controller   "/usr/bin/dumb-init …"   19 minutes ago   Up 19 minutes           k8s_nginx-ingress-controller_nginx-ingress-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0

Exec into the container:

$ docker exec -it --user=0 --privileged d9e1d243156a bash

Make sure nginx is running with --with-debug:

$ nginx -V 2>&1 | grep -- '--with-debug'

Get the list of processes running in the container:

$ ps -ef
UID     PID  PPID  C STIME TTY   TIME     CMD
root      1     0  0 20:23 ?     00:00:00 /usr/bin/dumb-init /nginx-ingres
root      5     1  0 20:23 ?     00:00:05 /nginx-ingress-controller --defa
root     21     5  0 20:23 ?     00:00:00 nginx: master process /usr/sbin/
nobody  106    21  0 20:23 ?     00:00:00 nginx: worker process
nobody  107    21  0 20:23 ?     00:00:00 nginx: worker process
root    172     0  0 20:43 pts/0 00:00:00 bash

Attach gdb to the nginx master process:

$ gdb -p 21
....
Attaching to process 21
Reading symbols from /usr/sbin/nginx...done.
....
(gdb)

Copy and paste the following:

set $cd = ngx_cycle->config_dump
set $nelts = $cd.nelts
set $elts = (ngx_conf_dump_t*)($cd.elts)
while ($nelts-- > 0)
  set $name = $elts[$nelts]->name.data
  printf "Dumping %s to nginx_conf.txt\n", $name
  append memory nginx_conf.txt $elts[$nelts]->buffer.start $elts[$nelts]->buffer.end
end

Quit GDB by pressing CTRL+D.

Open nginx_conf.txt:

cat nginx_conf.txt
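To inspect the dump outside the container, it can be copied to the worker node; a minimal sketch that assumes gdb was run from the container's root directory, so the file ended up at /nginx_conf.txt:

# run on the worker node, outside the container
$ docker cp d9e1d243156a:/nginx_conf.txt /tmp/nginx_conf.txt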
Installation Guide

Contents

- Prerequisite Generic Deployment Command
- Provider Specific Steps (Docker for Mac, minikube, AWS, GCE - GKE, Azure, Bare-metal)
- Verify installation
- Detect installed version
- Using Helm

Prerequisite Generic Deployment Command

The following Mandatory Command is required for all deployments.

Attention: The default configuration watches Ingress objects from all namespaces. To change this behavior, use the flag --watch-namespace to limit the scope to a particular namespace.

Warning: If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions.

Attention: If you're using GKE you need to initialize your user as a cluster-admin with the following command:

kubectl create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value account)

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

Provider Specific Steps

There are cloud provider specific yaml files.

Docker for Mac

Kubernetes is available in Docker for Mac (from version 18.06.0-ce). Create a service:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

minikube

For standard usage:

minikube addons enable ingress

For development:

Disable the ingress addon:

$ minikube addons disable ingress

Execute make dev-env, then confirm the nginx-ingress-controller deployment exists:

$ kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS    RESTARTS   AGE
default-http-backend-66b447d9cf-rrlf9      1/1     Running   0          12s
nginx-ingress-controller-fdcdcd6dd-vvpgs   1/1     Running   0          11s

AWS

In AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer.
Since Kubernetes v1.9.0 it is possible to use a classic load balancer (ELB) or a network load balancer (NLB). Please check the elastic load balancing AWS details page.

Elastic Load Balancer - ELB

This setup requires choosing in which layer (L4 or L7) we want to configure the ELB:

- Layer 4: use TCP as the listener protocol for ports 80 and 443.
- Layer 7: use HTTP as the listener protocol for port 80 and terminate TLS in the ELB.

For L4:

Check that no change is necessary with regards to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the ELB Idle Timeouts section for additional information. If a change is required, users will need to update the value of service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout in provider/aws/service-l4.yaml. Then execute:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l4.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l4.yaml

For L7:

Change the relevant line of the file provider/aws/service-l7.yaml, replacing the dummy id with a valid one: "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"

Check that no change is necessary with regards to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the ELB Idle Timeouts section for additional information. If a change is required, users will need to update the value of service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout in provider/aws/service-l7.yaml. Then execute:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l7.yaml

This example creates an ELB with just two listeners, one on port 80 and another on port 443.

ELB Idle Timeouts

In some scenarios users will need to modify the value of the ELB idle timeout. Users need to ensure the idle timeout is less than the keepalive_timeout that is configured for NGINX. By default, the NGINX keepalive_timeout is set to 75s. The default ELB idle timeout will work for most scenarios, unless the NGINX keepalive_timeout has been modified, in which case service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout will need to be modified to ensure it is less than the keepalive_timeout the user has configured. Please note: an idle timeout of 3600s is recommended when using WebSockets. More information regarding idle timeouts for your load balancer can be found in the official AWS documentation.
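For reference, the annotation discussed above is set on the ingress-nginx Service in provider/aws/service-l4.yaml (or service-l7.yaml); a minimal sketch, where the timeout value and the fields shown are illustrative, not the full file:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # must stay below the NGINX keepalive_timeout; 3600 is an assumed example
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
spec:
  type: LoadBalancer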
Network Load Balancer (NLB)

This type of load balancer is supported since v1.10.0 as an ALPHA feature.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-nlb.yaml

GCE-GKE

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

Important Note: proxy protocol is not supported in GCE/GKE.

Azure

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

Bare-metal

Using NodePort:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml

Tip: For extended notes regarding deployments on bare-metal, see Bare-metal considerations.

Verify installation

To check if the ingress controller pods have started, run the following command:

kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch

Once the controller pods are running, you can cancel the above command by typing Ctrl+C. Now you are ready to create your first ingress.

Detect installed version

To detect which version of the ingress controller is running, exec into the pod and run the nginx-ingress-controller --version command:

POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version

Using Helm

The NGINX Ingress controller can be installed via Helm using the chart stable/nginx-ingress from the official charts repository. To install the chart with the release name my-nginx:

helm install stable/nginx-ingress --name my-nginx

If the Kubernetes cluster has RBAC enabled, then run:

helm install stable/nginx-ingress --name my-nginx --set rbac.create=true

Detect installed version:

POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -- /nginx-ingress-controller --version
Bare-metal considerations

In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress controller for external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers.

The rest of this document describes a few recommended approaches to deploying the NGINX Ingress controller inside a Kubernetes cluster running on bare-metal.

A pure software solution: MetalLB

MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster. This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes. In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details.
Note: The description of other supported configuration modes is out of scope for this document.

Warning: MetalLB is currently in beta. Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly.

MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions.

MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. In the simplest possible scenario, the pool is composed of the IP addresses of Kubernetes nodes, but IP addresses can also be handed out by a DHCP server.

Example

Given the following 3-node Kubernetes cluster (the external IP is added as an example; in most bare-metal environments this value is <none>):

$ kubectl describe node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

After creating the following ConfigMap, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly.

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.2-203.0.113.3

$ kubectl -n ingress-nginx get svc
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)
default-http-backend   ClusterIP      10.0.64.249    <none>        80/TCP
ingress-nginx          LoadBalancer   10.0.220.217   203.0.113.3   80:30100/TCP,443:30101/TCP

As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service:

$ curl -D- http://203.0.113.3 -H 'Host: myapp.example.com'
HTTP/1.1 200 OK
Server: nginx/1.15.2

Tip: In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more detail in Traffic policies as well as in the next section.

Over a NodePort Service

Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide.

Info: A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services.

In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests.
Example

Given the NodePort 30100 allocated to the ingress-nginx Service:

$ kubectl -n ingress-nginx get svc
NAME                   TYPE        CLUSTER-IP     PORT(S)
default-http-backend   ClusterIP   10.0.64.249    80/TCP
ingress-nginx          NodePort    10.0.220.217   80:30100/TCP,443:30101/TCP

and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example; in most bare-metal environments this value is <none>):

$ kubectl describe node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

a client would reach an Ingress with host: myapp.example.com at http://myapp.example.com:30100, where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address.

Impact on the host system

While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved for system daemons and the necessity to grant kube-proxy privileges it may otherwise not require. This practice is therefore discouraged. See the other approaches proposed in this page for alternatives.

This approach has a few other limitations one ought to be aware of:

Source IP address

Services of type NodePort perform source address translation by default. This means the source IP of an HTTP request is always, from the perspective of NGINX, the IP address of the Kubernetes node that received the request. The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local (example), as shown in the sketch after this example.

Warning: This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on which nodes the NGINX Ingress controller should or should not be scheduled.

Example

In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example; in most bare-metal environments this value is <none>):

$ kubectl describe node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

with a nginx-ingress-controller Deployment composed of 2 replicas:

$ kubectl -n ingress-nginx get pod -o wide
NAME                                       READY   STATUS    IP           NODE
default-http-backend-7c5bc89cc9-p86md      1/1     Running   172.17.1.1   host-2
nginx-ingress-controller-cf9ff8c96-8vvf8   1/1     Running   172.17.0.3   host-3
nginx-ingress-controller-cf9ff8c96-pxsds   1/1     Running   172.17.1.4   host-2

Requests sent to host-2 and host-3 would be forwarded to NGINX and the original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node.
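A minimal sketch of applying this setting with kubectl patch; the Service name and namespace match the examples above:

$ kubectl -n ingress-nginx patch svc ingress-nginx \
    -p '{"spec":{"externalTrafficPolicy":"Local"}}'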
Ingress status

Because NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller does not update the status of Ingress objects it manages:

$ kubectl get ingress
NAME           HOSTS               ADDRESS   PORTS
test-ingress   myapp.example.com             80

Despite the fact that there is no load balancer providing a public IP address to the NGINX Ingress controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service.

Warning: There is more to setting externalIPs than just enabling the NGINX Ingress controller to update the status of Ingress objects. Please read about this option in the Services page of the official Kubernetes documentation, as well as the section about External IPs in this document, for more information.

Example

Given the following 3-node Kubernetes cluster (the external IP is added as an example; in most bare-metal environments this value is <none>):

$ kubectl describe node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

one could edit the ingress-nginx Service and add the following field to the object spec:

spec:
  externalIPs:
  - 203.0.113.1
  - 203.0.113.2
  - 203.0.113.3

which would in turn be reflected on Ingress objects as follows:

$ kubectl get ingress -o wide
NAME           HOSTS               ADDRESS                               PORTS
test-ingress   myapp.example.com   203.0.113.1,203.0.113.2,203.0.113.3   80

Redirects

As NGINX is not aware of the port translation operated by the NodePort Service, backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort.

Example

Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain, are generated without the NodePort:

$ curl -D- http://myapp.example.com:30100
HTTP/1.1 308 Permanent Redirect
Server: nginx/1.15.2
Location: https://myapp.example.com/  #-> missing NodePort in HTTPS redirect

Via the host network

In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the NGINX Ingress controller can bind ports 80 and 443 directly to the Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services.

Note: This approach does not leverage any Service object to expose the NGINX Ingress controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it.

This can be achieved by enabling the hostNetwork option in the Pods' spec:

template:
  spec:
    hostNetwork: true

Security considerations: Enabling this option exposes every system daemon to the NGINX Ingress controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully.

Example

Consider this nginx-ingress-controller Deployment composed of 2 replicas; NGINX Pods inherit the IP address of their host instead of an internal Pod IP:

$ kubectl -n ingress-nginx get pod -o wide
NAME                                       READY   STATUS    IP            NODE
default-http-backend-7c5bc89cc9-p86md      1/1     Running   172.17.1.1    host-2
nginx-ingress-controller-5b4cf5fc6-7lg6c   1/1     Running   203.0.113.3   host-3
nginx-ingress-controller-5b4cf5fc6-lzrls   1/1     Running   203.0.113.2   host-2

One major limitation of this deployment approach is that only a single NGINX Ingress controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to this situation fail with the following event:

$ kubectl -n ingress-nginx describe pod
...
Events:
  Type     Reason            From               Message
  ----     ------            ----               -------
  Warning  FailedScheduling  default-scheduler  0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.

One way to ensure only schedulable Pods are created is to deploy the NGINX Ingress controller as a DaemonSet instead of a traditional Deployment; see the sketch after the next note.
Info: A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods. For more information, see DaemonSet.

Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion.

Like with NodePorts, this approach has a few quirks it is important to be aware of:

DNS resolution

Pods configured with hostNetwork: true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet. Consider using this setting if NGINX is expected to resolve internal names for any reason.
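For illustration only, a minimal DaemonSet fragment combining the hostNetwork and dnsPolicy settings discussed above might look like the following; the image tag, labels and names are assumptions, not a complete or authoritative manifest:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      hostNetwork: true                      # bind ports 80/443 on the node
      dnsPolicy: ClusterFirstWithHostNet     # keep internal DNS resolution
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0  # assumed tag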
Ingress status

Because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply, and the status of all Ingress objects remains blank:

$ kubectl get ingress
NAME           HOSTS               ADDRESS   PORTS
test-ingress   myapp.example.com             80

Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the NGINX Ingress controller.

Example

Given a nginx-ingress-controller DaemonSet composed of 2 replicas:

$ kubectl -n ingress-nginx get pod -o wide
NAME                                       READY   STATUS    IP            NODE
default-http-backend-7c5bc89cc9-p86md      1/1     Running   172.17.1.1    host-2
nginx-ingress-controller-5b4cf5fc6-7lg6c   1/1     Running   203.0.113.3   host-3
nginx-ingress-controller-5b4cf5fc6-lzrls   1/1     Running   203.0.113.2   host-2

the controller sets the status of all Ingress objects it manages to the following value:

$ kubectl get ingress -o wide
NAME           HOSTS               ADDRESS                   PORTS
test-ingress   myapp.example.com   203.0.113.2,203.0.113.3   80

Note: Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments.

Using a self-provisioned edge

Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. a vendor appliance) or software (e.g. HAproxy) and is usually managed outside of the Kubernetes landscape by operations teams.

Such a deployment builds upon the NodePort Service described above in Over a NodePort Service, with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address.

On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes, as shown in the diagram below:

[Diagram: the edge component forwards TCP ports 80 and 443 to the HTTP/HTTPS NodePorts on the cluster nodes.]

External IPs

Source IP address: This method does not allow preserving the source IP of HTTP requests in any manner; it is therefore not recommended, despite its apparent simplicity.

The externalIPs Service option was previously mentioned in the NodePort section. As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node.

Example

Given the following 3-node Kubernetes cluster (the external IP is added as an example; in most bare-metal environments this value is <none>):

$ kubectl describe node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

and the following ingress-nginx NodePort Service:

$ kubectl -n ingress-nginx get svc
NAME            TYPE       CLUSTER-IP     PORT(S)
ingress-nginx   NodePort   10.0.220.217   80:30100/TCP,443:30101/TCP

one could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port:

spec:
  externalIPs:
  - 203.0.113.2
  - 203.0.113.3

$ curl -D- http://myapp.example.com:30100
HTTP/1.1 200 OK
Server: nginx/1.15.2

$ curl -D- http://myapp.example.com
HTTP/1.1 200 OK
Server: nginx/1.15.2

We assume the myapp.example.com subdomain above resolves to both the 203.0.113.2 and 203.0.113.3 IP addresses.
apiVersion : v1 kind : ConfigMap metadata : namespace : metallb-system name : config data : config : | address-pools: - name: default protocol: layer2 addresses: - 203.0.113.2-203.0.113.3 $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx LoadBalancer 10.0.220.217 203.0.113.3 80:30100/TCP,443:30101/TCP As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service: $ curl -D- http://203.0.113.3 -H 'Host: myapp.example.com' HTTP/1.1 200 OK Server: nginx/1.15.2 Tip In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more details in Traffic policies as well as in the next section.","title":"A pure software solution: MetalLB"},{"location":"deploy/baremetal/#over-a-nodeport-service","text":"Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide . Info A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services . In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests. Example Given the NodePort 30100 allocated to the ingress-nginx Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 a client would reach an Ingress with host : myapp . example . com at http://myapp.example.com:30100 , where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address. Impact on the host system While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require. This practice is therefore discouraged . See the other approaches proposed in this page for alternatives. This approach has a few other limitations one ought to be aware of: Source IP address Services of type NodePort perform source address translation by default. This means the source IP of a HTTP request is always the IP address of the Kubernetes node that received the request from the perspective of NGINX. 
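The external traffic policy discussed next is a single field on the ingress-nginx Service, so it can be toggled with a one-line patch. A minimal sketch, assuming the Service name and namespace used throughout this page:

```
# Switch the ingress-nginx Service to the Local external traffic policy
# (trade-offs are discussed in the text that follows).
$ kubectl -n ingress-nginx patch svc ingress-nginx \
    -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```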
The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local ( example ). Warning This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the NGINX Ingress controller should be scheduled or not scheduled. Example In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 with a nginx-ingress-controller Deployment composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3 nginx-ingress-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2 Requests sent to host-2 and host-3 would be forwarded to NGINX and original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node. Ingress status Because NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller does not update the status of Ingress objects it manages . $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Despite the fact there is no load balancer providing a public IP address to the NGINX Ingress controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service. Warning There is more to setting externalIPs than just enabling the NGINX Ingress controller to update the status of Ingress objects. Please read about this option in the Services page of official Kubernetes documentation as well as the section about External IPs in this document for more information. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 one could edit the ingress-nginx Service and add the following field to the object spec spec : externalIPs : - 203.0.113.1 - 203.0.113.2 - 203.0.113.3 which would in turn be reflected on Ingress objects as follows: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80 Redirects As NGINX is not aware of the port translation operated by the NodePort Service , backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort. 
Example Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain , are generated without the NodePort: $ curl -D- http://myapp.example.com:30100 HTTP/1.1 308 Permanent Redirect Server: nginx/1.15.2 Location: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect","title":"Over a NodePort Service"},{"location":"deploy/baremetal/#via-the-host-network","text":"In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the NGINX Ingress controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services. Note This approach does not leverage any Service object to expose the NGINX Ingress controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it . This can be achieved by enabling the hostNetwork option in the Pods' spec. template : spec : hostNetwork : true Security considerations Enabling this option exposes every system daemon to the NGINX Ingress controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully. Example Consider this nginx-ingress-controller Deployment composed of 2 replicas: the NGINX Pods inherit the IP address of their host instead of an internal Pod IP. $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 nginx-ingress-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 One major limitation of this deployment approach is that only a single NGINX Ingress controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to this situation fail with the following event: $ kubectl -n ingress-nginx describe pod ... Events: Type Reason From Message ---- ------ ---- ------- Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports. One way to ensure only schedulable Pods are created is to deploy the NGINX Ingress controller as a DaemonSet instead of a traditional Deployment. Info A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods . For more information, see DaemonSet . Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion (a minimal sketch follows below). As with NodePorts, this approach has a few quirks one should be aware of. DNS resolution Pods configured with hostNetwork : true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS ), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet . Consider using this setting if NGINX is expected to resolve internal names for any reason. Ingress status Because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank.
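As mentioned above, the DaemonSet manifest itself is left at the user's discretion. A minimal sketch of what it could look like; the labels, image tag and flags here are illustrative and should be copied from your existing Deployment:

```
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      # Bind ports 80/443 directly on the node and keep cluster-internal
      # DNS resolution working despite hostNetwork.
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.18.0
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        # Publish the nodes' internal IPs as Ingress status (see below)
        - --report-node-internal-ip-address
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
```

With such a DaemonSet in place, the blank Ingress status described above shows up as follows: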
$ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the NGINX Ingress controller. Example Given a nginx-ingress-controller DaemonSet composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 nginx-ingress-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 the controller sets the status of all Ingress objects it manages to the following value: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.2,203.0.113.3 80 Note Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments .","title":"Via the host network"},{"location":"deploy/baremetal/#using-a-self-provisioned-edge","text":"Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. HAproxy ) and is usually managed outside of the Kubernetes landscape by operations teams. Such deployment builds upon the NodePort Service described above in Over a NodePort Service , with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address. On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below:","title":"Using a self-provisioned edge"},{"location":"deploy/baremetal/#external-ips","text":"Source IP address This method does not allow preserving the source IP of HTTP requests in any manner, it is therefore not recommended to use it despite its apparent simplicity. The externalIPs Service option was previously mentioned in the NodePort section. As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node . 
Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 and the following ingress-nginx NodePort Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port: spec : externalIPs : - 203.0.113.2 - 203.0.113.3 $ curl -D- http://myapp.example.com:30100 HTTP/1.1 200 OK Server: nginx/1.15.2 $ curl -D- http://myapp.example.com HTTP/1.1 200 OK Server: nginx/1.15.2 We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.","title":"External IPs"},{"location":"deploy/rbac/","text":"Role Based Access Control (RBAC) \u00b6 Overview \u00b6 This example applies to nginx-ingress-controllers being deployed in an environment with RBAC enabled. Role Based Access Control is comprised of four layers: ClusterRole - permissions assigned to a role that apply to an entire cluster ClusterRoleBinding - binding a ClusterRole to a specific account Role - permissions assigned to a role that apply to a specific namespace RoleBinding - binding a Role to a specific account In order for RBAC to be applied to an nginx-ingress-controller, that controller should be assigned to a ServiceAccount . That ServiceAccount should be bound to the Role s and ClusterRole s defined for the nginx-ingress-controller. Service Accounts created in this example \u00b6 One ServiceAccount is created in this example, nginx-ingress-serviceaccount . Permissions Granted in this example \u00b6 There are two sets of permissions defined in this example. Cluster-wide permissions defined by the ClusterRole named nginx-ingress-clusterrole , and namespace specific permissions defined by the Role named nginx-ingress-role . Cluster Permissions \u00b6 These permissions are granted in order for the nginx-ingress-controller to be able to function as an ingress across the cluster. These permissions are granted to the ClusterRole named nginx-ingress-clusterrole configmaps , endpoints , nodes , pods , secrets : list, watch nodes : get services , ingresses : get, list, watch events : create, patch ingresses/status : update Namespace Permissions \u00b6 These permissions are granted specific to the nginx-ingress namespace. These permissions are granted to the Role named nginx-ingress-role configmaps , pods , secrets : get endpoints : get Furthermore to support leader-election, the nginx-ingress-controller needs to have access to a configmap using the resourceName ingress-controller-leader-nginx Note that resourceNames can NOT be used to limit requests using the \u201ccreate\u201d verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a \u201ccreate\u201d request are part of the request body). 
configmaps : get, update (for resourceName ingress-controller-leader-nginx ) configmaps : create This resourceName is the concatenation of the election-id and the ingress-class as defined by the ingress-controller, which defaults to: election-id : ingress-controller-leader ingress-class : nginx resourceName : <election-id>-<ingress-class> Please adapt accordingly if you overwrite either parameter when launching the nginx-ingress-controller. Bindings ¶ The ServiceAccount nginx-ingress-serviceaccount is bound to the Role nginx-ingress-role and the ClusterRole nginx-ingress-clusterrole . The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the nginx-ingress namespace.","title":"Role Based Access Control (RBAC)"},{"location":"deploy/rbac/#role-based-access-control-rbac","text":"","title":"Role Based Access Control (RBAC)"},{"location":"deploy/rbac/#overview","text":"This example applies to nginx-ingress-controllers being deployed in an environment with RBAC enabled. Role Based Access Control comprises four layers: ClusterRole - permissions assigned to a role that apply to an entire cluster ClusterRoleBinding - binding a ClusterRole to a specific account Role - permissions assigned to a role that apply to a specific namespace RoleBinding - binding a Role to a specific account In order for RBAC to be applied to an nginx-ingress-controller, that controller should be assigned to a ServiceAccount . That ServiceAccount should be bound to the Role s and ClusterRole s defined for the nginx-ingress-controller.","title":"Overview"},{"location":"deploy/rbac/#service-accounts-created-in-this-example","text":"One ServiceAccount is created in this example, nginx-ingress-serviceaccount .","title":"Service Accounts created in this example"},{"location":"deploy/rbac/#permissions-granted-in-this-example","text":"There are two sets of permissions defined in this example. Cluster-wide permissions defined by the ClusterRole named nginx-ingress-clusterrole , and namespace-specific permissions defined by the Role named nginx-ingress-role .","title":"Permissions Granted in this example"},{"location":"deploy/rbac/#cluster-permissions","text":"These permissions are granted in order for the nginx-ingress-controller to be able to function as an ingress across the cluster. These permissions are granted to the ClusterRole named nginx-ingress-clusterrole : configmaps , endpoints , nodes , pods , secrets : list, watch nodes : get services , ingresses : get, list, watch events : create, patch ingresses/status : update","title":"Cluster Permissions"},{"location":"deploy/rbac/#namespace-permissions","text":"These permissions are granted specifically to the nginx-ingress namespace. These permissions are granted to the Role named nginx-ingress-role : configmaps , pods , secrets : get endpoints : get Furthermore, to support leader election, the nginx-ingress-controller needs to have access to a configmap using the resourceName ingress-controller-leader-nginx . Note that resourceNames can NOT be used to limit requests using the “create” verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a “create” request are part of the request body). configmaps : get, update (for resourceName ingress-controller-leader-nginx ) configmaps : create This resourceName is the concatenation of the election-id and the ingress-class as defined by the ingress-controller, which defaults to: election-id : ingress-controller-leader ingress-class : nginx resourceName : <election-id>-<ingress-class> Please adapt accordingly if you overwrite either parameter when launching the nginx-ingress-controller.","title":"Namespace Permissions"},
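Taken together, the namespace rules above translate into a Role along these lines. This is a sketch assuming the names used in this example; your manifests may differ:

```
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: nginx-ingress
rules:
# Read access to the objects the controller needs in its own namespace
- apiGroups: [""]
  resources: ["configmaps", "pods", "secrets", "endpoints"]
  verbs: ["get"]
# Leader election: update only the election configmap, by resourceName
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["ingress-controller-leader-nginx"]
  verbs: ["get", "update"]
# create cannot be limited by resourceName (see the note above)
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
```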
{"location":"deploy/rbac/#bindings","text":"The ServiceAccount nginx-ingress-serviceaccount is bound to the Role nginx-ingress-role and the ClusterRole nginx-ingress-clusterrole . The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the nginx-ingress namespace.","title":"Bindings"},{"location":"deploy/upgrade/","text":"Upgrading ¶ Important No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx . Without Helm ¶ To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment. I.e. if your deployment resource looks like (partial example): kind : Deployment metadata : name : nginx-ingress-controller namespace : ingress-nginx spec : replicas : 1 selector : ... template : metadata : ... spec : containers : - name : nginx-ingress-controller image : quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 args : ... simply change the 0.9.0 tag to the version you wish to upgrade to. The easiest way to do this is e.g. (do note you may need to change the name parameter according to your installation): kubectl set image deployment/nginx-ingress-controller \ nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.18.0 For interactive editing, use kubectl edit deployment nginx-ingress-controller . With Helm ¶ If you installed ingress-nginx using the Helm command in the deployment docs so its name is ngx-ingress , you should be able to upgrade using helm upgrade --reuse-values ngx-ingress stable/nginx-ingress","title":"Upgrade"},{"location":"deploy/upgrade/#upgrading","text":"Important No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx .","title":"Upgrading"},{"location":"deploy/upgrade/#without-helm","text":"To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment. I.e. if your deployment resource looks like (partial example): kind : Deployment metadata : name : nginx-ingress-controller namespace : ingress-nginx spec : replicas : 1 selector : ... template : metadata : ... spec : containers : - name : nginx-ingress-controller image : quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 args : ... simply change the 0.9.0 tag to the version you wish to upgrade to. The easiest way to do this is e.g.
(do note you may need to change the name parameter according to your installation): kubectl set image deployment/nginx-ingress-controller \ nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.18.0 For interactive editing, use kubectl edit deployment nginx-ingress-controller .","title":"Without Helm"},{"location":"deploy/upgrade/#with-helm","text":"If you installed ingress-nginx using the Helm command in the deployment docs so its name is ngx-ingress , you should be able to upgrade using helm upgrade --reuse-values ngx-ingress stable/nginx-ingress","title":"With Helm"},{"location":"examples/","text":"Ingress examples ¶ This directory contains a catalog of examples on how to run, configure and scale Ingress. Please review the prerequisites before trying them. Category Name Description Complexity Level Apps Docker Registry TODO TODO Auth Basic authentication password protect your website Intermediate Auth Client certificate authentication secure your website with client certificate authentication Intermediate Auth External authentication plugin defer to an external authentication service Intermediate Auth OAuth external auth TODO TODO Customization Configuration snippets customize nginx location configuration using annotations Advanced Customization Custom configuration TODO TODO Customization Custom DH parameters for perfect forward secrecy TODO TODO Customization Custom errors serve custom error pages from the default backend Intermediate Customization Custom headers set custom headers before sending traffic to backends Advanced Customization External authentication with response header propagation TODO TODO Customization Sysctl tuning TODO TODO Features Rewrite TODO TODO Features Session stickiness route requests consistently to the same endpoint Advanced Scaling Static IP a single ingress gets a single static IP Intermediate TLS Multi TLS certificate termination TODO TODO TLS TLS termination TODO TODO","title":"Introduction"},{"location":"examples/#ingress-examples","text":"This directory contains a catalog of examples on how to run, configure and scale Ingress. Please review the prerequisites before trying them. Category Name Description Complexity Level Apps Docker Registry TODO TODO Auth Basic authentication password protect your website Intermediate Auth Client certificate authentication secure your website with client certificate authentication Intermediate Auth External authentication plugin defer to an external authentication service Intermediate Auth OAuth external auth TODO TODO Customization Configuration snippets customize nginx location configuration using annotations Advanced Customization Custom configuration TODO TODO Customization Custom DH parameters for perfect forward secrecy TODO TODO Customization Custom errors serve custom error pages from the default backend Intermediate Customization Custom headers set custom headers before sending traffic to backends Advanced Customization External authentication with response header propagation TODO TODO Customization Sysctl tuning TODO TODO Features Rewrite TODO TODO Features Session stickiness route requests consistently to the same endpoint Advanced Scaling Static IP a single ingress gets a single static IP Intermediate TLS Multi TLS certificate termination TODO TODO TLS TLS termination TODO TODO","title":"Ingress examples"},{"location":"examples/PREREQUISITES/","text":"Prerequisites ¶ Many of the examples in this directory have common prerequisites.
TLS certificates \u00b6 Unless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA key/cert pair with an arbitrarily chosen hostname, created as follows $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj \"/CN=nginxsvc/O=nginxsvc\" Generating a 2048 bit RSA private key ................+++ ................+++ writing new private key to 'tls.key' ----- $ kubectl create secret tls tls-secret --key tls.key --cert tls.crt secret \"tls-secret\" created Note: If using CA Authentication, described below, you will need to sign the server certificate with the CA. Client Certificate Authentication \u00b6 CA Authentication also known as Mutual Authentication allows both the server and client to verify each others identity via a common CA. We have a CA Certificate which we obtain usually from a Certificate Authority and use that to sign both our server certificate and client certificate. Then every time we want to access our backend, we must pass the client certificate. These instructions are based on the following blog Generate the CA Key and Certificate: $ openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 356 -nodes -subj '/CN=My Cert Authority' Generate the Server Key, and Certificate and Sign with the CA Certificate: $ openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=mydomain.com' $ openssl x509 -req -sha256 -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt Generate the Client Key, and Certificate and Sign with the CA Certificate: $ openssl req -new -newkey rsa:4096 -keyout client.key -out client.csr -nodes -subj '/CN=My Client' $ openssl x509 -req -sha256 -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt Once this is complete you can continue to follow the instructions here Test HTTP Service \u00b6 All examples that require a test HTTP Service use the standard http-svc pod, which you can deploy as follows $ kubectl create -f http-svc.yaml service \"http-svc\" created replicationcontroller \"http-svc\" created $ kubectl get po NAME READY STATUS RESTARTS AGE http-svc-p1t3t 1/1 Running 0 1d $ kubectl get svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-svc 10.0.122.116 80:30301/TCP 1d You can test that the HTTP Service works by exposing it temporarily $ kubectl patch svc http-svc -p '{\"spec\":{\"type\": \"LoadBalancer\"}}' \"http-svc\" patched $ kubectl get svc http-svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-svc 10.0.122.116 80:30301/TCP 1d $ kubectl describe svc http-svc Name: http-svc Namespace: default Labels: app=http-svc Selector: app=http-svc Type: LoadBalancer IP: 10.0.122.116 LoadBalancer Ingress: 108.59.87.136 Port: http 80/TCP NodePort: http 30301/TCP Endpoints: 10.180.1.6:8080 Session Affinity: None Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1m 1m 1 {service-controller } Normal Type ClusterIP -> LoadBalancer 1m 1m 1 {service-controller } Normal CreatingLoadBalancer Creating load balancer 16s 16s 1 {service-controller } Normal CreatedLoadBalancer Created load balancer $ curl 108 .59.87.136 CLIENT VALUES: client_address=10.240.0.3 command=GET real path=/ query=nil request_version=1.1 request_uri=http://108.59.87.136:8080/ SERVER VALUES: server_version=nginx: 1.9.11 - lua: 10001 HEADERS RECEIVED: accept=*/* host=108.59.87.136 user-agent=curl/7.46.0 BODY: -no body in request- $ kubectl patch svc http-svc 
-p '{\"spec\":{\"type\": \"NodePort\"}}' \"http-svc\" patched","title":"Prerequisites"},{"location":"examples/PREREQUISITES/#prerequisites","text":"Many of the examples in this directory have common prerequisites.","title":"Prerequisites"},{"location":"examples/PREREQUISITES/#tls-certificates","text":"Unless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA key/cert pair with an arbitrarily chosen hostname, created as follows $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj \"/CN=nginxsvc/O=nginxsvc\" Generating a 2048 bit RSA private key ................+++ ................+++ writing new private key to 'tls.key' ----- $ kubectl create secret tls tls-secret --key tls.key --cert tls.crt secret \"tls-secret\" created Note: If using CA Authentication, described below, you will need to sign the server certificate with the CA.","title":"TLS certificates"},{"location":"examples/PREREQUISITES/#client-certificate-authentication","text":"CA Authentication also known as Mutual Authentication allows both the server and client to verify each others identity via a common CA. We have a CA Certificate which we obtain usually from a Certificate Authority and use that to sign both our server certificate and client certificate. Then every time we want to access our backend, we must pass the client certificate. These instructions are based on the following blog Generate the CA Key and Certificate: $ openssl req -x509 -sha256 -newkey rsa:4096 -keyout ca.key -out ca.crt -days 356 -nodes -subj '/CN=My Cert Authority' Generate the Server Key, and Certificate and Sign with the CA Certificate: $ openssl req -new -newkey rsa:4096 -keyout server.key -out server.csr -nodes -subj '/CN=mydomain.com' $ openssl x509 -req -sha256 -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt Generate the Client Key, and Certificate and Sign with the CA Certificate: $ openssl req -new -newkey rsa:4096 -keyout client.key -out client.csr -nodes -subj '/CN=My Client' $ openssl x509 -req -sha256 -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt Once this is complete you can continue to follow the instructions here","title":"Client Certificate Authentication"},{"location":"examples/PREREQUISITES/#test-http-service","text":"All examples that require a test HTTP Service use the standard http-svc pod, which you can deploy as follows $ kubectl create -f http-svc.yaml service \"http-svc\" created replicationcontroller \"http-svc\" created $ kubectl get po NAME READY STATUS RESTARTS AGE http-svc-p1t3t 1/1 Running 0 1d $ kubectl get svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-svc 10.0.122.116 80:30301/TCP 1d You can test that the HTTP Service works by exposing it temporarily $ kubectl patch svc http-svc -p '{\"spec\":{\"type\": \"LoadBalancer\"}}' \"http-svc\" patched $ kubectl get svc http-svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-svc 10.0.122.116 80:30301/TCP 1d $ kubectl describe svc http-svc Name: http-svc Namespace: default Labels: app=http-svc Selector: app=http-svc Type: LoadBalancer IP: 10.0.122.116 LoadBalancer Ingress: 108.59.87.136 Port: http 80/TCP NodePort: http 30301/TCP Endpoints: 10.180.1.6:8080 Session Affinity: None Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1m 1m 1 {service-controller } Normal Type ClusterIP -> LoadBalancer 1m 1m 1 {service-controller } Normal CreatingLoadBalancer Creating 
load balancer 16s 16s 1 {service-controller } Normal CreatedLoadBalancer Created load balancer $ curl 108 .59.87.136 CLIENT VALUES: client_address=10.240.0.3 command=GET real path=/ query=nil request_version=1.1 request_uri=http://108.59.87.136:8080/ SERVER VALUES: server_version=nginx: 1.9.11 - lua: 10001 HEADERS RECEIVED: accept=*/* host=108.59.87.136 user-agent=curl/7.46.0 BODY: -no body in request- $ kubectl patch svc http-svc -p '{\"spec\":{\"type\": \"NodePort\"}}' \"http-svc\" patched","title":"Test HTTP Service"},{"location":"examples/affinity/cookie/","text":"Sticky sessions \u00b6 This example demonstrates how to achieve session affinity using cookies. Deployment \u00b6 Session affinity can be configured using the following annotations: Name Description Value nginx.ingress.kubernetes.io/affinity Type of the affinity, set this to cookie to enable session affinity string (NGINX only supports cookie ) nginx.ingress.kubernetes.io/session-cookie-name Name of the cookie that will be created string (defaults to INGRESSCOOKIE ) nginx.ingress.kubernetes.io/session-cookie-path Path that will be set on the cookie (required if your Ingress paths use regular expressions) string (defaults to the currently matched path ) nginx.ingress.kubernetes.io/session-cookie-max-age Time until the cookie expires, corresponds to the Max-Age cookie directive number of seconds nginx.ingress.kubernetes.io/session-cookie-expires Legacy version of the previous annotation for compatibility with older browsers, generates an Expires cookie directive by adding the seconds to the current date number of seconds You can create the example Ingress to test this: kubectl create -f ingress.yaml Validation \u00b6 You can confirm that the Ingress works: $ kubectl describe ing nginx-test Name: nginx-test Namespace: default Address: Default backend: default-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080) Rules: Host Path Backends ---- ---- -------- stickyingress.example.com / nginx-service:80 () Annotations: affinity: cookie session-cookie-name: INGRESSCOOKIE session-cookie-expires: 172800 session-cookie-max-age: 172800 Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 7s 7s 1 {nginx-ingress-controller } Normal CREATE default/nginx-test $ curl -I http://stickyingress.example.com HTTP/1.1 200 OK Server: nginx/1.11.9 Date: Fri, 10 Feb 2017 14:11:12 GMT Content-Type: text/html Content-Length: 612 Connection: keep-alive Set-Cookie: INGRESSCOOKIE=a9907b79b248140b56bb13723f72b67697baac3d; Expires=Sun, 12-Feb-17 14:11:12 GMT; Max-Age=172800; Path=/; HttpOnly Last-Modified: Tue, 24 Jan 2017 14:02:19 GMT ETag: \"58875e6b-264\" Accept-Ranges: bytes In the example above, you can see that the response contains a Set-Cookie header with the settings we have defined. This cookie is created by NGINX, it contains a randomly generated key corresponding to the upstream used for that request (selected using consistent hashing ) and has an Expires directive. If the user changes this cookie, NGINX creates a new one and redirects the user to another upstream. If the backend pool grows NGINX will keep sending the requests through the same server of the first request, even if it's overloaded. When the backend server is removed, the requests are re-routed to another upstream server. This does not require the cookie to be updated because the key's consistent hash will change. 
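For reference, the nginx-test Ingress validated above can be reproduced with a manifest along the following lines. This is a sketch modeled on this example; the host, Service name and annotation values are taken from the kubectl describe output above:

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    # Enable cookie-based session affinity
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    # 48 hours, emitted as both Expires (legacy) and Max-Age
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: stickyingress.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80
```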
When more than one Ingress points to the same Service but only one of them carries the affinity configuration, the first created Ingress will be used. This means you can end up in a situation where you have configured session affinity on one Ingress and it does not take effect, because another Ingress pointing to the same Service was created first and does not configure it.","title":"Sticky Sessions"},{"location":"examples/affinity/cookie/#sticky-sessions","text":"This example demonstrates how to achieve session affinity using cookies.","title":"Sticky sessions"},{"location":"examples/affinity/cookie/#deployment","text":"Session affinity can be configured using the following annotations: Name Description Value nginx.ingress.kubernetes.io/affinity Type of the affinity, set this to cookie to enable session affinity string (NGINX only supports cookie ) nginx.ingress.kubernetes.io/session-cookie-name Name of the cookie that will be created string (defaults to INGRESSCOOKIE ) nginx.ingress.kubernetes.io/session-cookie-path Path that will be set on the cookie (required if your Ingress paths use regular expressions) string (defaults to the currently matched path ) nginx.ingress.kubernetes.io/session-cookie-max-age Time until the cookie expires, corresponds to the Max-Age cookie directive number of seconds nginx.ingress.kubernetes.io/session-cookie-expires Legacy version of the previous annotation for compatibility with older browsers, generates an Expires cookie directive by adding the seconds to the current date number of seconds You can create the example Ingress to test this: kubectl create -f ingress.yaml","title":"Deployment"},{"location":"examples/affinity/cookie/#validation","text":"You can confirm that the Ingress works: $ kubectl describe ing nginx-test Name: nginx-test Namespace: default Address: Default backend: default-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080) Rules: Host Path Backends ---- ---- -------- stickyingress.example.com / nginx-service:80 () Annotations: affinity: cookie session-cookie-name: INGRESSCOOKIE session-cookie-expires: 172800 session-cookie-max-age: 172800 Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 7s 7s 1 {nginx-ingress-controller } Normal CREATE default/nginx-test $ curl -I http://stickyingress.example.com HTTP/1.1 200 OK Server: nginx/1.11.9 Date: Fri, 10 Feb 2017 14:11:12 GMT Content-Type: text/html Content-Length: 612 Connection: keep-alive Set-Cookie: INGRESSCOOKIE=a9907b79b248140b56bb13723f72b67697baac3d; Expires=Sun, 12-Feb-17 14:11:12 GMT; Max-Age=172800; Path=/; HttpOnly Last-Modified: Tue, 24 Jan 2017 14:02:19 GMT ETag: \"58875e6b-264\" Accept-Ranges: bytes In the example above, you can see that the response contains a Set-Cookie header with the settings we have defined. This cookie is created by NGINX; it contains a randomly generated key corresponding to the upstream used for that request (selected using consistent hashing ) and has an Expires directive. If the user changes this cookie, NGINX creates a new one and redirects the user to another upstream. If the backend pool grows, NGINX will keep sending the requests to the server that handled the first request, even if it is overloaded. When the backend server is removed, the requests are re-routed to another upstream server. This does not require the cookie to be updated, because the key's consistent hash will change.
When you have a Service pointing to more than one Ingress, with only one containing affinity configuration, the first created Ingress will be used. This means that you can face the situation that you've configured session affinity on one Ingress and it doesn't work because the Service is pointing to another Ingress that doesn't configure this.","title":"Validation"},{"location":"examples/auth/basic/","text":"Basic Authentication \u00b6 This example shows how to add authentication in a Ingress rule using a secret that contains a file generated with htpasswd . It's important the file generated is named auth (actually - that the secret has a key data.auth ), otherwise the ingress-controller returns a 503. $ htpasswd -c auth foo New password: New password: Re-type new password: Adding password for user foo $ kubectl create secret generic basic-auth --from-file = auth secret \"basic-auth\" created $ kubectl get secret basic-auth -o yaml apiVersion: v1 data: auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK kind: Secret metadata: name: basic-auth namespace: default type: Opaque echo \" apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-with-auth annotations: # type of authentication nginx.ingress.kubernetes.io/auth-type: basic # name of the secret that contains the user/password definitions nginx.ingress.kubernetes.io/auth-secret: basic-auth # message to display with an appropriate context why the authentication is required nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo' spec: rules: - host: foo.bar.com http: paths: - path: / backend: serviceName: http-svc servicePort: 80 \" | kubectl create -f - $ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' * Trying 10.2.29.4... * Connected to 10.2.29.4 (10.2.29.4) port 80 (#0) > GET / HTTP/1.1 > Host: foo.bar.com > User-Agent: curl/7.43.0 > Accept: */* > < HTTP /1.1 401 Unauthorized < Server: nginx/1.10.0 < Date: Wed, 11 May 2016 05:27:23 GMT < Content-Type: text/html < Content-Length: 195 < Connection: keep-alive < WWW-Authenticate: Basic realm= \"Authentication Required - foo\" < 401 Authorization Required

401 Authorization Required
nginx/1.10.0
* Connection #0 to host 10.2.29.4 left intact $ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' -u 'foo:bar' * Trying 10 .2.29.4... * Connected to 10 .2.29.4 ( 10 .2.29.4 ) port 80 ( #0) * Server auth using Basic with user 'foo' > GET / HTTP/1.1 > Host: foo.bar.com > Authorization: Basic Zm9vOmJhcg == > User-Agent: curl/7.43.0 > Accept: */* > < HTTP/1.1 200 OK < Server: nginx/1.10.0 < Date: Wed, 11 May 2016 06 :05:26 GMT < Content-Type: text/plain < Transfer-Encoding: chunked < Connection: keep-alive < Vary: Accept-Encoding < CLIENT VALUES: client_address = 10 .2.29.4 command = GET real path = / query = nil request_version = 1 .1 request_uri = http://foo.bar.com:8080/ SERVER VALUES: server_version = nginx: 1 .9.11 - lua: 10001 HEADERS RECEIVED: accept = */* authorization = Basic Zm9vOmJhcg == connection = close host = foo.bar.com user-agent = curl/7.43.0 x-forwarded-for = 10 .2.29.1 x-forwarded-host = foo.bar.com x-forwarded-port = 80 x-forwarded-proto = http x-real-ip = 10 .2.29.1 BODY: * Connection #0 to host 10.2.29.4 left intact -no body in request-","title":"Basic Authentication"},{"location":"examples/auth/basic/#basic-authentication","text":"This example shows how to add authentication in a Ingress rule using a secret that contains a file generated with htpasswd . It's important the file generated is named auth (actually - that the secret has a key data.auth ), otherwise the ingress-controller returns a 503. $ htpasswd -c auth foo New password: New password: Re-type new password: Adding password for user foo $ kubectl create secret generic basic-auth --from-file = auth secret \"basic-auth\" created $ kubectl get secret basic-auth -o yaml apiVersion: v1 data: auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK kind: Secret metadata: name: basic-auth namespace: default type: Opaque echo \" apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress-with-auth annotations: # type of authentication nginx.ingress.kubernetes.io/auth-type: basic # name of the secret that contains the user/password definitions nginx.ingress.kubernetes.io/auth-secret: basic-auth # message to display with an appropriate context why the authentication is required nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo' spec: rules: - host: foo.bar.com http: paths: - path: / backend: serviceName: http-svc servicePort: 80 \" | kubectl create -f - $ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' * Trying 10.2.29.4... * Connected to 10.2.29.4 (10.2.29.4) port 80 (#0) > GET / HTTP/1.1 > Host: foo.bar.com > User-Agent: curl/7.43.0 > Accept: */* > < HTTP /1.1 401 Unauthorized < Server: nginx/1.10.0 < Date: Wed, 11 May 2016 05:27:23 GMT < Content-Type: text/html < Content-Length: 195 < Connection: keep-alive < WWW-Authenticate: Basic realm= \"Authentication Required - foo\" < 401 Authorization Required

401 Authorization Required
nginx/1.10.0
* Connection #0 to host 10.2.29.4 left intact $ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' -u 'foo:bar' * Trying 10 .2.29.4... * Connected to 10 .2.29.4 ( 10 .2.29.4 ) port 80 ( #0) * Server auth using Basic with user 'foo' > GET / HTTP/1.1 > Host: foo.bar.com > Authorization: Basic Zm9vOmJhcg == > User-Agent: curl/7.43.0 > Accept: */* > < HTTP/1.1 200 OK < Server: nginx/1.10.0 < Date: Wed, 11 May 2016 06 :05:26 GMT < Content-Type: text/plain < Transfer-Encoding: chunked < Connection: keep-alive < Vary: Accept-Encoding < CLIENT VALUES: client_address = 10 .2.29.4 command = GET real path = / query = nil request_version = 1 .1 request_uri = http://foo.bar.com:8080/ SERVER VALUES: server_version = nginx: 1 .9.11 - lua: 10001 HEADERS RECEIVED: accept = */* authorization = Basic Zm9vOmJhcg == connection = close host = foo.bar.com user-agent = curl/7.43.0 x-forwarded-for = 10 .2.29.1 x-forwarded-host = foo.bar.com x-forwarded-port = 80 x-forwarded-proto = http x-real-ip = 10 .2.29.1 BODY: * Connection #0 to host 10.2.29.4 left intact -no body in request-","title":"Basic Authentication"},{"location":"examples/auth/client-certs/","text":"Client Certificate Authentication \u00b6 It is possible to enable Client-Certificate Authentication by adding additional annotations to your Ingress Resource. Before getting started you must have the following Certificates Setup: CA certificate and Key(Intermediate Certs need to be in CA) Server Certificate(Signed by CA) and Key (CN should be equal the hostname you will use) Client Certificate(Signed by CA) and Key For more details on the generation process, checkout the Prerequisite docs . You can have as many certificates as you want. If they're in the binary DER format, you can convert them as the following: $ openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem Then, you can concatenate them all in only one file, named 'ca.crt' as the following: $ cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt Note: Make sure that the Key Size is greater than 1024 and Hashing Algorithm(Digest) is something better than md5 for each certificate generated. Otherwise you will receive an error. Creating Certificate Secrets \u00b6 There are many different ways of configuring your secrets to enable Client-Certificate Authentication to work properly. You can create a secret containing just the CA certificate and another Secret containing the Server Certificate which is Signed by the CA. $ kubectl create secret generic ca-secret --from-file = ca.crt = ca.crt $ kubectl create secret generic tls-secret --from-file = tls.crt = server.crt --from-file = tls.key = server.key You can create a secret containing CA certificate along with the Server Certificate, that can be used for both TLS and Client Auth. $ kubectl create secret generic ca-secret --from-file = tls.crt = server.crt --from-file = tls.key = server.key --from-file = ca.crt = ca.crt Note: The CA Certificate must contain the trusted certificate authority chain to verify client certificates. Setup Instructions \u00b6 Add the annotations as provided in the ingress.yaml example to your own ingress resources as required. Test by performing a curl against the Ingress Path without the Client Cert and expect a Status Code 400. 
Test by performing a curl against the Ingress Path with the Client Cert and expect a Status Code 200.","title":"Client Certificate Authentication"},{"location":"examples/auth/client-certs/#client-certificate-authentication","text":"It is possible to enable Client-Certificate Authentication by adding additional annotations to your Ingress Resource. Before getting started you must have the following Certificates Setup: CA certificate and Key(Intermediate Certs need to be in CA) Server Certificate(Signed by CA) and Key (CN should be equal the hostname you will use) Client Certificate(Signed by CA) and Key For more details on the generation process, checkout the Prerequisite docs . You can have as many certificates as you want. If they're in the binary DER format, you can convert them as the following: $ openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem Then, you can concatenate them all in only one file, named 'ca.crt' as the following: $ cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt Note: Make sure that the Key Size is greater than 1024 and Hashing Algorithm(Digest) is something better than md5 for each certificate generated. Otherwise you will receive an error.","title":"Client Certificate Authentication"},{"location":"examples/auth/client-certs/#creating-certificate-secrets","text":"There are many different ways of configuring your secrets to enable Client-Certificate Authentication to work properly. You can create a secret containing just the CA certificate and another Secret containing the Server Certificate which is Signed by the CA. $ kubectl create secret generic ca-secret --from-file = ca.crt = ca.crt $ kubectl create secret generic tls-secret --from-file = tls.crt = server.crt --from-file = tls.key = server.key You can create a secret containing CA certificate along with the Server Certificate, that can be used for both TLS and Client Auth. $ kubectl create secret generic ca-secret --from-file = tls.crt = server.crt --from-file = tls.key = server.key --from-file = ca.crt = ca.crt Note: The CA Certificate must contain the trusted certificate authority chain to verify client certificates.","title":"Creating Certificate Secrets"},{"location":"examples/auth/client-certs/#setup-instructions","text":"Add the annotations as provided in the ingress.yaml example to your own ingress resources as required. Test by performing a curl against the Ingress Path without the Client Cert and expect a Status Code 400. 
Test by performing a curl against the Ingress Path with the Client Cert and expect a Status Code 200.","title":"Setup Instructions"},{"location":"examples/auth/external-auth/","text":"External Basic Authentication \u00b6 Example 1: \u00b6 Use an external service (Basic Auth) located in https://httpbin.org $ kubectl create -f ingress.yaml ingress \"external-auth\" created $ kubectl get ing external-auth NAME HOSTS ADDRESS PORTS AGE external-auth external-auth-01.sample.com 172 .17.4.99 80 13s $ kubectl get ing external-auth -o yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/auth-url: https://httpbin.org/basic-auth/user/passwd creationTimestamp: 2016 -10-03T13:50:35Z generation: 1 name: external-auth namespace: default resourceVersion: \"2068378\" selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/external-auth uid: 5c388f1d-8970-11e6-9004-080027d2dc94 spec: rules: - host: external-auth-01.sample.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: / status: loadBalancer: ingress: - ip: 172 .17.4.99 $ Test 1: no username/password (expect code 401) $ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' * Rebuilt URL to: http://172.17.4.99/ * Trying 172.17.4.99... * Connected to 172.17.4.99 (172.17.4.99) port 80 (#0) > GET / HTTP/1.1 > Host: external-auth-01.sample.com > User-Agent: curl/7.50.1 > Accept: */* > < HTTP/1.1 401 Unauthorized < Server: nginx/1.11.3 < Date: Mon, 03 Oct 2016 14:52:08 GMT < Content-Type: text/html < Content-Length: 195 < Connection: keep-alive < WWW-Authenticate: Basic realm=\"Fake Realm\" < 401 Authorization Required

401 Authorization Required
nginx/1.11.3
* Connection #0 to host 172.17.4.99 left intact Test 2: valid username/password (expect code 200) $ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:passwd' * Rebuilt URL to: http://172.17.4.99/ * Trying 172 .17.4.99... * Connected to 172 .17.4.99 ( 172 .17.4.99 ) port 80 ( #0) * Server auth using Basic with user 'user' > GET / HTTP/1.1 > Host: external-auth-01.sample.com > Authorization: Basic dXNlcjpwYXNzd2Q = > User-Agent: curl/7.50.1 > Accept: */* > < HTTP/1.1 200 OK < Server: nginx/1.11.3 < Date: Mon, 03 Oct 2016 14 :52:50 GMT < Content-Type: text/plain < Transfer-Encoding: chunked < Connection: keep-alive < CLIENT VALUES: client_address = 10 .2.60.2 command = GET real path = / query = nil request_version = 1 .1 request_uri = http://external-auth-01.sample.com:8080/ SERVER VALUES: server_version = nginx: 1 .9.11 - lua: 10001 HEADERS RECEIVED: accept = */* authorization = Basic dXNlcjpwYXNzd2Q = connection = close host = external-auth-01.sample.com user-agent = curl/7.50.1 x-forwarded-for = 10 .2.60.1 x-forwarded-host = external-auth-01.sample.com x-forwarded-port = 80 x-forwarded-proto = http x-real-ip = 10 .2.60.1 BODY: * Connection #0 to host 172.17.4.99 left intact -no body in request- Test 3: invalid username/password (expect code 401) curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:user' * Rebuilt URL to: http://172.17.4.99/ * Trying 172.17.4.99... * Connected to 172.17.4.99 (172.17.4.99) port 80 (#0) * Server auth using Basic with user 'user' > GET / HTTP/1.1 > Host: external-auth-01.sample.com > Authorization: Basic dXNlcjp1c2Vy > User-Agent: curl/7.50.1 > Accept: */* > < HTTP /1.1 401 Unauthorized < Server: nginx/1.11.3 < Date: Mon, 03 Oct 2016 14:53:04 GMT < Content-Type: text/html < Content-Length: 195 < Connection: keep-alive * Authentication problem. Ignoring this. < WWW-Authenticate: Basic realm= \"Fake Realm\" < 401 Authorization Required

401 Authorization Required
nginx/1.11.3
* Connection #0 to host 172.17.4.99 left intact","title":"External Basic Authentication"},{"location":"examples/auth/external-auth/#external-basic-authentication","text":"","title":"External Basic Authentication"},{"location":"examples/auth/external-auth/#example-1","text":"Use an external service (Basic Auth) located in https://httpbin.org $ kubectl create -f ingress.yaml ingress \"external-auth\" created $ kubectl get ing external-auth NAME HOSTS ADDRESS PORTS AGE external-auth external-auth-01.sample.com 172 .17.4.99 80 13s $ kubectl get ing external-auth -o yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/auth-url: https://httpbin.org/basic-auth/user/passwd creationTimestamp: 2016 -10-03T13:50:35Z generation: 1 name: external-auth namespace: default resourceVersion: \"2068378\" selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/external-auth uid: 5c388f1d-8970-11e6-9004-080027d2dc94 spec: rules: - host: external-auth-01.sample.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: / status: loadBalancer: ingress: - ip: 172 .17.4.99 $ Test 1: no username/password (expect code 401) $ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' * Rebuilt URL to: http://172.17.4.99/ * Trying 172.17.4.99... * Connected to 172.17.4.99 (172.17.4.99) port 80 (#0) > GET / HTTP/1.1 > Host: external-auth-01.sample.com > User-Agent: curl/7.50.1 > Accept: */* > < HTTP/1.1 401 Unauthorized < Server: nginx/1.11.3 < Date: Mon, 03 Oct 2016 14:52:08 GMT < Content-Type: text/html < Content-Length: 195 < Connection: keep-alive < WWW-Authenticate: Basic realm=\"Fake Realm\" < 401 Authorization Required

401 Authorization Required
nginx/1.11.3
* Connection #0 to host 172.17.4.99 left intact Test 2: valid username/password (expect code 200) $ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:passwd' * Rebuilt URL to: http://172.17.4.99/ * Trying 172 .17.4.99... * Connected to 172 .17.4.99 ( 172 .17.4.99 ) port 80 ( #0) * Server auth using Basic with user 'user' > GET / HTTP/1.1 > Host: external-auth-01.sample.com > Authorization: Basic dXNlcjpwYXNzd2Q = > User-Agent: curl/7.50.1 > Accept: */* > < HTTP/1.1 200 OK < Server: nginx/1.11.3 < Date: Mon, 03 Oct 2016 14 :52:50 GMT < Content-Type: text/plain < Transfer-Encoding: chunked < Connection: keep-alive < CLIENT VALUES: client_address = 10 .2.60.2 command = GET real path = / query = nil request_version = 1 .1 request_uri = http://external-auth-01.sample.com:8080/ SERVER VALUES: server_version = nginx: 1 .9.11 - lua: 10001 HEADERS RECEIVED: accept = */* authorization = Basic dXNlcjpwYXNzd2Q = connection = close host = external-auth-01.sample.com user-agent = curl/7.50.1 x-forwarded-for = 10 .2.60.1 x-forwarded-host = external-auth-01.sample.com x-forwarded-port = 80 x-forwarded-proto = http x-real-ip = 10 .2.60.1 BODY: * Connection #0 to host 172.17.4.99 left intact -no body in request- Test 3: invalid username/password (expect code 401) curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:user' * Rebuilt URL to: http://172.17.4.99/ * Trying 172.17.4.99... * Connected to 172.17.4.99 (172.17.4.99) port 80 (#0) * Server auth using Basic with user 'user' > GET / HTTP/1.1 > Host: external-auth-01.sample.com > Authorization: Basic dXNlcjp1c2Vy > User-Agent: curl/7.50.1 > Accept: */* > < HTTP /1.1 401 Unauthorized < Server: nginx/1.11.3 < Date: Mon, 03 Oct 2016 14:53:04 GMT < Content-Type: text/html < Content-Length: 195 < Connection: keep-alive * Authentication problem. Ignoring this. < WWW-Authenticate: Basic realm= \"Fake Realm\" < 401 Authorization Required

401 Authorization Required
nginx/1.11.3
* Connection #0 to host 172.17.4.99 left intact","title":"Example 1:"},{"location":"examples/auth/oauth-external-auth/","text":"External OAUTH Authentication \u00b6 Overview \u00b6 The auth-url and auth-signin annotations allow you to use an external authentication provider to protect your Ingress resources. Important This annotation requires nginx-ingress-controller v0.9.0 or greater. Key Detail \u00b6 This functionality is enabled by deploying multiple Ingress objects for a single host. One Ingress object has no special annotations and handles authentication. Other Ingress objects can then be annotated in such a way that requires the user to authenticate against the first Ingress's endpoint, and can redirect 401s to the same endpoint. Sample:
...
metadata:
  name: application
  annotations:
    nginx.ingress.kubernetes.io/auth-url: \"https://$host/oauth2/auth\"
    nginx.ingress.kubernetes.io/auth-signin: \"https://$host/oauth2/start?rd=$escaped_request_uri\"
...
Example: OAuth2 Proxy + Kubernetes-Dashboard \u00b6 This example will show you how to deploy oauth2_proxy into a Kubernetes cluster and use it to protect the Kubernetes Dashboard, using GitHub as the OAuth2 provider. Prepare \u00b6 Install the Kubernetes Dashboard: kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml Create a custom GitHub OAuth application. The Homepage URL is the FQDN in the Ingress rule, like https://foo.bar.com , and the Authorization callback URL is the same as the base FQDN plus /oauth2 , like https://foo.bar.com/oauth2 Configure oauth2_proxy values in the file oauth2-proxy.yaml with the following values: OAUTH2_PROXY_CLIENT_ID with the github OAUTH2_PROXY_CLIENT_SECRET with the github OAUTH2_PROXY_COOKIE_SECRET with the value of python -c 'import os,base64; print base64.b64encode(os.urandom(16))' Customize the contents of the file dashboard-ingress.yaml: Replace __INGRESS_HOST__ with a valid FQDN and __INGRESS_SECRET__ with a Secret containing a valid SSL certificate. Deploy the oauth2 proxy and the ingress rules by running: $ kubectl create -f oauth2-proxy.yaml,dashboard-ingress.yaml Test the OAuth integration by accessing the configured URL, like https://foo.bar.com","title":"External OAUTH Authentication"},{"location":"examples/auth/oauth-external-auth/#external-oauth-authentication","text":"","title":"External OAUTH Authentication"},{"location":"examples/auth/oauth-external-auth/#overview","text":"The auth-url and auth-signin annotations allow you to use an external authentication provider to protect your Ingress resources. Important This annotation requires nginx-ingress-controller v0.9.0 or greater.","title":"Overview"},{"location":"examples/auth/oauth-external-auth/#key-detail","text":"This functionality is enabled by deploying multiple Ingress objects for a single host. One Ingress object has no special annotations and handles authentication. Other Ingress objects can then be annotated in such a way that requires the user to authenticate against the first Ingress's endpoint, and can redirect 401s to the same endpoint. Sample:
...
metadata:
  name: application
  annotations:
    nginx.ingress.kubernetes.io/auth-url: \"https://$host/oauth2/auth\"
    nginx.ingress.kubernetes.io/auth-signin: \"https://$host/oauth2/start?rd=$escaped_request_uri\"
...","title":"Key Detail"},{"location":"examples/auth/oauth-external-auth/#example-oauth2-proxy-kubernetes-dashboard","text":"This example will show you how to deploy oauth2_proxy into a Kubernetes cluster and use it to protect the Kubernetes Dashboard, using GitHub as the OAuth2 provider.","title":"Example: OAuth2 Proxy + Kubernetes-Dashboard"},{"location":"examples/auth/oauth-external-auth/#prepare","text":"Install the Kubernetes Dashboard: kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml Create a custom GitHub OAuth application. The Homepage URL is the FQDN in the Ingress rule, like https://foo.bar.com , and the Authorization callback URL is the same as the base FQDN plus /oauth2 , like https://foo.bar.com/oauth2 Configure oauth2_proxy values in the file oauth2-proxy.yaml with the following values: OAUTH2_PROXY_CLIENT_ID with the github OAUTH2_PROXY_CLIENT_SECRET with the github OAUTH2_PROXY_COOKIE_SECRET with the value of python -c 'import os,base64; print base64.b64encode(os.urandom(16))' Customize the contents of the file dashboard-ingress.yaml: Replace __INGRESS_HOST__ with a valid FQDN and __INGRESS_SECRET__ with a Secret containing a valid SSL certificate. Deploy the oauth2 proxy and the ingress rules by running: $ kubectl create -f oauth2-proxy.yaml,dashboard-ingress.yaml Test the OAuth integration by accessing the configured URL, like https://foo.bar.com","title":"Prepare"},{"location":"examples/customization/configuration-snippets/","text":"Configuration Snippets \u00b6 Ingress \u00b6 The Ingress in this example adds a custom header to the Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at this example . $ kubectl apply -f ingress.yaml Test \u00b6 Check that the contents of the annotation are present in the nginx.conf file using: kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf","title":"Configuration Snippets"},{"location":"examples/customization/configuration-snippets/#configuration-snippets","text":"","title":"Configuration Snippets"},{"location":"examples/customization/configuration-snippets/#ingress","text":"The Ingress in this example adds a custom header to the Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at this example .
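For reference, a minimal sketch of what such an Ingress could look like (the host, backend Service, and the particular header set here are illustrative assumptions, not taken verbatim from the example's ingress.yaml):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-configuration-snippet
  annotations:
    # everything in this snippet is copied verbatim into the generated location block
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers \"Request-Id: $req_id\";
spec:
  rules:
  - host: custom.configuration.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80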
$ kubectl apply -f ingress.yaml","title":"Ingress"},{"location":"examples/customization/configuration-snippets/#test","text":"Check that the contents of the annotation are present in the nginx.conf file using: kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf","title":"Test"},{"location":"examples/customization/custom-configuration/","text":"Custom Configuration \u00b6 Using a ConfigMap it is possible to customize the NGINX configuration. For example, if we want to change the timeouts, we need to create a ConfigMap: $ cat configmap.yaml
apiVersion: v1
data:
  proxy-connect-timeout: \"10\"
  proxy-read-timeout: \"120\"
  proxy-send-timeout: \"120\"
kind: ConfigMap
metadata:
  name: nginx-configuration
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/custom-configuration/configmap.yaml \\
| kubectl apply -f -
If the ConfigMap is updated, NGINX will be reloaded with the new configuration.","title":"Custom Configuration"},{"location":"examples/customization/custom-configuration/#custom-configuration","text":"Using a ConfigMap it is possible to customize the NGINX configuration. For example, if we want to change the timeouts, we need to create a ConfigMap: $ cat configmap.yaml
apiVersion: v1
data:
  proxy-connect-timeout: \"10\"
  proxy-read-timeout: \"120\"
  proxy-send-timeout: \"120\"
kind: ConfigMap
metadata:
  name: nginx-configuration
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/custom-configuration/configmap.yaml \\
| kubectl apply -f -
If the ConfigMap is updated, NGINX will be reloaded with the new configuration.","title":"Custom Configuration"},{"location":"examples/customization/custom-errors/","text":"Custom Errors \u00b6 This example demonstrates how to use a custom backend to render custom error pages. Customized default backend \u00b6 First, create the custom default-backend . It will be used by the Ingress controller later on. $ kubectl create -f custom-default-backend.yaml service \"nginx-errors\" created deployment.apps \"nginx-errors\" created This should have created a Deployment and a Service with the name nginx-errors . $ kubectl get deploy,svc NAME DESIRED CURRENT READY AGE deployment.apps/nginx-errors 1 1 1 10s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/nginx-errors ClusterIP 10.0.0.12 <none> 80/TCP 10s Ingress controller configuration \u00b6 If you do not already have an instance of the NGINX Ingress controller running, deploy it according to the deployment guide , then follow these steps: Edit the nginx-ingress-controller Deployment and set the value of the --default-backend-service flag to the name of the newly created error backend. Edit the nginx-configuration ConfigMap and create the key custom-http-errors with a value of 404,503 . Take note of the IP address assigned to the NGINX Ingress controller Service. $ kubectl get svc ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx ClusterIP 10.0.0.13 <none> 80/TCP,443/TCP 10m Note The ingress-nginx Service is of type ClusterIP in this example. This may vary depending on your environment. Make sure you can use the Service to reach NGINX before proceeding with the rest of this example. Testing error pages \u00b6 Let us send a couple of HTTP requests using cURL and validate everything is working as expected.
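For reference, the ConfigMap edit described above might look like the following minimal sketch (the name nginx-configuration matches the examples in this document; the namespace is an assumption, use whichever namespace your controller watches):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # responses with these status codes are intercepted by NGINX
  # and served by the default backend instead
  custom-http-errors: \"404,503\"
Together with --default-backend-service pointing at the nginx-errors Service, this is all the controller needs in order to serve the custom pages exercised by the requests below.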
A request to the default backend returns a 404 error with a custom message: $ curl -D- http://10.0.0.13/ HTTP/1.1 404 Not Found Server: nginx/1.13.12 Date: Tue, 12 Jun 2018 19:11:24 GMT Content-Type: */* Transfer-Encoding: chunked Connection: keep-alive The page you're looking for could not be found. A request with a custom Accept header returns the corresponding document type (JSON): $ curl -D- -H 'Accept: application/json' http://10.0.0.13/ HTTP/1.1 404 Not Found Server: nginx/1.13.12 Date: Tue, 12 Jun 2018 19 :12:36 GMT Content-Type: application/json Transfer-Encoding: chunked Connection: keep-alive Vary: Accept-Encoding { \"message\" : \"The page you're looking for could not be found\" } To go further with this example, feel free to deploy your own applications and Ingress objects, and validate that the responses are still in the correct format when a backend returns 503 (eg. if you scale a Deployment down to 0 replica).","title":"Custom Errors"},{"location":"examples/customization/custom-errors/#custom-errors","text":"This example demonstrates how to use a custom backend to render custom error pages.","title":"Custom Errors"},{"location":"examples/customization/custom-errors/#customized-default-backend","text":"First, create the custom default-backend . It will be used by the Ingress controller later on. $ kubectl create -f custom-default-backend.yaml service \"nginx-errors\" created deployment.apps \"nginx-errors\" created This should have created a Deployment and a Service with the name nginx-errors . $ kubectl get deploy,svc NAME DESIRED CURRENT READY AGE deployment.apps/nginx-errors 1 1 1 10s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT ( S ) AGE service/nginx-errors ClusterIP 10 .0.0.12 80 /TCP 10s","title":"Customized default backend"},{"location":"examples/customization/custom-errors/#ingress-controller-configuration","text":"If you do not already have an instance of the NGINX Ingress controller running, deploy it according to the deployment guide , then follow these steps: Edit the nginx-ingress-controller Deployment and set the value of the --default-backend flag to the name of the newly created error backend. Edit the nginx-configuration ConfigMap and create the key custom-http-errors with a value of 404,503 . Take note of the IP address assigned to the NGINX Ingress controller Service. $ kubectl get svc ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT ( S ) AGE ingress-nginx ClusterIP 10 .0.0.13 80 /TCP,443/TCP 10m Note The ingress-nginx Service is of type ClusterIP in this example. This may vary depending on your environment. Make sure you can use the Service to reach NGINX before proceeding with the rest of this example.","title":"Ingress controller configuration"},{"location":"examples/customization/custom-errors/#testing-error-pages","text":"Let us send a couple of HTTP requests using cURL and validate everything is working as expected. A request to the default backend returns a 404 error with a custom message: $ curl -D- http://10.0.0.13/ HTTP/1.1 404 Not Found Server: nginx/1.13.12 Date: Tue, 12 Jun 2018 19:11:24 GMT Content-Type: */* Transfer-Encoding: chunked Connection: keep-alive The page you're looking for could not be found. 
A request with a custom Accept header returns the corresponding document type (JSON): $ curl -D- -H 'Accept: application/json' http://10.0.0.13/ HTTP/1.1 404 Not Found Server: nginx/1.13.12 Date: Tue, 12 Jun 2018 19 :12:36 GMT Content-Type: application/json Transfer-Encoding: chunked Connection: keep-alive Vary: Accept-Encoding { \"message\" : \"The page you're looking for could not be found\" } To go further with this example, feel free to deploy your own applications and Ingress objects, and validate that the responses are still in the correct format when a backend returns 503 (eg. if you scale a Deployment down to 0 replica).","title":"Testing error pages"},{"location":"examples/customization/custom-headers/","text":"Custom Headers \u00b6 This example aims to demonstrate the deployment of an nginx ingress controller and use a ConfigMap to configure a custom list of headers to be passed to the upstream server curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/custom-headers/configmap.yaml \\ | kubectl apply -f - curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/custom-headers/custom-headers.yaml \\ | kubectl apply -f - Test \u00b6 Check the contents of the configmap is present in the nginx.conf file using: kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf","title":"Custom Headers"},{"location":"examples/customization/custom-headers/#custom-headers","text":"This example aims to demonstrate the deployment of an nginx ingress controller and use a ConfigMap to configure a custom list of headers to be passed to the upstream server curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/custom-headers/configmap.yaml \\ | kubectl apply -f - curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/custom-headers/custom-headers.yaml \\ | kubectl apply -f -","title":"Custom Headers"},{"location":"examples/customization/custom-headers/#test","text":"Check the contents of the configmap is present in the nginx.conf file using: kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf","title":"Test"},{"location":"examples/customization/external-auth-headers/","text":"External authentication, authentication service response headers propagation \u00b6 This example demonstrates propagation of selected authentication service response headers to backend service. 
Sample configuration includes: Sample authentication service producing several response headers Authentication logic is based on HTTP header: requests with header User containing string internal are considered authenticated After successful authentication service generates response headers UserID and UserRole Sample echo service displaying header information Two ingress objects pointing to echo service Public, which allows access from unauthenticated users Private, which allows access from authenticated users only You can deploy the controller as follows: $ kubectl create -f deploy/ deployment \"demo-auth-service\" created service \"demo-auth-service\" created ingress \"demo-auth-service\" created deployment \"demo-echo-service\" created service \"demo-echo-service\" created ingress \"public-demo-echo-service\" created ingress \"secure-demo-echo-service\" created $ kubectl get po NAME READY STATUS RESTARTS AGE NAME READY STATUS RESTARTS AGE demo-auth-service-2769076528-7g9mh 1/1 Running 0 30s demo-echo-service-3636052215-3vw8c 1/1 Running 0 29s kubectl get ing NAME HOSTS ADDRESS PORTS AGE public-demo-echo-service public-demo-echo-service.kube.local 80 1m secure-demo-echo-service secure-demo-echo-service.kube.local 80 1m Test 1: public service with no auth header $ curl -H 'Host: public-demo-echo-service.kube.local' -v 192 .168.99.100 * Rebuilt URL to: 192.168.99.100/ * Trying 192.168.99.100... * Connected to 192.168.99.100 (192.168.99.100) port 80 (#0) > GET / HTTP/1.1 > Host: public-demo-echo-service.kube.local > User-Agent: curl/7.43.0 > Accept: */* > < HTTP/1.1 200 OK < Server: nginx/1.11.10 < Date: Mon, 13 Mar 2017 20:19:21 GMT < Content-Type: text/plain; charset=utf-8 < Content-Length: 20 < Connection: keep-alive < * Connection #0 to host 192.168.99.100 left intact UserID: , UserRole: Test 2: secure service with no auth header $ curl -H 'Host: secure-demo-echo-service.kube.local' -v 192 .168.99.100 * Rebuilt URL to: 192.168.99.100/ * Trying 192.168.99.100... * Connected to 192.168.99.100 (192.168.99.100) port 80 (#0) > GET / HTTP/1.1 > Host: secure-demo-echo-service.kube.local > User-Agent: curl/7.43.0 > Accept: */* > < HTTP/1.1 403 Forbidden < Server: nginx/1.11.10 < Date: Mon, 13 Mar 2017 20:18:48 GMT < Content-Type: text/html < Content-Length: 170 < Connection: keep-alive < 403 Forbidden

403 Forbidden
nginx/1.11.10
* Connection #0 to host 192.168.99.100 left intact Test 3: public service with valid auth header $ curl -H 'Host: public-demo-echo-service.kube.local' -H 'User:internal' -v 192 .168.99.100 * Rebuilt URL to: 192.168.99.100/ * Trying 192.168.99.100... * Connected to 192.168.99.100 (192.168.99.100) port 80 (#0) > GET / HTTP/1.1 > Host: public-demo-echo-service.kube.local > User-Agent: curl/7.43.0 > Accept: */* > User:internal > < HTTP/1.1 200 OK < Server: nginx/1.11.10 < Date: Mon, 13 Mar 2017 20:19:59 GMT < Content-Type: text/plain; charset=utf-8 < Content-Length: 44 < Connection: keep-alive < * Connection #0 to host 192.168.99.100 left intact UserID: 1443635317331776148, UserRole: admin Test 4: public service with valid auth header $ curl -H 'Host: secure-demo-echo-service.kube.local' -H 'User:internal' -v 192 .168.99.100 * Rebuilt URL to: 192.168.99.100/ * Trying 192.168.99.100... * Connected to 192.168.99.100 (192.168.99.100) port 80 (#0) > GET / HTTP/1.1 > Host: secure-demo-echo-service.kube.local > User-Agent: curl/7.43.0 > Accept: */* > User:internal > < HTTP/1.1 200 OK < Server: nginx/1.11.10 < Date: Mon, 13 Mar 2017 20:17:23 GMT < Content-Type: text/plain; charset=utf-8 < Content-Length: 43 < Connection: keep-alive < * Connection #0 to host 192.168.99.100 left intact UserID: 605394647632969758, UserRole: admin","title":"External authentication"},{"location":"examples/customization/external-auth-headers/#external-authentication-authentication-service-response-headers-propagation","text":"This example demonstrates propagation of selected authentication service response headers to backend service. Sample configuration includes: Sample authentication service producing several response headers Authentication logic is based on HTTP header: requests with header User containing string internal are considered authenticated After successful authentication service generates response headers UserID and UserRole Sample echo service displaying header information Two ingress objects pointing to echo service Public, which allows access from unauthenticated users Private, which allows access from authenticated users only You can deploy the controller as follows: $ kubectl create -f deploy/ deployment \"demo-auth-service\" created service \"demo-auth-service\" created ingress \"demo-auth-service\" created deployment \"demo-echo-service\" created service \"demo-echo-service\" created ingress \"public-demo-echo-service\" created ingress \"secure-demo-echo-service\" created $ kubectl get po NAME READY STATUS RESTARTS AGE NAME READY STATUS RESTARTS AGE demo-auth-service-2769076528-7g9mh 1/1 Running 0 30s demo-echo-service-3636052215-3vw8c 1/1 Running 0 29s kubectl get ing NAME HOSTS ADDRESS PORTS AGE public-demo-echo-service public-demo-echo-service.kube.local 80 1m secure-demo-echo-service secure-demo-echo-service.kube.local 80 1m Test 1: public service with no auth header $ curl -H 'Host: public-demo-echo-service.kube.local' -v 192 .168.99.100 * Rebuilt URL to: 192.168.99.100/ * Trying 192.168.99.100... 
* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0) > GET / HTTP/1.1 > Host: public-demo-echo-service.kube.local > User-Agent: curl/7.43.0 > Accept: */* > < HTTP/1.1 200 OK < Server: nginx/1.11.10 < Date: Mon, 13 Mar 2017 20:19:21 GMT < Content-Type: text/plain; charset=utf-8 < Content-Length: 20 < Connection: keep-alive < * Connection #0 to host 192.168.99.100 left intact UserID: , UserRole: Test 2: secure service with no auth header $ curl -H 'Host: secure-demo-echo-service.kube.local' -v 192 .168.99.100 * Rebuilt URL to: 192.168.99.100/ * Trying 192.168.99.100... * Connected to 192.168.99.100 (192.168.99.100) port 80 (#0) > GET / HTTP/1.1 > Host: secure-demo-echo-service.kube.local > User-Agent: curl/7.43.0 > Accept: */* > < HTTP/1.1 403 Forbidden < Server: nginx/1.11.10 < Date: Mon, 13 Mar 2017 20:18:48 GMT < Content-Type: text/html < Content-Length: 170 < Connection: keep-alive < 403 Forbidden

403 Forbidden
nginx/1.11.10
* Connection #0 to host 192.168.99.100 left intact Test 3: public service with valid auth header $ curl -H 'Host: public-demo-echo-service.kube.local' -H 'User:internal' -v 192 .168.99.100 * Rebuilt URL to: 192.168.99.100/ * Trying 192.168.99.100... * Connected to 192.168.99.100 (192.168.99.100) port 80 (#0) > GET / HTTP/1.1 > Host: public-demo-echo-service.kube.local > User-Agent: curl/7.43.0 > Accept: */* > User:internal > < HTTP/1.1 200 OK < Server: nginx/1.11.10 < Date: Mon, 13 Mar 2017 20:19:59 GMT < Content-Type: text/plain; charset=utf-8 < Content-Length: 44 < Connection: keep-alive < * Connection #0 to host 192.168.99.100 left intact UserID: 1443635317331776148, UserRole: admin Test 4: public service with valid auth header $ curl -H 'Host: secure-demo-echo-service.kube.local' -H 'User:internal' -v 192 .168.99.100 * Rebuilt URL to: 192.168.99.100/ * Trying 192.168.99.100... * Connected to 192.168.99.100 (192.168.99.100) port 80 (#0) > GET / HTTP/1.1 > Host: secure-demo-echo-service.kube.local > User-Agent: curl/7.43.0 > Accept: */* > User:internal > < HTTP/1.1 200 OK < Server: nginx/1.11.10 < Date: Mon, 13 Mar 2017 20:17:23 GMT < Content-Type: text/plain; charset=utf-8 < Content-Length: 43 < Connection: keep-alive < * Connection #0 to host 192.168.99.100 left intact UserID: 605394647632969758, UserRole: admin","title":"External authentication, authentication service response headers propagation"},{"location":"examples/customization/ssl-dh-param/","text":"Custom DH parameters for perfect forward secrecy \u00b6 This example aims to demonstrate the deployment of an nginx ingress controller and use a ConfigMap to configure custom Diffie-Hellman parameters file to help with \"Perfect Forward Secrecy\". Custom configuration \u00b6 $ cat configmap.yaml apiVersion: v1 data: ssl-dh-param: \"ingress-nginx/lb-dhparam\" kind: ConfigMap metadata: name: nginx-configuration namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx $ kubectl create -f configmap.yaml Custom DH parameters secret \u00b6 $ > openssl dhparam 1024 2 > /dev/null | base64 LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ... 
$ cat ssl-dh-param.yaml apiVersion: v1 data: dhparam.pem: \"LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ...\" kind: ConfigMap metadata: name: nginx-configuration namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx $ kubectl create -f ssl-dh-param.yaml Test \u00b6 Check the contents of the configmap is present in the nginx.conf file using: kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf","title":"Custom DH parameters for perfect forward secrecy"},{"location":"examples/customization/ssl-dh-param/#custom-dh-parameters-for-perfect-forward-secrecy","text":"This example aims to demonstrate the deployment of an nginx ingress controller and use a ConfigMap to configure custom Diffie-Hellman parameters file to help with \"Perfect Forward Secrecy\".","title":"Custom DH parameters for perfect forward secrecy"},{"location":"examples/customization/ssl-dh-param/#custom-configuration","text":"$ cat configmap.yaml apiVersion: v1 data: ssl-dh-param: \"ingress-nginx/lb-dhparam\" kind: ConfigMap metadata: name: nginx-configuration namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx $ kubectl create -f configmap.yaml","title":"Custom configuration"},{"location":"examples/customization/ssl-dh-param/#custom-dh-parameters-secret","text":"$ > openssl dhparam 1024 2 > /dev/null | base64 LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ... $ cat ssl-dh-param.yaml apiVersion: v1 data: dhparam.pem: \"LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ...\" kind: ConfigMap metadata: name: nginx-configuration namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx $ kubectl create -f ssl-dh-param.yaml","title":"Custom DH parameters secret"},{"location":"examples/customization/ssl-dh-param/#test","text":"Check the contents of the configmap is present in the nginx.conf file using: kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf","title":"Test"},{"location":"examples/customization/sysctl/","text":"Sysctl tuning \u00b6 This example aims to demonstrate the use of an Init Container to adjust sysctl default values using kubectl patch kubectl patch deployment -n ingress-nginx nginx-ingress-controller --patch=\"$(cat patch.json)\"","title":"Sysctl tuning"},{"location":"examples/customization/sysctl/#sysctl-tuning","text":"This example aims to demonstrate the use of an Init Container to adjust sysctl default values using kubectl patch kubectl patch deployment -n ingress-nginx nginx-ingress-controller --patch=\"$(cat patch.json)\"","title":"Sysctl tuning"},{"location":"examples/docker-registry/","text":"Docker registry \u00b6 This example demonstrates how to deploy a docker registry in the cluster and configure Ingress enable access from Internet Deployment \u00b6 First we deploy the docker registry in the cluster: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/deployment.yaml Important DO NOT RUN THIS IN PRODUCTION This deployment uses emptyDir in the volumeMount which means the contents of the registry will be deleted when the pod dies. The next required step is creation of the ingress rules. To do this we have two options: with and without TLS Without TLS \u00b6 Download and edit the yaml deployment replacing registry. 
with a valid DNS name pointing to the ingress controller: wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-without-tls.yaml Important Running a docker registry without TLS requires we configure our local docker daemon with the insecure registry flag. Please check deploy a plain http registry With TLS \u00b6 Download and edit the yaml deployment replacing registry. with a valid DNS name pointing to the ingress controller: wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-with-tls.yaml Deploy kube lego use Let's Encrypt certificates or edit the ingress rule to use a secret with an existing SSL certificate. Testing \u00b6 To test the registry is working correctly we download a known image from docker hub , create a tag pointing to the new registry and upload the image: docker pull ubuntu:16.04 docker tag ubuntu:16.04 `registry./ubuntu:16.04` docker push `registry./ubuntu:16.04` Please replace registry. with your domain.","title":"Docker registry"},{"location":"examples/docker-registry/#docker-registry","text":"This example demonstrates how to deploy a docker registry in the cluster and configure Ingress enable access from Internet","title":"Docker registry"},{"location":"examples/docker-registry/#deployment","text":"First we deploy the docker registry in the cluster: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/deployment.yaml Important DO NOT RUN THIS IN PRODUCTION This deployment uses emptyDir in the volumeMount which means the contents of the registry will be deleted when the pod dies. The next required step is creation of the ingress rules. To do this we have two options: with and without TLS","title":"Deployment"},{"location":"examples/docker-registry/#without-tls","text":"Download and edit the yaml deployment replacing registry. with a valid DNS name pointing to the ingress controller: wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-without-tls.yaml Important Running a docker registry without TLS requires we configure our local docker daemon with the insecure registry flag. Please check deploy a plain http registry","title":"Without TLS"},{"location":"examples/docker-registry/#with-tls","text":"Download and edit the yaml deployment replacing registry. with a valid DNS name pointing to the ingress controller: wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-with-tls.yaml Deploy kube lego use Let's Encrypt certificates or edit the ingress rule to use a secret with an existing SSL certificate.","title":"With TLS"},{"location":"examples/docker-registry/#testing","text":"To test the registry is working correctly we download a known image from docker hub , create a tag pointing to the new registry and upload the image: docker pull ubuntu:16.04 docker tag ubuntu:16.04 `registry./ubuntu:16.04` docker push `registry./ubuntu:16.04` Please replace registry. with your domain.","title":"Testing"},{"location":"examples/grpc/","text":"gRPC \u00b6 This example demonstrates how to route traffic to a gRPC service through the nginx controller. Prerequisites \u00b6 You have a kubernetes cluster running. You have a domain name such as example.com that is configured to route traffic to the ingress controller. 
Replace references to fortune-teller.stack.build (the domain name used in this example) to your own domain name (you're also responsible for provisioning an SSL certificate for the ingress). You have the nginx-ingress controller installed in typical fashion (must be at least quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.13.0 for grpc support. You have a backend application running a gRPC server and listening for TCP traffic. If you prefer, you can use the fortune-teller application provided here as an example. Step 1: kubernetes Deployment \u00b6 $ kubectl create -f app.yaml This is a standard kubernetes deployment object. It is running a grpc service listening on port 50051 . The sample application fortune-teller-app is a grpc server implemented in go. Here's the stripped-down implementation: func main () { grpcServer := grpc . NewServer () fortune . RegisterFortuneTellerServer ( grpcServer , & FortuneTeller {}) lis , _ := net . Listen ( \"tcp\" , \":50051\" ) grpcServer . Serve ( lis ) } The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, grpc traffic will travel unencrypted inside the cluster and arrive \"insecure\"). For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your POD and terminate TLS at the gRPC server itself, add the ingress annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPCS\" . Step 2: the kubernetes Service \u00b6 $ kubectl create -f svc.yaml Here we have a typical service. Nothing special, just routing traffic to the backend application on port 50051 . Step 3: the kubernetes Ingress \u00b6 $ kubectl create -f ingress.yaml A few things to note: We've tagged the ingress with the annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPC\" . This is the magic ingredient that sets up the appropriate nginx configuration to route http/2 traffic to our service. We're terminating TLS at the ingress and have configured an SSL certificate fortune-teller.stack.build . The ingress matches traffic arriving as https://fortune-teller.stack.build:443 and routes unencrypted messages to our kubernetes service. Step 4: test the connection \u00b6 Once we've applied our configuration to kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility: $ grpcurl fortune-teller.stack.build:443 build.stack.fortune.FortuneTeller/Predict { \"message\" : \"Let us endeavor so to live that when we come to die even the undertaker will be sorry.\\n\\t\\t-- Mark Twain, \\\"Pudd'nhead Wilson's Calendar\\\"\" } Debugging Hints \u00b6 Obviously, watch the logs on your app. Watch the logs for the nginx-ingress-controller (increasing verbosity as needed). Double-check your address and ports. Set the GODEBUG=http2debug=2 environment variable to get detailed http/2 logging on the client and/or server. Study RFC 7540 (http/2) https://tools.ietf.org/html/rfc7540 . If you are developing public gRPC endpoints, check out https://proto.stack.build, a protocol buffer / gRPC build service that can use to help make it easier for your users to consume your API.","title":"gRPC"},{"location":"examples/grpc/#grpc","text":"This example demonstrates how to route traffic to a gRPC service through the nginx controller.","title":"gRPC"},{"location":"examples/grpc/#prerequisites","text":"You have a kubernetes cluster running. 
You have a domain name such as example.com that is configured to route traffic to the ingress controller. Replace references to fortune-teller.stack.build (the domain name used in this example) to your own domain name (you're also responsible for provisioning an SSL certificate for the ingress). You have the nginx-ingress controller installed in typical fashion (must be at least quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.13.0 for grpc support. You have a backend application running a gRPC server and listening for TCP traffic. If you prefer, you can use the fortune-teller application provided here as an example.","title":"Prerequisites"},{"location":"examples/grpc/#step-1-kubernetes-deployment","text":"$ kubectl create -f app.yaml This is a standard kubernetes deployment object. It is running a grpc service listening on port 50051 . The sample application fortune-teller-app is a grpc server implemented in go. Here's the stripped-down implementation: func main () { grpcServer := grpc . NewServer () fortune . RegisterFortuneTellerServer ( grpcServer , & FortuneTeller {}) lis , _ := net . Listen ( \"tcp\" , \":50051\" ) grpcServer . Serve ( lis ) } The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, grpc traffic will travel unencrypted inside the cluster and arrive \"insecure\"). For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your POD and terminate TLS at the gRPC server itself, add the ingress annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPCS\" .","title":"Step 1: kubernetes Deployment"},{"location":"examples/grpc/#step-2-the-kubernetes-service","text":"$ kubectl create -f svc.yaml Here we have a typical service. Nothing special, just routing traffic to the backend application on port 50051 .","title":"Step 2: the kubernetes Service"},{"location":"examples/grpc/#step-3-the-kubernetes-ingress","text":"$ kubectl create -f ingress.yaml A few things to note: We've tagged the ingress with the annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPC\" . This is the magic ingredient that sets up the appropriate nginx configuration to route http/2 traffic to our service. We're terminating TLS at the ingress and have configured an SSL certificate fortune-teller.stack.build . The ingress matches traffic arriving as https://fortune-teller.stack.build:443 and routes unencrypted messages to our kubernetes service.","title":"Step 3: the kubernetes Ingress"},{"location":"examples/grpc/#step-4-test-the-connection","text":"Once we've applied our configuration to kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility: $ grpcurl fortune-teller.stack.build:443 build.stack.fortune.FortuneTeller/Predict { \"message\" : \"Let us endeavor so to live that when we come to die even the undertaker will be sorry.\\n\\t\\t-- Mark Twain, \\\"Pudd'nhead Wilson's Calendar\\\"\" }","title":"Step 4: test the connection"},{"location":"examples/grpc/#debugging-hints","text":"Obviously, watch the logs on your app. Watch the logs for the nginx-ingress-controller (increasing verbosity as needed). Double-check your address and ports. Set the GODEBUG=http2debug=2 environment variable to get detailed http/2 logging on the client and/or server. Study RFC 7540 (http/2) https://tools.ietf.org/html/rfc7540 . 
If you are developing public gRPC endpoints, check out https://proto.stack.build, a protocol buffer / gRPC build service that can use to help make it easier for your users to consume your API.","title":"Debugging Hints"},{"location":"examples/multi-tls/","text":"Multi TLS certificate termination \u00b6 This example uses 2 different certificates to terminate SSL for 2 hostnames. Deploy the controller by creating the rc in the parent dir Create tls secrets for foo.bar.com and bar.baz.com as indicated in the yaml Create multi-tls.yaml This should generate a segment like: $ kubectl exec -it nginx-ingress-controller-6vwd1 -- cat /etc/nginx/nginx.conf | grep \"foo.bar.com\" -B 7 -A 35 server { listen 80; listen 443 ssl http2; ssl_certificate /etc/nginx-ssl/default-foobar.pem; ssl_certificate_key /etc/nginx-ssl/default-foobar.pem; server_name foo.bar.com; if ($scheme = http) { return 301 https://$host$request_uri; } location / { proxy_set_header Host $host; # Pass Real IP proxy_set_header X-Real-IP $remote_addr; # Allow websocket connections proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Proto $pass_access_scheme; proxy_connect_timeout 5s; proxy_send_timeout 60s; proxy_read_timeout 60s; proxy_redirect off; proxy_buffering off; proxy_http_version 1.1; proxy_pass http://default-http-svc-80; } And you should be able to reach your nginx service or http-svc service using a hostname switch: $ kubectl get ing NAME RULE BACKEND ADDRESS AGE foo-tls - 104.154.30.67 13m foo.bar.com / http-svc:80 bar.baz.com / nginx:80 $ curl https://104.154.30.67 -H 'Host:foo.bar.com' -k CLIENT VALUES: client_address=10.245.0.6 command=GET real path=/ query=nil request_version=1.1 request_uri=http://foo.bar.com:8080/ SERVER VALUES: server_version=nginx: 1.9.11 - lua: 10001 HEADERS RECEIVED: accept=*/* connection=close host=foo.bar.com user-agent=curl/7.35.0 x-forwarded-for=10.245.0.1 x-forwarded-host=foo.bar.com x-forwarded-proto=https $ curl https://104.154.30.67 -H 'Host:bar.baz.com' -k Welcome to nginx on Debian! $ curl 104 .154.30.67 default backend - 404","title":"Multi TLS certificate termination"},{"location":"examples/multi-tls/#multi-tls-certificate-termination","text":"This example uses 2 different certificates to terminate SSL for 2 hostnames. 
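Before walking through the steps below, here is a rough sketch of what such a multi-tls Ingress might look like (the Secret names foobar and barbaz are assumptions; the hosts and backends follow the kubectl output shown later in this example):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-tls
  namespace: default
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: foobar   # assumed Secret holding the foo.bar.com certificate
  - hosts:
    - bar.baz.com
    secretName: barbaz   # assumed Secret holding the bar.baz.com certificate
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: http-svc
          servicePort: 80
  - host: bar.baz.com
    http:
      paths:
      - backend:
          serviceName: nginx
          servicePort: 80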
Deploy the controller by creating the rc in the parent dir Create tls secrets for foo.bar.com and bar.baz.com as indicated in the yaml Create multi-tls.yaml This should generate a segment like: $ kubectl exec -it nginx-ingress-controller-6vwd1 -- cat /etc/nginx/nginx.conf | grep \"foo.bar.com\" -B 7 -A 35 server { listen 80; listen 443 ssl http2; ssl_certificate /etc/nginx-ssl/default-foobar.pem; ssl_certificate_key /etc/nginx-ssl/default-foobar.pem; server_name foo.bar.com; if ($scheme = http) { return 301 https://$host$request_uri; } location / { proxy_set_header Host $host; # Pass Real IP proxy_set_header X-Real-IP $remote_addr; # Allow websocket connections proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Proto $pass_access_scheme; proxy_connect_timeout 5s; proxy_send_timeout 60s; proxy_read_timeout 60s; proxy_redirect off; proxy_buffering off; proxy_http_version 1.1; proxy_pass http://default-http-svc-80; } And you should be able to reach your nginx service or http-svc service using a hostname switch: $ kubectl get ing NAME RULE BACKEND ADDRESS AGE foo-tls - 104.154.30.67 13m foo.bar.com / http-svc:80 bar.baz.com / nginx:80 $ curl https://104.154.30.67 -H 'Host:foo.bar.com' -k CLIENT VALUES: client_address=10.245.0.6 command=GET real path=/ query=nil request_version=1.1 request_uri=http://foo.bar.com:8080/ SERVER VALUES: server_version=nginx: 1.9.11 - lua: 10001 HEADERS RECEIVED: accept=*/* connection=close host=foo.bar.com user-agent=curl/7.35.0 x-forwarded-for=10.245.0.1 x-forwarded-host=foo.bar.com x-forwarded-proto=https $ curl https://104.154.30.67 -H 'Host:bar.baz.com' -k Welcome to nginx on Debian! $ curl 104 .154.30.67 default backend - 404","title":"Multi TLS certificate termination"},{"location":"examples/rewrite/","text":"Rewrite \u00b6 This example demonstrates how to use the Rewrite annotations Prerequisites \u00b6 You will need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation , and that you have an ingress controller running in your cluster. Deployment \u00b6 Rewriting can be controlled using the following annotations: Name Description Values nginx.ingress.kubernetes.io/rewrite-target Target URI where the traffic must be redirected string nginx.ingress.kubernetes.io/ssl-redirect Indicates if the location section is accessible SSL only (defaults to True when Ingress contains a Certificate) bool nginx.ingress.kubernetes.io/force-ssl-redirect Forces the redirection to HTTPS even if the Ingress is not TLS Enabled bool nginx.ingress.kubernetes.io/app-root Defines the Application Root that the Controller must redirect if it's in '/' context string nginx.ingress.kubernetes.io/use-regex Indicates if the paths defined on an Ingress use regular expressions bool Examples \u00b6 Rewrite Target \u00b6 Attention Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group . Note Captured groups are saved in numbered placeholders, chronologically, in the form $1 , $2 ... $n . These placeholders can be used as parameters in the rewrite-target annotation. 
Create an Ingress rule with a rewrite annotation: $ echo \" apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 name: rewrite namespace: default spec: rules: - host: rewrite.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: /something/?(.*) \" | kubectl create -f - In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $1 , which is then used as a parameter in the rewrite-target annotation. For example, the ingress definition above will result in the following rewrites: - rewrite.bar.com/something rewrites to rewrite.bar.com/ - rewrite.bar.com/something/ rewrites to rewrite.bar.com/ - rewrite.bar.com/something/new rewrites to rewrite.bar.com/new App Root \u00b6 Create an Ingress rule with a app-root annotation: $ echo \" apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/app-root: /app1 name: approot namespace: default spec: rules: - host: approot.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: / \" | kubectl create -f - Check the rewrite is working $ curl -I -k http://approot.bar.com/ HTTP/1.1 302 Moved Temporarily Server: nginx/1.11.10 Date: Mon, 13 Mar 2017 14 :57:15 GMT Content-Type: text/html Content-Length: 162 Location: http://stickyingress.example.com/app1 Connection: keep-alive","title":"Rewrite"},{"location":"examples/rewrite/#rewrite","text":"This example demonstrates how to use the Rewrite annotations","title":"Rewrite"},{"location":"examples/rewrite/#prerequisites","text":"You will need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation , and that you have an ingress controller running in your cluster.","title":"Prerequisites"},{"location":"examples/rewrite/#deployment","text":"Rewriting can be controlled using the following annotations: Name Description Values nginx.ingress.kubernetes.io/rewrite-target Target URI where the traffic must be redirected string nginx.ingress.kubernetes.io/ssl-redirect Indicates if the location section is accessible SSL only (defaults to True when Ingress contains a Certificate) bool nginx.ingress.kubernetes.io/force-ssl-redirect Forces the redirection to HTTPS even if the Ingress is not TLS Enabled bool nginx.ingress.kubernetes.io/app-root Defines the Application Root that the Controller must redirect if it's in '/' context string nginx.ingress.kubernetes.io/use-regex Indicates if the paths defined on an Ingress use regular expressions bool","title":"Deployment"},{"location":"examples/rewrite/#examples","text":"","title":"Examples"},{"location":"examples/rewrite/#rewrite-target","text":"Attention Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group . Note Captured groups are saved in numbered placeholders, chronologically, in the form $1 , $2 ... $n . These placeholders can be used as parameters in the rewrite-target annotation. 
Create an Ingress rule with a rewrite annotation: $ echo \" apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/rewrite-target: /$1 name: rewrite namespace: default spec: rules: - host: rewrite.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: /something/?(.*) \" | kubectl create -f - In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $1 , which is then used as a parameter in the rewrite-target annotation. For example, the ingress definition above will result in the following rewrites: - rewrite.bar.com/something rewrites to rewrite.bar.com/ - rewrite.bar.com/something/ rewrites to rewrite.bar.com/ - rewrite.bar.com/something/new rewrites to rewrite.bar.com/new","title":"Rewrite Target"},{"location":"examples/rewrite/#app-root","text":"Create an Ingress rule with a app-root annotation: $ echo \" apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/app-root: /app1 name: approot namespace: default spec: rules: - host: approot.bar.com http: paths: - backend: serviceName: http-svc servicePort: 80 path: / \" | kubectl create -f - Check the rewrite is working $ curl -I -k http://approot.bar.com/ HTTP/1.1 302 Moved Temporarily Server: nginx/1.11.10 Date: Mon, 13 Mar 2017 14 :57:15 GMT Content-Type: text/html Content-Length: 162 Location: http://stickyingress.example.com/app1 Connection: keep-alive","title":"App Root"},{"location":"examples/static-ip/","text":"Static IPs \u00b6 This example demonstrates how to assign a static-ip to an Ingress on through the Nginx controller. Prerequisites \u00b6 You need a TLS cert and a test HTTP service for this example. You will also need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation , and that you have an ingress controller running in your cluster. Acquiring an IP \u00b6 Since instances of the nginx controller actually run on nodes in your cluster, by default nginx Ingresses will only get static IPs if your cloudprovider supports static IP assignments to nodes. On GKE/GCE for example, even though nodes get static IPs, the IPs are not retained across upgrade. To acquire a static IP for the nginx ingress controller, simply put it behind a Service of Type=LoadBalancer . First, create a loadbalancer Service and wait for it to acquire an IP $ kubectl create -f static-ip-svc.yaml service \"nginx-ingress-lb\" created $ kubectl get svc nginx-ingress-lb NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-ingress-lb 10.0.138.113 104.154.109.191 80:31457/TCP,443:32240/TCP 15m then, update the ingress controller so it adopts the static IP of the Service by passing the --publish-service flag (the example yaml used in the next step already has it set to \"nginx-ingress-lb\"). $ kubectl create -f nginx-ingress-controller.yaml deployment \"nginx-ingress-controller\" created Assigning the IP to an Ingress \u00b6 From here on every Ingress created with the ingress.class annotation set to nginx will get the IP allocated in the previous step $ kubectl create -f nginx-ingress.yaml ingress \"nginx-ingress\" created $ kubectl get ing nginx-ingress NAME HOSTS ADDRESS PORTS AGE nginx-ingress * 104.154.109.191 80, 443 13m $ curl 104 .154.109.191 -kL CLIENT VALUES: client_address=10.180.1.25 command=GET real path=/ query=nil request_version=1.1 request_uri=http://104.154.109.191:8080/ ... 
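For reference, the static-ip-svc.yaml used above might look roughly like this sketch (the ports match the output above, while the selector is an assumption about how the controller pods are labelled):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
spec:
  # the cloud provider allocates the external IP for this Service
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: nginx-ingress-lb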
Retaining the IP \u00b6 You can test retention by deleting the Ingress $ kubectl delete ing nginx-ingress ingress \"nginx-ingress\" deleted $ kubectl create -f nginx-ingress.yaml ingress \"nginx-ingress\" created $ kubectl get ing nginx-ingress NAME HOSTS ADDRESS PORTS AGE nginx-ingress * 104.154.109.191 80, 443 13m Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all Ingresses, because all requests are proxied through the same set of nginx controllers. Promote ephemeral to static IP \u00b6 To promote the allocated IP to static, you can update the Service manifest $ kubectl patch svc nginx-ingress-lb -p '{\"spec\": {\"loadBalancerIP\": \"104.154.109.191\"}}' \"nginx-ingress-lb\" patched and promote the IP to static (promotion works differently for cloudproviders, provided example is for GKE/GCE) ` $ gcloud compute addresses create nginx-ingress-lb --addresses 104 .154.109.191 --region us-central1 Created [https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/nginx-ingress-lb]. --- address: 104.154.109.191 creationTimestamp: '2017-01-31T16:34:50.089-08:00' description: '' id: '5208037144487826373' kind: compute#address name: nginx-ingress-lb region: us-central1 selfLink: https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/nginx-ingress-lb status: IN_USE users: - us-central1/forwardingRules/a09f6913ae80e11e6a8c542010af0000 Now even if the Service is deleted, the IP will persist, so you can recreate the Service with spec.loadBalancerIP set to 104.154.109.191 .","title":"Static IPs"},{"location":"examples/static-ip/#static-ips","text":"This example demonstrates how to assign a static-ip to an Ingress on through the Nginx controller.","title":"Static IPs"},{"location":"examples/static-ip/#prerequisites","text":"You need a TLS cert and a test HTTP service for this example. You will also need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation , and that you have an ingress controller running in your cluster.","title":"Prerequisites"},{"location":"examples/static-ip/#acquiring-an-ip","text":"Since instances of the nginx controller actually run on nodes in your cluster, by default nginx Ingresses will only get static IPs if your cloudprovider supports static IP assignments to nodes. On GKE/GCE for example, even though nodes get static IPs, the IPs are not retained across upgrade. To acquire a static IP for the nginx ingress controller, simply put it behind a Service of Type=LoadBalancer . First, create a loadbalancer Service and wait for it to acquire an IP $ kubectl create -f static-ip-svc.yaml service \"nginx-ingress-lb\" created $ kubectl get svc nginx-ingress-lb NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-ingress-lb 10.0.138.113 104.154.109.191 80:31457/TCP,443:32240/TCP 15m then, update the ingress controller so it adopts the static IP of the Service by passing the --publish-service flag (the example yaml used in the next step already has it set to \"nginx-ingress-lb\"). 
$ kubectl create -f nginx-ingress-controller.yaml deployment \"nginx-ingress-controller\" created","title":"Acquiring an IP"},{"location":"examples/static-ip/#assigning-the-ip-to-an-ingress","text":"From here on, every Ingress created with the ingress.class annotation set to nginx will get the IP allocated in the previous step: $ kubectl create -f nginx-ingress.yaml ingress \"nginx-ingress\" created $ kubectl get ing nginx-ingress NAME HOSTS ADDRESS PORTS AGE nginx-ingress * 104.154.109.191 80, 443 13m $ curl 104.154.109.191 -kL CLIENT VALUES: client_address=10.180.1.25 command=GET real path=/ query=nil request_version=1.1 request_uri=http://104.154.109.191:8080/ ...","title":"Assigning the IP to an Ingress"},{"location":"examples/static-ip/#retaining-the-ip","text":"You can test retention by deleting the Ingress: $ kubectl delete ing nginx-ingress ingress \"nginx-ingress\" deleted $ kubectl create -f nginx-ingress.yaml ingress \"nginx-ingress\" created $ kubectl get ing nginx-ingress NAME HOSTS ADDRESS PORTS AGE nginx-ingress * 104.154.109.191 80, 443 13m Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all Ingresses, because all requests are proxied through the same set of nginx controllers.","title":"Retaining the IP"},{"location":"examples/static-ip/#promote-ephemeral-to-static-ip","text":"To promote the allocated IP to static, you can update the Service manifest: $ kubectl patch svc nginx-ingress-lb -p '{\"spec\": {\"loadBalancerIP\": \"104.154.109.191\"}}' \"nginx-ingress-lb\" patched and promote the IP to static (promotion works differently across cloud providers; the example provided is for GKE/GCE): $ gcloud compute addresses create nginx-ingress-lb --addresses 104.154.109.191 --region us-central1
Created [https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/nginx-ingress-lb].
---
address: 104.154.109.191
creationTimestamp: '2017-01-31T16:34:50.089-08:00'
description: ''
id: '5208037144487826373'
kind: compute#address
name: nginx-ingress-lb
region: us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/nginx-ingress-lb
status: IN_USE
users:
- us-central1/forwardingRules/a09f6913ae80e11e6a8c542010af0000
Now even if the Service is deleted, the IP will persist, so you can recreate the Service with spec.loadBalancerIP set to 104.154.109.191 .","title":"Promote ephemeral to static IP"},{"location":"examples/tls-termination/","text":"TLS termination \u00b6 This example demonstrates how to terminate TLS through the nginx Ingress controller. Prerequisites \u00b6 You need a TLS cert and a test HTTP service for this example. Deployment \u00b6 Create an ingress.yaml file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test
spec:
  tls:
  - hosts:
    - foo.bar.com
    # This assumes tls-secret exists and the SSL
    # certificate contains a CN for foo.bar.com
    secretName: tls-secret
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          # This assumes http-svc exists and routes to healthy endpoints
          serviceName: http-svc
          servicePort: 80
The following command instructs the controller to terminate traffic using the provided TLS cert, and forward unencrypted HTTP traffic to the test HTTP service: kubectl apply -f ingress.yaml Validation \u00b6 You can confirm that the Ingress works. $ kubectl describe ing nginx-test Name: nginx-test Namespace: default Address: 104.198.183.6 Default backend: default-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080) TLS: tls-secret terminates Rules: Host Path Backends ---- ---- -------- * http-svc:80 () Annotations: Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 7s 7s 1 {nginx-ingress-controller } Normal CREATE default/nginx-test 7s 7s 1 {nginx-ingress-controller } Normal UPDATE default/nginx-test 7s 7s 1 {nginx-ingress-controller } Normal CREATE ip: 104.198.183.6 7s 7s 1 {nginx-ingress-controller } Warning MAPPING Ingress rule 'default/nginx-test' contains no path definition. Assuming / $ curl 104.198.183.6 -L curl: (60) SSL certificate problem: self signed certificate More details here: http://curl.haxx.se/docs/sslcerts.html $ curl 104.198.183.6 -Lk CLIENT VALUES: client_address=10.240.0.4 command=GET real path=/ query=nil request_version=1.1 request_uri=http://35.186.221.137:8080/ SERVER VALUES: server_version=nginx: 1.9.11 - lua: 10001 HEADERS RECEIVED: accept=*/* connection=Keep-Alive host=35.186.221.137 user-agent=curl/7.46.0 via=1.1 google x-cloud-trace-context=f708ea7e369d4514fc90d51d7e27e91d/13322322294276298106 x-forwarded-for=104.132.0.80, 35.186.221.137 x-forwarded-proto=https BODY:","title":"TLS termination"},{"location":"examples/tls-termination/#tls-termination","text":"This example demonstrates how to terminate TLS through the nginx Ingress controller.","title":"TLS termination"},{"location":"examples/tls-termination/#prerequisites","text":"You need a TLS cert and a test HTTP service for this example.","title":"Prerequisites"},{"location":"examples/tls-termination/#deployment","text":"Create an ingress.yaml file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test
spec:
  tls:
  - hosts:
    - foo.bar.com
    # This assumes tls-secret exists and the SSL
    # certificate contains a CN for foo.bar.com
    secretName: tls-secret
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          # This assumes http-svc exists and routes to healthy endpoints
          serviceName: http-svc
          servicePort: 80
The following command instructs the controller to terminate traffic using the provided TLS cert, and forward unencrypted HTTP traffic to the test HTTP service: kubectl apply -f ingress.yaml","title":"Deployment"},{"location":"examples/tls-termination/#validation","text":"You can confirm that the Ingress works. $ kubectl describe ing nginx-test Name: nginx-test Namespace: default Address: 104.198.183.6 Default backend: default-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080) TLS: tls-secret terminates Rules: Host Path Backends ---- ---- -------- * http-svc:80 () Annotations: Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 7s 7s 1 {nginx-ingress-controller } Normal CREATE default/nginx-test 7s 7s 1 {nginx-ingress-controller } Normal UPDATE default/nginx-test 7s 7s 1 {nginx-ingress-controller } Normal CREATE ip: 104.198.183.6 7s 7s 1 {nginx-ingress-controller } Warning MAPPING Ingress rule 'default/nginx-test' contains no path definition.
Assuming / $ curl 104.198.183.6 -L curl: (60) SSL certificate problem: self signed certificate More details here: http://curl.haxx.se/docs/sslcerts.html $ curl 104.198.183.6 -Lk CLIENT VALUES: client_address=10.240.0.4 command=GET real path=/ query=nil request_version=1.1 request_uri=http://35.186.221.137:8080/ SERVER VALUES: server_version=nginx: 1.9.11 - lua: 10001 HEADERS RECEIVED: accept=*/* connection=Keep-Alive host=35.186.221.137 user-agent=curl/7.46.0 via=1.1 google x-cloud-trace-context=f708ea7e369d4514fc90d51d7e27e91d/13322322294276298106 x-forwarded-for=104.132.0.80, 35.186.221.137 x-forwarded-proto=https BODY:","title":"Validation"},{"location":"user-guide/basic-usage/","text":"Basic usage - host based routing \u00b6 ingress-nginx can be used for many use cases, inside various cloud providers, and supports a lot of configurations. In this section you can find a common usage scenario where a single load balancer powered by ingress-nginx will route traffic to 2 different HTTP backend services based on the host name. First of all, follow the instructions to install ingress-nginx. Then imagine that you need to expose 2 HTTP services already installed: myServiceA , myServiceB . Let's say that you want to expose the first at myServiceA.foo.org and the second at myServiceB.foo.org . One possible solution is to create two ingress resources: apiVersion : extensions/v1beta1 kind : Ingress metadata : name : ingress-myServiceA annotations : # use the shared ingress-nginx kubernetes.io/ingress.class : \"nginx\" spec : rules : - host : myServiceA.foo.org http : paths : - path : / backend : serviceName : myServiceA servicePort : 80 --- apiVersion : extensions/v1beta1 kind : Ingress metadata : name : ingress-myServiceB annotations : # use the shared ingress-nginx kubernetes.io/ingress.class : \"nginx\" spec : rules : - host : myServiceB.foo.org http : paths : - path : / backend : serviceName : myServiceB servicePort : 80 When you apply this yaml, 2 ingress resources will be created and managed by the ingress-nginx instance. Nginx is configured to automatically discover all ingresses with the kubernetes.io/ingress.class: \"nginx\" annotation. Please note that the ingress resource should be placed in the same namespace as the backend resource. On many cloud providers ingress-nginx will also create the corresponding Load Balancer resource. All you have to do is get the external IP and add DNS A records at your DNS provider that point myServiceA.foo.org and myServiceB.foo.org to the nginx external IP. Get the external IP by running: kubectl get services -n ingress-nginx","title":"Basic usage"},{"location":"user-guide/basic-usage/#basic-usage-host-based-routing","text":"ingress-nginx can be used for many use cases, inside various cloud providers, and supports a lot of configurations. In this section you can find a common usage scenario where a single load balancer powered by ingress-nginx will route traffic to 2 different HTTP backend services based on the host name. First of all, follow the instructions to install ingress-nginx. Then imagine that you need to expose 2 HTTP services already installed: myServiceA , myServiceB . Let's say that you want to expose the first at myServiceA.foo.org and the second at myServiceB.foo.org . One possible solution is to create two ingress resources: apiVersion : extensions/v1beta1 kind : Ingress metadata : name : ingress-myServiceA annotations : # use the shared ingress-nginx kubernetes.io/ingress.
class : \"nginx\" spec : rules : - host : myServiceA . foo . org http : paths : - path : / backend : serviceName : myServiceA servicePort : 80 --- apiVersion : extensions / v1beta1 kind : Ingress metadata : name : ingress - myServiceB annotations : # use the shared ingress - nginx kubernetes . io / ingress . class : \"nginx\" spec : rules : - host : myServiceB . foo . org http : paths : - path : / backend : serviceName : myServiceB servicePort : 80 When you apply this yaml, 2 ingress resources will be created managed by the ingress-nginx instance. Nginx is configured to automatically discover all ingress with the kubernetes.io/ingress.class: \"nginx\" annotation. Please note that the ingress resource should be placed inside the same namespace of the backend resource. On many cloud providers ingress-nginx will also create the corresponding Load Balancer resource. All you have to do is get the external IP and add a DNS A record inside your DNS provider that point myServiceA.foo.org and myServiceB.foo.org to the nginx external IP. Get the external IP by running: kubectl get services -n ingress-nginx","title":"Basic usage - host based routing"},{"location":"user-guide/cli-arguments/","text":"Command line arguments \u00b6 The following command line arguments are accepted by the Ingress controller executable. They are set in the container spec of the nginx-ingress-controller Deployment manifest Argument Description --alsologtostderr log to standard error as well as files --annotations-prefix string Prefix of the Ingress annotations specific to the NGINX controller. (default \"nginx.ingress.kubernetes.io\") --apiserver-host string Address of the Kubernetes API server. Takes the form \"protocol://address:port\". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted. --configmap string Name of the ConfigMap containing custom global configurations for the controller. --default-backend-service string Service used to serve HTTP requests not matching any known server name (catch-all). Takes the form \"namespace/name\". The controller configures NGINX to forward requests to the first port of this Service. If not specified, a 404 page will be returned directly from NGINX. --default-server-port int When default-backend-service is not specified or specified service does not have any endpoint, a local endpoint with this port will be used to serve 404 page from inside Nginx. --default-ssl-certificate string Secret containing a SSL certificate to be used by the default HTTPS server (catch-all). Takes the form \"namespace/name\". --disable-catch-all Disable support for catch-all Ingresses. --election-id string Election id to use for Ingress status updates. (default \"ingress-controller-leader\") --enable-dynamic-certificates Dynamically serves certificates instead of reloading NGINX when certificates are created, updated, or deleted. Currently does not support OCSP stapling, so --enable-ssl-chain-completion must be turned off (default behaviour). Assuming the certificate is generated with a 2048 bit RSA key/cert pair, this feature can store roughly 5000 certificates. (enabled by default) --enable-ssl-chain-completion Autocomplete SSL certificate chains with missing intermediate CA certificates. A valid certificate chain is required to enable OCSP stapling. Certificates uploaded to Kubernetes must have the \"Authority Information Access\" X.509 v3 extension for this to succeed. (default true) --enable-ssl-passthrough Enable SSL Passthrough. 
--health-check-path string URL path of the health check endpoint. Configured inside the NGINX status server. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default \"/healthz\") --health-check-timeout duration Time limit, in seconds, for a probe to health-check-path to succeed. (default 10) --healthz-port int Port to use for the healthz endpoint. (default 10254) --http-port int Port to use for servicing HTTP traffic. (default 80) --https-port int Port to use for servicing HTTPS traffic. (default 443) --ingress-class string Name of the ingress class this controller satisfies. The class of an Ingress object is set using the annotation \"kubernetes.io/ingress.class\". All ingress classes are satisfied if this parameter is left empty. --kubeconfig string Path to a kubeconfig file containing authorization and API server information. --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files (default true) --profiling Enable profiling via web interface host:port/debug/pprof/ (default true) --publish-service string Service fronting the Ingress controller. Takes the form \"namespace/name\". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies. --publish-status-address string Customized address to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter. --report-node-internal-ip-address Set the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter. --ssl-passthrough-proxy-port int Port to use internally for SSL Passthrough. (default 442) --stderrthreshold severity logs at or above this threshold go to stderr (default 2) --sync-period duration Period at which the controller forces the repopulation of its local object stores. Disabled by default. --sync-rate-limit float32 Define the sync frequency upper limit (default 0.3) --tcp-services-configmap string Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic. --udp-services-configmap string Name of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port name or number. --update-status Update the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true) --update-status-on-shutdown Update the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. (default true) -v , --v Level log level for V logs --version Show release information about the NGINX Ingress controller and exit. --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging --watch-namespace string Namespace the controller watches for updates to Kubernetes objects. 
This includes Ingresses, Services and all configuration resources. All namespaces are watched if this parameter is left empty.","title":"Command line arguments"},{"location":"user-guide/cli-arguments/#command-line-arguments","text":"The following command line arguments are accepted by the Ingress controller executable. They are set in the container spec of the nginx-ingress-controller Deployment manifest. Argument Description --alsologtostderr log to standard error as well as files --annotations-prefix string Prefix of the Ingress annotations specific to the NGINX controller. (default \"nginx.ingress.kubernetes.io\") --apiserver-host string Address of the Kubernetes API server. Takes the form \"protocol://address:port\". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted. --configmap string Name of the ConfigMap containing custom global configurations for the controller. --default-backend-service string Service used to serve HTTP requests not matching any known server name (catch-all). Takes the form \"namespace/name\". The controller configures NGINX to forward requests to the first port of this Service. If not specified, a 404 page will be returned directly from NGINX. --default-server-port int When default-backend-service is not specified or the specified service does not have any endpoint, a local endpoint with this port will be used to serve a 404 page from inside Nginx. --default-ssl-certificate string Secret containing an SSL certificate to be used by the default HTTPS server (catch-all). Takes the form \"namespace/name\". --disable-catch-all Disable support for catch-all Ingresses. --election-id string Election id to use for Ingress status updates. (default \"ingress-controller-leader\") --enable-dynamic-certificates Dynamically serves certificates instead of reloading NGINX when certificates are created, updated, or deleted. Currently does not support OCSP stapling, so --enable-ssl-chain-completion must be turned off (default behaviour). Assuming the certificate is generated with a 2048 bit RSA key/cert pair, this feature can store roughly 5000 certificates. (enabled by default) --enable-ssl-chain-completion Autocomplete SSL certificate chains with missing intermediate CA certificates. A valid certificate chain is required to enable OCSP stapling. Certificates uploaded to Kubernetes must have the \"Authority Information Access\" X.509 v3 extension for this to succeed. (default true) --enable-ssl-passthrough Enable SSL Passthrough. --health-check-path string URL path of the health check endpoint. Configured inside the NGINX status server. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default \"/healthz\") --health-check-timeout duration Time limit, in seconds, for a probe to health-check-path to succeed. (default 10) --healthz-port int Port to use for the healthz endpoint. (default 10254) --http-port int Port to use for servicing HTTP traffic. (default 80) --https-port int Port to use for servicing HTTPS traffic. (default 443) --ingress-class string Name of the ingress class this controller satisfies. The class of an Ingress object is set using the annotation \"kubernetes.io/ingress.class\". All ingress classes are satisfied if this parameter is left empty. --kubeconfig string Path to a kubeconfig file containing authorization and API server information.
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files (default true) --profiling Enable profiling via web interface host:port/debug/pprof/ (default true) --publish-service string Service fronting the Ingress controller. Takes the form \"namespace/name\". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies. --publish-status-address string Customized address to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter. --report-node-internal-ip-address Set the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter. --ssl-passthrough-proxy-port int Port to use internally for SSL Passthrough. (default 442) --stderrthreshold severity logs at or above this threshold go to stderr (default 2) --sync-period duration Period at which the controller forces the repopulation of its local object stores. Disabled by default. --sync-rate-limit float32 Define the sync frequency upper limit (default 0.3) --tcp-services-configmap string Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic. --udp-services-configmap string Name of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port name or number. --update-status Update the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true) --update-status-on-shutdown Update the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. (default true) -v , --v Level log level for V logs --version Show release information about the NGINX Ingress controller and exit. --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging --watch-namespace string Namespace the controller watches for updates to Kubernetes objects. This includes Ingresses, Services and all configuration resources. 
All namespaces are watched if this parameter is left empty.","title":"Command line arguments"},{"location":"user-guide/custom-errors/","text":"Custom errors \u00b6 When the custom-http-errors option is enabled, the Ingress controller configures NGINX so that it passes several HTTP headers down to its default-backend in case of error: Header Value X-Code HTTP status code returned by the request X-Format Value of the Accept header sent by the client X-Original-URI URI that caused the error X-Namespace Namespace where the backend Service is located X-Ingress-Name Name of the Ingress where the backend is defined X-Service-Name Name of the Service backing the backend X-Service-Port Port number of the Service backing the backend X-Request-ID Unique ID that identifies the request - same as for backend service A custom error backend can use this information to return the best possible representation of an error page. For example, if the value of the Accept header sent by the client was application/json , a carefully crafted backend could decide to return the error payload as a JSON document instead of HTML. Important The custom backend is expected to return the correct HTTP status code instead of 200 . NGINX does not change the response from the custom default backend. An example of such a custom backend is available inside the source repository at images/custom-error-pages . See also the Custom errors example.","title":"Custom errors"},{"location":"user-guide/custom-errors/#custom-errors","text":"When the custom-http-errors option is enabled, the Ingress controller configures NGINX so that it passes several HTTP headers down to its default-backend in case of error: Header Value X-Code HTTP status code returned by the request X-Format Value of the Accept header sent by the client X-Original-URI URI that caused the error X-Namespace Namespace where the backend Service is located X-Ingress-Name Name of the Ingress where the backend is defined X-Service-Name Name of the Service backing the backend X-Service-Port Port number of the Service backing the backend X-Request-ID Unique ID that identifies the request - same as for backend service A custom error backend can use this information to return the best possible representation of an error page. For example, if the value of the Accept header sent by the client was application/json , a carefully crafted backend could decide to return the error payload as a JSON document instead of HTML. Important The custom backend is expected to return the correct HTTP status code instead of 200 . NGINX does not change the response from the custom default backend. An example of such a custom backend is available inside the source repository at images/custom-error-pages . See also the Custom errors example.","title":"Custom errors"},{"location":"user-guide/default-backend/","text":"Default backend \u00b6 The default backend is a service which handles all URL paths and hosts the nginx controller doesn't understand (i.e., all the requests that are not mapped with an Ingress). Basically a default backend exposes two URLs: /healthz that returns 200 / that returns 404 Example The sub-directory /images/404-server provides a service which satisfies the requirements for a default backend.
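A minimal sketch of deploying such a default backend could look like the following (the image name and tag are assumptions; check the repository for the ones currently published):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: default-http-backend
      namespace: ingress-nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: default-http-backend
      template:
        metadata:
          labels:
            app: default-http-backend
        spec:
          containers:
          - name: default-http-backend
            # assumed image/tag; it answers /healthz with 200 and / with 404
            image: k8s.gcr.io/defaultbackend:1.4
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: default-http-backend
      namespace: ingress-nginx
    spec:
      selector:
        app: default-http-backend
      ports:
      - port: 80
        targetPort: 8080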
Example The sub-directory /images/custom-error-pages provides an additional service for the purpose of customizing the error pages served via the default backend.","title":"Default backend"},{"location":"user-guide/default-backend/#default-backend","text":"The default backend is a service which handles all URL paths and hosts the nginx controller doesn't understand (i.e., all the requests that are not mapped with an Ingress). Basically a default backend exposes two URLs: /healthz that returns 200 / that returns 404 Example The sub-directory /images/404-server provides a service which satisfies the requirements for a default backend. Example The sub-directory /images/custom-error-pages provides an additional service for the purpose of customizing the error pages served via the default backend.","title":"Default backend"},{"location":"user-guide/exposing-tcp-udp-services/","text":"Exposing TCP and UDP services \u00b6 Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY] It is also possible to use a number or the name of the port. The last two fields are optional. By adding PROXY to either or both of the last two fields we can use Proxy Protocol decoding (listen) and/or encoding (proxy_pass) in a TCP service: https://www.nginx.com/resources/admin-guide/proxy-protocol The next example shows how to expose the service example-go running in the namespace default on port 8080, using the external port 9000: apiVersion : v1 kind : ConfigMap metadata : name : tcp-services namespace : ingress-nginx data : 9000 : \"default/example-go:8080\" Since 1.9.13 NGINX provides UDP Load Balancing . The next example shows how to expose the service kube-dns running in the namespace kube-system on port 53, using the external port 53: apiVersion : v1 kind : ConfigMap metadata : name : udp-services namespace : ingress-nginx data : 53 : \"kube-system/kube-dns:53\" If TCP/UDP proxy support is used, then those ports need to be exposed in the Service defined for the Ingress. apiVersion : v1 kind : Service metadata : name : ingress-nginx namespace : ingress-nginx labels : app.kubernetes.io/name : ingress-nginx app.kubernetes.io/part-of : ingress-nginx spec : type : LoadBalancer ports : - name : http port : 80 targetPort : 80 protocol : TCP - name : https port : 443 targetPort : 443 protocol : TCP - name : proxied-tcp-9000 port : 9000 targetPort : 9000 protocol : TCP selector : app.kubernetes.io/name : ingress-nginx app.kubernetes.io/part-of : ingress-nginx","title":"Exposing TCP and UDP services"},{"location":"user-guide/exposing-tcp-udp-services/#exposing-tcp-and-udp-services","text":"Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY] It is also possible to use a number or the name of the port. The last two fields are optional.
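A sketch of the optional PROXY fields, reusing the example-go Service from the examples above (whether you need decoding, encoding, or both depends on what sits in front of the controller):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: tcp-services
      namespace: ingress-nginx
    data:
      # decode Proxy Protocol on the listener AND send it to the backend;
      # the key is quoted so it stays a string in the ConfigMap
      "9000": "default/example-go:8080:PROXY:PROXY"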
By adding PROXY to either or both of the last two fields we can use Proxy Protocol decoding (listen) and/or encoding (proxy_pass) in a TCP service: https://www.nginx.com/resources/admin-guide/proxy-protocol The next example shows how to expose the service example-go running in the namespace default on port 8080, using the external port 9000: apiVersion : v1 kind : ConfigMap metadata : name : tcp-services namespace : ingress-nginx data : 9000 : \"default/example-go:8080\" Since 1.9.13 NGINX provides UDP Load Balancing . The next example shows how to expose the service kube-dns running in the namespace kube-system on port 53, using the external port 53: apiVersion : v1 kind : ConfigMap metadata : name : udp-services namespace : ingress-nginx data : 53 : \"kube-system/kube-dns:53\" If TCP/UDP proxy support is used, then those ports need to be exposed in the Service defined for the Ingress. apiVersion : v1 kind : Service metadata : name : ingress-nginx namespace : ingress-nginx labels : app.kubernetes.io/name : ingress-nginx app.kubernetes.io/part-of : ingress-nginx spec : type : LoadBalancer ports : - name : http port : 80 targetPort : 80 protocol : TCP - name : https port : 443 targetPort : 443 protocol : TCP - name : proxied-tcp-9000 port : 9000 targetPort : 9000 protocol : TCP selector : app.kubernetes.io/name : ingress-nginx app.kubernetes.io/part-of : ingress-nginx","title":"Exposing TCP and UDP services"},{"location":"user-guide/external-articles/","text":"External Articles \u00b6 Pain(less) NGINX Ingress Accessing Kubernetes Pods from Outside of the Cluster Kubernetes - Redirect HTTP to HTTPS with ELB and the nginx ingress controller Configure Nginx Ingress Controller for TLS termination on Kubernetes on Azure","title":"External Articles"},{"location":"user-guide/external-articles/#external-articles","text":"Pain(less) NGINX Ingress Accessing Kubernetes Pods from Outside of the Cluster Kubernetes - Redirect HTTP to HTTPS with ELB and the nginx ingress controller Configure Nginx Ingress Controller for TLS termination on Kubernetes on Azure","title":"External Articles"},{"location":"user-guide/ingress-path-matching/","text":"Ingress Path Matching \u00b6 Regular Expression Support \u00b6 Important Regular expressions and wildcards are not supported in the spec.rules.host field. Full hostnames must be used. The ingress controller supports case insensitive regular expressions in the spec.rules.http.paths.path field. See the description of the use-regex annotation for more details. apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress annotations : nginx.ingress.kubernetes.io/use-regex : \"true\" spec : rules : - host : test.com http : paths : - path : /foo/.* backend : serviceName : test servicePort : 80 The preceding ingress definition would translate to the following location block within the NGINX configuration for the test.com server: location ~* \"^/foo/.*\" { ... } Path Priority \u00b6 In NGINX, regular expressions follow a first match policy. In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks. Please read the warning before using regular expressions in your ingress definitions.
Example \u00b6 Let the following two ingress definitions be created: apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress-1 spec : rules : - host : test.com http : paths : - path : /foo/bar backend : serviceName : service1 servicePort : 80 - path : /foo/bar/ backend : serviceName : service2 servicePort : 80 apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress-2 annotations : nginx.ingress.kubernetes.io/rewrite-target : /$1 spec : rules : - host : test.com http : paths : - path : /foo/bar/(.+) backend : serviceName : service3 servicePort : 80 The ingress controller would define the following location blocks, in order of descending length, within the NGINX template for the test.com server: location ~* ^/foo/bar/.+ { ... } location ~* \"^/foo/bar/\" { ... } location ~* \"^/foo/bar\" { ... } The following request URIs would match the corresponding location blocks: test.com/foo/bar/1 matches ~* ^/foo/bar/.+ and will go to service 3. test.com/foo/bar/ matches ~* ^/foo/bar/ and will go to service 2. test.com/foo/bar matches ~* ^/foo/bar and will go to service 1. IMPORTANT NOTES : If the use-regex OR rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on. Warning \u00b6 The following example describes a case that may result in unwanted path matching behaviour. This case is expected and is a result of NGINX's first match policy for paths that use the regular expression location modifier . For more information about how a path is chosen, please read the following article: \"Understanding Nginx Server and Location Block Selection Algorithms\" . Example \u00b6 Let the following ingress be defined: apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress-3 annotations : nginx.ingress.kubernetes.io/use-regex : \"true\" spec : rules : - host : test.com http : paths : - path : /foo/bar/bar backend : serviceName : test servicePort : 80 - path : /foo/bar/[A-Z0-9]{3} backend : serviceName : test servicePort : 80 The ingress controller would define the following location blocks (in this order) within the NGINX template for the test.com server: location ~* \"^/foo/bar/[A-Z0-9]{3}\" { ... } location ~* \"^/foo/bar/bar\" { ... } A request to test.com/foo/bar/bar would match the ^/foo/bar/[A-Z0-9]{3} location block instead of the longest EXACT matching path.","title":"Regular expressions in paths"},{"location":"user-guide/ingress-path-matching/#ingress-path-matching","text":"","title":"Ingress Path Matching"},{"location":"user-guide/ingress-path-matching/#regular-expression-support","text":"Important Regular expressions and wildcards are not supported in the spec.rules.host field. Full hostnames must be used. The ingress controller supports case insensitive regular expressions in the spec.rules.http.paths.path field. See the description of the use-regex annotation for more details. apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress annotations : nginx.ingress.kubernetes.io/use-regex : \"true\" spec : rules : - host : test.com http : paths : - path : /foo/.* backend : serviceName : test servicePort : 80 The preceding ingress definition would translate to the following location block within the NGINX configuration for the test.com server: location ~* \"^/foo/.*\" { ...
}","title":"Regular Expression Support"},{"location":"user-guide/ingress-path-matching/#path-priority","text":"In NGINX, regular expressions follow a first match policy. In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks. Please read the warning before using regular expressions in your ingress definitions.","title":"Path Priority"},{"location":"user-guide/ingress-path-matching/#example","text":"Let the following two ingress definitions be created: apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress-1 spec : rules : - host : test.com http : paths : - path : /foo/bar backend : serviceName : service1 servicePort : 80 - path : /foo/bar/ backend : serviceName : service2 servicePort : 80 apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress-2 annotations : nginx.ingress.kubernetes.io/rewrite-target : /$1 spec : rules : - host : test.com http : paths : - path : /foo/bar/(.+) backend : serviceName : service3 servicePort : 80 The ingress controller would define the following location blocks, in order of descending length, within the NGINX template for the test.com server: location ~* ^/foo/bar/.+ { ... } location ~* \"^/foo/bar/\" { ... } location ~* \"^/foo/bar\" { ... } The following request URI's would match the corresponding location blocks: test.com/foo/bar/1 matches ~* ^/foo/bar/.+ and will go to service 3. test.com/foo/bar/ matches ~* ^/foo/bar/ and will go to service 2. test.com/foo/bar matches ~* ^/foo/bar and will go to service 1. IMPORTANT NOTES : If the use-regex OR rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.","title":"Example"},{"location":"user-guide/ingress-path-matching/#warning","text":"The following example describes a case that may inflict unwanted path matching behaviour. This case is expected and a result of NGINX's a first match policy for paths that use the regular expression location modifier . For more information about how a path is chosen, please read the following article: \"Understanding Nginx Server and Location Block Selection Algorithms\" .","title":"Warning"},{"location":"user-guide/ingress-path-matching/#example_1","text":"Let the following ingress be defined: apiVersion : extensions/v1beta1 kind : Ingress metadata : name : test-ingress-3 annotations : nginx.ingress.kubernetes.io/use-regex : \"true\" spec : rules : - host : test.com http : paths : - path : /foo/bar/bar backend : serviceName : test servicePort : 80 - path : /foo/bar/[A-Z0-9]{3} backend : serviceName : test servicePort : 80 The ingress controller would define the following location blocks (in this order) within the NGINX template for the test.com server: location ~* \"^/foo/bar/[A-Z0-9]{3}\" { ... } location ~* \"^/foo/bar/bar\" { ... } A request to test.com/foo/bar/bar would match the ^/foo/[A-Z0-9]{3} location block instead of the longest EXACT matching path.","title":"Example"},{"location":"user-guide/miscellaneous/","text":"Miscellaneous \u00b6 Source IP address \u00b6 By default NGINX uses the content of the header X-Forwarded-For as the source of truth to get information about the client IP address. 
This works without issues in L7 if we configure the setting proxy-real-ip-cidr with the correct information of the IP/network address of the trusted external load balancer. If the ingress controller is running in AWS we need to use the VPC IPv4 CIDR. Another option is to enable proxy protocol using use-proxy-protocol: \"true\" . In this mode NGINX does not use the content of the header to get the source IP address of the connection. Proxy Protocol \u00b6 If you are using an L4 proxy to forward the traffic to the NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you could use the Proxy Protocol for forwarding traffic; this will send the connection details before forwarding the actual TCP connection itself. Amongst others, ELBs in AWS and HAProxy support Proxy Protocol. Websockets \u00b6 Support for websockets is provided by NGINX out of the box. No special configuration required. The only requirement to avoid connections being closed is to increase the values of proxy-read-timeout and proxy-send-timeout . The default value of these settings is 60 seconds . A more adequate value to support websockets is a value higher than one hour ( 3600 ). Important If the NGINX ingress controller is exposed with a service type=LoadBalancer make sure the protocol between the load balancer and NGINX is TCP. Optimizing TLS Time To First Byte (TTTFB) \u00b6 NGINX provides the configuration option ssl_buffer_size to allow the optimization of the TLS record size. This improves the TLS Time To First Byte (TTTFB). The default value in the Ingress controller is 4k (NGINX default is 16k ). Retries in non-idempotent methods \u00b6 Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error. The previous behavior can be restored using retry-non-idempotent=true in the configuration ConfigMap. Limitations \u00b6 Ingress rules for TLS require the definition of the field host Why endpoints and not services \u00b6 The NGINX ingress controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.","title":"Miscellaneous"},{"location":"user-guide/miscellaneous/#miscellaneous","text":"","title":"Miscellaneous"},{"location":"user-guide/miscellaneous/#source-ip-address","text":"By default NGINX uses the content of the header X-Forwarded-For as the source of truth to get information about the client IP address. This works without issues in L7 if we configure the setting proxy-real-ip-cidr with the correct information of the IP/network address of the trusted external load balancer. If the ingress controller is running in AWS we need to use the VPC IPv4 CIDR. Another option is to enable proxy protocol using use-proxy-protocol: \"true\" . In this mode NGINX does not use the content of the header to get the source IP address of the connection.","title":"Source IP address"},{"location":"user-guide/miscellaneous/#proxy-protocol","text":"If you are using an L4 proxy to forward the traffic to the NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you could use the Proxy Protocol for forwarding traffic; this will send the connection details before forwarding the actual TCP connection itself.
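A sketch of the two settings mentioned above in the controller's global ConfigMap (the ConfigMap name is whatever the controller's --configmap flag points to, and the CIDR is a placeholder for your load balancer's address range):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-configuration   # assumed name; must match the --configmap flag
      namespace: ingress-nginx
    data:
      # trust X-Forwarded-For only when it comes from this network
      # (e.g. the VPC IPv4 CIDR on AWS); placeholder value
      proxy-real-ip-cidr: "10.0.0.0/16"
      # or, alternatively, read the client address from the Proxy Protocol header
      use-proxy-protocol: "true"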
Amongst others, ELBs in AWS and HAProxy support Proxy Protocol.","title":"Proxy Protocol"},{"location":"user-guide/miscellaneous/#websockets","text":"Support for websockets is provided by NGINX out of the box. No special configuration required. The only requirement to avoid connections being closed is to increase the values of proxy-read-timeout and proxy-send-timeout . The default value of these settings is 60 seconds . A more adequate value to support websockets is a value higher than one hour ( 3600 ). Important If the NGINX ingress controller is exposed with a service type=LoadBalancer make sure the protocol between the load balancer and NGINX is TCP.","title":"Websockets"},{"location":"user-guide/miscellaneous/#optimizing-tls-time-to-first-byte-tttfb","text":"NGINX provides the configuration option ssl_buffer_size to allow the optimization of the TLS record size. This improves the TLS Time To First Byte (TTTFB). The default value in the Ingress controller is 4k (NGINX default is 16k ).","title":"Optimizing TLS Time To First Byte (TTTFB)"},{"location":"user-guide/miscellaneous/#retries-in-non-idempotent-methods","text":"Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error. The previous behavior can be restored using retry-non-idempotent=true in the configuration ConfigMap.","title":"Retries in non-idempotent methods"},{"location":"user-guide/miscellaneous/#limitations","text":"Ingress rules for TLS require the definition of the field host","title":"Limitations"},{"location":"user-guide/miscellaneous/#why-endpoints-and-not-services","text":"The NGINX ingress controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.","title":"Why endpoints and not services"},{"location":"user-guide/monitoring/","text":"Prometheus and Grafana installation \u00b6 This tutorial will show you how to install Prometheus and Grafana for scraping the metrics of the NGINX Ingress controller. Important This example uses emptyDir volumes for Prometheus and Grafana. This means once the pod gets terminated you will lose all the data. Before You Begin \u00b6 The NGINX Ingress controller should already be deployed according to the deployment instructions here . Note that the yaml files used in this tutorial are stored in the deploy/monitoring folder of the GitHub repository kubernetes/ingress-nginx . Deploy and configure Prometheus Server \u00b6 The Prometheus server must be configured so that it can discover endpoints of services. If a Prometheus server is already running in the cluster and if it is configured in a way that it can find the ingress controller pods, no extra configuration is needed. If there is no existing Prometheus server running, the rest of this tutorial will guide you through the steps needed to deploy a properly configured Prometheus server.
Running the following command deploys the Prometheus configuration in Kubernetes: kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/monitoring/configuration.yaml configmap \"prometheus-configuration\" created Running the following command deploys Prometheus in Kubernetes: kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/monitoring/prometheus.yaml clusterrole \"prometheus-server\" created serviceaccount \"prometheus-server\" created clusterrolebinding \"prometheus-server\" created deployment \"prometheus-server\" created service \"prometheus-server\" created Prometheus Dashboard \u00b6 Open the Prometheus dashboard in a web browser: kubectl get svc -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default-http-backend ClusterIP 10.103.59.201 80/TCP 3d ingress-nginx NodePort 10.97.44.72 80:30100/TCP,443:30154/TCP,10254:32049/TCP 5h prometheus-server NodePort 10.98.233.86 9090:32630/TCP 1m Obtain the IP address of the nodes in the running cluster: kubectl get nodes -o wide In some cases where the nodes only have internal IP addresses, we need to execute: kubectl get nodes --selector=kubernetes.io/role!=master -o jsonpath={.items[*].status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address} 10.192.0.2 10.192.0.3 10.192.0.4 Open your browser and visit the following URL: http://{node IP address}:{prometheus-svc-nodeport} to load the Prometheus Dashboard. According to the above example, this URL will be http://10.192.0.3:32630 Grafana \u00b6 kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/monitoring/grafana.yaml kubectl get svc -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default-http-backend ClusterIP 10.103.59.201 80/TCP 3d ingress-nginx NodePort 10.97.44.72 80:30100/TCP,443:30154/TCP,10254:32049/TCP 5h prometheus-server NodePort 10.98.233.86 9090:32630/TCP 10m grafana NodePort 10.98.233.87 3000:31086/TCP 10m Open your browser and visit the following URL: http://{node IP address}:{grafana-svc-nodeport} to load the Grafana Dashboard. According to the above example, this URL will be http://10.192.0.3:31086 The username and password are both admin After logging in, you can import the Grafana dashboard from https://github.com/kubernetes/ingress-nginx/tree/master/deploy/grafana/dashboards","title":"Prometheus and Grafana installation"},{"location":"user-guide/monitoring/#prometheus-and-grafana-installation","text":"This tutorial will show you how to install Prometheus and Grafana for scraping the metrics of the NGINX Ingress controller. Important This example uses emptyDir volumes for Prometheus and Grafana. This means once the pod gets terminated you will lose all the data.","title":"Prometheus and Grafana installation"},{"location":"user-guide/monitoring/#before-you-begin","text":"The NGINX Ingress controller should already be deployed according to the deployment instructions here . Note that the yaml files used in this tutorial are stored in the deploy/monitoring folder of the GitHub repository kubernetes/ingress-nginx .","title":"Before You Begin"},{"location":"user-guide/monitoring/#deploy-and-configure-prometheus-server","text":"The Prometheus server must be configured so that it can discover endpoints of services. If a Prometheus server is already running in the cluster and if it is configured in a way that it can find the ingress controller pods, no extra configuration is needed.
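For instance, annotation-based discovery usually keys off the controller pods; a partial sketch of the controller's pod template metadata (assuming your Prometheus configuration honors the common prometheus.io/* scrape annotations, which is not guaranteed):

    # pod template metadata of the nginx-ingress-controller Deployment (partial sketch)
    metadata:
      annotations:
        prometheus.io/scrape: "true"   # assumption: your Prometheus config uses these annotations
        prometheus.io/port: "10254"    # default port of the controller's metrics endpoint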
If there is no existing Prometheus server running, the rest of this tutorial will guide you through the steps needed to deploy a properly configured Prometheus server. Running the following command deploys the Prometheus configuration in Kubernetes: kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/monitoring/configuration.yaml configmap \"prometheus-configuration\" created Running the following command deploys Prometheus in Kubernetes: kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/monitoring/prometheus.yaml clusterrole \"prometheus-server\" created serviceaccount \"prometheus-server\" created clusterrolebinding \"prometheus-server\" created deployment \"prometheus-server\" created service \"prometheus-server\" created","title":"Deploy and configure Prometheus Server"},{"location":"user-guide/monitoring/#prometheus-dashboard","text":"Open the Prometheus dashboard in a web browser: kubectl get svc -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default-http-backend ClusterIP 10.103.59.201 80/TCP 3d ingress-nginx NodePort 10.97.44.72 80:30100/TCP,443:30154/TCP,10254:32049/TCP 5h prometheus-server NodePort 10.98.233.86 9090:32630/TCP 1m Obtain the IP address of the nodes in the running cluster: kubectl get nodes -o wide In some cases where the nodes only have internal IP addresses, we need to execute: kubectl get nodes --selector=kubernetes.io/role!=master -o jsonpath={.items[*].status.addresses[?\\(@.type==\\\"InternalIP\\\"\\)].address} 10.192.0.2 10.192.0.3 10.192.0.4 Open your browser and visit the following URL: http://{node IP address}:{prometheus-svc-nodeport} to load the Prometheus Dashboard. According to the above example, this URL will be http://10.192.0.3:32630","title":"Prometheus Dashboard"},{"location":"user-guide/monitoring/#grafana","text":"kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/monitoring/grafana.yaml kubectl get svc -n ingress-nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default-http-backend ClusterIP 10.103.59.201 80/TCP 3d ingress-nginx NodePort 10.97.44.72 80:30100/TCP,443:30154/TCP,10254:32049/TCP 5h prometheus-server NodePort 10.98.233.86 9090:32630/TCP 10m grafana NodePort 10.98.233.87 3000:31086/TCP 10m Open your browser and visit the following URL: http://{node IP address}:{grafana-svc-nodeport} to load the Grafana Dashboard. According to the above example, this URL will be http://10.192.0.3:31086 The username and password are both admin After logging in, you can import the Grafana dashboard from https://github.com/kubernetes/ingress-nginx/tree/master/deploy/grafana/dashboards","title":"Grafana"},{"location":"user-guide/multiple-ingress/","text":"Multiple Ingress controllers \u00b6 If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress such as GKE, you need to specify the annotation kubernetes.io/ingress.class: \"nginx\" in all ingresses that you would like the ingress-nginx controller to claim. For instance, metadata : name : foo annotations : kubernetes.io/ingress.class : \"gce\" will target the GCE controller, forcing the nginx controller to ignore it, while an annotation like metadata : name : foo annotations : kubernetes.io/ingress.class : \"nginx\" will target the nginx controller, forcing the GCE controller to ignore it.
To reiterate, setting the annotation to any value which does not match a valid ingress class will force the NGINX Ingress controller to ignore your Ingress. If you are only running a single NGINX ingress controller, this can be achieved by setting the annotation to any value except \"nginx\" or an empty string. Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller. Multiple ingress-nginx controllers \u00b6 This mechanism also provides users the ability to run multiple NGINX ingress controllers (e.g. one which serves public traffic, one which serves \"internal\" traffic). To do this, the option --ingress-class must be changed to a value unique for the cluster within the definition of the replication controller. Here is a partial example: spec : template : spec : containers : - name : nginx-ingress-internal-controller args : - /nginx-ingress-controller - '--election-id=ingress-controller-leader-internal' - '--ingress-class=nginx-internal' - '--configmap=ingress/nginx-ingress-internal-controller' Important Deploying multiple Ingress controllers, of different types (e.g., ingress-nginx & gce ), and not specifying a class annotation will result in both or all controllers fighting to satisfy the Ingress, and all of them racing to update the Ingress status field in confusing ways. When running multiple ingress-nginx controllers, it will only process an unset class annotation if one of the controllers uses the default --ingress-class value (see IsValid method in internal/ingress/annotations/class/main.go ), otherwise the class annotation becomes required.","title":"Multiple Ingress controllers"},{"location":"user-guide/multiple-ingress/#multiple-ingress-controllers","text":"If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress such as GKE, you need to specify the annotation kubernetes.io/ingress.class: \"nginx\" in all ingresses that you would like the ingress-nginx controller to claim. For instance, metadata : name : foo annotations : kubernetes.io/ingress.class : \"gce\" will target the GCE controller, forcing the nginx controller to ignore it, while an annotation like metadata : name : foo annotations : kubernetes.io/ingress.class : \"nginx\" will target the nginx controller, forcing the GCE controller to ignore it. To reiterate, setting the annotation to any value which does not match a valid ingress class will force the NGINX Ingress controller to ignore your Ingress. If you are only running a single NGINX ingress controller, this can be achieved by setting the annotation to any value except \"nginx\" or an empty string. Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.","title":"Multiple Ingress controllers"},{"location":"user-guide/multiple-ingress/#multiple-ingress-nginx-controllers","text":"This mechanism also provides users the ability to run multiple NGINX ingress controllers (e.g. one which serves public traffic, one which serves \"internal\" traffic). To do this, the option --ingress-class must be changed to a value unique for the cluster within the definition of the replication controller.
Here is a partial example: spec : template : spec : containers : - name : nginx-ingress-internal-controller args : - /nginx-ingress-controller - '--election-id=ingress-controller-leader-internal' - '--ingress-class=nginx-internal' - '--configmap=ingress/nginx-ingress-internal-controller' Important Deploying multiple Ingress controllers, of different types (e.g., ingress-nginx & gce ), and not specifying a class annotation will result in both or all controllers fighting to satisfy the Ingress, and all of them racing to update the Ingress status field in confusing ways. When running multiple ingress-nginx controllers, it will only process an unset class annotation if one of the controllers uses the default --ingress-class value (see IsValid method in internal/ingress/annotations/class/main.go ), otherwise the class annotation becomes required.","title":"Multiple ingress-nginx controllers"},{"location":"user-guide/tls/","text":"TLS/HTTPS \u00b6 TLS Secrets \u00b6 Anytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret. You can generate a self-signed certificate and private key with: $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${ KEY_FILE } -out ${ CERT_FILE } -subj \"/CN= ${ HOST } /O= ${ HOST } \" Then create the secret in the cluster via: kubectl create secret tls ${ CERT_NAME } --key ${ KEY_FILE } --cert ${ CERT_FILE } The resulting secret will be of type kubernetes.io/tls . Default SSL Certificate \u00b6 NGINX provides the option to configure a server as a catch-all with server_name for requests that do not match any of the configured server names. This configuration works out-of-the-box for HTTP traffic. For HTTPS, a certificate is naturally required. For this reason the Ingress controller provides the flag --default-ssl-certificate . The secret referred to by this flag contains the default certificate to be used when accessing the catch-all server. If this flag is not provided NGINX will use a self-signed certificate. For instance, if you have a TLS secret foo-tls in the default namespace, add --default-ssl-certificate=default/foo-tls in the nginx-controller deployment. SSL Passthrough \u00b6 The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by default. This is required to enable passthrough backends in Ingress objects. Warning This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty. SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation, which requires compatible clients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back and forth between the backend and the client. If there is no hostname matching the requested host name, the request is handed over to NGINX on the configured passthrough proxy port (default: 442), which proxies the request to the default backend. Note Unlike HTTP backends, traffic to Passthrough backends is sent to the clusterIP of the backing Service instead of individual Endpoints. HTTP Strict Transport Security \u00b6 HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header.
Once a supported browser receives this header, that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS. HSTS is enabled by default. To disable this behavior use hsts: \"false\" in the configuration ConfigMap . Server-side HTTPS enforcement through redirect \u00b6 By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress. This can be disabled globally using ssl-redirect: \"false\" in the NGINX config map , or per-Ingress with the nginx.ingress.kubernetes.io/ssl-redirect: \"false\" annotation in the particular resource. Tip When using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: \"true\" annotation in the particular resource. Automated Certificate Management with Kube-Lego \u00b6 Tip Kube-Lego has reached end-of-life and is being replaced by cert-manager . Kube-Lego automatically requests missing or expired certificates from Let's Encrypt by monitoring ingress resources and their referenced secrets. To enable this for an ingress resource you have to add an annotation: kubectl annotate ing ingress-demo kubernetes.io/tls-acme=\"true\" To set up Kube-Lego you can take a look at this full example . The first version to fully support Kube-Lego is Nginx Ingress controller 0.8. Default TLS Version and Ciphers \u00b6 To provide the most secure baseline configuration possible, nginx-ingress defaults to using TLS 1.2 only and a secure set of TLS ciphers . Legacy TLS \u00b6 The default configuration, though secure, does not support some older browsers and operating systems. For instance, TLS 1.1+ is only enabled by default from Android 5.0 on. At the time of writing, May 2018, approximately 15% of Android devices are not compatible with nginx-ingress's default configuration. To change this default behavior, use a ConfigMap . A sample ConfigMap fragment to allow these older clients to connect could look something like the following: kind : ConfigMap apiVersion : v1 metadata : name : nginx-config data : ssl-ciphers : \"ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA\" ssl-protocols : \"TLSv1 TLSv1.1 TLSv1.2\"","title":"TLS/HTTPS"},{"location":"user-guide/tls/#tlshttps","text":"","title":"TLS/HTTPS"},{"location":"user-guide/tls/#tls-secrets","text":"Anytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret.
You can generate a self-signed certificate and private key with: $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${ KEY_FILE } -out ${ CERT_FILE } -subj \"/CN= ${ HOST } /O= ${ HOST } \" Then create the secret in the cluster via: kubectl create secret tls ${ CERT_NAME } --key ${ KEY_FILE } --cert ${ CERT_FILE } The resulting secret will be of type kubernetes.io/tls .","title":"TLS Secrets"},{"location":"user-guide/tls/#default-ssl-certificate","text":"NGINX provides the option to configure a server as a catch-all with server_name for requests that do not match any of the configured server names. This configuration works out-of-the-box for HTTP traffic. For HTTPS, a certificate is naturally required. For this reason the Ingress controller provides the flag --default-ssl-certificate . The secret referred to by this flag contains the default certificate to be used when accessing the catch-all server. If this flag is not provided NGINX will use a self-signed certificate. For instance, if you have a TLS secret foo-tls in the default namespace, add --default-ssl-certificate=default/foo-tls in the nginx-controller deployment.","title":"Default SSL Certificate"},{"location":"user-guide/tls/#ssl-passthrough","text":"The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by default. This is required to enable passthrough backends in Ingress objects. Warning This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty. SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation, which requires compatible clients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back and forth between the backend and the client. If there is no hostname matching the requested host name, the request is handed over to NGINX on the configured passthrough proxy port (default: 442), which proxies the request to the default backend. Note Unlike HTTP backends, traffic to Passthrough backends is sent to the clusterIP of the backing Service instead of individual Endpoints.","title":"SSL Passthrough"},{"location":"user-guide/tls/#http-strict-transport-security","text":"HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header, that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS. HSTS is enabled by default. To disable this behavior use hsts: \"false\" in the configuration ConfigMap .","title":"HTTP Strict Transport Security"},{"location":"user-guide/tls/#server-side-https-enforcement-through-redirect","text":"By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress. This can be disabled globally using ssl-redirect: \"false\" in the NGINX config map , or per-Ingress with the nginx.ingress.kubernetes.io/ssl-redirect: \"false\" annotation in the particular resource. Tip When using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available.
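A minimal sketch of such a resource (the Ingress name, host and backend Service are hypothetical; the annotation is the real one):

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: foo                  # hypothetical
      annotations:
        nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    spec:
      rules:
      - host: foo.bar.com        # hypothetical
        http:
          paths:
          - path: /
            backend:
              serviceName: http-svc   # hypothetical backend Service
              servicePort: 80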
","title":"Server-side HTTPS enforcement through redirect"},{"location":"user-guide/tls/#automated-certificate-management-with-kube-lego","text":"Tip Kube-Lego has reached end-of-life and is being replaced by cert-manager . Kube-Lego automatically requests missing or expired certificates from Let's Encrypt by monitoring ingress resources and their referenced secrets. To enable this for an ingress resource you have to add an annotation: kubectl annotate ing ingress-demo kubernetes.io/tls-acme=\"true\" To set up Kube-Lego you can take a look at this full example . The first version to fully support Kube-Lego is Nginx Ingress controller 0.8.","title":"Automated Certificate Management with Kube-Lego"},{"location":"user-guide/tls/#default-tls-version-and-ciphers","text":"To provide the most secure baseline configuration possible, nginx-ingress defaults to using TLS 1.2 only and a secure set of TLS ciphers .","title":"Default TLS Version and Ciphers"},{"location":"user-guide/tls/#legacy-tls","text":"The default configuration, though secure, does not support some older browsers and operating systems. For instance, TLS 1.1+ is only enabled by default from Android 5.0 on. At the time of writing, May 2018, approximately 15% of Android devices are not compatible with nginx-ingress's default configuration. To change this default behavior, use a ConfigMap . A sample ConfigMap fragment to allow these older clients to connect could look something like the following: kind : ConfigMap apiVersion : v1 metadata : name : nginx-config data : ssl-ciphers : \"ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA\" ssl-protocols : \"TLSv1 TLSv1.1 TLSv1.2\"","title":"Legacy TLS"},{"location":"user-guide/nginx-configuration/","text":"NGINX Configuration \u00b6 There are three ways to customize NGINX: ConfigMap : using a ConfigMap to set global configurations in NGINX. Annotations : use this if you want a specific configuration for a particular Ingress rule. Custom template : when more specific settings are required, like open_file_cache , adjusting listen options such as rcvbuf , or when it is not possible to change the configuration through the ConfigMap.","title":"Introduction"},{"location":"user-guide/nginx-configuration/#nginx-configuration","text":"There are three ways to customize NGINX: ConfigMap : using a ConfigMap to set global configurations in NGINX. Annotations : use this if you want a specific configuration for a particular Ingress rule. Custom template : when more specific settings are required, like open_file_cache , adjusting listen options such as rcvbuf , or when it is not possible to change the configuration through the ConfigMap.
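To illustrate the ConfigMap option, a minimal sketch (assuming the controller reads a ConfigMap named nginx-configuration ; the actual name depends on how the controller was deployed):
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration   # must match the ConfigMap the controller was started with
data:
  proxy-body-size: \"16m\"   # example of a global setting applied to all Ingresses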
","title":"NGINX Configuration"},{"location":"user-guide/nginx-configuration/annotations/","text":"Annotations \u00b6 You can add these Kubernetes annotations to specific Ingress objects to customize their behavior. Tip Annotation keys and values can only be strings. Other types, such as boolean or numeric values, must be quoted, i.e. \"true\" , \"false\" , \"100\" . Note The annotation prefix can be changed using the --annotations-prefix command line argument , but the default is nginx.ingress.kubernetes.io , as described in the table below. Name type nginx.ingress.kubernetes.io/app-root string nginx.ingress.kubernetes.io/affinity cookie nginx.ingress.kubernetes.io/auth-realm string nginx.ingress.kubernetes.io/auth-secret string nginx.ingress.kubernetes.io/auth-type basic or digest nginx.ingress.kubernetes.io/auth-tls-secret string nginx.ingress.kubernetes.io/auth-tls-verify-depth number nginx.ingress.kubernetes.io/auth-tls-verify-client string nginx.ingress.kubernetes.io/auth-tls-error-page string nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream \"true\" or \"false\" nginx.ingress.kubernetes.io/auth-url string nginx.ingress.kubernetes.io/auth-snippet string nginx.ingress.kubernetes.io/backend-protocol string nginx.ingress.kubernetes.io/canary \"true\" or \"false\" nginx.ingress.kubernetes.io/canary-by-header string nginx.ingress.kubernetes.io/canary-by-header-value string nginx.ingress.kubernetes.io/canary-by-cookie string nginx.ingress.kubernetes.io/canary-weight number nginx.ingress.kubernetes.io/client-body-buffer-size string nginx.ingress.kubernetes.io/configuration-snippet string nginx.ingress.kubernetes.io/custom-http-errors []int nginx.ingress.kubernetes.io/default-backend string nginx.ingress.kubernetes.io/enable-cors \"true\" or \"false\" nginx.ingress.kubernetes.io/cors-allow-origin string nginx.ingress.kubernetes.io/cors-allow-methods string nginx.ingress.kubernetes.io/cors-allow-headers string nginx.ingress.kubernetes.io/cors-allow-credentials \"true\" or \"false\" nginx.ingress.kubernetes.io/cors-max-age number nginx.ingress.kubernetes.io/force-ssl-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/from-to-www-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/http2-push-preload \"true\" or \"false\" nginx.ingress.kubernetes.io/limit-connections number nginx.ingress.kubernetes.io/limit-rps number nginx.ingress.kubernetes.io/permanent-redirect string nginx.ingress.kubernetes.io/permanent-redirect-code number nginx.ingress.kubernetes.io/temporal-redirect string nginx.ingress.kubernetes.io/proxy-body-size string nginx.ingress.kubernetes.io/proxy-cookie-domain string nginx.ingress.kubernetes.io/proxy-cookie-path string nginx.ingress.kubernetes.io/proxy-connect-timeout number nginx.ingress.kubernetes.io/proxy-send-timeout number nginx.ingress.kubernetes.io/proxy-read-timeout number nginx.ingress.kubernetes.io/proxy-next-upstream string nginx.ingress.kubernetes.io/proxy-next-upstream-tries number nginx.ingress.kubernetes.io/proxy-request-buffering string nginx.ingress.kubernetes.io/proxy-redirect-from string nginx.ingress.kubernetes.io/proxy-redirect-to string nginx.ingress.kubernetes.io/enable-rewrite-log \"true\" or \"false\" nginx.ingress.kubernetes.io/rewrite-target URI nginx.ingress.kubernetes.io/satisfy string
nginx.ingress.kubernetes.io/secure-verify-ca-secret string nginx.ingress.kubernetes.io/server-alias string nginx.ingress.kubernetes.io/server-snippet string nginx.ingress.kubernetes.io/service-upstream \"true\" or \"false\" nginx.ingress.kubernetes.io/session-cookie-name string nginx.ingress.kubernetes.io/session-cookie-path string nginx.ingress.kubernetes.io/ssl-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/ssl-passthrough \"true\" or \"false\" nginx.ingress.kubernetes.io/upstream-hash-by string nginx.ingress.kubernetes.io/x-forwarded-prefix string nginx.ingress.kubernetes.io/load-balance string nginx.ingress.kubernetes.io/upstream-vhost string nginx.ingress.kubernetes.io/whitelist-source-range CIDR nginx.ingress.kubernetes.io/proxy-buffering string nginx.ingress.kubernetes.io/proxy-buffers-number number nginx.ingress.kubernetes.io/proxy-buffer-size string nginx.ingress.kubernetes.io/ssl-ciphers string nginx.ingress.kubernetes.io/connection-proxy-header string nginx.ingress.kubernetes.io/enable-access-log \"true\" or \"false\" nginx.ingress.kubernetes.io/lua-resty-waf string nginx.ingress.kubernetes.io/lua-resty-waf-debug \"true\" or \"false\" nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets string nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules string nginx.ingress.kubernetes.io/lua-resty-waf-allow-unknown-content-types \"true\" or \"false\" nginx.ingress.kubernetes.io/lua-resty-waf-score-threshold number nginx.ingress.kubernetes.io/lua-resty-waf-process-multipart-body \"true\" or \"false\" nginx.ingress.kubernetes.io/enable-influxdb \"true\" or \"false\" nginx.ingress.kubernetes.io/influxdb-measurement string nginx.ingress.kubernetes.io/influxdb-port string nginx.ingress.kubernetes.io/influxdb-host string nginx.ingress.kubernetes.io/influxdb-server-name string nginx.ingress.kubernetes.io/use-regex bool nginx.ingress.kubernetes.io/enable-modsecurity bool nginx.ingress.kubernetes.io/enable-owasp-core-rules bool nginx.ingress.kubernetes.io/modsecurity-transaction-id string nginx.ingress.kubernetes.io/modsecurity-snippet string Canary \u00b6 In some cases, you may want to \"canary\" a new set of changes by sending a small number of requests to a different service than the production service. The canary annotation enables the Ingress spec to act as an alternative service for requests to route to depending on the rules applied. The following annotations to configure canary can be enabled after nginx.ingress.kubernetes.io/canary: \"true\" is set: nginx.ingress.kubernetes.io/canary-by-header : The header to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to always , it will be routed to the canary. When the header is set to never , it will never be routed to the canary. For any other value, the header will be ignored and the request compared against the other canary rules by precedence. nginx.ingress.kubernetes.io/canary-by-header-value : The header value to match for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to this value, it will be routed to the canary. For any other header value, the header will be ignored and the request compared against the other canary rules by precedence. This annotation has to be used together with nginx.ingress.kubernetes.io/canary-by-header . The annotation is an extension of the nginx.ingress.kubernetes.io/canary-by-header to allow customizing the header value instead of using hardcoded values. It doesn't have any effect if the nginx.ingress.kubernetes.io/canary-by-header annotation is not defined. nginx.ingress.kubernetes.io/canary-by-cookie : The cookie to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the cookie value is set to always , it will be routed to the canary. When the cookie is set to never , it will never be routed to the canary. For any other value, the cookie will be ignored and the request compared against the other canary rules by precedence. nginx.ingress.kubernetes.io/canary-weight : The integer based (0 - 100) percent of random requests that should be routed to the service specified in the canary Ingress. A weight of 0 implies that no requests will be sent to the service in the Canary ingress by this canary rule. A weight of 100 implies all requests will be sent to the alternative service specified in the Ingress. Canary rules are evaluated in order of precedence. Precedence is as follows: canary-by-header -> canary-by-cookie -> canary-weight Note that when you mark an ingress as canary, then all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except nginx.ingress.kubernetes.io/load-balance and nginx.ingress.kubernetes.io/upstream-hash-by . Known Limitations Currently a maximum of one canary ingress can be applied per Ingress rule.
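For illustration, a sketch of a canary Ingress routing 10% of requests to an alternative service (names are placeholders):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-canary   # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/canary: \"true\"
    nginx.ingress.kubernetes.io/canary-weight: \"10\"   # 10% of random requests
spec:
  rules:
  - host: app.example.com   # same host as the main Ingress
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service-canary   # placeholder canary Service
          servicePort: 80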
Rewrite \u00b6 In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. Set the annotation nginx.ingress.kubernetes.io/rewrite-target to the path expected by the service. If the Application Root is exposed in a different path and needs to be redirected, set the annotation nginx.ingress.kubernetes.io/app-root to redirect requests for / . Example Please check the rewrite example. Session Affinity \u00b6 The annotation nginx.ingress.kubernetes.io/affinity enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server. The only affinity type available for NGINX is cookie . Attention If more than one Ingress is defined for a host and at least one Ingress uses nginx.ingress.kubernetes.io/affinity: cookie , then only paths on the Ingress using nginx.ingress.kubernetes.io/affinity will use session cookie affinity. All paths defined on other Ingresses for the host will be load balanced through the random selection of a backend server. Example Please check the affinity example. Cookie affinity \u00b6 If you use the cookie affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation nginx.ingress.kubernetes.io/session-cookie-name . The default is to create a cookie named 'INGRESSCOOKIE'. The NGINX annotation nginx.ingress.kubernetes.io/session-cookie-path defines the path that will be set on the cookie. This is optional unless the annotation nginx.ingress.kubernetes.io/use-regex is set to true; Session cookie paths do not support regex. Authentication \u00b6 It is possible to add authentication by adding additional annotations in the Ingress rule. The source of the authentication is a secret that contains usernames and passwords inside the key auth . The annotations are: nginx.ingress.kubernetes.io/auth-type: [basic|digest] Indicates the HTTP Authentication Type: Basic or Digest Access Authentication .
nginx.ingress.kubernetes.io/auth-secret: secretName The name of the Secret that contains the usernames and passwords which are granted access to the path s defined in the Ingress rules. This annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace. nginx.ingress.kubernetes.io/auth-realm: \"realm string\" Example Please check the auth example. Custom NGINX upstream hashing \u00b6 NGINX supports load balancing by client-server mapping based on consistent hashing for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The ketama consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes. There is a special mode of upstream hashing called subset. In this mode, upstream servers are grouped into subsets, and stickiness works by mapping keys to a subset instead of individual upstream servers. A specific server is chosen uniformly at random from the selected sticky subset. It provides a balance between stickiness and load distribution. To enable consistent hashing for a backend: nginx.ingress.kubernetes.io/upstream-hash-by : the nginx variable, text value or any combination thereof to use for consistent hashing. For example nginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri\" to consistently hash upstream requests by the current request URI. \"subset\" hashing can be enabled setting nginx.ingress.kubernetes.io/upstream-hash-by-subset : \"true\". This maps requests to a subset of nodes instead of a single one. upstream-hash-by-subset-size determines the size of each subset (default 3). Please check the chashsubset example. Custom NGINX load balancing \u00b6 This is similar to load-balance in ConfigMap , but configures load balancing algorithm per ingress. Note that nginx.ingress.kubernetes.io/upstream-hash-by takes preference over this. If this and nginx.ingress.kubernetes.io/upstream-hash-by are not set then we fall back to using the globally configured load balancing algorithm. Custom NGINX upstream vhost \u00b6 This configuration setting allows you to control the value for host in the following statement: proxy_set_header Host $host , which forms part of the location block. This is useful if you need to call the upstream server by something other than $host . Client Certificate Authentication \u00b6 It is possible to enable Client Certificate Authentication using additional annotations in the Ingress rule. The annotations are: nginx.ingress.kubernetes.io/auth-tls-secret: secretName : The name of the Secret that contains the full Certificate Authority chain ca.crt that is enabled to authenticate against this Ingress. This annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace. nginx.ingress.kubernetes.io/auth-tls-verify-depth : The validation depth between the provided client certificate and the Certification Authority chain. nginx.ingress.kubernetes.io/auth-tls-verify-client : Enables verification of client certificates. nginx.ingress.kubernetes.io/auth-tls-error-page : The URL/Page that the user should be redirected to in case of a Certificate Authentication Error nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream : Indicates if the received certificates should be passed or not to the upstream server. By default this is disabled. Example Please check the client-certs example. Attention TLS with Client Authentication is not possible in Cloudflare and might result in unexpected behavior. Cloudflare only allows Authenticated Origin Pulls and requires using their own certificate: https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/ Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial: https://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls
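Putting these annotations together, a sketch (assuming a hypothetical Secret default/ca-secret holding the CA chain in ca.crt ):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-mtls   # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-secret: \"default/ca-secret\"   # hypothetical Secret
    nginx.ingress.kubernetes.io/auth-tls-verify-client: \"on\"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: \"1\"
spec:
  rules:
  - host: secure.example.com   # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service   # placeholder Service
          servicePort: 80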
Configuration snippet \u00b6 Using this annotation you can add additional configuration to the NGINX location. For example: nginx.ingress.kubernetes.io/configuration-snippet : | more_set_headers \"Request-Id: $req_id\"; Custom HTTP Errors \u00b6 Like the custom-http-errors value in the ConfigMap, this annotation will set NGINX proxy-intercept-errors , but only for the NGINX location associated with this ingress. If a default backend annotation is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend). Different ingresses can specify different sets of error codes. Even if multiple ingress objects share the same hostname, this annotation can be used to intercept different error codes for each ingress (for example, different error codes to be intercepted for different paths on the same hostname, if each path is on a different ingress). If custom-http-errors is also specified globally, the error values specified in this annotation will override the global value for the given ingress' hostname and path. Example usage: nginx.ingress.kubernetes.io/custom-http-errors: \"404,415\" Default Backend \u00b6 This annotation is of the form nginx.ingress.kubernetes.io/default-backend: to specify a custom default backend. This is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. This service will handle the response when the service in the Ingress rule does not have active endpoints. It will also handle the error responses if both this annotation and the custom-http-errors annotation are set. Enable CORS \u00b6 To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation nginx.ingress.kubernetes.io/enable-cors: \"true\" . This will add a section in the server location enabling this functionality. CORS can be controlled with the following annotations: nginx.ingress.kubernetes.io/cors-allow-methods controls which methods are accepted. This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case). Default: GET, PUT, POST, DELETE, PATCH, OPTIONS Example: nginx.ingress.kubernetes.io/cors-allow-methods: \"PUT, GET, POST, OPTIONS\" nginx.ingress.kubernetes.io/cors-allow-headers controls which headers are accepted. This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -. Default: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization Example: nginx.ingress.kubernetes.io/cors-allow-headers: \"X-Forwarded-For, X-app123-XPTO\" nginx.ingress.kubernetes.io/cors-allow-origin controls the accepted Origin for CORS. This is a single field value, with the following format: http(s)://origin-site.com or http(s)://origin-site.com:port Default: * Example: nginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com:4443\" nginx.ingress.kubernetes.io/cors-allow-credentials controls if credentials can be passed during CORS operations. Default: true Example: nginx.ingress.kubernetes.io/cors-allow-credentials: \"false\" nginx.ingress.kubernetes.io/cors-max-age controls how long preflight requests can be cached. Default: 1728000 Example: nginx.ingress.kubernetes.io/cors-max-age: 600 Note For more information please see https://enable-cors.org
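For instance, a sketch combining several of these annotations (the origin and names are placeholders):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-cors   # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: \"true\"
    nginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com\"   # placeholder origin
    nginx.ingress.kubernetes.io/cors-allow-methods: \"GET, PUT, POST, OPTIONS\"
    nginx.ingress.kubernetes.io/cors-allow-credentials: \"false\"
spec:
  rules:
  - host: api.example.com   # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service   # placeholder Service
          servicePort: 80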
HTTP2 Push Preload. \u00b6 Enables automatic conversion of preload links specified in the \u201cLink\u201d response header fields into push requests. Example nginx.ingress.kubernetes.io/http2-push-preload: \"true\" Server Alias \u00b6 To add Server Aliases to an Ingress rule add the annotation nginx.ingress.kubernetes.io/server-alias: \"\" . This will create a server with the same configuration, but a different server_name as the provided host. Note A server-alias name cannot conflict with the hostname of an existing server. If it does the server-alias annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take precedence over the alias configuration. For more information please see the server_name documentation . Server snippet \u00b6 Using the annotation nginx.ingress.kubernetes.io/server-snippet it is possible to add custom configuration in the server configuration block. apiVersion : extensions/v1beta1 kind : Ingress metadata : annotations : nginx.ingress.kubernetes.io/server-snippet : | set $agentflag 0; if ($http_user_agent ~* \"(Mobile)\" ){ set $agentflag 1; } if ( $agentflag = 1 ) { return 301 https://m.example.com; } Attention This annotation can be used only once per host. Client Body Buffer Size \u00b6 Sets buffer size for reading client request body per location. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is applied to each location provided in the ingress rule. Note The annotation value must be given in a format understood by Nginx. Example nginx.ingress.kubernetes.io/client-body-buffer-size: \"1000\" # 1000 bytes nginx.ingress.kubernetes.io/client-body-buffer-size: 1k # 1 kilobyte nginx.ingress.kubernetes.io/client-body-buffer-size: 1K # 1 kilobyte nginx.ingress.kubernetes.io/client-body-buffer-size: 1m # 1 megabyte nginx.ingress.kubernetes.io/client-body-buffer-size: 1M # 1 megabyte For more information please see http://nginx.org External Authentication \u00b6 To use an existing service that provides authentication, the Ingress rule can be annotated with nginx.ingress.kubernetes.io/auth-url to indicate the URL where the HTTP request should be sent. nginx.ingress.kubernetes.io/auth-url : \"URL to the authentication service\"
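A minimal sketch using a hypothetical authentication service URL:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-external-auth   # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/auth-url: \"https://auth.example.com/validate\"   # hypothetical auth service
spec:
  rules:
  - host: app.example.com   # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service   # placeholder Service
          servicePort: 80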
Additionally it is possible to set: nginx.ingress.kubernetes.io/auth-method : to specify the HTTP method to use. nginx.ingress.kubernetes.io/auth-signin : to specify the location of the error page. nginx.ingress.kubernetes.io/auth-response-headers : to specify headers to pass to backend once authentication request completes. nginx.ingress.kubernetes.io/auth-request-redirect : to specify the X-Auth-Request-Redirect header value. nginx.ingress.kubernetes.io/auth-snippet : to specify a custom snippet to use with external authentication, e.g. nginx.ingress.kubernetes.io/auth-url : http://foo.com/external-auth nginx.ingress.kubernetes.io/auth-snippet : | proxy_set_header Foo-Header 42; Note: nginx.ingress.kubernetes.io/auth-snippet is an optional annotation. However, it may only be used in conjunction with nginx.ingress.kubernetes.io/auth-url and will be ignored if nginx.ingress.kubernetes.io/auth-url is not set Example Please check the external-auth example. Rate limiting \u00b6 These annotations define a limit on the connections that can be opened by a single client IP address. This can be used to mitigate DDoS Attacks . nginx.ingress.kubernetes.io/limit-connections : number of concurrent connections allowed from a single IP address. nginx.ingress.kubernetes.io/limit-rps : number of connections that may be accepted from a given IP each second. nginx.ingress.kubernetes.io/limit-rpm : number of connections that may be accepted from a given IP each minute. nginx.ingress.kubernetes.io/limit-rate-after : sets the initial amount after which the further transmission of a response to a client will be rate limited. nginx.ingress.kubernetes.io/limit-rate : limits the rate of response transmission to a client, in bytes per second (see below). You can specify the client IP source ranges to be excluded from rate-limiting through the nginx.ingress.kubernetes.io/limit-whitelist annotation. The value is a comma separated list of CIDRs. If you specify multiple annotations in a single Ingress rule, limit-rpm takes precedence, and then limit-rps . The annotations nginx.ingress.kubernetes.io/limit-rate and nginx.ingress.kubernetes.io/limit-rate-after define a limit on the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit. To configure this setting globally for all Ingress rules, the limit-rate-after and limit-rate value may be set in the NGINX ConfigMap . If you set the value in an Ingress annotation, it will override the global setting. Permanent Redirect \u00b6 This annotation allows you to return a permanent redirect instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com would redirect everything to Google. Permanent Redirect Code \u00b6 This annotation allows you to modify the status code used for permanent redirects. For example nginx.ingress.kubernetes.io/permanent-redirect-code: '308' would return your permanent-redirect with a 308.
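For instance, a sketch combining both annotations (names are placeholders):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-redirect   # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/permanent-redirect: \"https://www.google.com\"
    nginx.ingress.kubernetes.io/permanent-redirect-code: \"308\"
spec:
  rules:
  - host: old.example.com   # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service   # placeholder Service (never reached; requests are redirected)
          servicePort: 80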
Temporal Redirect \u00b6 This annotation allows you to return a temporal redirect (Return Code 302) instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/temporal-redirect: https://www.google.com would redirect everything to Google with a Return Code of 302 (Moved Temporarily) SSL Passthrough \u00b6 The annotation nginx.ingress.kubernetes.io/ssl-passthrough instructs the controller to send TLS connections directly to the backend instead of letting NGINX decrypt the communication. See also TLS/HTTPS in the User guide. Note SSL Passthrough is disabled by default and requires starting the controller with the --enable-ssl-passthrough flag. Attention Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on the layer 7 (HTTP), using SSL Passthrough invalidates all the other annotations set on an Ingress object. Service Upstream \u00b6 By default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration. The nginx.ingress.kubernetes.io/service-upstream annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port. This can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. See issue #257 . Known Issues \u00b6 If the service-upstream annotation is specified the following things should be taken into consideration: Sticky Sessions will not work as only round-robin load balancing is supported. The proxy_next_upstream directive will not have any effect meaning on error the request will not be dispatched to another upstream. Server-side HTTPS enforcement through redirect \u00b6 By default the controller redirects (308) to HTTPS if TLS is enabled for that ingress. If you want to disable this behavior globally, you can use ssl-redirect: \"false\" in the NGINX ConfigMap . To configure this feature for specific ingress resources, you can use the nginx.ingress.kubernetes.io/ssl-redirect: \"false\" annotation in the particular resource. When using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: \"true\" annotation in the particular resource. Redirect from/to www \u00b6 In some scenarios it is required to redirect from www.domain.com to domain.com or vice versa. To enable this feature use the annotation nginx.ingress.kubernetes.io/from-to-www-redirect: \"true\" Attention If at some point a new Ingress is created with a host equal to one of the options (like domain.com ) the annotation will be omitted. Attention For HTTPS to HTTPS redirects it is mandatory that the SSL certificate defined in the Secret, located in the TLS section of the Ingress, contains both FQDNs in the common name of the certificate. Whitelist source range \u00b6 You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma separated list of CIDRs , e.g. 10.0.0.0/24,172.10.0.1 . To configure this setting globally for all Ingress rules, the whitelist-source-range value may be set in the NGINX ConfigMap . Note Adding an annotation to an Ingress rule overrides any global restriction. Custom timeouts \u00b6 Using the configuration ConfigMap it is possible to set the default global timeout for connections to the upstream servers. In some scenarios it is required to have different values. To allow this we provide annotations that allow this customization: nginx.ingress.kubernetes.io/proxy-connect-timeout nginx.ingress.kubernetes.io/proxy-send-timeout nginx.ingress.kubernetes.io/proxy-read-timeout nginx.ingress.kubernetes.io/proxy-next-upstream nginx.ingress.kubernetes.io/proxy-next-upstream-tries nginx.ingress.kubernetes.io/proxy-request-buffering
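For example, a sketch setting per-Ingress timeouts (the values are in seconds and purely illustrative):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-timeouts   # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: \"10\"
    nginx.ingress.kubernetes.io/proxy-send-timeout: \"120\"
    nginx.ingress.kubernetes.io/proxy-read-timeout: \"120\"
spec:
  rules:
  - host: slow.example.com   # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service   # placeholder Service
          servicePort: 80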
Proxy redirect \u00b6 With the annotations nginx.ingress.kubernetes.io/proxy-redirect-from and nginx.ingress.kubernetes.io/proxy-redirect-to it is possible to set the text that should be changed in the Location and Refresh header fields of a proxied server response. Setting \"off\" or \"default\" in the annotation nginx.ingress.kubernetes.io/proxy-redirect-from disables nginx.ingress.kubernetes.io/proxy-redirect-to , otherwise, both annotations must be used in unison. Note that each annotation must be a string without spaces. By default the value of each annotation is \"off\". Custom max body size \u00b6 For NGINX, a 413 error will be returned to the client when the size in a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter client_max_body_size . To configure this setting globally for all Ingress rules, the proxy-body-size value may be set in the NGINX ConfigMap . To use custom values in an Ingress rule define this annotation: nginx.ingress.kubernetes.io/proxy-body-size : 8m Proxy cookie domain \u00b6 Sets a text that should be changed in the domain attribute of the \"Set-Cookie\" header fields of a proxied server response. To configure this setting globally for all Ingress rules, the proxy-cookie-domain value may be set in the NGINX ConfigMap . Proxy cookie path \u00b6 Sets a text that should be changed in the path attribute of the \"Set-Cookie\" header fields of a proxied server response. To configure this setting globally for all Ingress rules, the proxy-cookie-path value may be set in the NGINX ConfigMap . Proxy buffering \u00b6 Enable or disable proxy buffering proxy_buffering . By default proxy buffering is disabled in the NGINX config. To configure this setting globally for all Ingress rules, the proxy-buffering value may be set in the NGINX ConfigMap . To use custom values in an Ingress rule define this annotation: nginx.ingress.kubernetes.io/proxy-buffering : \"on\" Proxy buffers Number \u00b6 Sets the number of the buffers in proxy_buffers used for reading the first part of the response received from the proxied server. By default proxy buffers number is set to 4. To configure this setting globally, set proxy-buffers-number in NGINX ConfigMap . To use custom values in an Ingress rule, define this annotation: nginx.ingress.kubernetes.io/proxy-buffers-number : \"4\" Proxy buffer size \u00b6 Sets the size of the buffer proxy_buffer_size used for reading the first part of the response received from the proxied server. By default proxy buffer size is set to \"4k\". To configure this setting globally, set proxy-buffer-size in NGINX ConfigMap . To use custom values in an Ingress rule, define this annotation: nginx.ingress.kubernetes.io/proxy-buffer-size : \"8k\" SSL ciphers \u00b6 Specifies the enabled ciphers . Using this annotation will set the ssl_ciphers directive at the server level. This configuration is active for all the paths in the host. nginx.ingress.kubernetes.io/ssl-ciphers : \"ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP\" Connection proxy header \u00b6 Using this annotation will override the default connection header set by NGINX.
To use custom values in an Ingress rule, define the annotation: nginx.ingress.kubernetes.io/connection-proxy-header : \"keep-alive\" Enable Access Log \u00b6 Access logs are enabled by default, but in some scenarios access logs might be required to be disabled for a given ingress. To do this, use the annotation: nginx.ingress.kubernetes.io/enable-access-log : \"false\" Enable Rewrite Log \u00b6 Rewrite logs are not enabled by default. In some scenarios it could be required to enable NGINX rewrite logs. Note that rewrite logs are sent to the error_log file at the notice level. To enable this feature use the annotation: nginx.ingress.kubernetes.io/enable-rewrite-log : \"true\" X-Forwarded-Prefix Header \u00b6 To add the non-standard X-Forwarded-Prefix header to the upstream request with a string value, the following annotation can be used: nginx.ingress.kubernetes.io/x-forwarded-prefix : \"/path\" Lua Resty WAF \u00b6 Using lua-resty-waf-* annotations we can enable and control the lua-resty-waf Web Application Firewall per location. The following configuration will enable the WAF for the paths defined in the corresponding ingress: nginx.ingress.kubernetes.io/lua-resty-waf : \"active\" In order to run it in debugging mode you can set nginx.ingress.kubernetes.io/lua-resty-waf-debug to \"true\" in addition to the above configuration. The other possible values for nginx.ingress.kubernetes.io/lua-resty-waf are inactive and simulate . In inactive mode WAF won't do anything, whereas in simulate mode it will log a warning message if there's a matching WAF rule for a given request. This is useful to debug a rule and eliminate possible false positives before fully deploying it. lua-resty-waf comes with a predefined set of rules https://github.com/p0pr0ck5/lua-resty-waf/tree/84b4f40362500dd0cb98b9e71b5875cb1a40f1ad/rules that covers ModSecurity CRS. You can use nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets to ignore a subset of those rulesets. For example: nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets : \"41000_sqli, 42000_xss\" will ignore the two mentioned rulesets. It is also possible to configure custom WAF rules per ingress using the nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules annotation.
For example, the following snippet will configure a WAF rule to deny requests whose query string contains the word foo : nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules : '[=[ { \"access\": [ { \"actions\": { \"disrupt\" : \"DENY\" }, \"id\": 10001, \"msg\": \"my custom rule\", \"operator\": \"STR_CONTAINS\", \"pattern\": \"foo\", \"vars\": [ { \"parse\": [ \"values\", 1 ], \"type\": \"REQUEST_ARGS\" } ] } ], \"body_filter\": [], \"header_filter\":[] } ]=]' The default allowed content types are \"text/html\", \"text/json\", \"application/json\" . You can enable the following annotation to allow all content types: nginx.ingress.kubernetes.io/lua-resty-waf-allow-unknown-content-types : \"true\" The default score of lua-resty-waf is 5, which is usually triggered by hitting 2 default rules. You can modify the score threshold with the following annotation: nginx.ingress.kubernetes.io/lua-resty-waf-score-threshold : \"10\" When HTTPS is enabled on the endpoint, lua-resty-waf will return a 500 error when processing \"multipart\" contents (see the reference for this issue ). By default this option is \"true\" . You may enable the following annotation as a workaround: nginx.ingress.kubernetes.io/lua-resty-waf-process-multipart-body : \"false\" For details on how to write WAF rules, please refer to https://github.com/p0pr0ck5/lua-resty-waf . ModSecurity \u00b6 ModSecurity is an OpenSource Web Application firewall. It can be enabled for a particular set of ingress locations. The ModSecurity module must first be enabled by enabling ModSecurity in the ConfigMap . Note this will enable ModSecurity for all paths, and each path must be disabled manually. It can be enabled using the following annotation: nginx.ingress.kubernetes.io/enable-modsecurity : \"true\" ModSecurity will run in \"Detection-Only\" mode using the recommended configuration . You can enable the OWASP Core Rule Set by setting the following annotation: nginx.ingress.kubernetes.io/enable-owasp-core-rules : \"true\" You can pass transactionIDs from nginx by setting up the following: nginx.ingress.kubernetes.io/modsecurity-transaction-id : \"$request_id\" You can also add your own set of modsecurity rules via a snippet: nginx.ingress.kubernetes.io/modsecurity-snippet : | SecRuleEngine On SecDebugLog /tmp/modsec_debug.log Note: If you use both enable-owasp-core-rules and modsecurity-snippet annotations together, only the modsecurity-snippet will take effect. If you wish to include the OWASP Core Rule Set or recommended configuration simply use the include statement: nginx.ingress.kubernetes.io/modsecurity-snippet : | Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf Include /etc/nginx/modsecurity/modsecurity.conf InfluxDB \u00b6 Using influxdb-* annotations we can monitor requests passing through a Location by sending them to an InfluxDB backend exposing the UDP socket using the nginx-influxdb-module . nginx.ingress.kubernetes.io/enable-influxdb : \"true\" nginx.ingress.kubernetes.io/influxdb-measurement : \"nginx-reqs\" nginx.ingress.kubernetes.io/influxdb-port : \"8089\" nginx.ingress.kubernetes.io/influxdb-host : \"127.0.0.1\" nginx.ingress.kubernetes.io/influxdb-server-name : \"nginx-ingress\" For the influxdb-host parameter you have two options: Use an InfluxDB server configured with the UDP protocol enabled. Deploy Telegraf as a sidecar proxy to the Ingress controller configured to listen UDP with the socket listener input and to write using any one of the output plugins like InfluxDB, Apache Kafka, Prometheus, etc.
(recommended) It's important to remember that there's no DNS resolver at this stage so you will have to configure an IP address for nginx.ingress.kubernetes.io/influxdb-host . If you deploy Influx or Telegraf as a sidecar (another container in the same pod) this becomes straightforward since you can directly use 127.0.0.1 . Backend Protocol \u00b6 Using backend-protocol annotations it is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS and AJP By default NGINX uses HTTP . Example: nginx.ingress.kubernetes.io/backend-protocol : \"HTTPS\" Use Regex \u00b6 Attention When using this annotation with the NGINX annotation nginx.ingress.kubernetes.io/affinity of type cookie , nginx.ingress.kubernetes.io/session-cookie-path must also be set; Session cookie paths do not support regex. Using the nginx.ingress.kubernetes.io/use-regex annotation will indicate whether or not the paths defined on an Ingress use regular expressions. The default value is false . The following will indicate that regular expression paths are being used: nginx.ingress.kubernetes.io/use-regex : \"true\" The following will indicate that regular expression paths are not being used: nginx.ingress.kubernetes.io/use-regex : \"false\" When this annotation is set to true , the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on. Additionally, if the rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on. Please read about ingress path matching before using this modifier. Satisfy \u00b6 By default, a request would need to satisfy all authentication requirements in order to be allowed. By using this annotation, requests that satisfy either any or all authentication requirements are allowed, based on the configuration value. nginx.ingress.kubernetes.io/satisfy : \"any\"","title":"Annotations"},{"location":"user-guide/nginx-configuration/annotations/#annotations","text":"You can add these Kubernetes annotations to specific Ingress objects to customize their behavior. Tip Annotation keys and values can only be strings. Other types, such as boolean or numeric values, must be quoted, i.e. \"true\" , \"false\" , \"100\" . Note The annotation prefix can be changed using the --annotations-prefix command line argument , but the default is nginx.ingress.kubernetes.io , as described in the table below.
Name type nginx.ingress.kubernetes.io/app-root string nginx.ingress.kubernetes.io/affinity cookie nginx.ingress.kubernetes.io/auth-realm string nginx.ingress.kubernetes.io/auth-secret string nginx.ingress.kubernetes.io/auth-type basic or digest nginx.ingress.kubernetes.io/auth-tls-secret string nginx.ingress.kubernetes.io/auth-tls-verify-depth number nginx.ingress.kubernetes.io/auth-tls-verify-client string nginx.ingress.kubernetes.io/auth-tls-error-page string nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream \"true\" or \"false\" nginx.ingress.kubernetes.io/auth-url string nginx.ingress.kubernetes.io/auth-snippet string nginx.ingress.kubernetes.io/backend-protocol string nginx.ingress.kubernetes.io/canary \"true\" or \"false\" nginx.ingress.kubernetes.io/canary-by-header string nginx.ingress.kubernetes.io/canary-by-header-value string nginx.ingress.kubernetes.io/canary-by-cookie string nginx.ingress.kubernetes.io/canary-weight number nginx.ingress.kubernetes.io/client-body-buffer-size string nginx.ingress.kubernetes.io/configuration-snippet string nginx.ingress.kubernetes.io/custom-http-errors []int nginx.ingress.kubernetes.io/default-backend string nginx.ingress.kubernetes.io/enable-cors \"true\" or \"false\" nginx.ingress.kubernetes.io/cors-allow-origin string nginx.ingress.kubernetes.io/cors-allow-methods string nginx.ingress.kubernetes.io/cors-allow-headers string nginx.ingress.kubernetes.io/cors-allow-credentials \"true\" or \"false\" nginx.ingress.kubernetes.io/cors-max-age number nginx.ingress.kubernetes.io/force-ssl-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/from-to-www-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/http2-push-preload \"true\" or \"false\" nginx.ingress.kubernetes.io/limit-connections number nginx.ingress.kubernetes.io/limit-rps number nginx.ingress.kubernetes.io/permanent-redirect string nginx.ingress.kubernetes.io/permanent-redirect-code number nginx.ingress.kubernetes.io/temporal-redirect string nginx.ingress.kubernetes.io/proxy-body-size string nginx.ingress.kubernetes.io/proxy-cookie-domain string nginx.ingress.kubernetes.io/proxy-cookie-path string nginx.ingress.kubernetes.io/proxy-connect-timeout number nginx.ingress.kubernetes.io/proxy-send-timeout number nginx.ingress.kubernetes.io/proxy-read-timeout number nginx.ingress.kubernetes.io/proxy-next-upstream string nginx.ingress.kubernetes.io/proxy-next-upstream-tries number nginx.ingress.kubernetes.io/proxy-request-buffering string nginx.ingress.kubernetes.io/proxy-redirect-from string nginx.ingress.kubernetes.io/proxy-redirect-to string nginx.ingress.kubernetes.io/enable-rewrite-log \"true\" or \"false\" nginx.ingress.kubernetes.io/rewrite-target URI nginx.ingress.kubernetes.io/satisfy string nginx.ingress.kubernetes.io/secure-verify-ca-secret string nginx.ingress.kubernetes.io/server-alias string nginx.ingress.kubernetes.io/server-snippet string nginx.ingress.kubernetes.io/service-upstream \"true\" or \"false\" nginx.ingress.kubernetes.io/session-cookie-name string nginx.ingress.kubernetes.io/session-cookie-path string nginx.ingress.kubernetes.io/ssl-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/ssl-passthrough \"true\" or \"false\" nginx.ingress.kubernetes.io/upstream-hash-by string nginx.ingress.kubernetes.io/x-forwarded-prefix string nginx.ingress.kubernetes.io/load-balance string nginx.ingress.kubernetes.io/upstream-vhost string nginx.ingress.kubernetes.io/whitelist-source-range CIDR nginx.ingress.kubernetes.io/proxy-buffering string 
nginx.ingress.kubernetes.io/proxy-buffers-number number nginx.ingress.kubernetes.io/proxy-buffer-size string nginx.ingress.kubernetes.io/ssl-ciphers string nginx.ingress.kubernetes.io/connection-proxy-header string nginx.ingress.kubernetes.io/enable-access-log \"true\" or \"false\" nginx.ingress.kubernetes.io/lua-resty-waf string nginx.ingress.kubernetes.io/lua-resty-waf-debug \"true\" or \"false\" nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets string nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules string nginx.ingress.kubernetes.io/lua-resty-waf-allow-unknown-content-types \"true\" or \"false\" nginx.ingress.kubernetes.io/lua-resty-waf-score-threshold number nginx.ingress.kubernetes.io/lua-resty-waf-process-multipart-body \"true\" or \"false\" nginx.ingress.kubernetes.io/enable-influxdb \"true\" or \"false\" nginx.ingress.kubernetes.io/influxdb-measurement string nginx.ingress.kubernetes.io/influxdb-port string nginx.ingress.kubernetes.io/influxdb-host string nginx.ingress.kubernetes.io/influxdb-server-name string nginx.ingress.kubernetes.io/use-regex bool nginx.ingress.kubernetes.io/enable-modsecurity bool nginx.ingress.kubernetes.io/enable-owasp-core-rules bool nginx.ingress.kubernetes.io/modsecurity-transaction-id string nginx.ingress.kubernetes.io/modsecurity-snippet string","title":"Annotations"},{"location":"user-guide/nginx-configuration/annotations/#canary","text":"In some cases, you may want to \"canary\" a new set of changes by sending a small number of requests to a different service than the production service. The canary annotation enables the Ingress spec to act as an alternative service for requests to route to depending on the rules applied. The following annotations to configure canary can be enabled after nginx.ingress.kubernetes.io/canary: \"true\" is set: nginx.ingress.kubernetes.io/canary-by-header : The header to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to always , it will be routed to the canary. When the header is set to never , it will never be routed to the canary. For any other value, the header will be ignored and the request compared against the other canary rules by precedence. nginx.ingress.kubernetes.io/canary-by-header-value : The header value to match for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to this value, it will be routed to the canary. For any other header value, the header will be ignored and the request compared against the other canary rules by precedence. This annotation has to be used together with . The annotation is an extension of the nginx.ingress.kubernetes.io/canary-by-header to allow customizing the header value instead of using hardcoded values. It doesn't have any effect if the nginx.ingress.kubernetes.io/canary-by-header annotation is not defined. nginx.ingress.kubernetes.io/canary-by-cookie : The cookie to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the cookie value is set to always , it will be routed to the canary. When the cookie is set to never , it will never be routed to the canary. For any other value, the cookie will be ingored and the request compared against the other canary rules by precedence. nginx.ingress.kubernetes.io/canary-weight : The integer based (0 - 100) percent of random requests that should be routed to the service specified in the canary Ingress. 
A weight of 0 implies that no requests will be sent to the service in the Canary ingress by this canary rule. A weight of 100 means implies all requests will be sent to the alternative service specified in the Ingress. Canary rules are evaluated in order of precedence. Precedence is as follows: canary-by-header -> canary-by-cookie -> canary-weight Note that when you mark an ingress as canary, then all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except nginx.ingress.kubernetes.io/load-balance and nginx.ingress.kubernetes.io/upstream-hash-by . Known Limitations Currently a maximum of one canary ingress can be applied per Ingress rule.","title":"Canary"},{"location":"user-guide/nginx-configuration/annotations/#rewrite","text":"In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. Set the annotation nginx.ingress.kubernetes.io/rewrite-target to the path expected by the service. If the Application Root is exposed in a different path and needs to be redirected, set the annotation nginx.ingress.kubernetes.io/app-root to redirect requests for / . Example Please check the rewrite example.","title":"Rewrite"},{"location":"user-guide/nginx-configuration/annotations/#session-affinity","text":"The annotation nginx.ingress.kubernetes.io/affinity enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server. The only affinity type available for NGINX is cookie . Attention If more than one Ingress is defined for a host and at least one Ingress uses nginx.ingress.kubernetes.io/affinity: cookie , then only paths on the Ingress using nginx.ingress.kubernetes.io/affinity will use session cookie affinity. All paths defined on other Ingresses for the host will be load balanced through the random selection of a backend server. Example Please check the affinity example.","title":"Session Affinity"},{"location":"user-guide/nginx-configuration/annotations/#cookie-affinity","text":"If you use the cookie affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation nginx.ingress.kubernetes.io/session-cookie-name . The default is to create a cookie named 'INGRESSCOOKIE'. The NGINX annotation nginx.ingress.kubernetes.io/session-cookie-path defines the path that will be set on the cookie. This is optional unless the annotation nginx.ingress.kubernetes.io/use-regex is set to true; Session cookie paths do not support regex.","title":"Cookie affinity"},{"location":"user-guide/nginx-configuration/annotations/#authentication","text":"Is possible to add authentication adding additional annotations in the Ingress rule. The source of the authentication is a secret that contains usernames and passwords inside the key auth . The annotations are: nginx.ingress.kubernetes.io/auth-type: [basic|digest] Indicates the HTTP Authentication Type: Basic or Digest Access Authentication . nginx.ingress.kubernetes.io/auth-secret: secretName The name of the Secret that contains the usernames and passwords which are granted access to the path s defined in the Ingress rules. This annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace. 
nginx.ingress.kubernetes.io/auth-realm: \"realm string\" Example Please check the auth example.","title":"Authentication"},{"location":"user-guide/nginx-configuration/annotations/#custom-nginx-upstream-hashing","text":"NGINX supports load balancing by client-server mapping based on consistent hashing for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The ketama consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes. There is a special mode of upstream hashing called subset. In this mode, upstream servers are grouped into subsets, and stickiness works by mapping keys to a subset instead of individual upstream servers. Specific server is chosen uniformly at random from the selected sticky subset. It provides a balance between stickiness and load distribution. To enable consistent hashing for a backend: nginx.ingress.kubernetes.io/upstream-hash-by : the nginx variable, text value or any combination thereof to use for consistent hashing. For example nginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri\" to consistently hash upstream requests by the current request URI. \"subset\" hashing can be enabled setting nginx.ingress.kubernetes.io/upstream-hash-by-subset : \"true\". This maps requests to subset of nodes instead of a single one. upstream-hash-by-subset-size determines the size of each subset (default 3). Please check the chashsubset example.","title":"Custom NGINX upstream hashing"},{"location":"user-guide/nginx-configuration/annotations/#custom-nginx-load-balancing","text":"This is similar to load-balance in ConfigMap , but configures load balancing algorithm per ingress. Note that nginx.ingress.kubernetes.io/upstream-hash-by takes preference over this. If this and nginx.ingress.kubernetes.io/upstream-hash-by are not set then we fallback to using globally configured load balancing algorithm.","title":"Custom NGINX load balancing"},{"location":"user-guide/nginx-configuration/annotations/#custom-nginx-upstream-vhost","text":"This configuration setting allows you to control the value for host in the following statement: proxy_set_header Host $host , which forms part of the location block. This is useful if you need to call the upstream server by something other than $host .","title":"Custom NGINX upstream vhost"},{"location":"user-guide/nginx-configuration/annotations/#client-certificate-authentication","text":"It is possible to enable Client Certificate Authentication using additional annotations in Ingress Rule. The annotations are: nginx.ingress.kubernetes.io/auth-tls-secret: secretName : The name of the Secret that contains the full Certificate Authority chain ca.crt that is enabled to authenticate against this Ingress. This annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace. nginx.ingress.kubernetes.io/auth-tls-verify-depth : The validation depth between the provided client certificate and the Certification Authority chain. nginx.ingress.kubernetes.io/auth-tls-verify-client : Enables verification of client certificates. 
nginx.ingress.kubernetes.io/auth-tls-error-page : The URL/Page that user should be redirected in case of a Certificate Authentication Error nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream : Indicates if the received certificates should be passed or not to the upstream server. By default this is disabled. Example Please check the client-certs example. Attention TLS with Client Authentication is not possible in Cloudflare and might result in unexpected behavior. Cloudflare only allows Authenticated Origin Pulls and is required to use their own certificate: https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/ Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial: https://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls","title":"Client Certificate Authentication"},{"location":"user-guide/nginx-configuration/annotations/#configuration-snippet","text":"Using this annotation you can add additional configuration to the NGINX location. For example: nginx.ingress.kubernetes.io/configuration-snippet : | more_set_headers \"Request-Id: $req_id\";","title":"Configuration snippet"},{"location":"user-guide/nginx-configuration/annotations/#custom-http-errors","text":"Like the custom-http-errors value in the ConfigMap, this annotation will set NGINX proxy-intercept-errors , but only for the NGINX location associated with this ingress. If a default backend annotation is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend). Different ingresses can specify different sets of error codes. Even if multiple ingress objects share the same hostname, this annotation can be used to intercept different error codes for each ingress (for example, different error codes to be intercepted for different paths on the same hostname, if each path is on a different ingress). If custom-http-errors is also specified globally, the error values specified in this annotation will override the global value for the given ingress' hostname and path. Example usage: nginx.ingress.kubernetes.io/custom-http-errors: \"404,415\"","title":"Custom HTTP Errors"},{"location":"user-guide/nginx-configuration/annotations/#default-backend","text":"This annotation is of the form nginx.ingress.kubernetes.io/default-backend: to specify a custom default backend. This is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. This service will be handle the response when the service in the Ingress rule does not have active endpoints. It will also handle the error responses if both this annotation and the custom-http-errors annotation is set.","title":"Default Backend"},{"location":"user-guide/nginx-configuration/annotations/#enable-cors","text":"To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation nginx.ingress.kubernetes.io/enable-cors: \"true\" . This will add a section in the server location enabling this functionality. CORS can be controlled with the following annotations: nginx.ingress.kubernetes.io/cors-allow-methods controls which methods are accepted. This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case). 
Default: GET, PUT, POST, DELETE, PATCH, OPTIONS Example: nginx.ingress.kubernetes.io/cors-allow-methods: \"PUT, GET, POST, OPTIONS\" nginx.ingress.kubernetes.io/cors-allow-headers controls which headers are accepted. This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -. Default: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization Example: nginx.ingress.kubernetes.io/cors-allow-headers: \"X-Forwarded-For, X-app123-XPTO\" nginx.ingress.kubernetes.io/cors-allow-origin controls what's the accepted Origin for CORS. This is a single field value, with the following format: http(s)://origin-site.com or http(s)://origin-site.com:port Default: * Example: nginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com:4443\" nginx.ingress.kubernetes.io/cors-allow-credentials controls if credentials can be passed during CORS operations. Default: true Example: nginx.ingress.kubernetes.io/cors-allow-credentials: \"false\" nginx.ingress.kubernetes.io/cors-max-age controls how long preflight requests can be cached. Default: 1728000 Example: nginx.ingress.kubernetes.io/cors-max-age: 600 Note For more information please see https://enable-cors.org","title":"Enable CORS"},{"location":"user-guide/nginx-configuration/annotations/#http2-push-preload","text":"Enables automatic conversion of preload links specified in the \u201cLink\u201d response header fields into push requests. Example nginx.ingress.kubernetes.io/http2-push-preload: \"true\"","title":"HTTP2 Push Preload."},{"location":"user-guide/nginx-configuration/annotations/#server-alias","text":"To add Server Aliases to an Ingress rule add the annotation nginx.ingress.kubernetes.io/server-alias: \"\" . This will create a server with the same configuration, but a different server_name as the provided host. Note A server-alias name cannot conflict with the hostname of an existing server. If it does the server-alias annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take place over the alias configuration. For more information please see the server_name documentation .","title":"Server Alias"},{"location":"user-guide/nginx-configuration/annotations/#server-snippet","text":"Using the annotation nginx.ingress.kubernetes.io/server-snippet it is possible to add custom configuration in the server configuration block. apiVersion : extensions/v1beta1 kind : Ingress metadata : annotations : nginx.ingress.kubernetes.io/server-snippet : | set $agentflag 0; if ($http_user_agent ~* \"(Mobile)\" ){ set $agentflag 1; } if ( $agentflag = 1 ) { return 301 https://m.example.com; } Attention This annotation can be used only once per host.","title":"Server snippet"},{"location":"user-guide/nginx-configuration/annotations/#client-body-buffer-size","text":"Sets buffer size for reading client request body per location. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is applied to each location provided in the ingress rule. Note The annotation value must be given in a format understood by Nginx. 
Example nginx.ingress.kubernetes.io/client-body-buffer-size: \"1000\" # 1000 bytes nginx.ingress.kubernetes.io/client-body-buffer-size: 1k # 1 kilobyte nginx.ingress.kubernetes.io/client-body-buffer-size: 1K # 1 kilobyte nginx.ingress.kubernetes.io/client-body-buffer-size: 1m # 1 megabyte nginx.ingress.kubernetes.io/client-body-buffer-size: 1M # 1 megabyte For more information please see http://nginx.org","title":"Client Body Buffer Size"},{"location":"user-guide/nginx-configuration/annotations/#external-authentication","text":"To use an existing service that provides authentication, the Ingress rule can be annotated with nginx.ingress.kubernetes.io/auth-url to indicate the URL where the HTTP request should be sent. nginx.ingress.kubernetes.io/auth-url : \"URL to the authentication service\" Additionally it is possible to set: nginx.ingress.kubernetes.io/auth-method : to specify the HTTP method to use. nginx.ingress.kubernetes.io/auth-signin : to specify the location of the error page. nginx.ingress.kubernetes.io/auth-response-headers : to specify headers to pass to the backend once the authentication request completes. nginx.ingress.kubernetes.io/auth-request-redirect : to specify the X-Auth-Request-Redirect header value. nginx.ingress.kubernetes.io/auth-snippet : to specify a custom snippet to use with external authentication, e.g. nginx.ingress.kubernetes.io/auth-url : http://foo.com/external-auth nginx.ingress.kubernetes.io/auth-snippet : | proxy_set_header Foo-Header 42; Note: nginx.ingress.kubernetes.io/auth-snippet is an optional annotation. However, it may only be used in conjunction with nginx.ingress.kubernetes.io/auth-url and will be ignored if nginx.ingress.kubernetes.io/auth-url is not set. Example Please check the external-auth example.","title":"External Authentication"},{"location":"user-guide/nginx-configuration/annotations/#rate-limiting","text":"These annotations define a limit on the connections that can be opened by a single client IP address. This can be used to mitigate DDoS Attacks . nginx.ingress.kubernetes.io/limit-connections : number of concurrent connections allowed from a single IP address. nginx.ingress.kubernetes.io/limit-rps : number of connections that may be accepted from a given IP each second. nginx.ingress.kubernetes.io/limit-rpm : number of connections that may be accepted from a given IP each minute. nginx.ingress.kubernetes.io/limit-rate-after : sets the initial amount after which the further transmission of a response to a client will be rate limited. nginx.ingress.kubernetes.io/limit-rate : the rate of requests accepted from a client each second. You can specify the client IP source ranges to be excluded from rate-limiting through the nginx.ingress.kubernetes.io/limit-whitelist annotation. The value is a comma-separated list of CIDRs. If you specify multiple of these annotations in a single Ingress rule, limit-rpm takes precedence over limit-rps . The annotations nginx.ingress.kubernetes.io/limit-rate and nginx.ingress.kubernetes.io/limit-rate-after define a limit on the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit. To configure this setting globally for all Ingress rules, the limit-rate-after and limit-rate values may be set in the NGINX ConfigMap . 
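Combining the annotations above, a minimal sketch of a rate-limited Ingress (the limit values, resource names and hostname are illustrative, not recommendations):

```yaml
apiVersion: extensions/v1beta1   # the API version used elsewhere in these docs
kind: Ingress
metadata:
  name: rate-limited-ingress   # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/limit-connections: "10"         # concurrent connections per client IP
    nginx.ingress.kubernetes.io/limit-rps: "5"                  # connections accepted per second per client IP
    nginx.ingress.kubernetes.io/limit-whitelist: "10.0.0.0/24"  # CIDRs excluded from rate limiting
spec:
  rules:
  - host: example.com            # hypothetical host
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc  # hypothetical Service
          servicePort: 80
```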
A value set via the Ingress annotation overrides the global setting.","title":"Rate limiting"},{"location":"user-guide/nginx-configuration/annotations/#permanent-redirect","text":"This annotation allows you to return a permanent redirect instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com would redirect everything to Google.","title":"Permanent Redirect"},{"location":"user-guide/nginx-configuration/annotations/#permanent-redirect-code","text":"This annotation allows you to modify the status code used for permanent redirects. For example nginx.ingress.kubernetes.io/permanent-redirect-code: '308' would return your permanent-redirect with a 308.","title":"Permanent Redirect Code"},{"location":"user-guide/nginx-configuration/annotations/#temporal-redirect","text":"This annotation allows you to return a temporary redirect (return code 302) instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/temporal-redirect: https://www.google.com would redirect everything to Google with a return code of 302 (Moved Temporarily).","title":"Temporal Redirect"},{"location":"user-guide/nginx-configuration/annotations/#ssl-passthrough","text":"The annotation nginx.ingress.kubernetes.io/ssl-passthrough instructs the controller to send TLS connections directly to the backend instead of letting NGINX decrypt the communication. See also TLS/HTTPS in the User guide. Note SSL Passthrough is disabled by default and requires starting the controller with the --enable-ssl-passthrough flag. Attention Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on layer 7 (HTTP), using SSL Passthrough invalidates all the other annotations set on an Ingress object.","title":"SSL Passthrough"},{"location":"user-guide/nginx-configuration/annotations/#service-upstream","text":"By default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration. The nginx.ingress.kubernetes.io/service-upstream annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port. This can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. See issue #257 .","title":"Service Upstream"},{"location":"user-guide/nginx-configuration/annotations/#known-issues","text":"If the service-upstream annotation is specified the following things should be taken into consideration: Sticky Sessions will not work as only round-robin load balancing is supported. The proxy_next_upstream directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.","title":"Known Issues"},{"location":"user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect","text":"By default the controller redirects (308) to HTTPS if TLS is enabled for that ingress. If you want to disable this behavior globally, you can use ssl-redirect: \"false\" in the NGINX ConfigMap . To configure this feature for specific ingress resources, you can use the nginx.ingress.kubernetes.io/ssl-redirect: \"false\" annotation in the particular resource. When using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. 
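A minimal sketch of that case (annotation fragment only, the resource name is hypothetical):

```yaml
metadata:
  name: offloaded-tls-ingress   # hypothetical name; TLS is terminated at the external load balancer
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"   # redirect to HTTPS even without a TLS certificate
```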
This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: \"true\" annotation in the particular resource, as in the sketch above.","title":"Server-side HTTPS enforcement through redirect"},{"location":"user-guide/nginx-configuration/annotations/#redirect-fromto-www","text":"In some scenarios it is required to redirect from www.domain.com to domain.com or vice versa. To enable this feature use the annotation nginx.ingress.kubernetes.io/from-to-www-redirect: \"true\" Attention If at some point a new Ingress is created with a host equal to one of the options (like domain.com ) the annotation will be omitted. Attention For HTTPS to HTTPS redirects it is mandatory that the SSL certificate defined in the Secret, located in the TLS section of the Ingress, contains both FQDNs in the common name of the certificate.","title":"Redirect from/to www"},{"location":"user-guide/nginx-configuration/annotations/#whitelist-source-range","text":"You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma-separated list of CIDRs , e.g. 10.0.0.0/24,172.10.0.1 . To configure this setting globally for all Ingress rules, the whitelist-source-range value may be set in the NGINX ConfigMap . Note Adding an annotation to an Ingress rule overrides any global restriction.","title":"Whitelist source range"},{"location":"user-guide/nginx-configuration/annotations/#custom-timeouts","text":"Using the configuration configmap it is possible to set the default global timeout for connections to the upstream servers. In some scenarios it is required to have different values. To allow this, we provide annotations that allow this customization: nginx.ingress.kubernetes.io/proxy-connect-timeout nginx.ingress.kubernetes.io/proxy-send-timeout nginx.ingress.kubernetes.io/proxy-read-timeout nginx.ingress.kubernetes.io/proxy-next-upstream nginx.ingress.kubernetes.io/proxy-next-upstream-tries nginx.ingress.kubernetes.io/proxy-request-buffering","title":"Custom timeouts"},{"location":"user-guide/nginx-configuration/annotations/#proxy-redirect","text":"With the annotations nginx.ingress.kubernetes.io/proxy-redirect-from and nginx.ingress.kubernetes.io/proxy-redirect-to it is possible to set the text that should be changed in the Location and Refresh header fields of a proxied server response. Setting \"off\" or \"default\" in the annotation nginx.ingress.kubernetes.io/proxy-redirect-from disables nginx.ingress.kubernetes.io/proxy-redirect-to ; otherwise, both annotations must be used in unison. Note that each annotation must be a string without spaces. By default the value of each annotation is \"off\".","title":"Proxy redirect"},{"location":"user-guide/nginx-configuration/annotations/#custom-max-body-size","text":"For NGINX, a 413 error will be returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter client_max_body_size . To configure this setting globally for all Ingress rules, the proxy-body-size value may be set in the NGINX ConfigMap . To use custom values in an Ingress rule, define this annotation: nginx.ingress.kubernetes.io/proxy-body-size : 8m","title":"Custom max body size"},{"location":"user-guide/nginx-configuration/annotations/#proxy-cookie-domain","text":"Sets a text that should be changed in the domain attribute of the \"Set-Cookie\" header fields of a proxied server response. 
To configure this setting globally for all Ingress rules, the proxy-cookie-domain value may be set in the NGINX ConfigMap .","title":"Proxy cookie domain"},{"location":"user-guide/nginx-configuration/annotations/#proxy-cookie-path","text":"Sets a text that should be changed in the path attribute of the \"Set-Cookie\" header fields of a proxied server response. To configure this setting globally for all Ingress rules, the proxy-cookie-path value may be set in the NGINX ConfigMap .","title":"Proxy cookie path"},{"location":"user-guide/nginx-configuration/annotations/#proxy-buffering","text":"Enable or disable proxy buffering proxy_buffering . By default proxy buffering is disabled in the NGINX config. To configure this setting globally for all Ingress rules, the proxy-buffering value may be set in the NGINX ConfigMap . To use custom values in an Ingress rule define these annotation: nginx.ingress.kubernetes.io/proxy-buffering : \"on\"","title":"Proxy buffering"},{"location":"user-guide/nginx-configuration/annotations/#proxy-buffers-number","text":"Sets the number of the buffers in proxy_buffers used for reading the first part of the response received from the proxied server. By default proxy buffers number is set as 4 To configure this setting globally, set proxy-buffers-number in NGINX ConfigMap . To use custom values in an Ingress rule, define this annotation: nginx.ingress.kubernetes.io/proxy-buffers-number : \"4\"","title":"Proxy buffers Number"},{"location":"user-guide/nginx-configuration/annotations/#proxy-buffer-size","text":"Sets the size of the buffer proxy_buffer_size used for reading the first part of the response received from the proxied server. By default proxy buffer size is set as \"4k\" To configure this setting globally, set proxy-buffer-size in NGINX ConfigMap . To use custom values in an Ingress rule, define this annotation: nginx.ingress.kubernetes.io/proxy-buffer-size : \"8k\"","title":"Proxy buffer size"},{"location":"user-guide/nginx-configuration/annotations/#ssl-ciphers","text":"Specifies the enabled ciphers . Using this annotation will set the ssl_ciphers directive at the server level. This configuration is active for all the paths in the host. nginx.ingress.kubernetes.io/ssl-ciphers : \"ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP\"","title":"SSL ciphers"},{"location":"user-guide/nginx-configuration/annotations/#connection-proxy-header","text":"Using this annotation will override the default connection header set by NGINX. To use custom values in an Ingress rule, define the annotation: nginx.ingress.kubernetes.io/connection-proxy-header : \"keep-alive\"","title":"Connection proxy header"},{"location":"user-guide/nginx-configuration/annotations/#enable-access-log","text":"Access logs are enabled by default, but in some scenarios access logs might be required to be disabled for a given ingress. To do this, use the annotation: nginx.ingress.kubernetes.io/enable-access-log : \"false\"","title":"Enable Access Log"},{"location":"user-guide/nginx-configuration/annotations/#enable-rewrite-log","text":"Rewrite logs are not enabled by default. In some scenarios it could be required to enable NGINX rewrite logs. Note that rewrite logs are sent to the error_log file at the notice level. 
To enable this feature use the annotation: nginx.ingress.kubernetes.io/enable-rewrite-log : \"true\"","title":"Enable Rewrite Log"},{"location":"user-guide/nginx-configuration/annotations/#x-forwarded-prefix-header","text":"To add the non-standard X-Forwarded-Prefix header to the upstream request with a string value, the following annotation can be used: nginx.ingress.kubernetes.io/x-forwarded-prefix : \"/path\"","title":"X-Forwarded-Prefix Header"},{"location":"user-guide/nginx-configuration/annotations/#lua-resty-waf","text":"Using lua-resty-waf-* annotations we can enable and control the lua-resty-waf Web Application Firewall per location. The following configuration will enable the WAF for the paths defined in the corresponding ingress: nginx.ingress.kubernetes.io/lua-resty-waf : \"active\" In order to run it in debugging mode you can set nginx.ingress.kubernetes.io/lua-resty-waf-debug to \"true\" in addition to the above configuration. The other possible values for nginx.ingress.kubernetes.io/lua-resty-waf are inactive and simulate . In inactive mode the WAF won't do anything, whereas in simulate mode it will log a warning message if there's a matching WAF rule for a given request. This is useful to debug a rule and eliminate possible false positives before fully deploying it. lua-resty-waf comes with a predefined set of rules https://github.com/p0pr0ck5/lua-resty-waf/tree/84b4f40362500dd0cb98b9e71b5875cb1a40f1ad/rules that covers the ModSecurity CRS. You can use nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets to ignore a subset of those rulesets. For example: nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets : \"41000_sqli, 42000_xss\" will ignore the two mentioned rulesets. It is also possible to configure custom WAF rules per ingress using the nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules annotation. For example, the following snippet will configure a WAF rule to deny requests whose query string contains the word foo : nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules : '[=[ { \"access\": [ { \"actions\": { \"disrupt\" : \"DENY\" }, \"id\": 10001, \"msg\": \"my custom rule\", \"operator\": \"STR_CONTAINS\", \"pattern\": \"foo\", \"vars\": [ { \"parse\": [ \"values\", 1 ], \"type\": \"REQUEST_ARGS\" } ] } ], \"body_filter\": [], \"header_filter\":[] } ]=]' By default, only the content types \"text/html\", \"text/json\" and \"application/json\" are allowed. To allow all content types, enable the following annotation: nginx.ingress.kubernetes.io/lua-resty-waf-allow-unknown-content-types : \"true\" The default score threshold of lua-resty-waf is 5, which is usually reached by hitting 2 default rules. You can modify the score threshold with the following annotation: nginx.ingress.kubernetes.io/lua-resty-waf-score-threshold : \"10\" When HTTPS is enabled on the endpoint, lua-resty-waf will return a 500 error when processing \"multipart\" contents (see the reference for this issue). This processing is enabled (\"true\") by default; you may set the following annotation as a workaround: nginx.ingress.kubernetes.io/lua-resty-waf-process-multipart-body : \"false\" For details on how to write WAF rules, please refer to https://github.com/p0pr0ck5/lua-resty-waf .","title":"Lua Resty WAF"},{"location":"user-guide/nginx-configuration/annotations/#modsecurity","text":"ModSecurity is an open source Web Application Firewall. It can be enabled for a particular set of ingress locations. The ModSecurity module must first be enabled by enabling ModSecurity in the ConfigMap . 
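A minimal sketch of the corresponding ConfigMap change (both keys appear in the ConfigMap reference later in this document; the ConfigMap name is hypothetical and must match the one the controller is started with):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # hypothetical; must match the controller's --configmap argument
data:
  enable-modsecurity: "true"            # enable the module globally
  enable-owasp-modsecurity-crs: "true"  # optionally also load the OWASP Core Rule Set
```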
Note this will enable ModSecurity for all paths, and each path must be disabled manually. It can be enabled using the following annotation: nginx.ingress.kubernetes.io/enable-modsecurity : \"true\" ModSecurity will run in \"Detection-Only\" mode using the recommended configuration . You can enable the OWASP Core Rule Set by setting the following annotation: nginx.ingress.kubernetes.io/enable-owasp-core-rules : \"true\" You can pass transactionIDs from nginx by setting up the following: nginx.ingress.kubernetes.io/modsecurity-transaction-id : \"$request_id\" You can also add your own set of modsecurity rules via a snippet: nginx.ingress.kubernetes.io/modsecurity-snippet : | SecRuleEngine On SecDebugLog /tmp/modsec_debug.log Note: If you use both enable-owasp-core-rules and modsecurity-snippet annotations together, only the modsecurity-snippet will take effect. If you wish to include the OWASP Core Rule Set or recommended configuration simply use the include statement: nginx.ingress.kubernetes.io/modsecurity-snippet : | Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf Include /etc/nginx/modsecurity/modsecurity.conf","title":"ModSecurity"},{"location":"user-guide/nginx-configuration/annotations/#influxdb","text":"Using influxdb-* annotations we can monitor requests passing through a Location by sending them to an InfluxDB backend exposing the UDP socket using the nginx-influxdb-module . nginx.ingress.kubernetes.io/enable-influxdb : \"true\" nginx.ingress.kubernetes.io/influxdb-measurement : \"nginx-reqs\" nginx.ingress.kubernetes.io/influxdb-port : \"8089\" nginx.ingress.kubernetes.io/influxdb-host : \"127.0.0.1\" nginx.ingress.kubernetes.io/influxdb-server-name : \"nginx-ingress\" For the influxdb-host parameter you have two options: Use an InfluxDB server configured with the UDP protocol enabled. Deploy Telegraf as a sidecar proxy to the Ingress controller configured to listen UDP with the socket listener input and to write using anyone of the outputs plugins like InfluxDB, Apache Kafka, Prometheus, etc.. (recommended) It's important to remember that there's no DNS resolver at this stage so you will have to configure an ip address to nginx.ingress.kubernetes.io/influxdb-host . If you deploy Influx or Telegraf as sidecar (another container in the same pod) this becomes straightforward since you can directly use 127.0.0.1 .","title":"InfluxDB"},{"location":"user-guide/nginx-configuration/annotations/#backend-protocol","text":"Using backend-protocol annotations is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS and AJP By default NGINX uses HTTP . Example: nginx.ingress.kubernetes.io/backend-protocol : \"HTTPS\"","title":"Backend Protocol"},{"location":"user-guide/nginx-configuration/annotations/#use-regex","text":"Attention When using this annotation with the NGINX annotation nginx.ingress.kubernetes.io/affinity of type cookie , nginx.ingress.kubernetes.io/session-cookie-path must be also set; Session cookie paths do not support regex. Using the nginx.ingress.kubernetes.io/use-regex annotation will indicate whether or not the paths defined on an Ingress use regular expressions. The default value is false . 
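Because session cookie paths do not support regex, an Ingress that combines regex paths with cookie affinity needs an explicit cookie path; a sketch (annotation fragment only, the path is hypothetical):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/session-cookie-path: "/products"   # literal path set on the affinity cookie
```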
The following will indicate that regular expression paths are being used: nginx.ingress.kubernetes.io/use-regex : \"true\" The following will indicate that regular expression paths are not being used: nginx.ingress.kubernetes.io/use-regex : \"false\" When this annotation is set to true , the case-insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on. Additionally, if the rewrite-target annotation is used on any Ingress for a given host, then the case-insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on. Please read about ingress path matching before using this modifier.","title":"Use Regex"},{"location":"user-guide/nginx-configuration/annotations/#satisfy","text":"By default, a request would need to satisfy all authentication requirements in order to be allowed. By using this annotation, requests that satisfy either any or all authentication requirements are allowed, based on the configuration value. nginx.ingress.kubernetes.io/satisfy : \"any\"","title":"Satisfy"},{"location":"user-guide/nginx-configuration/configmap/","text":"ConfigMaps \u00b6 ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system components for the nginx-controller. In order to overwrite nginx-controller configuration values as seen in config.go , you can add key-value pairs to the data section of the config-map. For example: data : map-hash-bucket-size : \"128\" ssl-protocols : SSLv2 Important The keys and values in a ConfigMap can only be strings. This means that if we want boolean values, we need to quote them, like \"true\" or \"false\". The same applies to numbers, like \"100\". \"Slice\" types (defined below as []string or []int ) can be provided as a comma-delimited string. 
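As a complete manifest, a minimal sketch of the example above (the ConfigMap name and namespace are hypothetical and must match what the controller is started with):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # hypothetical
  namespace: ingress-nginx    # hypothetical
data:
  map-hash-bucket-size: "128"
  ssl-protocols: SSLv2
```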
Configuration options \u00b6 The following table shows a configuration option's name, type, and the default value: name type default add-headers string \"\" allow-backend-server-header bool \"false\" hide-headers string array empty access-log-params string \"\" access-log-path string \"/var/log/nginx/access.log\" enable-access-log-for-default-backend bool \"false\" error-log-path string \"/var/log/nginx/error.log\" enable-dynamic-tls-records bool \"true\" enable-modsecurity bool \"false\" enable-owasp-modsecurity-crs bool \"false\" client-header-buffer-size string \"1k\" client-header-timeout int 60 client-body-buffer-size string \"8k\" client-body-timeout int 60 disable-access-log bool false disable-ipv6 bool false disable-ipv6-dns bool false enable-underscores-in-headers bool false ignore-invalid-headers bool true retry-non-idempotent bool \"false\" error-log-level string \"notice\" http2-max-field-size string \"4k\" http2-max-header-size string \"16k\" http2-max-requests int 1000 hsts bool \"true\" hsts-include-subdomains bool \"true\" hsts-max-age string \"15724800\" hsts-preload bool \"false\" keep-alive int 75 keep-alive-requests int 100 large-client-header-buffers string \"4 8k\" log-format-escape-json bool \"false\" log-format-upstream string %v - [ $the_real_ip ] - $remote_user [ $time_local ] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $request_length $request_time [ $proxy_upstream_name ] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id log-format-stream string [$time_local] $protocol $status $bytes_sent $bytes_received $session_time enable-multi-accept bool \"true\" max-worker-connections int 16384 max-worker-open-files int 0 map-hash-bucket-size int 64 nginx-status-ipv4-whitelist []string \"127.0.0.1\" nginx-status-ipv6-whitelist []string \"::1\" proxy-real-ip-cidr []string \"0.0.0.0/0\" proxy-set-headers string \"\" server-name-hash-max-size int 1024 server-name-hash-bucket-size int proxy-headers-hash-max-size int 512 proxy-headers-hash-bucket-size int 64 reuse-port bool \"true\" server-tokens bool \"true\" ssl-ciphers string \"ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\" ssl-ecdh-curve string \"auto\" ssl-dh-param string \"\" ssl-protocols string \"TLSv1.2\" ssl-session-cache bool \"true\" ssl-session-cache-size string \"10m\" ssl-session-tickets bool \"true\" ssl-session-ticket-key string ssl-session-timeout string \"10m\" ssl-buffer-size string \"4k\" use-proxy-protocol bool \"false\" proxy-protocol-header-timeout string \"5s\" use-gzip bool \"true\" use-geoip bool \"true\" use-geoip2 bool \"false\" enable-brotli bool \"false\" brotli-level int 4 brotli-types string \"application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\" use-http2 bool \"true\" gzip-level int 5 gzip-types string \"application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml 
application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\" worker-processes string worker-cpu-affinity string \"\" worker-shutdown-timeout string \"10s\" load-balance string \"round_robin\" variables-hash-bucket-size int 128 variables-hash-max-size int 2048 upstream-keepalive-connections int 32 upstream-keepalive-timeout int 60 upstream-keepalive-requests int 100 limit-conn-zone-variable string \"$binary_remote_addr\" proxy-stream-timeout string \"600s\" proxy-stream-responses int 1 bind-address []string \"\" use-forwarded-headers bool \"false\" forwarded-for-header string \"X-Forwarded-For\" compute-full-forwarded-for bool \"false\" proxy-add-original-uri-header bool \"true\" generate-request-id bool \"true\" enable-opentracing bool \"false\" zipkin-collector-host string \"\" zipkin-collector-port int 9411 zipkin-service-name string \"nginx\" zipkin-sample-rate float 1.0 jaeger-collector-host string \"\" jaeger-collector-port int 6831 jaeger-service-name string \"nginx\" jaeger-sampler-type string \"const\" jaeger-sampler-param string \"1\" main-snippet string \"\" http-snippet string \"\" server-snippet string \"\" location-snippet string \"\" custom-http-errors []int []int{} proxy-body-size string \"1m\" proxy-connect-timeout int 5 proxy-read-timeout int 60 proxy-send-timeout int 60 proxy-buffers-number int 4 proxy-buffer-size string \"4k\" proxy-cookie-path string \"off\" proxy-cookie-domain string \"off\" proxy-next-upstream string \"error timeout\" proxy-next-upstream-tries int 3 proxy-redirect-from string \"off\" proxy-request-buffering string \"on\" ssl-redirect bool \"true\" whitelist-source-range []string []string{} skip-access-log-urls []string []string{} limit-rate int 0 limit-rate-after int 0 http-redirect-code int 308 proxy-buffering string \"off\" limit-req-status-code int 503 limit-conn-status-code int 503 no-tls-redirect-locations string \"/.well-known/acme-challenge\" no-auth-locations string \"/.well-known/acme-challenge\" block-cidrs []string \"\" block-user-agents []string \"\" block-referers []string \"\" add-headers \u00b6 Sets custom headers from named configmap before sending traffic to the client. See proxy-set-headers . example allow-backend-server-header \u00b6 Enables the return of the header Server from the backend instead of the generic nginx string. default: is disabled hide-headers \u00b6 Sets additional header that will not be passed from the upstream server to the client response. default: empty References: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header access-log-params \u00b6 Additional params for access_log. For example, buffer=16k, gzip, flush=1m References: http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log access-log-path \u00b6 Access log path. Goes to /var/log/nginx/access.log by default. Note: the file /var/log/nginx/access.log is a symlink to /dev/stdout enable-access-log-for-default-backend \u00b6 Enables logging access to default backend. default: is disabled. error-log-path \u00b6 Error log path. Goes to /var/log/nginx/error.log by default. Note: the file /var/log/nginx/error.log is a symlink to /dev/stderr References: http://nginx.org/en/docs/ngx_core_module.html#error_log enable-dynamic-tls-records \u00b6 Enables dynamically sized TLS records to improve time-to-first-byte. default: is enabled References: https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency enable-modsecurity \u00b6 Enables the modsecurity module for NGINX. 
default: is disabled enable-owasp-modsecurity-crs \u00b6 Enables the OWASP ModSecurity Core Rule Set (CRS). default: is disabled client-header-buffer-size \u00b6 Allows to configure a custom buffer size for reading client request header. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size client-header-timeout \u00b6 Defines a timeout for reading client request header, in seconds. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout client-body-buffer-size \u00b6 Sets buffer size for reading client request body. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size client-body-timeout \u00b6 Defines a timeout for reading client request body, in seconds. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout disable-access-log \u00b6 Disables the Access Log from the entire Ingress Controller. default: '\"false\"' References: http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log disable-ipv6 \u00b6 Disable listening on IPV6. default: is disabled disable-ipv6-dns \u00b6 Disable IPV6 for nginx DNS resolver. default: is disabled enable-underscores-in-headers \u00b6 Enables underscores in header names. default: is disabled ignore-invalid-headers \u00b6 Set if header fields with invalid names should be ignored. default: is enabled retry-non-idempotent \u00b6 Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value \"true\". error-log-level \u00b6 Configures the logging level of errors. Log levels above are listed in the order of increasing severity. References: http://nginx.org/en/docs/ngx_core_module.html#error_log http2-max-field-size \u00b6 Limits the maximum size of an HPACK-compressed request header field. References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size http2-max-header-size \u00b6 Limits the maximum size of the entire request header list after HPACK decompression. References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size http2-max-requests \u00b6 Sets the maximum number of requests (including push requests) that can be served through one HTTP/2 connection, after which the next client request will lead to connection closing and the need of establishing a new connection. References: http://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests hsts \u00b6 Enables or disables the header HSTS in servers running SSL. HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tell browsers that it should only be communicated with using HTTPS, instead of using HTTP. It provides protection against protocol downgrade attacks and cookie theft. References: https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server hsts-include-subdomains \u00b6 Enables or disables the use of HSTS in all the subdomains of the server-name. hsts-max-age \u00b6 Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS. 
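Taken together, a sketch of the HSTS-related ConfigMap keys (the first two values are the documented defaults; the one-year max-age is illustrative, not a recommendation):

```yaml
data:
  hsts: "true"                    # emit the Strict-Transport-Security header on SSL servers
  hsts-include-subdomains: "true"
  hsts-max-age: "31536000"        # one year, instead of the default "15724800"
```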
hsts-preload \u00b6 Enables or disables the preload attribute in the HSTS feature (when it is enabled). keep-alive \u00b6 Sets the time during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout keep-alive-requests \u00b6 Sets the maximum number of requests that can be served through one keep-alive connection. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests large-client-header-buffers \u00b6 Sets the maximum number and size of buffers used for reading large client request headers. default: 4 8k References: http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers log-format-escape-json \u00b6 Sets whether the escape parameter of the nginx log format allows JSON (\"true\") or default character escaping in variables (\"false\"). log-format-upstream \u00b6 Sets the nginx log format . Example for JSON output: log-format-upstream: '{ \"time\": \"$time_iso8601\", \"remote_addr\": \"$proxy_protocol_addr\",\"x-forward-for\": \"$proxy_add_x_forwarded_for\", \"request_id\": \"$req_id\", \"remote_user\":\"$remote_user\", \"bytes_sent\": $bytes_sent, \"request_time\": $request_time, \"status\":$status, \"vhost\": \"$host\", \"request_proto\": \"$server_protocol\", \"path\": \"$uri\",\"request_query\": \"$args\", \"request_length\": $request_length, \"duration\": $request_time,\"method\": \"$request_method\", \"http_referrer\": \"$http_referer\", \"http_user_agent\":\"$http_user_agent\" }' Please check the log-format for the definition of each field. log-format-stream \u00b6 Sets the nginx stream format . enable-multi-accept \u00b6 If disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections at a time. default: true References: http://nginx.org/en/docs/ngx_core_module.html#multi_accept max-worker-connections \u00b6 Sets the maximum number of simultaneous connections that can be opened by each worker process. 0 will use the value of max-worker-open-files . default: 16384 Tip Using 0 in scenarios of high load improves performance at the cost of increasing RAM utilization (even on idle). max-worker-open-files \u00b6 Sets the maximum number of files that can be opened by each worker process. The default of 0 means \"max open files (system's limit) / worker-processes - 1024\". default: 0 map-hash-bucket-size \u00b6 Sets the bucket size for the map variables hash tables . The details of setting up hash tables are provided in a separate document . proxy-real-ip-cidr \u00b6 If use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default IP/network address of your external load balancer. proxy-set-headers \u00b6 Sets custom headers from a named configmap before sending traffic to backends. The value format is namespace/name. See example . server-name-hash-max-size \u00b6 Sets the maximum size of the server names hash tables used in server names, map directive's values, MIME types, names of request header strings, etc. References: http://nginx.org/en/docs/hash.html server-name-hash-bucket-size \u00b6 Sets the size of the bucket for the server names hash tables. References: http://nginx.org/en/docs/hash.html http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size proxy-headers-hash-max-size \u00b6 Sets the maximum size of the proxy headers hash tables. 
References: http://nginx.org/en/docs/hash.html https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size reuse-port \u00b6 Instructs NGINX to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option), allowing a kernel to distribute incoming connections between worker processes default: true proxy-headers-hash-bucket-size \u00b6 Sets the size of the bucket for the proxy headers hash tables. References: http://nginx.org/en/docs/hash.html https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size server-tokens \u00b6 Send NGINX Server header in responses and display NGINX version in error pages. default: is enabled ssl-ciphers \u00b6 Sets the ciphers list to enable. The ciphers are specified in the format understood by the OpenSSL library. The default cipher list is: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256 . The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy . Please check the Mozilla SSL Configuration Generator . ssl-ecdh-curve \u00b6 Specifies a curve for ECDHE ciphers. References: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve ssl-dh-param \u00b6 Sets the name of the secret that contains Diffie-Hellman key to help with \"Perfect Forward Secrecy\". References: https://wiki.openssl.org/index.php/Diffie-Hellman_parameters https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam ssl-protocols \u00b6 Sets the SSL protocols to use. The default is: TLSv1.2 . Please check the result of the configuration using https://ssllabs.com/ssltest/analyze.html or https://testssl.sh . ssl-session-cache \u00b6 Enables or disables the use of shared SSL cache among worker processes. ssl-session-cache-size \u00b6 Sets the size of the SSL shared session cache between all worker processes. ssl-session-tickets \u00b6 Enables or disables session resumption through TLS session tickets . ssl-session-ticket-key \u00b6 Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string. To create a ticket: openssl rand 80 | openssl enc -A -base64 TLS session ticket-key , by default, a randomly generated key is used. ssl-session-timeout \u00b6 Sets the time during which a client may reuse the session parameters stored in a cache. ssl-buffer-size \u00b6 Sets the size of the SSL buffer used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB). References: https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/ use-proxy-protocol \u00b6 Enables or disables the PROXY protocol to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB). proxy-protocol-header-timeout \u00b6 Sets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection. 
default: 5s use-gzip \u00b6 Enables or disables compression of HTTP responses using the \"gzip\" module . The default mime type list to compress is: application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component . use-geoip \u00b6 Enables or disables the \"geoip\" module that creates variables with values depending on the client IP address, using the precompiled MaxMind databases. default: true Note: MaxMind legacy databases are discontinued and will not receive updates after 2019-01-02, cf. discontinuation notice . Consider use-geoip2 below. use-geoip2 \u00b6 Enables the geoip2 module for NGINX. default: false enable-brotli \u00b6 Enables or disables compression of HTTP responses using the \"brotli\" module . The default mime type list to compress is: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component . default: is disabled Note: Brotli does not work in Safari < 11. For more information see https://caniuse.com/#feat=brotli brotli-level \u00b6 Sets the Brotli Compression Level that will be used. default: 4 brotli-types \u00b6 Sets the MIME Types that will be compressed on-the-fly by brotli. default: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component use-http2 \u00b6 Enables or disables HTTP/2 support in secure connections. gzip-level \u00b6 Sets the gzip Compression Level that will be used. default: 5 gzip-types \u00b6 Sets the MIME types in addition to \"text/html\" to compress. The special value \"*\" matches any MIME type. Responses with the \"text/html\" type are always compressed if use-gzip is enabled. worker-processes \u00b6 Sets the number of worker processes . The default of \"auto\" means the number of available CPU cores. worker-cpu-affinity \u00b6 Binds worker processes to the sets of CPUs. worker_cpu_affinity . By default worker processes are not bound to any specific CPUs. The value can be: \"\": empty string indicates that no affinity is applied. cpumask: e.g. 0001 0010 0100 1000 to bind processes to specific cpus. auto: binding worker processes automatically to available CPUs. worker-shutdown-timeout \u00b6 Sets a timeout for Nginx to wait for workers to gracefully shut down . default: \"10s\" load-balance \u00b6 Sets the algorithm to use for load balancing. The value can either be: round_robin: to use the default round robin load balancer least_conn: to use the least connected method ( note that this is available only in non-dynamic mode: --enable-dynamic-configuration=false ) ip_hash: to use a hash of the server for routing ( note that this is available only in non-dynamic mode: --enable-dynamic-configuration=false , but alternatively you can consider using nginx.ingress.kubernetes.io/upstream-hash-by ) ewma: to use the Peak EWMA method for routing ( implementation ) The default is round_robin . 
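For example, switching the controller to Peak EWMA balancing is a one-key ConfigMap change (a sketch):

```yaml
data:
  load-balance: "ewma"   # default is "round_robin"
```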
References: http://nginx.org/en/docs/http/load_balancing.html variables-hash-bucket-size \u00b6 Sets the bucket size for the variables hash table. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size variables-hash-max-size \u00b6 Sets the maximum size of the variables hash table. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size upstream-keepalive-connections \u00b6 Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. default: 32 References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive upstream-keepalive-timeout \u00b6 Sets a timeout during which an idle keepalive connection to an upstream server will stay open. default: 60 References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout upstream-keepalive-requests \u00b6 Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed. default: 100 References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_requests limit-conn-zone-variable \u00b6 Sets parameters for a shared memory zone that will keep states for various keys of limit_conn_zone . The default is \"$binary_remote_addr\" ; this variable's size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses. proxy-stream-timeout \u00b6 Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed. References: http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout proxy-stream-responses \u00b6 Sets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used. References: http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses bind-address \u00b6 Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop. use-forwarded-headers \u00b6 If true, NGINX passes the incoming X-Forwarded-* headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers. If false, NGINX ignores incoming X-Forwarded-* headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or it's behind an L3/packet-based load balancer that doesn't alter the source IP in the packets. forwarded-for-header \u00b6 Sets the header field for identifying the originating IP address of a client. default: X-Forwarded-For compute-full-forwarded-for \u00b6 Append the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies. proxy-add-original-uri-header \u00b6 Adds an X-Original-Uri header with the original request URI to the backend request. generate-request-id \u00b6 Ensures that X-Request-ID is defaulted to a random value, if no X-Request-ID is present in the request. enable-opentracing \u00b6 Enables the nginx Opentracing extension. 
default: is disabled References: https://github.com/opentracing-contrib/nginx-opentracing zipkin-collector-host \u00b6 Specifies the host to use when uploading traces. It must be a valid URL. zipkin-collector-port \u00b6 Specifies the port to use when uploading traces. default: 9411 zipkin-service-name \u00b6 Specifies the service name to use for any traces created. default: nginx zipkin-sample-rate \u00b6 Specifies sample rate for any traces created. default: 1.0 jaeger-collector-host \u00b6 Specifies the host to use when uploading traces. It must be a valid URL. jaeger-collector-port \u00b6 Specifies the port to use when uploading traces. default: 6831 jaeger-service-name \u00b6 Specifies the service name to use for any traces created. default: nginx jaeger-sampler-type \u00b6 Specifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. default: const jaeger-sampler-param \u00b6 Specifies the argument to be passed to the sampler constructor. Must be a number. For const this should be 0 to never sample and 1 to always sample. default: 1 main-snippet \u00b6 Adds custom configuration to the main section of the nginx configuration. http-snippet \u00b6 Adds custom configuration to the http section of the nginx configuration. server-snippet \u00b6 Adds custom configuration to all the servers in the nginx configuration. location-snippet \u00b6 Adds custom configuration to all the locations in the nginx configuration. custom-http-errors \u00b6 Enables which HTTP codes should be passed for processing with the error_page directive Setting at least one code also enables proxy_intercept_errors which are required to process error_page. Example usage: custom-http-errors: 404,415 proxy-body-size \u00b6 Sets the maximum allowed size of the client request body. See NGINX client_max_body_size . proxy-connect-timeout \u00b6 Sets the timeout for establishing a connection with a proxied server . It should be noted that this timeout cannot usually exceed 75 seconds. proxy-read-timeout \u00b6 Sets the timeout in seconds for reading a response from the proxied server . The timeout is set only between two successive read operations, not for the transmission of the whole response. proxy-send-timeout \u00b6 Sets the timeout in seconds for transmitting a request to the proxied server . The timeout is set only between two successive write operations, not for the transmission of the whole request. proxy-buffers-number \u00b6 Sets the number of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header. proxy-buffer-size \u00b6 Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header. proxy-cookie-path \u00b6 Sets a text that should be changed in the path attribute of the \u201cSet-Cookie\u201d header fields of a proxied server response. proxy-cookie-domain \u00b6 Sets a text that should be changed in the domain attribute of the \u201cSet-Cookie\u201d header fields of a proxied server response. proxy-next-upstream \u00b6 Specifies in which cases a request should be passed to the next server. proxy-next-upstream-tries \u00b6 Limit the number of possible tries a request should be passed to the next server. proxy-redirect-from \u00b6 Sets the original text that should be changed in the \"Location\" and \"Refresh\" header fields of a proxied server response. 
default: off References: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect proxy-request-buffering \u00b6 Enables or disables buffering of a client request body . ssl-redirect \u00b6 Sets the global value of redirects to HTTPS if the server has a TLS certificate (defined in an Ingress rule); the redirect code is controlled by http-redirect-code below. default: \"true\" whitelist-source-range \u00b6 Sets the default whitelisted IPs for each server block. This can be overwritten by an annotation on an Ingress rule. See ngx_http_access_module . skip-access-log-urls \u00b6 Sets a list of URLs that should not appear in the NGINX access log. This is useful with URLs like /health or health-check that make reading the logs more complex. default: is empty limit-rate \u00b6 Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate limit-rate-after \u00b6 Sets the initial amount after which the further transmission of a response to a client will be rate limited. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after http-redirect-code \u00b6 Sets the HTTP status code to be used in redirects. Supported codes are 301 , 302 , 307 and 308 . default: 308 Why is the default code 308? RFC 7238 was created to define the 308 (Permanent Redirect) status code, which is similar to 301 (Moved Permanently) but keeps the payload in the redirect. This is important if we send a redirect in methods like POST. proxy-buffering \u00b6 Enables or disables buffering of responses from the proxied server . limit-req-status-code \u00b6 Sets the status code to return in response to rejected requests . default: 503 limit-conn-status-code \u00b6 Sets the status code to return in response to rejected connections . default: 503 no-tls-redirect-locations \u00b6 A comma-separated list of locations on which http requests will never get redirected to their https counterpart. default: \"/.well-known/acme-challenge\" no-auth-locations \u00b6 A comma-separated list of locations that should not get authenticated. default: \"/.well-known/acme-challenge\" block-cidrs \u00b6 A comma-separated list of IP addresses (or subnets), requests from which have to be blocked globally. References: http://nginx.org/en/docs/http/ngx_http_access_module.html#deny block-user-agents \u00b6 A comma-separated list of User-Agents, requests from which have to be blocked globally. It's possible to use full strings and regular expressions here. More details about valid patterns can be found in the map Nginx directive documentation. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#map block-referers \u00b6 A comma-separated list of Referers, requests from which have to be blocked globally. It's possible to use full strings and regular expressions here. More details about valid patterns can be found in the map Nginx directive documentation. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#map","title":"ConfigMap"},{"location":"user-guide/nginx-configuration/configmap/#configmaps","text":"ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable. The ConfigMap API resource stores configuration data as key-value pairs. 
The data section provides the configuration for the nginx-controller's system components. In order to overwrite nginx-controller configuration values as seen in config.go , you can add key-value pairs to the data section of the config-map. For example: data : map-hash-bucket-size : \"128\" ssl-protocols : SSLv2 Important The keys and values in a ConfigMap can only be strings. This means that if we want a value with boolean semantics we need to quote it, like \"true\" or \"false\". The same applies to numbers, like \"100\". \"Slice\" types (defined below as []string or []int ) can be provided as a comma-delimited string.","title":"ConfigMaps"}
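As an illustrative sketch only, a complete ConfigMap applying the quoting rules above might look as follows. The metadata values are assumptions for illustration; they must match whatever the controller's --configmap flag points to:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # assumed name; must match the controller's --configmap flag
  namespace: ingress-nginx    # assumed namespace
data:
  enable-modsecurity: \"true\"                           # bool values are quoted strings
  keep-alive: \"75\"                                     # numbers are quoted strings too
  nginx-status-ipv4-whitelist: \"127.0.0.1,10.0.0.0/8\"  # slice types as a comma-delimited string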
\"false\" enable-brotli bool \"false\" brotli-level int 4 brotli-types string \"application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\" use-http2 bool \"true\" gzip-level int 5 gzip-types string \"application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\" worker-processes string worker-cpu-affinity string \"\" worker-shutdown-timeout string \"10s\" load-balance string \"round_robin\" variables-hash-bucket-size int 128 variables-hash-max-size int 2048 upstream-keepalive-connections int 32 upstream-keepalive-timeout int 60 upstream-keepalive-requests int 100 limit-conn-zone-variable string \"$binary_remote_addr\" proxy-stream-timeout string \"600s\" proxy-stream-responses int 1 bind-address []string \"\" use-forwarded-headers bool \"false\" forwarded-for-header string \"X-Forwarded-For\" compute-full-forwarded-for bool \"false\" proxy-add-original-uri-header bool \"true\" generate-request-id bool \"true\" enable-opentracing bool \"false\" zipkin-collector-host string \"\" zipkin-collector-port int 9411 zipkin-service-name string \"nginx\" zipkin-sample-rate float 1.0 jaeger-collector-host string \"\" jaeger-collector-port int 6831 jaeger-service-name string \"nginx\" jaeger-sampler-type string \"const\" jaeger-sampler-param string \"1\" main-snippet string \"\" http-snippet string \"\" server-snippet string \"\" location-snippet string \"\" custom-http-errors []int []int{} proxy-body-size string \"1m\" proxy-connect-timeout int 5 proxy-read-timeout int 60 proxy-send-timeout int 60 proxy-buffers-number int 4 proxy-buffer-size string \"4k\" proxy-cookie-path string \"off\" proxy-cookie-domain string \"off\" proxy-next-upstream string \"error timeout\" proxy-next-upstream-tries int 3 proxy-redirect-from string \"off\" proxy-request-buffering string \"on\" ssl-redirect bool \"true\" whitelist-source-range []string []string{} skip-access-log-urls []string []string{} limit-rate int 0 limit-rate-after int 0 http-redirect-code int 308 proxy-buffering string \"off\" limit-req-status-code int 503 limit-conn-status-code int 503 no-tls-redirect-locations string \"/.well-known/acme-challenge\" no-auth-locations string \"/.well-known/acme-challenge\" block-cidrs []string \"\" block-user-agents []string \"\" block-referers []string \"\"","title":"Configuration options"},{"location":"user-guide/nginx-configuration/configmap/#add-headers","text":"Sets custom headers from named configmap before sending traffic to the client. See proxy-set-headers . example","title":"add-headers"},{"location":"user-guide/nginx-configuration/configmap/#allow-backend-server-header","text":"Enables the return of the header Server from the backend instead of the generic nginx string. default: is disabled","title":"allow-backend-server-header"},{"location":"user-guide/nginx-configuration/configmap/#hide-headers","text":"Sets additional header that will not be passed from the upstream server to the client response. 
default: empty References: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header","title":"hide-headers"},{"location":"user-guide/nginx-configuration/configmap/#access-log-params","text":"Additional params for access_log. For example, buffer=16k, gzip, flush=1m References: http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log","title":"access-log-params"},{"location":"user-guide/nginx-configuration/configmap/#access-log-path","text":"Access log path. Goes to /var/log/nginx/access.log by default. Note: the file /var/log/nginx/access.log is a symlink to /dev/stdout","title":"access-log-path"},{"location":"user-guide/nginx-configuration/configmap/#enable-access-log-for-default-backend","text":"Enables access logging for the default backend. default: is disabled.","title":"enable-access-log-for-default-backend"},{"location":"user-guide/nginx-configuration/configmap/#error-log-path","text":"Error log path. Goes to /var/log/nginx/error.log by default. Note: the file /var/log/nginx/error.log is a symlink to /dev/stderr References: http://nginx.org/en/docs/ngx_core_module.html#error_log","title":"error-log-path"},{"location":"user-guide/nginx-configuration/configmap/#enable-dynamic-tls-records","text":"Enables dynamically sized TLS records to improve time-to-first-byte. default: is enabled References: https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency","title":"enable-dynamic-tls-records"},{"location":"user-guide/nginx-configuration/configmap/#enable-modsecurity","text":"Enables the modsecurity module for NGINX. default: is disabled","title":"enable-modsecurity"},{"location":"user-guide/nginx-configuration/configmap/#enable-owasp-modsecurity-crs","text":"Enables the OWASP ModSecurity Core Rule Set (CRS). default: is disabled","title":"enable-owasp-modsecurity-crs"},{"location":"user-guide/nginx-configuration/configmap/#client-header-buffer-size","text":"Allows configuring a custom buffer size for reading the client request header. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size","title":"client-header-buffer-size"},{"location":"user-guide/nginx-configuration/configmap/#client-header-timeout","text":"Defines a timeout for reading the client request header, in seconds. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout","title":"client-header-timeout"},{"location":"user-guide/nginx-configuration/configmap/#client-body-buffer-size","text":"Sets the buffer size for reading the client request body. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size","title":"client-body-buffer-size"},{"location":"user-guide/nginx-configuration/configmap/#client-body-timeout","text":"Defines a timeout for reading the client request body, in seconds. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout","title":"client-body-timeout"},{"location":"user-guide/nginx-configuration/configmap/#disable-access-log","text":"Disables the Access Log from the entire Ingress Controller. default: '\"false\"' References: http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log","title":"disable-access-log"},{"location":"user-guide/nginx-configuration/configmap/#disable-ipv6","text":"Disables listening on IPv6. default: is disabled","title":"disable-ipv6"},{"location":"user-guide/nginx-configuration/configmap/#disable-ipv6-dns","text":"Disables IPv6 for the nginx DNS resolver. 
default: is disabled","title":"disable-ipv6-dns"},{"location":"user-guide/nginx-configuration/configmap/#enable-underscores-in-headers","text":"Enables underscores in header names. default: is disabled","title":"enable-underscores-in-headers"},{"location":"user-guide/nginx-configuration/configmap/#ignore-invalid-headers","text":"Set if header fields with invalid names should be ignored. default: is enabled","title":"ignore-invalid-headers"},{"location":"user-guide/nginx-configuration/configmap/#retry-non-idempotent","text":"Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value \"true\".","title":"retry-non-idempotent"},{"location":"user-guide/nginx-configuration/configmap/#error-log-level","text":"Configures the logging level of errors. Log levels above are listed in the order of increasing severity. References: http://nginx.org/en/docs/ngx_core_module.html#error_log","title":"error-log-level"},{"location":"user-guide/nginx-configuration/configmap/#http2-max-field-size","text":"Limits the maximum size of an HPACK-compressed request header field. References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size","title":"http2-max-field-size"},{"location":"user-guide/nginx-configuration/configmap/#http2-max-header-size","text":"Limits the maximum size of the entire request header list after HPACK decompression. References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size","title":"http2-max-header-size"},{"location":"user-guide/nginx-configuration/configmap/#http2-max-requests","text":"Sets the maximum number of requests (including push requests) that can be served through one HTTP/2 connection, after which the next client request will lead to connection closing and the need of establishing a new connection. References: http://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests","title":"http2-max-requests"},{"location":"user-guide/nginx-configuration/configmap/#hsts","text":"Enables or disables the header HSTS in servers running SSL. HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tell browsers that it should only be communicated with using HTTPS, instead of using HTTP. It provides protection against protocol downgrade attacks and cookie theft. References: https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server","title":"hsts"},{"location":"user-guide/nginx-configuration/configmap/#hsts-include-subdomains","text":"Enables or disables the use of HSTS in all the subdomains of the server-name.","title":"hsts-include-subdomains"},{"location":"user-guide/nginx-configuration/configmap/#hsts-max-age","text":"Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.","title":"hsts-max-age"},{"location":"user-guide/nginx-configuration/configmap/#hsts-preload","text":"Enables or disables the preload attribute in the HSTS feature (when it is enabled) dd","title":"hsts-preload"},{"location":"user-guide/nginx-configuration/configmap/#keep-alive","text":"Sets the time during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections. 
,{"location":"user-guide/nginx-configuration/configmap/#keep-alive","text":"Sets the time during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout","title":"keep-alive"},{"location":"user-guide/nginx-configuration/configmap/#keep-alive-requests","text":"Sets the maximum number of requests that can be served through one keep-alive connection. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests","title":"keep-alive-requests"},{"location":"user-guide/nginx-configuration/configmap/#large-client-header-buffers","text":"Sets the maximum number and size of buffers used for reading a large client request header. default: 4 8k References: http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers","title":"large-client-header-buffers"},{"location":"user-guide/nginx-configuration/configmap/#log-format-escape-json","text":"Sets whether the escape parameter of the nginx log format allows JSON (\"true\") or default character escaping in variables (\"false\").","title":"log-format-escape-json"},{"location":"user-guide/nginx-configuration/configmap/#log-format-upstream","text":"Sets the nginx log format . Example for JSON output: log-format-upstream: '{ \"time\": \"$time_iso8601\", \"remote_addr\": \"$proxy_protocol_addr\",\"x-forward-for\": \"$proxy_add_x_forwarded_for\", \"request_id\": \"$req_id\", \"remote_user\":\"$remote_user\", \"bytes_sent\": $bytes_sent, \"request_time\": $request_time, \"status\":$status, \"vhost\": \"$host\", \"request_proto\": \"$server_protocol\", \"path\": \"$uri\",\"request_query\": \"$args\", \"request_length\": $request_length, \"duration\": $request_time,\"method\": \"$request_method\", \"http_referrer\": \"$http_referer\", \"http_user_agent\":\"$http_user_agent\" }' Please check log-format for the definition of each field.","title":"log-format-upstream"},{"location":"user-guide/nginx-configuration/configmap/#log-format-stream","text":"Sets the nginx stream format .","title":"log-format-stream"},{"location":"user-guide/nginx-configuration/configmap/#enable-multi-accept","text":"If disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections at once. default: true References: http://nginx.org/en/docs/ngx_core_module.html#multi_accept","title":"enable-multi-accept"},{"location":"user-guide/nginx-configuration/configmap/#max-worker-connections","text":"Sets the maximum number of simultaneous connections that can be opened by each worker process. 0 will use the value of max-worker-open-files . default: 16384 Tip Using 0 in scenarios of high load improves performance at the cost of increasing RAM utilization (even when idle).","title":"max-worker-connections"},{"location":"user-guide/nginx-configuration/configmap/#max-worker-open-files","text":"Sets the maximum number of files that can be opened by each worker process. The default of 0 means \"max open files (system's limit) / worker-processes - 1024\". default: 0","title":"max-worker-open-files"},{"location":"user-guide/nginx-configuration/configmap/#map-hash-bucket-size","text":"Sets the bucket size for the map variables hash tables . 
The details of setting up hash tables are provided in a separate document .","title":"map-hash-bucket-size"},{"location":"user-guide/nginx-configuration/configmap/#proxy-real-ip-cidr","text":"If use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default IP/network address of your external load balancer.","title":"proxy-real-ip-cidr"},{"location":"user-guide/nginx-configuration/configmap/#proxy-set-headers","text":"Sets custom headers from a named ConfigMap before sending traffic to backends. The value format is namespace/name. See example","title":"proxy-set-headers"},{"location":"user-guide/nginx-configuration/configmap/#server-name-hash-max-size","text":"Sets the maximum size of the server names hash tables used in server names, map directive\u2019s values, MIME types, names of request header strings, etc. References: http://nginx.org/en/docs/hash.html","title":"server-name-hash-max-size"},{"location":"user-guide/nginx-configuration/configmap/#server-name-hash-bucket-size","text":"Sets the size of the bucket for the server names hash tables. References: http://nginx.org/en/docs/hash.html http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size","title":"server-name-hash-bucket-size"},{"location":"user-guide/nginx-configuration/configmap/#proxy-headers-hash-max-size","text":"Sets the maximum size of the proxy headers hash tables. References: http://nginx.org/en/docs/hash.html https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size","title":"proxy-headers-hash-max-size"},{"location":"user-guide/nginx-configuration/configmap/#reuse-port","text":"Instructs NGINX to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option), allowing a kernel to distribute incoming connections between worker processes. default: true","title":"reuse-port"},{"location":"user-guide/nginx-configuration/configmap/#proxy-headers-hash-bucket-size","text":"Sets the size of the bucket for the proxy headers hash tables. References: http://nginx.org/en/docs/hash.html https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size","title":"proxy-headers-hash-bucket-size"},{"location":"user-guide/nginx-configuration/configmap/#server-tokens","text":"Sends the NGINX Server header in responses and displays the NGINX version in error pages. default: is enabled","title":"server-tokens"},{"location":"user-guide/nginx-configuration/configmap/#ssl-ciphers","text":"Sets the cipher list to enable. The ciphers are specified in the format understood by the OpenSSL library. The default cipher list is: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256 . The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy . Please check the Mozilla SSL Configuration Generator .","title":"ssl-ciphers"}
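For example, a sketch that pins the protocol and trims the cipher list. The two ciphers shown are an illustrative subset of the default list above, not a vetted recommendation; verify any real configuration with the Mozilla generator first:

data:
  ssl-protocols: \"TLSv1.2\"
  ssl-ciphers: \"ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384\"   # illustrative subset only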
,{"location":"user-guide/nginx-configuration/configmap/#ssl-ecdh-curve","text":"Specifies a curve for ECDHE ciphers. References: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve","title":"ssl-ecdh-curve"},{"location":"user-guide/nginx-configuration/configmap/#ssl-dh-param","text":"Sets the name of the secret that contains a Diffie-Hellman key to help with \"Perfect Forward Secrecy\". References: https://wiki.openssl.org/index.php/Diffie-Hellman_parameters https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam","title":"ssl-dh-param"},{"location":"user-guide/nginx-configuration/configmap/#ssl-protocols","text":"Sets the SSL protocols to use. The default is: TLSv1.2 . Please check the result of the configuration using https://ssllabs.com/ssltest/analyze.html or https://testssl.sh .","title":"ssl-protocols"},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-cache","text":"Enables or disables the use of a shared SSL cache among worker processes.","title":"ssl-session-cache"},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-cache-size","text":"Sets the size of the SSL shared session cache between all worker processes.","title":"ssl-session-cache-size"},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-tickets","text":"Enables or disables session resumption through TLS session tickets .","title":"ssl-session-tickets"},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-ticket-key","text":"Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string. To create a ticket key: openssl rand 80 | openssl enc -A -base64 . By default, a randomly generated TLS session ticket key is used.","title":"ssl-session-ticket-key"},{"location":"user-guide/nginx-configuration/configmap/#ssl-session-timeout","text":"Sets the time during which a client may reuse the session parameters stored in a cache.","title":"ssl-session-timeout"},{"location":"user-guide/nginx-configuration/configmap/#ssl-buffer-size","text":"Sets the size of the SSL buffer used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB). References: https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/","title":"ssl-buffer-size"},{"location":"user-guide/nginx-configuration/configmap/#use-proxy-protocol","text":"Enables or disables the PROXY protocol to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).","title":"use-proxy-protocol"},{"location":"user-guide/nginx-configuration/configmap/#proxy-protocol-header-timeout","text":"Sets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection. default: 5s","title":"proxy-protocol-header-timeout"},{"location":"user-guide/nginx-configuration/configmap/#use-gzip","text":"Enables or disables compression of HTTP responses using the \"gzip\" module . 
The default mime type list to compress is: application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component .","title":"use-gzip"},{"location":"user-guide/nginx-configuration/configmap/#use-geoip","text":"Enables or disables the \"geoip\" module that creates variables with values depending on the client IP address, using the precompiled MaxMind databases. default: true Note: MaxMind legacy databases are discontinued and will not receive updates after 2019-01-02, cf. discontinuation notice . Consider use-geoip2 below.","title":"use-geoip"},{"location":"user-guide/nginx-configuration/configmap/#use-geoip2","text":"Enables the geoip2 module for NGINX. default: false","title":"use-geoip2"},{"location":"user-guide/nginx-configuration/configmap/#enable-brotli","text":"Enables or disables compression of HTTP responses using the \"brotli\" module . The default mime type list to compress is: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component . default: is disabled Note: Brotli does not work in Safari < 11. For more information see https://caniuse.com/#feat=brotli","title":"enable-brotli"},{"location":"user-guide/nginx-configuration/configmap/#brotli-level","text":"Sets the Brotli Compression Level that will be used. default: 4","title":"brotli-level"},{"location":"user-guide/nginx-configuration/configmap/#brotli-types","text":"Sets the MIME Types that will be compressed on-the-fly by brotli. default: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component","title":"brotli-types"},{"location":"user-guide/nginx-configuration/configmap/#use-http2","text":"Enables or disables HTTP/2 support in secure connections.","title":"use-http2"},{"location":"user-guide/nginx-configuration/configmap/#gzip-level","text":"Sets the gzip Compression Level that will be used. default: 5","title":"gzip-level"},{"location":"user-guide/nginx-configuration/configmap/#gzip-types","text":"Sets the MIME types in addition to \"text/html\" to compress. The special value \"*\" matches any MIME type. Responses with the \"text/html\" type are always compressed if use-gzip is enabled.","title":"gzip-types"}
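As an illustrative sketch, enabling Brotli alongside gzip with a trimmed MIME list (all values are examples, not tuning advice):

data:
  enable-brotli: \"true\"
  brotli-level: \"4\"   # the default; higher levels trade CPU for size
  brotli-types: \"text/css application/javascript application/json image/svg+xml\"   # illustrative subset
  gzip-level: \"5\"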
,{"location":"user-guide/nginx-configuration/configmap/#worker-processes","text":"Sets the number of worker processes . The default of \"auto\" means the number of available CPU cores.","title":"worker-processes"},{"location":"user-guide/nginx-configuration/configmap/#worker-cpu-affinity","text":"Binds worker processes to the sets of CPUs. worker_cpu_affinity . By default worker processes are not bound to any specific CPUs. The value can be: \"\": empty string indicates no affinity is applied. cpumask: e.g. 0001 0010 0100 1000 to bind processes to specific cpus. auto: binds worker processes automatically to available CPUs.","title":"worker-cpu-affinity"},{"location":"user-guide/nginx-configuration/configmap/#worker-shutdown-timeout","text":"Sets a timeout for Nginx to wait for workers to gracefully shut down . default: \"10s\"","title":"worker-shutdown-timeout"},{"location":"user-guide/nginx-configuration/configmap/#load-balance","text":"Sets the algorithm to use for load balancing. The value can either be: round_robin: to use the default round robin load balancer least_conn: to use the least connected method ( note that this is available only in non-dynamic mode: --enable-dynamic-configuration=false ) ip_hash: to route requests based on a hash of the client IP address ( note that this is available only in non-dynamic mode: --enable-dynamic-configuration=false , but alternatively you can consider using nginx.ingress.kubernetes.io/upstream-hash-by ) ewma: to use the Peak EWMA method for routing ( implementation ) The default is round_robin . References: http://nginx.org/en/docs/http/load_balancing.html","title":"load-balance"},{"location":"user-guide/nginx-configuration/configmap/#variables-hash-bucket-size","text":"Sets the bucket size for the variables hash table. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size","title":"variables-hash-bucket-size"},{"location":"user-guide/nginx-configuration/configmap/#variables-hash-max-size","text":"Sets the maximum size of the variables hash table. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size","title":"variables-hash-max-size"},{"location":"user-guide/nginx-configuration/configmap/#upstream-keepalive-connections","text":"Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. default: 32 References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive","title":"upstream-keepalive-connections"},{"location":"user-guide/nginx-configuration/configmap/#upstream-keepalive-timeout","text":"Sets a timeout during which an idle keepalive connection to an upstream server will stay open. default: 60 References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout","title":"upstream-keepalive-timeout"},{"location":"user-guide/nginx-configuration/configmap/#upstream-keepalive-requests","text":"Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed. default: 100 References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_requests","title":"upstream-keepalive-requests"},{"location":"user-guide/nginx-configuration/configmap/#limit-conn-zone-variable","text":"Sets parameters for a shared memory zone that will keep states for various keys of limit_conn_zone . The default is \"$binary_remote_addr\" , whose size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.","title":"limit-conn-zone-variable"}
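A sketch combining the load-balance and upstream keepalive options described above (the numbers are arbitrary illustrations, not recommendations):

data:
  load-balance: \"ewma\"
  upstream-keepalive-connections: \"64\"   # default is 32
  upstream-keepalive-timeout: \"60\"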
,{"location":"user-guide/nginx-configuration/configmap/#proxy-stream-timeout","text":"Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed. References: http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout","title":"proxy-stream-timeout"},{"location":"user-guide/nginx-configuration/configmap/#proxy-stream-responses","text":"Sets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used. References: http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses","title":"proxy-stream-responses"},{"location":"user-guide/nginx-configuration/configmap/#bind-address","text":"Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.","title":"bind-address"},{"location":"user-guide/nginx-configuration/configmap/#use-forwarded-headers","text":"If true, NGINX passes the incoming X-Forwarded-* headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers. If false, NGINX ignores incoming X-Forwarded-* headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or it's behind an L3/packet-based load balancer that doesn't alter the source IP in the packets.","title":"use-forwarded-headers"},{"location":"user-guide/nginx-configuration/configmap/#forwarded-for-header","text":"Sets the header field for identifying the originating IP address of a client. default: X-Forwarded-For","title":"forwarded-for-header"},{"location":"user-guide/nginx-configuration/configmap/#compute-full-forwarded-for","text":"Appends the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.","title":"compute-full-forwarded-for"}
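For instance, a hedged sketch for running behind a trusted L7 load balancer; the CIDR is a placeholder for your balancer's actual address range:

data:
  use-forwarded-headers: \"true\"
  compute-full-forwarded-for: \"true\"
  proxy-real-ip-cidr: \"10.0.0.0/8\"   # placeholder: set to the load balancer's source range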
,{"location":"user-guide/nginx-configuration/configmap/#proxy-add-original-uri-header","text":"Adds an X-Original-Uri header with the original request URI to the backend request.","title":"proxy-add-original-uri-header"},{"location":"user-guide/nginx-configuration/configmap/#generate-request-id","text":"Ensures that X-Request-ID is defaulted to a random value, if no X-Request-ID is present in the request.","title":"generate-request-id"},{"location":"user-guide/nginx-configuration/configmap/#enable-opentracing","text":"Enables the nginx Opentracing extension. default: is disabled References: https://github.com/opentracing-contrib/nginx-opentracing","title":"enable-opentracing"},{"location":"user-guide/nginx-configuration/configmap/#zipkin-collector-host","text":"Specifies the host to use when uploading traces. It must be a valid URL.","title":"zipkin-collector-host"},{"location":"user-guide/nginx-configuration/configmap/#zipkin-collector-port","text":"Specifies the port to use when uploading traces. default: 9411","title":"zipkin-collector-port"},{"location":"user-guide/nginx-configuration/configmap/#zipkin-service-name","text":"Specifies the service name to use for any traces created. default: nginx","title":"zipkin-service-name"},{"location":"user-guide/nginx-configuration/configmap/#zipkin-sample-rate","text":"Specifies the sample rate for any traces created. default: 1.0","title":"zipkin-sample-rate"},{"location":"user-guide/nginx-configuration/configmap/#jaeger-collector-host","text":"Specifies the host to use when uploading traces. It must be a valid URL.","title":"jaeger-collector-host"},{"location":"user-guide/nginx-configuration/configmap/#jaeger-collector-port","text":"Specifies the port to use when uploading traces. default: 6831","title":"jaeger-collector-port"},{"location":"user-guide/nginx-configuration/configmap/#jaeger-service-name","text":"Specifies the service name to use for any traces created. default: nginx","title":"jaeger-service-name"},{"location":"user-guide/nginx-configuration/configmap/#jaeger-sampler-type","text":"Specifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. default: const","title":"jaeger-sampler-type"},{"location":"user-guide/nginx-configuration/configmap/#jaeger-sampler-param","text":"Specifies the argument to be passed to the sampler constructor. Must be a number. For const this should be 0 to never sample and 1 to always sample. default: 1","title":"jaeger-sampler-param"},{"location":"user-guide/nginx-configuration/configmap/#main-snippet","text":"Adds custom configuration to the main section of the nginx configuration.","title":"main-snippet"},{"location":"user-guide/nginx-configuration/configmap/#http-snippet","text":"Adds custom configuration to the http section of the nginx configuration.","title":"http-snippet"},{"location":"user-guide/nginx-configuration/configmap/#server-snippet","text":"Adds custom configuration to all the servers in the nginx configuration.","title":"server-snippet"},{"location":"user-guide/nginx-configuration/configmap/#location-snippet","text":"Adds custom configuration to all the locations in the nginx configuration.","title":"location-snippet"},{"location":"user-guide/nginx-configuration/configmap/#custom-http-errors","text":"Specifies which HTTP codes should be passed for processing with the error_page directive. Setting at least one code also enables proxy_intercept_errors , which is required to process error_page. Example usage: custom-http-errors: 404,415","title":"custom-http-errors"},{"location":"user-guide/nginx-configuration/configmap/#proxy-body-size","text":"Sets the maximum allowed size of the client request body. See NGINX client_max_body_size .","title":"proxy-body-size"},{"location":"user-guide/nginx-configuration/configmap/#proxy-connect-timeout","text":"Sets the timeout for establishing a connection with a proxied server . It should be noted that this timeout cannot usually exceed 75 seconds.","title":"proxy-connect-timeout"},{"location":"user-guide/nginx-configuration/configmap/#proxy-read-timeout","text":"Sets the timeout in seconds for reading a response from the proxied server . The timeout is set only between two successive read operations, not for the transmission of the whole response.","title":"proxy-read-timeout"},{"location":"user-guide/nginx-configuration/configmap/#proxy-send-timeout","text":"Sets the timeout in seconds for transmitting a request to the proxied server . The timeout is set only between two successive write operations, not for the transmission of the whole request.","title":"proxy-send-timeout"},{"location":"user-guide/nginx-configuration/configmap/#proxy-buffers-number","text":"Sets the number of buffers used for reading the first part of the response received from the proxied server. 
This part usually contains a small response header.","title":"proxy-buffers-number"},{"location":"user-guide/nginx-configuration/configmap/#proxy-buffer-size","text":"Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header.","title":"proxy-buffer-size"},{"location":"user-guide/nginx-configuration/configmap/#proxy-cookie-path","text":"Sets a text that should be changed in the path attribute of the \u201cSet-Cookie\u201d header fields of a proxied server response.","title":"proxy-cookie-path"},{"location":"user-guide/nginx-configuration/configmap/#proxy-cookie-domain","text":"Sets a text that should be changed in the domain attribute of the \u201cSet-Cookie\u201d header fields of a proxied server response.","title":"proxy-cookie-domain"},{"location":"user-guide/nginx-configuration/configmap/#proxy-next-upstream","text":"Specifies in which cases a request should be passed to the next server.","title":"proxy-next-upstream"},{"location":"user-guide/nginx-configuration/configmap/#proxy-next-upstream-tries","text":"Limits the number of possible tries for passing a request to the next server.","title":"proxy-next-upstream-tries"},{"location":"user-guide/nginx-configuration/configmap/#proxy-redirect-from","text":"Sets the original text that should be changed in the \"Location\" and \"Refresh\" header fields of a proxied server response. default: off References: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect","title":"proxy-redirect-from"},{"location":"user-guide/nginx-configuration/configmap/#proxy-request-buffering","text":"Enables or disables buffering of a client request body .","title":"proxy-request-buffering"},{"location":"user-guide/nginx-configuration/configmap/#ssl-redirect","text":"Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule). default: \"true\"","title":"ssl-redirect"},{"location":"user-guide/nginx-configuration/configmap/#whitelist-source-range","text":"Sets the default whitelisted IPs for each server block. This can be overwritten by an annotation on an Ingress rule. See ngx_http_access_module .","title":"whitelist-source-range"},{"location":"user-guide/nginx-configuration/configmap/#skip-access-log-urls","text":"Sets a list of URLs that should not appear in the NGINX access log. This is useful with URLs like /health or health-check that make reading the logs more complex. default: is empty","title":"skip-access-log-urls"},{"location":"user-guide/nginx-configuration/configmap/#limit-rate","text":"Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice the specified limit. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate","title":"limit-rate"},{"location":"user-guide/nginx-configuration/configmap/#limit-rate-after","text":"Sets the initial amount after which the further transmission of a response to a client will be rate limited. References: http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after","title":"limit-rate-after"}
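As a sketch combining several of the proxy and limit options above (the numbers are placeholders, not recommendations):

data:
  proxy-connect-timeout: \"10\"
  proxy-read-timeout: \"120\"
  proxy-send-timeout: \"120\"
  proxy-body-size: \"8m\"
  limit-rate: \"0\"   # 0 leaves response rate limiting disabled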
,{"location":"user-guide/nginx-configuration/configmap/#http-redirect-code","text":"Sets the HTTP status code to be used in redirects. Supported codes are 301 , 302 , 307 and 308 . default: 308 Why is the default code 308? RFC 7238 was created to define the 308 (Permanent Redirect) status code, which is similar to 301 (Moved Permanently) but keeps the payload in the redirect. This is important if we send a redirect for methods like POST.","title":"http-redirect-code"},{"location":"user-guide/nginx-configuration/configmap/#proxy-buffering","text":"Enables or disables buffering of responses from the proxied server .","title":"proxy-buffering"},{"location":"user-guide/nginx-configuration/configmap/#limit-req-status-code","text":"Sets the status code to return in response to rejected requests . default: 503","title":"limit-req-status-code"},{"location":"user-guide/nginx-configuration/configmap/#limit-conn-status-code","text":"Sets the status code to return in response to rejected connections . default: 503","title":"limit-conn-status-code"},{"location":"user-guide/nginx-configuration/configmap/#no-tls-redirect-locations","text":"A comma-separated list of locations on which http requests will never get redirected to their https counterpart. default: \"/.well-known/acme-challenge\"","title":"no-tls-redirect-locations"},{"location":"user-guide/nginx-configuration/configmap/#no-auth-locations","text":"A comma-separated list of locations that should not get authenticated. default: \"/.well-known/acme-challenge\"","title":"no-auth-locations"},{"location":"user-guide/nginx-configuration/configmap/#block-cidrs","text":"A comma-separated list of IP addresses (or subnets), requests from which have to be blocked globally. References: http://nginx.org/en/docs/http/ngx_http_access_module.html#deny","title":"block-cidrs"},{"location":"user-guide/nginx-configuration/configmap/#block-user-agents","text":"A comma-separated list of User-Agents, requests from which have to be blocked globally. It's possible to use full strings and regular expressions here. More details about valid patterns can be found in the map Nginx directive documentation. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#map","title":"block-user-agents"},{"location":"user-guide/nginx-configuration/configmap/#block-referers","text":"A comma-separated list of Referers, requests from which have to be blocked globally. It's possible to use full strings and regular expressions here. More details about valid patterns can be found in the map Nginx directive documentation. References: http://nginx.org/en/docs/http/ngx_http_map_module.html#map","title":"block-referers"}
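For example, an illustrative sketch of global blocking rules; the addresses and patterns are placeholders (the subnet uses a TEST-NET range):

data:
  block-cidrs: \"192.0.2.0/24\"           # placeholder subnet
  block-user-agents: \"~*badbot\"          # full string or regular expression, map-style syntax
  block-referers: \"~*spam.example.com\"   # full string or regular expression, map-style syntax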
,{"location":"user-guide/nginx-configuration/custom-template/","text":"Custom NGINX template \u00b6 The NGINX template is located in the file /etc/nginx/template/nginx.tmpl . Using a Volume it is possible to use a custom template. This includes using a ConfigMap as the source of the template: volumeMounts : - mountPath : /etc/nginx/template name : nginx-template-volume readOnly : true volumes : - name : nginx-template-volume configMap : name : nginx-template items : - key : nginx.tmpl path : nginx.tmpl Please note the template is tied to the Go code. Do not change names in the variable $cfg . For more information about the template syntax please check the Go template package . In addition to the built-in functions provided by the Go package the following functions are also available: empty: returns true if the specified parameter (string) is empty contains: strings.Contains hasPrefix: strings.HasPrefix hasSuffix: strings.HasSuffix toUpper: strings.ToUpper toLower: strings.ToLower buildLocation: helps to build the NGINX Location section in each server buildProxyPass: builds the reverse proxy configuration buildRateLimit: helps to build a limit zone inside a location if it contains a rate limit annotation TODO: buildAuthLocation: buildAuthResponseHeaders: buildResolvers: buildLogFormatUpstream: buildDenyVariable: buildUpstreamName: buildForwardedFor: buildAuthSignURL: buildNextUpstream: filterRateLimits: formatIP: getenv: getIngressInformation: serverConfig: isLocationAllowed: isValidClientBodyBufferSize:","title":"Custom NGINX template"},{"location":"user-guide/nginx-configuration/custom-template/#custom-nginx-template","text":"The NGINX template is located in the file /etc/nginx/template/nginx.tmpl . Using a Volume it is possible to use a custom template. This includes using a ConfigMap as the source of the template: volumeMounts : - mountPath : /etc/nginx/template name : nginx-template-volume readOnly : true volumes : - name : nginx-template-volume configMap : name : nginx-template items : - key : nginx.tmpl path : nginx.tmpl Please note the template is tied to the Go code. Do not change names in the variable $cfg . For more information about the template syntax please check the Go template package . In addition to the built-in functions provided by the Go package the following functions are also available: empty: returns true if the specified parameter (string) is empty contains: strings.Contains hasPrefix: strings.HasPrefix hasSuffix: strings.HasSuffix toUpper: strings.ToUpper toLower: strings.ToLower buildLocation: helps to build the NGINX Location section in each server buildProxyPass: builds the reverse proxy configuration buildRateLimit: helps to build a limit zone inside a location if it contains a rate limit annotation TODO: buildAuthLocation: buildAuthResponseHeaders: buildResolvers: buildLogFormatUpstream: buildDenyVariable: buildUpstreamName: buildForwardedFor: buildAuthSignURL: buildNextUpstream: filterRateLimits: formatIP: getenv: getIngressInformation: serverConfig: isLocationAllowed: isValidClientBodyBufferSize:","title":"Custom NGINX template"},{"location":"user-guide/nginx-configuration/log-format/","text":"Log format \u00b6 The default configuration uses a custom logging format to add additional information about upstreams, response time and status. 
log_format upstreaminfo ' {{ if $cfg.useProxyProtocol }} $proxy_protocol_addr {{ else }} $remote_addr {{ end }} - ' '[$the_real_ip] - $remote_user [$time_local] \"$request\" ' '$status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" ' '$request_length $request_time [$proxy_upstream_name] $upstream_addr ' '$upstream_response_length $upstream_response_time $upstream_status $req_id'; Placeholder Description $proxy_protocol_addr remote address if proxy protocol is enabled $remote_addr remote address if proxy protocol is disabled (default) $the_real_ip the source IP address of the client $remote_user user name supplied with the Basic authentication $time_local local time in the Common Log Format $request full original request line $status response status $body_bytes_sent number of bytes sent to a client, not counting the response header $http_referer value of the Referer header $http_user_agent value of User-Agent header $request_length request length (including request line, header, and request body) $request_time time elapsed since the first bytes were read from the client $proxy_upstream_name name of the upstream. The format is upstream--- $upstream_addr the IP address and port (or the path to the domain socket) of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas. $upstream_response_length the length of the response obtained from the upstream server $upstream_response_time time spent on receiving the response from the upstream server as seconds with millisecond resolution $upstream_status status code of the response obtained from the upstream server $req_id the randomly generated ID of the request Additional available variables: Placeholder Description $namespace namespace of the ingress $ingress_name name of the ingress $service_name name of the service $service_port port of the service Sources: Upstream variables Embedded variables","title":"Log format"},{"location":"user-guide/nginx-configuration/log-format/#log-format","text":"The default configuration uses a custom logging format to add additional information about upstreams, response time and status. log_format upstreaminfo ' {{ if $cfg.useProxyProtocol }} $proxy_protocol_addr {{ else }} $remote_addr {{ end }} - ' '[$the_real_ip] - $remote_user [$time_local] \"$request\" ' '$status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" ' '$request_length $request_time [$proxy_upstream_name] $upstream_addr ' '$upstream_response_length $upstream_response_time $upstream_status $req_id'; Placeholder Description $proxy_protocol_addr remote address if proxy protocol is enabled $remote_addr remote address if proxy protocol is disabled (default) $the_real_ip the source IP address of the client $remote_user user name supplied with the Basic authentication $time_local local time in the Common Log Format $request full original request line $status response status $body_bytes_sent number of bytes sent to a client, not counting the response header $http_referer value of the Referer header $http_user_agent value of User-Agent header $request_length request length (including request line, header, and request body) $request_time time elapsed since the first bytes were read from the client $proxy_upstream_name name of the upstream. The format is upstream--- $upstream_addr the IP address and port (or the path to the domain socket) of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas. 
$upstream_response_length the length of the response obtained from the upstream server $upstream_response_time time spent on receiving the response from the upstream server as seconds with millisecond resolution $upstream_status status code of the response obtained from the upstream server $req_id the randomly generated ID of the request Additional available variables: Placeholder Description $namespace namespace of the ingress $ingress_name name of the ingress $service_name name of the service $service_port port of the service Sources: Upstream variables Embedded variables","title":"Log format"},{"location":"user-guide/third-party-addons/modsecurity/","text":"ModSecurity Web Application Firewall \u00b6 ModSecurity is an open source, cross platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - https://www.modsecurity.org The ModSecurity-nginx connector is the connection point between NGINX and libmodsecurity (ModSecurity v3). The default ModSecurity configuration file is located in /etc/nginx/modsecurity/modsecurity.conf . This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration. To enable the ModSecurity feature we need to specify enable-modsecurity: \"true\" in the configuration ConfigMap. Note: the default configuration uses detection only, because that minimizes the chances of post-installation disruption. The file /var/log/modsec_audit.log contains the ModSecurity log. 
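A minimal sketch of turning this on via the ConfigMap (same assumed name and namespace as the earlier examples):

data:
  enable-modsecurity: \"true\"
  enable-owasp-modsecurity-crs: \"true\"   # optionally also load the OWASP CRS rules, described below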
The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts. The directory /etc/nginx/owasp-modsecurity-crs contains the owasp-modsecurity-crs repository . Using enable-owasp-modsecurity-crs: \"true\" we enable the use of the rules.","title":"ModSecurity Web Application Firewall"},{"location":"user-guide/third-party-addons/modsecurity/#modsecurity-web-application-firewall","text":"ModSecurity is an open source, cross platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - https://www.modsecurity.org The ModSecurity-nginx connector is the connection point between NGINX and libmodsecurity (ModSecurity v3). The default ModSecurity configuration file is located in /etc/nginx/modsecurity/modsecurity.conf . This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration. To enable the ModSecurity feature we need to specify enable-modsecurity: \"true\" in the configuration ConfigMap. Note: the default configuration uses detection only, because that minimizes the chances of post-installation disruption. The file /var/log/modsec_audit.log contains the ModSecurity log. The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts. The directory /etc/nginx/owasp-modsecurity-crs contains the owasp-modsecurity-crs repository . Using enable-owasp-modsecurity-crs: \"true\" we enable the use of the rules.","title":"ModSecurity Web Application Firewall"},{"location":"user-guide/third-party-addons/opentracing/","text":"OpenTracing \u00b6 Enables distributed tracing of requests served by NGINX via The OpenTracing Project. Using the third party module opentracing-contrib/nginx-opentracing the NGINX ingress controller can configure NGINX to enable OpenTracing instrumentation. By default this feature is disabled. Usage \u00b6 To enable the instrumentation we must enable OpenTracing in the configuration ConfigMap: data : enable-opentracing : \"true\" We must also set the host to use when uploading traces: zipkin-collector-host: zipkin.default.svc.cluster.local jaeger-collector-host: jaeger-agent.default.svc.cluster.local datadog-collector-host: datadog-agent.default.svc.cluster.local NOTE: While the option is called jaeger-collector-host , you will need to point this to a jaeger-agent , and not the jaeger-collector component. Next you will need to deploy a distributed tracing system which uses OpenTracing. Zipkin, Jaeger, and Datadog have been tested. Other optional configuration options: # specifies the port to use when uploading traces, Default: 9411 zipkin-collector-port # specifies the service name to use for any traces created, Default: nginx zipkin-service-name # specifies sample rate for any traces created, Default: 1.0 zipkin-sample-rate # specifies the port to use when uploading traces, Default: 6831 jaeger-collector-port # specifies the service name to use for any traces created, Default: nginx jaeger-service-name # specifies the sampler to be used when sampling traces. # The available samplers are: const, probabilistic, ratelimiting, remote, Default: const jaeger-sampler-type # specifies the argument to be passed to the sampler constructor, Default: 1 jaeger-sampler-param # specifies the port to use when uploading traces, Default 8126 datadog-collector-port # specifies the service name to use for any traces created, Default: nginx datadog-service-name # specifies the operation name to use for any traces collected, Default: nginx.handle datadog-operation-name-override All these options (including host) allow environment variables, such as $HOSTNAME or $HOST_IP . In the case of Jaeger, if you have a Jaeger agent running on each machine in your cluster, you can use something like $HOST_IP (which can be 'mounted' with the status.hostIP fieldpath, as described here ) to make sure traces will be sent to the local agent. Examples \u00b6 The following examples show how to deploy and test different distributed tracing systems. These examples can be performed using Minikube. Zipkin \u00b6 The rnburn/zipkin-date-server GitHub repository contains an example of a dockerized date service. 
To install the example and Zipkin collector run: kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/zipkin.yaml kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/deployment.yaml We also need to configure the NGINX controller ConfigMap with the required values: $ echo ' apiVersion: v1 kind: ConfigMap data: enable-opentracing: \"true\" zipkin-collector-host: zipkin.default.svc.cluster.local metadata: name: nginx-configuration namespace: kube-system ' | kubectl replace -f - In the Zipkin interface we can see the details. Jaeger \u00b6 Enable the Ingress addon in Minikube: $ minikube addons enable ingress Add the Minikube IP to /etc/hosts: $ echo \"$(minikube ip) example.com\" | sudo tee -a /etc/hosts Apply a basic Service and Ingress Resource: # Create Echoheaders Deployment $ kubectl run echoheaders --image=k8s.gcr.io/echoserver:1.4 --replicas=1 --port=8080 # Expose as a Cluster-IP $ kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x # Apply the Ingress Resource $ echo ' apiVersion: extensions/v1beta1 kind: Ingress metadata: name: echo-ingress spec: rules: - host: example.com http: paths: - backend: serviceName: echoheaders-x servicePort: 80 path: /echo ' | kubectl apply -f - Enable OpenTracing and set the jaeger-collector-host: $ echo ' apiVersion: v1 kind: ConfigMap data: enable-opentracing: \"true\" jaeger-collector-host: jaeger-agent.default.svc.cluster.local metadata: name: nginx-configuration namespace: kube-system ' | kubectl replace -f - Apply the Jaeger All-In-One Template: $ kubectl apply -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/all-in-one/jaeger-all-in-one-template.yml Make a few requests to the Service: $ curl example.com/echo -d \"meow\" CLIENT VALUES: client_address = 172.17.0.5 command = POST real path = /echo query = nil request_version = 1.1 request_uri = http://example.com:8080/echo SERVER VALUES: server_version = nginx: 1.10.0 - lua: 10001 HEADERS RECEIVED: accept = */* connection = close content-length = 4 content-type = application/x-www-form-urlencoded host = example.com user-agent = curl/7.54.0 x-forwarded-for = 192.168.99.1 x-forwarded-host = example.com x-forwarded-port = 80 x-forwarded-proto = http x-original-uri = /echo x-real-ip = 192.168.99.1 x-scheme = http BODY: meow View the Jaeger UI: $ minikube service jaeger-query --url http://192.168.99.100:30183 In the Jaeger interface we can see the details.","title":"OpenTracing"},{"location":"user-guide/third-party-addons/opentracing/#opentracing","text":"Enables distributed tracing of requests served by NGINX via The OpenTracing Project. Using the third party module opentracing-contrib/nginx-opentracing the NGINX ingress controller can configure NGINX to enable OpenTracing instrumentation. By default this feature is disabled.","title":"OpenTracing"},{"location":"user-guide/third-party-addons/opentracing/#usage","text":"To enable the instrumentation we must enable OpenTracing in the configuration ConfigMap: data : enable-opentracing : \"true\" We must also set the host to use when uploading traces: zipkin-collector-host: zipkin.default.svc.cluster.local jaeger-collector-host: jaeger-agent.default.svc.cluster.local datadog-collector-host: datadog-agent.default.svc.cluster.local NOTE: While the option is called jaeger-collector-host , you will need to point this to a jaeger-agent , and not the jaeger-collector component. 
Jaeger

Enable the Ingress addon in Minikube:

$ minikube addons enable ingress

Add the Minikube IP to /etc/hosts:

$ echo "$(minikube ip) example.com" | sudo tee -a /etc/hosts

Apply a basic Service and Ingress resource:

# Create the echoheaders Deployment
$ kubectl run echoheaders --image=k8s.gcr.io/echoserver:1.4 --replicas=1 --port=8080

# Expose it as a ClusterIP Service
$ kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x

# Apply the Ingress resource
$ echo '
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: echoheaders-x
          servicePort: 80
        path: /echo
' | kubectl apply -f -

Enable OpenTracing and set the jaeger-collector-host:

$ echo '
apiVersion: v1
kind: ConfigMap
data:
  enable-opentracing: "true"
  jaeger-collector-host: jaeger-agent.default.svc.cluster.local
metadata:
  name: nginx-configuration
  namespace: kube-system
' | kubectl replace -f -

Apply the Jaeger all-in-one template:

$ kubectl apply -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/all-in-one/jaeger-all-in-one-template.yml

Make a few requests to the Service:

$ curl example.com/echo -d "meow"

CLIENT VALUES:
client_address=172.17.0.5
command=POST
real path=/echo
query=nil
request_version=1.1
request_uri=http://example.com:8080/echo

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
connection=close
content-length=4
content-type=application/x-www-form-urlencoded
host=example.com
user-agent=curl/7.54.0
x-forwarded-for=192.168.99.1
x-forwarded-host=example.com
x-forwarded-port=80
x-forwarded-proto=http
x-original-uri=/echo
x-real-ip=192.168.99.1
x-scheme=http
BODY:
meow

View the Jaeger UI:

$ minikube service jaeger-query --url
http://192.168.99.100:30183

In the Jaeger interface we can see the details of the traced requests.
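Datadog was also tested, although no dedicated walkthrough is given here. A minimal sketch of the equivalent ConfigMap, assuming a Datadog agent reachable at the Service address below (the optional keys are shown with their documented defaults):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: kube-system
data:
  enable-opentracing: "true"
  datadog-collector-host: datadog-agent.default.svc.cluster.local
  # optional, shown with their documented defaults
  datadog-collector-port: "8126"
  datadog-service-name: "nginx"
  datadog-operation-name-override: "nginx.handle"
```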
\ No newline at end of file
diff --git a/sitemap.xml b/sitemap.xml
index 98c22ff1e..bd9f3f25d 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -2,232 +2,232 @@
 None
- 2019-03-18
+ 2019-03-28
 daily
\ No newline at end of file
diff --git a/sitemap.xml.gz b/sitemap.xml.gz
index 3abbe50e3..11274a24a 100644
Binary files a/sitemap.xml.gz and b/sitemap.xml.gz differ
diff --git a/user-guide/custom-errors/index.html b/user-guide/custom-errors/index.html
index ad4176250..9280be37a 100644
--- a/user-guide/custom-errors/index.html
+++ b/user-guide/custom-errors/index.html
@@ -1152,6 +1152,10 @@ that it passes several HTTP headers down to its default
 X-Service-Port
 Port number of the Service backing the backend
+
+X-Request-ID
+Unique ID that identifies the request - the same ID that is sent to the backend service
+

A custom error backend can use this information to return the best possible representation of an error page. For