See Deployment for a whirlwind tour that will get you started.
"},{"location":"e2e-tests/","title":"E2e tests","text":""},{"location":"e2e-tests/#e2e-test-suite-for-ingress-nginx-controller","title":"e2e test suite for Ingress NGINX Controller","text":""},{"location":"e2e-tests/#admission-admission-controller","title":"[Admission] admission controller","text":"
should not allow overlaps of host and paths without canary annotations
should allow overlaps of host and paths with canary annotation
should block ingress with invalid path
should return an error if there is an error validating the ingress definition
should return an error if there is an invalid value in some annotation
should return an error if there is a forbidden value in some annotation
should return an error if there is an invalid path and wrong pathType is set
should not return an error if the Ingress V1 definition is valid with Ingress Class
should not return an error if the Ingress V1 definition is valid with IngressClass annotation
should return an error if the Ingress V1 definition contains invalid annotations
should not return an error for an invalid Ingress when it has unknown class
should apply the annotation to the default backend
"},{"location":"e2e-tests/#disable-leader-routing-works-when-leader-election-was-disabled","title":"[Disable Leader] Routing works when leader election was disabled","text":"
should create multiple ingress routings rules when leader election has disabled
"},{"location":"e2e-tests/#endpointslices-long-service-name","title":"[Endpointslices] long service name","text":"
should return 200 when service name has max allowed number of characters 63
should reload after an update in the configuration
"},{"location":"faq/","title":"FAQ","text":""},{"location":"faq/#multiple-controller-in-one-cluster","title":"Multiple controller in one cluster","text":"
Question - How can I easily install multiple instances of the ingress-nginx controller in the same cluster?
You can install them in different namespaces.
Create a new namespace
kubectl create namespace ingress-nginx-2\n
Use Helm to install the additional instance of the ingress controller
Ensure you have Helm working (refer to the Helm documentation)
We assume that you already have the helm repo for the ingress-nginx controller added to your Helm config. If you have not added it yet, you can do so like this:
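helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx\nhelm repo update\n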
If you need to install yet another instance, then repeat the procedure to create a new namespace, change the values such as names & namespaces (for example from \"-2\" to \"-3\"), or anything else that meets your needs.
Note that controller.ingressClassResource.name and controller.ingressClass have to be set correctly. The former creates the IngressClass object, while the latter modifies the deployment of the actual ingress controller pod.
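For example, a minimal sketch of installing a second instance into the new namespace (the release name and the class name \"nginx-2\" are illustrative; adjust them to your needs):
helm install ingress-nginx-2 ingress-nginx/ingress-nginx --namespace ingress-nginx-2 --set controller.ingressClassResource.name=nginx-2 --set controller.ingressClass=nginx-2 --set controller.ingressClassResource.controllerValue=k8s.io/ingress-nginx-2\n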
"},{"location":"faq/#i-cant-use-multiple-namespaces-what-should-i-do","title":"I can't use multiple namespaces, what should I do?","text":"
If you need to install all instances in the same namespace, then you need to specify a different election ID for each of them, like this:
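# a sketch; the election ID and class names are illustrative\nhelm install ingress-nginx-2 ingress-nginx/ingress-nginx --namespace ingress-nginx --set controller.electionID=nginx-2-leader --set controller.ingressClassResource.name=nginx-2 --set controller.ingressClass=nginx-2\n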
Question - How do I obtain the real client IP address?
The go-to solution for retaining the real client IP address is to enable PROXY protocol.
PROXY protocol has to be enabled on both the Ingress NGINX controller and the L4 load balancer in front of it.
The real client IP address is lost by default when traffic is forwarded over the network, but enabling PROXY protocol ensures that the connection details are retained and the real client IP address does not get lost.
Enabling proxy-protocol on the controller is documented here .
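As a minimal sketch (assuming the default ConfigMap name and namespace from a Helm install; adjust them to your installation), the relevant ConfigMap key is use-proxy-protocol:
apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: ingress-nginx-controller\n  namespace: ingress-nginx\ndata:\n  use-proxy-protocol: \"true\"\n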
For enabling proxy-protocol on the LoadBalancer, please refer to the documentation of your infrastructure provider because that is where the LB is provisioned.
Some more info is available here
Some more info on proxy-protocol is available here
"},{"location":"faq/#client-ipaddress-on-single-node-cluster","title":"client-ipaddress on single-node cluster","text":"
Single-node clusters are created for dev & test use with tools like \"kind\" or \"minikube\". A trick to simulate a real network with these clusters is to install MetalLB and configure the IP address of the kind container or the minikube VM/container as the start and end of the pool for MetalLB in L2 mode. The host IP then becomes a real client IP address for curl requests sent from the host.
After installing the ingress-nginx controller on a kind or minikube cluster with Helm, you can configure it for the real client IP with a simple change to the service that the controller creates. The service object of --type LoadBalancer has a field service.spec.externalTrafficPolicy. If you set the value of this field to \"Local\", the real IP address of a client is visible to the controller.
% kubectl explain service.spec.externalTrafficPolicy\nKIND: Service\nVERSION: v1\n\nFIELD: externalTrafficPolicy <string>\n\nDESCRIPTION:\n externalTrafficPolicy describes how nodes distribute service traffic they\n receive on one of the Service's \"externally-facing\" addresses (NodePorts,\n ExternalIPs, and LoadBalancer IPs). If set to \"Local\", the proxy will\n configure the service in a way that assumes that external load balancers\n will take care of balancing the service traffic between nodes, and so each\n node will deliver traffic only to the node-local endpoints of the service,\n without masquerading the client source IP. (Traffic mistakenly sent to a\n node with no endpoints will be dropped.) The default value, \"Cluster\", uses\n the standard behavior of routing to all endpoints evenly (possibly modified\n by topology and other features). Note that traffic sent to an External IP or\n LoadBalancer IP from within the cluster will always get \"Cluster\" semantics,\n but clients sending to a NodePort from within the cluster may need to take\n traffic policy into account when picking a node.\n\n Possible enum values:\n - `\"Cluster\"` routes traffic to all endpoints.\n - `\"Local\"` preserves the source IP of the traffic by routing only to\n endpoints on the same node as the traffic was received on (dropping the\n traffic if there are no local endpoints).\n
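For example, assuming the default service name from a Helm install, this field can be patched in place:
kubectl -n ingress-nginx patch svc ingress-nginx-controller -p '{\"spec\":{\"externalTrafficPolicy\":\"Local\"}}'\n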
The solution is to get the real client IP address from the \"X-Forwarded-For\" HTTP header.
Example: If your application pod behind the Ingress NGINX controller uses the NGINX webserver and the reverse proxy inside it, then you can do the following to preserve the remote client IP.
First you need to make sure that the X-Forwarded-For header reaches the backend pod. This is done by using an Ingress NGINX controller ConfigMap key. It is documented here.
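The key in question is use-forwarded-headers; a minimal sketch of the relevant ConfigMap data:
data:\n  use-forwarded-headers: \"true\"\n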
Next, edit nginx.conf file inside your app pod, to contain the directives shown below:
set_real_ip_from 0.0.0.0/0; # Trust all IPs (use your VPC CIDR block in production)\nreal_ip_header X-Forwarded-For;\nreal_ip_recursive on;\n\nlog_format main '$remote_addr - $remote_user [$time_local] \"$request\" '\n '$status $body_bytes_sent \"$http_referer\" '\n '\"$http_user_agent\" '\n 'host=$host x-forwarded-for=$http_x_forwarded_for';\n\naccess_log /var/log/nginx/access.log main;\n
If you are using Ingress objects in your cluster (running Kubernetes older than version 1.22), and you plan to upgrade your Kubernetes version to K8S 1.22 or above, then please read the migration guide here.
"},{"location":"faq/#validation-of-path","title":"Validation Of path","text":"
To improve security, and to follow the desired standards of the Kubernetes API spec, the next release, scheduled for v1.8.0, will include a new, optional feature of validating the value of the key ingress.spec.rules.http.paths.path.
This behavior will be disabled by default on the 1.8.0 release and enabled by default on the next breaking change release, set for 2.0.0.
When \"ingress.spec.rules.http.pathType=Exact\" or \"pathType=Prefix\", this validation will limit the characters accepted on the field \"ingress.spec.rules.http.paths.path\", to \"alphanumeric characters\", and \"/,\" \"_,\" \"-.\" Also, in this case, the path should start with \"/.\"
When the ingress resource path contains other characters (like on rewrite configurations), the pathType value should be \"ImplementationSpecific\".
API Spec on pathType is documented here
When this option is enabled, the validation happens in the Admission Webhook. So if any new ingress object contains characters other than alphanumeric characters, \"/\", \"_\" and \"-\" in the path field, but is not using the pathType value ImplementationSpecific, then the ingress object will be denied admission.
The cluster admin should establish validation rules using mechanisms like \"Open Policy Agent\" to validate that only authorized users can use the ImplementationSpecific pathType and that only the authorized characters can be used. The configmap value is here
A complete example of an Open Policy Agent Gatekeeper rule is available here
If you have any issues or concerns, please do one of the following:
Open a GitHub issue
Comment in our Dev Slack Channel
Open a thread in our Google Group ingress-nginx-dev@kubernetes.io
"},{"location":"faq/#why-is-chunking-not-working-since-controller-v110","title":"Why is chunking not working since controller v1.10 ?","text":"
If your code sets the HTTP header \"Transfer-Encoding: chunked\" and the controller log messages show an error about a duplicate header, it is because of this change: http://hg.nginx.org/nginx/rev/2bf7792c262e
More details are available in this issue https://github.com/kubernetes/ingress-nginx/issues/11162
"},{"location":"how-it-works/","title":"How it works","text":"
The objective of this document is to explain how the Ingress-NGINX controller works, in particular how the NGINX model is built and why we need one.
The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload NGINX on changes that impact only an upstream configuration (i.e. an Endpoints change when you deploy your app). We use lua-nginx-module to achieve this. Check below to learn more about how it's done.
Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. For this purpose, we build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and ConfigMaps, to generate a point-in-time configuration file that reflects the state of the cluster.
To get these objects from the cluster, we use Kubernetes Informers, in particular FilteredSharedInformer. These informers allow reacting to changes using callbacks when an object is added, modified or removed. Unfortunately, there is no way to know whether a particular change is going to affect the final configuration file. Therefore on every change we have to rebuild a new model from scratch, based on the state of the cluster, and compare it to the current model. If the new model equals the current one, we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check whether the difference concerns only Endpoints. If so, we send the new list of Endpoints to a Lua handler running inside NGINX using an HTTP POST request and, again, avoid generating a new NGINX configuration and triggering a reload. If the difference between the running and the new model is about more than just Endpoints, we create a new NGINX configuration based on the new model, replace the current model and trigger a reload.
One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions.
The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.
"},{"location":"how-it-works/#building-the-nginx-model","title":"Building the NGINX model","text":"
Building a model is an expensive operation; for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to not lose changes and to avoid the use of sync.Mutex to force a single execution of the sync loop; additionally, it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller; this is one of the reasons for the work queue.
Operations to build the model:
Order Ingress rules by CreationTimestamp field, i.e., old rules first.
If the same path for the same host is defined in more than one Ingress, the oldest rule wins.
If more than one Ingress contains a TLS section for the same host, the oldest rule wins.
If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins.
Create a list of NGINX Servers (per hostname)
Create a list of NGINX Upstreams
If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions.
Annotations are applied to all the paths in the Ingress.
Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.
"},{"location":"how-it-works/#when-a-reload-is-required","title":"When a reload is required","text":"
The next list describes the scenarios when a reload is required:
New Ingress Resource Created.
TLS section is added to existing Ingress.
Change in Ingress annotations that impacts more than just upstream configuration. For instance load-balance annotation does not require a reload.
A path is added/removed from an Ingress.
An Ingress, Service or Secret is removed.
A missing object referenced from the Ingress becomes available, like a Service or Secret.
In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point would make no sense. That could change only if NGINX changed the way new configurations are read, so that, basically, new changes did not replace worker processes.
"},{"location":"how-it-works/#avoiding-reloads-on-endpoints-changes","title":"Avoiding reloads on Endpoints changes","text":"
On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside NGINX. The Lua code in turn stores those backends in a shared memory zone. Then, for every request, the Lua code running in the balancer_by_lua context detects which endpoints it should choose the upstream peer from and applies the configured load balancing algorithm to choose the peer. Then NGINX takes care of the rest. This way we avoid reloading NGINX on endpoint changes. Note that this also covers annotation changes that affect only the upstream configuration in NGINX.
In a relatively big cluster with frequently deploying apps this feature saves a significant number of NGINX reloads, which can otherwise affect response latency, load balancing quality (after every reload NGINX resets the state of load balancing) and so on.
"},{"location":"how-it-works/#avoiding-outage-from-wrong-configuration","title":"Avoiding outage from wrong configuration","text":"
Because the ingress controller works using the synchronization loop pattern, it applies the configuration for all matching objects. In case some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, NGINX does not reload, and hence no more ingresses will be taken into account.
To prevent this situation from happening, the Ingress-Nginx Controller optionally exposes a validating admission webhook server to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress object to the list of ingresses, generates the configuration and calls nginx to ensure the configuration has no syntax errors.
To make sure the plugin is properly installed and to get a list of commands, run:
kubectl ingress-nginx --help\nA kubectl plugin for inspecting your ingress-nginx deployments\n\nUsage:\n ingress-nginx [command]\n\nAvailable Commands:\n backends Inspect the dynamic backend information of an ingress-nginx instance\n certs Output the certificate data stored in an ingress-nginx pod\n conf Inspect the generated nginx.conf\n exec Execute a command inside an ingress-nginx pod\n general Inspect the other dynamic ingress-nginx information\n help Help about any command\n info Show information about the ingress-nginx service\n ingresses Provide a short summary of all of the ingress definitions\n lint Inspect kubernetes resources for possible issues\n logs Get the kubernetes logs for an ingress-nginx pod\n ssh ssh into a running ingress-nginx pod\n\nFlags:\n --as string Username to impersonate for the operation\n --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n --cache-dir string Default HTTP cache directory (default \"/Users/alexkursell/.kube/http-cache\")\n --certificate-authority string Path to a cert file for the certificate authority\n --client-certificate string Path to a client certificate file for TLS\n --client-key string Path to a client key file for TLS\n --cluster string The name of the kubeconfig cluster to use\n --context string The name of the kubeconfig context to use\n -h, --help help for ingress-nginx\n --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n --kubeconfig string Path to the kubeconfig file to use for CLI requests.\n -n, --namespace string If present, the namespace scope for this CLI request\n --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\")\n -s, --server string The address and port of the Kubernetes API server\n --token string Bearer token for authentication to the API server\n --user string The name of the kubeconfig user to use\n\nUse \"ingress-nginx [command] --help\" for more information about a command.\n
Every subcommand supports the basic kubectl configuration flags like --namespace, --context, --client-key and so on.
Subcommands that act on a particular ingress-nginx pod (backends, certs, conf, exec, general, logs, ssh), support the --deployment <deployment>, --pod <pod>, and --container <container> flags to select either a pod from a deployment with the given name, or a pod with the given name (and the given container name). The --deployment flag defaults to ingress-nginx-controller, and the --container flag defaults to controller.
Subcommands that inspect resources (ingresses, lint) support the --all-namespaces flag, which causes them to inspect resources in every namespace.
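For example:
kubectl ingress-nginx ingresses --all-namespaces\n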
Use kubectl ingress-nginx conf to dump the generated nginx.conf file. Add the --host <hostname> option to view only the server block for that host:
kubectl ingress-nginx conf -n ingress-nginx --host testaddr.local\n\n server {\n server_name testaddr.local ;\n\n listen 80;\n\n set $proxy_upstream_name \"-\";\n set $pass_access_scheme $scheme;\n set $pass_server_port $server_port;\n set $best_http_host $http_host;\n set $pass_port $pass_server_port;\n\n location / {\n\n set $namespace \"\";\n set $ingress_name \"\";\n set $service_name \"\";\n set $service_port \"0\";\n set $location_path \"/\";\n\n...\n
kubectl ingress-nginx exec is exactly the same as kubectl exec, with the same command flags. It will automatically choose an ingress-nginx pod to run the command in.
$ kubectl ingress-nginx exec -i -n ingress-nginx -- ls /etc/nginx\nfastcgi_params\ngeoip\nlua\nmime.types\nmodsecurity\nmodules\nnginx.conf\nopentracing.json\nopentelemetry.toml\nowasp-modsecurity-crs\ntemplate\n
kubectl ingress-nginx lint can check a namespace or entire cluster for potential configuration issues. This command is especially useful when upgrading between ingress-nginx versions.
$ kubectl ingress-nginx lint --all-namespaces --verbose\nChecking ingresses...\n\u2717 anamespace/this-nginx\n - Contains the removed session-cookie-hash annotation.\n Lint added for version 0.24.0\n https://github.com/kubernetes/ingress-nginx/issues/3743\n\u2717 othernamespace/ingress-definition-blah\n - The rewrite-target annotation value does not reference a capture group\n Lint added for version 0.22.0\n https://github.com/kubernetes/ingress-nginx/issues/3174\n\nChecking deployments...\n\u2717 namespace2/ingress-nginx-controller\n - Uses removed config flag --sort-backends\n Lint added for version 0.22.0\n https://github.com/kubernetes/ingress-nginx/issues/3655\n - Uses removed config flag --enable-dynamic-certificates\n Lint added for version 0.24.0\n https://github.com/kubernetes/ingress-nginx/issues/3808\n
To show the lints added only for a particular ingress-nginx release, use the --from-version and --to-version flags:
$ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0.24.0 --to-version 0.24.0\nChecking ingresses...\n\u2717 anamespace/this-nginx\n - Contains the removed session-cookie-hash annotation.\n Lint added for version 0.24.0\n https://github.com/kubernetes/ingress-nginx/issues/3743\n\nChecking deployments...\n\u2717 namespace2/ingress-nginx-controller\n - Uses removed config flag --enable-dynamic-certificates\n Lint added for version 0.24.0\n https://github.com/kubernetes/ingress-nginx/issues/3808\n
kubectl ingress-nginx ssh is exactly the same as kubectl ingress-nginx exec -it -- /bin/bash. Use it when you want to quickly be dropped into a shell inside a running ingress-nginx container.
"},{"location":"lua_tests/","title":"Lua Tests","text":""},{"location":"lua_tests/#running-the-lua-tests","title":"Running the Lua Tests","text":"
To run the Lua tests you can run the following from the root directory:
make lua-test\n
This command makes use of Docker, so it does not need any dependency installations besides Docker.
"},{"location":"lua_tests/#where-are-the-lua-tests","title":"Where are the Lua Tests?","text":"
Lua Tests can be found in the rootfs/etc/nginx/lua/test directory
"},{"location":"troubleshooting/","title":"Troubleshooting","text":""},{"location":"troubleshooting/#troubleshooting","title":"Troubleshooting","text":""},{"location":"troubleshooting/#ingress-controller-logs-and-events","title":"Ingress-Controller Logs and Events","text":"
There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information.
"},{"location":"troubleshooting/#check-the-ingress-resource-events","title":"Check the Ingress Resource Events","text":"
$ kubectl get ing -n <namespace-of-ingress-resource>\nNAME HOSTS ADDRESS PORTS AGE\ncafe-ingress cafe.com 10.0.2.15 80 25s\n\n$ kubectl describe ing <ingress-resource-name> -n <namespace-of-ingress-resource>\nName: cafe-ingress\nNamespace: default\nAddress: 10.0.2.15\nDefault backend: default-http-backend:80 (172.17.0.5:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n cafe.com\n /tea tea-svc:80 (<none>)\n /coffee coffee-svc:80 (<none>)\nAnnotations:\n kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"/apis/networking/v1/namespaces/default/ingresses/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}}\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal CREATE 1m ingress-nginx-controller Ingress default/cafe-ingress\n Normal UPDATE 58s ingress-nginx-controller Ingress default/cafe-ingress\n
"},{"location":"troubleshooting/#check-the-ingress-controller-logs","title":"Check the Ingress Controller Logs","text":"
Using the flag --v=XX it is possible to increase the level of logging. This is performed by editing the deployment.
$ kubectl get deploy -n <namespace-of-ingress-controller>\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\ndefault-http-backend 1 1 1 1 35m\ningress-nginx-controller 1 1 1 1 35m\n\n$ kubectl edit deploy -n <namespace-of-ingress-controller> ingress-nginx-controller\n# Add --v=X to \"- args\", where X is an integer\n
--v=2 shows details using diff about the changes in the configuration in nginx
--v=3 shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format
--v=5 configures NGINX in debug mode
"},{"location":"troubleshooting/#authentication-to-the-kubernetes-api-server","title":"Authentication to the Kubernetes API Server","text":"
A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file.
The Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in a couple of ways:
Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details.
Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. Using --kubeconfig does not require the flag --apiserver-host. The format of the file is identical to ~/.kube/config, which is used by kubectl to connect to the API server. See the 'kubeconfig' section for details.
Using the flag --apiserver-host: With the flag --apiserver-host=http://localhost:8080 it is possible to specify an unsecured API server or reach a remote Kubernetes cluster using kubectl proxy. Please do not use this approach in production.
In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left hand side.
If using a service account to connect to the API server, the ingress-controller expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server.
Verify with the following commands:
# start a container that contains curl\n$ kubectl run -it --rm test --image=curlimages/curl --restart=Never -- /bin/sh\n\n# check if secret exists\n/ $ ls /var/run/secrets/kubernetes.io/serviceaccount/\nca.crt namespace token\n/ $\n\n# check base connectivity from cluster inside\n/ $ curl -k https://kubernetes.default.svc.cluster.local\n{\n \"kind\": \"Status\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n\n },\n \"status\": \"Failure\",\n \"message\": \"forbidden: User \\\"system:anonymous\\\" cannot get path \\\"/\\\"\",\n \"reason\": \"Forbidden\",\n \"details\": {\n\n },\n \"code\": 403\n}/ $\n\n# connect using tokens\n}/ $ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H \"Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://kubernetes.default.svc.cluster.local\n&& echo\n{\n \"paths\": [\n \"/api\",\n \"/api/v1\",\n \"/apis\",\n \"/apis/\",\n ... TRUNCATED\n \"/readyz/shutdown\",\n \"/version\"\n ]\n}\n/ $\n\n# when you type `exit` or `^D` the test pod will be deleted.\n
If it is not working, there are two possible reasons:
The contents of the tokens are invalid. Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret <name>. It will automatically be recreated.
You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter.
Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers.
More information:
User Guide: Service Accounts
Cluster Administrator Guide: Managing Service Accounts
If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment.
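A sketch of what that args section could look like (the container name and the remaining flags depend on your installation):
containers:\n  - name: controller\n    args:\n      - /nginx-ingress-controller\n      - --kubeconfig=/etc/kubernetes/kubeconfig.yaml\n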
"},{"location":"troubleshooting/#using-gdb-with-nginx","title":"Using GDB with Nginx","text":"
GDB can be used with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations.
Note: The below is based on the nginx documentation.
SSH into the worker
$ ssh user@workerIP\n
Obtain the Docker Container Running nginx
$ docker ps | grep ingress-nginx-controller\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nd9e1d243156a registry.k8s.io/ingress-nginx/controller \"/usr/bin/dumb-init \u2026\" 19 minutes ago Up 19 minutes k8s_ingress-nginx-controller_ingress-nginx-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0\n
"},{"location":"troubleshooting/#image-related-issues-faced-on-nginx-425-or-other-versions-helm-chart-versions","title":"Image related issues faced on Nginx 4.2.5 or other versions (Helm chart versions)","text":"
In case you face the below error while installing NGINX using the helm chart (either by helm commands or the helm_release Terraform provider):
Warning Failed 5m5s (x4 over 6m34s) kubelet Failed to pull image \"registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to resolve reference \"registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to do request: Head \"https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": EOF\n
Then please follow the below steps.
During troubleshooting, you can also execute the below commands to test connectivity from your local machine and check the repository details:
a. curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null
(\u2388 |myprompt)\u279c ~ curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\n (\u2388 |myprompt)\u279c ~\n
b. curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
Redirection is implemented in the registry proxy to ensure that the images can be pulled.
The recommended solution is to allowlist the below image repositories:
*.appspot.com \n*.k8s.io \n*.pkg.dev\n*.gcr.io\n
More details about the above repos: a. *.k8s.io -> to ensure you can pull any images from registry.k8s.io b. *.gcr.io -> GCP services are used for image hosting; this is part of the domains suggested by GCP to allow, ensuring users can pull images from their container registry services c. *.appspot.com -> this is a Google domain, part of the domains used for GCR.
"},{"location":"troubleshooting/#unable-to-listen-on-port-80443","title":"Unable to listen on port (80/443)","text":"
One possible reason for this error is lack of permission to bind to the port. Ports 80, 443, and any other port < 1024 are Linux privileged ports which historically could only be bound by root. The ingress-nginx-controller uses the CAP_NET_BIND_SERVICE linux capability to allow binding these ports as a normal user (www-data / 101). This involves two components: 1. In the image, the /nginx-ingress-controller file has the cap_net_bind_service capability added (e.g. via setcap) 2. The NET_BIND_SERVICE capability is added to the container in the containerSecurityContext of the deployment.
If encountering this on one/some node(s) and not on others, try to purge and pull a fresh copy of the image to the affected node(s), in case there has been corruption of the underlying layers to lose the capability on the executable.
"},{"location":"troubleshooting/#create-a-test-pod","title":"Create a test pod","text":"
The /nginx-ingress-controller process exits/crashes when encountering this error, making it difficult to troubleshoot what is happening inside the container. To get around this, start an equivalent container running \"sleep 3600\", and exec into it for further troubleshooting. For example:
* update the namespace if applicable/desired * replace ##_NODE_NAME_## with the problematic node (or remove nodeSelector section if problem is not confined to one node) * replace ##_CONTROLLER_IMAGE_## with the same image as in use by your ingress-nginx deployment * confirm the securityContext section matches what is in place for ingress-nginx-controller pods in your cluster
Apply the YAML and open a shell into the pod. Try to manually run the controller process:
$ /nginx-ingress-controller\n
You should get the same error as from the ingress controller pod logs.
Confirm the capabilities are properly surfacing into the pod:
The above value has only net_bind_service enabled (per the security context in the YAML, which adds that capability and drops all others). If you get a different value, you can decode it on another Linux box (capsh is not available in this container) like below, and then figure out why the specified capabilities are not propagating into the pod/container.
"},{"location":"troubleshooting/#create-a-test-pod-as-root","title":"Create a test pod as root","text":"
(Note: this may be restricted by PodSecurityPolicy, PodSecurityAdmission/Standards, OPA Gatekeeper, etc., in which case you will need an appropriate workaround for testing, e.g. deploying in a new namespace without the restrictions.) To test further you may want to install additional utilities, etc. Modify the pod YAML by: * changing runAsUser from 101 to 0 * removing the \"drop..ALL\" section from the capabilities.
Some things to try after shelling into this container:
Try running the controller as the www-data (101) user:
Examine the errors to see if there is still an issue listening on the port or if it passed that and moved on to other expected errors due to running out of context.
Install the libcap package and check capabilities on the file:
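# a sketch, assuming the Alpine-based controller image\n$ apk add libcap\n$ getcap /nginx-ingress-controller\n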
There are multiple ways to install the Ingress-Nginx Controller:
with Helm, using the project repository chart;
with kubectl apply, using YAML manifests;
with specific addons (e.g. for minikube or MicroK8s).
On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to get started as fast as possible, you can check the quick start instructions. However, in many environments, you can improve the performance or get better logs by enabling extra features. We recommend that you check the environment-specific instructions for details about optimizing the ingress controller for your particular environment or cloud provider.
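If you have Helm, you can deploy the ingress controller with the following command:
helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace\n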
It will install the controller in the ingress-nginx namespace, creating that namespace if it doesn't already exist.
Info
This command is idempotent:
if the ingress controller is not installed, it will install it,
if the ingress controller is already installed, it will upgrade it.
If you want a full list of values that you can set, while installing with Helm, then run:
helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx\n
Helm install on AWS/GCP/Azure/Other providers
The ingress-nginx-controller helm chart is a generic install out of the box. The default set of helm values is not configured for installation on any particular infra provider. The annotations applicable to your cloud provider must be customized by you. See AWS LB Controller. Examples of some annotations needed for the service resource of --type LoadBalancer on AWS are below:
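For illustration, a hedged sketch of Helm values carrying such annotations (consult the AWS Load Balancer Controller documentation for the authoritative set for your setup):
controller:\n  service:\n    annotations:\n      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing\n      service.beta.kubernetes.io/aws-load-balancer-type: external\n      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance\n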
The YAML manifest in the command above was generated with helm template, so you will end up with almost the same resources as if you had used Helm to install the controller.
Attention
If you are running an old version of Kubernetes (1.18 or earlier), please read this paragraph for specific instructions. Because of api deprecations, the default manifest may not work on your cluster. Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider.
To check which ports are used by your installation of ingress-nginx, look at the output of kubectl -n ingress-nginx get pod -o yaml. In general, you need:
Port 8443 open between all hosts on which the kubernetes nodes are running. This is used for the ingress-nginx admission controller.
Port 80 (for HTTP) and/or 443 (for HTTPS) open to the public on the kubernetes nodes to which the DNS of your apps are pointing.
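For local testing, you can forward a local port to the controller's service:
kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80\n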
A note on DNS & network connection: this documentation assumes that the user is aware of the DNS and network-routing aspects involved in using ingress. The port-forwarding mentioned above is the easiest way to demo a working ingress. The \"kubectl port-forward ...\" command above forwards port 8080 on the localhost's TCP/IP stack (where the command was typed) to port 80 of the service created by the installation of the ingress-nginx controller. Traffic sent to port 8080 on localhost therefore reaches port 80 of the ingress controller's service. Port-forwarding is not for production use; here it simulates an HTTP request originating from outside the cluster and reaching the service of the ingress-nginx controller, which is exposed to receive traffic from outside the cluster.
This issue shows a typical DNS problem and its solution.
At this point, you can access your deployment using curl:
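# hostname illustrative; use a host configured in one of your Ingresses\ncurl --resolve demo.localdev.me:8080:127.0.0.1 http://demo.localdev.me:8080\n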
If your Kubernetes cluster is a \"real\" cluster that supports services of type LoadBalancer, it will have allocated an external IP address or FQDN to the ingress controller.
You can see that IP address or FQDN with the following command:
kubectl get service ingress-nginx-controller --namespace=ingress-nginx\n
It will be the EXTERNAL-IP field. If that field shows <pending>, this means that your Kubernetes cluster wasn't able to provision the load balancer (generally, this is because it doesn't support services of type LoadBalancer).
Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress resource. The following example assumes that you have set up a DNS record for www.demo.io:
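A sketch, assuming a Deployment and Service named demo exposing port 80:
kubectl create ingress demo --class=nginx --rule=\"www.demo.io/*=demo:80\"\n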
You should then be able to see the \"It works!\" page when you connect to http://www.demo.io/. Congratulations, you are serving a public website hosted on a Kubernetes cluster! \ud83c\udf89
"},{"location":"deploy/#environment-specific-instructions","title":"Environment-specific instructions","text":""},{"location":"deploy/#local-development-clusters","title":"Local development clusters","text":""},{"location":"deploy/#minikube","title":"minikube","text":"
The ingress controller can be installed through minikube's addons system:
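minikube addons enable ingress\n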
First, make sure that Kubernetes is enabled in the Docker settings. The command kubectl get nodes should show a single node called docker-desktop.
The ingress controller can be installed on Docker Desktop using the default quick start instructions.
On most systems, if you don't have any other service of type LoadBalancer bound to port 80, the ingress controller will be assigned the EXTERNAL-IP of localhost, which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the kubectl port-forward method described in the local testing section.
Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop.
Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use the Ingress-Nginx Controller in place of the default Traefik, disable Traefik from the Preferences > Kubernetes menu.
Once traefik is disabled, the Ingress-Nginx Controller can be installed on Rancher Desktop using the default quick start instructions. Follow the instructions described in the local testing section to try a sample.
If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the externalTrafficPolicy of the ingress controller Service to Local (instead of the default Cluster) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local to the helm install or helm upgrade command.
Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. --set controller.config.use-proxy-protocol=true) and in the cloud provider's load balancer configuration to function correctly.
In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.
In AWS, we use a Network load balancer (NLB) to expose the Ingress-Nginx Controller behind a Service of Type=LoadBalancer.
Info
The provided templates illustrate the setup for legacy in-tree service load balancer for AWS NLB. AWS provides the documentation on how to use Network load balancing on Amazon EKS with AWS Load Balancer Controller.
"},{"location":"deploy/#tls-termination-in-aws-load-balancer-nlb","title":"TLS termination in AWS Load Balancer (NLB)","text":"
By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB.
For private clusters, you will need to either add a firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to port 80/tcp, 443/tcp and 10254/tcp to also allow access to port 8443/tcp. More information can be found in the Official GCP Documentation.
See the GKE documentation on adding rules and the Kubernetes issue for more detail.
Proxy protocol is supported in GCE; check the official documentation on how to enable it.
By default, the service object of the ingress-nginx-controller for DigitalOcean only configures one annotation: service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: \"true\". While this makes the service functional, it was reported that the DigitalOcean load-balancer graphs show no data unless a few other annotations are also configured. Some of these other annotations require values that cannot be generic and hence are not forced in an out-of-the-box installation. These annotations, and a discussion of them, are well documented in this issue. Please refer to the issue to add annotations with values specific to your setup, to get the DO-LB graphs populated with data.
Refer to the dedicated tutorial in the Scaleway documentation for configuring the proxy protocol for ingress-nginx with the Scaleway load balancer."},{"location":"deploy/#exoscale","title":"Exoscale","text":"
"},{"location":"deploy/#bare-metal-clusters","title":"Bare metal clusters","text":"
This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as \"raw\" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu...)
For quick testing, you can use a NodePort. This should work on almost every cluster, but it will typically use a port in the range 30000-32767.
For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see bare-metal considerations.
By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior, use the flag --watch-namespace or check the Helm chart value controller.scope to limit the controller to a single namespace.
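For example, with Helm (a sketch; the watched namespace name is illustrative):
helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --set controller.scope.enabled=true --set controller.scope.namespace=my-namespace\n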
See also \u201cHow to easily install multiple instances of the Ingress NGINX controller in the same cluster\u201d for more details.
The controller uses an admission webhook to validate Ingress definitions. Make sure that you don't have Network policies or additional firewalls preventing connections from the API server to the ingress-nginx-controller-admission service.
"},{"location":"deploy/#running-on-kubernetes-versions-older-than-119","title":"Running on Kubernetes versions older than 1.19","text":"
Ingress resources evolved over time. They started with apiVersion: extensions/v1beta1, then moved to apiVersion: networking.k8s.io/v1beta1 and more recently to apiVersion: networking.k8s.io/v1.
Here is how these Ingress versions are supported in Kubernetes:
before Kubernetes 1.19, only v1beta1 Ingress resources are supported
from Kubernetes 1.19 to 1.21, both v1beta1 and v1 Ingress resources are supported
in Kubernetes 1.22 and above, only v1 Ingress resources are supported
And here is how these Ingress versions are supported in Ingress-Nginx Controller:
before version 1.0, only v1beta1 Ingress resources are supported
in version 1.0 and above, only v1 Ingress resources are supported
As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X of the Ingress-Nginx Controller (e.g. version 0.49).
The Helm chart of the Ingress-Nginx Controller switched to version 1 in version 4 of the chart. In other words, if you're running Kubernetes 1.18 or earlier, you should use version 3.X of the chart (this can be done by adding --version='<4' to the helm install command).
In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the Ingress-Nginx Controller to external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers.
The rest of this document describes a few recommended approaches to deploying the Ingress-Nginx Controller inside a Kubernetes cluster running on bare-metal.
"},{"location":"deploy/baremetal/#a-pure-software-solution-metallb","title":"A pure software solution: MetalLB","text":"
MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.
This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes. In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details.
Note
The description of other supported configuration modes is off-scope for this document.
Warning
MetalLB is currently in beta. Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly.
MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions, and that the Ingress-Nginx Controller was installed using the steps described in the quickstart section of the installation guide.
MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined through IPAddressPool objects in the same namespace as the MetalLB controller. The pool of IPs must be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.
Example
Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)
$ kubectl get node\nNAME STATUS ROLES EXTERNAL-IP\nhost-1 Ready master 203.0.113.1\nhost-2 Ready node 203.0.113.2\nhost-3 Ready node 203.0.113.3\n
After creating the following objects, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly.
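A sketch of such objects (the address range is illustrative and must match your environment):
apiVersion: metallb.io/v1beta1\nkind: IPAddressPool\nmetadata:\n  name: default\n  namespace: metallb-system\nspec:\n  addresses:\n  - 203.0.113.10-203.0.113.15\n  autoAssign: true\n---\napiVersion: metallb.io/v1beta1\nkind: L2Advertisement\nmetadata:\n  name: default\n  namespace: metallb-system\nspec:\n  ipAddressPools:\n  - default\n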
$ kubectl -n ingress-nginx get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)\ndefault-http-backend ClusterIP 10.0.64.249 <none> 80/TCP\ningress-nginx LoadBalancer 10.0.220.217 203.0.113.10 80:30100/TCP,443:30101/TCP\n
As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service:
In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more details in Traffic policies as well as in the next section.
"},{"location":"deploy/baremetal/#over-a-nodeport-service","title":"Over a NodePort Service","text":"
Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide.
Info
A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services.
In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests.
Example
Given the NodePort 30100 allocated to the ingress-nginx Service
$ kubectl -n ingress-nginx get svc\nNAME TYPE CLUSTER-IP PORT(S)\ndefault-http-backend ClusterIP 10.0.64.249 80/TCP\ningress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP\n
and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is <None>)
$ kubectl get node\nNAME STATUS ROLES EXTERNAL-IP\nhost-1 Ready master 203.0.113.1\nhost-2 Ready node 203.0.113.2\nhost-3 Ready node 203.0.113.3\n
a client would reach an Ingress with host: myapp.example.com at http://myapp.example.com:30100, where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address.
Impact on the host system
While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues, including (but not limited to) the use of ports otherwise reserved for system daemons and the necessity to grant kube-proxy privileges it may otherwise not require.
This practice is therefore discouraged. See the other approaches proposed in this page for alternatives.
This approach has a few other limitations one ought to be aware of:
Source IP address
Services of type NodePort perform source address translation by default. This means that, from the perspective of NGINX, the source IP of an HTTP request is always the IP address of the Kubernetes node that received the request.
The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local (example).
Warning
This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the Ingress-Nginx Controller should be scheduled or not scheduled.
Example
In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is <None>)
$ kubectl get node\nNAME STATUS ROLES EXTERNAL-IP\nhost-1 Ready master 203.0.113.1\nhost-2 Ready node 203.0.113.2\nhost-3 Ready node 203.0.113.3\n
with a ingress-nginx-controller Deployment composed of 2 replicas
$ kubectl -n ingress-nginx get pod -o wide\nNAME READY STATUS IP NODE\ndefault-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2\ningress-nginx-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3\ningress-nginx-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2\n
Requests sent to host-2 and host-3 would be forwarded to NGINX and the original client's IP would be preserved, while requests to host-1 would be dropped because there is no NGINX replica running on that node.
Ingress status
Because NodePort Services do not get a LoadBalancerIP assigned by definition, the Ingress-Nginx Controller does not update the status of Ingress objects it manages.
$ kubectl get ingress\nNAME HOSTS ADDRESS PORTS\ntest-ingress myapp.example.com 80\n
Despite the fact there is no load balancer providing a public IP address to the Ingress-Nginx Controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service.
Warning
There is more to setting externalIPs than just enabling the Ingress-Nginx Controller to update the status of Ingress objects. Please read about this option in the Services page of official Kubernetes documentation as well as the section about External IPs in this document for more information.
Example
Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)
$ kubectl get node\nNAME STATUS ROLES EXTERNAL-IP\nhost-1 Ready master 203.0.113.1\nhost-2 Ready node 203.0.113.2\nhost-3 Ready node 203.0.113.3\n
one could edit the ingress-nginx Service and add the following field to the object spec
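spec:\n  externalIPs:\n  - 203.0.113.1\n  - 203.0.113.2\n  - 203.0.113.3\n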
As NGINX is not aware of the port translation operated by the NodePort Service, backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort.
Example
Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain, are generated without NodePort:
"},{"location":"deploy/baremetal/#via-the-host-network","title":"Via the host network","text":"
In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the Ingress-Nginx Controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services.
Note
This approach does not leverage any Service object to expose the Ingress-Nginx Controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it.
This can be achieved by enabling the hostNetwork option in the Pods' spec.
template:\n spec:\n hostNetwork: true\n
Security considerations
Enabling this option exposes every system daemon to the Ingress-Nginx Controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully.
Example
Consider this ingress-nginx-controller Deployment composed of 2 replicas: NGINX Pods inherit the IP address of their host instead of an internal Pod IP.
$ kubectl -n ingress-nginx get pod -o wide\nNAME READY STATUS IP NODE\ndefault-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2\ningress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3\ningress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2\n
One major limitation of this deployment approach is that only a single Ingress-Nginx Controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that cannot be scheduled for this reason fail with the following event:
$ kubectl -n ingress-nginx describe pod <unschedulable-ingress-nginx-controller-pod>\n...\nEvents:\n Type Reason From Message\n ---- ------ ---- -------\n Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.\n
One way to ensure only schedulable Pods are created is to deploy the Ingress-Nginx Controller as a DaemonSet instead of a traditional Deployment.
Info
A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods. For more information, see DaemonSet.
Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion.
Like with NodePorts, this approach has a few quirks it is important to be aware of.
DNS resolution
Pods configured with hostNetwork: true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet. Consider using this setting if NGINX is expected to resolve internal names for any reason.
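A minimal sketch of the corresponding Pod template (standard Kubernetes fields):
template:
  spec:
    dnsPolicy: ClusterFirstWithHostNet
    hostNetwork: true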
Ingress status
Because there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank.
$ kubectl get ingress\nNAME HOSTS ADDRESS PORTS\ntest-ingress myapp.example.com 80\n
Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the Ingress-Nginx Controller.
Example
Given an ingress-nginx-controller DaemonSet composed of 2 replicas
$ kubectl -n ingress-nginx get pod -o wide\nNAME READY STATUS IP NODE\ndefault-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2\ningress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3\ningress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2\n
the controller sets the status of all Ingress objects it manages to the following value:
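Assuming the nodes' internal IPs match the host IPs shown above, the resulting status would look similar to this (output illustrative):
$ kubectl get ingress
NAME           HOSTS               ADDRESS                   PORTS
test-ingress   myapp.example.com   203.0.113.2,203.0.113.3   80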
Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments.
"},{"location":"deploy/baremetal/#using-a-self-provisioned-edge","title":"Using a self-provisioned edge","text":"
Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. a vendor appliance) or software (e.g. HAProxy) and is usually managed outside of the Kubernetes landscape by operations teams.
Such deployment builds upon the NodePort Service described above in Over a NodePort Service, with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address.
On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below:
This method does not allow preserving the source IP of HTTP requests in any manner; it is therefore not recommended, despite its apparent simplicity.
The externalIPs Service option was previously mentioned in the NodePort section.
As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node.
Example
Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)
$ kubectl get node\nNAME STATUS ROLES EXTERNAL-IP\nhost-1 Ready master 203.0.113.1\nhost-2 Ready node 203.0.113.2\nhost-3 Ready node 203.0.113.3\n
and the following ingress-nginx NodePort Service
$ kubectl -n ingress-nginx get svc\nNAME TYPE CLUSTER-IP PORT(S)\ningress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP\n
One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port:
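For instance, using the two worker node IPs from the example above (IPs illustrative):
spec:
  externalIPs:
  - 203.0.113.2
  - 203.0.113.3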
There are several ways of hardening and securing nginx. This documentation uses two guides, which overlap in some points:
nginx CIS Benchmark
cipherlist.eu (one of many forks of the now dead project cipherli.st)
This guide describes which of the configurations from those guides are already implemented by default in the nginx implementation of the Kubernetes ingress, which need to be configured, which are obsolete because nginx runs as a container (the CIS benchmark relates to a non-containerized installation), and which are difficult or impossible to implement.
Be aware that this is only a guide and that you are responsible for your own implementation. Some of the configurations may leave specific clients unable to reach your site, or have similar consequences.
This guide refers to chapters in the CIS Benchmark. For a full explanation, refer to the benchmark document itself.
"},{"location":"deploy/hardening-guide/#configuration-guide","title":"Configuration Guide","text":"Chapter in CIS benchmark Status Default Action to do if not default 1 Initial Setup 1.1 Installation 1.1.1 Ensure NGINX is installed (Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.1.2 Ensure NGINX is installed from source (Not Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.2 Configure Software Updates 1.2.1 Ensure package manager repositories are properly configured (Not Scored) OK done via helm, nginx version could be overwritten, however compatibility is not ensured then 1.2.2 Ensure the latest software package is installed (Not Scored) ACTION NEEDED done via helm, nginx version could be overwritten, however compatibility is not ensured then Plan for periodic updates 2 Basic Configuration 2.1 Minimize NGINX Modules 2.1.1 Ensure only required modules are installed (Not Scored) OK Already only needed modules are installed, however proposals for further reduction are welcome 2.1.2 Ensure HTTP WebDAV module is not installed (Scored) OK 2.1.3 Ensure modules with gzip functionality are disabled (Scored) OK 2.1.4 Ensure the autoindex module is disabled (Scored) OK No autoindex configs so far in ingress defaults 2.2 Account Security 2.2.1 Ensure that NGINX is run using a non-privileged, dedicated service account (Not Scored) OK Pod configured as user www-data: See this line in helm chart values. Compiled with user www-data: See this line in build script 2.2.2 Ensure the NGINX service account is locked (Scored) OK Docker design ensures this 2.2.3 Ensure the NGINX service account has an invalid shell (Scored) OK Shell is nologin: see this line in build script 2.3 Permissions and Ownership 2.3.1 Ensure NGINX directories and files are owned by root (Scored) OK Obsolete through docker-design and ingress controller needs to update the configs dynamically 2.3.2 Ensure access to NGINX directories and files is restricted (Scored) OK See previous answer 2.3.3 Ensure the NGINX process ID (PID) file is secured (Scored) OK No PID-File due to docker design 2.3.4 Ensure the core dump directory is secured (Not Scored) OK No working_directory configured by default 2.4 Network Configuration 2.4.1 Ensure NGINX only listens for network connections on authorized ports (Not Scored) OK Ensured by automatic nginx.conf configuration 2.4.2 Ensure requests for unknown host names are rejected (Not Scored) OK They are not rejected but send to the \"default backend\" delivering appropriate errors (mostly 404) 2.4.3 Ensure keepalive_timeout is 10 seconds or less, but not 0 (Scored) ACTION NEEDED Default is 75s configure keep-alive to 10 seconds according to this documentation 2.4.4 Ensure send_timeout is set to 10 seconds or less, but not 0 (Scored) RISK TO BE ACCEPTED Not configured, however the nginx default is 60s Not configurable 2.5 Information Disclosure 2.5.1 Ensure server_tokens directive is set to off (Scored) OK server_tokens is configured to off by default 2.5.2 Ensure default error and index.html pages do not reference NGINX (Scored) ACTION NEEDED 404 shows no version at all, 503 and 403 show \"nginx\", which is hardcoded see this line in nginx source code configure custom error pages at least for 403, 404 and 503 and 500 2.5.3 Ensure hidden file serving is disabled (Not Scored) ACTION NEEDED config not set configure a config.server-snippet Snippet, but beware of .well-known challenges or similar. 
Refer to the benchmark here please 2.5.4 Ensure the NGINX reverse proxy does not enable information disclosure (Scored) ACTION NEEDED hide not configured configure hide-headers with array of \"X-Powered-By\" and \"Server\": according to this documentation 3 Logging 3.1 Ensure detailed logging is enabled (Not Scored) OK nginx ingress has a very detailed log format by default 3.2 Ensure access logging is enabled (Scored) OK Access log is enabled by default 3.3 Ensure error logging is enabled and set to the info logging level (Scored) OK Error log is configured by default. The log level does not matter, because it is all sent to STDOUT anyway 3.4 Ensure log files are rotated (Scored) OBSOLETE Log file handling is not part of the nginx ingress and should be handled separately 3.5 Ensure error logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.6 Ensure access logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.7 Ensure proxies pass source IP information (Scored) OK Headers are set by default 4 Encryption 4.1 TLS / SSL Configuration 4.1.1 Ensure HTTP is redirected to HTTPS (Scored) OK Redirect to TLS is default 4.1.2 Ensure a trusted certificate and trust chain is installed (Not Scored) ACTION NEEDED For installing certs there are enough manuals in the web. A good way is to use lets encrypt through cert-manager Install proper certificates or use lets encrypt with cert-manager 4.1.3 Ensure private key permissions are restricted (Scored) ACTION NEEDED See previous answer 4.1.4 Ensure only modern TLS protocols are used (Scored) OK/ACTION NEEDED Default is TLS 1.2 + 1.3, while this is okay for CIS Benchmark, cipherlist.eu only recommends 1.3. This may cut off old OS's Set controller.config.ssl-protocols to \"TLSv1.3\" 4.1.5 Disable weak ciphers (Scored) ACTION NEEDED Default ciphers are already good, but cipherlist.eu recommends even stronger ciphers Set controller.config.ssl-ciphers to \"EECDH+AESGCM:EDH+AESGCM\" 4.1.6 Ensure custom Diffie-Hellman parameters are used (Scored) ACTION NEEDED No custom DH parameters are generated Generate dh parameters for each ingress deployment you use - see here for a how to 4.1.7 Ensure Online Certificate Status Protocol (OCSP) stapling is enabled (Scored) ACTION NEEDED Not enabled set via this configuration parameter 4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored) OK HSTS is enabled by default 4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored) ACTION NEEDED / RISK TO BE ACCEPTED HKPK not enabled by default If lets encrypt is not used, set correct HPKP header. There are several ways to implement this - with the helm charts it works via controller.add-headers. 
If lets encrypt is used, this is complicated, a solution here is yet unknown 4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, manual is here 4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, see configuration here 4.1.12 Ensure your domain is preloaded (Not Scored) ACTION NEEDED Preload is not active by default Set controller.config.hsts-preload to true 4.1.13 Ensure session resumption is disabled to enable perfect forward security (Scored) OK Session tickets are disabled by default 4.1.14 Ensure HTTP/2.0 is used (Not Scored) OK http2 is set by default 5 Request Filtering and Restrictions 5.1 Access Control 5.1.1 Ensure allow and deny filters limit access to specific IP addresses (Not Scored) OK/ACTION NEEDED Depends on use case, geo ip module is compiled into Ingress-Nginx Controller, there are several ways to use it If needed set IP restrictions via annotations or work with config snippets (be careful with lets-encrypt-http-challenge!) 5.1.2 Ensure only whitelisted HTTP methods are allowed (Not Scored) OK/ACTION NEEDED Depends on use case If required it can be set via config snippet 5.2 Request Limits 5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) ACTION NEEDED Default timeout is 60s Set via this configuration parameter and respective body equivalent 5.2.2 Ensure the maximum request body size is set correctly (Scored) ACTION NEEDED Default is 1m set via this configuration parameter 5.2.3 Ensure the maximum buffer size for URIs is defined (Scored) ACTION NEEDED Default is 4 8k Set via this configuration parameter 5.2.4 Ensure the number of connections per IP address is limited (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.2.5 Ensure rate limits by IP address are set (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.3 Browser Security 5.3.1 Ensure X-Frame-Options header is configured and enabled (Scored) ACTION NEEDED Header not set by default Several ways to implement this - with the helm charts it works via controller.add-headers 5.3.2 Ensure X-Content-Type-Options header is configured and enabled (Scored) ACTION NEEDED See previous answer See previous answer 5.3.3 Ensure the X-XSS-Protection Header is enabled and configured properly (Scored) ACTION NEEDED See previous answer See previous answer 5.3.4 Ensure that Content Security Policy (CSP) is enabled and configured properly (Not Scored) ACTION NEEDED See previous answer See previous answer 5.3.5 Ensure the Referrer Policy is enabled and configured properly (Not Scored) ACTION NEEDED Depends on application. It should be handled in the applications webserver itself, not in the load balancing ingress check backend webserver 6 Mandatory Access Control n/a too high level, depends on backends"},{"location":"deploy/rbac/","title":"Role Based Access Control (RBAC)","text":""},{"location":"deploy/rbac/#overview","title":"Overview","text":"
This example applies to ingress-nginx-controllers being deployed in an environment with RBAC enabled.
Role Based Access Control is composed of four layers:
ClusterRole - permissions assigned to a role that apply to an entire cluster
ClusterRoleBinding - binding a ClusterRole to a specific account
Role - permissions assigned to a role that apply to a specific namespace
RoleBinding - binding a Role to a specific account
In order for RBAC to be applied to an ingress-nginx-controller, that controller should be assigned to a ServiceAccount. That ServiceAccount should be bound to the Roles and ClusterRoles defined for the ingress-nginx-controller.
"},{"location":"deploy/rbac/#service-accounts-created-in-this-example","title":"Service Accounts created in this example","text":"
One ServiceAccount is created in this example, ingress-nginx.
"},{"location":"deploy/rbac/#permissions-granted-in-this-example","title":"Permissions Granted in this example","text":"
There are two sets of permissions defined in this example: cluster-wide permissions defined by the ClusterRole named ingress-nginx, and namespace-specific permissions defined by the Role named ingress-nginx.
These permissions are granted in order for the ingress-nginx-controller to be able to function as an ingress across the cluster. These permissions are granted to the ClusterRole named ingress-nginx
These permissions are granted specific to the ingress-nginx namespace. These permissions are granted to the Role named ingress-nginx
configmaps, pods, secrets: get
endpoints: get
Furthermore, to support leader election, the ingress-nginx-controller needs access to the leases resource, using the resourceName ingress-nginx-leader
Note that resourceNames can NOT be used to limit requests using the "create" verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a "create" request are part of the request body).
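A minimal sketch of such a Role, trimmed to the permissions listed above (API groups and verbs are illustrative assumptions, not the complete manifest shipped with the project):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  # configmaps, pods, secrets, endpoints: get
  - apiGroups: [""]
    resources: ["configmaps", "pods", "secrets", "endpoints"]
    verbs: ["get"]
  # leader election: the lease named ingress-nginx-leader
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    resourceNames: ["ingress-nginx-leader"]
    verbs: ["get", "update"]
  # create cannot be restricted by resourceNames (see the note above)
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create"]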
The ServiceAccount ingress-nginx is bound to the Role ingress-nginx and the ClusterRole ingress-nginx.
The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the ingress-nginx namespace.
No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx.
Simply change the v1.0.4 tag to the version you wish to upgrade to. The easiest way to do this is with kubectl set image; note that you may need to change the name parameter according to your installation:
kubectl set image deployment/ingress-nginx-controller \\\n controller=registry.k8s.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d \\\n -n ingress-nginx\n
For interactive editing, use kubectl edit deployment ingress-nginx-controller -n ingress-nginx.
This part of the code is responsible for the main logic of Ingress NGINX. It contains all the logic that parses Ingress objects and annotations, watches Endpoints, and turns them into usable nginx.conf configuration.
It contains the kubectl plugin for inspecting your ingress-nginx deployments. This part of the code can be found in the cmd/plugin directory. A detailed description of the functions and available flows can be found in kubectl-plugin.
Contains the Dockerfiles and scripts used to build base images that are used in other parts of the repo. They are present in the images repo. Some examples:
nginx - The base NGINX image ingress-nginx uses is not a vanilla NGINX. It bundles many libraries together and it is a job in itself to maintain that and keep things up-to-date.
custom-error-pages - Used in the custom error page examples.
The image used to build the final ingress controller, used in deploy scripts and Helm charts.
This is NGINX with some Lua enhancements. Dynamic certificate handling, endpoint handling, canary traffic splitting, custom load balancing, etc. happen in this component. One can also add new functionality using the Lua plugin system.
This document explains how to get started with developing for Ingress-Nginx Controller.
For really new contributors, who want to contribute to the INGRESS-NGINX project but need help understanding some basic concepts needed to work with the Kubernetes ingress resource, here is a link to the New Contributors Guide. This guide explains how an http/https request travels from a browser or a curl command to the webserver process running inside a container, in a pod, in a Kubernetes cluster, entering the cluster via an ingress resource. Those who are familiar with basic networking concepts such as routing a packet for an http request, connection termination, and reverse proxying can skip it and move on to the sections below (or read it anyway for context, and provide feedback if any).
Start a local Kubernetes cluster using kind, build and deploy the ingress controller
make dev-env\n
- If you are working on the v1.x.x version of this controller, and you want to create a cluster with kubernetes version 1.22, then please visit the documentation for kind, and look for how to set a custom image for the kind node (image: kindest/node...), in the kind config file."},{"location":"developer-guide/getting-started/#testing","title":"Testing","text":"
Run go unit tests
make test\n
Run unit-tests for lua code
make lua-test\n
Lua tests are located in the directory rootfs/etc/nginx/lua/test
Important
Test files must follow the naming convention <mytest>_test.lua or they will be ignored
Run e2e test suite
make kind-e2e-test\n
To limit the scope of the tests to execute, we can use the environment variable FOCUS
FOCUS=\"no-auth-locations\" make kind-e2e-test\n
Note
The variable FOCUS defines Ginkgo Focused Specs
Valid values are defined in the describe definition of the e2e tests like Default Backend
A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project. For this reason, the ingress-nginx project is adopting it.
"},{"location":"enhancements/#quick-start-for-the-kep-process","title":"Quick start for the KEP process","text":"
Follow the process outlined in the KEP template
"},{"location":"enhancements/#do-i-have-to-use-the-kep-process","title":"Do I have to use the KEP process?","text":"
No... but we hope that you will. Over time having a rich set of KEPs in one place will make it easier for people to track what is going on in the community and find a structured historic record.
KEPs are only required when the changes are wide ranging and impact most of the project.
"},{"location":"enhancements/#why-would-i-want-to-use-the-kep-process","title":"Why would I want to use the KEP process?","text":"
Our aim with KEPs is to clearly communicate new efforts to the Kubernetes contributor community. As such, we want to build a well curated set of clear proposals in a common format with useful metadata.
Benefits to KEP users (in the limit):
Exposure on a kubernetes blessed web site that is findable via web search engines.
Cross indexing of KEPs so that users can find connections and the current status of any KEP.
A clear process with approvers and reviewers for making decisions. This will lead to more structured decisions that stick as there is a discoverable record around the decisions.
We are inspired by IETF RFCs, Python PEPs, and Rust RFCs.
"},{"location":"enhancements/20190724-only-dynamic-ssl/","title":"Remove static SSL configuration mode","text":""},{"location":"enhancements/20190724-only-dynamic-ssl/#table-of-contents","title":"Table of Contents","text":"
Since release 0.19.0 it has been possible to configure SSL certificates without the need for NGINX reloads (thanks to Lua), and since release 0.24.0 dynamic mode is enabled by default.
Deprecate the flag
Move the directives ssl_certificate and ssl_certificate_key from each server block to the http section. These settings are required to avoid NGINX errors in the logs.
Remove any action of the flag --enable-dynamic-certificates
"},{"location":"enhancements/20190815-zone-aware-routing/","title":"Availability zone aware routing","text":""},{"location":"enhancements/20190815-zone-aware-routing/#table-of-contents","title":"Table of Contents","text":"
Teach ingress-nginx about the availability zones that endpoints are running in. This way the ingress-nginx pod will do its best to proxy to a zone-local endpoint.
When users run their services across multiple availability zones, they usually pay for egress traffic between zones. Providers such as GCP and Amazon EC2 usually charge extra for this. When picking an endpoint to route a request to, ingress-nginx does not consider whether the endpoint is in a different zone or the same one. That means it is at least equally likely to pick an endpoint from another zone and proxy the request to it. In this situation the response from the endpoint to the ingress-nginx pod is considered inter-zone traffic and usually costs extra money.
At the time of this writing, GCP charges $0.01 per GB of inter-zone egress traffic according to https://cloud.google.com/compute/network-pricing. According to https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/ Amazon charges the same amount as GCP for cross-zone egress traffic.
This can be a lot of money depending on one's traffic. By teaching ingress-nginx about zones we can eliminate, or at least decrease, this cost.
Arguably intra-zone network latency should also be better than cross-zone latency.
This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress-nginx pod(s) in that zone
This feature will be relying on https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone, it is not this KEP's goal to support other cases
The idea here is to have the controller part of ingress-nginx (1) detect what zone its current pod is running in and (2) detect the zone for every endpoint it knows about. After that, it will post that data as part of endpoints to Lua land. When picking an endpoint, the Lua balancer will try to pick zone-local endpoint first and if there is no zone-local endpoint then it will fall back to current behavior.
Initially, this feature should be optional since it is going to make it harder to reason about the load balancing and not everyone might want that.
How does the controller know what zone it runs in? We can have the pod spec pass the node name using the downward API as an environment variable. Upon startup, the controller can get the node details from the API based on the node name. Once the node details are obtained, we can extract the zone from the failure-domain.beta.kubernetes.io/zone annotation. Then we can pass that value to Lua land through the Nginx configuration when loading the lua_ingress.lua module in the init_by_lua phase.
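A sketch of passing the node name via the downward API (standard Kubernetes fields; the variable name is illustrative):
env:
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName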
How do we extract zones for endpoints? We can have the controller watch create and update events on nodes in the entire cluster and based on that keep the map of nodes to zones in the memory. And when we generate endpoints list, we can access node name using .subsets.addresses[i].nodeName and based on that fetch zone from the map in memory and store it as a field on the endpoint. This solution assumes failure-domain.beta.kubernetes.io/zone annotation does not change until the end of the node's life. Otherwise, we have to watch update events as well on the nodes and that'll add even more overhead.
Alternatively, we can fetch the list of nodes only when there's no node in memory for a given node name. This is probably a better solution because then we would avoid watching for API changes on node resources. We can eagerly fetch all the nodes and build a node-name-to-zone mapping on start. From there on, it will sync during endpoint building in the main event loop if there's no existing entry for the node of an endpoint. This means an extra API call in case the cluster has expanded.
How do we make sure we do our best to choose zone-local endpoint? This will be done on the Lua side. For every backend, we will initialize two balancer instances: (1) with all endpoints (2) with all endpoints corresponding to the current zone for the backend. Then given the request once we choose what backend needs to serve the request, we will first try to use a zonal balancer for that backend. If a zonal balancer does not exist (i.e. there's no zonal endpoint) then we will use a general balancer. In case of zonal outages, we assume that the readiness probe will fail and the controller will see no endpoints for the backend and therefore we will use a general balancer.
We can enable the feature using a configmap setting. Doing it this way makes it easier to rollback in case of a problem.
"},{"location":"enhancements/20231001-split-containers/","title":"Proposal to split containers","text":"
All the NGINX files should live on one container
No file other than NGINX files should exist on this container
This includes not mounting the service account
All the controller files should live on a different container
Controller container should have bare minimum to work (just go program)
ServiceAccount should be mounted just on controller
Inside nginx container, there should be a really small http listener just able to start, stop and reload NGINX
"},{"location":"enhancements/20231001-split-containers/#roadmap-what-needs-to-be-done","title":"Roadmap (what needs to be done)","text":"
Map what needs to be done to mount the SA just on controller container
Map all the required files for NGINX to work
Map all the required network calls between controller and NGINX
eg.: Dynamic lua reconfiguration
Map problematic features that will need attention
SSLPassthrough today happens in the controller process and needs to happen in NGINX
"},{"location":"enhancements/20231001-split-containers/#ports-and-endpoints-on-nginx-container","title":"Ports and endpoints on NGINX container","text":"
Public HTTP/HTTPs port - 80 and 443
Lua configuration port - 10246 (HTTP) and 10247 (Stream)
3333 (temp) - Dataplane controller http server
/reload - (POST) Reloads the configuration.
\"config\" argument is the location of temporary file that should be used / moved to nginx.conf
/test - (POST) Test the configuration of a given file location
\"config\" argument is the location of temporary file that should be tested
"},{"location":"enhancements/20231001-split-containers/#mounting-empty-sa-on-controller-container","title":"Mounting empty SA on controller container","text":"
"},{"location":"enhancements/20231001-split-containers/#mapped-folders-on-nginx-configuration","title":"Mapped folders on NGINX configuration","text":"
WARNING: We need to be aware of mounts shared between containers and inode problems. If we mount a file instead of a directory, it may take time for the file's value to be reflected in the target container.
This is the title of the KEP. Keep it simple and descriptive. A good title can help communicate what the KEP is and should be considered as part of any review.
The title should be lowercased and spaces/punctuation should be replaced with -.
To get started with this template:
Make a copy of this template and name it YYYYMMDD-my-title.md, where YYYYMMDD is the date the KEP was first drafted.
Fill out the \"overview\" sections. This includes the Summary and Motivation sections. These should be easy if you've preflighted the idea of the KEP in an issue.
Create a PR. Assign it to folks that are sponsoring this process.
Create an issue. When filing an enhancement tracking issue, please ensure you complete all fields in the template.
Merge early. Avoid getting hung up on specific details; instead aim to get the goal of the KEP merged quickly. The best way to do this is to just start with the "Overview" sections and fill out details incrementally in follow-on PRs. View anything marked as provisional as a working document subject to change. Aim for single-topic PRs to keep discussions focused. If you disagree with what is already in a document, open a new PR with suggested changes.
The canonical place for the latest set of instructions (and the likely source of this file) is here.
The Metadata section above is intended to support the creation of tooling around the KEP process. This will be a YAML section that is fenced as a code block. See the KEP process for details on each of these items.
"},{"location":"enhancements/YYYYMMDD-kep-template/#table-of-contents","title":"Table of Contents","text":"
A table of contents is helpful for quickly jumping to sections of a KEP and for highlighting any additional information provided beyond the standard KEP template.
Ensure the TOC is wrapped with <!-- toc --><!-- /toc --> tags, and then generate with hack/update-toc.sh.
The Summary section is incredibly important for producing high quality user-focused documentation such as release notes or a development roadmap. It should be possible to collect this information before implementation begins in order to avoid requiring implementers to split their attention between writing release notes and implementing the feature itself.
A good summary is probably at least a paragraph in length.
This section is for explicitly listing the motivation, goals and non-goals of this KEP. Describe why the change is important and the benefits to users. The motivation section can optionally provide links to experience reports to demonstrate the interest in a KEP within the wider Kubernetes community.
Detail the things that people will be able to do if this KEP is implemented. Include as much detail as possible so that people can understand the \"how\" of the system. The goal here is to make this feel real for users without getting bogged down.
What are the caveats to the implementation? What are some important details that didn't come across above? Go into as much detail as necessary here. This might be a good place to talk about core concepts and how they relate.
"},{"location":"enhancements/YYYYMMDD-kep-template/#risks-and-mitigations","title":"Risks and Mitigations","text":"
What are the risks of this proposal and how do we mitigate them? Think broadly. For example, consider both security and how this will impact the larger Kubernetes ecosystem.
How will security be reviewed and by whom? How will UX be reviewed and by whom?
Consider including folks that also work outside the project.
Note: Section not required until targeted at a release.
Consider the following in developing a test plan for this enhancement:
Will there be e2e and integration tests, in addition to unit tests?
How will it be tested in isolation vs with other components?
No need to outline all of the test cases, just the general strategy. Anything that would count as tricky in the implementation and anything particularly challenging to test should be called out.
All code is expected to have adequate tests (eventually with coverage expectations). Please adhere to the Kubernetes testing guidelines when drafting this test plan.
"},{"location":"enhancements/YYYYMMDD-kep-template/#removing-a-deprecated-flag","title":"Removing a deprecated flag","text":"
Announce deprecation and support policy of the existing flag
Two versions passed since introducing the functionality which deprecates the flag (to address version skew)
Address feedback on usage/changed behavior, provided on GitHub issues
Similar to the Drawbacks section, the Alternatives section is used to highlight and record other possible approaches to delivering the value proposed by a KEP.
This directory contains a catalog of examples on how to run, configure and scale Ingress. Please review the prerequisites before trying them.
The examples on these pages include the spec.ingressClassName field which replaces the deprecated kubernetes.io/ingress.class: nginx annotation. Users of ingress-nginx < 1.0.0 (Helm chart < 4.0.0) should use the legacy documentation.
For more information, check out the Migration to apiVersion networking.k8s.io/v1 guide.
| Category | Name | Description | Complexity Level |
| --- | --- | --- | --- |
| Apps | Docker Registry | TODO | TODO |
| Auth | Basic authentication | password protect your website | Intermediate |
| Auth | Client certificate authentication | secure your website with client certificate authentication | Intermediate |
| Auth | External authentication plugin | defer to an external authentication service | Intermediate |
| Auth | OAuth external auth | TODO | TODO |
| Customization | Configuration snippets | customize nginx location configuration using annotations | Advanced |
| Customization | Custom configuration | TODO | TODO |
| Customization | Custom DH parameters for perfect forward secrecy | TODO | TODO |
| Customization | Custom errors | serve custom error pages from the default backend | Intermediate |
| Customization | Custom headers | set custom headers before sending traffic to backends | Advanced |
| Customization | External authentication with response header propagation | TODO | TODO |
| Customization | Sysctl tuning | TODO | TODO |
| Features | Rewrite | TODO | TODO |
| Features | Session stickiness | route requests consistently to the same endpoint | Advanced |
| Features | Canary Deployments | weighted canary routing to a separate deployment | Intermediate |
| Scaling | Static IP | a single ingress gets a single static IP | Intermediate |
| TLS | Multi TLS certificate termination | TODO | TODO |
| TLS | TLS termination | TODO | TODO |
"},{"location":"examples/PREREQUISITES/","title":"Prerequisites","text":"
Many of the examples in this directory have common prerequisites.
CA Authentication, also known as Mutual Authentication, allows both the server and client to verify each other's identity via a common CA.
We have a CA Certificate which we usually obtain from a Certificate Authority and use that to sign both our server certificate and client certificate. Then every time we want to access our backend, we must pass the client certificate.
These instructions are based on the following blog
Session affinity can be configured using the following annotations:
| Name | Description | Values |
| --- | --- | --- |
| nginx.ingress.kubernetes.io/affinity | Type of the affinity, set this to cookie to enable session affinity | string (NGINX only supports cookie) |
| nginx.ingress.kubernetes.io/affinity-mode | The affinity mode defines how sticky a session is. Use balanced to redistribute some sessions when scaling pods or persistent for maximum stickiness. | balanced (default) or persistent |
| nginx.ingress.kubernetes.io/affinity-canary-behavior | Defines session affinity behavior of canaries. By default the behavior is sticky, and canaries respect session affinity configuration. Set this to legacy to restore the original canary behavior, when session affinity parameters were not respected. | sticky (default) or legacy |
| nginx.ingress.kubernetes.io/session-cookie-name | Name of the cookie that will be created | string (defaults to INGRESSCOOKIE) |
| nginx.ingress.kubernetes.io/session-cookie-secure | Set the cookie as secure regardless of the protocol of the incoming request | "true" or "false" |
| nginx.ingress.kubernetes.io/session-cookie-path | Path that will be set on the cookie (required if your Ingress paths use regular expressions) | string (defaults to the currently matched path) |
| nginx.ingress.kubernetes.io/session-cookie-domain | Domain that will be set on the cookie | string |
| nginx.ingress.kubernetes.io/session-cookie-samesite | SameSite attribute to apply to the cookie | Browser-accepted values are None, Lax, and Strict |
| nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none | Will omit the SameSite=None attribute for older browsers which reject the more-recently defined SameSite=None value | "true" or "false" |
| nginx.ingress.kubernetes.io/session-cookie-max-age | Time until the cookie expires, corresponds to the Max-Age cookie directive | number of seconds |
| nginx.ingress.kubernetes.io/session-cookie-expires | Legacy version of the previous annotation for compatibility with older browsers, generates an Expires cookie directive by adding the seconds to the current date | number of seconds |
| nginx.ingress.kubernetes.io/session-cookie-change-on-failure | When set to false, nginx ingress will send requests to the upstream pointed to by the sticky cookie even if a previous attempt failed. When set to true and the previous attempt failed, the sticky cookie will be changed to point to another upstream. | true or false (defaults to false) |
You can create the session affinity example Ingress to test this:
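A minimal sketch of such an Ingress (host, service, and cookie values are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  ingressClassName: nginx
  rules:
    - host: stickyingress.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: http-svc
                port:
                  number: 80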
In the example above, you can see that the response contains a Set-Cookie header with the settings we have defined. This cookie is created by the Ingress-Nginx Controller; it contains a randomly generated key corresponding to the upstream used for that request (selected using consistent hashing) and has an Expires directive. If a client sends a cookie that doesn't correspond to an upstream, NGINX selects an upstream and creates a corresponding cookie.
If the backend pool grows, NGINX will keep sending the requests to the same server that served the first request, even if it's overloaded.
When the backend server is removed, the requests are re-routed to another upstream server. This does not require the cookie to be updated because the key's consistent hash will change.
When you have a Service pointing to more than one Ingress, with only one containing affinity configuration, the first created Ingress will be used. This means you can face a situation where you've configured session affinity on one Ingress and it doesn't work, because the Service is pointing to another Ingress that doesn't configure it.
This example shows how to add authentication to an Ingress rule using a secret that contains a file generated with htpasswd. It's important that the generated file is named auth (actually - that the secret has a key data.auth); otherwise, the ingress-controller returns a 503.
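For example, assuming user foo and a secret named basic-auth (which the Ingress below references), the file and secret can be created like this:
$ htpasswd -c auth foo
New password: <bar>
Re-type new password: <bar>
Adding password for user foo

$ kubectl create secret generic basic-auth --from-file=auth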
"},{"location":"examples/auth/basic/#using-kubectl-create-an-ingress-tied-to-the-basic-auth-secret","title":"Using kubectl, create an ingress tied to the basic-auth secret","text":"
$ echo \"\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-with-auth\n annotations:\n # type of authentication\n nginx.ingress.kubernetes.io/auth-type: basic\n # name of the secret that contains the user/password definitions\n nginx.ingress.kubernetes.io/auth-secret: basic-auth\n # message to display with an appropriate context why the authentication is required\n nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'\nspec:\n ingressClassName: nginx\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service: \n name: http-svc\n port: \n number: 80\n\" | kubectl create -f -\n
"},{"location":"examples/auth/basic/#use-curl-to-confirm-authorization-is-required-by-the-ingress","title":"Use curl to confirm authorization is required by the ingress","text":"
"},{"location":"examples/auth/basic/#use-curl-with-the-correct-credentials-to-connect-to-the-ingress","title":"Use curl with the correct credentials to connect to the ingress","text":"
$ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' -u 'foo:bar'\n* Trying 10.2.29.4...\n* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)\n* Server auth using Basic with user 'foo'\n> GET / HTTP/1.1\n> Host: foo.bar.com\n> Authorization: Basic Zm9vOmJhcg==\n> User-Agent: curl/7.43.0\n> Accept: */*\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.10.0\n< Date: Wed, 11 May 2016 06:05:26 GMT\n< Content-Type: text/plain\n< Transfer-Encoding: chunked\n< Connection: keep-alive\n< Vary: Accept-Encoding\n<\nCLIENT VALUES:\nclient_address=10.2.29.4\ncommand=GET\nreal path=/\nquery=nil\nrequest_version=1.1\nrequest_uri=http://foo.bar.com:8080/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*/*\nconnection=close\nhost=foo.bar.com\nuser-agent=curl/7.43.0\nx-request-id=e426c7829ef9f3b18d40730857c3eddb\nx-forwarded-for=10.2.29.1\nx-forwarded-host=foo.bar.com\nx-forwarded-port=80\nx-forwarded-proto=http\nx-real-ip=10.2.29.1\nx-scheme=http\nBODY:\n* Connection #0 to host 10.2.29.4 left intact\n-no body in request-\n
Note: Make sure that the Key Size is greater than 1024 and Hashing Algorithm (Digest) is something better than md5 for each certificate generated. Otherwise you will receive an error.
This functionality is enabled by deploying multiple Ingress objects for a single host. One Ingress object has no special annotations and handles authentication.
Other Ingress objects can then be annotated in such a way that requires the user to authenticate against the first Ingress's endpoint, and can redirect 401s to the same endpoint.
This example will show you how to deploy oauth2_proxy into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider.
This example will show you how to deploy Vouch Proxy into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider.
Ingress Nginx has the ability to handle canary routing by setting specific annotations. The following is an example of how to configure a canary deployment with weighted canary routing.
"},{"location":"examples/canary/#create-your-main-deployment-and-service","title":"Create your main deployment and service","text":"
This is the main deployment of your application with the service that will be used to route to it
"},{"location":"examples/canary/#create-ingress-pointing-to-your-canary-deployment","title":"Create Ingress Pointing To Your Canary Deployment","text":"
You will then create an Ingress that has the canary specific configuration, please pay special notice of the following:
The host name is identical to the main ingress host name
The nginx.ingress.kubernetes.io/canary: "true" annotation is required and marks this as a canary Ingress (if you do not have this, the Ingresses will clash)
The nginx.ingress.kubernetes.io/canary-weight: "50" annotation dictates the weight of the routing; in this case there is a 50% chance a request will hit the canary deployment over the main deployment, as shown in the sketch below
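Such a canary Ingress might look like this (host and service names are illustrative; the annotations are the ones discussed above):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-ingress
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "50"
spec:
  ingressClassName: nginx
  rules:
    - host: echo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: canary-service
                port:
                  number: 80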
The Ingress in this example adds a custom header to Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at an example of specifying custom headers.
This example demonstrates how to use a custom backend to render custom error pages.
If you are using Helm Chart, look at example values and don't forget to add configMap to your deployment, otherwise continue with Customized default backend manual deployment.
First, create the custom default-backend. It will be used by the Ingress controller later on. To do that, you can take a look at the example manifest in this project's GitHub repository.
If you do not already have an instance of the Ingress-Nginx Controller running, deploy it according to the deployment guide, then follow these steps:
Edit the ingress-nginx-controller Deployment and set the value of the --default-backend-service flag to the name of the newly created error backend.
Edit the ingress-nginx-controller ConfigMap and create the key custom-http-errors with a value of 404,503.
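A sketch of the resulting ConfigMap (name and namespace assume a standard installation):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  custom-http-errors: "404,503"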
Take note of the IP address assigned to the Ingress-Nginx Controller Service.
$ kubectl get svc ingress-nginx\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ningress-nginx ClusterIP 10.0.0.13 <none> 80/TCP,443/TCP 10m\n
Note
The ingress-nginx Service is of type ClusterIP in this example. This may vary depending on your environment. Make sure you can use the Service to reach NGINX before proceeding with the rest of this example.
Let us send a couple of HTTP requests using cURL and validate everything is working as expected.
A request to the default backend returns a 404 error with a custom message:
$ curl -D- http://10.0.0.13/\nHTTP/1.1 404 Not Found\nServer: nginx/1.13.12\nDate: Tue, 12 Jun 2018 19:11:24 GMT\nContent-Type: */*\nTransfer-Encoding: chunked\nConnection: keep-alive\n\n<span>The page you're looking for could not be found.</span>\n
A request with a custom Accept header returns the corresponding document type (JSON):
$ curl -D- -H 'Accept: application/json' http://10.0.0.13/\nHTTP/1.1 404 Not Found\nServer: nginx/1.13.12\nDate: Tue, 12 Jun 2018 19:12:36 GMT\nContent-Type: application/json\nTransfer-Encoding: chunked\nConnection: keep-alive\nVary: Accept-Encoding\n\n{ \"message\": \"The page you're looking for could not be found\" }\n
To go further with this example, feel free to deploy your own applications and Ingress objects, and validate that the responses are still in the correct format when a backend returns 503 (e.g. if you scale a Deployment down to 0 replicas).
configmap.yaml defines a ConfigMap in the ingress-nginx namespace named ingress-nginx-controller. This controls the global configuration of the ingress controller, and already exists in a standard installation. The key proxy-set-headers is set to cite the previously-created ingress-nginx/custom-headers ConfigMap.
The Ingress-Nginx Controller will read the ingress-nginx/ingress-nginx-controller ConfigMap, find the proxy-set-headers key, read HTTP headers from the ingress-nginx/custom-headers ConfigMap, and include those HTTP headers in all requests flowing from nginx to the backends.
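A minimal sketch of that ConfigMap entry (assuming the names used above):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-set-headers: "ingress-nginx/custom-headers"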
The above example was for passing a custom list of headers to the upstream server. To pass the custom headers before sending response traffic to the client, use the add-headers key:
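For example (again assuming a custom-headers ConfigMap in the ingress-nginx namespace):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  add-headers: "ingress-nginx/custom-headers"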
Check the contents of the ConfigMaps are present in the nginx.conf file using: kubectl exec ingress-nginx-controller-873061567-4n3k2 -n ingress-nginx -- cat /etc/nginx/nginx.conf
"},{"location":"examples/customization/external-auth-headers/","title":"External authentication, authentication service response headers propagation","text":"
This example demonstrates propagation of selected authentication service response headers to a backend service.
Sample configuration includes:
Sample authentication service producing several response headers
Authentication logic is based on an HTTP header: requests with the header User containing the string internal are considered authenticated
After successful authentication, the service generates the response headers UserID and UserRole
Sample echo service displaying header information
Two ingress objects pointing to echo service
Public, which allows access from unauthenticated users
Private, which allows access from authenticated users only
"},{"location":"examples/customization/external-auth-headers/#test-1-public-service-with-no-auth-header","title":"Test 1: public service with no auth header","text":"
$ curl -H 'Host: public-demo-echo-service.kube.local' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n* Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: public-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:19:21 GMT\n< Content-Type: text/plain; charset=utf-8\n< Content-Length: 20\n< Connection: keep-alive\n<\n* Connection #0 to host 192.168.99.100 left intact\nUserID: , UserRole:\n
"},{"location":"examples/customization/external-auth-headers/#test-2-secure-service-with-no-auth-header","title":"Test 2: secure service with no auth header","text":"
$ curl -H 'Host: secure-demo-echo-service.kube.local' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n* Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: secure-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n>\n< HTTP/1.1 403 Forbidden\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:18:48 GMT\n< Content-Type: text/html\n< Content-Length: 170\n< Connection: keep-alive\n<\n<html>\n<head><title>403 Forbidden</title></head>\n<body bgcolor=\"white\">\n<center><h1>403 Forbidden</h1></center>\n<hr><center>nginx/1.11.10</center>\n</body>\n</html>\n* Connection #0 to host 192.168.99.100 left intact\n
"},{"location":"examples/customization/external-auth-headers/#test-3-public-service-with-valid-auth-header","title":"Test 3: public service with valid auth header","text":"
$ curl -H 'Host: public-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n* Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: public-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n> User:internal\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:19:59 GMT\n< Content-Type: text/plain; charset=utf-8\n< Content-Length: 44\n< Connection: keep-alive\n<\n* Connection #0 to host 192.168.99.100 left intact\nUserID: 1443635317331776148, UserRole: admin\n
"},{"location":"examples/customization/external-auth-headers/#test-4-secure-service-with-valid-auth-header","title":"Test 4: secure service with valid auth header","text":"
$ curl -H 'Host: secure-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n* Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: secure-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n> User:internal\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:17:23 GMT\n< Content-Type: text/plain; charset=utf-8\n< Content-Length: 43\n< Connection: keep-alive\n<\n* Connection #0 to host 192.168.99.100 left intact\nUserID: 605394647632969758, UserRole: admin\n
"},{"location":"examples/customization/jwt/","title":"Accommodation for JWT","text":"
JWT (short for JSON Web Token) is a widely used authentication method. Basically, an authentication server generates a JWT, and you then use this token in every request you make to a backend service. The JWT can be quite big and is present in the HTTP headers of every request. This means you may have to adapt the max-header size of your nginx-ingress in order to support it.
If you use JWT and you get http 502 error from your ingress, it may be a sign that the buffer size is not big enough.
To be 100% sure, look at the logs of the ingress-nginx-controller pod; you should see something like this:
upstream sent too big header while reading response header from upstream...\n
"},{"location":"examples/customization/jwt/#increase-buffer-size-for-headers","title":"Increase buffer size for headers","text":"
In nginx, we want to modify the property proxy-buffer-size. The size is arbitrary and depends on your needs. Be aware that a high value can lower the performance of your ingress proxy. In general, a value of 16k should have you covered.
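A sketch of the corresponding ConfigMap entry (name and namespace assume a standard installation):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  proxy-buffer-size: "16k"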
"},{"location":"examples/customization/ssl-dh-param/","title":"Custom DH parameters for perfect forward secrecy","text":"
This example aims to demonstrate the deployment of an Ingress-Nginx Controller and use a ConfigMap to configure a custom Diffie-Hellman parameters file to help with \"Perfect Forward Secrecy\".
You have a domain name such as example.com that is configured to route traffic to the Ingress-NGINX controller.
You have the ingress-nginx-controller installed as per docs.
You have a backend application running a gRPC server listening for TCP traffic. If you want, you can use https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go as an example.
You're also responsible for provisioning an SSL certificate for the ingress. So you need to have a valid SSL certificate, deployed as a Kubernetes secret of type tls, in the same namespace as the gRPC application.
"},{"location":"examples/grpc/#step-1-create-a-kubernetes-deployment-for-grpc-app","title":"Step 1: Create a Kubernetes Deployment for gRPC app","text":"
Make sure your gRPC application pod is running and listening for connections. For example, you can try a kubectl command like the one below:
$ kubectl get po -A -o wide | grep go-grpc-greeter-server\n
If you have a gRPC app deployed in your cluster, then skip further notes in this Step 1, and continue from Step 2 below.
As an example gRPC application, we can use this app https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go.
To create a container image for this app, you can use this Dockerfile.
If you use the Dockerfile mentioned above to create an image, then you can use the following example Kubernetes manifest to create a deployment resource that uses that image. If necessary, edit this manifest to suit your needs.
"},{"location":"examples/grpc/#step-2-create-the-kubernetes-service-for-the-grpc-app","title":"Step 2: Create the Kubernetes Service for the gRPC app","text":"
You can use the following example manifest to create a service of type ClusterIP. Edit the name/namespace/label/port to match your deployment/pod.
You can save the above example manifest to a file with name service.go-grpc-greeter-server.yaml and edit it to match your deployment/pod, if required. You can create the service resource with a kubectl command like this:
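$ kubectl create -f service.go-grpc-greeter-server.yaml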
"},{"location":"examples/grpc/#step-3-create-the-kubernetes-ingress-resource-for-the-grpc-app","title":"Step 3: Create the Kubernetes Ingress resource for the gRPC app","text":"
Use the following example manifest of an Ingress resource to create an Ingress for your gRPC app. If required, edit it to match your app's details like name, namespace, service, secret etc. Make sure the required SSL certificate exists in your Kubernetes cluster, in the same namespace where the gRPC app is. The certificate must be available as a Kubernetes secret resource of type "kubernetes.io/tls" (https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets), because we are terminating TLS on the ingress.
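A sketch of such an Ingress, using the host and wildcard certificate described in the notes below (the TLS secret name and backend service details are assumptions):
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: go-grpc-greeter-server\n  annotations:\n    # routes http/2 (gRPC) traffic to the backend\n    nginx.ingress.kubernetes.io/backend-protocol: \"GRPC\"\nspec:\n  ingressClassName: nginx\n  tls:\n    - hosts:\n        - grpctest.dev.mydomain.com\n      # assumed name of the secret holding the wildcard.dev.mydomain.com certificate\n      secretName: wildcard-dev-mydomain-com\n  rules:\n    - host: grpctest.dev.mydomain.com\n      http:\n        paths:\n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: go-grpc-greeter-server\n                port:\n                  number: 80\n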
If you save the above example manifest as a file named ingress.go-grpc-greeter-server.yaml and edit it to match your deployment and service, you can create the ingress like this:
The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, gRPC traffic will travel unencrypted inside the cluster and arrive \"insecure\").
For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your POD and terminate TLS at the gRPC server itself, add the ingress annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPCS\".
A few more things to note:
We've tagged the ingress with the annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPC\". This is the magic ingredient that sets up the appropriate nginx configuration to route http/2 traffic to our service.
We're terminating TLS at the ingress and have configured an SSL certificate wildcard.dev.mydomain.com. The ingress matches traffic arriving as https://grpctest.dev.mydomain.com:443 and routes unencrypted messages to the backend Kubernetes service.
"},{"location":"examples/grpc/#step-4-test-the-connection","title":"Step 4: test the connection","text":"
Once we've applied our configuration to Kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility:
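For example, with server reflection enabled on the backend (as in the example app), something like the following should list the available gRPC services; the host and port are the ones from the Ingress above:
$ grpcurl grpctest.dev.mydomain.com:443 list\n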
Watch the logs for the ingress-nginx-controller (increasing verbosity as needed).
Double-check your address and ports.
Set the GODEBUG=http2debug=2 environment variable to get detailed http/2 logging on the client and/or server.
Study RFC 7540 (http/2) https://tools.ietf.org/html/rfc7540.
If you are developing public gRPC endpoints, check out https://proto.stack.build, a protocol buffer / gRPC build service that you can use to make it easier for your users to consume your API.
See also the specific gRPC settings of NGINX: https://nginx.org/en/docs/http/ngx_http_grpc_module.html
"},{"location":"examples/grpc/#notes-on-using-responserequest-streams","title":"Notes on using response/request streams","text":"
grpc_read_timeout and grpc_send_timeout will be set as proxy_read_timeout and proxy_send_timeout when you set backend protocol to GRPC or GRPCS.
If your server only does response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the grpc_read_timeout to accommodate this.
If your service only does request streaming and you expect a stream to be open longer than 60 seconds, you have to change the grpc_send_timeout and the client_body_timeout.
If you do both response and request streaming with an open stream longer than 60 seconds, you have to change all three timeouts: grpc_read_timeout, grpc_send_timeout and client_body_timeout.
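One possible way to raise all three timeouts is a server-snippet annotation on the Ingress. This is only a sketch (values are examples), and note that snippet annotations may be disabled by the cluster administrator via allow-snippet-annotations:
metadata:\n  annotations:\n    nginx.ingress.kubernetes.io/server-snippet: |\n      # keep long-lived gRPC streams open for up to an hour\n      grpc_read_timeout 3600s;\n      grpc_send_timeout 3600s;\n      client_body_timeout 3600s;\n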
"},{"location":"examples/openpolicyagent/","title":"OpenPolicyAgent and pathType enforcing","text":"
The Ingress API allows users to specify different pathTypes on an Ingress object.
While pathType Exact and Prefix should allow only a small set of characters, pathType ImplementationSpecific allows any characters, as it may contain regexes, variables and other features that may be specific to the Ingress controller being used.
This means that the Ingress admins (the persona who deployed the Ingress controller) should trust the users allowed to use pathType: ImplementationSpecific, as this may allow arbitrary configuration, and this configuration may end up in the proxy (i.e. NGINX) configuration.
The example in this repo uses Gatekeeper to block the usage of pathType: ImplementationSpecific, allowing just a specific list of namespaces to use it.
It is recommended that the admin modifies these rules to enforce a specific set of characters when the usage of ImplementationSpecific is allowed, or in whatever way best suits their needs.
First, the ConstraintTemplate from template.yaml defines a rule that validates whether the Ingress object is being created in an exempted namespace and, if not, validates its pathType.
Then, the rule K8sBlockIngressPathType contained in rule.yaml defines the parameters: what kind of object should be verified (Ingress), which namespaces are exempted, and which kinds of pathType are blocked.
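A sketch of what such a Constraint could look like; the exact parameter field names depend on the ConstraintTemplate and are assumptions here:
apiVersion: constraints.gatekeeper.sh/v1beta1\nkind: K8sBlockIngressPathType\nmetadata:\n  name: block-implementation-specific-pathtype\nspec:\n  match:\n    kinds:\n      - apiGroups: [\"networking.k8s.io\"]\n        kinds: [\"Ingress\"]\n  parameters:\n    # namespaces allowed to use the blocked pathType (assumed field name)\n    exemptedNamespaces: [\"trusted-namespace\"]\n    # pathTypes to block everywhere else (assumed field name)\n    blockedTypes: [\"ImplementationSpecific\"]\n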
"},{"location":"examples/psp/","title":"Pod Security Policy (PSP)","text":"
In most clusters today, by default, all resources (e.g. Deployments and ReplicaSets) have permissions to create pods. Kubernetes however provides a more fine-grained authorization policy called Pod Security Policy (PSP).
PSP allows the cluster owner to define the permission of each object, for example creating a pod. If you have PSP enabled on the cluster, and you deploy ingress-nginx, you will need to provide the Deployment with the permissions to create pods.
Before applying any objects, first apply the PSP permissions by running:
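In the ingress-nginx repository the PSP manifest lives under docs/examples/psp, so the command is along these lines (the path is an assumption about your checkout):
$ kubectl apply -f docs/examples/psp/psp.yaml\n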
"},{"location":"examples/rewrite/","title":"Rewrite","text":"
You will need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, and that you have an ingress controller running in your cluster.
Rewriting can be controlled using the following annotations:
| Name | Description | Values |
|---|---|---|
| nginx.ingress.kubernetes.io/rewrite-target | Target URI where the traffic must be redirected | string |
| nginx.ingress.kubernetes.io/ssl-redirect | Indicates if the location section is only accessible via SSL (defaults to True when Ingress contains a Certificate) | bool |
| nginx.ingress.kubernetes.io/force-ssl-redirect | Forces the redirection to HTTPS even if the Ingress is not TLS Enabled | bool |
| nginx.ingress.kubernetes.io/app-root | Defines the Application Root that the Controller must redirect if it's in / context | string |
| nginx.ingress.kubernetes.io/use-regex | Indicates if the paths defined on an Ingress use regular expressions | bool |
"},{"location":"examples/rewrite/#examples","title":"Examples","text":""},{"location":"examples/rewrite/#rewrite-target","title":"Rewrite Target","text":"
Attention
Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.
Note
Captured groups are saved in numbered placeholders, chronologically, in the form $1, $2 ... $n. These placeholders can be used as parameters in the rewrite-target annotation.
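The original example manifest is not reproduced in this extract; a sketch consistent with the rewrites described below (the backend service name and port are assumptions) looks like this:
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: rewrite\n  annotations:\n    nginx.ingress.kubernetes.io/use-regex: \"true\"\n    # $2 is the second capture group from the path below\n    nginx.ingress.kubernetes.io/rewrite-target: /$2\nspec:\n  ingressClassName: nginx\n  rules:\n    - host: rewrite.bar.com\n      http:\n        paths:\n          - path: /something(/|$)(.*)\n            pathType: ImplementationSpecific\n            backend:\n              service:\n                name: http-svc\n                port:\n                  number: 80\n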
In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $2, which is then used as a parameter in the rewrite-target annotation.
For example, the ingress definition above will result in the following rewrites:
rewrite.bar.com/something rewrites to rewrite.bar.com/
rewrite.bar.com/something/ rewrites to rewrite.bar.com/
rewrite.bar.com/something/new rewrites to rewrite.bar.com/new
"},{"location":"examples/static-ip/","title":"Static IPs","text":"
You need a TLS cert and a test HTTP service for this example. You will also need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, and that you have an ingress controller running in your cluster.
"},{"location":"examples/static-ip/#acquiring-an-ip","title":"Acquiring an IP","text":"
Since instances of the ingress-nginx controller actually run on nodes in your cluster, by default nginx Ingresses will only get static IPs if your cloud provider supports static IP assignments to nodes. On GKE/GCE for example, even though nodes get static IPs, the IPs are not retained across upgrades.
To acquire a static IP for the ingress-nginx-controller, simply put it behind a Service of Type=LoadBalancer.
First, create a loadbalancer Service and wait for it to acquire an IP:
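For example (the manifest file name is an assumption; the Service name matches the one referenced in the next step):
$ kubectl create -f static-ip-svc.yaml\n$ kubectl get svc ingress-nginx-lb -w   # wait until EXTERNAL-IP leaves <pending>\n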
Then, update the ingress controller so it adopts the static IP of the Service by passing the --publish-service flag (the example yaml used in the next step already has it set to \"ingress-nginx-lb\").
"},{"location":"examples/static-ip/#retaining-the-ip","title":"Retaining the IP","text":"
You can test retention by deleting the Ingress:
$ kubectl delete ing ingress-nginx\ningress \"ingress-nginx\" deleted\n\n$ kubectl create -f ingress-nginx.yaml\ningress \"ingress-nginx\" created\n\n$ kubectl get ing ingress-nginx\nNAME HOSTS ADDRESS PORTS AGE\ningress-nginx * 104.154.109.191 80, 443 13m\n
Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all Ingresses, because all requests are proxied through the same set of nginx controllers.
"},{"location":"examples/static-ip/#promote-ephemeral-to-static-ip","title":"Promote ephemeral to static IP","text":"
To promote the allocated IP to static, you can update the Service manifest:
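Promotion works differently on each cloud provider. As a GKE/GCE sketch, pin the allocated address in the Service and then reserve it as a static address (the region and resource name are assumptions; the IP matches the example above):
# in the Service manifest, pin the IP the load balancer already holds\nspec:\n  type: LoadBalancer\n  loadBalancerIP: 104.154.109.191\n
$ gcloud compute addresses create ingress-nginx-lb --addresses 104.154.109.191 --region us-central1\n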
"},{"location":"examples/tls-termination/","title":"TLS termination","text":"
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: nginx-test\nspec:\n tls:\n - hosts:\n - foo.bar.com\n # This assumes tls-secret exists and the SSL\n # certificate contains a CN for foo.bar.com\n secretName: tls-secret\n ingressClassName: nginx\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n # This assumes http-svc exists and routes to healthy endpoints\n service:\n name: http-svc\n port:\n number: 80\n
The following command instructs the controller to terminate traffic using the provided TLS cert, and forward un-encrypted HTTP traffic to the test HTTP service.
"},{"location":"user-guide/basic-usage/","title":"Basic usage - host based routing","text":"
ingress-nginx can be used for many use cases, inside various cloud providers, and supports a lot of configurations. In this section you can find a common usage scenario where a single load balancer powered by ingress-nginx will route traffic to 2 different HTTP backend services based on the host name.
First of all, follow the instructions to install ingress-nginx. Then imagine that you need to expose 2 HTTP services already installed, myServiceA and myServiceB, each configured as type: ClusterIP.
Let's say that you want to expose the first at myServiceA.foo.org and the second at myServiceB.foo.org.
If the cluster version is < 1.19, you can create two Ingress resources using the legacy API.
If the cluster uses Kubernetes version >= 1.19.x, then it's suggested to create two Ingress resources using the YAML examples shown below. These examples conform to the networking.k8s.io/v1 API.
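A sketch of the two networking.k8s.io/v1 Ingress resources (the backend service names and ports are assumptions):
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-myservicea\nspec:\n  ingressClassName: nginx\n  rules:\n    - host: myservicea.foo.org\n      http:\n        paths:\n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: myservicea\n                port:\n                  number: 80\n---\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: ingress-myserviceb\nspec:\n  ingressClassName: nginx\n  rules:\n    - host: myserviceb.foo.org\n      http:\n        paths:\n          - path: /\n            pathType: Prefix\n            backend:\n              service:\n                name: myserviceb\n                port:\n                  number: 80\n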
When you apply this YAML, 2 Ingress resources will be created and managed by the ingress-nginx instance. Nginx is configured to automatically discover all Ingresses with the kubernetes.io/ingress.class: "nginx" annotation or where ingressClassName: nginx is present. Please note that the Ingress resource should be placed inside the same namespace as the backend resource.
On many cloud providers ingress-nginx will also create the corresponding load balancer resource. All you have to do is get the external IP and add DNS A records inside your DNS provider that point myservicea.foo.org and myserviceb.foo.org to the nginx external IP. Get the external IP by running:
kubectl get services -n ingress-nginx\n
To test inside minikube refer to this documentation: Set up Ingress on Minikube with the NGINX Ingress Controller
"},{"location":"user-guide/cli-arguments/","title":"Command line arguments","text":"
The following command line arguments are accepted by the Ingress controller executable.
They are set in the container spec of the ingress-nginx-controller Deployment manifest.
Argument Description --annotations-prefix Prefix of the Ingress annotations specific to the NGINX controller. (default \"nginx.ingress.kubernetes.io\") --apiserver-host Address of the Kubernetes API server. Takes the form \"protocol://address:port\". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted. --bucket-factor Bucket factor for native histograms. Value must be > 1 for enabling native histograms. (default 0) --certificate-authority Path to a cert file for the certificate authority. This certificate is used only when the flag --apiserver-host is specified. --configmap Name of the ConfigMap containing custom global configurations for the controller. --controller-class Ingress Class Controller value this Ingress satisfies. The class of an Ingress object is set using the field IngressClassName in Kubernetes clusters version v1.19.0 or higher. The .spec.controller value of the IngressClass referenced in an Ingress Object should be the same value specified here to make this object be watched. --deep-inspect Enables ingress object security deep inspector. (default true) --default-backend-service Service used to serve HTTP requests not matching any known server name (catch-all). Takes the form \"namespace/name\". The controller configures NGINX to forward requests to the first port of this Service. --default-server-port Port to use for exposing the default server (catch-all). (default 8181) --default-ssl-certificate Secret containing a SSL certificate to be used by the default HTTPS server (catch-all). Takes the form \"namespace/name\". --enable-annotation-validation If true, will enable the annotation validation feature. Defaults to true --disable-catch-all Disable support for catch-all Ingresses. (default false) --disable-full-test Disable full test of all merged ingresses at the admission stage and tests the template of the ingress being created or updated (full test of all ingresses is enabled by default). --disable-svc-external-name Disable support for Services of type ExternalName. (default false) --disable-sync-events Disables the creation of 'Sync' Event resources, but still logs them --dynamic-configuration-retries Number of times to retry failed dynamic configuration before failing to sync an ingress. (default 15) --election-id Election id to use for Ingress status updates. (default \"ingress-controller-leader\") --election-ttl Duration a leader election is valid before it's getting re-elected, e.g. 15s, 10m or 1h. (Default: 30s) --enable-metrics Enables the collection of NGINX metrics. (default true) --enable-ssl-chain-completion Autocomplete SSL certificate chains with missing intermediate CA certificates. Certificates uploaded to Kubernetes must have the \"Authority Information Access\" X.509 v3 extension for this to succeed. (default false) --enable-ssl-passthrough Enable SSL Passthrough. (default false) --disable-leader-election Disable Leader Election on Nginx Controller. (default false) --enable-topology-aware-routing Enable topology aware routing feature, needs service object annotation service.kubernetes.io/topology-mode sets to auto. (default false) --exclude-socket-metrics Set of socket request metrics to exclude which won't be exported nor being calculated. The possible socket request metrics to exclude are documented in the monitoring guide e.g. 'nginx_ingress_controller_request_duration_seconds,nginx_ingress_controller_response_size' --health-check-path URL path of the health check endpoint. 
Configured inside the NGINX status server. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default \"/healthz\") --health-check-timeout Time limit, in seconds, for a probe to health-check-path to succeed. (default 10) --healthz-port Port to use for the healthz endpoint. (default 10254) --healthz-host Address to bind the healthz endpoint. --http-port Port to use for servicing HTTP traffic. (default 80) --https-port Port to use for servicing HTTPS traffic. (default 443) --ingress-class Name of the ingress class this controller satisfies. The class of an Ingress object is set using the field IngressClassName in Kubernetes clusters version v1.18.0 or higher or the annotation \"kubernetes.io/ingress.class\" (deprecated). If this parameter is not set, or set to the default value of \"nginx\", it will handle ingresses with either an empty or \"nginx\" class name. --ingress-class-by-name Define if Ingress Controller should watch for Ingress Class by Name together with Controller Class. (default false). --internal-logger-address Address to be used when binding internal syslogger. (default 127.0.0.1:11514) --kubeconfig Path to a kubeconfig file containing authorization and API server information. --length-buckets Set of buckets which will be used for prometheus histogram metrics such as RequestLength, ResponseLength. (default [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]) --max-buckets Maximum number of buckets for native histograms. (default 100) --maxmind-edition-ids Maxmind edition ids to download GeoLite2 Databases. (default \"GeoLite2-City,GeoLite2-ASN\") --maxmind-retries-timeout Maxmind downloading delay between 1st and 2nd attempt, 0s - do not retry to download if something went wrong. (default 0s) --maxmind-retries-count Number of attempts to download the GeoIP DB. (default 1) --maxmind-license-key Maxmind license key to download GeoLite2 Databases. https://blog.maxmind.com/2019/12/significant-changes-to-accessing-and-using-geolite2-databases/ . --maxmind-mirror Maxmind mirror url (example: http://geoip.local/databases. --metrics-per-host Export metrics per-host. (default true) --monitor-max-batch-size Max batch size of NGINX metrics. (default 10000) --post-shutdown-grace-period Additional delay in seconds before controller container exits. (default 10) --profiler-port Port to use for expose the ingress controller Go profiler when it is enabled. (default 10245) --profiling Enable profiling via web interface host:port/debug/pprof/ . (default true) --publish-service Service fronting the Ingress controller. Takes the form \"namespace/name\". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies. --publish-status-address Customized address (or addresses, separated by comma) to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter. --report-node-internal-ip-address Set the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter. (default false) --report-status-classes If true, report status classes in metrics (2xx, 3xx, 4xx and 5xx) instead of full status codes. (default false) --ssl-passthrough-proxy-port Port to use internally for SSL Passthrough. (default 442) --status-port Port to use for the lua HTTP endpoint configuration. 
(default 10246) --status-update-interval Time interval in seconds in which the status should check if an update is required. Default is 60 seconds. (default 60) --stream-port Port to use for the lua TCP/UDP endpoint configuration. (default 10247) --sync-period Period at which the controller forces the repopulation of its local object stores. Disabled by default. --sync-rate-limit Define the sync frequency upper limit. (default 0.3) --tcp-services-configmap Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic. --time-buckets Set of buckets which will be used for prometheus histogram metrics such as RequestTime, ResponseTime. (default [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]) --udp-services-configmap Name of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port name or number. --update-status Update the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true) --update-status-on-shutdown Update the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. (default true) --shutdown-grace-period Seconds to wait after receiving the shutdown signal, before stopping the nginx process. (default 0) --size-buckets Set of buckets which will be used for prometheus histogram metrics such as BytesSent. (default [10, 100, 1000, 10000, 100000, 1e+06, 1e+07]) -v, --v Level number for the log level verbosity --validating-webhook The address to start an admission controller on to validate incoming ingresses. Takes the form \":port\". If not provided, no admission controller is started. --validating-webhook-certificate The path of the validating webhook certificate PEM. --validating-webhook-key The path of the validating webhook key PEM. --version Show release information about the Ingress-Nginx Controller and exit. --watch-ingress-without-class Define if Ingress Controller should also watch for Ingresses without an IngressClass or the annotation specified. (default false) --watch-namespace Namespace the controller watches for updates to Kubernetes objects. This includes Ingresses, Services and all configuration resources. All namespaces are watched if this parameter is left empty. --watch-namespace-selector The controller will watch namespaces whose labels match the given selector. This flag only takes effective when --watch-namespace is empty."},{"location":"user-guide/custom-errors/","title":"Custom errors","text":"
When the custom-http-errors option is enabled, the Ingress controller configures NGINX so that it passes several HTTP headers down to its default-backend in case of error:
| Header | Value |
|---|---|
| X-Code | HTTP status code returned by the request |
| X-Format | Value of the Accept header sent by the client |
| X-Original-URI | URI that caused the error |
| X-Namespace | Namespace where the backend Service is located |
| X-Ingress-Name | Name of the Ingress where the backend is defined |
| X-Service-Name | Name of the Service backing the backend |
| X-Service-Port | Port number of the Service backing the backend |
| X-Request-ID | Unique ID that identifies the request - same as for backend service |
A custom error backend can use this information to return the best possible representation of an error page. For example, if the value of the Accept header sent by the client was application/json, a carefully crafted backend could decide to return the error payload as a JSON document instead of HTML.
Important
The custom backend is expected to return the correct HTTP status code instead of 200. NGINX does not change the response from the custom default backend.
An example of such custom backend is available inside the source repository at images/custom-error-pages.
The default backend is a service which handles all URL paths and hosts the Ingress-NGINX controller doesn't understand (i.e., all the requests that are not mapped with an Ingress).
Basically a default backend exposes two URLs:
/healthz that returns 200
/ that returns 404
Example
The sub-directory /images/custom-error-pages provides an additional service for the purpose of customizing the error pages served via the default backend.
"},{"location":"user-guide/exposing-tcp-udp-services/","title":"Exposing TCP and UDP services","text":"
While the Kubernetes Ingress resource only officially supports routing external HTTP(s) traffic to services, ingress-nginx can be configured to receive external TCP/UDP traffic from non-HTTP protocols and route them to internal services using TCP/UDP port mappings that are specified within a ConfigMap.
To support this, the --tcp-services-configmap and --udp-services-configmap flags can be used to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY]
It is also possible to use a number or the name of the port. The two last fields are optional. Adding PROXY in either or both of the two last fields we can use Proxy Protocol decoding (listen) and/or encoding (proxy_pass) in a TCP service. The first PROXY controls the decode of the proxy protocol and the second PROXY controls the encoding using proxy protocol. This allows an incoming connection to be decoded or an outgoing connection to be encoded. It is also possible to arbitrate between two different proxies by turning on the decode and encode on a TCP service.
The next example shows how to expose the service example-go running in the namespace default in the port 8080 using the port 9000
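A sketch of that ConfigMap; the ConfigMap name must match what --tcp-services-configmap points at (tcp-services in the ingress-nginx namespace is an assumption):
apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: tcp-services\n  namespace: ingress-nginx\ndata:\n  # external port 9000 -> service example-go port 8080 in namespace default\n  9000: \"default/example-go:8080\"\n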
Since 1.9.13 NGINX provides UDP Load Balancing. The next example shows how to expose the service kube-dns running in the namespace kube-system in the port 53 using the port 53
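And the corresponding UDP ConfigMap sketch (same naming assumptions as above):
apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: udp-services\n  namespace: ingress-nginx\ndata:\n  # external port 53 -> kube-dns port 53 in namespace kube-system\n  53: \"kube-system/kube-dns:53\"\n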
"},{"location":"user-guide/fcgi-services/","title":"FastCGI Services","text":"
FastCGI is a binary protocol for interfacing interactive programs with a web server. [...] (Its) aim is to reduce the overhead related to interfacing between web server and CGI programs, allowing a server to handle more web page requests per unit of time.
\u2014 Wikipedia
The ingress-nginx ingress controller can be used to directly expose FastCGI servers. Enabling FastCGI in your Ingress only requires setting the backend-protocol annotation to FCGI, and with a couple more annotations you can customize the way ingress-nginx handles the communication with your FastCGI server.
For most practical use-cases, PHP applications are a good example. PHP is not HTML, so a FastCGI server like php-fpm processes an index.php script to produce the response to a request. See a working example below.
This post in a FastCGI feature issue describes a test for the FastCGI feature. The same test is described below.
"},{"location":"user-guide/fcgi-services/#example-objects-to-expose-a-fastcgi-server-pod","title":"Example Objects to expose a FastCGI server pod","text":""},{"location":"user-guide/fcgi-services/#the-fasctcgi-server-pod","title":"The FasctCGI server pod","text":"
The Pod object example below exposes port 9000, which is the conventional FastCGI port.
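A minimal sketch of such a Pod (the labels are assumptions; the name and image match the notes below):
apiVersion: v1\nkind: Pod\nmetadata:\n  name: example-app\n  labels:\n    app: example-app\nspec:\n  containers:\n    - name: example-app\n      image: php:fpm-alpine\n      ports:\n        - containerPort: 9000\n          name: fastcgi\n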
For this example to work, an HTML response should be received from the FastCGI server being exposed.
An HTTP request to the FastCGI server pod should be sent.
The response should be generated by a PHP script, as that is what we are demonstrating here.
The image we are using here, php:fpm-alpine, does not ship with a ready-to-use PHP script inside it. So we need to provide the image with a simple PHP script for this example to work.
Use kubectl exec to get into the example-app pod
You will land at the path /var/www/html
Create a simple php script there at the path /var/www/html called index.php
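Putting those steps together, a sketch (the PHP one-liner is just an example payload):
$ kubectl exec -it example-app -- /bin/sh\n/var/www/html # echo '<?php var_export($_SERVER); ?>' > index.php\n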
"},{"location":"user-guide/fcgi-services/#send-a-request-to-the-exposed-fastcgi-server","title":"Send a request to the exposed FastCGI server","text":"
You will have to look at the external-ip of the ingress or you have to send the HTTP request to the ClusterIP address of the ingress-nginx controller pod.
To specify an index file, the fastcgi-index annotation value can optionally be set. In the example below, the value is set to index.php. This annotation corresponds to the NGINX fastcgi_index directive.
To specify NGINX fastcgi_param directives, the fastcgi-params-configmap annotation is used, which in turn must lead to a ConfigMap object containing the NGINX fastcgi_param directives as key/values.
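A sketch of such a ConfigMap; the name and the SCRIPT_FILENAME value are assumptions matching the pod above:
apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: example-cm\ndata:\n  SCRIPT_FILENAME: /var/www/html/index.php\n
It would then be referenced from the Ingress via nginx.ingress.kubernetes.io/fastcgi-params-configmap: "example-cm".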
"},{"location":"user-guide/ingress-path-matching/","title":"Ingress Path Matching","text":"
Regular expressions are not supported in the spec.rules.host field. The wildcard character '*' must appear by itself as the first DNS label and matches only a single label. You cannot have a wildcard label by itself (e.g. Host == "*").
Note
Please see the FAQ for Validation Of path
The ingress controller supports case insensitive regular expressions in the spec.rules.http.paths.path field. This can be enabled by setting the nginx.ingress.kubernetes.io/use-regex annotation to true (the default is false).
Hint
Kubernetes only accepts expressions that comply with the RE2 engine syntax. It is possible that valid expressions accepted by NGINX cannot be used with ingress-nginx, because the PCRE library (used in NGINX) supports a wider syntax than RE2. See the RE2 Syntax documentation for differences.
See the description of the use-regex annotation for more details.
In NGINX, regular expressions follow a first match policy. In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks.
Please read the warning before using regular expressions in your ingress definitions.
The following request URIs would match the corresponding location blocks:
test.com/foo/bar/1 matches ~* ^/foo/bar/.+ and will go to service 3.
test.com/foo/bar/ matches ~* ^/foo/bar/ and will go to service 2.
test.com/foo/bar matches ~* ^/foo/bar and will go to service 1.
IMPORTANT NOTES:
If the use-regex OR rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.
The following example describes a case that may inflict unwanted path matching behavior.
This case is expected and is a result of NGINX's first match policy for paths that use the regular expression location modifier. For more information about how a path is chosen, please read the following article: "Understanding Nginx Server and Location Block Selection Algorithms".
A request to test.com/foo/bar/bar would match the ^/foo/bar/[A-Z0-9]{3} location block instead of the longest EXACT matching path.
"},{"location":"user-guide/k8s-122-migration/","title":"FAQ - Migration to Kubernetes 1.22 and apiVersion networking.k8s.io/v1","text":"
If you are using Ingress objects in your cluster (running Kubernetes older than v1.22), and you plan to upgrade to Kubernetes v1.22, this page is relevant to you.
Please read this official blog on deprecated Ingress API versions
Please read this official documentation on the IngressClass object
"},{"location":"user-guide/k8s-122-migration/#what-is-an-ingressclass-and-why-is-it-important-for-users-of-ingress-nginx-controller-now","title":"What is an IngressClass and why is it important for users of ingress-nginx controller now?","text":"
IngressClass is a Kubernetes resource. See the description below. It's important because until now, a default install of the ingress-nginx controller did not require an IngressClass object. Starting with version 1.0.0 of the ingress-nginx controller, an IngressClass object is required.
On clusters with more than one instance of the ingress-nginx controller, all instances of the controllers must be aware of which Ingress objects they serve. The ingressClassName field of an Ingress is the way to let the controller know about that.
kubectl explain ingressclass\n
KIND: IngressClass\nVERSION: networking.k8s.io/v1\nDESCRIPTION:\n IngressClass represents the class of the Ingress, referenced by the Ingress\n Spec. The `ingressclass.kubernetes.io/is-default-class` annotation can be\n used to indicate that an IngressClass should be considered default. When a\n single IngressClass resource has this annotation set to true, new Ingress\n resources without a class specified will be assigned this default class.\nFIELDS:\n apiVersion <string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n kind <string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n metadata <Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n spec <Object>\n Spec is the desired state of the IngressClass. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status`\n
"},{"location":"user-guide/k8s-122-migration/#what-has-caused-this-change-in-behavior","title":"What has caused this change in behavior?","text":"
Until K8s version 1.21, it was possible to create an Ingress resource using deprecated versions of the Ingress API, such as:
extensions/v1beta1
networking.k8s.io/v1beta1
You would get a message about deprecation, but the Ingress resource would get created.
From K8s version 1.22 onwards, you can only access the Ingress API via the stable, networking.k8s.io/v1 API. The reason is explained in the official blog on deprecated ingress API versions.
If you are already using the ingress-nginx controller and then upgrade to Kubernetes 1.22, there are several scenarios where your existing Ingress objects will not work how you expect.
Read this FAQ to check which scenario matches your use case.
"},{"location":"user-guide/k8s-122-migration/#what-is-the-ingressclassname-field","title":"What is the ingressClassName field?","text":"
ingressClassName is a field in the spec of an Ingress object.
kubectl explain ingress.spec.ingressClassName\n
KIND: Ingress\nVERSION: networking.k8s.io/v1\nFIELD: ingressClassName <string>\nDESCRIPTION:\n IngressClassName is the name of the IngressClass cluster resource. The\n associated IngressClass defines which controller will implement the\n resource. This replaces the deprecated `kubernetes.io/ingress.class`\n annotation. For backwards compatibility, when that annotation is set, it\n must be given precedence over this field. The controller may emit a warning\n if the field and annotation have different values. Implementations of this\n API should ignore Ingresses without a class specified. An IngressClass\n resource may be marked as default, which can be used to set a default value\n for this field. For more information, refer to the IngressClass\n documentation.\n
The .spec.ingressClassName behavior has precedence over the deprecated kubernetes.io/ingress.class annotation.
"},{"location":"user-guide/k8s-122-migration/#i-have-only-one-ingress-controller-in-my-cluster-what-should-i-do","title":"I have only one ingress controller in my cluster. What should I do?","text":"
If a single instance of the ingress-nginx controller is the sole Ingress controller running in your cluster, you should add the annotation ingressclass.kubernetes.io/is-default-class to your IngressClass, so that any new Ingress objects will have this one as their default IngressClass.
When using Helm, you can enable this annotation by setting .controller.ingressClassResource.default: true in your Helm chart installation's values file.
If you have any old Ingress objects remaining without an IngressClass set, you can do one or more of the following to make the ingress-nginx controller aware of the old objects:
You can manually set the .spec.ingressClassName field in the manifest of your own Ingress resources.
You can re-create them after setting the ingressclass.kubernetes.io/is-default-class annotation to true on the IngressClass
Alternatively you can make the ingress-nginx controller watch Ingress objects without the ingressClassName field set by starting your ingress-nginx with the flag --watch-ingress-without-class=true. When using Helm, you can configure your Helm chart installation's values file with .controller.watchIngressWithoutClass: true.
We recommend that you create the IngressClass as shown below:
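Based on the defaults discussed in this guide, the IngressClass would look roughly like this (the is-default-class annotation is optional and only appropriate when this is your only controller):
apiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n  name: nginx\n  annotations:\n    ingressclass.kubernetes.io/is-default-class: \"true\"\nspec:\n  # must match the --controller-class of the controller (k8s.io/ingress-nginx is the default)\n  controller: k8s.io/ingress-nginx\n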
and add the value spec.ingressClassName=nginx in your Ingress objects.
"},{"location":"user-guide/k8s-122-migration/#i-have-many-ingress-objects-in-my-cluster-what-should-i-do","title":"I have many ingress objects in my cluster. What should I do?","text":"
If you have a lot of ingress objects without ingressClass configuration, you can run the ingress controller with the flag --watch-ingress-without-class=true.
"},{"location":"user-guide/k8s-122-migration/#what-is-the-flag-watch-ingress-without-class","title":"What is the flag --watch-ingress-without-class?","text":"
It's a flag that is passed, as an argument, to the nginx-ingress-controller executable. In the configuration, it looks like this:
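A sketch of the relevant fragment of the controller Deployment's container spec (other arguments omitted; the values shown are the defaults discussed in this guide):
containers:\n  - name: controller\n    args:\n      - /nginx-ingress-controller\n      - --controller-class=k8s.io/ingress-nginx\n      - --ingress-class=nginx\n      - --watch-ingress-without-class=true\n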
"},{"location":"user-guide/k8s-122-migration/#i-have-more-than-one-controller-in-my-cluster-and-im-already-using-the-annotation","title":"I have more than one controller in my cluster, and I'm already using the annotation","text":"
No problem. This should still keep working, but we highly recommend you to test! Even though kubernetes.io/ingress.class is deprecated, the ingress-nginx controller still understands that annotation. If you want to follow good practice, you should consider migrating to use IngressClass and .spec.ingressClassName.
"},{"location":"user-guide/k8s-122-migration/#i-have-more-than-one-controller-running-in-my-cluster-and-i-want-to-use-the-new-api","title":"I have more than one controller running in my cluster, and I want to use the new API","text":"
In this scenario, you need to create multiple IngressClasses (see the example above).
Be aware that IngressClass works in a very specific way: you will need to change the .spec.controller value in your IngressClass and configure the controller to expect the exact same value.
Let's see an example, supposing that you have three IngressClasses:
IngressClass ingress-nginx-one, with .spec.controller equal to example.com/ingress-nginx1
IngressClass ingress-nginx-two, with .spec.controller equal to example.com/ingress-nginx2
IngressClass ingress-nginx-three, with .spec.controller equal to example.com/ingress-nginx1
For private use, you can also use a controller name that doesn't contain a /, e.g. ingress-nginx1.
When deploying your ingress controllers, you will have to change the --controller-class field as follows:
Ingress-Nginx A, configured to use controller class name example.com/ingress-nginx1
Ingress-Nginx B, configured to use controller class name example.com/ingress-nginx2
When you create an Ingress object with its ingressClassName set to ingress-nginx-two, only controllers looking for the example.com/ingress-nginx2 controller class pay attention to the new object.
Given that Ingress-Nginx B is set up that way, it will serve that object, whereas Ingress-Nginx A ignores the new Ingress.
Bear in mind that if you start Ingress-Nginx B with the command line argument --watch-ingress-without-class=true, it will serve:
Ingresses without any ingressClassName set
Ingresses where the deprecated annotation (kubernetes.io/ingress.class) matches the value set in the command line argument --ingress-class
Ingresses that refer to any IngressClass that has the same spec.controller as configured in --controller-class
If you start Ingress-Nginx B with the command line argument --watch-ingress-without-class=true and you run Ingress-Nginx A with the command line argument --watch-ingress-without-class=false then this is a supported configuration. If you have two ingress-nginx controllers for the same cluster, both running with --watch-ingress-without-class=true then there is likely to be a conflict.
"},{"location":"user-guide/k8s-122-migration/#why-am-i-seeing-ingress-class-annotation-is-not-equal-to-the-expected-by-ingress-controller-in-my-controller-logs","title":"Why am I seeing \"ingress class annotation is not equal to the expected by Ingress Controller\" in my controller logs?","text":"
It is highly likely that you will also see the name of the ingress resource in the same error message. This error message has been observed when using the deprecated annotation (kubernetes.io/ingress.class) in an Ingress resource manifest. It is recommended to use the .spec.ingressClassName field of the Ingress resource to specify the name of the IngressClass of the Ingress you are defining.
"},{"location":"user-guide/miscellaneous/","title":"Miscellaneous","text":""},{"location":"user-guide/miscellaneous/#source-ip-address","title":"Source IP address","text":"
By default NGINX uses the content of the header X-Forwarded-For as the source of truth to get information about the client IP address. This works without issues in L7 if we configure the setting proxy-real-ip-cidr with the correct information of the IP/network address of trusted external load balancer.
If the ingress controller is running in AWS we need to use the VPC IPv4 CIDR.
Another option is to enable proxy protocol using use-proxy-protocol: \"true\".
In this mode NGINX does not use the content of the header to get the source IP address of the connection.
Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. By default, the NGINX path type is Prefix, so as not to break existing definitions.
If you are using a L4 proxy to forward the traffic to the Ingress NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you could use the PROXY Protocol for forwarding traffic, this will send the connection details before forwarding the actual TCP connection itself.
Amongst others, ELBs in AWS and HAProxy support Proxy Protocol.
Support for websockets is provided by NGINX out of the box. No special configuration required.
The only requirement to avoid the close of connections is the increase of the values of proxy-read-timeout and proxy-send-timeout.
The default value of these settings is 60 seconds.
A more adequate value to support websockets is a value higher than one hour (3600 seconds).
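These timeouts can be raised per Ingress with annotations, for example (values in seconds):
metadata:\n  annotations:\n    nginx.ingress.kubernetes.io/proxy-read-timeout: \"3600\"\n    nginx.ingress.kubernetes.io/proxy-send-timeout: \"3600\"\n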
Important
If the Ingress-Nginx Controller is exposed with a service type=LoadBalancer make sure the protocol between the loadbalancer and NGINX is TCP.
"},{"location":"user-guide/miscellaneous/#optimizing-tls-time-to-first-byte-tttfb","title":"Optimizing TLS Time To First Byte (TTTFB)","text":"
NGINX provides the configuration option ssl_buffer_size to allow the optimization of the TLS record size.
This improves the TLS Time To First Byte (TTTFB). The default value in the Ingress controller is 4k (NGINX default is 16k).
"},{"location":"user-guide/miscellaneous/#retries-in-non-idempotent-methods","title":"Retries in non-idempotent methods","text":"
Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error. The previous behavior can be restored using retry-non-idempotent=true in the configuration ConfigMap.
Ingress rules for TLS require the definition of the field host
"},{"location":"user-guide/miscellaneous/#why-endpoints-and-not-services","title":"Why endpoints and not services","text":"
The Ingress-Nginx Controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.
"},{"location":"user-guide/monitoring/","title":"Monitoring","text":"
Two different methods to install and configure Prometheus and Grafana are described in this doc:
* Prometheus and Grafana installation using Pod Annotations. This installs Prometheus and Grafana in the same namespace as NGINX Ingress.
* Prometheus and Grafana installation using Service Monitors. This installs Prometheus and Grafana in two different namespaces. This is the preferred method, and Helm charts support this by default.
"},{"location":"user-guide/monitoring/#prometheus-and-grafana-installation-using-pod-annotations","title":"Prometheus and Grafana installation using Pod Annotations","text":"
This tutorial will show you how to install Prometheus and Grafana for scraping the metrics of the Ingress-Nginx Controller.
Important
This example uses emptyDir volumes for Prometheus and Grafana. This means once the pod gets terminated you will lose all the data.
"},{"location":"user-guide/monitoring/#before-you-begin","title":"Before You Begin","text":"
The Ingress-Nginx Controller should already be deployed according to the deployment instructions here.
The controller should be configured for exporting metrics. This requires three configurations to the controller: enabling metrics, and setting the prometheus.io/scrape and prometheus.io/port pod annotations.
The easiest way to configure the controller for metrics is via helm upgrade. Assuming you have installed the ingress-nginx controller as a Helm release named ingress-nginx, you can simply type the command shown below:
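A sketch of that command, assuming the chart was installed from an ingress-nginx Helm repo alias:
helm upgrade ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --set controller.metrics.enabled=true --set-string controller.podAnnotations.\"prometheus\\.io/scrape\"=\"true\" --set-string controller.podAnnotations.\"prometheus\\.io/port\"=\"10254\"\n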
"},{"location":"user-guide/monitoring/#deploy-and-configure-prometheus-server","title":"Deploy and configure Prometheus Server","text":"
Note that the kustomize bases used in this tutorial are stored in the deploy folder of the GitHub repository kubernetes/ingress-nginx.
The Prometheus server must be configured so that it can discover endpoints of services. If a Prometheus server is already running in the cluster and if it is configured in a way that it can find the ingress controller pods, no extra configuration is needed.
If there is no existing Prometheus server running, the rest of this tutorial will guide you through the steps needed to deploy a properly configured Prometheus server.
Running the following command deploys prometheus in Kubernetes:
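Assuming the kustomize base from the repo's deploy folder mentioned above:
kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/\n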
Open your browser and visit the following URL: http://{node IP address}:{grafana-svc-nodeport} to load the Grafana Dashboard. According to the above example, this URL will be http://10.192.0.3:31086
The username and password are both admin.
After the login you can import the Grafana dashboard from official dashboards, by following steps given below :
Navigate to lefthand panel of grafana
Hover on the gearwheel icon for Configuration and click \"Data Sources\"
Click \"Add data source\"
Select \"Prometheus\"
Enter the details (note: I used http://CLUSTER_IP_PROMETHEUS_SVC:9090)
Left menu (hover over +) -> Dashboard
Click \"Import\"
Enter the copy pasted json from https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
By default request metrics are labeled with the hostname. When you have a wildcard domain ingress, then there will be no metrics for that ingress (to prevent the metrics from exploding in cardinality). To get metrics in this case you need to run the ingress controller with --metrics-per-host=false (you will lose labeling by hostname, but still have labeling by ingress).
"},{"location":"user-guide/monitoring/#grafana-dashboard-using-ingress-resource","title":"Grafana dashboard using ingress resource","text":"
If you want to expose the dashboard for grafana using an ingress resource, then you can :
change the service type of the prometheus-server service and the grafana service to "ClusterIP" like this:
kubectl -n ingress-nginx edit svc grafana\n
This will open the currently deployed service grafana in the default editor configured in your shell (vi/nvim/nano/other)
scroll down to line 34 that looks like \"type: NodePort\"
change it to look like \"type: ClusterIP\". Save and exit.
create an ingress resource with backend as \"grafana\" and port as \"3000\"
Similarly, you can edit the service \"prometheus-server\" and add an ingress resource.
"},{"location":"user-guide/monitoring/#prometheus-and-grafana-installation-using-service-monitors","title":"Prometheus and Grafana installation using Service Monitors","text":"
This document assumes you're using helm and using the kube-prometheus-stack package to install Prometheus and Grafana.
"},{"location":"user-guide/monitoring/#verify-ingress-nginx-controller-is-installed","title":"Verify Ingress-Nginx Controller is installed","text":"
The Ingress-Nginx Controller should already be deployed according to the deployment instructions here.
To check if Ingress controller is deployed,
kubectl get pods -n ingress-nginx\n
The result should look something like: NAME READY STATUS RESTARTS AGE ingress-nginx-controller-7c489dc7b7-ccrf6 1/1 Running 0 19h
"},{"location":"user-guide/monitoring/#verify-prometheus-is-installed","title":"Verify Prometheus is installed","text":"
To check if Prometheus is already deployed, run the following command:
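A generic check (the namespace varies by how kube-prometheus-stack was installed):
kubectl get pods --all-namespaces | grep prometheus\n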
The Ingress NGINX controller needs to be reconfigured to export metrics. This requires three additional configurations to the controller: enabling metrics, enabling the ServiceMonitor, and labeling the ServiceMonitor so Prometheus can discover it.
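A sketch via helm upgrade, assuming the controller release is named ingress-nginx and the chart comes from an ingress-nginx Helm repo alias:
helm upgrade ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --set controller.metrics.enabled=true --set controller.metrics.serviceMonitor.enabled=true --set controller.metrics.serviceMonitor.additionalLabels.release=\"prometheus\"\n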
Here controller.metrics.serviceMonitor.additionalLabels.release=\"prometheus\" should match the name of the helm release of the kube-prometheus-stack
You can validate that the controller has been successfully reconfigured to export metrics by looking at the values of the installed release, like this:
helm get values ingress-nginx --namespace ingress-nginx\n
Since Prometheus is running in a different namespace and not in the ingress-nginx namespace, it would not be able to discover ServiceMonitors in other namespaces when installed. Reconfigure your kube-prometheus-stack Helm installation to set serviceMonitorSelectorNilUsesHelmValues flag to false. By default, Prometheus only discovers PodMonitors within its own namespace. This should be disabled by setting podMonitorSelectorNilUsesHelmValues to false
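A sketch of both the reconfiguration and the port-forward, assuming a kube-prometheus-stack release named prometheus in the monitoring namespace (the prometheus-operated Service is created by the Prometheus operator):
helm upgrade prometheus prometheus-community/kube-prometheus-stack --namespace monitoring --set prometheus.prometheusSpec.serviceMonitorSelectorNilUsesHelmValues=false --set prometheus.prometheusSpec.podMonitorSelectorNilUsesHelmValues=false\nkubectl port-forward -n monitoring svc/prometheus-operated 9090:9090\n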
When you run the above command, you should see something like:
Forwarding from 127.0.0.1:9090 -> 9090\nForwarding from [::1]:9090 -> 9090\n
- Open your browser and visit http://localhost:{port-forwarded-port}; according to the above example it would be http://localhost:9090. "},{"location":"user-guide/monitoring/#connect-and-view-grafana-dashboard","title":"Connect and view Grafana dashboard","text":"
Port forward to Grafana service. Find out the name of the Grafana service by using the following command:
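For example (the namespace and service name depend on your kube-prometheus-stack release; the Grafana Service usually listens on port 80 and forwards to the pod's port 3000):
kubectl get svc -n monitoring | grep grafana\nkubectl port-forward -n monitoring svc/prometheus-grafana 3000:80\n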
When you run the above command, you should see something like:
Forwarding from 127.0.0.1:3000 -> 3000\nForwarding from [::1]:3000 -> 3000\n
- Open your browser and visit http://localhost:{port-forwarded-port}; according to the above example it would be http://localhost:3000. The default username/password is admin/prom-operator. - After the login you can import the Grafana dashboard from official dashboards, by following the steps given below:
Navigate to lefthand panel of grafana
Hover on the gearwheel icon for Configuration and click \"Data Sources\"
Click \"Add data source\"
Select \"Prometheus\"
Enter the details (note: I used http://10.102.72.134:9090 which is the CLUSTER-IP for Prometheus service)
Left menu (hover over +) -> Dashboard
Click \"Import\"
Enter the copy pasted json from https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
nginx_ingress_controller_request_duration_seconds Histogram\\ The request processing (time elapsed between the first bytes were read from the client and the log write after the last bytes were sent to the client) time in seconds (affected by client speed).\\ nginx var: request_time
nginx_ingress_controller_response_duration_seconds Histogram\ The time spent on receiving the response from the upstream server in seconds (affected by client speed when the response is bigger than proxy buffers).\ Note: can be up to several milliseconds bigger than the nginx_ingress_controller_request_duration_seconds because of the different measuring method. nginx var: upstream_response_time
nginx_ingress_controller_header_duration_seconds Histogram\\ The time spent on receiving first header from the upstream server\\ nginx var: upstream_header_time
nginx_ingress_controller_connect_duration_seconds Histogram\\ The time spent on establishing a connection with the upstream server\\ nginx var: upstream_connect_time
nginx_ingress_controller_response_size Histogram\ The response length (including response line, header, and response body)\ nginx var: bytes_sent
nginx_ingress_controller_request_size Histogram\\ The request length (including request line, header, and request body)\\ nginx var: request_length
nginx_ingress_controller_requests Counter\\ The total number of client requests
nginx_ingress_controller_bytes_sent Histogram\\ The number of bytes sent to a client. Deprecated, use nginx_ingress_controller_response_size\\ nginx var: bytes_sent
# HELP nginx_ingress_controller_bytes_sent The number of bytes sent to a client. DEPRECATED! Use nginx_ingress_controller_response_size\n# TYPE nginx_ingress_controller_bytes_sent histogram\n# HELP nginx_ingress_controller_connect_duration_seconds The time spent on establishing a connection with the upstream server\n# TYPE nginx_ingress_controller_connect_duration_seconds histogram\n# HELP nginx_ingress_controller_header_duration_seconds The time spent on receiving first header from the upstream server\n# TYPE nginx_ingress_controller_header_duration_seconds histogram\n# HELP nginx_ingress_controller_request_duration_seconds The request processing time in milliseconds\n# TYPE nginx_ingress_controller_request_duration_seconds histogram\n# HELP nginx_ingress_controller_request_size The request length (including request line, header, and request body)\n# TYPE nginx_ingress_controller_request_size histogram\n# HELP nginx_ingress_controller_requests The total number of client requests.\n# TYPE nginx_ingress_controller_requests counter\n# HELP nginx_ingress_controller_response_duration_seconds The time spent on receiving the response from the upstream server\n# TYPE nginx_ingress_controller_response_duration_seconds histogram\n# HELP nginx_ingress_controller_response_size The response length (including request line, header, and request body)\n# TYPE nginx_ingress_controller_response_size histogram\n
"},{"location":"user-guide/monitoring/#nginx-process-metrics","title":"Nginx process metrics","text":"
# HELP nginx_ingress_controller_nginx_process_connections current number of client connections with state {active, reading, writing, waiting}\n# TYPE nginx_ingress_controller_nginx_process_connections gauge\n# HELP nginx_ingress_controller_nginx_process_connections_total total number of connections with state {accepted, handled}\n# TYPE nginx_ingress_controller_nginx_process_connections_total counter\n# HELP nginx_ingress_controller_nginx_process_cpu_seconds_total Cpu usage in seconds\n# TYPE nginx_ingress_controller_nginx_process_cpu_seconds_total counter\n# HELP nginx_ingress_controller_nginx_process_num_procs number of processes\n# TYPE nginx_ingress_controller_nginx_process_num_procs gauge\n# HELP nginx_ingress_controller_nginx_process_oldest_start_time_seconds start time in seconds since 1970/01/01\n# TYPE nginx_ingress_controller_nginx_process_oldest_start_time_seconds gauge\n# HELP nginx_ingress_controller_nginx_process_read_bytes_total number of bytes read\n# TYPE nginx_ingress_controller_nginx_process_read_bytes_total counter\n# HELP nginx_ingress_controller_nginx_process_requests_total total number of client requests\n# TYPE nginx_ingress_controller_nginx_process_requests_total counter\n# HELP nginx_ingress_controller_nginx_process_resident_memory_bytes number of bytes of memory in use\n# TYPE nginx_ingress_controller_nginx_process_resident_memory_bytes gauge\n# HELP nginx_ingress_controller_nginx_process_virtual_memory_bytes number of bytes of memory in use\n# TYPE nginx_ingress_controller_nginx_process_virtual_memory_bytes gauge\n# HELP nginx_ingress_controller_nginx_process_write_bytes_total number of bytes written\n# TYPE nginx_ingress_controller_nginx_process_write_bytes_total counter\n
# HELP nginx_ingress_controller_build_info A metric with a constant '1' labeled with information about the build.\n# TYPE nginx_ingress_controller_build_info gauge\n# HELP nginx_ingress_controller_check_success Cumulative number of Ingress controller syntax check operations\n# TYPE nginx_ingress_controller_check_success counter\n# HELP nginx_ingress_controller_config_hash Running configuration hash actually running\n# TYPE nginx_ingress_controller_config_hash gauge\n# HELP nginx_ingress_controller_config_last_reload_successful Whether the last configuration reload attempt was successful\n# TYPE nginx_ingress_controller_config_last_reload_successful gauge\n# HELP nginx_ingress_controller_config_last_reload_successful_timestamp_seconds Timestamp of the last successful configuration reload.\n# TYPE nginx_ingress_controller_config_last_reload_successful_timestamp_seconds gauge\n# HELP nginx_ingress_controller_ssl_certificate_info Hold all labels associated to a certificate\n# TYPE nginx_ingress_controller_ssl_certificate_info gauge\n# HELP nginx_ingress_controller_success Cumulative number of Ingress controller reload operations\n# TYPE nginx_ingress_controller_success counter\n# HELP nginx_ingress_controller_orphan_ingress Gauge reporting status of ingress orphanity, 1 indicates orphaned ingress. 'namespace' is the string used to identify namespace of ingress, 'ingress' for ingress name and 'type' for 'no-service' or 'no-endpoint' of orphanity\n# TYPE nginx_ingress_controller_orphan_ingress gauge\n
# HELP nginx_ingress_controller_admission_config_size The size of the tested configuration\n# TYPE nginx_ingress_controller_admission_config_size gauge\n# HELP nginx_ingress_controller_admission_render_duration The processing duration of ingresses rendering by the admission controller (float seconds)\n# TYPE nginx_ingress_controller_admission_render_duration gauge\n# HELP nginx_ingress_controller_admission_render_ingresses The number of ingresses rendered by the admission controller\n# TYPE nginx_ingress_controller_admission_render_ingresses gauge\n# HELP nginx_ingress_controller_admission_roundtrip_duration The complete duration of the admission controller at the time to process a new event (float seconds)\n# TYPE nginx_ingress_controller_admission_roundtrip_duration gauge\n# HELP nginx_ingress_controller_admission_tested_duration The processing duration of the admission controller tests (float seconds)\n# TYPE nginx_ingress_controller_admission_tested_duration gauge\n# HELP nginx_ingress_controller_admission_tested_ingresses The number of ingresses processed by the admission controller\n# TYPE nginx_ingress_controller_admission_tested_ingresses gauge\n
By default, deploying multiple Ingress controllers (e.g., ingress-nginx & gce) will result in all controllers simultaneously racing to update Ingress status fields in confusing ways.
To fix this problem, use IngressClasses. The kubernetes.io/ingress.class annotation is not recommended, as it may be deprecated in the future; prefer the ingress.spec.ingressClassName field instead. Note that when the controller is deployed with scope.enabled, the IngressClass resource field is not used.
If all ingress controllers respect IngressClasses (e.g. multiple instances of ingress-nginx v1.0), you can deploy two Ingress controllers by granting them control over two different IngressClasses, then selecting one of the two IngressClasses with ingressClassName.
First, ensure the --controller-class= and --ingress-class flags are set to something different on each ingress controller. If your additional ingress controller is to be installed in a namespace where one or more ingress-nginx controllers are already installed, then you also need to specify a different, unique --election-id for the new instance of the controller.
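For illustration, a second instance could be installed with Helm values along these lines (a sketch; the release name, namespace, election ID and class names are placeholders, assuming the official ingress-nginx chart):
helm install ingress-nginx-2 ingress-nginx/ingress-nginx --namespace ingress-nginx-2 --set controller.electionID=ingress-controller-2-leader --set controller.ingressClassResource.name=nginx-two --set controller.ingressClassResource.controllerValue=k8s.io/ingress-nginx-two --set controller.ingressClass=nginx-two\n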
When running multiple ingress-nginx controllers, an Ingress with an unset class annotation will only be processed if one of the controllers uses the default --controller-class value (see the IsValid method in internal/ingress/annotations/class/main.go); otherwise, the class annotation becomes required.
If --controller-class is set to the default value of k8s.io/ingress-nginx, the controller will monitor Ingresses with no class annotation and Ingresses with annotation class set to nginx. Use a non-default value for --controller-class to ensure that the controller only satisfies the specific class of Ingresses.
"},{"location":"user-guide/multiple-ingress/#using-the-kubernetesioingressclass-annotation-in-deprecation","title":"Using the kubernetes.io/ingress.class annotation (in deprecation)","text":"
If you're running multiple ingress controllers where one or more do not support IngressClasses, you must specify the annotation kubernetes.io/ingress.class: \"nginx\" in all ingresses that you would like ingress-nginx to claim.
You can do so by starting the controller with a different --ingress-class value (for example internal-nginx) and then setting the corresponding kubernetes.io/ingress.class: \"internal-nginx\" annotation on your Ingresses.
To reiterate, setting the annotation to any value which does not match a valid ingress class will force the Ingress-Nginx Controller to ignore your Ingress. If you are only running a single Ingress-Nginx Controller, this can be achieved by setting the annotation to any value except \"nginx\" or an empty string.
Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.
Anytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret.
Warning
Ensure that the certificate order is leaf->intermediate->root, otherwise the controller will not be able to import the certificate, and you'll see this error in the logs W1012 09:15:45.920000 6 backend_ssl.go:46] Error obtaining X.509 certificate: unexpected error creating SSL Cert: certificate and private key does not have a matching public key: tls: private key does not match public key
You can generate a self-signed certificate and private key with:
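For example (a typical openssl invocation; ${HOST}, ${KEY_FILE}, ${CERT_FILE} and ${CERT_NAME} are placeholders to substitute):
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj \"/CN=${HOST}/O=${HOST}\" -addext \"subjectAltName = DNS:${HOST}\"\nkubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}\n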
NGINX provides the option to configure a server as a catch-all with server_name for requests that do not match any of the configured server names. This configuration works out-of-the-box for HTTP traffic. For HTTPS, a certificate is naturally required.
For this reason the Ingress controller provides the flag --default-ssl-certificate. The secret referred to by this flag contains the default certificate to be used when accessing the catch-all server. If this flag is not provided NGINX will use a self-signed certificate.
For instance, if you have a TLS secret foo-tls in the default namespace, add --default-ssl-certificate=default/foo-tls in the nginx-controller deployment.
If the tls: section is not set, NGINX will provide the default certificate but will not force HTTPS redirect.
On the other hand, if the tls: section is set - even without specifying a secretName option - NGINX will force HTTPS redirect.
To force redirects for Ingresses that do not specify a TLS-block at all, take a look at force-ssl-redirect in ConfigMap.
The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by default. This is required to enable passthrough backends in Ingress objects.
Warning
This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.
SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation, which requires compatible clients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back and forth between the backend and the client.
If there is no hostname matching the requested host name, the request is handed over to NGINX on the configured passthrough proxy port (default: 442), which proxies the request to the default backend.
Note
Unlike HTTP backends, traffic to Passthrough backends is sent to the clusterIP of the backing Service instead of individual Endpoints.
"},{"location":"user-guide/tls/#http-strict-transport-security","title":"HTTP Strict Transport Security","text":"
HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header, that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.
HSTS is enabled by default.
To disable this behavior use hsts: \"false\" in the configuration ConfigMap.
"},{"location":"user-guide/tls/#server-side-https-enforcement-through-redirect","title":"Server-side HTTPS enforcement through redirect","text":"
By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.
This can be disabled globally using ssl-redirect: \"false\" in the NGINX config map, or per-Ingress with the nginx.ingress.kubernetes.io/ssl-redirect: \"false\" annotation in the particular resource.
Tip
When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: \"true\" annotation in the particular resource.
"},{"location":"user-guide/tls/#automated-certificate-management-with-cert-manager","title":"Automated Certificate Management with cert-manager","text":"
cert-manager automatically requests missing or expired certificates from a range of supported issuers (including Let's Encrypt) by monitoring ingress resources.
To set up cert-manager you should take a look at this full example.
To enable it for an ingress resource you have to deploy cert-manager, configure a certificate issuer, and update the manifest:
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-demo\n annotations:\n cert-manager.io/issuer: \"letsencrypt-staging\" # Replace this with a production issuer once you've tested it\n [..]\nspec:\n tls:\n - hosts:\n - ingress-demo.example.com\n secretName: ingress-demo-tls\n [...]\n
"},{"location":"user-guide/tls/#default-tls-version-and-ciphers","title":"Default TLS Version and Ciphers","text":"
To provide the most secure baseline configuration possible,
ingress-nginx defaults to using TLS 1.2 and 1.3 only, with a secure set of TLS ciphers.
The default configuration, though secure, does not support some older browsers and operating systems.
For instance, TLS 1.1+ is only enabled by default from Android 5.0 on. At the time of writing, May 2018, approximately 15% of Android devices are not compatible with ingress-nginx's default configuration.
To change this default behavior, use a ConfigMap.
A sample ConfigMap fragment to allow these older clients to connect could look something like the following (generated using the Mozilla SSL Configuration Generator, \"old\" profile):
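A minimal sketch, assuming the generator's \"old\" profile supplies the actual cipher list (the ConfigMap name is a placeholder and the ssl-ciphers value is abbreviated here):
kind: ConfigMap\napiVersion: v1\nmetadata:\n  name: nginx-config\ndata:\n  ssl-protocols: \"TLSv1 TLSv1.1 TLSv1.2 TLSv1.3\"\n  ssl-ciphers: \"ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:...\"  # paste the full list from the generator\n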
ConfigMap: using a Configmap to set global configurations in NGINX.
Annotations: use this if you want a specific configuration for a particular Ingress rule.
Custom template: when more specific settings are required, like open_file_cache, adjusting listen options such as rcvbuf, or when it is not possible to change the configuration through the ConfigMap.
"},{"location":"user-guide/nginx-configuration/annotations-risk/","title":"Annotations Scope and Risk","text":"Group Annotation Risk Scope Aliases server-alias High ingress Allowlist allowlist-source-range Medium location BackendProtocol backend-protocol Low location BasicDigestAuth auth-realm Medium location BasicDigestAuth auth-secret Medium location BasicDigestAuth auth-secret-type Low location BasicDigestAuth auth-type Low location Canary canary Low ingress Canary canary-by-cookie Medium ingress Canary canary-by-header Medium ingress Canary canary-by-header-pattern Medium ingress Canary canary-by-header-value Medium ingress Canary canary-weight Low ingress Canary canary-weight-total Low ingress CertificateAuth auth-tls-error-page High location CertificateAuth auth-tls-match-cn High location CertificateAuth auth-tls-pass-certificate-to-upstream Low location CertificateAuth auth-tls-secret Medium location CertificateAuth auth-tls-verify-client Medium location CertificateAuth auth-tls-verify-depth Low location ClientBodyBufferSize client-body-buffer-size Low location ConfigurationSnippet configuration-snippet Critical location Connection connection-proxy-header Low location CorsConfig cors-allow-credentials Low ingress CorsConfig cors-allow-headers Medium ingress CorsConfig cors-allow-methods Medium ingress CorsConfig cors-allow-origin Medium ingress CorsConfig cors-expose-headers Medium ingress CorsConfig cors-max-age Low ingress CorsConfig enable-cors Low ingress CustomHTTPErrors custom-http-errors Low location CustomHeaders custom-headers Medium location DefaultBackend default-backend Low location Denylist denylist-source-range Medium location DisableProxyInterceptErrors disable-proxy-intercept-errors Low location EnableGlobalAuth enable-global-auth Low location ExternalAuth auth-always-set-cookie Low location ExternalAuth auth-cache-duration Medium location ExternalAuth auth-cache-key Medium location ExternalAuth auth-keepalive Low location ExternalAuth auth-keepalive-requests Low location ExternalAuth auth-keepalive-share-vars Low location ExternalAuth auth-keepalive-timeout Low location ExternalAuth auth-method Low location ExternalAuth auth-proxy-set-headers Medium location ExternalAuth auth-request-redirect Medium location ExternalAuth auth-response-headers Medium location ExternalAuth auth-signin High location ExternalAuth auth-signin-redirect-param Medium location ExternalAuth auth-snippet Critical location ExternalAuth auth-url High location FastCGI fastcgi-index Medium location FastCGI fastcgi-params-configmap Medium location HTTP2PushPreload http2-push-preload Low location LoadBalancing load-balance Low location Logs enable-access-log Low location Logs enable-rewrite-log Low location Mirror mirror-host High ingress Mirror mirror-request-body Low ingress Mirror mirror-target High ingress ModSecurity enable-modsecurity Low ingress ModSecurity enable-owasp-core-rules Low ingress ModSecurity modsecurity-snippet Critical ingress ModSecurity modsecurity-transaction-id High ingress Opentelemetry enable-opentelemetry Low location Opentelemetry opentelemetry-operation-name Medium location Opentelemetry opentelemetry-trust-incoming-span Low location Proxy proxy-body-size Medium location Proxy proxy-buffer-size Low location Proxy proxy-buffering Low location Proxy proxy-buffers-number Low location Proxy proxy-connect-timeout Low location Proxy proxy-cookie-domain Medium location Proxy proxy-cookie-path Medium location Proxy proxy-http-version Low location Proxy proxy-max-temp-file-size 
Low location Proxy proxy-next-upstream Medium location Proxy proxy-next-upstream-timeout Low location Proxy proxy-next-upstream-tries Low location Proxy proxy-read-timeout Low location Proxy proxy-redirect-from Medium location Proxy proxy-redirect-to Medium location Proxy proxy-request-buffering Low location Proxy proxy-send-timeout Low location ProxySSL proxy-ssl-ciphers Medium ingress ProxySSL proxy-ssl-name High ingress ProxySSL proxy-ssl-protocols Low ingress ProxySSL proxy-ssl-secret Medium ingress ProxySSL proxy-ssl-server-name Low ingress ProxySSL proxy-ssl-verify Low ingress ProxySSL proxy-ssl-verify-depth Low ingress RateLimit limit-allowlist Low location RateLimit limit-burst-multiplier Low location RateLimit limit-connections Low location RateLimit limit-rate Low location RateLimit limit-rate-after Low location RateLimit limit-rpm Low location RateLimit limit-rps Low location Redirect from-to-www-redirect Low location Redirect permanent-redirect Medium location Redirect permanent-redirect-code Low location Redirect temporal-redirect Medium location Redirect temporal-redirect-code Low location Rewrite app-root Medium location Rewrite force-ssl-redirect Medium location Rewrite preserve-trailing-slash Medium location Rewrite rewrite-target Medium ingress Rewrite ssl-redirect Low location Rewrite use-regex Low location SSLCipher ssl-ciphers Low ingress SSLCipher ssl-prefer-server-ciphers Low ingress SSLPassthrough ssl-passthrough Low ingress Satisfy satisfy Low location ServerSnippet server-snippet Critical ingress ServiceUpstream service-upstream Low ingress SessionAffinity affinity Low ingress SessionAffinity affinity-canary-behavior Low ingress SessionAffinity affinity-mode Medium ingress SessionAffinity session-cookie-change-on-failure Low ingress SessionAffinity session-cookie-conditional-samesite-none Low ingress SessionAffinity session-cookie-domain Medium ingress SessionAffinity session-cookie-expires Medium ingress SessionAffinity session-cookie-max-age Medium ingress SessionAffinity session-cookie-name Medium ingress SessionAffinity session-cookie-path Medium ingress SessionAffinity session-cookie-samesite Low ingress SessionAffinity session-cookie-secure Low ingress StreamSnippet stream-snippet Critical ingress UpstreamHashBy upstream-hash-by High location UpstreamHashBy upstream-hash-by-subset Low location UpstreamHashBy upstream-hash-by-subset-size Low location UpstreamVhost upstream-vhost Low location UsePortInRedirects use-port-in-redirects Low location XForwardedPrefix x-forwarded-prefix Medium location"},{"location":"user-guide/nginx-configuration/annotations/","title":"Annotations","text":"
You can add these Kubernetes annotations to specific Ingress objects to customize their behavior.
Tip
Annotation keys and values can only be strings. Other types, such as boolean or numeric values, must be quoted, e.g. \"true\", \"false\", \"100\".
Note
The annotation prefix can be changed using the --annotations-prefix command line argument, but the default is nginx.ingress.kubernetes.io, as described in the table below.
Name type nginx.ingress.kubernetes.io/app-root string nginx.ingress.kubernetes.io/affinity cookie nginx.ingress.kubernetes.io/affinity-mode \"balanced\" or \"persistent\" nginx.ingress.kubernetes.io/affinity-canary-behavior \"sticky\" or \"legacy\" nginx.ingress.kubernetes.io/auth-realm string nginx.ingress.kubernetes.io/auth-secret string nginx.ingress.kubernetes.io/auth-secret-type string nginx.ingress.kubernetes.io/auth-type \"basic\" or \"digest\" nginx.ingress.kubernetes.io/auth-tls-secret string nginx.ingress.kubernetes.io/auth-tls-verify-depth number nginx.ingress.kubernetes.io/auth-tls-verify-client string nginx.ingress.kubernetes.io/auth-tls-error-page string nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream \"true\" or \"false\" nginx.ingress.kubernetes.io/auth-tls-match-cn string nginx.ingress.kubernetes.io/auth-url string nginx.ingress.kubernetes.io/auth-cache-key string nginx.ingress.kubernetes.io/auth-cache-duration string nginx.ingress.kubernetes.io/auth-keepalive number nginx.ingress.kubernetes.io/auth-keepalive-share-vars \"true\" or \"false\" nginx.ingress.kubernetes.io/auth-keepalive-requests number nginx.ingress.kubernetes.io/auth-keepalive-timeout number nginx.ingress.kubernetes.io/auth-proxy-set-headers string nginx.ingress.kubernetes.io/auth-snippet string nginx.ingress.kubernetes.io/enable-global-auth \"true\" or \"false\" nginx.ingress.kubernetes.io/backend-protocol string nginx.ingress.kubernetes.io/canary \"true\" or \"false\" nginx.ingress.kubernetes.io/canary-by-header string nginx.ingress.kubernetes.io/canary-by-header-value string nginx.ingress.kubernetes.io/canary-by-header-pattern string nginx.ingress.kubernetes.io/canary-by-cookie string nginx.ingress.kubernetes.io/canary-weight number nginx.ingress.kubernetes.io/canary-weight-total number nginx.ingress.kubernetes.io/client-body-buffer-size string nginx.ingress.kubernetes.io/configuration-snippet string nginx.ingress.kubernetes.io/custom-http-errors []int nginx.ingress.kubernetes.io/custom-headers string nginx.ingress.kubernetes.io/default-backend string nginx.ingress.kubernetes.io/enable-cors \"true\" or \"false\" nginx.ingress.kubernetes.io/cors-allow-origin string nginx.ingress.kubernetes.io/cors-allow-methods string nginx.ingress.kubernetes.io/cors-allow-headers string nginx.ingress.kubernetes.io/cors-expose-headers string nginx.ingress.kubernetes.io/cors-allow-credentials \"true\" or \"false\" nginx.ingress.kubernetes.io/cors-max-age number nginx.ingress.kubernetes.io/force-ssl-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/from-to-www-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/http2-push-preload \"true\" or \"false\" nginx.ingress.kubernetes.io/limit-connections number nginx.ingress.kubernetes.io/limit-rps number nginx.ingress.kubernetes.io/permanent-redirect string nginx.ingress.kubernetes.io/permanent-redirect-code number nginx.ingress.kubernetes.io/temporal-redirect string nginx.ingress.kubernetes.io/temporal-redirect-code number nginx.ingress.kubernetes.io/preserve-trailing-slash \"true\" or \"false\" nginx.ingress.kubernetes.io/proxy-body-size string nginx.ingress.kubernetes.io/proxy-cookie-domain string nginx.ingress.kubernetes.io/proxy-cookie-path string nginx.ingress.kubernetes.io/proxy-connect-timeout number nginx.ingress.kubernetes.io/proxy-send-timeout number nginx.ingress.kubernetes.io/proxy-read-timeout number nginx.ingress.kubernetes.io/proxy-next-upstream string nginx.ingress.kubernetes.io/proxy-next-upstream-timeout number 
nginx.ingress.kubernetes.io/proxy-next-upstream-tries number nginx.ingress.kubernetes.io/proxy-request-buffering string nginx.ingress.kubernetes.io/proxy-redirect-from string nginx.ingress.kubernetes.io/proxy-redirect-to string nginx.ingress.kubernetes.io/proxy-http-version \"1.0\" or \"1.1\" nginx.ingress.kubernetes.io/proxy-ssl-secret string nginx.ingress.kubernetes.io/proxy-ssl-ciphers string nginx.ingress.kubernetes.io/proxy-ssl-name string nginx.ingress.kubernetes.io/proxy-ssl-protocols string nginx.ingress.kubernetes.io/proxy-ssl-verify string nginx.ingress.kubernetes.io/proxy-ssl-verify-depth number nginx.ingress.kubernetes.io/proxy-ssl-server-name string nginx.ingress.kubernetes.io/enable-rewrite-log \"true\" or \"false\" nginx.ingress.kubernetes.io/rewrite-target URI nginx.ingress.kubernetes.io/satisfy string nginx.ingress.kubernetes.io/server-alias string nginx.ingress.kubernetes.io/server-snippet string nginx.ingress.kubernetes.io/service-upstream \"true\" or \"false\" nginx.ingress.kubernetes.io/session-cookie-change-on-failure \"true\" or \"false\" nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none \"true\" or \"false\" nginx.ingress.kubernetes.io/session-cookie-domain string nginx.ingress.kubernetes.io/session-cookie-expires string nginx.ingress.kubernetes.io/session-cookie-max-age string nginx.ingress.kubernetes.io/session-cookie-name string nginx.ingress.kubernetes.io/session-cookie-path string nginx.ingress.kubernetes.io/session-cookie-samesite string nginx.ingress.kubernetes.io/session-cookie-secure string nginx.ingress.kubernetes.io/ssl-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/ssl-passthrough \"true\" or \"false\" nginx.ingress.kubernetes.io/stream-snippet string nginx.ingress.kubernetes.io/upstream-hash-by string nginx.ingress.kubernetes.io/x-forwarded-prefix string nginx.ingress.kubernetes.io/load-balance string nginx.ingress.kubernetes.io/upstream-vhost string nginx.ingress.kubernetes.io/denylist-source-range CIDR nginx.ingress.kubernetes.io/whitelist-source-range CIDR nginx.ingress.kubernetes.io/proxy-buffering string nginx.ingress.kubernetes.io/proxy-buffers-number number nginx.ingress.kubernetes.io/proxy-buffer-size string nginx.ingress.kubernetes.io/proxy-max-temp-file-size string nginx.ingress.kubernetes.io/ssl-ciphers string nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers \"true\" or \"false\" nginx.ingress.kubernetes.io/connection-proxy-header string nginx.ingress.kubernetes.io/enable-access-log \"true\" or \"false\" nginx.ingress.kubernetes.io/enable-opentelemetry \"true\" or \"false\" nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span \"true\" or \"false\" nginx.ingress.kubernetes.io/use-regex bool nginx.ingress.kubernetes.io/enable-modsecurity bool nginx.ingress.kubernetes.io/enable-owasp-core-rules bool nginx.ingress.kubernetes.io/modsecurity-transaction-id string nginx.ingress.kubernetes.io/modsecurity-snippet string nginx.ingress.kubernetes.io/mirror-request-body string nginx.ingress.kubernetes.io/mirror-target string nginx.ingress.kubernetes.io/mirror-host string"},{"location":"user-guide/nginx-configuration/annotations/#canary","title":"Canary","text":"
In some cases, you may want to \"canary\" a new set of changes by sending a small number of requests to a different service than the production service. The canary annotation enables the Ingress spec to act as an alternative service for requests to route to depending on the rules applied. The following annotations to configure canary can be enabled after nginx.ingress.kubernetes.io/canary: \"true\" is set:
nginx.ingress.kubernetes.io/canary-by-header: The header to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to always, it will be routed to the canary. When the header is set to never, it will never be routed to the canary. For any other value, the header will be ignored and the request compared against the other canary rules by precedence.
nginx.ingress.kubernetes.io/canary-by-header-value: The header value to match for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to this value, it will be routed to the canary. For any other header value, the header will be ignored and the request compared against the other canary rules by precedence. This annotation has to be used together with nginx.ingress.kubernetes.io/canary-by-header. The annotation is an extension of the nginx.ingress.kubernetes.io/canary-by-header to allow customizing the header value instead of using hardcoded values. It doesn't have any effect if the nginx.ingress.kubernetes.io/canary-by-header annotation is not defined.
nginx.ingress.kubernetes.io/canary-by-header-pattern: This works the same way as canary-by-header-value except it does PCRE regex matching. Note that when canary-by-header-value is set this annotation will be ignored. When the given regex causes an error during request processing, the request is considered as not matching.
nginx.ingress.kubernetes.io/canary-by-cookie: The cookie to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the cookie value is set to always, it will be routed to the canary. When the cookie is set to never, it will never be routed to the canary. For any other value, the cookie will be ignored and the request compared against the other canary rules by precedence.
nginx.ingress.kubernetes.io/canary-weight: The integer based (0 - <weight-total>) percent of random requests that should be routed to the service specified in the canary Ingress. A weight of 0 implies that no requests will be sent to the service in the Canary ingress by this canary rule. A weight of <weight-total> implies all requests will be sent to the alternative service specified in the Ingress. <weight-total> defaults to 100, and can be increased via nginx.ingress.kubernetes.io/canary-weight-total.
nginx.ingress.kubernetes.io/canary-weight-total: The total weight of traffic. If unspecified, it defaults to 100.
Canary rules are evaluated in order of precedence. Precedence is as follows: canary-by-header -> canary-by-cookie -> canary-weight
Note that when you mark an ingress as canary, then all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except nginx.ingress.kubernetes.io/load-balance, nginx.ingress.kubernetes.io/upstream-hash-by, and annotations related to session affinity. If you want to restore the original behavior of canaries when session affinity was ignored, set nginx.ingress.kubernetes.io/affinity-canary-behavior annotation with value legacy on the canary ingress definition.
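As a sketch, a canary Ingress that routes roughly 10% of traffic to a hypothetical echo-canary Service could look like this (host, names and port are placeholders):
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: echo-canary\n  annotations:\n    nginx.ingress.kubernetes.io/canary: \"true\"\n    nginx.ingress.kubernetes.io/canary-weight: \"10\"\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: echo.example.com\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: echo-canary\n            port:\n              number: 80\n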
Known Limitations
Currently a maximum of one canary ingress can be applied per Ingress rule.
In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. Set the annotation nginx.ingress.kubernetes.io/rewrite-target to the path expected by the service.
If the Application Root is exposed in a different path and needs to be redirected, set the annotation nginx.ingress.kubernetes.io/app-root to redirect requests for /.
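For illustration, a hypothetical Ingress that strips a /something prefix via a capture group might look like the following (host, path and service names are placeholders):
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: rewrite-demo\n  annotations:\n    nginx.ingress.kubernetes.io/use-regex: \"true\"\n    nginx.ingress.kubernetes.io/rewrite-target: /$2\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: rewrite.example.com\n    http:\n      paths:\n      - path: /something(/|$)(.*)\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: http-svc\n            port:\n              number: 80\n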
The annotation nginx.ingress.kubernetes.io/affinity enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server. The only affinity type available for NGINX is cookie.
The annotation nginx.ingress.kubernetes.io/affinity-mode defines the stickiness of a session. Setting this to balanced (default) will redistribute some sessions if a deployment gets scaled up, therefore rebalancing the load on the servers. Setting this to persistent will not rebalance sessions to new servers, therefore providing maximum stickiness.
The annotation nginx.ingress.kubernetes.io/affinity-canary-behavior defines the behavior of canaries when session affinity is enabled. Setting this to sticky (default) will ensure that users that were served by canaries, will continue to be served by canaries. Setting this to legacy will restore original canary behavior, when session affinity was ignored.
Attention
If more than one Ingress is defined for a host and at least one Ingress uses nginx.ingress.kubernetes.io/affinity: cookie, then only paths on the Ingress using nginx.ingress.kubernetes.io/affinity will use session cookie affinity. All paths defined on other Ingresses for the host will be load balanced through the random selection of a backend server.
If you use the cookie affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation nginx.ingress.kubernetes.io/session-cookie-name. The default is to create a cookie named 'INGRESSCOOKIE'.
The NGINX annotation nginx.ingress.kubernetes.io/session-cookie-path defines the path that will be set on the cookie. This is optional unless the annotation nginx.ingress.kubernetes.io/use-regex is set to true; session cookie paths do not support regex.
Use nginx.ingress.kubernetes.io/session-cookie-domain to set the Domain attribute of the sticky cookie.
Use nginx.ingress.kubernetes.io/session-cookie-samesite to apply a SameSite attribute to the sticky cookie. Browser accepted values are None, Lax, and Strict. Some browsers reject cookies with SameSite=None, including those created before the SameSite=None specification (e.g. Chrome 5X). Other browsers mistakenly treat SameSite=None cookies as SameSite=Strict (e.g. Safari running on OSX 14). To omit SameSite=None from browsers with these incompatibilities, add the annotation nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none: \"true\".
Use nginx.ingress.kubernetes.io/session-cookie-expires to control when the cookie expires; its value is a number of seconds until the cookie expires.
Use nginx.ingress.kubernetes.io/session-cookie-path to control the cookie path when use-regex is set to true.
Use nginx.ingress.kubernetes.io/session-cookie-change-on-failure to control the cookie change after request failure.
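Putting these together, a minimal sticky-session sketch could use annotations like the following (cookie name and max-age are illustrative):
nginx.ingress.kubernetes.io/affinity: \"cookie\"\nnginx.ingress.kubernetes.io/session-cookie-name: \"route\"\nnginx.ingress.kubernetes.io/session-cookie-max-age: \"172800\"\n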
It is possible to add authentication by adding additional annotations in the Ingress rule. The source of the authentication is a secret that contains usernames and passwords.
The name of the Secret that contains the usernames and passwords which are granted access to the paths defined in the Ingress rules. This annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.
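As a sketch, assuming a Secret named basic-auth generated from an htpasswd file, the corresponding annotations could be:
nginx.ingress.kubernetes.io/auth-type: basic\nnginx.ingress.kubernetes.io/auth-secret: basic-auth\nnginx.ingress.kubernetes.io/auth-realm: \"Authentication Required\"\n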
NGINX supports load balancing by client-server mapping based on consistent hashing for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The ketama consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes.
There is a special mode of upstream hashing called subset. In this mode, upstream servers are grouped into subsets, and stickiness works by mapping keys to a subset instead of individual upstream servers. A specific server is then chosen uniformly at random from the selected subset. It provides a balance between stickiness and load distribution.
To enable consistent hashing for a backend:
nginx.ingress.kubernetes.io/upstream-hash-by: the nginx variable, text value or any combination thereof to use for consistent hashing. For example: nginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri\" or nginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri$host\" or nginx.ingress.kubernetes.io/upstream-hash-by: \"${request_uri}-text-value\" to consistently hash upstream requests by the current request URI.
\"subset\" hashing can be enabled setting nginx.ingress.kubernetes.io/upstream-hash-by-subset: \"true\". This maps requests to subset of nodes instead of a single one. nginx.ingress.kubernetes.io/upstream-hash-by-subset-size determines the size of each subset (default 3).
This is similar to load-balance in ConfigMap, but configures load balancing algorithm per ingress.
Note that nginx.ingress.kubernetes.io/upstream-hash-by takes preference over this. If neither this nor nginx.ingress.kubernetes.io/upstream-hash-by is set, the globally configured load balancing algorithm is used as a fallback.
This configuration setting allows you to control the value for host in the following statement: proxy_set_header Host $host, which forms part of the location block. This is useful if you need to call the upstream server by something other than $host.
It is possible to enable Client Certificate Authentication using additional annotations in Ingress Rule.
Client Certificate Authentication is applied per host and it is not possible to specify rules that differ for individual paths.
To enable, add the annotation nginx.ingress.kubernetes.io/auth-tls-secret: namespace/secretName. This secret must contain a key named ca.crt holding the full Certificate Authority chain that is allowed to authenticate against this Ingress.
You can further customize client certificate authentication and behavior with these annotations:
nginx.ingress.kubernetes.io/auth-tls-verify-depth: The validation depth between the provided client certificate and the Certification Authority chain. (default: 1)
nginx.ingress.kubernetes.io/auth-tls-verify-client: Enables verification of client certificates. Possible values are:
on: Request a client certificate that must be signed by a certificate that is included in the secret key ca.crt of the secret specified by nginx.ingress.kubernetes.io/auth-tls-secret: namespace/secretName. Failed certificate verification will result in a status code 400 (Bad Request) (default)
off: Don't request client certificates and don't do client certificate verification.
optional: Do optional client certificate validation against the CAs from auth-tls-secret. The request fails with status code 400 (Bad Request) when a certificate is provided that is not signed by the CA. When no or an otherwise invalid certificate is provided, the request does not fail, but instead the verification result is sent to the upstream service.
optional_no_ca: Do optional client certificate validation, but do not fail the request when the client certificate is not signed by the CAs from auth-tls-secret. Certificate verification result is sent to the upstream service.
nginx.ingress.kubernetes.io/auth-tls-error-page: The URL/page that the user should be redirected to in case of a certificate authentication error.
nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: Indicates if the received certificates should be passed or not to the upstream server in the header ssl-client-cert. Possible values are \"true\" or \"false\" (default).
nginx.ingress.kubernetes.io/auth-tls-match-cn: Adds a sanity check for the CN of the client certificate that is sent over using a string / regex starting with \"CN=\", example: \"CN=myvalidclient\". If the certificate CN sent during mTLS does not match your string / regex it will fail with status code 403. Another way of using this is by adding multiple options in your regex, example: \"CN=(option1|option2|myvalidclient)\". In this case, as long as one of the options in the brackets matches the certificate CN then you will receive a 200 status code.
The following headers are sent to the upstream service according to the auth-tls-* annotations:
ssl-client-issuer-dn: The issuer information of the client certificate. Example: \"CN=My CA\"
ssl-client-subject-dn: The subject information of the client certificate. Example: \"CN=My Client\"
ssl-client-verify: The result of the client verification. Possible values: \"SUCCESS\", \"FAILED:<reason>\"
ssl-client-cert: The full client certificate in PEM format. Will only be sent when nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream is set to \"true\". Example: -----BEGIN%20CERTIFICATE-----%0A...---END%20CERTIFICATE-----%0A
Example
Please check the client-certs example.
Attention
TLS with Client Authentication is not possible in Cloudflare and might result in unexpected behavior.
Cloudflare only allows Authenticated Origin Pulls and is required to use their own certificate: https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/
Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial: https://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls
It is possible to authenticate to a proxied HTTPS backend with certificate using additional annotations in Ingress Rule.
nginx.ingress.kubernetes.io/proxy-ssl-secret: secretName: Specifies a Secret with the certificate tls.crt, key tls.key in PEM format used for authentication to a proxied HTTPS server. It should also contain trusted CA certificates ca.crt in PEM format used to verify the certificate of the proxied HTTPS server. This annotation expects the Secret name in the form \"namespace/secretName\".
nginx.ingress.kubernetes.io/proxy-ssl-verify: Enables or disables verification of the proxied HTTPS server certificate. (default: off)
nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: Sets the verification depth in the proxied HTTPS server certificates chain. (default: 1)
nginx.ingress.kubernetes.io/proxy-ssl-ciphers: Specifies the enabled ciphers for requests to a proxied HTTPS server. The ciphers are specified in the format understood by the OpenSSL library.
nginx.ingress.kubernetes.io/proxy-ssl-name: Allows setting proxy_ssl_name. This overrides the server name used to verify the certificate of the proxied HTTPS server. This value is also passed through SNI when a connection is established to the proxied HTTPS server.
nginx.ingress.kubernetes.io/proxy-ssl-protocols: Enables the specified protocols for requests to a proxied HTTPS server.
nginx.ingress.kubernetes.io/proxy-ssl-server-name: Enables passing of the server name through TLS Server Name Indication extension (SNI, RFC 6066) when establishing a connection with the proxied HTTPS server.
Be aware this can be dangerous in multi-tenant clusters, as it can lead to people with otherwise limited permissions being able to retrieve all secrets on the cluster. The recommended mitigation for this threat is to disable this feature, so it may not work for you. See CVE-2021-25742 and the related issue on github for more information.
Like the custom-http-errors value in the ConfigMap, this annotation will set the NGINX proxy_intercept_errors directive, but only for the NGINX location associated with this ingress. If a default backend annotation is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend). Different ingresses can specify different sets of error codes. Even if multiple ingress objects share the same hostname, this annotation can be used to intercept different error codes for each ingress (for example, different error codes to be intercepted for different paths on the same hostname, if each path is on a different ingress). If custom-http-errors is also specified globally, the error values specified in this annotation will override the global value for the given ingress' hostname and path.
This annotation is of the form nginx.ingress.kubernetes.io/custom-headers: custom-headers-configmap to specify a configmap name that contains custom headers. This annotation uses more_set_headers nginx directive.
This annotation is of the form nginx.ingress.kubernetes.io/default-backend: <svc name> to specify a custom default backend. This <svc name> is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. In case the service has multiple ports, the first one is the one which will receive the backend traffic.
This service will be used to handle the response when the configured service in the Ingress rule does not have any active endpoints. It will also be used to handle the error responses if both this annotation and the custom-http-errors annotation are set.
To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation nginx.ingress.kubernetes.io/enable-cors: \"true\". This will add a section in the server location enabling this functionality.
CORS can be controlled with the following annotations:
nginx.ingress.kubernetes.io/cors-allow-methods: Controls which methods are accepted.
This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case).
Default: GET, PUT, POST, DELETE, PATCH, OPTIONS
Example: nginx.ingress.kubernetes.io/cors-allow-methods: \"PUT, GET, POST, OPTIONS\"
nginx.ingress.kubernetes.io/cors-allow-headers: Controls which headers are accepted.
This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -.
nginx.ingress.kubernetes.io/cors-allow-origin: Controls the accepted Origin for CORS. This is a multi-valued field, separated by ','. It also supports single level wildcard subdomains and follows this format: http(s)://*.foo.bar, http(s)://*.bar.foo:8080 or http(s)://*.abc.bar.foo:9000 - Example: nginx.ingress.kubernetes.io/cors-allow-origin: \"https://*.origin-site.com:4443, http://*.origin-site.com, https://example.org:1199\"
nginx.ingress.kubernetes.io/cors-allow-credentials: Controls if credentials can be passed during CORS operations.
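A combined sketch (the origin and method list are illustrative):
nginx.ingress.kubernetes.io/enable-cors: \"true\"\nnginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com\"\nnginx.ingress.kubernetes.io/cors-allow-methods: \"GET, POST, OPTIONS\"\nnginx.ingress.kubernetes.io/cors-allow-credentials: \"true\"\n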
Allows the definition of one or more aliases in the server definition of the NGINX configuration using the annotation nginx.ingress.kubernetes.io/server-alias: \"<alias 1>,<alias 2>\". This will create a server with the same configuration, but adding new values to the server_name directive.
Note
A server-alias name cannot conflict with the hostname of an existing server. If it does, the server-alias annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take precedence over the alias configuration.
For more information please see the server_name documentation.
Using the annotation nginx.ingress.kubernetes.io/server-snippet it is possible to add custom configuration in the server configuration block.
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n annotations:\n nginx.ingress.kubernetes.io/server-snippet: |\n set $agentflag 0;\n\n if ($http_user_agent ~* \"(Mobile)\" ){\n set $agentflag 1;\n }\n\n if ( $agentflag = 1 ) {\n return 301 https://m.example.com;\n }\n
Attention
This annotation can be used only once per host.
"},{"location":"user-guide/nginx-configuration/annotations/#client-body-buffer-size","title":"Client Body Buffer Size","text":"
Sets buffer size for reading client request body per location. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is applied to each location provided in the ingress rule.
Note
The annotation value must be given in a format understood by Nginx.
To use an existing service that provides authentication the Ingress rule can be annotated with nginx.ingress.kubernetes.io/auth-url to indicate the URL where the HTTP request should be sent.
nginx.ingress.kubernetes.io/auth-url: \"URL to the authentication service\"\n
Additionally it is possible to set:
nginx.ingress.kubernetes.io/auth-keepalive: <Connections> to specify the maximum number of keepalive connections to auth-url. Only takes effect when no variables are used in the host part of the URL. Defaults to 0 (keepalive disabled).
Note: this does not work with an HTTP/2 listener because of a limitation in Lua subrequests; the use-http2 configuration should be disabled!
nginx.ingress.kubernetes.io/auth-keepalive-share-vars: Whether to share Nginx variables among the current request and the auth request. Example use case is to track requests: when set to \"true\" X-Request-ID HTTP header will be the same for the backend and the auth request. Defaults to \"false\".
nginx.ingress.kubernetes.io/auth-keepalive-requests: <Requests> to specify the maximum number of requests that can be served through one keepalive connection. Defaults to 1000 and only applied if auth-keepalive is set to higher than 0.
nginx.ingress.kubernetes.io/auth-keepalive-timeout: <Timeout> to specify a duration in seconds which an idle keepalive connection to an upstream server will stay open. Defaults to 60 and only applied if auth-keepalive is set to higher than 0.
nginx.ingress.kubernetes.io/auth-method: <Method> to specify the HTTP method to use.
nginx.ingress.kubernetes.io/auth-signin: <SignIn_URL> to specify the location of the error page.
nginx.ingress.kubernetes.io/auth-signin-redirect-param: <SignIn_URL> to specify the URL parameter in the error page which should contain the original URL for a failed signin request.
nginx.ingress.kubernetes.io/auth-response-headers: <Response_Header_1, ..., Response_Header_n> to specify headers to pass to backend once authentication request completes.
nginx.ingress.kubernetes.io/auth-proxy-set-headers: <ConfigMap> the name of a ConfigMap that specifies headers to pass to the authentication service
nginx.ingress.kubernetes.io/auth-request-redirect: <Request_Redirect_URL> to specify the X-Auth-Request-Redirect header value.
nginx.ingress.kubernetes.io/auth-cache-key: <Cache_Key> enables caching for auth requests. Specify a lookup key for auth responses, e.g. $remote_user$http_authorization. Each server and location has its own keyspace; hence a cached response is only valid on a per-server and per-location basis.
nginx.ingress.kubernetes.io/auth-cache-duration: <Cache_duration> to specify a caching time for auth responses based on their response codes, e.g. 200 202 30m. See proxy_cache_valid for details. You may specify multiple, comma-separated values: 200 202 10m, 401 5m. Defaults to 200 202 401 5m.
nginx.ingress.kubernetes.io/auth-always-set-cookie: <Boolean_Flag> to set a cookie returned by auth request. By default, the cookie will be set only if an upstream reports with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308.
nginx.ingress.kubernetes.io/auth-snippet: <Auth_Snippet> to specify a custom snippet to use with external authentication, e.g.
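nginx.ingress.kubernetes.io/auth-snippet: |\n  proxy_set_header Foo-Header 42;  # illustrative directive, not a real requirement\n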
Note: nginx.ingress.kubernetes.io/auth-snippet is an optional annotation. However, it may only be used in conjunction with nginx.ingress.kubernetes.io/auth-url and will be ignored if nginx.ingress.kubernetes.io/auth-url is not set
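Putting it together, a common pattern is to front an external authentication service such as oauth2-proxy (the URLs shown are illustrative):
nginx.ingress.kubernetes.io/auth-url: \"https://$host/oauth2/auth\"\nnginx.ingress.kubernetes.io/auth-signin: \"https://$host/oauth2/start?rd=$escaped_request_uri\"\n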
By default the controller redirects all requests to an existing service that provides authentication if global-auth-url is set in the NGINX ConfigMap. If you want to disable this behavior for that ingress, you can use enable-global-auth: \"false\" in the NGINX ConfigMap. nginx.ingress.kubernetes.io/enable-global-auth: indicates if GlobalExternalAuth configuration should be applied or not to this Ingress rule. The default value is \"true\".
These annotations define limits on connections and transmission rates. These can be used to mitigate DDoS Attacks.
nginx.ingress.kubernetes.io/limit-connections: number of concurrent connections allowed from a single IP address. A 503 error is returned when exceeding this limit.
nginx.ingress.kubernetes.io/limit-rps: number of requests accepted from a given IP each second. The burst limit is set to this limit multiplied by the burst multiplier, the default multiplier is 5. When clients exceed this limit, limit-req-status-code default: 503 is returned.
nginx.ingress.kubernetes.io/limit-rpm: number of requests accepted from a given IP each minute. The burst limit is set to this limit multiplied by the burst multiplier, the default multiplier is 5. When clients exceed this limit, limit-req-status-code default: 503 is returned.
nginx.ingress.kubernetes.io/limit-burst-multiplier: multiplier of the limit rate for burst size. The default burst multiplier is 5; this annotation overrides the default multiplier. When clients exceed this limit, limit-req-status-code default: 503 is returned.
nginx.ingress.kubernetes.io/limit-rate-after: initial number of kilobytes after which the further transmission of a response to a given connection will be rate limited. This feature must be used with proxy-buffering enabled.
nginx.ingress.kubernetes.io/limit-rate: number of kilobytes per second allowed to send to a given connection. The zero value disables rate limiting. This feature must be used with proxy-buffering enabled.
nginx.ingress.kubernetes.io/limit-allowlist: client IP source ranges to be excluded from rate-limiting. The value is a comma separated list of CIDRs.
If you specify multiple annotations in a single Ingress rule, limits are applied in the order limit-connections, limit-rpm, limit-rps.
To configure settings globally for all Ingress rules, the limit-rate-after and limit-rate values may be set in the NGINX ConfigMap. The value set in an Ingress annotation will override the global setting.
The client IP address will be set based on the use of PROXY protocol or from the X-Forwarded-For header value when use-forwarded-headers is enabled.
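For example, to allow 5 requests per second per client IP with a burst of 15 (illustrative values):
nginx.ingress.kubernetes.io/limit-rps: \"5\"\nnginx.ingress.kubernetes.io/limit-burst-multiplier: \"3\"\n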
This annotation allows you to return a permanent redirect (Return Code 301) instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com would redirect everything to Google.
This annotation allows you to modify the status code used for permanent redirects. For example nginx.ingress.kubernetes.io/permanent-redirect-code: '308' would return your permanent-redirect with a 308.
This annotation allows you to return a temporal redirect (Return Code 302) instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/temporal-redirect: https://www.google.com would redirect everything to Google with a Return Code of 302 (Moved Temporarily).
This annotation allows you to modify the status code used for temporal redirects. For example nginx.ingress.kubernetes.io/temporal-redirect-code: '307' would return your temporal-redirect with a 307.
The annotation nginx.ingress.kubernetes.io/ssl-passthrough instructs the controller to send TLS connections directly to the backend instead of letting NGINX decrypt the communication. See also TLS/HTTPS in the User guide.
Note
SSL Passthrough is disabled by default and requires starting the controller with the --enable-ssl-passthrough flag.
Attention
Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on the layer 7 (HTTP), using SSL Passthrough invalidates all the other annotations set on an Ingress object.
By default the Ingress-Nginx Controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration.
The nginx.ingress.kubernetes.io/service-upstream annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port.
This can be desirable for things like zero-downtime deployments. See issue #257.
If the service-upstream annotation is specified the following things should be taken into consideration:
Sticky Sessions will not work as only round-robin load balancing is supported.
The proxy_next_upstream directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.
"},{"location":"user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect","title":"Server-side HTTPS enforcement through redirect","text":"
By default the controller redirects (308) to HTTPS if TLS is enabled for that ingress. If you want to disable this behavior globally, you can use ssl-redirect: \"false\" in the NGINX ConfigMap.
To configure this feature for specific ingress resources, you can use the nginx.ingress.kubernetes.io/ssl-redirect: \"false\" annotation in the particular resource.
When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: \"true\" annotation in the particular resource.
To preserve the trailing slash in the URI with ssl-redirect, set nginx.ingress.kubernetes.io/preserve-trailing-slash: \"true\" annotation for that particular resource.
In some scenarios, it is required to redirect from www.domain.com to domain.com or vice versa; which way the redirect is performed depends on the configured host value in the Ingress object.
For example, if .spec.rules.host is configured with a value like www.example.com, then this annotation will redirect from example.com to www.example.com. If .spec.rules.host is configured with a value like example.com, so without a www, then this annotation will redirect from www.example.com to example.com instead.
To enable this feature use the annotation nginx.ingress.kubernetes.io/from-to-www-redirect: \"true\"
Attention
If at some point a new Ingress is created with a host equal to one of the options (like domain.com) the annotation will be omitted.
Attention
For HTTPS to HTTPS redirects, it is mandatory that the SSL certificate defined in the Secret, located in the TLS section of the Ingress, contains both FQDNs in the common name of the certificate.
You can specify blocked client IP source ranges through the nginx.ingress.kubernetes.io/denylist-source-range annotation. The value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.
To configure this setting globally for all Ingress rules, the denylist-source-range value may be set in the NGINX ConfigMap.
Note
Adding an annotation to an Ingress rule overrides any global restriction.
You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.
To configure this setting globally for all Ingress rules, the whitelist-source-range value may be set in the NGINX ConfigMap.
Note
Adding an annotation to an Ingress rule overrides any global restriction.
Using the configuration ConfigMap it is possible to set the default global timeout for connections to the upstream servers. In some scenarios it is required to have different values. To allow this we provide annotations that allow this customization:
If you indicate Backend Protocol as GRPC or GRPCS, the following grpc values will be set and inherited from proxy timeouts:
grpc_connect_timeout=5s, from nginx.ingress.kubernetes.io/proxy-connect-timeout
grpc_send_timeout=60s, from nginx.ingress.kubernetes.io/proxy-send-timeout
grpc_read_timeout=60s, from nginx.ingress.kubernetes.io/proxy-read-timeout
Note: All timeout values are unitless and in seconds, e.g. nginx.ingress.kubernetes.io/proxy-read-timeout: \"120\" sets a valid 120-second proxy read timeout.
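For example (illustrative values):
nginx.ingress.kubernetes.io/proxy-connect-timeout: \"10\"\nnginx.ingress.kubernetes.io/proxy-send-timeout: \"120\"\nnginx.ingress.kubernetes.io/proxy-read-timeout: \"120\"\n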
The annotations nginx.ingress.kubernetes.io/proxy-redirect-from and nginx.ingress.kubernetes.io/proxy-redirect-to will set the first and second parameters of NGINX's proxy_redirect directive respectively. It is possible to set the text that should be changed in the Location and Refresh header fields of a proxied server response.
Setting \"off\" or \"default\" in the annotation nginx.ingress.kubernetes.io/proxy-redirect-from disables nginx.ingress.kubernetes.io/proxy-redirect-to, otherwise, both annotations must be used in unison. Note that each annotation must be a string without spaces.
By default the value of each annotation is \"off\".
"},{"location":"user-guide/nginx-configuration/annotations/#custom-max-body-size","title":"Custom max body size","text":"
For NGINX, a 413 error will be returned to the client when the size in a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter client_max_body_size.
To configure this setting globally for all Ingress rules, the proxy-body-size value may be set in the NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:
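nginx.ingress.kubernetes.io/proxy-body-size: \"8m\"  # illustrative size\n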
Enable or disable proxy buffering proxy_buffering. By default proxy buffering is disabled in the NGINX config.
To configure this setting globally for all Ingress rules, the proxy-buffering value may be set in the NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:
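nginx.ingress.kubernetes.io/proxy-buffering: \"on\"\n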
Sets the number of the buffers in proxy_buffers used for reading the first part of the response received from the proxied server. By default the number of proxy buffers is set to 4.
To configure this setting globally, set proxy-buffers-number in NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:
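nginx.ingress.kubernetes.io/proxy-buffers-number: \"4\"  # illustrative count\n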
Sets the size of the buffer proxy_buffer_size used for reading the first part of the response received from the proxied server. By default the proxy buffer size is set to \"4k\".
To configure this setting globally, set proxy-buffer-size in NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:
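nginx.ingress.kubernetes.io/proxy-buffer-size: \"8k\"  # illustrative size\n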
"},{"location":"user-guide/nginx-configuration/annotations/#proxy-max-temp-file-size","title":"Proxy max temp file size","text":"
When buffering of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the proxy_buffer_size and proxy_buffers directives, a part of the response can be saved to a temporary file. This annotation sets the maximum size of the temporary file via the proxy_max_temp_file_size directive. The size of data written to the temporary file at a time is set by the proxy_temp_file_write_size directive.
The zero value disables buffering of responses to temporary files.
To use custom values in an Ingress rule, define this annotation:
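nginx.ingress.kubernetes.io/proxy-max-temp-file-size: \"1024m\"\n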
Using this annotation sets the proxy_http_version that the Nginx reverse proxy will use to communicate with the backend. By default this is set to \"1.1\".
The following annotation will set the ssl_prefer_server_ciphers directive at the server level. This configuration specifies that server ciphers should be preferred over client ciphers when using the SSLv3 and TLS protocols.
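nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers: \"true\"\n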
Access logs are enabled by default, but in some scenarios access logs might be required to be disabled for a given ingress. To do this, use the annotation:
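nginx.ingress.kubernetes.io/enable-access-log: \"false\"\n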
Rewrite logs are not enabled by default. In some scenarios it could be required to enable NGINX rewrite logs. Note that rewrite logs are sent to the error_log file at the notice level. To enable this feature use the annotation:
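nginx.ingress.kubernetes.io/enable-rewrite-log: \"true\"\n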
OpenTelemetry can be enabled or disabled globally through the ConfigMap, but this will sometimes need to be overridden to enable it or disable it for a specific ingress (e.g. to turn off telemetry of external health check endpoints).
The option to trust incoming trace spans can be enabled or disabled globally through the ConfigMap but this will sometimes need to be overridden to enable it or disable it for a specific ingress (e.g. only enable on a private endpoint)
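For example, to enable OpenTelemetry for a single Ingress:
nginx.ingress.kubernetes.io/enable-opentelemetry: \"true\"\n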
ModSecurity is an open-source web application firewall. It can be enabled for a particular set of ingress locations. The ModSecurity module must first be enabled by enabling ModSecurity in the ConfigMap. Note this will enable ModSecurity for all paths, and each path must be disabled manually.
Note: If you use both enable-owasp-core-rules and modsecurity-snippet annotations together, only the modsecurity-snippet will take effect. If you wish to include the OWASP Core Rule Set or recommended configuration simply use the include statement:
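A sketch of such a snippet, using the CRS and recommended-configuration files at the paths referenced elsewhere in this document:
nginx.ingress.kubernetes.io/modsecurity-snippet: |\n Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf\n Include /etc/nginx/modsecurity/modsecurity.conf\n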
Using the backend-protocol annotation it is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions.) Valid values: HTTP, HTTPS, AUTO_HTTP, GRPC, GRPCS and FCGI.
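For example:
nginx.ingress.kubernetes.io/backend-protocol: \"HTTPS\"\n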
When using this annotation with the NGINX annotation nginx.ingress.kubernetes.io/affinity of type cookie, nginx.ingress.kubernetes.io/session-cookie-path must also be set; session cookie paths do not support regex.
Using the nginx.ingress.kubernetes.io/use-regex annotation will indicate whether or not the paths defined on an Ingress use regular expressions. The default value is false.
The following will indicate that regular expression paths are being used:
nginx.ingress.kubernetes.io/use-regex: \"true\"\n
The following will indicate that regular expression paths are not being used:
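nginx.ingress.kubernetes.io/use-regex: \"false\"\n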
When this annotation is set to true, the case-insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.
Additionally, if the rewrite-target annotation is used on any Ingress for a given host, then the case-insensitive regular expression location modifier will be enforced on ALL paths for that host regardless of what Ingress they are defined on.
Please read about ingress path matching before using this modifier.
By default, a request would need to satisfy all authentication requirements in order to be allowed. By using this annotation, requests that satisfy either any or all authentication requirements are allowed, based on the configuration value.
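For example, to allow requests that satisfy any one of the configured authentication requirements:
nginx.ingress.kubernetes.io/satisfy: \"any\"\n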
Enables a request to be mirrored to a mirror backend. Responses from mirror backends are ignored. This feature is useful for seeing how requests behave against "test" backends.
By default, the Host header of mirrored requests is set to the host part of the URI in the "mirror-target" annotation. You can override it with the "mirror-host" annotation:
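A sketch with a placeholder mirror backend:
nginx.ingress.kubernetes.io/mirror-target: https://test.env.com/$request_uri\nnginx.ingress.kubernetes.io/mirror-host: \"test.env.com\"\n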
ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.
The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system components for the nginx-controller.
In order to overwrite nginx-controller configuration values as seen in config.go, you can add key-value pairs to the data section of the config-map. For Example:
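A minimal sketch, assuming the controller was deployed with the default ConfigMap name and namespace used by the Helm chart (adjust both to match your deployment; the keys shown are illustrative):
apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: ingress-nginx-controller\n  namespace: ingress-nginx\ndata:\n  map-hash-bucket-size: \"128\"\n  keep-alive: \"75\"\n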
The keys and values in a ConfigMap can only be strings. This means that if we want a boolean value, we need to quote it, like "true" or "false". The same goes for numbers, like "100".
\"Slice\" types (defined below as []string or []int) can be provided as a comma-delimited string.
Enables users to consume cross-namespace resources in annotations, if it was previously enabled. default: true
Annotations that may be impacted with this change:
* auth-secret
* auth-proxy-set-header
* auth-tls-secret
* fastcgi-params-configmap
* proxy-ssl-secret
This option will be defaulted to false in the next major release
Enables Ingress to parse and add -snippet annotations/directives created by the user. default: false
Warning: We recommend enabling this option only if you TRUST users with permission to create Ingress objects, as this may allow a user to add restricted configurations to the final nginx.conf file
This option will be defaulted to false in the next major release
Contains a comma-separated value of chars/words that are well known of being used to abuse Ingress configuration and must be blocked. Related to CVE-2021-25742
When an annotation is detected with a value that matches one of the blocked bad words, the whole Ingress won't be configured.
default: \"\"
When doing this, the default blocklist is overridden, which means that the Ingress admin should add all the words that should be blocked; here is a suggested block list.
Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value \"true\".
This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use upstream-keepalive-requests instead.
Sets the maximum number of requests (including push requests) that can be served through one HTTP/2 connection, after which the next client request will lead to connection closing and the need of establishing a new connection.
Enables or disables the HSTS header in servers running SSL. HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be accessed using HTTPS instead of HTTP. It provides protection against protocol downgrade attacks and cookie theft.
Sets the time, in seconds, during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.
Setting keep-alive: '0' will most likely break concurrent http/2 requests due to changes introduced with nginx 1.19.7
Changes with nginx 1.19.7 16 Feb 2021\n\n *) Change: connections handling in HTTP/2 has been changed to better\n match HTTP/1.x; the \"http2_recv_timeout\", \"http2_idle_timeout\", and\n \"http2_max_requests\" directives have been removed, the\n \"keepalive_timeout\" and \"keepalive_requests\" directives should be\n used instead.\n
References: nginx change log nginx issue tracker nginx mailing list
Sets if the escape parameter is disabled entirely for character escaping in variables ("true") or controlled by log-format-escape-json ("false"). A separate key sets the nginx log format.
If disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections at a time. default: true
Sets the maximum number of simultaneous connections that can be opened by each worker process. 0 will use the value of max-worker-open-files. default: 16384
Tip
Using 0 in scenarios of high load improves performance at the cost of increasing RAM utilization (even on idle).
Sets the maximum number of files that can be opened by each worker process. The default of 0 means \"max open files (system's limit) - 1024\". default: 0
If use-forwarded-headers or use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default IP/network address of your external load balancer. Can be a comma-separated list of CIDR blocks. default: \"0.0.0.0/0\"
Sets the maximum size of the server names hash tables used in server names, map directive's values, MIME types, names of request header strings, etc.
Instructs NGINX to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option), allowing a kernel to distribute incoming connections between worker processes. default: true
Sets the ciphers list to enable. The ciphers are specified in the format understood by the OpenSSL library.
The default cipher list is: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384.
The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.
DHE-based ciphers will not be available until a DH parameter is configured (see Custom DH parameters for perfect forward secrecy).
Please check the Mozilla SSL Configuration Generator.
Note: ssl_prefer_server_ciphers directive will be enabled by default for http context.
Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string. To create a ticket: openssl rand 80 | openssl enc -A -base64
Sets the TLS session ticket key; by default, a randomly generated key is used.
Enables or disables the PROXY protocol to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).
Sets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection. default: 5s
Enables or disables \"geoip\" module that creates variables with values depending on the client IP address, using the precompiled MaxMind databases. default: true
Note: MaxMind legacy databases are discontinued and will not receive updates after 2019-01-02, cf. discontinuation notice. Consider use-geoip2 below.
Enables the geoip2 module for NGINX. Since 0.27.0 and due to a change in the MaxMind databases a license is required to have access to the databases. For this reason, it is required to define a new flag --maxmind-license-key in the ingress controller deployment to download the databases needed during the initialization of the ingress controller. Alternatively, it is possible to use a volume to mount the files /etc/ingress-controller/geoip/GeoLite2-City.mmdb and /etc/ingress-controller/geoip/GeoLite2-ASN.mmdb, avoiding the overhead of the download.
Important
If the feature is enabled but the files are missing, GeoIP2 will not be enabled.
Enables or disables compression of HTTP responses using the \"brotli\" module. The default mime type list to compress is: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component. default: false
Note: Brotli does not work in Safari < 11. For more information see https://caniuse.com/#feat=brotli
Sets the MIME types in addition to \"text/html\" to compress. The special value \"*\" matches any MIME type. Responses with the \"text/html\" type are always compressed if use-gzip is enabled. default: application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component.
Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. default: 320
Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed. default: 10000
Sets parameters for a shared memory zone that will keep states for various keys of limit_conn_zone. The default is "$binary_remote_addr", whose size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.
Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.
Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.
If true, NGINX passes the incoming X-Forwarded-* headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers.
If false, NGINX ignores incoming X-Forwarded-* headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or it's behind a L3/packet-based load balancer that doesn't alter the source IP in the packets.
enable-real-ip enables the configuration of https://nginx.org/en/docs/http/ngx_http_realip_module.html. Specific attributes of the module can be configured further by using forwarded-for-header and proxy-real-ip-cidr settings.
Append the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.
Specifies the argument to be passed to the sampler constructor. Must be a number. For const this should be 0 to never sample and 1 to always sample. default: 1
Specifies the custom remote sampler host to be passed to the sampler constructor. Must be a valid URL. Leave blank to use default value (localhost). default: http://127.0.0.1
Specifies to use client-side sampling. If true disables client-side sampling (thus ignoring sample_rate) and enables distributed priority sampling, where traces are sampled based on a combination of user-assigned priorities and configuration from the agent. default: true
Adds custom configuration to all the locations in the nginx configuration.
You can not use this to add new locations that proxy to the Kubernetes pods, as the snippet does not have access to the Go template functions. If you want to add custom locations you will have to provide your own nginx.tmpl.
Sets the timeout in seconds for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response.
It will also set the grpc_read_timeout for gRPC connections.
Sets the timeout in seconds for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request.
It will also set the grpc_send_timeout for gRPC connections.
Sets the number of buffers used for reading the first part of the response received from the proxied server. This part usually contains a small response header.
Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header.
Sets a list of URLs that should not appear in the NGINX access log. This is useful with URLs like /health or /health-check that make reading the logs "complex". default: "" (empty)
Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice the specified limit.
You can optionally set a size unit to allow for kilobyte-granularity. Allowed units are 'm' or 'k' (case-insensitive), and it defaults to MB if no unit is provided. Here is a similar example, but the my_custom_plugin dict is only 512KB.
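A sketch of that example, assuming the lua-shared-dicts ConfigMap key (my_custom_plugin is a hypothetical dict name):
lua-shared-dicts: \"prometheus_metrics: 10, my_custom_plugin: 512k\"\n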
Sets the HTTP status code to be used in redirects. Supported codes are 301, 302, 307 and 308. default: 308
Why the default code is 308?
RFC 7238 was created to define the 308 (Permanent Redirect) status code that is similar to 301 (Moved Permanently) but it keeps the payload in the redirect. This is important if we send a redirect in methods like POST.
A comma-separated list of locations on which http requests will never get redirected to their https counterpart. default: \"/.well-known/acme-challenge\"
A URL to an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-url. Locations that should not get authenticated can be listed using no-auth-locations (see no-auth-locations). In addition, each service can be excluded from authentication via the annotation enable-global-auth set to "false". default: ""
An HTTP method to use for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-method. default: ""
Sets the location of the error page for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-signin. default: \"\"
Sets the query parameter in the error page signin URL which contains the original URL of the request that failed authentication. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-signin-redirect-param. default: \"rd\"
Sets the headers to pass to backend once authentication request completes. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-response-headers. default: \"\"
Sets the X-Auth-Request-Redirect header value. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-request-redirect. default: \"\"
Sets a custom snippet to use with external authentication. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-snippet. default: \"\"
Set a caching time for auth responses based on their response codes, e.g. 200 202 30m. See proxy_cache_valid for details. You may specify multiple, comma-separated values: 200 202 10m, 401 5m. defaults to 200 202 401 5m.
Always set a cookie returned by auth request. By default, the cookie will be set only if an upstream reports with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308. default: false
A comma-separated list of User-Agents; requests from these will be blocked globally. It's possible to use both full strings and regular expressions here. More details about valid patterns can be found in the map Nginx directive documentation.
A comma-separated list of Referers; requests from these will be blocked globally. It's possible to use both full strings and regular expressions here. More details about valid patterns can be found in the map Nginx directive documentation.
Set if the service's Cluster IP and port should be used instead of a list of all endpoints. This can be overwritten by an annotation on an Ingress rule. default: \"false\"
Set to reject SSL handshake to an unknown virtualhost. This parameter helps to mitigate the fingerprinting using default certificate of ingress. default: \"false\"
Ingress objects contain a field called pathType that defines the proxy behavior. It can be Exact, Prefix or ImplementationSpecific.
When pathType is configured as Exact or Prefix, there should be a stricter validation, allowing only paths starting with "/" and containing only alphanumeric characters, "-", "_" and additional "/".
When this option is enabled, the validation will happen on the Admission Webhook, denying any Ingress that does not use pathType ImplementationSpecific but contains invalid characters.
This means that Ingress objects that rely on paths containing regex characters should use ImplementationSpecific pathType.
The cluster admin should establish validation rules using mechanisms like Open Policy Agent to validate that only authorized users can use ImplementationSpecific pathType and that only the authorized characters can be used.
Please note the template is tied to the Go code. Do not change names in the variable $cfg.
For more information about the template syntax please check the Go template package. In addition to the built-in functions provided by the Go package the following functions are also available:
empty: returns true if the specified parameter (string) is empty
contains: strings.Contains
hasPrefix: strings.HasPrefix
hasSuffix: strings.HasSuffix
toUpper: strings.ToUpper
toLower: strings.ToLower
split: strings.Split
quote: wraps a string in double quotes
buildLocation: helps to build the NGINX Location section in each server
buildProxyPass: builds the reverse proxy configuration
buildRateLimit: helps to build a limit zone inside a location if contains a rate limit annotation
| Placeholder | Description |
|---|---|
| $proxy_protocol_addr | remote address if proxy protocol is enabled |
| $remote_addr | the source IP address of the client |
| $remote_user | user name supplied with the Basic authentication |
| $time_local | local time in the Common Log Format |
| $request | full original request line |
| $status | response status |
| $body_bytes_sent | number of bytes sent to a client, not counting the response header |
| $http_referer | value of the Referer header |
| $http_user_agent | value of User-Agent header |
| $request_length | request length (including request line, header, and request body) |
| $request_time | time elapsed since the first bytes were read from the client |
| $proxy_upstream_name | name of the upstream. The format is upstream-<namespace>-<service name>-<service port> |
| $proxy_alternative_upstream_name | name of the alternative upstream. The format is upstream-<namespace>-<service name>-<service port> |
| $upstream_addr | the IP address and port (or the path to the domain socket) of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas. |
| $upstream_response_length | the length of the response obtained from the upstream server |
| $upstream_response_time | time spent on receiving the response from the upstream server as seconds with millisecond resolution |
| $upstream_status | status code of the response obtained from the upstream server |
| $req_id | value of the X-Request-ID HTTP header. If the header is not set, a randomly generated ID. |
Additional available variables:
| Placeholder | Description |
|---|---|
| $namespace | namespace of the ingress |
| $ingress_name | name of the ingress |
| $service_name | name of the service |
| $service_port | port of the service |
Sources:
Upstream variables
Embedded variables
"},{"location":"user-guide/third-party-addons/modsecurity/","title":"ModSecurity Web Application Firewall","text":"
ModSecurity is an open-source, cross-platform web application firewall (WAF) engine for Apache, IIS and Nginx, developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - https://www.modsecurity.org
The ModSecurity-nginx connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).
The default ModSecurity configuration file is located in /etc/nginx/modsecurity/modsecurity.conf. This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration. To enable the ModSecurity feature we need to specify enable-modsecurity: \"true\" in the configuration configmap.
Note: the default configuration uses detection only, because that minimizes the chances of post-installation disruption. Due to the value of the setting SecAuditLogType=Concurrent the ModSecurity log is stored in multiple files inside the directory /var/log/audit. The default Serial value in SecAuditLogType can impact performance.
The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts. The directory /etc/nginx/owasp-modsecurity-crs contains the OWASP ModSecurity Core Rule Set repository. Using enable-owasp-modsecurity-crs: \"true\" we enable the use of the rules.
For more info on supported annotations, please see annotations/#modsecurity
"},{"location":"user-guide/third-party-addons/modsecurity/#example-of-using-modsecurity-with-plugins-via-the-helm-chart","title":"Example of using ModSecurity with plugins via the helm chart","text":"
Suppose you have a ConfigMap that contains the contents of the nextcloud-rule-exclusions plugin like this:
apiVersion: v1\nkind: ConfigMap\nmetadata:\n name: modsecurity-plugins\ndata:\n empty-after.conf: |\n # no data\n empty-before.conf: |\n # no data\n empty-config.conf: |\n # no data\n nextcloud-rule-exclusions-before.conf: |\n # this is just a snippet\n # find the full file at https://github.com/coreruleset/nextcloud-rule-exclusions-plugin\n #\n # [ File Manager ]\n # The web interface uploads files, and interacts with the user.\n SecRule REQUEST_FILENAME \"@contains /remote.php/webdav\" \\\n \"id:9508102,\\\n phase:1,\\\n pass,\\\n t:none,\\\n nolog,\\\n ver:'nextcloud-rule-exclusions-plugin/1.2.0',\\\n ctl:ruleRemoveById=920420,\\\n ctl:ruleRemoveById=920440,\\\n ctl:ruleRemoveById=941000-942999,\\\n ctl:ruleRemoveById=951000-951999,\\\n ctl:ruleRemoveById=953100-953130,\\\n ctl:ruleRemoveByTag=attack-injection-php\"\n
If you're using the helm chart, you can pass in the following parameters in your values.yaml:
controller:\n config:\n # Enables Modsecurity\n enable-modsecurity: \"true\"\n\n # Update ModSecurity config and rules\n modsecurity-snippet: |\n # this enables the mod security nextcloud plugin\n Include /etc/nginx/owasp-modsecurity-crs/plugins/nextcloud-rule-exclusions-before.conf\n\n # this enables the default OWASP Core Rule Set\n Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf\n\n # Enable prevention mode. Options: DetectionOnly,On,Off (default is DetectionOnly)\n SecRuleEngine On\n\n # Enable scanning of the request body\n SecRequestBodyAccess On\n\n # Enable XML and JSON parsing\n SecRule REQUEST_HEADERS:Content-Type \"(?:text|application(?:/soap\\+|/)|application/xml)/\" \\\n \"id:200000,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=XML\"\n\n SecRule REQUEST_HEADERS:Content-Type \"application/json\" \\\n \"id:200001,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=JSON\"\n\n # Reject if larger (we could also let it pass with ProcessPartial)\n SecRequestBodyLimitAction Reject\n\n # Send ModSecurity audit logs to the stdout (only for rejected requests)\n SecAuditLog /dev/stdout\n\n # format the logs in JSON\n SecAuditLogFormat JSON\n\n # could be On/Off/RelevantOnly\n SecAuditEngine RelevantOnly\n\n # Add a volume for the plugins directory\n extraVolumes:\n - name: plugins\n configMap:\n name: modsecurity-plugins\n\n # override the /etc/nginx/enable-owasp-modsecurity-crs/plugins with your ConfigMap\n extraVolumeMounts:\n - name: plugins\n mountPath: /etc/nginx/owasp-modsecurity-crs/plugins\n
Enables distributed telemetry for requests served by NGINX via the OpenTelemetry Project.
Using the third party module opentelemetry-cpp-contrib/nginx the Ingress-Nginx Controller can configure NGINX to enable OpenTelemetry instrumentation. By default this feature is disabled.
Check out this demo showcasing OpenTelemetry in Ingress NGINX. The video provides an overview and practical demonstration of how OpenTelemetry can be utilized in Ingress NGINX for observability and monitoring purposes.
NOTE: While the option is called otlp-collector-host, you will need to point this to any backend that receives otlp-grpc.
Next you will need to deploy a distributed telemetry system which uses OpenTelemetry. opentelemetry-collector, Jaeger, Tempo, and Zipkin have been tested.
Other optional configuration options:
# specifies the name to use for the server span\nopentelemetry-operation-name\n\n# sets whether or not to trust incoming telemetry spans\nopentelemetry-trust-incoming-span\n\n# specifies the port to use when uploading traces, Default: 4317\notlp-collector-port\n\n# specifies the service name to use for any traces created, Default: nginx\notel-service-name\n\n# The maximum queue size. After the size is reached data are dropped.\notel-max-queuesize\n\n# The delay interval in milliseconds between two consecutive exports.\notel-schedule-delay-millis\n\n# How long the export can run before it is cancelled.\notel-schedule-delay-millis\n\n# The maximum batch size of every export. It must be smaller or equal to maxQueueSize.\notel-max-export-batch-size\n\n# specifies sample rate for any traces created, Default: 0.01\notel-sampler-ratio\n\n# specifies the sampler to be used when sampling traces.\n# The available samplers are: AlwaysOn, AlwaysOff, TraceIdRatioBased, Default: AlwaysOff\notel-sampler\n\n# Uses sampler implementation which by default will take a sample if parent Activity is sampled, Default: false\notel-sampler-parent-based\n
Note that you can also set whether to trust incoming spans (global default is true) per-location using annotations like the following:
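nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span: \"true\"\n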
"},{"location":"user-guide/third-party-addons/opentelemetry/#migration-from-opentracing-jaeger-zipkin-and-datadog","title":"Migration from OpenTracing, Jaeger, Zipkin and Datadog","text":"
If you are migrating from OpenTracing, Jaeger, Zipkin, or Datadog to OpenTelemetry, you may need to update various annotations and configurations. Here are the mappings for common annotations and configurations:
"},{"location":"user-guide/third-party-addons/opentelemetry/#annotations","title":"Annotations","text":"Legacy OpenTelemetry nginx.ingress.kubernetes.io/enable-opentracingnginx.ingress.kubernetes.io/enable-opentelemetrynginx.ingress.kubernetes.io/opentracing-trust-incoming-spannginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span"},{"location":"user-guide/third-party-addons/opentelemetry/#configs","title":"Configs","text":"Legacy OpenTelemetry opentracing-operation-nameopentelemetry-operation-nameopentracing-location-operation-nameopentelemetry-operation-nameopentracing-trust-incoming-spanopentelemetry-trust-incoming-spanzipkin-collector-portotlp-collector-portzipkin-service-nameotel-service-namezipkin-sample-rateotel-sampler-ratiojaeger-collector-portotlp-collector-portjaeger-endpointotlp-collector-port, otlp-collector-hostjaeger-service-nameotel-service-namejaeger-propagation-formatN/Ajaeger-sampler-typeotel-samplerjaeger-sampler-paramotel-samplerjaeger-sampler-hostN/Ajaeger-sampler-portN/Ajaeger-trace-context-header-nameN/Ajaeger-debug-headerN/Ajaeger-baggage-headerN/Ajaeger-tracer-baggage-header-prefixN/Adatadog-collector-portotlp-collector-portdatadog-service-nameotel-service-namedatadog-environmentN/Adatadog-operation-name-overrideN/Adatadog-priority-samplingotel-samplerdatadog-sample-rateotel-sampler-ratio"}]}
Enabling proxy-protocol on the controller is documented here.
For enabling proxy-protocol on the LoadBalancer, please refer to the documentation of your infrastructure provider because that is where the LB is provisioned.
Some more info available here
Some more info on proxy-protocol is here
"},{"location":"faq/#client-ipaddress-on-single-node-cluster","title":"client-ipaddress on single-node cluster","text":"
Single-node clusters are created for dev & test uses with tools like "kind" or "minikube". A trick to simulate a real network with these clusters (kind or minikube) is to install MetalLB and configure the IP address of the kind container or the minikube vm/container as the start and end of the pool for MetalLB in L2 mode. Then the host IP becomes a real client IP address for curl requests sent from the host.
After installing the ingress-nginx controller on a kind or a minikube cluster with Helm, you can configure it for the real client IP with a simple change to the service that the ingress-nginx controller creates. The service object of --type LoadBalancer has a field service.spec.externalTrafficPolicy. If you set the value of this field to "Local", then the real IP address of a client is visible to the controller.
% kubectl explain service.spec.externalTrafficPolicy\nKIND: Service\nVERSION: v1\n\nFIELD: externalTrafficPolicy <string>\n\nDESCRIPTION:\n externalTrafficPolicy describes how nodes distribute service traffic they\n receive on one of the Service's \"externally-facing\" addresses (NodePorts,\n ExternalIPs, and LoadBalancer IPs). If set to \"Local\", the proxy will\n configure the service in a way that assumes that external load balancers\n will take care of balancing the service traffic between nodes, and so each\n node will deliver traffic only to the node-local endpoints of the service,\n without masquerading the client source IP. (Traffic mistakenly sent to a\n node with no endpoints will be dropped.) The default value, \"Cluster\", uses\n the standard behavior of routing to all endpoints evenly (possibly modified\n by topology and other features). Note that traffic sent to an External IP or\n LoadBalancer IP from within the cluster will always get \"Cluster\" semantics,\n but clients sending to a NodePort from within the cluster may need to take\n traffic policy into account when picking a node.\n\n Possible enum values:\n - `\"Cluster\"` routes traffic to all endpoints.\n - `\"Local\"` preserves the source IP of the traffic by routing only to\n endpoints on the same node as the traffic was received on (dropping the\n traffic if there are no local endpoints).\n
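A one-line way to apply this, assuming the default service name and namespace used by the Helm chart:
kubectl -n ingress-nginx patch svc ingress-nginx-controller -p '{\"spec\": {\"externalTrafficPolicy\": \"Local\"}}'\n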
The solution is to get the real client IP address from the "X-Forwarded-For" HTTP header.
Example: If your application pod behind the Ingress NGINX controller uses the NGINX webserver as a reverse proxy, then you can do the following to preserve the real client IP.
First you need to make sure that the X-Forwarded-For header reaches the backend pod. This is done by using an Ingress NGINX controller ConfigMap key. It's documented here
Next, edit nginx.conf file inside your app pod, to contain the directives shown below:
set_real_ip_from 0.0.0.0/0; # Trust all IPs (use your VPC CIDR block in production)\nreal_ip_header X-Forwarded-For;\nreal_ip_recursive on;\n\nlog_format main '$remote_addr - $remote_user [$time_local] \"$request\" '\n '$status $body_bytes_sent \"$http_referer\" '\n '\"$http_user_agent\" '\n 'host=$host x-forwarded-for=$http_x_forwarded_for';\n\naccess_log /var/log/nginx/access.log main;\n
If you are using Ingress objects in your cluster (running Kubernetes older than version 1.22), and you plan to upgrade your Kubernetes version to K8S 1.22 or above, then please read the migration guide here.
"},{"location":"faq/#validation-of-path","title":"Validation Of path","text":"
For improving security and also following desired standards on Kubernetes API spec, the next release, scheduled for v1.8.0, will include a new & optional feature of validating the value for the key ingress.spec.rules.http.paths.path.
This behavior will be disabled by default on the 1.8.0 release and enabled by default on the next breaking change release, set for 2.0.0.
When \"ingress.spec.rules.http.pathType=Exact\" or \"pathType=Prefix\", this validation will limit the characters accepted on the field \"ingress.spec.rules.http.paths.path\", to \"alphanumeric characters\", and \"/,\" \"_,\" \"-.\" Also, in this case, the path should start with \"/.\"
When the ingress resource path contains other characters (like on rewrite configurations), the pathType value should be \"ImplementationSpecific\".
API Spec on pathType is documented here
When this option is enabled, the validation will happen on the Admission Webhook. So if any new ingress object contains characters other than alphanumeric characters and "/", "_", "-" in the path field, but is not using the pathType value ImplementationSpecific, then the ingress object will be denied admission.
The cluster admin should establish validation rules using mechanisms like \"Open Policy Agent\", to validate that only authorized users can use ImplementationSpecific pathType and that only the authorized characters can be used. The configmap value is here
A complete example of an Open Policy Agent gatekeeper rule is available here
If you have any issues or concerns, please do one of the following:
Open a GitHub issue
Comment in our Dev Slack Channel
Open a thread in our Google Group ingress-nginx-dev@kubernetes.io
"},{"location":"faq/#why-is-chunking-not-working-since-controller-v110","title":"Why is chunking not working since controller v1.10 ?","text":"
If your code is setting the HTTP header \"Transfer-Encoding: chunked\" and the controller log messages show an error about duplicate header, it is because of this change http://hg.nginx.org/nginx/rev/2bf7792c262e
More details are available in this issue https://github.com/kubernetes/ingress-nginx/issues/11162
"},{"location":"how-it-works/","title":"How it works","text":"
The objective of this document is to explain how the Ingress-NGINX controller works, in particular how the NGINX model is built and why we need one.
The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e. Endpoints changes when you deploy your app). We use lua-nginx-module to achieve this. Check below to learn more about how it's done.
Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. To this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps to generate a point in time configuration file that reflects the state of the cluster.
To get these objects from the cluster, we use Kubernetes Informers, in particular FilteredSharedInformer. These informers make it possible to react to individual changes via callbacks when an object is added, modified or removed. Unfortunately, there is no way to know whether a particular change is going to affect the final configuration file. Therefore on every change we have to rebuild a new model from scratch based on the state of the cluster and compare it to the current model. If the new model equals the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so, we then send the new list of Endpoints to a Lua handler running inside Nginx using an HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between the running and new model is about more than just Endpoints, we create a new NGINX configuration based on the new model, replace the current model and trigger a reload.
One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions.
The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.
"},{"location":"how-it-works/#building-the-nginx-model","title":"Building the NGINX model","text":"
Building a model is an expensive operation; for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to not lose changes and remove the use of sync.Mutex to force a single execution of the sync loop, and additionally it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller, which is one of the reasons for the work queue.
Operations to build the model:
Order Ingress rules by CreationTimestamp field, i.e., old rules first.
If the same path for the same host is defined in more than one Ingress, the oldest rule wins.
If more than one Ingress contains a TLS section for the same host, the oldest rule wins.
If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins.
Create a list of NGINX Servers (per hostname)
Create a list of NGINX Upstreams
If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions.
Annotations are applied to all the paths in the Ingress.
Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.
"},{"location":"how-it-works/#when-a-reload-is-required","title":"When a reload is required","text":"
The next list describes the scenarios when a reload is required:
New Ingress Resource Created.
TLS section is added to existing Ingress.
Change in Ingress annotations that impacts more than just upstream configuration. For instance load-balance annotation does not require a reload.
A path is added/removed from an Ingress.
An Ingress, Service or Secret is removed.
A missing object referenced from the Ingress becomes available, like a Service or Secret.
In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, basically, new changes do not replace worker processes.
"},{"location":"how-it-works/#avoiding-reloads-on-endpoints-changes","title":"Avoiding reloads on Endpoints changes","text":"
On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request Lua code running in balancer_by_lua context detects what endpoints it should choose upstream peer from and applies the configured load balancing algorithm to choose the peer. Then Nginx takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this includes annotation changes that affect only upstream configuration in Nginx as well.
In a relatively big cluster with frequently deploying apps this feature saves a significant number of Nginx reloads, which can otherwise affect response latency, load balancing quality (after every reload Nginx resets the state of load balancing) and so on.
"},{"location":"how-it-works/#avoiding-outage-from-wrong-configuration","title":"Avoiding outage from wrong configuration","text":"
Because the ingress controller works using the synchronization loop pattern, it applies the configuration for all matching objects. In case some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, will not reload and hence no more ingresses will be taken into account.
To prevent this situation from happening, the Ingress-Nginx Controller optionally exposes a validating admission webhook server to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress objects to the list of ingresses, generates the configuration and calls nginx to ensure the configuration has no syntax errors.
To make sure the plugin is properly installed and to get a list of commands, run:
kubectl ingress-nginx --help\nA kubectl plugin for inspecting your ingress-nginx deployments\n\nUsage:\n ingress-nginx [command]\n\nAvailable Commands:\n backends Inspect the dynamic backend information of an ingress-nginx instance\n certs Output the certificate data stored in an ingress-nginx pod\n conf Inspect the generated nginx.conf\n exec Execute a command inside an ingress-nginx pod\n general Inspect the other dynamic ingress-nginx information\n help Help about any command\n info Show information about the ingress-nginx service\n ingresses Provide a short summary of all of the ingress definitions\n lint Inspect kubernetes resources for possible issues\n logs Get the kubernetes logs for an ingress-nginx pod\n ssh ssh into a running ingress-nginx pod\n\nFlags:\n --as string Username to impersonate for the operation\n --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups.\n --cache-dir string Default HTTP cache directory (default \"/Users/alexkursell/.kube/http-cache\")\n --certificate-authority string Path to a cert file for the certificate authority\n --client-certificate string Path to a client certificate file for TLS\n --client-key string Path to a client key file for TLS\n --cluster string The name of the kubeconfig cluster to use\n --context string The name of the kubeconfig context to use\n -h, --help help for ingress-nginx\n --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure\n --kubeconfig string Path to the kubeconfig file to use for CLI requests.\n -n, --namespace string If present, the namespace scope for this CLI request\n --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\")\n -s, --server string The address and port of the Kubernetes API server\n --token string Bearer token for authentication to the API server\n --user string The name of the kubeconfig user to use\n\nUse \"ingress-nginx [command] --help\" for more information about a command.\n
Every subcommand supports the basic kubectl configuration flags like --namespace, --context, --client-key and so on.
Subcommands that act on a particular ingress-nginx pod (backends, certs, conf, exec, general, logs, ssh), support the --deployment <deployment>, --pod <pod>, and --container <container> flags to select either a pod from a deployment with the given name, or a pod with the given name (and the given container name). The --deployment flag defaults to ingress-nginx-controller, and the --container flag defaults to controller.
Subcommands that inspect resources (ingresses, lint) support the --all-namespaces flag, which causes them to inspect resources in every namespace.
Use kubectl ingress-nginx conf to dump the generated nginx.conf file. Add the --host <hostname> option to view only the server block for that host:
kubectl ingress-nginx conf -n ingress-nginx --host testaddr.local\n\n server {\n server_name testaddr.local ;\n\n listen 80;\n\n set $proxy_upstream_name \"-\";\n set $pass_access_scheme $scheme;\n set $pass_server_port $server_port;\n set $best_http_host $http_host;\n set $pass_port $pass_server_port;\n\n location / {\n\n set $namespace \"\";\n set $ingress_name \"\";\n set $service_name \"\";\n set $service_port \"0\";\n set $location_path \"/\";\n\n...\n
kubectl ingress-nginx exec is exactly the same as kubectl exec, with the same command flags. It will automatically choose an ingress-nginx pod to run the command in.
$ kubectl ingress-nginx exec -i -n ingress-nginx -- ls /etc/nginx\nfastcgi_params\ngeoip\nlua\nmime.types\nmodsecurity\nmodules\nnginx.conf\nopentracing.json\nopentelemetry.toml\nowasp-modsecurity-crs\ntemplate\n
kubectl ingress-nginx lint can check a namespace or entire cluster for potential configuration issues. This command is especially useful when upgrading between ingress-nginx versions.
$ kubectl ingress-nginx lint --all-namespaces --verbose\nChecking ingresses...\n\u2717 anamespace/this-nginx\n - Contains the removed session-cookie-hash annotation.\n Lint added for version 0.24.0\n https://github.com/kubernetes/ingress-nginx/issues/3743\n\u2717 othernamespace/ingress-definition-blah\n - The rewrite-target annotation value does not reference a capture group\n Lint added for version 0.22.0\n https://github.com/kubernetes/ingress-nginx/issues/3174\n\nChecking deployments...\n\u2717 namespace2/ingress-nginx-controller\n - Uses removed config flag --sort-backends\n Lint added for version 0.22.0\n https://github.com/kubernetes/ingress-nginx/issues/3655\n - Uses removed config flag --enable-dynamic-certificates\n Lint added for version 0.24.0\n https://github.com/kubernetes/ingress-nginx/issues/3808\n
To show the lints added only for a particular ingress-nginx release, use the --from-version and --to-version flags:
$ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0.24.0 --to-version 0.24.0\nChecking ingresses...\n\u2717 anamespace/this-nginx\n - Contains the removed session-cookie-hash annotation.\n Lint added for version 0.24.0\n https://github.com/kubernetes/ingress-nginx/issues/3743\n\nChecking deployments...\n\u2717 namespace2/ingress-nginx-controller\n - Uses removed config flag --enable-dynamic-certificates\n Lint added for version 0.24.0\n https://github.com/kubernetes/ingress-nginx/issues/3808\n
kubectl ingress-nginx ssh is exactly the same as kubectl ingress-nginx exec -it -- /bin/bash. Use it when you want to quickly be dropped into a shell inside a running ingress-nginx container.
"},{"location":"lua_tests/","title":"Lua Tests","text":""},{"location":"lua_tests/#running-the-lua-tests","title":"Running the Lua Tests","text":"
To run the Lua tests you can run the following from the root directory:
make lua-test\n
This command makes use of Docker, hence it does not need any dependencies installed besides Docker.
"},{"location":"lua_tests/#where-are-the-lua-tests","title":"Where are the Lua Tests?","text":"
Lua Tests can be found in the rootfs/etc/nginx/lua/test directory
"},{"location":"troubleshooting/","title":"Troubleshooting","text":""},{"location":"troubleshooting/#troubleshooting","title":"Troubleshooting","text":""},{"location":"troubleshooting/#ingress-controller-logs-and-events","title":"Ingress-Controller Logs and Events","text":"
There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information.
"},{"location":"troubleshooting/#check-the-ingress-resource-events","title":"Check the Ingress Resource Events","text":"
$ kubectl get ing -n <namespace-of-ingress-resource>\nNAME HOSTS ADDRESS PORTS AGE\ncafe-ingress cafe.com 10.0.2.15 80 25s\n\n$ kubectl describe ing <ingress-resource-name> -n <namespace-of-ingress-resource>\nName: cafe-ingress\nNamespace: default\nAddress: 10.0.2.15\nDefault backend: default-http-backend:80 (172.17.0.5:8080)\nRules:\n Host Path Backends\n ---- ---- --------\n cafe.com\n /tea tea-svc:80 (<none>)\n /coffee coffee-svc:80 (<none>)\nAnnotations:\n kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"/apis/networking/v1/namespaces/default/ingresses/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}}\n\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal CREATE 1m ingress-nginx-controller Ingress default/cafe-ingress\n Normal UPDATE 58s ingress-nginx-controller Ingress default/cafe-ingress\n
"},{"location":"troubleshooting/#check-the-ingress-controller-logs","title":"Check the Ingress Controller Logs","text":"
Using the flag --v=XX, it is possible to increase the level of logging. This is done by editing the deployment.
$ kubectl get deploy -n <namespace-of-ingress-controller>\nNAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE\ndefault-http-backend 1 1 1 1 35m\ningress-nginx-controller 1 1 1 1 35m\n\n$ kubectl edit deploy -n <namespace-of-ingress-controller> ingress-nginx-controller\n# Add --v=X to \"- args\", where X is an integer\n
--v=2 shows details about the changes in the nginx configuration, using diff
--v=3 shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format
--v=5 configures NGINX in debug mode
"},{"location":"troubleshooting/#authentication-to-the-kubernetes-api-server","title":"Authentication to the Kubernetes API Server","text":"
A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file.
The Ingress controller needs information from the API server. Therefore, authentication is required, which can be achieved in a couple of ways:
Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details.
Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. Using --kubeconfig does not require the flag --apiserver-host. The format of the file is identical to ~/.kube/config which is used by kubectl to connect to the API server. See 'kubeconfig' section for details.
Using the flag --apiserver-host: With --apiserver-host=http://localhost:8080, it is possible to specify an unsecured API server or reach a remote Kubernetes cluster using kubectl proxy. Please do not use this approach in production.
In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left hand side.
If using a service account to connect to the API server, the ingress-controller expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server.
Verify with the following commands:
# start a container that contains curl\n$ kubectl run -it --rm test --image=curlimages/curl --restart=Never -- /bin/sh\n\n# check if secret exists\n/ $ ls /var/run/secrets/kubernetes.io/serviceaccount/\nca.crt namespace token\n/ $\n\n# check base connectivity from cluster inside\n/ $ curl -k https://kubernetes.default.svc.cluster.local\n{\n \"kind\": \"Status\",\n \"apiVersion\": \"v1\",\n \"metadata\": {\n\n },\n \"status\": \"Failure\",\n \"message\": \"forbidden: User \\\"system:anonymous\\\" cannot get path \\\"/\\\"\",\n \"reason\": \"Forbidden\",\n \"details\": {\n\n },\n \"code\": 403\n}\n/ $\n\n# connect using the service account token\n/ $ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H \"Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://kubernetes.default.svc.cluster.local && echo\n{\n \"paths\": [\n \"/api\",\n \"/api/v1\",\n \"/apis\",\n \"/apis/\",\n ... TRUNCATED\n \"/readyz/shutdown\",\n \"/version\"\n ]\n}\n/ $\n\n# when you type `exit` or `^D` the test pod will be deleted.\n
If it is not working, there are two possible reasons:
The contents of the tokens are invalid. Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret <name>. It will automatically be recreated.
You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter.
Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers.
More information:
User Guide: Service Accounts
Cluster Administrator Guide: Managing Service Accounts
If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment.
"},{"location":"troubleshooting/#using-gdb-with-nginx","title":"Using GDB with Nginx","text":"
Gdb can be used with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations.
Note: The below is based on the nginx documentation.
SSH into the worker
$ ssh user@workerIP\n
Obtain the Docker Container Running nginx
$ docker ps | grep ingress-nginx-controller\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\nd9e1d243156a registry.k8s.io/ingress-nginx/controller \"/usr/bin/dumb-init \u2026\" 19 minutes ago Up 19 minutes k8s_ingress-nginx-controller_ingress-nginx-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0\n
"},{"location":"troubleshooting/#image-related-issues-faced-on-nginx-425-or-other-versions-helm-chart-versions","title":"Image related issues faced on Nginx 4.2.5 or other versions (Helm chart versions)","text":"
In case you face the below error while installing ingress-nginx using the Helm chart (either via helm commands or the helm_release Terraform provider):
Warning Failed 5m5s (x4 over 6m34s) kubelet Failed to pull image \"registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to resolve reference \"registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to do request: Head \"https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": EOF\n
Then please follow the below steps.
During troubleshooting, you can also execute the below commands from your local machine to test connectivity to the image repositories:
a. curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null
(\u2388 |myprompt)\u279c ~ curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null\n % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\n (\u2388 |myprompt)\u279c ~\n
b. curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
Redirection is implemented in the registry proxy to ensure that the images can be pulled, so several domains are involved in a successful pull.
The recommended solution is to whitelist the below image repository domains:
*.appspot.com \n*.k8s.io \n*.pkg.dev\n*.gcr.io\n
More details about the above repos:
a. *.k8s.io -> To ensure you can pull any images from registry.k8s.io
b. *.gcr.io -> GCP services are used for image hosting. This is part of the domains suggested by GCP to allow and ensure users can pull images from their container registry services.
c. *.appspot.com -> This is a Google domain, part of the domains used for GCR.
"},{"location":"troubleshooting/#unable-to-listen-on-port-80443","title":"Unable to listen on port (80/443)","text":"
One possible reason for this error is lack of permission to bind to the port. Ports 80, 443, and any other port < 1024 are Linux privileged ports which historically could only be bound by root. The ingress-nginx-controller uses the CAP_NET_BIND_SERVICE linux capability to allow binding these ports as a normal user (www-data / 101). This involves two components:
1. In the image, the /nginx-ingress-controller file has the cap_net_bind_service capability added (e.g. via setcap)
2. The NET_BIND_SERVICE capability is added to the container in the containerSecurityContext of the deployment.
If encountering this on one/some node(s) and not on others, try to purge and pull a fresh copy of the image to the affected node(s), in case there has been corruption of the underlying layers to lose the capability on the executable.
"},{"location":"troubleshooting/#create-a-test-pod","title":"Create a test pod","text":"
The /nginx-ingress-controller process exits/crashes when encountering this error, making it difficult to troubleshoot what is happening inside the container. To get around this, start an equivalent container running \"sleep 3600\", and exec into it for further troubleshooting. For example:
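A minimal manifest for such a pod might look like the sketch below; the placeholders are explained in the notes that follow, and the securityContext mirrors the default ingress-nginx settings, so adjust it to match your deployment:
apiVersion: v1\nkind: Pod\nmetadata:\n  name: ingress-nginx-sleep\n  namespace: default\nspec:\n  restartPolicy: Never\n  nodeSelector:\n    kubernetes.io/hostname: ##_NODE_NAME_##\n  containers:\n    - name: sleep\n      image: ##_CONTROLLER_IMAGE_##\n      command: [\"sleep\"]\n      args: [\"3600\"]\n      securityContext:\n        runAsUser: 101\n        capabilities:\n          drop:\n            - ALL\n          add:\n            - NET_BIND_SERVICE\n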
* update the namespace if applicable/desired
* replace ##_NODE_NAME_## with the problematic node (or remove the nodeSelector section if the problem is not confined to one node)
* replace ##_CONTROLLER_IMAGE_## with the same image as in use by your ingress-nginx deployment
* confirm the securityContext section matches what is in place for ingress-nginx-controller pods in your cluster
Apply the YAML and open a shell into the pod. Try to manually run the controller process:
$ /nginx-ingress-controller\n
You should get the same error as from the ingress controller pod logs.
Confirm the capabilities are properly surfacing into the pod:
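For example, from inside the test pod, the bounding set can be read from /proc (a sketch; the exact value depends on your security context):
$ grep CapBnd /proc/1/status\nCapBnd: 0000000000000400\n
0x400 is bit 10, which corresponds to CAP_NET_BIND_SERVICE.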
The above value has only net_bind_service enabled (per the security context in the YAML, which adds that capability and drops all others). If you get a different value, you can decode it on another Linux box (capsh is not available in this container) as shown below, and then figure out why the specified capabilities are not propagating into the pod/container.
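A sketch of the decoding step on a machine where capsh is installed:
$ capsh --decode=0000000000000400\n0x0000000000000400=cap_net_bind_service\n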
"},{"location":"troubleshooting/#create-a-test-pod-as-root","title":"Create a test pod as root","text":"
(Note, this may be restricted by PodSecurityPolicy, PodSecurityAdmission/Standards, OPA Gatekeeper, etc., in which case you will need to do the appropriate workaround for testing, e.g. deploy in a new namespace without the restrictions.) To test further you may want to install additional utilities, etc. Modify the pod yaml by:
* changing runAsUser from 101 to 0
* removing the "drop..ALL" section from the capabilities.
Some things to try after shelling into this container:
Try running the controller as the www-data (101) user:
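One way to do this, assuming su is available in the image:
$ su -s /bin/sh -c \"/nginx-ingress-controller\" www-data\n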
Examine the errors to see if there is still an issue listening on the port or if it passed that and moved on to other expected errors due to running out of context.
Install the libcap package and check capabilities on the file:
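For example, assuming an Alpine-based controller image (hence apk; the output format varies with the libcap version):
$ apk add libcap\n$ getcap /nginx-ingress-controller\n/nginx-ingress-controller cap_net_bind_service=ep\n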
There are multiple ways to install the Ingress-Nginx Controller:
with Helm, using the project repository chart;
with kubectl apply, using YAML manifests;
with specific addons (e.g. for minikube or MicroK8s).
On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to get started as fast as possible, you can check the quick start instructions. However, in many environments, you can improve the performance or get better logs by enabling extra features. We recommend that you check the environment-specific instructions for details about optimizing the ingress controller for your particular environment or cloud provider.
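If you have Helm, you can deploy the controller with the standard quick-start command:
helm upgrade --install ingress-nginx ingress-nginx \\\n  --repo https://kubernetes.github.io/ingress-nginx \\\n  --namespace ingress-nginx --create-namespace\n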
It will install the controller in the ingress-nginx namespace, creating that namespace if it doesn't already exist.
Info
This command is idempotent:
if the ingress controller is not installed, it will install it,
if the ingress controller is already installed, it will upgrade it.
If you want a full list of values that you can set, while installing with Helm, then run:
helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx\n
Helm install on AWS/GCP/Azure/Other providers
The ingress-nginx-controller helm-chart is a generic install out of the box. The default set of helm values is not configured for installation on any infra provider. The annotations that are applicable to the cloud provider must be customized by the users. See AWS LB Controller. Examples of some annotations needed for the service resource of --type LoadBalancer on AWS are below:
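A sketch of such Helm values, using the legacy in-tree AWS annotations (verify the exact set against the AWS documentation for your setup):
controller:\n  service:\n    annotations:\n      service.beta.kubernetes.io/aws-load-balancer-type: nlb\n      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp\n      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: \"true\"\n      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: \"60\"\n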
The YAML manifest in the command above was generated with helm template, so you will end up with almost the same resources as if you had used Helm to install the controller.
Attention
If you are running an old version of Kubernetes (1.18 or earlier), please read this paragraph for specific instructions. Because of API deprecations, the default manifest may not work on your cluster. Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider.
To check which ports are used by your installation of ingress-nginx, look at the output of kubectl -n ingress-nginx get pod -o yaml. In general, you need:
Port 8443 open between all hosts on which the kubernetes nodes are running. This is used for the ingress-nginx admission controller.
Port 80 (for HTTP) and/or 443 (for HTTPS) open to the public on the kubernetes nodes to which the DNS of your apps are pointing.
A note on DNS & network connectivity: this documentation assumes that the reader is aware of the DNS and network-routing aspects involved in using ingress. Port-forwarding, as mentioned above, is the easiest way to demo ingress in action: the kubectl port-forward command forwards port 8080 on the local machine's TCP/IP stack to port 80 of the service created by the ingress-nginx installation, so traffic sent to port 8080 on localhost reaches port 80 of the ingress controller's service. Port-forwarding is not for production use; here it merely simulates an HTTP request, originating from outside the cluster, reaching the service of the ingress-nginx controller.
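For reference, a typical invocation of that port-forward command looks like this (assuming the default service name and namespace):
kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80\n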
This issue shows a typical DNS problem and its solution.
At this point, you can access your deployment using curl:
If your Kubernetes cluster is a \"real\" cluster that supports services of type LoadBalancer, it will have allocated an external IP address or FQDN to the ingress controller.
You can see that IP address or FQDN with the following command:
kubectl get service ingress-nginx-controller --namespace=ingress-nginx\n
It will be the EXTERNAL-IP field. If that field shows <pending>, this means that your Kubernetes cluster wasn't able to provision the load balancer (generally, this is because it doesn't support services of type LoadBalancer).
Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress resource. The following example assumes that you have set up a DNS record for www.demo.io:
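As a sketch, first deploy something to serve (names are illustrative; httpd's default page is the "It works!" page referenced below), then create the Ingress:
kubectl create deployment demo --image=httpd --port=80\nkubectl expose deployment demo\nkubectl create ingress demo --class=nginx \\\n  --rule=\"www.demo.io/*=demo:80\"\n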
You should then be able to see the \"It works!\" page when you connect to http://www.demo.io/. Congratulations, you are serving a public website hosted on a Kubernetes cluster! \ud83c\udf89
"},{"location":"deploy/#environment-specific-instructions","title":"Environment-specific instructions","text":""},{"location":"deploy/#local-development-clusters","title":"Local development clusters","text":""},{"location":"deploy/#minikube","title":"minikube","text":"
The ingress controller can be installed through minikube's addons system:
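The addon is enabled with:
minikube addons enable ingress\n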
First, make sure that Kubernetes is enabled in the Docker settings. The command kubectl get nodes should show a single node called docker-desktop.
The ingress controller can be installed on Docker Desktop using the default quick start instructions.
On most systems, if you don't have any other service of type LoadBalancer bound to port 80, the ingress controller will be assigned the EXTERNAL-IP of localhost, which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the kubectl port-forward method described in the local testing section.
Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop.
Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use Ingress-Nginx Controller in place of the default Traefik, disable Traefik from the Preferences > Kubernetes menu.
Once traefik is disabled, the Ingress-Nginx Controller can be installed on Rancher Desktop using the default quick start instructions. Follow the instructions described in the local testing section to try a sample.
If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the externalTrafficPolicy of the ingress controller Service to Local (instead of the default Cluster) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local to the helm install or helm upgrade command.
Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. --set controller.config.use-proxy-protocol=true) and in the cloud provider's load balancer configuration to function correctly.
In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.
In AWS, we use a Network load balancer (NLB) to expose the Ingress-Nginx Controller behind a Service of Type=LoadBalancer.
Info
The provided templates illustrate the setup for legacy in-tree service load balancer for AWS NLB. AWS provides the documentation on how to use Network load balancing on Amazon EKS with AWS Load Balancer Controller.
"},{"location":"deploy/#tls-termination-in-aws-load-balancer-nlb","title":"TLS termination in AWS Load Balancer (NLB)","text":"
By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB.
For private clusters, you will need to either add a firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to port 80/tcp, 443/tcp and 10254/tcp to also allow access to port 8443/tcp. More information can be found in the Official GCP Documentation.
See the GKE documentation on adding rules and the Kubernetes issue for more detail.
Proxy protocol is supported in GCE; check the official documentation on how to enable it.
By default, the service object of the ingress-nginx-controller for Digital Ocean only configures one annotation: service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: \"true\". While this makes the service functional, it was reported that the Digital Ocean load balancer graphs show no data unless a few other annotations are also configured. Some of these other annotations require values that cannot be generic and hence are not forced in an out-of-the-box installation. These annotations, and a discussion of them, are well documented in this issue. Please refer to the issue to add annotations, with values specific to your setup, to get the graphs of the DO-LB populated with data.
Refer to the dedicated tutorial in the Scaleway documentation for configuring the proxy protocol for ingress-nginx with the Scaleway load balancer."},{"location":"deploy/#exoscale","title":"Exoscale","text":"
"},{"location":"deploy/#bare-metal-clusters","title":"Bare metal clusters","text":"
This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as \"raw\" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu...)
For quick testing, you can use a NodePort. This should work on almost every cluster, but it will typically use a port in the range 30000-32767.
For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see bare-metal considerations.
By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior, use the flag --watch-namespace or check the Helm chart value controller.scope to limit the controller to a single namespace.
See also \u201cHow to easily install multiple instances of the Ingress NGINX controller in the same cluster\u201d for more details.
The controller uses an admission webhook to validate Ingress definitions. Make sure that you don't have Network policies or additional firewalls preventing connections from the API server to the ingress-nginx-controller-admission service.
"},{"location":"deploy/#running-on-kubernetes-versions-older-than-119","title":"Running on Kubernetes versions older than 1.19","text":"
Ingress resources evolved over time. They started with apiVersion: extensions/v1beta1, then moved to apiVersion: networking.k8s.io/v1beta1 and more recently to apiVersion: networking.k8s.io/v1.
Here is how these Ingress versions are supported in Kubernetes:
before Kubernetes 1.19, only v1beta1 Ingress resources are supported
from Kubernetes 1.19 to 1.21, both v1beta1 and v1 Ingress resources are supported
in Kubernetes 1.22 and above, only v1 Ingress resources are supported
And here is how these Ingress versions are supported in Ingress-Nginx Controller:
before version 1.0, only v1beta1 Ingress resources are supported
in version 1.0 and above, only v1 Ingress resources are supported
As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X of the Ingress-Nginx Controller (e.g. version 0.49).
The Helm chart of the Ingress-Nginx Controller switched to version 1 in version 4 of the chart. In other words, if you're running Kubernetes 1.19 or earlier, you should use version 3.X of the chart (this can be done by adding --version='<4' to the helm install command).
In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the Ingress-Nginx Controller to external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers.
The rest of this document describes a few recommended approaches to deploying the Ingress-Nginx Controller inside a Kubernetes cluster running on bare-metal.
"},{"location":"deploy/baremetal/#a-pure-software-solution-metallb","title":"A pure software solution: MetalLB","text":"
MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.
This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes. In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details.
Note
The description of other supported configuration modes is off-scope for this document.
Warning
MetalLB is currently in beta. Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly.
MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions, and that the Ingress-Nginx Controller was installed using the steps described in the quickstart section of the installation guide.
MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined through IPAddressPool objects in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.
Example
Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)
$ kubectl get node\nNAME STATUS ROLES EXTERNAL-IP\nhost-1 Ready master 203.0.113.1\nhost-2 Ready node 203.0.113.2\nhost-3 Ready node 203.0.113.3\n
After creating the following objects, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly.
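A sketch of those objects, reusing the example address range (adjust the pool to addresses you actually own):
apiVersion: metallb.io/v1beta1\nkind: IPAddressPool\nmetadata:\n  name: default\n  namespace: metallb-system\nspec:\n  addresses:\n    - 203.0.113.10-203.0.113.15\n  autoAssign: true\n---\napiVersion: metallb.io/v1beta1\nkind: L2Advertisement\nmetadata:\n  name: default\n  namespace: metallb-system\nspec:\n  ipAddressPools:\n    - default\n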
$ kubectl -n ingress-nginx get svc\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)\ndefault-http-backend ClusterIP 10.0.64.249 <none> 80/TCP\ningress-nginx LoadBalancer 10.0.220.217 203.0.113.10 80:30100/TCP,443:30101/TCP\n
As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service:
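An illustrative check (the Host header assumes an Ingress rule for myapp.example.com exists):
$ curl -D- http://203.0.113.10 -H 'Host: myapp.example.com'\nHTTP/1.1 200 OK\nServer: nginx/x.x.x\n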
In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more details in Traffic policies as well as in the next section.
"},{"location":"deploy/baremetal/#over-a-nodeport-service","title":"Over a NodePort Service","text":"
Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide.
Info
A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services.
In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests.
Example
Given the NodePort 30100 allocated to the ingress-nginx Service
$ kubectl -n ingress-nginx get svc\nNAME TYPE CLUSTER-IP PORT(S)\ndefault-http-backend ClusterIP 10.0.64.249 80/TCP\ningress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP\n
and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is <None>)
$ kubectl get node\nNAME STATUS ROLES EXTERNAL-IP\nhost-1 Ready master 203.0.113.1\nhost-2 Ready node 203.0.113.2\nhost-3 Ready node 203.0.113.3\n
a client would reach an Ingress with host: myapp.example.com at http://myapp.example.com:30100, where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address.
Impact on the host system
While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require.
This practice is therefore discouraged. See the other approaches proposed in this page for alternatives.
This approach has a few other limitations one ought to be aware of:
Source IP address
Services of type NodePort perform source address translation by default. This means the source IP of a HTTP request is always the IP address of the Kubernetes node that received the request from the perspective of NGINX.
The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local (example).
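A minimal sketch of that change with kubectl patch (the Service name may differ in your installation):
kubectl -n ingress-nginx patch svc ingress-nginx \\\n  -p '{\"spec\":{\"externalTrafficPolicy\":\"Local\"}}'\n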
Warning
This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the Ingress-Nginx Controller should be scheduled or not scheduled.
Example
In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is <None>)
$ kubectl get node\nNAME STATUS ROLES EXTERNAL-IP\nhost-1 Ready master 203.0.113.1\nhost-2 Ready node 203.0.113.2\nhost-3 Ready node 203.0.113.3\n
with a ingress-nginx-controller Deployment composed of 2 replicas
$ kubectl -n ingress-nginx get pod -o wide\nNAME READY STATUS IP NODE\ndefault-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2\ningress-nginx-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3\ningress-nginx-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2\n
Requests sent to host-2 and host-3 would be forwarded to NGINX and original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node.
Ingress status
Because NodePort Services do not get a LoadBalancerIP assigned by definition, the Ingress-Nginx Controller does not update the status of Ingress objects it manages.
$ kubectl get ingress\nNAME HOSTS ADDRESS PORTS\ntest-ingress myapp.example.com 80\n
Despite the fact there is no load balancer providing a public IP address to the Ingress-Nginx Controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service.
Warning
There is more to setting externalIPs than just enabling the Ingress-Nginx Controller to update the status of Ingress objects. Please read about this option in the Services page of official Kubernetes documentation as well as the section about External IPs in this document for more information.
Example
Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)
$ kubectl get node\nNAME STATUS ROLES EXTERNAL-IP\nhost-1 Ready master 203.0.113.1\nhost-2 Ready node 203.0.113.2\nhost-3 Ready node 203.0.113.3\n
one could edit the ingress-nginx Service and add the following field to the object spec
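A sketch using the example node addresses above:
spec:\n  externalIPs:\n    - 203.0.113.1\n    - 203.0.113.2\n    - 203.0.113.3\n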
As NGINX is not aware of the port translation operated by the NodePort Service, backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort.
Example
Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain, are generated without NodePort:
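An illustrative sketch of the problem:
$ curl -D- http://myapp.example.com:30100\nHTTP/1.1 308 Permanent Redirect\nServer: nginx/x.x.x\nLocation: https://myapp.example.com/  # no NodePort, so the redirect target is unreachable from outside\n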
"},{"location":"deploy/baremetal/#via-the-host-network","title":"Via the host network","text":"
In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the Ingress-Nginx Controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services.
Note
This approach does not leverage any Service object to expose the Ingress-Nginx Controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it.
This can be achieved by enabling the hostNetwork option in the Pods' spec.
template:\n spec:\n hostNetwork: true\n
Security considerations
Enabling this option exposes every system daemon to the Ingress-Nginx Controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully.
Example
Consider this ingress-nginx-controller Deployment composed of 2 replicas: NGINX Pods inherit the IP address of their host instead of an internal Pod IP.
$ kubectl -n ingress-nginx get pod -o wide\nNAME READY STATUS IP NODE\ndefault-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2\ningress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3\ningress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2\n
One major limitation of this deployment approach is that only a single Ingress-Nginx Controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to such situation fail with the following event:
$ kubectl -n ingress-nginx describe pod <unschedulable-ingress-nginx-controller-pod>\n...\nEvents:\n Type Reason From Message\n ---- ------ ---- -------\n Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.\n
One way to ensure only schedulable Pods are created is to deploy the Ingress-Nginx Controller as a DaemonSet instead of a traditional Deployment.
Info
A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods. For more information, see DaemonSet.
Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion.
Like with NodePorts, this approach has a few quirks it is important to be aware of.
DNS resolution
Pods configured with hostNetwork: true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet. Consider using this setting if NGINX is expected to resolve internal names for any reason.
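A minimal sketch of the relevant pod template fields:
template:\n  spec:\n    hostNetwork: true\n    dnsPolicy: ClusterFirstWithHostNet\n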
Ingress status
Because there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank.
$ kubectl get ingress\nNAME HOSTS ADDRESS PORTS\ntest-ingress myapp.example.com 80\n
Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the Ingress-Nginx Controller.
Example
Given a ingress-nginx-controller DaemonSet composed of 2 replicas
$ kubectl -n ingress-nginx get pod -o wide\nNAME READY STATUS IP NODE\ndefault-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2\ningress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3\ningress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2\n
the controller sets the status of all Ingress objects it manages to the following value:
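An illustrative sketch, reusing the example addresses of host-2 and host-3:
$ kubectl get ingress\nNAME           HOSTS               ADDRESS                   PORTS\ntest-ingress   myapp.example.com   203.0.113.2,203.0.113.3   80\n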
Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments.
"},{"location":"deploy/baremetal/#using-a-self-provisioned-edge","title":"Using a self-provisioned edge","text":"
Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. HAproxy) and is usually managed outside of the Kubernetes landscape by operations teams.
Such deployment builds upon the NodePort Service described above in Over a NodePort Service, with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address.
On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below:
This method does not allow preserving the source IP of HTTP requests in any manner; it is therefore not recommended, despite its apparent simplicity.
The externalIPs Service option was previously mentioned in the NodePort section.
As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node.
Example
Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)
$ kubectl get node\nNAME STATUS ROLES EXTERNAL-IP\nhost-1 Ready master 203.0.113.1\nhost-2 Ready node 203.0.113.2\nhost-3 Ready node 203.0.113.3\n
and the following ingress-nginx NodePort Service
$ kubectl -n ingress-nginx get svc\nNAME TYPE CLUSTER-IP PORT(S)\ningress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP\n
One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port:
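A sketch, reusing the example addresses of host-2 and host-3:
spec:\n  externalIPs:\n    - 203.0.113.2\n    - 203.0.113.3\n
A check that the Service now answers on both the NodePort and the Service port (illustrative output):
$ curl -D- http://myapp.example.com:30100\nHTTP/1.1 200 OK\nServer: nginx/x.x.x\n$ curl -D- http://myapp.example.com\nHTTP/1.1 200 OK\nServer: nginx/x.x.x\n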
There are several ways to harden and secure nginx. In this documentation two guides are used; they overlap in some points:
nginx CIS Benchmark
cipherlist.eu (one of many forks of the now dead project cipherli.st)
This guide describes which of the configurations from those guides are already implemented by default in the nginx implementation of Kubernetes ingress, which need to be configured, which are obsolete because nginx runs as a container (the CIS benchmark relates to a non-containerized installation), and which are difficult or not possible.
Be aware that this is only a guide and you are responsible for your own implementation. Some of the configurations may leave specific clients unable to reach your site, or have similar consequences.
This guide refers to chapters in the CIS Benchmark. For a full explanation, please refer to the benchmark document itself.
"},{"location":"deploy/hardening-guide/#configuration-guide","title":"Configuration Guide","text":"Chapter in CIS benchmark Status Default Action to do if not default 1 Initial Setup 1.1 Installation 1.1.1 Ensure NGINX is installed (Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.1.2 Ensure NGINX is installed from source (Not Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.2 Configure Software Updates 1.2.1 Ensure package manager repositories are properly configured (Not Scored) OK done via helm, nginx version could be overwritten, however compatibility is not ensured then 1.2.2 Ensure the latest software package is installed (Not Scored) ACTION NEEDED done via helm, nginx version could be overwritten, however compatibility is not ensured then Plan for periodic updates 2 Basic Configuration 2.1 Minimize NGINX Modules 2.1.1 Ensure only required modules are installed (Not Scored) OK Already only needed modules are installed, however proposals for further reduction are welcome 2.1.2 Ensure HTTP WebDAV module is not installed (Scored) OK 2.1.3 Ensure modules with gzip functionality are disabled (Scored) OK 2.1.4 Ensure the autoindex module is disabled (Scored) OK No autoindex configs so far in ingress defaults 2.2 Account Security 2.2.1 Ensure that NGINX is run using a non-privileged, dedicated service account (Not Scored) OK Pod configured as user www-data: See this line in helm chart values. Compiled with user www-data: See this line in build script 2.2.2 Ensure the NGINX service account is locked (Scored) OK Docker design ensures this 2.2.3 Ensure the NGINX service account has an invalid shell (Scored) OK Shell is nologin: see this line in build script 2.3 Permissions and Ownership 2.3.1 Ensure NGINX directories and files are owned by root (Scored) OK Obsolete through docker-design and ingress controller needs to update the configs dynamically 2.3.2 Ensure access to NGINX directories and files is restricted (Scored) OK See previous answer 2.3.3 Ensure the NGINX process ID (PID) file is secured (Scored) OK No PID-File due to docker design 2.3.4 Ensure the core dump directory is secured (Not Scored) OK No working_directory configured by default 2.4 Network Configuration 2.4.1 Ensure NGINX only listens for network connections on authorized ports (Not Scored) OK Ensured by automatic nginx.conf configuration 2.4.2 Ensure requests for unknown host names are rejected (Not Scored) OK They are not rejected but send to the \"default backend\" delivering appropriate errors (mostly 404) 2.4.3 Ensure keepalive_timeout is 10 seconds or less, but not 0 (Scored) ACTION NEEDED Default is 75s configure keep-alive to 10 seconds according to this documentation 2.4.4 Ensure send_timeout is set to 10 seconds or less, but not 0 (Scored) RISK TO BE ACCEPTED Not configured, however the nginx default is 60s Not configurable 2.5 Information Disclosure 2.5.1 Ensure server_tokens directive is set to off (Scored) OK server_tokens is configured to off by default 2.5.2 Ensure default error and index.html pages do not reference NGINX (Scored) ACTION NEEDED 404 shows no version at all, 503 and 403 show \"nginx\", which is hardcoded see this line in nginx source code configure custom error pages at least for 403, 404 and 503 and 500 2.5.3 Ensure hidden file serving is disabled (Not Scored) ACTION NEEDED config not set configure a config.server-snippet Snippet, but beware of .well-known challenges or similar. 
Refer to the benchmark here please 2.5.4 Ensure the NGINX reverse proxy does not enable information disclosure (Scored) ACTION NEEDED hide not configured configure hide-headers with array of \"X-Powered-By\" and \"Server\": according to this documentation 3 Logging 3.1 Ensure detailed logging is enabled (Not Scored) OK nginx ingress has a very detailed log format by default 3.2 Ensure access logging is enabled (Scored) OK Access log is enabled by default 3.3 Ensure error logging is enabled and set to the info logging level (Scored) OK Error log is configured by default. The log level does not matter, because it is all sent to STDOUT anyway 3.4 Ensure log files are rotated (Scored) OBSOLETE Log file handling is not part of the nginx ingress and should be handled separately 3.5 Ensure error logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.6 Ensure access logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.7 Ensure proxies pass source IP information (Scored) OK Headers are set by default 4 Encryption 4.1 TLS / SSL Configuration 4.1.1 Ensure HTTP is redirected to HTTPS (Scored) OK Redirect to TLS is default 4.1.2 Ensure a trusted certificate and trust chain is installed (Not Scored) ACTION NEEDED For installing certs there are enough manuals in the web. A good way is to use lets encrypt through cert-manager Install proper certificates or use lets encrypt with cert-manager 4.1.3 Ensure private key permissions are restricted (Scored) ACTION NEEDED See previous answer 4.1.4 Ensure only modern TLS protocols are used (Scored) OK/ACTION NEEDED Default is TLS 1.2 + 1.3, while this is okay for CIS Benchmark, cipherlist.eu only recommends 1.3. This may cut off old OS's Set controller.config.ssl-protocols to \"TLSv1.3\" 4.1.5 Disable weak ciphers (Scored) ACTION NEEDED Default ciphers are already good, but cipherlist.eu recommends even stronger ciphers Set controller.config.ssl-ciphers to \"EECDH+AESGCM:EDH+AESGCM\" 4.1.6 Ensure custom Diffie-Hellman parameters are used (Scored) ACTION NEEDED No custom DH parameters are generated Generate dh parameters for each ingress deployment you use - see here for a how to 4.1.7 Ensure Online Certificate Status Protocol (OCSP) stapling is enabled (Scored) ACTION NEEDED Not enabled set via this configuration parameter 4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored) OK HSTS is enabled by default 4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored) ACTION NEEDED / RISK TO BE ACCEPTED HKPK not enabled by default If lets encrypt is not used, set correct HPKP header. There are several ways to implement this - with the helm charts it works via controller.add-headers. 
If lets encrypt is used, this is complicated, a solution here is yet unknown 4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, manual is here 4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, see configuration here 4.1.12 Ensure your domain is preloaded (Not Scored) ACTION NEEDED Preload is not active by default Set controller.config.hsts-preload to true 4.1.13 Ensure session resumption is disabled to enable perfect forward security (Scored) OK Session tickets are disabled by default 4.1.14 Ensure HTTP/2.0 is used (Not Scored) OK http2 is set by default 5 Request Filtering and Restrictions 5.1 Access Control 5.1.1 Ensure allow and deny filters limit access to specific IP addresses (Not Scored) OK/ACTION NEEDED Depends on use case, geo ip module is compiled into Ingress-Nginx Controller, there are several ways to use it If needed set IP restrictions via annotations or work with config snippets (be careful with lets-encrypt-http-challenge!) 5.1.2 Ensure only whitelisted HTTP methods are allowed (Not Scored) OK/ACTION NEEDED Depends on use case If required it can be set via config snippet 5.2 Request Limits 5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) ACTION NEEDED Default timeout is 60s Set via this configuration parameter and respective body equivalent 5.2.2 Ensure the maximum request body size is set correctly (Scored) ACTION NEEDED Default is 1m set via this configuration parameter 5.2.3 Ensure the maximum buffer size for URIs is defined (Scored) ACTION NEEDED Default is 4 8k Set via this configuration parameter 5.2.4 Ensure the number of connections per IP address is limited (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.2.5 Ensure rate limits by IP address are set (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.3 Browser Security 5.3.1 Ensure X-Frame-Options header is configured and enabled (Scored) ACTION NEEDED Header not set by default Several ways to implement this - with the helm charts it works via controller.add-headers 5.3.2 Ensure X-Content-Type-Options header is configured and enabled (Scored) ACTION NEEDED See previous answer See previous answer 5.3.3 Ensure the X-XSS-Protection Header is enabled and configured properly (Scored) ACTION NEEDED See previous answer See previous answer 5.3.4 Ensure that Content Security Policy (CSP) is enabled and configured properly (Not Scored) ACTION NEEDED See previous answer See previous answer 5.3.5 Ensure the Referrer Policy is enabled and configured properly (Not Scored) ACTION NEEDED Depends on application. It should be handled in the applications webserver itself, not in the load balancing ingress check backend webserver 6 Mandatory Access Control n/a too high level, depends on backends"},{"location":"deploy/rbac/","title":"Role Based Access Control (RBAC)","text":""},{"location":"deploy/rbac/#overview","title":"Overview","text":"
This example applies to ingress-nginx-controllers being deployed in an environment with RBAC enabled.
Role Based Access Control is comprised of four layers:
ClusterRole - permissions assigned to a role that apply to an entire cluster
ClusterRoleBinding - binding a ClusterRole to a specific account
Role - permissions assigned to a role that apply to a specific namespace
RoleBinding - binding a Role to a specific account
In order for RBAC to be applied to an ingress-nginx-controller, that controller should be assigned to a ServiceAccount. That ServiceAccount should be bound to the Roles and ClusterRoles defined for the ingress-nginx-controller.
"},{"location":"deploy/rbac/#service-accounts-created-in-this-example","title":"Service Accounts created in this example","text":"
One ServiceAccount is created in this example, ingress-nginx.
"},{"location":"deploy/rbac/#permissions-granted-in-this-example","title":"Permissions Granted in this example","text":"
There are two sets of permissions defined in this example. Cluster-wide permissions defined by the ClusterRole named ingress-nginx, and namespace specific permissions defined by the Role named ingress-nginx.
These permissions are granted in order for the ingress-nginx-controller to be able to function as an ingress across the cluster. These permissions are granted to the ClusterRole named ingress-nginx
These permissions are granted specific to the ingress-nginx namespace. These permissions are granted to the Role named ingress-nginx
configmaps, pods, secrets: get
endpoints: get
Furthermore, to support leader election, the ingress-nginx-controller needs access to the leases resource, using the resourceName ingress-nginx-leader
Note that resourceNames can NOT be used to limit requests using the \u201ccreate\u201d verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a \u201ccreate\u201d request are part of the request body).
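A sketch of the corresponding Role rules, split into two rules precisely because create cannot be restricted by resourceName (verify against the current chart):
- apiGroups:\n    - coordination.k8s.io\n  resources:\n    - leases\n  resourceNames:\n    - ingress-nginx-leader\n  verbs:\n    - get\n    - update\n- apiGroups:\n    - coordination.k8s.io\n  resources:\n    - leases\n  verbs:\n    - create\n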
The ServiceAccount ingress-nginx is bound to the Role ingress-nginx and the ClusterRole ingress-nginx.
The serviceAccountName in the deployment's pod template must match the name of the created ServiceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be the ingress-nginx namespace.
No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx.
Simply change the v1.0.4 tag to the version you wish to upgrade to. The easiest way to do this is, e.g. (do note you may need to change the name parameter according to your installation):
kubectl set image deployment/ingress-nginx-controller \\\n controller=registry.k8s.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d \\\n -n ingress-nginx\n
For interactive editing, use kubectl edit deployment ingress-nginx-controller -n ingress-nginx.
This part of the code is responsible for the main logic of Ingress NGINX. It contains all the logic that parses Ingress objects and annotations, watches Endpoints, and turns them into usable nginx.conf configuration.
It contains the kubectl plugin for inspecting your ingress-nginx deployments. This part of the code can be found in the cmd/plugin directory. Detailed function flows and available commands can be found in kubectl-plugin.
Contains the Dockerfiles and scripts used to build base images that are used in other parts of the repo. They are present in the images directory. Some examples:
* nginx - The base NGINX image ingress-nginx uses is not a vanilla NGINX. It bundles many libraries together, and it is a job in itself to maintain that and keep things up-to-date.
* custom-error-pages - Used in the custom error page examples.
The image used to build the final ingress controller, used in deploy scripts and Helm charts.
This is NGINX with some Lua enhancements. Dynamic certificate handling, endpoint handling, canary traffic splitting, custom load balancing, etc. are done in this component. One can also add new functionality using the Lua plugin system.
This document explains how to get started with developing for Ingress-Nginx Controller.
For the really new contributors who want to contribute to the INGRESS-NGINX project, but need help with understanding some basic concepts that are needed to work with the Kubernetes ingress resource, here is a link to the New Contributors Guide. This guide contains tips on how an HTTP/HTTPS request travels from a browser or a curl command to the webserver process running inside a container, in a pod, in a Kubernetes cluster, entering the cluster via an ingress resource. Those who are familiar with basic networking concepts such as routing of a packet for an HTTP request, connection termination, and reverse proxying can skip it, or read it anyway for context and provide feedback, if any.
Start a local Kubernetes cluster using kind, build and deploy the ingress controller
make dev-env\n
- If you are working on the v1.x.x version of this controller, and you want to create a cluster with kubernetes version 1.22, then please visit the documentation for kind, and look for how to set a custom image for the kind node (image: kindest/node...), in the kind config file."},{"location":"developer-guide/getting-started/#testing","title":"Testing","text":"
Run go unit tests
make test\n
Run unit-tests for lua code
make lua-test\n
Lua tests are located in the directory rootfs/etc/nginx/lua/test
Important
Test files must follow the naming convention <mytest>_test.lua or it will be ignored
Run e2e test suite
make kind-e2e-test\n
To limit the scope of the tests to execute, we can use the environment variable FOCUS
FOCUS=\"no-auth-locations\" make kind-e2e-test\n
Note
The variable FOCUS defines Ginkgo Focused Specs
Valid values are defined in the describe definition of the e2e tests like Default Backend
A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project. For this reason, the ingress-nginx project is adopting it.
"},{"location":"enhancements/#quick-start-for-the-kep-process","title":"Quick start for the KEP process","text":"
Follow the process outlined in the KEP template
"},{"location":"enhancements/#do-i-have-to-use-the-kep-process","title":"Do I have to use the KEP process?","text":"
No... but we hope that you will. Over time having a rich set of KEPs in one place will make it easier for people to track what is going on in the community and find a structured historic record.
KEPs are only required when the changes are wide ranging and impact most of the project.
"},{"location":"enhancements/#why-would-i-want-to-use-the-kep-process","title":"Why would I want to use the KEP process?","text":"
Our aim with KEPs is to clearly communicate new efforts to the Kubernetes contributor community. As such, we want to build a well curated set of clear proposals in a common format with useful metadata.
Benefits to KEP users (in the limit):
Exposure on a kubernetes blessed web site that is findable via web search engines.
Cross indexing of KEPs so that users can find connections and the current status of any KEP.
A clear process with approvers and reviewers for making decisions. This will lead to more structured decisions that stick as there is a discoverable record around the decisions.
We are inspired by IETF RFCs, Python PEPs, and Rust RFCs.
"},{"location":"enhancements/20190724-only-dynamic-ssl/","title":"Remove static SSL configuration mode","text":""},{"location":"enhancements/20190724-only-dynamic-ssl/#table-of-contents","title":"Table of Contents","text":"
Since release 0.19.0 it has been possible to configure SSL certificates without the need for NGINX reloads (thanks to Lua), and since release 0.24.0 the dynamic mode is enabled by default.
Deprecate the flag --enable-dynamic-certificates.
Move the directives ssl_certificate and ssl_certificate_key from each server block to the http section. These settings are required to avoid NGINX errors in the logs.
Remove any action of the flag --enable-dynamic-certificates
"},{"location":"enhancements/20190815-zone-aware-routing/","title":"Availability zone aware routing","text":""},{"location":"enhancements/20190815-zone-aware-routing/#table-of-contents","title":"Table of Contents","text":"
Teach ingress-nginx about availability zones where endpoints are running in. This way ingress-nginx pod will do its best to proxy to zone-local endpoint.
When users run their services across multiple availability zones they usually pay for egress traffic between zones. Providers such as GCP and Amazon EC2 usually charge extra for this feature. When picking an endpoint to route a request to, ingress-nginx does not consider whether the endpoint is in a different zone or the same one. That means it is at least equally likely to pick an endpoint from another zone and proxy the request to it. In this situation the response from the endpoint to the ingress-nginx pod is considered inter-zone traffic and usually costs extra money.
At the time of this writing, GCP charges $0.01 per GB of inter-zone egress traffic according to https://cloud.google.com/compute/network-pricing. According to https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/, Amazon charges the same amount as GCP for cross-zone egress traffic.
This can be a lot of money depending on one's traffic. By teaching ingress-nginx about zones, we can eliminate or at least decrease this cost.
Arguably, intra-zone network latency should also be lower than inter-zone latency.
This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from the ingress-nginx pod(s) in that zone.
This feature will be relying on https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone, it is not this KEP's goal to support other cases
The idea here is to have the controller part of ingress-nginx (1) detect what zone its current pod is running in and (2) detect the zone for every endpoint it knows about. After that, it will post that data as part of the endpoints to Lua land. When picking an endpoint, the Lua balancer will try to pick a zone-local endpoint first, and if there is no zone-local endpoint it will fall back to the current behavior.
Initially, this feature should be optional since it is going to make it harder to reason about the load balancing and not everyone might want that.
How does the controller know what zone it runs in? We can have the pod spec pass the node name using the downward API as an environment variable. Upon startup, the controller can get node details from the API based on the node name. Once the node details are obtained, we can extract the zone from the failure-domain.beta.kubernetes.io/zone annotation. Then we can pass that value to Lua land through the NGINX configuration when loading the lua_ingress.lua module in the init_by_lua phase.
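For illustration, a minimal sketch of how the controller pod spec could pass the node name via the downward API (the field path is the standard Kubernetes API; the environment variable name here is arbitrary):

```yaml
# Sketch: expose the node name to the controller via the downward API.
env:
  - name: POD_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
```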
How do we extract zones for endpoints? We can have the controller watch create and update events on nodes in the entire cluster and, based on that, keep a map of nodes to zones in memory. When we generate the endpoints list, we can access the node name using .subsets.addresses[i].nodeName, fetch the zone from the map in memory, and store it as a field on the endpoint. This solution assumes the failure-domain.beta.kubernetes.io/zone annotation does not change until the end of the node's life. Otherwise, we would have to watch update events on the nodes as well, and that would add even more overhead.
Alternatively, we can fetch the list of nodes only when there is no entry in memory for the given node name. This is probably a better solution because it avoids watching for API changes on node resources. We can eagerly fetch all the nodes and build the node-name-to-zone mapping on start. From there on, it will sync during endpoint building in the main event loop if there is no existing entry for the node of an endpoint. This means an extra API call in case the cluster has expanded.
How do we make sure we do our best to choose a zone-local endpoint? This will be done on the Lua side. For every backend, we will initialize two balancer instances: (1) with all endpoints, and (2) with only the endpoints corresponding to the current zone for the backend. Then, once we choose which backend needs to serve a given request, we will first try to use the zonal balancer for that backend. If the zonal balancer does not exist (i.e. there is no zonal endpoint), we will use the general balancer. In case of zonal outages, we assume that the readiness probe will fail, the controller will see no endpoints for the backend, and therefore we will use the general balancer.
We can enable the feature using a ConfigMap setting. Doing it this way makes it easier to roll back in case of a problem.
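A minimal sketch of what such a ConfigMap toggle could look like; the key name below is purely hypothetical and would be settled during implementation:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Hypothetical key; the real name would be decided when the feature lands.
  zone-aware-routing: "true"
```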
"},{"location":"enhancements/20231001-split-containers/","title":"Proposal to split containers","text":"
All the NGINX files should live on one container
No file other than NGINX files should exist on this container
This includes not mounting the service account
All the controller files should live on a different container
Controller container should have the bare minimum to work (just the Go program)
ServiceAccount should be mounted just on the controller
Inside the nginx container, there should be a really small HTTP listener able only to start, stop and reload NGINX
"},{"location":"enhancements/20231001-split-containers/#roadmap-what-needs-to-be-done","title":"Roadmap (what needs to be done)","text":"
Map what needs to be done to mount the SA just on controller container
Map all the required files for NGINX to work
Map all the required network calls between controller and NGINX
e.g.: Dynamic Lua reconfiguration
Map problematic features that will need attention
SSL Passthrough today happens in the controller process and needs to happen in NGINX
"},{"location":"enhancements/20231001-split-containers/#ports-and-endpoints-on-nginx-container","title":"Ports and endpoints on NGINX container","text":"
Public HTTP/HTTPS ports - 80 and 443
Lua configuration port - 10246 (HTTP) and 10247 (Stream)
3333 (temp) - Dataplane controller HTTP server
/reload - (POST) Reloads the configuration.
"config" argument is the location of the temporary file that should be used / moved to nginx.conf
/test - (POST) Tests the configuration of a given file location.
"config" argument is the location of the temporary file that should be tested (a usage sketch follows below)
"},{"location":"enhancements/20231001-split-containers/#mounting-empty-sa-on-controller-container","title":"Mounting empty SA on controller container","text":"
"},{"location":"enhancements/20231001-split-containers/#mapped-folders-on-nginx-configuration","title":"Mapped folders on NGINX configuration","text":"
WARNING We need to be aware of mounts shared between containers and inode problems. If we mount a file instead of a directory, it may take time for changes to the file to be reflected in the target container
This is the title of the KEP. Keep it simple and descriptive. A good title can help communicate what the KEP is and should be considered as part of any review.
The title should be lowercased and spaces/punctuation should be replaced with -.
To get started with this template:
Make a copy of this template. Create a copy of this template and name it YYYYMMDD-my-title.md, where YYYYMMDD is the date the KEP was first drafted.
Fill out the \"overview\" sections. This includes the Summary and Motivation sections. These should be easy if you've preflighted the idea of the KEP in an issue.
Create a PR. Assign it to folks that are sponsoring this process.
Create an issue When filing an enhancement tracking issue, please ensure to complete all fields in the template.
Merge early. Avoid getting hung up on specific details and instead aim to get the goal of the KEP merged quickly. The best way to do this is to just start with the "Overview" sections and fill out details incrementally in follow-on PRs. View anything marked as provisional as a working document subject to change. Aim for single-topic PRs to keep discussions focused. If you disagree with what is already in a document, open a new PR with suggested changes.
The canonical place for the latest set of instructions (and the likely source of this file) is here.
The Metadata section above is intended to support the creation of tooling around the KEP process. This will be a YAML section that is fenced as a code block. See the KEP process for details on each of these items.
"},{"location":"enhancements/YYYYMMDD-kep-template/#table-of-contents","title":"Table of Contents","text":"
A table of contents is helpful for quickly jumping to sections of a KEP and for highlighting any additional information provided beyond the standard KEP template.
Ensure the TOC is wrapped with <!-- toc --><!-- /toc --> tags, and then generate with hack/update-toc.sh.
The Summary section is incredibly important for producing high quality user-focused documentation such as release notes or a development roadmap. It should be possible to collect this information before implementation begins in order to avoid requiring implementers to split their attention between writing release notes and implementing the feature itself.
A good summary is probably at least a paragraph in length.
This section is for explicitly listing the motivation, goals and non-goals of this KEP. Describe why the change is important and the benefits to users. The motivation section can optionally provide links to experience reports to demonstrate the interest in a KEP within the wider Kubernetes community.
Detail the things that people will be able to do if this KEP is implemented. Include as much detail as possible so that people can understand the \"how\" of the system. The goal here is to make this feel real for users without getting bogged down.
What are the caveats to the implementation? What are some important details that didn't come across above? Go into as much detail as necessary here. This might be a good place to talk about core concepts and how they relate.
"},{"location":"enhancements/YYYYMMDD-kep-template/#risks-and-mitigations","title":"Risks and Mitigations","text":"
What are the risks of this proposal and how do we mitigate them? Think broadly. For example, consider both security and how this will impact the larger Kubernetes ecosystem.
How will security be reviewed and by whom? How will UX be reviewed and by whom?
Consider including folks who also work outside the project.
Note: Section not required until targeted at a release.
Consider the following in developing a test plan for this enhancement:
Will there be e2e and integration tests, in addition to unit tests?
How will it be tested in isolation vs with other components?
No need to outline all of the test cases, just the general strategy. Anything that would count as tricky in the implementation and anything particularly challenging to test should be called out.
All code is expected to have adequate tests (eventually with coverage expectations). Please adhere to the Kubernetes testing guidelines when drafting this test plan.
"},{"location":"enhancements/YYYYMMDD-kep-template/#removing-a-deprecated-flag","title":"Removing a deprecated flag","text":"
Announce deprecation and support policy of the existing flag
Two versions have passed since introducing the functionality which deprecates the flag (to address version skew)
Address feedback on usage/changed behavior, provided on GitHub issues
Similar to the Drawbacks section the Alternatives section is used to highlight and record other possible approaches to delivering the value proposed by a KEP.
This directory contains a catalog of examples on how to run, configure and scale Ingress. Please review the prerequisites before trying them.
The examples on these pages include the spec.ingressClassName field which replaces the deprecated kubernetes.io/ingress.class: nginx annotation. Users of ingress-nginx < 1.0.0 (Helm chart < 4.0.0) should use the legacy documentation.
For more information, check out the Migration to apiVersion networking.k8s.io/v1 guide.
| Category | Name | Description | Complexity Level |
|---|---|---|---|
| Apps | Docker Registry | TODO | TODO |
| Auth | Basic authentication | password protect your website | Intermediate |
| Auth | Client certificate authentication | secure your website with client certificate authentication | Intermediate |
| Auth | External authentication plugin | defer to an external authentication service | Intermediate |
| Auth | OAuth external auth | TODO | TODO |
| Customization | Configuration snippets | customize nginx location configuration using annotations | Advanced |
| Customization | Custom configuration | TODO | TODO |
| Customization | Custom DH parameters for perfect forward secrecy | TODO | TODO |
| Customization | Custom errors | serve custom error pages from the default backend | Intermediate |
| Customization | Custom headers | set custom headers before sending traffic to backends | Advanced |
| Customization | External authentication with response header propagation | TODO | TODO |
| Customization | Sysctl tuning | TODO | TODO |
| Features | Rewrite | TODO | TODO |
| Features | Session stickiness | route requests consistently to the same endpoint | Advanced |
| Features | Canary Deployments | weighted canary routing to a separate deployment | Intermediate |
| Scaling | Static IP | a single ingress gets a single static IP | Intermediate |
| TLS | Multi TLS certificate termination | TODO | TODO |
| TLS | TLS termination | TODO | TODO |
"},{"location":"examples/PREREQUISITES/","title":"Prerequisites","text":"
Many of the examples in this directory have common prerequisites.
CA Authentication, also known as Mutual Authentication, allows both the server and client to verify each other's identity via a common CA.
We have a CA Certificate which we usually obtain from a Certificate Authority and use that to sign both our server certificate and client certificate. Then every time we want to access our backend, we must pass the client certificate.
These instructions are based on the following blog
Session affinity can be configured using the following annotations:
| Name | Description | Value |
|---|---|---|
| nginx.ingress.kubernetes.io/affinity | Type of the affinity, set this to cookie to enable session affinity | string (NGINX only supports cookie) |
| nginx.ingress.kubernetes.io/affinity-mode | The affinity mode defines how sticky a session is. Use balanced to redistribute some sessions when scaling pods, or persistent for maximum stickiness. | balanced (default) or persistent |
| nginx.ingress.kubernetes.io/affinity-canary-behavior | Defines session affinity behavior of canaries. By default the behavior is sticky, and canaries respect session affinity configuration. Set this to legacy to restore the original canary behavior, when session affinity parameters were not respected. | sticky (default) or legacy |
| nginx.ingress.kubernetes.io/session-cookie-name | Name of the cookie that will be created | string (defaults to INGRESSCOOKIE) |
| nginx.ingress.kubernetes.io/session-cookie-secure | Set the cookie as secure regardless of the protocol of the incoming request | "true" or "false" |
| nginx.ingress.kubernetes.io/session-cookie-path | Path that will be set on the cookie (required if your Ingress paths use regular expressions) | string (defaults to the currently matched path) |
| nginx.ingress.kubernetes.io/session-cookie-domain | Domain that will be set on the cookie | string |
| nginx.ingress.kubernetes.io/session-cookie-samesite | SameSite attribute to apply to the cookie | Browser accepted values are None, Lax, and Strict |
| nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none | Will omit the SameSite=None attribute for older browsers which reject the more-recently defined SameSite=None value | "true" or "false" |
| nginx.ingress.kubernetes.io/session-cookie-max-age | Time until the cookie expires, corresponds to the Max-Age cookie directive | number of seconds |
| nginx.ingress.kubernetes.io/session-cookie-expires | Legacy version of the previous annotation for compatibility with older browsers, generates an Expires cookie directive by adding the seconds to the current date | number of seconds |
| nginx.ingress.kubernetes.io/session-cookie-change-on-failure | When set to false, nginx ingress will send the request to the upstream pointed to by the sticky cookie even if the previous attempt failed. When set to true and the previous attempt failed, the sticky cookie will be changed to point to another upstream. | true or false (defaults to false) |
You can create the session affinity example Ingress to test this:
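A minimal sketch of such an Ingress, assuming a Service named http-svc on port 80 (the annotation names and values come from the table above; the host is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-with-affinity
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  ingressClassName: nginx
  rules:
  - host: stickyingress.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80
```

After applying it, a request to the host should come back with a Set-Cookie: INGRESSCOOKIE=... header, which is what the paragraph below describes.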
In the example above, you can see that the response contains a Set-Cookie header with the settings we have defined. This cookie is created by the Ingress-Nginx Controller, it contains a randomly generated key corresponding to the upstream used for that request (selected using consistent hashing) and has an Expires directive. If a client sends a cookie that doesn't correspond to an upstream, NGINX selects an upstream and creates a corresponding cookie.
If the backend pool grows, NGINX will keep sending the requests to the same server that handled the first request, even if it is overloaded.
When the backend server is removed, the requests are re-routed to another upstream server. This does not require the cookie to be updated because the key's consistent hash will change.
When you have a Service pointing to more than one Ingress, with only one containing affinity configuration, the first created Ingress will be used. This means you can face a situation where you have configured session affinity on one Ingress and it doesn't work, because the Service is pointing to another Ingress that doesn't configure it.
This example shows how to add authentication to an Ingress rule using a secret that contains a file generated with htpasswd. It's important that the generated file is named auth (actually - that the secret has a key data.auth), otherwise the ingress controller returns a 503.
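A short sketch of creating such a secret with htpasswd, assuming a user named foo (you will be prompted for the password):

```bash
# Create a file named "auth" containing the user foo.
htpasswd -c auth foo
# Store it in a secret; the file name becomes the required data.auth key.
kubectl create secret generic basic-auth --from-file=auth
```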
"},{"location":"examples/auth/basic/#using-kubectl-create-an-ingress-tied-to-the-basic-auth-secret","title":"Using kubectl, create an ingress tied to the basic-auth secret","text":"
$ echo \"\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-with-auth\n annotations:\n # type of authentication\n nginx.ingress.kubernetes.io/auth-type: basic\n # name of the secret that contains the user/password definitions\n nginx.ingress.kubernetes.io/auth-secret: basic-auth\n # message to display with an appropriate context why the authentication is required\n nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - foo'\nspec:\n ingressClassName: nginx\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n service: \n name: http-svc\n port: \n number: 80\n\" | kubectl create -f -\n
"},{"location":"examples/auth/basic/#use-curl-to-confirm-authorization-is-required-by-the-ingress","title":"Use curl to confirm authorization is required by the ingress","text":"
"},{"location":"examples/auth/basic/#use-curl-with-the-correct-credentials-to-connect-to-the-ingress","title":"Use curl with the correct credentials to connect to the ingress","text":"
$ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' -u 'foo:bar'\n* Trying 10.2.29.4...\n* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)\n* Server auth using Basic with user 'foo'\n> GET / HTTP/1.1\n> Host: foo.bar.com\n> Authorization: Basic Zm9vOmJhcg==\n> User-Agent: curl/7.43.0\n> Accept: */*\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.10.0\n< Date: Wed, 11 May 2016 06:05:26 GMT\n< Content-Type: text/plain\n< Transfer-Encoding: chunked\n< Connection: keep-alive\n< Vary: Accept-Encoding\n<\nCLIENT VALUES:\nclient_address=10.2.29.4\ncommand=GET\nreal path=/\nquery=nil\nrequest_version=1.1\nrequest_uri=http://foo.bar.com:8080/\n\nSERVER VALUES:\nserver_version=nginx: 1.9.11 - lua: 10001\n\nHEADERS RECEIVED:\naccept=*/*\nconnection=close\nhost=foo.bar.com\nuser-agent=curl/7.43.0\nx-request-id=e426c7829ef9f3b18d40730857c3eddb\nx-forwarded-for=10.2.29.1\nx-forwarded-host=foo.bar.com\nx-forwarded-port=80\nx-forwarded-proto=http\nx-real-ip=10.2.29.1\nx-scheme=http\nBODY:\n* Connection #0 to host 10.2.29.4 left intact\n-no body in request-\n
Note: Make sure that the Key Size is greater than 1024 and the Hashing Algorithm (Digest) is something better than MD5 for each certificate generated, otherwise you will receive an error.
This functionality is enabled by deploying multiple Ingress objects for a single host. One Ingress object has no special annotations and handles authentication.
Other Ingress objects can then be annotated in such a way that they require the user to authenticate against the first Ingress's endpoint, and can redirect 401s to the same endpoint.
This example will show you how to deploy oauth2_proxy into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider.
This example will show you how to deploy Vouch Proxy into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider.
Ingress NGINX has the ability to handle canary routing by setting specific annotations. The following is an example of how to configure a canary deployment with weighted canary routing.
"},{"location":"examples/canary/#create-your-main-deployment-and-service","title":"Create your main deployment and service","text":"
This is the main deployment of your application with the service that will be used to route to it
"},{"location":"examples/canary/#create-ingress-pointing-to-your-canary-deployment","title":"Create Ingress Pointing To Your Canary Deployment","text":"
You will then create an Ingress that has the canary-specific configuration. Please take special note of the following:
The host name is identical to the main ingress host name
The nginx.ingress.kubernetes.io/canary: "true" annotation is required and marks this Ingress as a canary (if you do not have this, the Ingresses will clash)
The nginx.ingress.kubernetes.io/canary-weight: "50" annotation dictates the weight of the routing; in this case there is a 50% chance a request will hit the canary deployment instead of the main deployment
The Ingress in this example adds a custom header to Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at an example of specifying custom headers.
This example demonstrates how to use a custom backend to render custom error pages.
If you are using the Helm chart, look at the example values and don't forget to add the configMap to your deployment. Otherwise, continue with the Customized default backend manual deployment.
First, create the custom default-backend. It will be used by the Ingress controller later on. To do that, you can take a look at the example manifest in this project's GitHub repository.
If you do not already have an instance of the Ingress-Nginx Controller running, deploy it according to the deployment guide, then follow these steps:
Edit the ingress-nginx-controller Deployment and set the value of the --default-backend-service flag to the name of the newly created error backend.
Edit the ingress-nginx-controller ConfigMap and create the key custom-http-errors with a value of 404,503.
Take note of the IP address assigned to the Ingress-Nginx Controller Service.
$ kubectl get svc ingress-nginx\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ningress-nginx ClusterIP 10.0.0.13 <none> 80/TCP,443/TCP 10m\n
Note
The ingress-nginx Service is of type ClusterIP in this example. This may vary depending on your environment. Make sure you can use the Service to reach NGINX before proceeding with the rest of this example.
Let us send a couple of HTTP requests using cURL and validate everything is working as expected.
A request to the default backend returns a 404 error with a custom message:
$ curl -D- http://10.0.0.13/\nHTTP/1.1 404 Not Found\nServer: nginx/1.13.12\nDate: Tue, 12 Jun 2018 19:11:24 GMT\nContent-Type: */*\nTransfer-Encoding: chunked\nConnection: keep-alive\n\n<span>The page you're looking for could not be found.</span>\n
A request with a custom Accept header returns the corresponding document type (JSON):
$ curl -D- -H 'Accept: application/json' http://10.0.0.13/\nHTTP/1.1 404 Not Found\nServer: nginx/1.13.12\nDate: Tue, 12 Jun 2018 19:12:36 GMT\nContent-Type: application/json\nTransfer-Encoding: chunked\nConnection: keep-alive\nVary: Accept-Encoding\n\n{ \"message\": \"The page you're looking for could not be found\" }\n
To go further with this example, feel free to deploy your own applications and Ingress objects, and validate that the responses are still in the correct format when a backend returns 503 (e.g. if you scale a Deployment down to 0 replicas).
configmap.yaml defines a ConfigMap in the ingress-nginx namespace named ingress-nginx-controller. This controls the global configuration of the ingress controller, and already exists in a standard installation. The key proxy-set-headers is set to reference the previously created ingress-nginx/custom-headers ConfigMap.
The Ingress-Nginx Controller will read the ingress-nginx/ingress-nginx-controller ConfigMap, find the proxy-set-headers key, read HTTP headers from the ingress-nginx/custom-headers ConfigMap, and include those HTTP headers in all requests flowing from nginx to the backends.
The above example was for passing a custom list of headers to the upstream server. To pass the custom headers before sending response traffic to the client, use the add-headers key:
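A minimal sketch, mirroring the proxy-set-headers wiring described above but for response headers; it assumes a ConfigMap of headers named custom-headers in the ingress-nginx namespace:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # References a ConfigMap whose key/values are header names and values
  # to add to responses before they are sent to the client.
  add-headers: ingress-nginx/custom-headers
```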
Check the contents of the ConfigMaps are present in the nginx.conf file using: kubectl exec ingress-nginx-controller-873061567-4n3k2 -n ingress-nginx -- cat /etc/nginx/nginx.conf
"},{"location":"examples/customization/external-auth-headers/","title":"External authentication, authentication service response headers propagation","text":"
This example demonstrates propagation of selected authentication service response headers to a backend service.
Sample configuration includes:
Sample authentication service producing several response headers
Authentication logic is based on the HTTP header User: requests with a User header containing the string internal are considered authenticated
After successful authentication, the service generates the response headers UserID and UserRole
Sample echo service displaying header information
Two ingress objects pointing to echo service
Public, which allows access from unauthenticated users
Private, which allows access from authenticated users only
"},{"location":"examples/customization/external-auth-headers/#test-1-public-service-with-no-auth-header","title":"Test 1: public service with no auth header","text":"
$ curl -H 'Host: public-demo-echo-service.kube.local' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n* Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: public-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:19:21 GMT\n< Content-Type: text/plain; charset=utf-8\n< Content-Length: 20\n< Connection: keep-alive\n<\n* Connection #0 to host 192.168.99.100 left intact\nUserID: , UserRole:\n
"},{"location":"examples/customization/external-auth-headers/#test-2-secure-service-with-no-auth-header","title":"Test 2: secure service with no auth header","text":"
$ curl -H 'Host: secure-demo-echo-service.kube.local' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n* Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: secure-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n>\n< HTTP/1.1 403 Forbidden\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:18:48 GMT\n< Content-Type: text/html\n< Content-Length: 170\n< Connection: keep-alive\n<\n<html>\n<head><title>403 Forbidden</title></head>\n<body bgcolor=\"white\">\n<center><h1>403 Forbidden</h1></center>\n<hr><center>nginx/1.11.10</center>\n</body>\n</html>\n* Connection #0 to host 192.168.99.100 left intact\n
"},{"location":"examples/customization/external-auth-headers/#test-3-public-service-with-valid-auth-header","title":"Test 3: public service with valid auth header","text":"
$ curl -H 'Host: public-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n* Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: public-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n> User:internal\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:19:59 GMT\n< Content-Type: text/plain; charset=utf-8\n< Content-Length: 44\n< Connection: keep-alive\n<\n* Connection #0 to host 192.168.99.100 left intact\nUserID: 1443635317331776148, UserRole: admin\n
"},{"location":"examples/customization/external-auth-headers/#test-4-secure-service-with-valid-auth-header","title":"Test 4: secure service with valid auth header","text":"
$ curl -H 'Host: secure-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100\n* Rebuilt URL to: 192.168.99.100/\n* Trying 192.168.99.100...\n* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0)\n> GET / HTTP/1.1\n> Host: secure-demo-echo-service.kube.local\n> User-Agent: curl/7.43.0\n> Accept: */*\n> User:internal\n>\n< HTTP/1.1 200 OK\n< Server: nginx/1.11.10\n< Date: Mon, 13 Mar 2017 20:17:23 GMT\n< Content-Type: text/plain; charset=utf-8\n< Content-Length: 43\n< Connection: keep-alive\n<\n* Connection #0 to host 192.168.99.100 left intact\nUserID: 605394647632969758, UserRole: admin\n
"},{"location":"examples/customization/jwt/","title":"Accommodation for JWT","text":"
JWT (short for JSON Web Token) is a widely used authentication method. Basically, an authentication server generates a JWT, and you then use this token in every request you make to a backend service. The JWT can be quite big and is present in the HTTP headers of every request. This means you may have to adapt the maximum header size of your nginx-ingress in order to support it.
If you use JWT and you get an HTTP 502 error from your ingress, it may be a sign that the buffer size is not big enough.
To be 100% sure look at the logs of the ingress-nginx-controller pod, you should see something like this:
upstream sent too big header while reading response header from upstream...\n
"},{"location":"examples/customization/jwt/#increase-buffer-size-for-headers","title":"Increase buffer size for headers","text":"
In NGINX, we want to modify the property proxy-buffer-size. The size is arbitrary and depends on your needs. Be aware that a high value can lower the performance of your ingress proxy; in general, a value of 16k should have you covered.
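A minimal sketch of setting this in the controller ConfigMap (proxy-buffer-size is a standard ingress-nginx configuration key):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Increase the buffer used for reading response headers from upstreams,
  # so large JWTs in headers no longer trigger 502 errors.
  proxy-buffer-size: "16k"
```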
"},{"location":"examples/customization/ssl-dh-param/","title":"Custom DH parameters for perfect forward secrecy","text":"
This example aims to demonstrate the deployment of an Ingress-Nginx Controller and use a ConfigMap to configure a custom Diffie-Hellman parameters file to help with \"Perfect Forward Secrecy\".
You have a domain name such as example.com that is configured to route traffic to the Ingress-NGINX controller.
You have the ingress-nginx-controller installed as per docs.
You have a backend application running a gRPC server listening for TCP traffic. If you want, you can use https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go as an example.
You're also responsible for provisioning an SSL certificate for the ingress. So you need to have a valid SSL certificate, deployed as a Kubernetes secret of type tls, in the same namespace as the gRPC application.
"},{"location":"examples/grpc/#step-1-create-a-kubernetes-deployment-for-grpc-app","title":"Step 1: Create a Kubernetes Deployment for gRPC app","text":"
Make sure your gRPC application pod is running and listening for connections. For example, you can try a kubectl command like the one below:
$ kubectl get po -A -o wide | grep go-grpc-greeter-server\n
If you have a gRPC app deployed in your cluster, then skip further notes in this Step 1, and continue from Step 2 below.
As an example gRPC application, we can use this app https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go.
To create a container image for this app, you can use this Dockerfile.
If you use the Dockerfile mentioned above to create an image, then you can use the following example Kubernetes manifest to create a deployment resource that uses that image. If necessary, edit this manifest to suit your needs.
"},{"location":"examples/grpc/#step-2-create-the-kubernetes-service-for-the-grpc-app","title":"Step 2: Create the Kubernetes Service for the gRPC app","text":"
You can use the following example manifest to create a service of type ClusterIP. Edit the name/namespace/label/port to match your deployment/pod.
You can save the above example manifest to a file with name service.go-grpc-greeter-server.yaml and edit it to match your deployment/pod, if required. You can create the service resource with a kubectl command like this:
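For example:

```bash
kubectl apply -f service.go-grpc-greeter-server.yaml
```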
"},{"location":"examples/grpc/#step-3-create-the-kubernetes-ingress-resource-for-the-grpc-app","title":"Step 3: Create the Kubernetes Ingress resource for the gRPC app","text":"
Use the following example manifest of an ingress resource to create an ingress for your gRPC app. If required, edit it to match your app's details like name, namespace, service, secret etc. Make sure you have the required SSL certificate existing in your Kubernetes cluster, in the same namespace where the gRPC app is. The certificate must be available as a kubernetes secret resource of type "kubernetes.io/tls" (https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets), because we are terminating TLS on the ingress.
If you save the above example manifest as a file named ingress.go-grpc-greeter-server.yaml and edit it to match your deployment and service, you can create the ingress like this:
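For example:

```bash
kubectl apply -f ingress.go-grpc-greeter-server.yaml
```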
The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, gRPC traffic will travel unencrypted inside the cluster and arrive \"insecure\").
For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your POD and terminate TLS at the gRPC server itself, add the ingress annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPCS\".
A few more things to note:
We've tagged the ingress with the annotation nginx.ingress.kubernetes.io/backend-protocol: \"GRPC\". This is the magic ingredient that sets up the appropriate nginx configuration to route http/2 traffic to our service.
We're terminating TLS at the ingress and have configured an SSL certificate wildcard.dev.mydomain.com. The ingress matches traffic arriving as https://grpctest.dev.mydomain.com:443 and routes unencrypted messages to the backend Kubernetes service.
"},{"location":"examples/grpc/#step-4-test-the-connection","title":"Step 4: test the connection","text":"
Once we've applied our configuration to Kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility:
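A sketch, assuming the reflection-enabled example server from Step 1 and a DNS record for grpctest.dev.mydomain.com; grpcurl uses TLS by default and can list services via reflection:

```bash
# List the gRPC services exposed by the backend through the ingress.
grpcurl grpctest.dev.mydomain.com:443 list
```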
Watch the logs for the ingress-nginx-controller (increasing verbosity as needed).
Double-check your address and ports.
Set the GODEBUG=http2debug=2 environment variable to get detailed http/2 logging on the client and/or server.
Study RFC 7540 (http/2) https://tools.ietf.org/html/rfc7540.
If you are developing public gRPC endpoints, check out https://proto.stack.build, a protocol buffer / gRPC build service that you can use to make it easier for your users to consume your API.
See also the specific gRPC settings of NGINX: https://nginx.org/en/docs/http/ngx_http_grpc_module.html
"},{"location":"examples/grpc/#notes-on-using-responserequest-streams","title":"Notes on using response/request streams","text":"
grpc_read_timeout and grpc_send_timeout will be set as proxy_read_timeout and proxy_send_timeout when you set backend protocol to GRPC or GRPCS.
If your server only does response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the grpc_read_timeout to accommodate this.
If your service only does request streaming and you expect a stream to be open longer than 60 seconds, you have to change the grpc_send_timeout and the client_body_timeout.
If you do both response and request streaming with an open stream longer than 60 seconds, you have to change all three timeouts: grpc_read_timeout, grpc_send_timeout and client_body_timeout.
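A hedged sketch of raising those timeouts for long-lived streams: the proxy-read-timeout and proxy-send-timeout annotations map onto grpc_read_timeout and grpc_send_timeout when the backend protocol is GRPC or GRPCS, while client_body_timeout is adjusted via the controller ConfigMap's client-body-timeout key (host and service names here are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-streaming
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    # One hour, in seconds, for long-lived response/request streams.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
spec:
  ingressClassName: nginx
  rules:
  - host: grpctest.dev.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: go-grpc-greeter-server
            port:
              number: 80
```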
"},{"location":"examples/openpolicyagent/","title":"OpenPolicyAgent and pathType enforcing","text":"
The Ingress API allows users to specify different pathTypes on an Ingress object.
While pathType Exact and Prefix should allow only a small set of characters, pathType ImplementationSpecific allows any characters, as it may contain regexes, variables and other features that may be specific to the Ingress Controller being used.
This means that the Ingress Admins (the persona who deployed the Ingress Controller) should trust the users allowed to use pathType: ImplementationSpecific, as this may allow arbitrary configuration, and this configuration may end up in the proxy (aka NGINX) configuration.
The example in this repo uses Gatekeeper to block the usage of pathType: ImplementationSpecific, allowing just a specific list of namespaces to use it.
It is recommended that the admin modifies these rules to enforce a specific set of characters when the usage of ImplementationSpecific is allowed, or in ways that best suit their needs.
First, the ConstraintTemplate from template.yaml will define a rule that validates whether the Ingress object is being created in an exempted namespace and, if not, will validate its pathType.
Then, the rule K8sBlockIngressPathType contained in rule.yaml will define the parameters: what kind of object should be verified (Ingress), what the exempted namespaces are, and what kinds of pathType are blocked.
In most clusters today, by default, all resources (e.g. Deployments and ReplicaSets) have permissions to create pods. Kubernetes however provides a more fine-grained authorization policy called Pod Security Policy (PSP).
PSP allows the cluster owner to define the permission of each object, for example creating a pod. If you have PSP enabled on the cluster, and you deploy ingress-nginx, you will need to provide the Deployment with the permissions to create pods.
Before applying any objects, first apply the PSP permissions by running:
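For example, assuming the PSP manifest that ships with this example (the file name is illustrative):

```bash
kubectl apply -f psp.yaml
```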
You will need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, and that you have an ingress controller running in your cluster.
Rewriting can be controlled using the following annotations:
| Name | Description | Values |
|---|---|---|
| nginx.ingress.kubernetes.io/rewrite-target | Target URI where the traffic must be redirected | string |
| nginx.ingress.kubernetes.io/ssl-redirect | Indicates if the location section is only accessible via SSL (defaults to True when the Ingress contains a Certificate) | bool |
| nginx.ingress.kubernetes.io/force-ssl-redirect | Forces the redirection to HTTPS even if the Ingress is not TLS Enabled | bool |
| nginx.ingress.kubernetes.io/app-root | Defines the Application Root that the Controller must redirect to if it's in the / context | string |
| nginx.ingress.kubernetes.io/use-regex | Indicates if the paths defined on an Ingress use regular expressions | bool |
"},{"location":"examples/rewrite/#examples","title":"Examples","text":""},{"location":"examples/rewrite/#rewrite-target","title":"Rewrite Target","text":"
Attention
Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.
Note
Captured groups are saved in numbered placeholders, chronologically, in the form $1, $2 ... $n. These placeholders can be used as parameters in the rewrite-target annotation.
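For reference, a minimal sketch of such an ingress definition (the host and service names are illustrative), matching the rewrites listed below:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    # $2 holds whatever the second capture group (.*) matched.
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - path: /something(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: http-svc
            port:
              number: 80
```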
In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $2, which is then used as a parameter in the rewrite-target annotation.
For example, the ingress definition above will result in the following rewrites:
rewrite.bar.com/something rewrites to rewrite.bar.com/
rewrite.bar.com/something/ rewrites to rewrite.bar.com/
rewrite.bar.com/something/new rewrites to rewrite.bar.com/new
You need a TLS cert and a test HTTP service for this example. You will also need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, and that you have an ingress controller running in your cluster.
"},{"location":"examples/static-ip/#acquiring-an-ip","title":"Acquiring an IP","text":"
Since instances of the ingress-nginx controller actually run on nodes in your cluster, by default nginx Ingresses will only get static IPs if your cloud provider supports static IP assignments to nodes. On GKE/GCE, for example, even though nodes get static IPs, the IPs are not retained across upgrades.
To acquire a static IP for the ingress-nginx-controller, simply put it behind a Service of Type=LoadBalancer.
First, create a loadbalancer Service and wait for it to acquire an IP:
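For example (the manifest name is illustrative; it should contain a Service of type LoadBalancer named ingress-nginx-lb, which the next step references):

```bash
kubectl create -f static-ip-svc.yaml
# Wait until the EXTERNAL-IP column is populated.
kubectl get svc ingress-nginx-lb --watch
```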
Then, update the ingress controller so it adopts the static IP of the Service by passing the --publish-service flag (the example yaml used in the next step already has it set to \"ingress-nginx-lb\").
"},{"location":"examples/static-ip/#retaining-the-ip","title":"Retaining the IP","text":"
You can test retention by deleting the Ingress:
$ kubectl delete ing ingress-nginx\ningress \"ingress-nginx\" deleted\n\n$ kubectl create -f ingress-nginx.yaml\ningress \"ingress-nginx\" created\n\n$ kubectl get ing ingress-nginx\nNAME HOSTS ADDRESS PORTS AGE\ningress-nginx * 104.154.109.191 80, 443 13m\n
Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all Ingresses, because all requests are proxied through the same set of nginx controllers.
"},{"location":"examples/static-ip/#promote-ephemeral-to-static-ip","title":"Promote ephemeral to static IP","text":"
To promote the allocated IP to static, you can update the Service manifest:
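A minimal sketch, reusing the IP from the earlier output; setting spec.loadBalancerIP pins the address (on providers that support it) so it can then be promoted to a reserved/static IP in your cloud provider:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-lb
spec:
  type: LoadBalancer
  # The IP previously allocated to the Service.
  loadBalancerIP: 104.154.109.191
```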
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: nginx-test\nspec:\n tls:\n - hosts:\n - foo.bar.com\n # This assumes tls-secret exists and the SSL\n # certificate contains a CN for foo.bar.com\n secretName: tls-secret\n ingressClassName: nginx\n rules:\n - host: foo.bar.com\n http:\n paths:\n - path: /\n pathType: Prefix\n backend:\n # This assumes http-svc exists and routes to healthy endpoints\n service:\n name: http-svc\n port:\n number: 80\n
The following command instructs the controller to terminate traffic using the provided TLS cert, and forward un-encrypted HTTP traffic to the test HTTP service.
"},{"location":"user-guide/basic-usage/","title":"Basic usage - host based routing","text":"
ingress-nginx can be used for many use cases, inside various cloud providers and supports a lot of configurations. In this section you can find a common usage scenario where a single load balancer powered by ingress-nginx will route traffic to 2 different HTTP backend services based on the host name.
First of all, follow the instructions to install ingress-nginx. Then imagine that you need to expose 2 HTTP services already installed, myServiceA and myServiceB, configured as type: ClusterIP.
Let's say that you want to expose the first at myServiceA.foo.org and the second at myServiceB.foo.org.
If the cluster version is < 1.19, you can create two ingress resources like this:
If the cluster uses Kubernetes version >= 1.19.x, then it's suggested to create 2 ingress resources using the yaml examples shown below. These examples conform to the networking.k8s.io/v1 API.
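A sketch of the two resources, assuming the Services are named myservicea and myserviceb and listen on port 80:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myservicea
spec:
  ingressClassName: nginx
  rules:
  - host: myservicea.foo.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myservicea
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-myserviceb
spec:
  ingressClassName: nginx
  rules:
  - host: myserviceb.foo.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myserviceb
            port:
              number: 80
```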
When you apply this yaml, 2 ingress resources will be created, managed by the ingress-nginx instance. Nginx is configured to automatically discover all ingresses with the kubernetes.io/ingress.class: "nginx" annotation or where ingressClassName: nginx is present. Please note that the ingress resource should be placed inside the same namespace as the backend resource.
On many cloud providers ingress-nginx will also create the corresponding Load Balancer resource. All you have to do is get the external IP and add a DNS A record inside your DNS provider that points myservicea.foo.org and myserviceb.foo.org to the nginx external IP. Get the external IP by running:
kubectl get services -n ingress-nginx\n
To test inside minikube refer to this documentation: Set up Ingress on Minikube with the NGINX Ingress Controller
"},{"location":"user-guide/cli-arguments/","title":"Command line arguments","text":"
The following command line arguments are accepted by the Ingress controller executable.
They are set in the container spec of the ingress-nginx-controller Deployment manifest.
| Argument | Description |
|---|---|
| --annotations-prefix | Prefix of the Ingress annotations specific to the NGINX controller. (default "nginx.ingress.kubernetes.io") |
| --apiserver-host | Address of the Kubernetes API server. Takes the form "protocol://address:port". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted. |
| --bucket-factor | Bucket factor for native histograms. Value must be > 1 for enabling native histograms. (default 0) |
| --certificate-authority | Path to a cert file for the certificate authority. This certificate is used only when the flag --apiserver-host is specified. |
| --configmap | Name of the ConfigMap containing custom global configurations for the controller. |
| --controller-class | Ingress Class Controller value this Ingress satisfies. The class of an Ingress object is set using the field IngressClassName in Kubernetes clusters version v1.19.0 or higher. The .spec.controller value of the IngressClass referenced in an Ingress Object should be the same value specified here to make this object be watched. |
| --deep-inspect | Enables ingress object security deep inspector. (default true) |
| --default-backend-service | Service used to serve HTTP requests not matching any known server name (catch-all). Takes the form "namespace/name". The controller configures NGINX to forward requests to the first port of this Service. |
| --default-server-port | Port to use for exposing the default server (catch-all). (default 8181) |
| --default-ssl-certificate | Secret containing a SSL certificate to be used by the default HTTPS server (catch-all). Takes the form "namespace/name". |
| --enable-annotation-validation | If true, will enable the annotation validation feature. (default true) |
| --disable-catch-all | Disable support for catch-all Ingresses. (default false) |
| --disable-full-test | Disable full test of all merged ingresses at the admission stage and tests the template of the ingress being created or updated (full test of all ingresses is enabled by default). |
| --disable-svc-external-name | Disable support for Services of type ExternalName. (default false) |
| --disable-sync-events | Disables the creation of 'Sync' Event resources, but still logs them |
| --dynamic-configuration-retries | Number of times to retry failed dynamic configuration before failing to sync an ingress. (default 15) |
| --election-id | Election id to use for Ingress status updates. (default "ingress-controller-leader") |
| --election-ttl | Duration a leader election is valid before it's getting re-elected, e.g. 15s, 10m or 1h. (default 30s) |
| --enable-metrics | Enables the collection of NGINX metrics. (default true) |
| --enable-ssl-chain-completion | Autocomplete SSL certificate chains with missing intermediate CA certificates. Certificates uploaded to Kubernetes must have the "Authority Information Access" X.509 v3 extension for this to succeed. (default false) |
| --enable-ssl-passthrough | Enable SSL Passthrough. (default false) |
| --disable-leader-election | Disable Leader Election on Nginx Controller. (default false) |
| --enable-topology-aware-routing | Enable the topology aware routing feature; needs the service object annotation service.kubernetes.io/topology-mode set to auto. (default false) |
| --exclude-socket-metrics | Set of socket request metrics to exclude which won't be exported nor calculated. The possible socket request metrics to exclude are documented in the monitoring guide, e.g. 'nginx_ingress_controller_request_duration_seconds,nginx_ingress_controller_response_size' |
| --health-check-path | URL path of the health check endpoint. Configured inside the NGINX status server. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default "/healthz") |
| --health-check-timeout | Time limit, in seconds, for a probe to health-check-path to succeed. (default 10) |
| --healthz-port | Port to use for the healthz endpoint. (default 10254) |
| --healthz-host | Address to bind the healthz endpoint. |
| --http-port | Port to use for servicing HTTP traffic. (default 80) |
| --https-port | Port to use for servicing HTTPS traffic. (default 443) |
| --ingress-class | Name of the ingress class this controller satisfies. The class of an Ingress object is set using the field IngressClassName in Kubernetes clusters version v1.18.0 or higher or the annotation "kubernetes.io/ingress.class" (deprecated). If this parameter is not set, or set to the default value of "nginx", it will handle ingresses with either an empty or "nginx" class name. |
| --ingress-class-by-name | Define if Ingress Controller should watch for Ingress Class by Name together with Controller Class. (default false) |
| --internal-logger-address | Address to be used when binding the internal syslogger. (default 127.0.0.1:11514) |
| --kubeconfig | Path to a kubeconfig file containing authorization and API server information. |
| --length-buckets | Set of buckets which will be used for prometheus histogram metrics such as RequestLength, ResponseLength. (default [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]) |
| --max-buckets | Maximum number of buckets for native histograms. (default 100) |
| --maxmind-edition-ids | Maxmind edition ids to download GeoLite2 Databases. (default "GeoLite2-City,GeoLite2-ASN") |
| --maxmind-retries-timeout | Maxmind downloading delay between 1st and 2nd attempt; 0s - do not retry to download if something went wrong. (default 0s) |
| --maxmind-retries-count | Number of attempts to download the GeoIP DB. (default 1) |
| --maxmind-license-key | Maxmind license key to download GeoLite2 Databases. See https://blog.maxmind.com/2019/12/significant-changes-to-accessing-and-using-geolite2-databases/ |
| --maxmind-mirror | Maxmind mirror url (example: http://geoip.local/databases) |
| --metrics-per-host | Export metrics per-host. (default true) |
| --metrics-per-undefined-host | Export metrics per-host even if the host is not defined in an ingress. Requires --metrics-per-host to be set to true. (default false) |
| --monitor-max-batch-size | Max batch size of NGINX metrics. (default 10000) |
| --post-shutdown-grace-period | Additional delay in seconds before the controller container exits. (default 10) |
| --profiler-port | Port to use for exposing the ingress controller Go profiler when it is enabled. (default 10245) |
| --profiling | Enable profiling via the web interface host:port/debug/pprof/. (default true) |
| --publish-service | Service fronting the Ingress controller. Takes the form "namespace/name". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies. |
| --publish-status-address | Customized address (or addresses, separated by comma) to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter. |
| --report-node-internal-ip-address | Set the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter. (default false) |
| --report-status-classes | If true, report status classes in metrics (2xx, 3xx, 4xx and 5xx) instead of full status codes. (default false) |
| --ssl-passthrough-proxy-port | Port to use internally for SSL Passthrough. (default 442) |
| --status-port | Port to use for the lua HTTP endpoint configuration. (default 10246) |
| --status-update-interval | Time interval in seconds in which the status should check if an update is required. (default 60) |
| --stream-port | Port to use for the lua TCP/UDP endpoint configuration. (default 10247) |
| --sync-period | Period at which the controller forces the repopulation of its local object stores. Disabled by default. |
| --sync-rate-limit | Define the sync frequency upper limit. (default 0.3) |
| --tcp-services-configmap | Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form "namespace/name:port", where "port" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic. |
| --time-buckets | Set of buckets which will be used for prometheus histogram metrics such as RequestTime, ResponseTime. (default [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10]) |
| --udp-services-configmap | Name of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form "namespace/name:port", where "port" can either be a port name or number. |
| --update-status | Update the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true) |
| --update-status-on-shutdown | Update the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. (default true) |
| --shutdown-grace-period | Seconds to wait after receiving the shutdown signal, before stopping the nginx process. (default 0) |
| --size-buckets | Set of buckets which will be used for prometheus histogram metrics such as BytesSent. (default [10, 100, 1000, 10000, 100000, 1e+06, 1e+07]) |
| -v, --v | Level number for the log level verbosity |
| --validating-webhook | The address to start an admission controller on to validate incoming ingresses. Takes the form ":port". If not provided, no admission controller is started. |
| --validating-webhook-certificate | The path of the validating webhook certificate PEM. |
| --validating-webhook-key | The path of the validating webhook key PEM. |
| --version | Show release information about the Ingress-Nginx Controller and exit. |
| --watch-ingress-without-class | Define if Ingress Controller should also watch for Ingresses without an IngressClass or the annotation specified. (default false) |
| --watch-namespace | Namespace the controller watches for updates to Kubernetes objects. This includes Ingresses, Services and all configuration resources. All namespaces are watched if this parameter is left empty. |
| --watch-namespace-selector | The controller will watch namespaces whose labels match the given selector. This flag only takes effect when --watch-namespace is empty. |
"},{"location":"user-guide/custom-errors/","title":"Custom errors","text":"
When the custom-http-errors option is enabled, the Ingress controller configures NGINX so that it passes several HTTP headers down to its default-backend in case of error:
| Header | Value |
|---|---|
| X-Code | HTTP status code returned by the request |
| X-Format | Value of the Accept header sent by the client |
| X-Original-URI | URI that caused the error |
| X-Namespace | Namespace where the backend Service is located |
| X-Ingress-Name | Name of the Ingress where the backend is defined |
| X-Service-Name | Name of the Service backing the backend |
| X-Service-Port | Port number of the Service backing the backend |
| X-Request-ID | Unique ID that identifies the request - same as for backend service |
A custom error backend can use this information to return the best possible representation of an error page. For example, if the value of the Accept header sent by the client was application/json, a carefully crafted backend could decide to return the error payload as a JSON document instead of HTML.
Important
The custom backend is expected to return the correct HTTP status code instead of 200. NGINX does not change the response from the custom default backend.
An example of such custom backend is available inside the source repository at images/custom-error-pages.
The default backend is a service which handles all URL paths and hosts the Ingress-NGINX controller doesn't understand (i.e., all the requests that are not mapped with an Ingress).
Basically a default backend exposes two URLs:
/healthz that returns 200
/ that returns 404
Example
The sub-directory /images/custom-error-pages provides an additional service for the purpose of customizing the error pages served via the default backend.
"},{"location":"user-guide/exposing-tcp-udp-services/","title":"Exposing TCP and UDP services","text":"
While the Kubernetes Ingress resource only officially supports routing external HTTP(s) traffic to services, ingress-nginx can be configured to receive external TCP/UDP traffic from non-HTTP protocols and route them to internal services using TCP/UDP port mappings that are specified within a ConfigMap.
To support this, the --tcp-services-configmap and --udp-services-configmap flags can be used to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY]
It is also possible to use a number or the name of the port. The last two fields are optional. By adding PROXY in either or both of the last two fields, we can use Proxy Protocol decoding (listen) and/or encoding (proxy_pass) in a TCP service. The first PROXY controls the decoding of the proxy protocol and the second PROXY controls the encoding using the proxy protocol. This allows an incoming connection to be decoded or an outgoing connection to be encoded. It is also possible to arbitrate between two different proxies by turning on both decoding and encoding on a TCP service.
The next example shows how to expose the service example-go, running in the namespace default on port 8080, using the external port 9000:
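A sketch of the corresponding ConfigMap; the name tcp-services and the ingress-nginx namespace are conventional choices and must match whatever --tcp-services-configmap points at:
apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: tcp-services\n  namespace: ingress-nginx\ndata:\n  9000: "default/example-go:8080"\n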
Since 1.9.13, NGINX provides UDP load balancing. The next example shows how to expose the service kube-dns, running in the namespace kube-system on port 53, using the external port 53:
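And a sketch of the UDP equivalent; again, the ConfigMap name and namespace must match the --udp-services-configmap flag:
apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: udp-services\n  namespace: ingress-nginx\ndata:\n  53: "kube-system/kube-dns:53"\n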
FastCGI is a binary protocol for interfacing interactive programs with a web server. [...] (Its) aim is to reduce the overhead related to interfacing between web server and CGI programs, allowing a server to handle more web page requests per unit of time.
\u2014 Wikipedia
The ingress-nginx ingress controller can be used to directly expose FastCGI servers. Enabling FastCGI in your Ingress only requires setting the backend-protocol annotation to FCGI, and with a couple more annotations you can customize the way ingress-nginx handles the communication with your FastCGI server.
PHP applications are a good example of a practical use case. PHP code is not HTML, so a FastCGI server like php-fpm processes an index.php script to produce the response to a request. See a working example below.
This post in a FastCGI feature issue describes a test for the FastCGI feature. The same test is described below.
"},{"location":"user-guide/fcgi-services/#example-objects-to-expose-a-fastcgi-server-pod","title":"Example Objects to expose a FastCGI server pod","text":""},{"location":"user-guide/fcgi-services/#the-fasctcgi-server-pod","title":"The FasctCGI server pod","text":"
The Pod object example below exposes port 9000, which is the conventional FastCGI port.
For this example to work, an HTML response should be received from the FastCGI server being exposed.
An HTTP request should be sent to the FastCGI server pod.
The response should be generated by a PHP script, as that is what we are demonstrating here.
The image used here, php:fpm-alpine, does not ship with a ready-to-use PHP script inside it, so we need to provide the image with a simple PHP script for this example to work.
Use kubectl exec to get into the example-app pod
You will land at the path /var/www/html
Create a simple PHP script there, at the path /var/www/html, called index.php
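A minimal sketch of those steps; the phpinfo() body is only an assumption chosen to produce a visible response:
kubectl exec -it example-app -- /bin/sh\n# now inside the pod, at /var/www/html\necho '<?php phpinfo(); ?>' > /var/www/html/index.php\n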
"},{"location":"user-guide/fcgi-services/#send-a-request-to-the-exposed-fastcgi-server","title":"Send a request to the exposed FastCGI server","text":"
You will have to look at the external IP of the Ingress, or send the HTTP request to the ClusterIP address of the ingress-nginx controller pod.
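For example, a sketch where app.example.com is a hypothetical host and $IP is the external IP or ClusterIP described above:
curl -i -H "Host: app.example.com" http://$IP/index.php\n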
To specify an index file, the fastcgi-index annotation value can optionally be set. In the example below, the value is set to index.php. This annotation corresponds to the NGINX fastcgi_index directive.
To specify NGINX fastcgi_param directives, the fastcgi-params-configmap annotation is used, which in turn must point to a ConfigMap object containing the NGINX fastcgi_param directives as key/values.
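A sketch of both pieces; the ConfigMap name example-cm and the SCRIPT_FILENAME value are illustrative assumptions:
kind: ConfigMap\napiVersion: v1\nmetadata:\n  name: example-cm\ndata:\n  SCRIPT_FILENAME: /var/www/html/index.php\n
And the corresponding annotations on the Ingress:
metadata:\n  annotations:\n    nginx.ingress.kubernetes.io/backend-protocol: "FCGI"\n    nginx.ingress.kubernetes.io/fastcgi-index: "index.php"\n    nginx.ingress.kubernetes.io/fastcgi-params-configmap: "example-cm"\n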
Regular expressions are not supported in the spec.rules.host field. The wildcard character '*' must appear by itself as the first DNS label and matches only a single label. You cannot have a wildcard label by itself (e.g. Host == \"*\").
Note
Please see the FAQ for Validation Of path
The ingress controller supports case insensitive regular expressions in the spec.rules.http.paths.path field. This can be enabled by setting the nginx.ingress.kubernetes.io/use-regex annotation to true (the default is false).
Hint
Kubernetes only accepts expressions that comply with the RE2 engine syntax. It is possible that valid expressions accepted by NGINX cannot be used with ingress-nginx, because the PCRE library (used in NGINX) supports a wider syntax than RE2. See the RE2 Syntax documentation for differences.
See the description of the use-regex annotation for more details.
In NGINX, regular expressions follow a first match policy. In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks.
Please read the warning before using regular expressions in your ingress definitions.
The following request URIs would match the corresponding location blocks (a sketch of an Ingress producing these blocks follows the list):
test.com/foo/bar/1 matches ~* ^/foo/bar/.+ and will go to service 3.
test.com/foo/bar/ matches ~* ^/foo/bar/ and will go to service 2.
test.com/foo/bar matches ~* ^/foo/bar and will go to service 1.
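A sketch of an Ingress that would generate those three location blocks; service1 through service3 and their port are placeholders:
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: test\n  annotations:\n    nginx.ingress.kubernetes.io/use-regex: "true"\nspec:\n  rules:\n  - host: test.com\n    http:\n      paths:\n      - path: /foo/bar\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: service1\n            port:\n              number: 80\n      - path: /foo/bar/\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: service2\n            port:\n              number: 80\n      - path: /foo/bar/.+\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: service3\n            port:\n              number: 80\n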
IMPORTANT NOTES:
If the use-regex OR rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.
The following example describes a case that may inflict unwanted path matching behavior.
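A sketch of that case: two Ingresses on the same host, where one path uses a regular expression (names and services are placeholders):
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: foo-bar-exact\nspec:\n  rules:\n  - host: test.com\n    http:\n      paths:\n      - path: /foo/bar/bar\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: service1\n            port:\n              number: 80\n---\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: foo-bar-regex\n  annotations:\n    nginx.ingress.kubernetes.io/use-regex: "true"\nspec:\n  rules:\n  - host: test.com\n    http:\n      paths:\n      - path: /foo/bar/[A-Z0-9]{3}\n        pathType: ImplementationSpecific\n        backend:\n          service:\n            name: service2\n            port:\n              number: 80\n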
This case is expected and is a result of NGINX's first-match policy for paths that use the regular expression location modifier. For more information about how a path is chosen, please read the following article: \"Understanding Nginx Server and Location Block Selection Algorithms\".
A request to test.com/foo/bar/bar would match the ^/foo/bar/[A-Z0-9]{3} location block instead of the longest EXACT matching path.
"},{"location":"user-guide/k8s-122-migration/","title":"FAQ - Migration to Kubernetes 1.22 and apiVersion networking.k8s.io/v1","text":"
If you are using Ingress objects in your cluster (running Kubernetes older than v1.22), and you plan to upgrade to Kubernetes v1.22, this page is relevant to you.
Please read this official blog on deprecated Ingress API versions
Please read this official documentation on the IngressClass object
"},{"location":"user-guide/k8s-122-migration/#what-is-an-ingressclass-and-why-is-it-important-for-users-of-ingress-nginx-controller-now","title":"What is an IngressClass and why is it important for users of ingress-nginx controller now?","text":"
IngressClass is a Kubernetes resource. See the description below. It's important because until now, a default install of the ingress-nginx controller did not require an IngressClass object. From version 1.0.0 of the ingress-nginx controller, an IngressClass object is required.
On clusters with more than one instance of the ingress-nginx controller, all instances of the controllers must be aware of which Ingress objects they serve. The ingressClassName field of an Ingress is the way to let the controller know about that.
kubectl explain ingressclass\n
KIND: IngressClass\nVERSION: networking.k8s.io/v1\nDESCRIPTION:\n IngressClass represents the class of the Ingress, referenced by the Ingress\n Spec. The `ingressclass.kubernetes.io/is-default-class` annotation can be\n used to indicate that an IngressClass should be considered default. When a\n single IngressClass resource has this annotation set to true, new Ingress\n resources without a class specified will be assigned this default class.\nFIELDS:\n apiVersion <string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n kind <string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n metadata <Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n spec <Object>\n Spec is the desired state of the IngressClass. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status\n
"},{"location":"user-guide/k8s-122-migration/#what-has-caused-this-change-in-behavior","title":"What has caused this change in behavior?","text":"
Until K8s version 1.21, it was possible to create an Ingress resource using deprecated versions of the Ingress API, such as:
extensions/v1beta1
networking.k8s.io/v1beta1
You would get a message about deprecation, but the Ingress resource would still be created.
From K8s version 1.22 onwards, you can only access the Ingress API via the stable, networking.k8s.io/v1 API. The reason is explained in the official blog on deprecated ingress API versions.
If you are already using the ingress-nginx controller and then upgrade to Kubernetes 1.22, there are several scenarios where your existing Ingress objects will not work the way you expect.
Read this FAQ to check which scenario matches your use case.
"},{"location":"user-guide/k8s-122-migration/#what-is-the-ingressclassname-field","title":"What is the ingressClassName field?","text":"
ingressClassName is a field in the spec of an Ingress object.
kubectl explain ingress.spec.ingressClassName\n
KIND: Ingress\nVERSION: networking.k8s.io/v1\nFIELD: ingressClassName <string>\nDESCRIPTION:\n IngressClassName is the name of the IngressClass cluster resource. The\n associated IngressClass defines which controller will implement the\n resource. This replaces the deprecated `kubernetes.io/ingress.class`\n annotation. For backwards compatibility, when that annotation is set, it\n must be given precedence over this field. The controller may emit a warning\n if the field and annotation have different values. Implementations of this\n API should ignore Ingresses without a class specified. An IngressClass\n resource may be marked as default, which can be used to set a default value\n for this field. For more information, refer to the IngressClass\n documentation.\n
The .spec.ingressClassName behavior has precedence over the deprecated kubernetes.io/ingress.class annotation.
"},{"location":"user-guide/k8s-122-migration/#i-have-only-one-ingress-controller-in-my-cluster-what-should-i-do","title":"I have only one ingress controller in my cluster. What should I do?","text":"
If a single instance of the ingress-nginx controller is the sole Ingress controller running in your cluster, you should add the annotation \"ingressclass.kubernetes.io/is-default-class\" in your IngressClass, so any new Ingress objects will have this one as default IngressClass.
When using Helm, you can enable this annotation by setting .controller.ingressClassResource.default: true in your Helm chart installation's values file.
If you have any old Ingress objects remaining without an IngressClass set, you can do one or more of the following to make the ingress-nginx controller aware of the old objects:
You can manually set the .spec.ingressClassName field in the manifest of your own Ingress resources.
You can re-create them after setting the ingressclass.kubernetes.io/is-default-class annotation to true on the IngressClass
Alternatively you can make the ingress-nginx controller watch Ingress objects without the ingressClassName field set by starting your ingress-nginx with the flag --watch-ingress-without-class=true. When using Helm, you can configure your Helm chart installation's values file with .controller.watchIngressWithoutClass: true.
We recommend that you create the IngressClass as shown below:
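A minimal sketch of such an IngressClass; the controller value k8s.io/ingress-nginx is the project default mentioned later in this document:
apiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n  name: nginx\nspec:\n  controller: k8s.io/ingress-nginx\n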
and add the value spec.ingressClassName=nginx in your Ingress objects.
"},{"location":"user-guide/k8s-122-migration/#i-have-many-ingress-objects-in-my-cluster-what-should-i-do","title":"I have many ingress objects in my cluster. What should I do?","text":"
If you have a lot of ingress objects without ingressClass configuration, you can run the ingress controller with the flag --watch-ingress-without-class=true.
"},{"location":"user-guide/k8s-122-migration/#what-is-the-flag-watch-ingress-without-class","title":"What is the flag --watch-ingress-without-class?","text":"
It's a flag that is passed, as an argument, to the nginx-ingress-controller executable. In the configuration, it looks like this:
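A sketch of the relevant part of the controller Deployment; the container name and the other args are elided assumptions:
spec:\n  containers:\n  - name: controller\n    args:\n    - /nginx-ingress-controller\n    - --watch-ingress-without-class=true\n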
"},{"location":"user-guide/k8s-122-migration/#i-have-more-than-one-controller-in-my-cluster-and-im-already-using-the-annotation","title":"I have more than one controller in my cluster, and I'm already using the annotation","text":"
No problem. This should still keep working, but we highly recommend you test! Even though kubernetes.io/ingress.class is deprecated, the ingress-nginx controller still understands that annotation. If you want to follow good practice, you should consider migrating to use IngressClass and .spec.ingressClassName.
"},{"location":"user-guide/k8s-122-migration/#i-have-more-than-one-controller-running-in-my-cluster-and-i-want-to-use-the-new-api","title":"I have more than one controller running in my cluster, and I want to use the new API","text":"
In this scenario, you need to create multiple IngressClasses (see the example above).
Be aware that IngressClass works in a very specific way: you will need to change the .spec.controller value in your IngressClass and configure the controller to expect the exact same value.
Let's see an example, supposing that you have three IngressClasses:
IngressClass ingress-nginx-one, with .spec.controller equal to example.com/ingress-nginx1
IngressClass ingress-nginx-two, with .spec.controller equal to example.com/ingress-nginx2
IngressClass ingress-nginx-three, with .spec.controller equal to example.com/ingress-nginx1
For private use, you can also use a controller name that doesn't contain a /, e.g. ingress-nginx1.
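For example, a sketch of the first of those IngressClasses:
apiVersion: networking.k8s.io/v1\nkind: IngressClass\nmetadata:\n  name: ingress-nginx-one\nspec:\n  controller: example.com/ingress-nginx1\n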
When deploying your ingress controllers, you will have to change the --controller-class field as follows:
Ingress-Nginx A, configured to use controller class name example.com/ingress-nginx1
Ingress-Nginx B, configured to use controller class name example.com/ingress-nginx2
When you create an Ingress object with its ingressClassName set to ingress-nginx-two, only controllers looking for the example.com/ingress-nginx2 controller class pay attention to the new object.
Given that Ingress-Nginx B is set up that way, it will serve that object, whereas Ingress-Nginx A ignores the new Ingress.
Bear in mind that if you start Ingress-Nginx B with the command line argument --watch-ingress-without-class=true, it will serve:
Ingresses without any ingressClassName set
Ingresses where the deprecated annotation (kubernetes.io/ingress.class) matches the value set in the command line argument --ingress-class
Ingresses that refer to any IngressClass that has the same spec.controller as configured in --controller-class
If you start Ingress-Nginx B with the command line argument --watch-ingress-without-class=true and run Ingress-Nginx A with --watch-ingress-without-class=false, this is a supported configuration. If you have two ingress-nginx controllers for the same cluster, both running with --watch-ingress-without-class=true, there is likely to be a conflict.
"},{"location":"user-guide/k8s-122-migration/#why-am-i-seeing-ingress-class-annotation-is-not-equal-to-the-expected-by-ingress-controller-in-my-controller-logs","title":"Why am I seeing \"ingress class annotation is not equal to the expected by Ingress Controller\" in my controller logs?","text":"
It is highly likely that you will also see the name of the ingress resource in the same error message. This error message has been observed when using the deprecated annotation (kubernetes.io/ingress.class) in an Ingress resource manifest. It is recommended to use the .spec.ingressClassName field of the Ingress resource to specify the name of the IngressClass of the Ingress you are defining.
"},{"location":"user-guide/miscellaneous/","title":"Miscellaneous","text":""},{"location":"user-guide/miscellaneous/#source-ip-address","title":"Source IP address","text":"
By default NGINX uses the content of the header X-Forwarded-For as the source of truth to get information about the client IP address. This works without issues in L7 if we configure the setting proxy-real-ip-cidr with the correct information of the IP/network address of the trusted external load balancer.
If the ingress controller is running in AWS we need to use the VPC IPv4 CIDR.
Another option is to enable proxy protocol using use-proxy-protocol: \"true\".
In this mode NGINX does not use the content of the header to get the source IP address of the connection.
Each path in an Ingress is required to have a corresponding path type, and paths that do not include an explicit pathType will fail validation. By default, the NGINX path type is Prefix, so as not to break existing definitions.
If you are using an L4 proxy to forward the traffic to the Ingress NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this, you can use the PROXY protocol for forwarding traffic; it sends the connection details before forwarding the actual TCP connection itself.
Amongst others, ELBs in AWS and HAProxy support the PROXY protocol.
Support for websockets is provided by NGINX out of the box. No special configuration required.
The only requirement to avoid connections being closed is to increase the values of proxy-read-timeout and proxy-send-timeout.
The default value of these settings is 60 seconds.
A more adequate value to support websockets is higher than one hour (3600 seconds).
Important
If the Ingress-Nginx Controller is exposed with a service type=LoadBalancer make sure the protocol between the loadbalancer and NGINX is TCP.
"},{"location":"user-guide/miscellaneous/#optimizing-tls-time-to-first-byte-tttfb","title":"Optimizing TLS Time To First Byte (TTTFB)","text":"
NGINX provides the configuration option ssl_buffer_size to allow the optimization of the TLS record size.
This improves the TLS Time To First Byte (TTTFB). The default value in the Ingress controller is 4k (NGINX default is 16k).
"},{"location":"user-guide/miscellaneous/#retries-in-non-idempotent-methods","title":"Retries in non-idempotent methods","text":"
Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error. The previous behavior can be restored using retry-non-idempotent=true in the configuration ConfigMap.
Ingress rules for TLS require the definition of the field host
"},{"location":"user-guide/miscellaneous/#why-endpoints-and-not-services","title":"Why endpoints and not services","text":"
The Ingress-Nginx Controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.
Two different methods to install and configure Prometheus and Grafana are described in this doc:
* Prometheus and Grafana installation using Pod Annotations. This installs Prometheus and Grafana in the same namespace as NGINX Ingress.
* Prometheus and Grafana installation using Service Monitors. This installs Prometheus and Grafana in two different namespaces. This is the preferred method, and the Helm charts support it by default.
"},{"location":"user-guide/monitoring/#prometheus-and-grafana-installation-using-pod-annotations","title":"Prometheus and Grafana installation using Pod Annotations","text":"
This tutorial will show you how to install Prometheus and Grafana for scraping the metrics of the Ingress-Nginx Controller.
Important
This example uses emptyDir volumes for Prometheus and Grafana. This means once the pod gets terminated you will lose all the data.
"},{"location":"user-guide/monitoring/#before-you-begin","title":"Before You Begin","text":"
The Ingress-Nginx Controller should already be deployed according to the deployment instructions here.
The controller should be configured for exporting metrics. This requires three configurations on the controller, shown in the Helm command below: enabling metrics and setting the Prometheus scrape annotations.
The easiest way to configure the controller for metrics is via helm upgrade. Assuming you have installed the ingress-nginx controller as a helm release named ingress-nginx, then you can simply type the command shown below :
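A sketch of that upgrade, assuming the usual chart values (10254 is the controller's default metrics port):
helm upgrade ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --set controller.metrics.enabled=true --set-string controller.podAnnotations."prometheus\.io/scrape"="true" --set-string controller.podAnnotations."prometheus\.io/port"="10254"\n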
"},{"location":"user-guide/monitoring/#deploy-and-configure-prometheus-server","title":"Deploy and configure Prometheus Server","text":"
Note that the kustomize bases used in this tutorial are stored in the deploy folder of the GitHub repository kubernetes/ingress-nginx.
The Prometheus server must be configured so that it can discover endpoints of services. If a Prometheus server is already running in the cluster and if it is configured in a way that it can find the ingress controller pods, no extra configuration is needed.
If there is no existing Prometheus server running, the rest of this tutorial will guide you through the steps needed to deploy a properly configured Prometheus server.
Running the following command deploys prometheus in Kubernetes:
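A sketch using the kustomize base from the deploy folder mentioned above:
kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/\n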
Open your browser and visit the following URL: http://{node IP address}:{grafana-svc-nodeport} to load the Grafana Dashboard. According to the above example, this URL will be http://10.192.0.3:31086
The username and password are both admin
After the login you can import the Grafana dashboard from official dashboards, by following steps given below :
Navigate to lefthand panel of grafana
Hover on the gearwheel icon for Configuration and click \"Data Sources\"
Click \"Add data source\"
Select \"Prometheus\"
Enter the details (note: I used http://CLUSTER_IP_PROMETHEUS_SVC:9090)
Left menu (hover over +) -> Dashboard
Click \"Import\"
Enter the JSON copied from https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
By default request metrics are labeled with the hostname. When you have a wildcard domain ingress, then there will be no metrics for that ingress (to prevent the metrics from exploding in cardinality). To get metrics in this case you have two options:
Run the ingress controller with --metrics-per-host=false. You will lose labeling by hostname, but still have labeling by ingress.
Run the ingress controller with --metrics-per-undefined-host=true --metrics-per-host=true. You will get labeling by hostname even if the hostname is not explicitly defined on an ingress. Be warned that cardinality could explode due to many hostnames.
"},{"location":"user-guide/monitoring/#grafana-dashboard-using-ingress-resource","title":"Grafana dashboard using ingress resource","text":"
If you want to expose the dashboard for grafana using an ingress resource, then you can :
change the service type of the prometheus-server service and the grafana service to \"ClusterIP\" like this :
kubectl -n ingress-nginx edit svc grafana\n
This will open the currently deployed service grafana in the default editor configured in your shell (vi/nvim/nano/other)
scroll down to line 34 that looks like \"type: NodePort\"
change it to look like \"type: ClusterIP\". Save and exit.
create an ingress resource with backend \"grafana\" and port \"3000\", as sketched below
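A sketch of such an Ingress; the host grafana.example.com and the class name nginx are assumptions:
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n  name: grafana\n  namespace: ingress-nginx\nspec:\n  ingressClassName: nginx\n  rules:\n  - host: grafana.example.com\n    http:\n      paths:\n      - path: /\n        pathType: Prefix\n        backend:\n          service:\n            name: grafana\n            port:\n              number: 3000\n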
Similarly, you can edit the service \"prometheus-server\" and add an ingress resource.
"},{"location":"user-guide/monitoring/#prometheus-and-grafana-installation-using-service-monitors","title":"Prometheus and Grafana installation using Service Monitors","text":"
This document assumes you're using helm and using the kube-prometheus-stack package to install Prometheus and Grafana.
"},{"location":"user-guide/monitoring/#verify-ingress-nginx-controller-is-installed","title":"Verify Ingress-Nginx Controller is installed","text":"
The Ingress-Nginx Controller should already be deployed according to the deployment instructions here.
To check if the Ingress controller is deployed, run:
kubectl get pods -n ingress-nginx\n
The result should look something like:
NAME                                        READY   STATUS    RESTARTS   AGE\ningress-nginx-controller-7c489dc7b7-ccrf6   1/1     Running   0          19h\n
"},{"location":"user-guide/monitoring/#verify-prometheus-is-installed","title":"Verify Prometheus is installed","text":"
To check if Prometheus is already deployed, run the following command:
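One way to check (a sketch; adjust to however Prometheus was installed in your cluster):
kubectl get pods --all-namespaces | grep prometheus\n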
The Ingress NGINX controller needs to be reconfigured for exporting metrics. This requires three additional configurations on the controller, shown in the Helm command below: enabling metrics, enabling the ServiceMonitor, and labeling the ServiceMonitor for your Prometheus release.
Here controller.metrics.serviceMonitor.additionalLabels.release=\"prometheus\" should match the name of the helm release of the kube-prometheus-stack
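A sketch of the corresponding helm upgrade, using the three values this section describes:
helm upgrade ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --set controller.metrics.enabled=true --set controller.metrics.serviceMonitor.enabled=true --set controller.metrics.serviceMonitor.additionalLabels.release="prometheus"\n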
You can validate that the controller has been successfully reconfigured to export metrics by looking at the values of the installed release, like this:
helm get values ingress-nginx --namespace ingress-nginx\n
Since Prometheus is running in a different namespace than ingress-nginx, it would not be able to discover ServiceMonitors in other namespaces when installed. Reconfigure your kube-prometheus-stack Helm installation to set the serviceMonitorSelectorNilUsesHelmValues flag to false. Similarly, by default Prometheus only discovers PodMonitors within its own namespace; disable this by setting podMonitorSelectorNilUsesHelmValues to false.
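To reach the Prometheus UI, port-forward its Service; a sketch, assuming the kube-prometheus-stack release lives in the prometheus namespace (the Service name can differ per release):
kubectl -n prometheus port-forward svc/prometheus-operated 9090:9090\n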
When you run the above command, you should see something like:
Forwarding from 127.0.0.1:9090 -> 9090\nForwarding from [::1]:9090 -> 9090\n
- Open your browser and visit http://localhost:{port-forwarded-port}. According to the above example, it would be http://localhost:9090. "},{"location":"user-guide/monitoring/#connect-and-view-grafana-dashboard","title":"Connect and view Grafana dashboard","text":"
Port forward to Grafana service. Find out the name of the Grafana service by using the following command:
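A sketch; the prometheus namespace and the prometheus-grafana Service name are assumptions that depend on your release:
kubectl -n prometheus get svc | grep grafana\nkubectl -n prometheus port-forward svc/prometheus-grafana 3000:80\n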
When you run the above command, you should see something like:
Forwarding from 127.0.0.1:3000 -> 3000\nForwarding from [::1]:3000 -> 3000\n
- Open your browser and visit http://localhost:{port-forwarded-port}. According to the above example, it would be http://localhost:3000. The default username/password is admin/prom-operator.
- After the login you can import the Grafana dashboard from official dashboards, by following the steps given below:
Navigate to lefthand panel of grafana
Hover on the gearwheel icon for Configuration and click \"Data Sources\"
Click \"Add data source\"
Select \"Prometheus\"
Enter the details (note: I used http://10.102.72.134:9090 which is the CLUSTER-IP for Prometheus service)
Left menu (hover over +) -> Dashboard
Click \"Import\"
Enter the JSON copied from https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
nginx_ingress_controller_request_duration_seconds (Histogram): The request processing time in seconds, measured from the first bytes read from the client to the log write after the last bytes were sent to the client (affected by client speed). NGINX var: request_time
nginx_ingress_controller_response_duration_seconds (Histogram): The time spent receiving the response from the upstream server, in seconds (affected by client speed when the response is bigger than the proxy buffers). Note: can be up to several milliseconds bigger than nginx_ingress_controller_request_duration_seconds because of the different measuring method. NGINX var: upstream_response_time
nginx_ingress_controller_header_duration_seconds (Histogram): The time spent receiving the first header from the upstream server. NGINX var: upstream_header_time
nginx_ingress_controller_connect_duration_seconds (Histogram): The time spent establishing a connection with the upstream server. NGINX var: upstream_connect_time
nginx_ingress_controller_response_size (Histogram): The response length (including request line, header, and request body). NGINX var: bytes_sent
nginx_ingress_controller_request_size (Histogram): The request length (including request line, header, and request body). NGINX var: request_length
nginx_ingress_controller_requests (Counter): The total number of client requests.
nginx_ingress_controller_bytes_sent (Histogram): The number of bytes sent to a client. Deprecated; use nginx_ingress_controller_response_size instead. NGINX var: bytes_sent
# HELP nginx_ingress_controller_bytes_sent The number of bytes sent to a client. DEPRECATED! Use nginx_ingress_controller_response_size\n# TYPE nginx_ingress_controller_bytes_sent histogram\n# HELP nginx_ingress_controller_connect_duration_seconds The time spent on establishing a connection with the upstream server\n# TYPE nginx_ingress_controller_connect_duration_seconds histogram\n# HELP nginx_ingress_controller_header_duration_seconds The time spent on receiving first header from the upstream server\n# TYPE nginx_ingress_controller_header_duration_seconds histogram\n# HELP nginx_ingress_controller_request_duration_seconds The request processing time in milliseconds\n# TYPE nginx_ingress_controller_request_duration_seconds histogram\n# HELP nginx_ingress_controller_request_size The request length (including request line, header, and request body)\n# TYPE nginx_ingress_controller_request_size histogram\n# HELP nginx_ingress_controller_requests The total number of client requests.\n# TYPE nginx_ingress_controller_requests counter\n# HELP nginx_ingress_controller_response_duration_seconds The time spent on receiving the response from the upstream server\n# TYPE nginx_ingress_controller_response_duration_seconds histogram\n# HELP nginx_ingress_controller_response_size The response length (including request line, header, and request body)\n# TYPE nginx_ingress_controller_response_size histogram\n
"},{"location":"user-guide/monitoring/#nginx-process-metrics","title":"Nginx process metrics","text":"
# HELP nginx_ingress_controller_nginx_process_connections current number of client connections with state {active, reading, writing, waiting}\n# TYPE nginx_ingress_controller_nginx_process_connections gauge\n# HELP nginx_ingress_controller_nginx_process_connections_total total number of connections with state {accepted, handled}\n# TYPE nginx_ingress_controller_nginx_process_connections_total counter\n# HELP nginx_ingress_controller_nginx_process_cpu_seconds_total Cpu usage in seconds\n# TYPE nginx_ingress_controller_nginx_process_cpu_seconds_total counter\n# HELP nginx_ingress_controller_nginx_process_num_procs number of processes\n# TYPE nginx_ingress_controller_nginx_process_num_procs gauge\n# HELP nginx_ingress_controller_nginx_process_oldest_start_time_seconds start time in seconds since 1970/01/01\n# TYPE nginx_ingress_controller_nginx_process_oldest_start_time_seconds gauge\n# HELP nginx_ingress_controller_nginx_process_read_bytes_total number of bytes read\n# TYPE nginx_ingress_controller_nginx_process_read_bytes_total counter\n# HELP nginx_ingress_controller_nginx_process_requests_total total number of client requests\n# TYPE nginx_ingress_controller_nginx_process_requests_total counter\n# HELP nginx_ingress_controller_nginx_process_resident_memory_bytes number of bytes of memory in use\n# TYPE nginx_ingress_controller_nginx_process_resident_memory_bytes gauge\n# HELP nginx_ingress_controller_nginx_process_virtual_memory_bytes number of bytes of memory in use\n# TYPE nginx_ingress_controller_nginx_process_virtual_memory_bytes gauge\n# HELP nginx_ingress_controller_nginx_process_write_bytes_total number of bytes written\n# TYPE nginx_ingress_controller_nginx_process_write_bytes_total counter\n
# HELP nginx_ingress_controller_build_info A metric with a constant '1' labeled with information about the build.\n# TYPE nginx_ingress_controller_build_info gauge\n# HELP nginx_ingress_controller_check_success Cumulative number of Ingress controller syntax check operations\n# TYPE nginx_ingress_controller_check_success counter\n# HELP nginx_ingress_controller_config_hash Running configuration hash actually running\n# TYPE nginx_ingress_controller_config_hash gauge\n# HELP nginx_ingress_controller_config_last_reload_successful Whether the last configuration reload attempt was successful\n# TYPE nginx_ingress_controller_config_last_reload_successful gauge\n# HELP nginx_ingress_controller_config_last_reload_successful_timestamp_seconds Timestamp of the last successful configuration reload.\n# TYPE nginx_ingress_controller_config_last_reload_successful_timestamp_seconds gauge\n# HELP nginx_ingress_controller_ssl_certificate_info Hold all labels associated to a certificate\n# TYPE nginx_ingress_controller_ssl_certificate_info gauge\n# HELP nginx_ingress_controller_success Cumulative number of Ingress controller reload operations\n# TYPE nginx_ingress_controller_success counter\n# HELP nginx_ingress_controller_orphan_ingress Gauge reporting status of ingress orphanity, 1 indicates orphaned ingress. 'namespace' is the string used to identify namespace of ingress, 'ingress' for ingress name and 'type' for 'no-service' or 'no-endpoint' of orphanity\n# TYPE nginx_ingress_controller_orphan_ingress gauge\n
# HELP nginx_ingress_controller_admission_config_size The size of the tested configuration\n# TYPE nginx_ingress_controller_admission_config_size gauge\n# HELP nginx_ingress_controller_admission_render_duration The processing duration of ingresses rendering by the admission controller (float seconds)\n# TYPE nginx_ingress_controller_admission_render_duration gauge\n# HELP nginx_ingress_controller_admission_render_ingresses The length of ingresses rendered by the admission controller\n# TYPE nginx_ingress_controller_admission_render_ingresses gauge\n# HELP nginx_ingress_controller_admission_roundtrip_duration The complete duration of the admission controller at the time to process a new event (float seconds)\n# TYPE nginx_ingress_controller_admission_roundtrip_duration gauge\n# HELP nginx_ingress_controller_admission_tested_duration The processing duration of the admission controller tests (float seconds)\n# TYPE nginx_ingress_controller_admission_tested_duration gauge\n# HELP nginx_ingress_controller_admission_tested_ingresses The length of ingresses processed by the admission controller\n# TYPE nginx_ingress_controller_admission_tested_ingresses gauge\n
By default, deploying multiple Ingress controllers (e.g., ingress-nginx & gce) will result in all controllers simultaneously racing to update Ingress status fields in confusing ways.
To fix this problem, use IngressClasses. The kubernetes.io/ingress.class annotation is no longer preferred or suggested, as it may be deprecated in the future; it is better to use the field ingress.spec.ingressClassName. Note, however, that when a user has deployed with scope.enabled, the IngressClass resource field is not used.
If all ingress controllers respect IngressClasses (e.g. multiple instances of ingress-nginx v1.0), you can deploy two Ingress controllers by granting them control over two different IngressClasses, then selecting one of the two IngressClasses with ingressClassName.
First, ensure the --controller-class= and --ingress-class flags are set to something different on each ingress controller. If your additional ingress controller is to be installed in a namespace where one or more ingress-nginx controllers are already installed, then you need to specify a different, unique --election-id for the new instance of the controller.
When running multiple ingress-nginx controllers, it will only process an unset class annotation if one of the controllers uses the default --controller-class value (see IsValid method in internal/ingress/annotations/class/main.go), otherwise the class annotation becomes required.
If --controller-class is set to the default value of k8s.io/ingress-nginx, the controller will monitor Ingresses with no class annotation and Ingresses with annotation class set to nginx. Use a non-default value for --controller-class to ensure that the controller only satisfies the specific class of Ingresses.
"},{"location":"user-guide/multiple-ingress/#using-the-kubernetesioingressclass-annotation-in-deprecation","title":"Using the kubernetes.io/ingress.class annotation (in deprecation)","text":"
If you're running multiple ingress controllers where one or more do not support IngressClasses, you must specify the annotation kubernetes.io/ingress.class: \"nginx\" in all ingresses that you would like ingress-nginx to claim.
This can be done by running the additional controller with a different --ingress-class value (e.g. internal-nginx) and then setting the corresponding kubernetes.io/ingress.class: \"internal-nginx\" annotation on your Ingresses.
To reiterate, setting the annotation to any value which does not match a valid ingress class will force the Ingress-Nginx Controller to ignore your Ingress. If you are only running a single Ingress-Nginx Controller, this can be achieved by setting the annotation to any value except \"nginx\" or an empty string.
Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.
Anytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret.
Warning
Ensure that the certificate order is leaf->intermediate->root, otherwise the controller will not be able to import the certificate, and you'll see this error in the logs W1012 09:15:45.920000 6 backend_ssl.go:46] Error obtaining X.509 certificate: unexpected error creating SSL Cert: certificate and private key does not have a matching public key: tls: private key does not match public key
You can generate a self-signed certificate and private key with:
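A sketch, assuming ${KEY_FILE}, ${CERT_FILE}, ${HOST} and ${CERT_NAME} are set in your shell:
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj "/CN=${HOST}/O=${HOST}" -addext "subjectAltName = DNS:${HOST}"\nkubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}\n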
NGINX provides the option to configure a server as a catch-all with server_name for requests that do not match any of the configured server names. This configuration works out-of-the-box for HTTP traffic. For HTTPS, a certificate is naturally required.
For this reason the Ingress controller provides the flag --default-ssl-certificate. The secret referred to by this flag contains the default certificate to be used when accessing the catch-all server. If this flag is not provided NGINX will use a self-signed certificate.
For instance, if you have a TLS secret foo-tls in the default namespace, add --default-ssl-certificate=default/foo-tls in the nginx-controller deployment.
If the tls: section is not set, NGINX will provide the default certificate but will not force HTTPS redirect.
On the other hand, if the tls: section is set - even without specifying a secretName option - NGINX will force HTTPS redirect.
To force redirects for Ingresses that do not specify a TLS-block at all, take a look at force-ssl-redirect in ConfigMap.
The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by default. This is required to enable passthrough backends in Ingress objects.
Warning
This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.
SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation, which requires compatible clients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back and forth between the backend and the client.
If there is no hostname matching the requested host name, the request is handed over to NGINX on the configured passthrough proxy port (default: 442), which proxies the request to the default backend.
Note
Unlike HTTP backends, traffic to Passthrough backends is sent to the clusterIP of the backing Service instead of individual Endpoints.
"},{"location":"user-guide/tls/#http-strict-transport-security","title":"HTTP Strict Transport Security","text":"
HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.
HSTS is enabled by default.
To disable this behavior use hsts: \"false\" in the configuration ConfigMap.
"},{"location":"user-guide/tls/#server-side-https-enforcement-through-redirect","title":"Server-side HTTPS enforcement through redirect","text":"
By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.
This can be disabled globally using ssl-redirect: \"false\" in the NGINX config map, or per-Ingress with the nginx.ingress.kubernetes.io/ssl-redirect: \"false\" annotation in the particular resource.
Tip
When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: \"true\" annotation in the particular resource.
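For example, the annotation fragment on such an Ingress:
metadata:\n  annotations:\n    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"\n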
"},{"location":"user-guide/tls/#automated-certificate-management-with-cert-manager","title":"Automated Certificate Management with cert-manager","text":"
cert-manager automatically requests missing or expired certificates from a range of supported issuers (including Let's Encrypt) by monitoring ingress resources.
To set up cert-manager you should take a look at this full example.
To enable it for an Ingress resource you have to deploy cert-manager, configure a certificate issuer, and update the manifest:
apiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: ingress-demo\n annotations:\n cert-manager.io/issuer: \"letsencrypt-staging\" # Replace this with a production issuer once you've tested it\n [..]\nspec:\n tls:\n - hosts:\n - ingress-demo.example.com\n secretName: ingress-demo-tls\n [...]\n
"},{"location":"user-guide/tls/#default-tls-version-and-ciphers","title":"Default TLS Version and Ciphers","text":"
To provide the most secure baseline configuration possible,
ingress-nginx defaults to using TLS 1.2 and 1.3 only, with a secure set of TLS ciphers.
The default configuration, though secure, does not support some older browsers and operating systems.
For instance, TLS 1.1+ is only enabled by default from Android 5.0 on. At the time of writing, May 2018, approximately 15% of Android devices are not compatible with ingress-nginx's default configuration.
To change this default behavior, use a ConfigMap.
A sample ConfigMap fragment to allow these older clients to connect could look something like the following (generated using the Mozilla SSL Configuration Generator):
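A sketch of such a fragment; the ConfigMap name is an assumption, and the actual cipher list is elided here and should be taken from the generator:
kind: ConfigMap\napiVersion: v1\nmetadata:\n  name: nginx-configuration\n  namespace: ingress-nginx\ndata:\n  ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2 TLSv1.3"\n  ssl-ciphers: "<cipher list from the Mozilla SSL Configuration Generator>"\n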
ConfigMap: using a Configmap to set global configurations in NGINX.
Annotations: use this if you want a specific configuration for a particular Ingress rule.
Custom template: when more specific settings are required, like open_file_cache, adjusting listen options such as rcvbuf, or when it is not possible to change the configuration through the ConfigMap.
"},{"location":"user-guide/nginx-configuration/annotations-risk/","title":"Annotations Scope and Risk","text":"Group Annotation Risk Scope Aliases server-alias High ingress Allowlist allowlist-source-range Medium location BackendProtocol backend-protocol Low location BasicDigestAuth auth-realm Medium location BasicDigestAuth auth-secret Medium location BasicDigestAuth auth-secret-type Low location BasicDigestAuth auth-type Low location Canary canary Low ingress Canary canary-by-cookie Medium ingress Canary canary-by-header Medium ingress Canary canary-by-header-pattern Medium ingress Canary canary-by-header-value Medium ingress Canary canary-weight Low ingress Canary canary-weight-total Low ingress CertificateAuth auth-tls-error-page High location CertificateAuth auth-tls-match-cn High location CertificateAuth auth-tls-pass-certificate-to-upstream Low location CertificateAuth auth-tls-secret Medium location CertificateAuth auth-tls-verify-client Medium location CertificateAuth auth-tls-verify-depth Low location ClientBodyBufferSize client-body-buffer-size Low location ConfigurationSnippet configuration-snippet Critical location Connection connection-proxy-header Low location CorsConfig cors-allow-credentials Low ingress CorsConfig cors-allow-headers Medium ingress CorsConfig cors-allow-methods Medium ingress CorsConfig cors-allow-origin Medium ingress CorsConfig cors-expose-headers Medium ingress CorsConfig cors-max-age Low ingress CorsConfig enable-cors Low ingress CustomHTTPErrors custom-http-errors Low location CustomHeaders custom-headers Medium location DefaultBackend default-backend Low location Denylist denylist-source-range Medium location DisableProxyInterceptErrors disable-proxy-intercept-errors Low location EnableGlobalAuth enable-global-auth Low location ExternalAuth auth-always-set-cookie Low location ExternalAuth auth-cache-duration Medium location ExternalAuth auth-cache-key Medium location ExternalAuth auth-keepalive Low location ExternalAuth auth-keepalive-requests Low location ExternalAuth auth-keepalive-share-vars Low location ExternalAuth auth-keepalive-timeout Low location ExternalAuth auth-method Low location ExternalAuth auth-proxy-set-headers Medium location ExternalAuth auth-request-redirect Medium location ExternalAuth auth-response-headers Medium location ExternalAuth auth-signin High location ExternalAuth auth-signin-redirect-param Medium location ExternalAuth auth-snippet Critical location ExternalAuth auth-url High location FastCGI fastcgi-index Medium location FastCGI fastcgi-params-configmap Medium location HTTP2PushPreload http2-push-preload Low location LoadBalancing load-balance Low location Logs enable-access-log Low location Logs enable-rewrite-log Low location Mirror mirror-host High ingress Mirror mirror-request-body Low ingress Mirror mirror-target High ingress ModSecurity enable-modsecurity Low ingress ModSecurity enable-owasp-core-rules Low ingress ModSecurity modsecurity-snippet Critical ingress ModSecurity modsecurity-transaction-id High ingress Opentelemetry enable-opentelemetry Low location Opentelemetry opentelemetry-operation-name Medium location Opentelemetry opentelemetry-trust-incoming-span Low location Proxy proxy-body-size Medium location Proxy proxy-buffer-size Low location Proxy proxy-buffering Low location Proxy proxy-buffers-number Low location Proxy proxy-connect-timeout Low location Proxy proxy-cookie-domain Medium location Proxy proxy-cookie-path Medium location Proxy proxy-http-version Low location Proxy proxy-max-temp-file-size 
Low location Proxy proxy-next-upstream Medium location Proxy proxy-next-upstream-timeout Low location Proxy proxy-next-upstream-tries Low location Proxy proxy-read-timeout Low location Proxy proxy-redirect-from Medium location Proxy proxy-redirect-to Medium location Proxy proxy-request-buffering Low location Proxy proxy-send-timeout Low location ProxySSL proxy-ssl-ciphers Medium ingress ProxySSL proxy-ssl-name High ingress ProxySSL proxy-ssl-protocols Low ingress ProxySSL proxy-ssl-secret Medium ingress ProxySSL proxy-ssl-server-name Low ingress ProxySSL proxy-ssl-verify Low ingress ProxySSL proxy-ssl-verify-depth Low ingress RateLimit limit-allowlist Low location RateLimit limit-burst-multiplier Low location RateLimit limit-connections Low location RateLimit limit-rate Low location RateLimit limit-rate-after Low location RateLimit limit-rpm Low location RateLimit limit-rps Low location Redirect from-to-www-redirect Low location Redirect permanent-redirect Medium location Redirect permanent-redirect-code Low location Redirect temporal-redirect Medium location Redirect temporal-redirect-code Low location Rewrite app-root Medium location Rewrite force-ssl-redirect Medium location Rewrite preserve-trailing-slash Medium location Rewrite rewrite-target Medium ingress Rewrite ssl-redirect Low location Rewrite use-regex Low location SSLCipher ssl-ciphers Low ingress SSLCipher ssl-prefer-server-ciphers Low ingress SSLPassthrough ssl-passthrough Low ingress Satisfy satisfy Low location ServerSnippet server-snippet Critical ingress ServiceUpstream service-upstream Low ingress SessionAffinity affinity Low ingress SessionAffinity affinity-canary-behavior Low ingress SessionAffinity affinity-mode Medium ingress SessionAffinity session-cookie-change-on-failure Low ingress SessionAffinity session-cookie-conditional-samesite-none Low ingress SessionAffinity session-cookie-domain Medium ingress SessionAffinity session-cookie-expires Medium ingress SessionAffinity session-cookie-max-age Medium ingress SessionAffinity session-cookie-name Medium ingress SessionAffinity session-cookie-path Medium ingress SessionAffinity session-cookie-samesite Low ingress SessionAffinity session-cookie-secure Low ingress StreamSnippet stream-snippet Critical ingress UpstreamHashBy upstream-hash-by High location UpstreamHashBy upstream-hash-by-subset Low location UpstreamHashBy upstream-hash-by-subset-size Low location UpstreamVhost upstream-vhost Low location UsePortInRedirects use-port-in-redirects Low location XForwardedPrefix x-forwarded-prefix Medium location"},{"location":"user-guide/nginx-configuration/annotations/","title":"Annotations","text":"
You can add these Kubernetes annotations to specific Ingress objects to customize their behavior.
Tip
Annotation keys and values can only be strings. Other types, such as boolean or numeric values, must be quoted, i.e. \"true\", \"false\", \"100\".
Note
The annotation prefix can be changed using the --annotations-prefix command line argument, but the default is nginx.ingress.kubernetes.io, as described in the table below.
Name type nginx.ingress.kubernetes.io/app-root string nginx.ingress.kubernetes.io/affinity cookie nginx.ingress.kubernetes.io/affinity-mode \"balanced\" or \"persistent\" nginx.ingress.kubernetes.io/affinity-canary-behavior \"sticky\" or \"legacy\" nginx.ingress.kubernetes.io/auth-realm string nginx.ingress.kubernetes.io/auth-secret string nginx.ingress.kubernetes.io/auth-secret-type string nginx.ingress.kubernetes.io/auth-type \"basic\" or \"digest\" nginx.ingress.kubernetes.io/auth-tls-secret string nginx.ingress.kubernetes.io/auth-tls-verify-depth number nginx.ingress.kubernetes.io/auth-tls-verify-client string nginx.ingress.kubernetes.io/auth-tls-error-page string nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream \"true\" or \"false\" nginx.ingress.kubernetes.io/auth-tls-match-cn string nginx.ingress.kubernetes.io/auth-url string nginx.ingress.kubernetes.io/auth-cache-key string nginx.ingress.kubernetes.io/auth-cache-duration string nginx.ingress.kubernetes.io/auth-keepalive number nginx.ingress.kubernetes.io/auth-keepalive-share-vars \"true\" or \"false\" nginx.ingress.kubernetes.io/auth-keepalive-requests number nginx.ingress.kubernetes.io/auth-keepalive-timeout number nginx.ingress.kubernetes.io/auth-proxy-set-headers string nginx.ingress.kubernetes.io/auth-snippet string nginx.ingress.kubernetes.io/enable-global-auth \"true\" or \"false\" nginx.ingress.kubernetes.io/backend-protocol string nginx.ingress.kubernetes.io/canary \"true\" or \"false\" nginx.ingress.kubernetes.io/canary-by-header string nginx.ingress.kubernetes.io/canary-by-header-value string nginx.ingress.kubernetes.io/canary-by-header-pattern string nginx.ingress.kubernetes.io/canary-by-cookie string nginx.ingress.kubernetes.io/canary-weight number nginx.ingress.kubernetes.io/canary-weight-total number nginx.ingress.kubernetes.io/client-body-buffer-size string nginx.ingress.kubernetes.io/configuration-snippet string nginx.ingress.kubernetes.io/custom-http-errors []int nginx.ingress.kubernetes.io/custom-headers string nginx.ingress.kubernetes.io/default-backend string nginx.ingress.kubernetes.io/enable-cors \"true\" or \"false\" nginx.ingress.kubernetes.io/cors-allow-origin string nginx.ingress.kubernetes.io/cors-allow-methods string nginx.ingress.kubernetes.io/cors-allow-headers string nginx.ingress.kubernetes.io/cors-expose-headers string nginx.ingress.kubernetes.io/cors-allow-credentials \"true\" or \"false\" nginx.ingress.kubernetes.io/cors-max-age number nginx.ingress.kubernetes.io/force-ssl-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/from-to-www-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/http2-push-preload \"true\" or \"false\" nginx.ingress.kubernetes.io/limit-connections number nginx.ingress.kubernetes.io/limit-rps number nginx.ingress.kubernetes.io/permanent-redirect string nginx.ingress.kubernetes.io/permanent-redirect-code number nginx.ingress.kubernetes.io/temporal-redirect string nginx.ingress.kubernetes.io/temporal-redirect-code number nginx.ingress.kubernetes.io/preserve-trailing-slash \"true\" or \"false\" nginx.ingress.kubernetes.io/proxy-body-size string nginx.ingress.kubernetes.io/proxy-cookie-domain string nginx.ingress.kubernetes.io/proxy-cookie-path string nginx.ingress.kubernetes.io/proxy-connect-timeout number nginx.ingress.kubernetes.io/proxy-send-timeout number nginx.ingress.kubernetes.io/proxy-read-timeout number nginx.ingress.kubernetes.io/proxy-next-upstream string nginx.ingress.kubernetes.io/proxy-next-upstream-timeout number 
nginx.ingress.kubernetes.io/proxy-next-upstream-tries number nginx.ingress.kubernetes.io/proxy-request-buffering string nginx.ingress.kubernetes.io/proxy-redirect-from string nginx.ingress.kubernetes.io/proxy-redirect-to string nginx.ingress.kubernetes.io/proxy-http-version \"1.0\" or \"1.1\" nginx.ingress.kubernetes.io/proxy-ssl-secret string nginx.ingress.kubernetes.io/proxy-ssl-ciphers string nginx.ingress.kubernetes.io/proxy-ssl-name string nginx.ingress.kubernetes.io/proxy-ssl-protocols string nginx.ingress.kubernetes.io/proxy-ssl-verify string nginx.ingress.kubernetes.io/proxy-ssl-verify-depth number nginx.ingress.kubernetes.io/proxy-ssl-server-name string nginx.ingress.kubernetes.io/enable-rewrite-log \"true\" or \"false\" nginx.ingress.kubernetes.io/rewrite-target URI nginx.ingress.kubernetes.io/satisfy string nginx.ingress.kubernetes.io/server-alias string nginx.ingress.kubernetes.io/server-snippet string nginx.ingress.kubernetes.io/service-upstream \"true\" or \"false\" nginx.ingress.kubernetes.io/session-cookie-change-on-failure \"true\" or \"false\" nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none \"true\" or \"false\" nginx.ingress.kubernetes.io/session-cookie-domain string nginx.ingress.kubernetes.io/session-cookie-expires string nginx.ingress.kubernetes.io/session-cookie-max-age string nginx.ingress.kubernetes.io/session-cookie-name string nginx.ingress.kubernetes.io/session-cookie-path string nginx.ingress.kubernetes.io/session-cookie-samesite string nginx.ingress.kubernetes.io/session-cookie-secure string nginx.ingress.kubernetes.io/ssl-redirect \"true\" or \"false\" nginx.ingress.kubernetes.io/ssl-passthrough \"true\" or \"false\" nginx.ingress.kubernetes.io/stream-snippet string nginx.ingress.kubernetes.io/upstream-hash-by string nginx.ingress.kubernetes.io/x-forwarded-prefix string nginx.ingress.kubernetes.io/load-balance string nginx.ingress.kubernetes.io/upstream-vhost string nginx.ingress.kubernetes.io/denylist-source-range CIDR nginx.ingress.kubernetes.io/whitelist-source-range CIDR nginx.ingress.kubernetes.io/proxy-buffering string nginx.ingress.kubernetes.io/proxy-buffers-number number nginx.ingress.kubernetes.io/proxy-buffer-size string nginx.ingress.kubernetes.io/proxy-max-temp-file-size string nginx.ingress.kubernetes.io/ssl-ciphers string nginx.ingress.kubernetes.io/ssl-prefer-server-ciphers \"true\" or \"false\" nginx.ingress.kubernetes.io/connection-proxy-header string nginx.ingress.kubernetes.io/enable-access-log \"true\" or \"false\" nginx.ingress.kubernetes.io/enable-opentelemetry \"true\" or \"false\" nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span \"true\" or \"false\" nginx.ingress.kubernetes.io/use-regex bool nginx.ingress.kubernetes.io/enable-modsecurity bool nginx.ingress.kubernetes.io/enable-owasp-core-rules bool nginx.ingress.kubernetes.io/modsecurity-transaction-id string nginx.ingress.kubernetes.io/modsecurity-snippet string nginx.ingress.kubernetes.io/mirror-request-body string nginx.ingress.kubernetes.io/mirror-target string nginx.ingress.kubernetes.io/mirror-host string"},{"location":"user-guide/nginx-configuration/annotations/#canary","title":"Canary","text":"
In some cases, you may want to \"canary\" a new set of changes by sending a small number of requests to a different service than the production service. The canary annotation enables the Ingress spec to act as an alternative service for requests to route to depending on the rules applied. The following annotations to configure canary can be enabled after nginx.ingress.kubernetes.io/canary: \"true\" is set:
nginx.ingress.kubernetes.io/canary-by-header: The header to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to always, it will be routed to the canary. When the header is set to never, it will never be routed to the canary. For any other value, the header will be ignored and the request compared against the other canary rules by precedence.
nginx.ingress.kubernetes.io/canary-by-header-value: The header value to match for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to this value, it will be routed to the canary. For any other header value, the header will be ignored and the request compared against the other canary rules by precedence. This annotation has to be used together with nginx.ingress.kubernetes.io/canary-by-header. The annotation is an extension of the nginx.ingress.kubernetes.io/canary-by-header to allow customizing the header value instead of using hardcoded values. It doesn't have any effect if the nginx.ingress.kubernetes.io/canary-by-header annotation is not defined.
nginx.ingress.kubernetes.io/canary-by-header-pattern: This works the same way as canary-by-header-value except it does PCRE Regex matching. Note that when canary-by-header-value is set this annotation will be ignored. When the given Regex causes error during request processing, the request will be considered as not matching.
nginx.ingress.kubernetes.io/canary-by-cookie: The cookie to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the cookie value is set to always, it will be routed to the canary. When the cookie is set to never, it will never be routed to the canary. For any other value, the cookie will be ignored and the request compared against the other canary rules by precedence.
nginx.ingress.kubernetes.io/canary-weight: The integer-based (0 - <weight-total>) percent of random requests that should be routed to the service specified in the canary Ingress. A weight of 0 implies that no requests will be sent to the service in the Canary ingress by this canary rule. A weight of <weight-total> implies all requests will be sent to the alternative service specified in the Ingress. <weight-total> defaults to 100, and can be increased via nginx.ingress.kubernetes.io/canary-weight-total.
nginx.ingress.kubernetes.io/canary-weight-total: The total weight of traffic. If unspecified, it defaults to 100.
Canary rules are evaluated in order of precedence. Precedence is as follows: canary-by-header -> canary-by-cookie -> canary-weight
Note that when you mark an ingress as canary, all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except nginx.ingress.kubernetes.io/load-balance, nginx.ingress.kubernetes.io/upstream-hash-by, and annotations related to session affinity. If you want to restore the original behavior of canaries, when session affinity was ignored, set the nginx.ingress.kubernetes.io/affinity-canary-behavior annotation to legacy on the canary ingress definition.
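For example, the following canary Ingress routes requests carrying the header X-Canary: always to the canary service, plus 10% of the remaining traffic (a minimal sketch; the names echo-canary, echo.example.com and echo-canary-svc are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  ingressClassName: nginx
  rules:
  - host: echo.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo-canary-svc
            port:
              number: 80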
Known Limitations
Currently a maximum of one canary ingress can be applied per Ingress rule.
In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. Set the annotation nginx.ingress.kubernetes.io/rewrite-target to the path expected by the service.
If the Application Root is exposed in a different path and needs to be redirected, set the annotation nginx.ingress.kubernetes.io/app-root to redirect requests for /.
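A minimal rewrite sketch (host and service names are illustrative): requests to /app/<anything> are rewritten to /<anything> before being proxied, using a regex capture group:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /app(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: app-svc
            port:
              number: 80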
The annotation nginx.ingress.kubernetes.io/affinity enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server. The only affinity type available for NGINX is cookie.
The annotation nginx.ingress.kubernetes.io/affinity-mode defines the stickiness of a session. Setting this to balanced (default) will redistribute some sessions if a deployment gets scaled up, therefore rebalancing the load on the servers. Setting this to persistent will not rebalance sessions to new servers, therefore providing maximum stickiness.
The annotation nginx.ingress.kubernetes.io/affinity-canary-behavior defines the behavior of canaries when session affinity is enabled. Setting this to sticky (default) will ensure that users that were served by canaries, will continue to be served by canaries. Setting this to legacy will restore original canary behavior, when session affinity was ignored.
Attention
If more than one Ingress is defined for a host and at least one Ingress uses nginx.ingress.kubernetes.io/affinity: cookie, then only paths on the Ingress using nginx.ingress.kubernetes.io/affinity will use session cookie affinity. All paths defined on other Ingresses for the host will be load balanced through the random selection of a backend server.
If you use the cookie affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation nginx.ingress.kubernetes.io/session-cookie-name. The default is to create a cookie named 'INGRESSCOOKIE'.
The NGINX annotation nginx.ingress.kubernetes.io/session-cookie-path defines the path that will be set on the cookie. This is optional unless the annotation nginx.ingress.kubernetes.io/use-regex is set to true; session cookie paths do not support regex.
Use nginx.ingress.kubernetes.io/session-cookie-domain to set the Domain attribute of the sticky cookie.
Use nginx.ingress.kubernetes.io/session-cookie-samesite to apply a SameSite attribute to the sticky cookie. Browser accepted values are None, Lax, and Strict. Some browsers reject cookies with SameSite=None, including those created before the SameSite=None specification (e.g. Chrome 5X). Other browsers mistakenly treat SameSite=None cookies as SameSite=Strict (e.g. Safari running on OSX 14). To omit SameSite=None from browsers with these incompatibilities, add the annotation nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none: \"true\".
Use nginx.ingress.kubernetes.io/session-cookie-expires to control when the cookie expires; its value is the number of seconds until the cookie expires.
Use nginx.ingress.kubernetes.io/session-cookie-path to control the cookie path when use-regex is set to true.
Use nginx.ingress.kubernetes.io/session-cookie-change-on-failure to control whether the cookie is changed after a request failure.
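Putting the cookie annotations together, a sticky-session configuration might look like this (a sketch; the cookie name and max-age are illustrative):
annotations:
  nginx.ingress.kubernetes.io/affinity: "cookie"
  nginx.ingress.kubernetes.io/affinity-mode: "persistent"
  nginx.ingress.kubernetes.io/session-cookie-name: "route"
  nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"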
It is possible to add authentication by adding additional annotations in the Ingress rule. The source of the authentication is a secret that contains usernames and passwords.
nginx.ingress.kubernetes.io/auth-secret: The name of the Secret that contains the usernames and passwords which are granted access to the paths defined in the Ingress rules. This annotation also accepts the alternative form "namespace/secretName", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.
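A minimal sketch of basic authentication, assuming a Secret named basic-auth created with htpasswd (the secret name and realm text are illustrative):
annotations:
  nginx.ingress.kubernetes.io/auth-type: basic
  nginx.ingress.kubernetes.io/auth-secret: basic-auth
  nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"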
NGINX supports load balancing by client-server mapping based on consistent hashing for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The ketama consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes.
There is a special mode of upstream hashing called subset. In this mode, upstream servers are grouped into subsets, and stickiness works by mapping keys to a subset instead of to individual upstream servers. A specific server is then chosen uniformly at random from the selected sticky subset. This provides a balance between stickiness and load distribution.
To enable consistent hashing for a backend:
nginx.ingress.kubernetes.io/upstream-hash-by: the nginx variable, text value or any combination thereof to use for consistent hashing. For example: nginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri\" or nginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri$host\" or nginx.ingress.kubernetes.io/upstream-hash-by: \"${request_uri}-text-value\" to consistently hash upstream requests by the current request URI.
\"subset\" hashing can be enabled setting nginx.ingress.kubernetes.io/upstream-hash-by-subset: \"true\". This maps requests to subset of nodes instead of a single one. nginx.ingress.kubernetes.io/upstream-hash-by-subset-size determines the size of each subset (default 3).
This is similar to load-balance in ConfigMap, but configures load balancing algorithm per ingress.
Note that nginx.ingress.kubernetes.io/upstream-hash-by takes preference over this. If neither this annotation nor nginx.ingress.kubernetes.io/upstream-hash-by is set, the globally configured load balancing algorithm is used as a fallback.
This configuration setting allows you to control the value for host in the following statement: proxy_set_header Host $host, which forms part of the location block. This is useful if you need to call the upstream server by something other than $host.
It is possible to enable Client Certificate Authentication using additional annotations in Ingress Rule.
Client Certificate Authentication is applied per host and it is not possible to specify rules that differ for individual paths.
To enable, add the annotation nginx.ingress.kubernetes.io/auth-tls-secret: namespace/secretName. This secret must have a key named ca.crt containing the full Certificate Authority chain that is enabled to authenticate against this Ingress.
You can further customize client certificate authentication and behavior with these annotations:
nginx.ingress.kubernetes.io/auth-tls-verify-depth: The validation depth between the provided client certificate and the Certification Authority chain. (default: 1)
nginx.ingress.kubernetes.io/auth-tls-verify-client: Enables verification of client certificates. Possible values are:
on: Request a client certificate that must be signed by a certificate that is included in the secret key ca.crt of the secret specified by nginx.ingress.kubernetes.io/auth-tls-secret: namespace/secretName. Failed certificate verification will result in a status code 400 (Bad Request) (default)
off: Don't request client certificates and don't do client certificate verification.
optional: Do optional client certificate validation against the CAs from auth-tls-secret. The request fails with status code 400 (Bad Request) when a certificate is provided that is not signed by the CA. When no or an otherwise invalid certificate is provided, the request does not fail, but instead the verification result is sent to the upstream service.
optional_no_ca: Do optional client certificate validation, but do not fail the request when the client certificate is not signed by the CAs from auth-tls-secret. Certificate verification result is sent to the upstream service.
nginx.ingress.kubernetes.io/auth-tls-error-page: The URL/page that the user should be redirected to in case of a certificate authentication error.
nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: Indicates if the received certificates should be passed or not to the upstream server in the header ssl-client-cert. Possible values are \"true\" or \"false\" (default).
nginx.ingress.kubernetes.io/auth-tls-match-cn: Adds a sanity check for the CN of the client certificate that is sent over using a string / regex starting with \"CN=\", example: \"CN=myvalidclient\". If the certificate CN sent during mTLS does not match your string / regex it will fail with status code 403. Another way of using this is by adding multiple options in your regex, example: \"CN=(option1|option2|myvalidclient)\". In this case, as long as one of the options in the brackets matches the certificate CN then you will receive a 200 status code.
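Combined, a client certificate authentication setup might look like this (a sketch; the namespace/secret name is illustrative):
annotations:
  nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
  nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
  nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
  nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"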
The following headers are sent to the upstream service according to the auth-tls-* annotations:
ssl-client-issuer-dn: The issuer information of the client certificate. Example: \"CN=My CA\"
ssl-client-subject-dn: The subject information of the client certificate. Example: \"CN=My Client\"
ssl-client-verify: The result of the client verification. Possible values: "SUCCESS", "FAILED: <description>"
ssl-client-cert: The full client certificate in PEM format. Will only be sent when nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream is set to \"true\". Example: -----BEGIN%20CERTIFICATE-----%0A...---END%20CERTIFICATE-----%0A
Example
Please check the client-certs example.
Attention
TLS with Client Authentication is not possible in Cloudflare and might result in unexpected behavior.
Cloudflare only allows Authenticated Origin Pulls and requires using their own certificate: https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/
Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial: https://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls
It is possible to authenticate to a proxied HTTPS backend with a client certificate using additional annotations in the Ingress rule.
nginx.ingress.kubernetes.io/proxy-ssl-secret: secretName: Specifies a Secret with the certificate tls.crt, key tls.key in PEM format used for authentication to a proxied HTTPS server. It should also contain trusted CA certificates ca.crt in PEM format used to verify the certificate of the proxied HTTPS server. This annotation expects the Secret name in the form \"namespace/secretName\".
nginx.ingress.kubernetes.io/proxy-ssl-verify: Enables or disables verification of the proxied HTTPS server certificate. (default: off)
nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: Sets the verification depth in the proxied HTTPS server certificates chain. (default: 1)
nginx.ingress.kubernetes.io/proxy-ssl-ciphers: Specifies the enabled ciphers for requests to a proxied HTTPS server. The ciphers are specified in the format understood by the OpenSSL library.
nginx.ingress.kubernetes.io/proxy-ssl-name: Allows setting proxy_ssl_name. This overrides the server name used to verify the certificate of the proxied HTTPS server. This value is also passed through SNI when a connection is established to the proxied HTTPS server.
nginx.ingress.kubernetes.io/proxy-ssl-protocols: Enables the specified protocols for requests to a proxied HTTPS server.
nginx.ingress.kubernetes.io/proxy-ssl-server-name: Enables passing of the server name through TLS Server Name Indication extension (SNI, RFC 6066) when establishing a connection with the proxied HTTPS server.
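A sketch combining these annotations with an HTTPS backend (the secret and server names are illustrative):
annotations:
  nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  nginx.ingress.kubernetes.io/proxy-ssl-secret: "default/proxy-ssl-secret"
  nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
  nginx.ingress.kubernetes.io/proxy-ssl-name: "backend.internal.example.com"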
Be aware this can be dangerous in multi-tenant clusters, as it can lead to people with otherwise limited permissions being able to retrieve all secrets on the cluster. The recommended mitigation for this threat is to disable this feature, so it may not work for you. See CVE-2021-25742 and the related issue on github for more information.
Like the custom-http-errors value in the ConfigMap, this annotation will set NGINX proxy-intercept-errors, but only for the NGINX location associated with this ingress. If a default backend annotation is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend). Different ingresses can specify different sets of error codes. Even if multiple ingress objects share the same hostname, this annotation can be used to intercept different error codes for each ingress (for example, different error codes to be intercepted for different paths on the same hostname, if each path is on a different ingress). If custom-http-errors is also specified globally, the error values specified in this annotation will override the global value for the given ingress' hostname and path.
This annotation is of the form nginx.ingress.kubernetes.io/custom-headers: custom-headers-configmap to specify a configmap name that contains custom headers. This annotation uses the more_set_headers nginx directive.
This annotation is of the form nginx.ingress.kubernetes.io/default-backend: <svc name> to specify a custom default backend. This <svc name> is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend. In case the service has multiple ports, the first one is the one which will receive the backend traffic.
This service will be used to handle the response when the configured service in the Ingress rule does not have any active endpoints. It will also be used to handle the error responses if both this annotation and the custom-http-errors annotation are set.
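For example, to send 404 and 503 errors for this ingress to a custom error service (a sketch; the service name is illustrative):
annotations:
  nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
  nginx.ingress.kubernetes.io/default-backend: error-pages-svc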
To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation nginx.ingress.kubernetes.io/enable-cors: \"true\". This will add a section in the server location enabling this functionality.
CORS can be controlled with the following annotations:
nginx.ingress.kubernetes.io/cors-allow-methods: Controls which methods are accepted.
This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case).
Default: GET, PUT, POST, DELETE, PATCH, OPTIONS
Example: nginx.ingress.kubernetes.io/cors-allow-methods: \"PUT, GET, POST, OPTIONS\"
nginx.ingress.kubernetes.io/cors-allow-headers: Controls which headers are accepted.
This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -.
nginx.ingress.kubernetes.io/cors-allow-origin: Controls which origins are accepted. This is a multi-valued field, separated by ','. It also supports single level wildcard subdomains and follows this format: http(s)://*.foo.bar, http(s)://*.bar.foo:8080 or http(s)://*.abc.bar.foo:9000 - Example: nginx.ingress.kubernetes.io/cors-allow-origin: "https://*.origin-site.com:4443, http://*.origin-site.com, https://example.org:1199"
nginx.ingress.kubernetes.io/cors-allow-credentials: Controls if credentials can be passed during CORS operations.
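A typical CORS sketch (origin, methods and header names are illustrative):
annotations:
  nginx.ingress.kubernetes.io/enable-cors: "true"
  nginx.ingress.kubernetes.io/cors-allow-origin: "https://origin-site.com"
  nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
  nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-App-Version"
  nginx.ingress.kubernetes.io/cors-allow-credentials: "true"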
Allows the definition of one or more aliases in the server definition of the NGINX configuration using the annotation nginx.ingress.kubernetes.io/server-alias: \"<alias 1>,<alias 2>\". This will create a server with the same configuration, but adding new values to the server_name directive.
Note
A server-alias name cannot conflict with the hostname of an existing server. If it does, the server-alias annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take precedence over the alias configuration.
For more information please see the server_name documentation.
Using the annotation nginx.ingress.kubernetes.io/server-snippet it is possible to add custom configuration in the server configuration block.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      set $agentflag 0;

      if ($http_user_agent ~* "(Mobile)" ){
        set $agentflag 1;
      }

      if ( $agentflag = 1 ) {
        return 301 https://m.example.com;
      }
Attention
This annotation can be used only once per host.
"},{"location":"user-guide/nginx-configuration/annotations/#client-body-buffer-size","title":"Client Body Buffer Size","text":"
Sets buffer size for reading client request body per location. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is applied to each location provided in the ingress rule.
Note
The annotation value must be given in a format understood by Nginx.
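For example (the value is illustrative):
nginx.ingress.kubernetes.io/client-body-buffer-size: "16k"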
To use an existing service that provides authentication the Ingress rule can be annotated with nginx.ingress.kubernetes.io/auth-url to indicate the URL where the HTTP request should be sent.
nginx.ingress.kubernetes.io/auth-url: "URL to the authentication service"
Additionally it is possible to set:
nginx.ingress.kubernetes.io/auth-keepalive: <Connections> to specify the maximum number of keepalive connections to auth-url. Only takes effect when no variables are used in the host part of the URL. Defaults to 0 (keepalive disabled).
Note: does not work with HTTP/2 listener because of a limitation in Lua subrequests. UseHTTP2 configuration should be disabled!
nginx.ingress.kubernetes.io/auth-keepalive-share-vars: Whether to share Nginx variables among the current request and the auth request. Example use case is to track requests: when set to \"true\" X-Request-ID HTTP header will be the same for the backend and the auth request. Defaults to \"false\".
nginx.ingress.kubernetes.io/auth-keepalive-requests: <Requests> to specify the maximum number of requests that can be served through one keepalive connection. Defaults to 1000 and only applied if auth-keepalive is set to higher than 0.
nginx.ingress.kubernetes.io/auth-keepalive-timeout: <Timeout> to specify a duration in seconds which an idle keepalive connection to an upstream server will stay open. Defaults to 60 and only applied if auth-keepalive is set to higher than 0.
nginx.ingress.kubernetes.io/auth-method: <Method> to specify the HTTP method to use.
nginx.ingress.kubernetes.io/auth-signin: <SignIn_URL> to specify the location of the error page.
nginx.ingress.kubernetes.io/auth-signin-redirect-param: <SignIn_URL> to specify the URL parameter in the error page which should contain the original URL for a failed signin request.
nginx.ingress.kubernetes.io/auth-response-headers: <Response_Header_1, ..., Response_Header_n> to specify headers to pass to backend once authentication request completes.
nginx.ingress.kubernetes.io/auth-proxy-set-headers: <ConfigMap> the name of a ConfigMap that specifies headers to pass to the authentication service
nginx.ingress.kubernetes.io/auth-request-redirect: <Request_Redirect_URL> to specify the X-Auth-Request-Redirect header value.
nginx.ingress.kubernetes.io/auth-cache-key: <Cache_Key> this enables caching for auth requests. Specify a lookup key for auth responses, e.g. $remote_user$http_authorization. Each server and location has its own keyspace. Hence, a cached response is only valid on a per-server and per-location basis.
nginx.ingress.kubernetes.io/auth-cache-duration: <Cache_duration> to specify a caching time for auth responses based on their response codes, e.g. 200 202 30m. See proxy_cache_valid for details. You may specify multiple, comma-separated values: 200 202 10m, 401 5m. defaults to 200 202 401 5m.
nginx.ingress.kubernetes.io/auth-always-set-cookie: <Boolean_Flag> to set a cookie returned by auth request. By default, the cookie will be set only if an upstream reports with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308.
nginx.ingress.kubernetes.io/auth-snippet: <Auth_Snippet> to specify a custom snippet to use with external authentication.
Note: nginx.ingress.kubernetes.io/auth-snippet is an optional annotation. However, it may only be used in conjunction with nginx.ingress.kubernetes.io/auth-url and will be ignored if nginx.ingress.kubernetes.io/auth-url is not set
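A sketch of external authentication via an oauth2-proxy-style service (all URLs and header names are illustrative assumptions):
annotations:
  nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
  nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$escaped_request_uri"
  nginx.ingress.kubernetes.io/auth-response-headers: "X-Auth-Request-User, X-Auth-Request-Email"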
By default the controller redirects all requests to an existing service that provides authentication if global-auth-url is set in the NGINX ConfigMap. If you want to disable this behavior for that ingress, you can use enable-global-auth: "false" in the NGINX ConfigMap. nginx.ingress.kubernetes.io/enable-global-auth: indicates if the GlobalExternalAuth configuration should be applied to this Ingress rule. The default value is "true".
These annotations define limits on connections and transmission rates. These can be used to mitigate DDoS Attacks.
nginx.ingress.kubernetes.io/limit-connections: number of concurrent connections allowed from a single IP address. A 503 error is returned when exceeding this limit.
nginx.ingress.kubernetes.io/limit-rps: number of requests accepted from a given IP each second. The burst limit is set to this limit multiplied by the burst multiplier; the default multiplier is 5. When clients exceed this limit, limit-req-status-code (default: 503) is returned.
nginx.ingress.kubernetes.io/limit-rpm: number of requests accepted from a given IP each minute. The burst limit is set to this limit multiplied by the burst multiplier; the default multiplier is 5. When clients exceed this limit, limit-req-status-code (default: 503) is returned.
nginx.ingress.kubernetes.io/limit-burst-multiplier: multiplier of the limit rate for burst size. The default burst multiplier is 5; this annotation overrides the default multiplier. When clients exceed this limit, limit-req-status-code (default: 503) is returned.
nginx.ingress.kubernetes.io/limit-rate-after: initial number of kilobytes after which the further transmission of a response to a given connection will be rate limited. This feature must be used with proxy-buffering enabled.
nginx.ingress.kubernetes.io/limit-rate: number of kilobytes per second allowed to send to a given connection. The zero value disables rate limiting. This feature must be used with proxy-buffering enabled.
nginx.ingress.kubernetes.io/limit-whitelist: client IP source ranges to be excluded from rate-limiting. The value is a comma separated list of CIDRs.
If you specify multiple annotations in a single Ingress rule, limits are applied in the order limit-connections, limit-rpm, limit-rps.
To configure settings globally for all Ingress rules, the limit-rate-after and limit-rate values may be set in the NGINX ConfigMap. The value set in an Ingress annotation will override the global setting.
The client IP address will be set based on the use of PROXY protocol or from the X-Forwarded-For header value when use-forwarded-headers is enabled.
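For example, allowing 5 requests per second with a burst of 15, while exempting an internal range (values are illustrative):
annotations:
  nginx.ingress.kubernetes.io/limit-rps: "5"
  nginx.ingress.kubernetes.io/limit-burst-multiplier: "3"
  nginx.ingress.kubernetes.io/limit-whitelist: "10.0.0.0/8"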
This annotation allows you to return a permanent redirect (Return Code 301) instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com would redirect everything to Google.
This annotation allows you to modify the status code used for permanent redirects. For example nginx.ingress.kubernetes.io/permanent-redirect-code: '308' would return your permanent-redirect with a 308.
This annotation allows you to return a temporal redirect (Return Code 302) instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/temporal-redirect: https://www.google.com would redirect everything to Google with a Return Code of 302 (Moved Temporarily).
This annotation allows you to modify the status code used for temporal redirects. For example nginx.ingress.kubernetes.io/temporal-redirect-code: '307' would return your temporal-redirect with a 307.
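For example (the target URL is illustrative):
annotations:
  nginx.ingress.kubernetes.io/permanent-redirect: https://www.example.com
  nginx.ingress.kubernetes.io/permanent-redirect-code: "308"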
The annotation nginx.ingress.kubernetes.io/ssl-passthrough instructs the controller to send TLS connections directly to the backend instead of letting NGINX decrypt the communication. See also TLS/HTTPS in the User guide.
Note
SSL Passthrough is disabled by default and requires starting the controller with the --enable-ssl-passthrough flag.
Attention
Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on the layer 7 (HTTP), using SSL Passthrough invalidates all the other annotations set on an Ingress object.
By default the Ingress-Nginx Controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration.
The nginx.ingress.kubernetes.io/service-upstream annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port.
This can be desirable for things like zero-downtime deployments. See issue #257.
If the service-upstream annotation is specified the following things should be taken into consideration:
Sticky Sessions will not work as only round-robin load balancing is supported.
The proxy_next_upstream directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.
"},{"location":"user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect","title":"Server-side HTTPS enforcement through redirect","text":"
By default the controller redirects (308) to HTTPS if TLS is enabled for that ingress. If you want to disable this behavior globally, you can use ssl-redirect: \"false\" in the NGINX ConfigMap.
To configure this feature for specific ingress resources, you can use the nginx.ingress.kubernetes.io/ssl-redirect: \"false\" annotation in the particular resource.
When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: \"true\" annotation in the particular resource.
To preserve the trailing slash in the URI with ssl-redirect, set nginx.ingress.kubernetes.io/preserve-trailing-slash: \"true\" annotation for that particular resource.
In some scenarios, it is required to redirect from www.domain.com to domain.com or vice versa; which way the redirect is performed depends on the configured host value in the Ingress object.
For example, if .spec.rules.host is configured with a value like www.example.com, then this annotation will redirect from example.com to www.example.com. If .spec.rules.host is configured with a value like example.com, so without a www, then this annotation will redirect from www.example.com to example.com instead.
To enable this feature use the annotation nginx.ingress.kubernetes.io/from-to-www-redirect: \"true\"
Attention
If at some point a new Ingress is created with a host equal to one of the options (like domain.com) the annotation will be omitted.
Attention
For HTTPS to HTTPS redirects, it is mandatory that the SSL certificate defined in the Secret, located in the TLS section of the Ingress, contains both FQDNs in the common name of the certificate.
You can specify blocked client IP source ranges through the nginx.ingress.kubernetes.io/denylist-source-range annotation. The value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.
To configure this setting globally for all Ingress rules, the denylist-source-range value may be set in the NGINX ConfigMap.
Note
Adding an annotation to an Ingress rule overrides any global restriction.
You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.
To configure this setting globally for all Ingress rules, the whitelist-source-range value may be set in the NGINX ConfigMap.
Note
Adding an annotation to an Ingress rule overrides any global restriction.
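For example (the CIDRs are illustrative):
nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24,172.10.0.1"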
Using the configuration configmap it is possible to set the default global timeout for connections to the upstream servers. In some scenarios it is required to have different values. To allow this we provide annotations that allow this customization:
If you indicate Backend Protocol as GRPC or GRPCS, the following grpc values will be set and inherited from proxy timeouts:
grpc_connect_timeout=5s, from nginx.ingress.kubernetes.io/proxy-connect-timeout
grpc_send_timeout=60s, from nginx.ingress.kubernetes.io/proxy-send-timeout
grpc_read_timeout=60s, from nginx.ingress.kubernetes.io/proxy-read-timeout
Note: All timeout values are unitless and in seconds e.g. nginx.ingress.kubernetes.io/proxy-read-timeout: \"120\" sets a valid 120 seconds proxy read timeout.
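For example (values are illustrative):
annotations:
  nginx.ingress.kubernetes.io/proxy-connect-timeout: "5"
  nginx.ingress.kubernetes.io/proxy-send-timeout: "60"
  nginx.ingress.kubernetes.io/proxy-read-timeout: "120"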
The annotations nginx.ingress.kubernetes.io/proxy-redirect-from and nginx.ingress.kubernetes.io/proxy-redirect-to will set the first and second parameters of NGINX's proxy_redirect directive respectively. It is possible to set the text that should be changed in the Location and Refresh header fields of a proxied server response
Setting \"off\" or \"default\" in the annotation nginx.ingress.kubernetes.io/proxy-redirect-from disables nginx.ingress.kubernetes.io/proxy-redirect-to, otherwise, both annotations must be used in unison. Note that each annotation must be a string without spaces.
By default the value of each annotation is \"off\".
"},{"location":"user-guide/nginx-configuration/annotations/#custom-max-body-size","title":"Custom max body size","text":"
For NGINX, a 413 error will be returned to the client when the size in a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter client_max_body_size.
To configure this setting globally for all Ingress rules, the proxy-body-size value may be set in the NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:
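For example (the value is illustrative):
nginx.ingress.kubernetes.io/proxy-body-size: 8m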
Enable or disable proxy buffering proxy_buffering. By default proxy buffering is disabled in the NGINX config.
To configure this setting globally for all Ingress rules, the proxy-buffering value may be set in the NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:
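For example (the value is illustrative):
nginx.ingress.kubernetes.io/proxy-buffering: "on"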
Sets the number of the buffers in proxy_buffers used for reading the first part of the response received from the proxied server. By default the proxy buffers number is set to 4.
To configure this setting globally, set proxy-buffers-number in NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:
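For example (the value is illustrative):
nginx.ingress.kubernetes.io/proxy-buffers-number: "4"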
Sets the size of the buffer proxy_buffer_size used for reading the first part of the response received from the proxied server. By default the proxy buffer size is set to "4k".
To configure this setting globally, set proxy-buffer-size in NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:
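For example (the value is illustrative):
nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"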
"},{"location":"user-guide/nginx-configuration/annotations/#proxy-max-temp-file-size","title":"Proxy max temp file size","text":"
When buffering of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the proxy_buffer_size and proxy_buffers directives, a part of the response can be saved to a temporary file. This directive sets the maximum size of the temporary file setting the proxy_max_temp_file_size. The size of data written to the temporary file at a time is set by the proxy_temp_file_write_size directive.
The zero value disables buffering of responses to temporary files.
To use custom values in an Ingress rule, define this annotation:
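For example (the value is illustrative):
nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "1024m"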
Using this annotation sets the proxy_http_version that the Nginx reverse proxy will use to communicate with the backend. By default this is set to \"1.1\".
The following annotation will set the ssl_prefer_server_ciphers directive at the server level. This configuration specifies that server ciphers should be preferred over client ciphers when using the SSLv3 and TLS protocols.
Access logs are enabled by default, but in some scenarios access logs might be required to be disabled for a given ingress. To do this, use the annotation:
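nginx.ingress.kubernetes.io/enable-access-log: "false"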
Rewrite logs are not enabled by default. In some scenarios it could be required to enable NGINX rewrite logs. Note that rewrite logs are sent to the error_log file at the notice level. To enable this feature use the annotation:
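nginx.ingress.kubernetes.io/enable-rewrite-log: "true"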
Opentelemetry can be enabled or disabled globally through the ConfigMap but this will sometimes need to be overridden to enable it or disable it for a specific ingress (e.g. to turn off telemetry of external health check endpoints)
The option to trust incoming trace spans can be enabled or disabled globally through the ConfigMap but this will sometimes need to be overridden to enable it or disable it for a specific ingress (e.g. only enable on a private endpoint)
ModSecurity is an OpenSource Web Application firewall. It can be enabled for a particular set of ingress locations. The ModSecurity module must first be enabled by enabling ModSecurity in the ConfigMap. Note this will enable ModSecurity for all paths, and each path must be disabled manually.
Note: If you use both enable-owasp-core-rules and modsecurity-snippet annotations together, only the modsecurity-snippet will take effect. If you wish to include the OWASP Core Rule Set or recommended configuration simply use the include statement:
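For example (a sketch; these are the stock configuration paths inside the controller image, as documented on this page):
nginx.ingress.kubernetes.io/modsecurity-snippet: |
  Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf
  Include /etc/nginx/modsecurity/modsecurity.conf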
Using the backend-protocol annotation it is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions.) Valid values: HTTP, HTTPS, AUTO_HTTP, GRPC, GRPCS and FCGI.
When using this annotation with the NGINX annotation nginx.ingress.kubernetes.io/affinity of type cookie, nginx.ingress.kubernetes.io/session-cookie-path must be also set; Session cookie paths do not support regex.
Using the nginx.ingress.kubernetes.io/use-regex annotation will indicate whether or not the paths defined on an Ingress use regular expressions. The default value is false.
The following will indicate that regular expression paths are being used:
nginx.ingress.kubernetes.io/use-regex: "true"
The following will indicate that regular expression paths are not being used:
nginx.ingress.kubernetes.io/use-regex: "false"
When this annotation is set to true, the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.
Additionally, if the rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.
Please read about ingress path matching before using this modifier.
By default, a request would need to satisfy all authentication requirements in order to be allowed. By using this annotation, requests that satisfy either any or all authentication requirements are allowed, based on the configuration value.
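For example, to allow requests that pass any one of the configured authentication checks (a sketch):
nginx.ingress.kubernetes.io/satisfy: "any"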
Enables a request to be mirrored to a mirror backend. Responses from mirror backends are ignored. This feature is useful for seeing how requests behave in "test" backends.
Also, by default the Host header for mirrored requests will be set to the same value as the host part of the URI in the "mirror-target" annotation. You can override it with the "mirror-host" annotation:
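For example (the target and host are illustrative):
annotations:
  nginx.ingress.kubernetes.io/mirror-target: https://test.env.com/$request_uri
  nginx.ingress.kubernetes.io/mirror-host: "test.env.com"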
ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.
The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system components for the nginx-controller.
In order to overwrite nginx-controller configuration values as seen in config.go, you can add key-value pairs to the data section of the config-map. For example:
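A minimal sketch (the ConfigMap name and namespace shown match a typical Helm installation and may differ in yours; the keys are illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  map-hash-bucket-size: "128"
  proxy-read-timeout: "120"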
The keys and values in a ConfigMap can only be strings. This means that if we want a boolean value, we need to quote it, like "true" or "false". The same goes for numbers, like "100".
\"Slice\" types (defined below as []string or []int) can be provided as a comma-delimited string.
Enables users to consume cross-namespace resources in annotations, which was previously always enabled. default: true
Annotations that may be impacted with this change: * auth-secret * auth-proxy-set-header * auth-tls-secret * fastcgi-params-configmap * proxy-ssl-secret
This option will be defaulted to false in the next major release
Enables Ingress to parse and add -snippet annotations/directives created by the user. default: false
Warning: We recommend enabling this option only if you TRUST users with permission to create Ingress objects, as this may allow a user to add restricted configurations to the final nginx.conf file
This option will be defaulted to false in the next major release
Contains a comma-separated list of chars/words that are well known to be used to abuse Ingress configuration and must be blocked. Related to CVE-2021-25742.
When an annotation is detected with a value that matches one of the blocked bad words, the whole Ingress won't be configured.
default: \"\"
When doing this, the default blocklist is overridden, which means that the Ingress admin should add all the words that should be blocked; here is a suggested block list.
Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value \"true\".
This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use upstream-keepalive-requests instead.
Sets the maximum number of requests (including push requests) that can be served through one HTTP/2 connection, after which the next client request will lead to connection closing and the need of establishing a new connection.
Enables or disables the HSTS header in servers running SSL. HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be communicated with using HTTPS, instead of HTTP. It provides protection against protocol downgrade attacks and cookie theft.
Sets the time, in seconds, during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.
Setting keep-alive: '0' will most likely break concurrent http/2 requests due to changes introduced with nginx 1.19.7
Changes with nginx 1.19.7                                        16 Feb 2021

    *) Change: connections handling in HTTP/2 has been changed to better
       match HTTP/1.x; the "http2_recv_timeout", "http2_idle_timeout", and
       "http2_max_requests" directives have been removed, the
       "keepalive_timeout" and "keepalive_requests" directives should be
       used instead.
References: nginx change log nginx issue tracker nginx mailing list
Sets if the escape parameter is disabled entirely for character escaping in variables ("true") or controlled by log-format-escape-json ("false"). Sets the nginx log format.
If disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections at a time. default: true
Sets the maximum number of simultaneous connections that can be opened by each worker process. 0 will use the value of max-worker-open-files. default: 16384
Tip
Using 0 in scenarios of high load improves performance at the cost of increased RAM utilization (even when idle).
Sets the maximum number of files that can be opened by each worker process. The default of 0 means \"max open files (system's limit) - 1024\". default: 0
If use-forwarded-headers or use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default IP/network address of your external load balancer. Can be a comma-separated list of CIDR blocks. default: \"0.0.0.0/0\"
Sets the maximum size of the server names hash tables used in server names, map directive's values, MIME types, names of request header strings, etc.
Instructs NGINX to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option), allowing a kernel to distribute incoming connections between worker processes. default: true
Sets the ciphers list to enable. The ciphers are specified in the format understood by the OpenSSL library.
The default cipher list is: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384.
The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.
DHE-based ciphers will not be available until a DH parameter is configured; see Custom DH parameters for perfect forward secrecy.
Please check the Mozilla SSL Configuration Generator.
Note: ssl_prefer_server_ciphers directive will be enabled by default for http context.
Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string. To create a ticket: openssl rand 80 | openssl enc -A -base64
TLS session ticket-key; by default, a randomly generated key is used.
Enables or disables the PROXY protocol to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).
Sets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection. default: 5s
Enables or disables \"geoip\" module that creates variables with values depending on the client IP address, using the precompiled MaxMind databases. default: true
Note: MaxMind legacy databases are discontinued and will not receive updates after 2019-01-02, cf. discontinuation notice. Consider use-geoip2 below.
Enables the geoip2 module for NGINX. Since 0.27.0 and due to a change in the MaxMind databases a license is required to have access to the databases. For this reason, it is required to define a new flag --maxmind-license-key in the ingress controller deployment to download the databases needed during the initialization of the ingress controller. Alternatively, it is possible to use a volume to mount the files /etc/ingress-controller/geoip/GeoLite2-City.mmdb and /etc/ingress-controller/geoip/GeoLite2-ASN.mmdb, avoiding the overhead of the download.
Important
If the feature is enabled but the files are missing, GeoIP2 will not be enabled.
Enables or disables compression of HTTP responses using the \"brotli\" module. The default mime type list to compress is: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component. default: false
Note: Brotli does not work in Safari < 11. For more information see https://caniuse.com/#feat=brotli
Sets the MIME types in addition to \"text/html\" to compress. The special value \"*\" matches any MIME type. Responses with the \"text/html\" type are always compressed if use-gzip is enabled. default: application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component.
Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. default: 320
Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed. default: 10000
Sets parameters for a shared memory zone that will keep states for various keys of limit_conn_zone. The default of "$binary_remote_addr" variable's size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.
Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.
Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.
If true, NGINX passes the incoming X-Forwarded-* headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers.
If false, NGINX ignores incoming X-Forwarded-* headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or it's behind a L3/packet-based load balancer that doesn't alter the source IP in the packets.
enable-real-ip enables the configuration of https://nginx.org/en/docs/http/ngx_http_realip_module.html. Specific attributes of the module can be configured further by using forwarded-for-header and proxy-real-ip-cidr settings.
Append the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.
Specifies the argument to be passed to the sampler constructor. Must be a number. For const this should be 0 to never sample and 1 to always sample. default: 1
Specifies the custom remote sampler host to be passed to the sampler constructor. Must be a valid URL. Leave blank to use default value (localhost). default: http://127.0.0.1
Specifies to use client-side sampling. If true disables client-side sampling (thus ignoring sample_rate) and enables distributed priority sampling, where traces are sampled based on a combination of user-assigned priorities and configuration from the agent. default: true
Adds custom configuration to all the locations in the nginx configuration.
You cannot use this to add new locations that proxy to the Kubernetes pods, as the snippet does not have access to the Go template functions. If you want to add custom locations you will have to provide your own nginx.tmpl.
Sets the timeout in seconds for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response.
It will also set the grpc_read_timeout for gRPC connections.
Sets the timeout in seconds for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request.
It will also set the grpc_send_timeout for gRPC connections.
Sets the number of buffers used for reading the first part of the response received from the proxied server. This part usually contains a small response header.
Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header.
Sets a list of URLs that should not appear in the NGINX access log. This is useful with URLs like /health or /health-check that make reading the logs "complex". default: empty
Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice the specified limit.
You can optionally set a size unit to allow for kilobyte-granularity. Allowed units are 'm' or 'k' (case-insensitive), and it defaults to MB if no unit is provided. Here is a similar example, but the my_custom_plugin dict is only 512KB.
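A sketch of what that might look like (assuming the lua-shared-dicts ConfigMap key and the my_custom_plugin dictionary referenced above):
lua-shared-dicts: "my_custom_plugin: 512k"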
Sets the HTTP status code to be used in redirects. Supported codes are 301,302,307 and 308 default: 308
Why the default code is 308?
RFC 7238 was created to define the 308 (Permanent Redirect) status code that is similar to 301 (Moved Permanently) but it keeps the payload in the redirect. This is important if we send a redirect in methods like POST.
A comma-separated list of locations on which http requests will never get redirected to their https counterpart. default: \"/.well-known/acme-challenge\"
A URL to an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-url. Locations that should not get authenticated can be listed using no-auth-locations (see no-auth-locations). In addition, each service can be excluded from authentication via the annotation enable-global-auth set to "false". default: ""
An HTTP method to use for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-method. default: ""
Sets the location of the error page for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-signin. default: \"\"
Sets the query parameter in the error page signin URL which contains the original URL of the request that failed authentication. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-signin-redirect-param. default: \"rd\"
Sets the headers to pass to backend once authentication request completes. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-response-headers. default: \"\"
Sets the X-Auth-Request-Redirect header value. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-request-redirect. default: \"\"
Sets a custom snippet to use with external authentication. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-snippet. default: \"\"
Set a caching time for auth responses based on their response codes, e.g. 200 202 30m. See proxy_cache_valid for details. You may specify multiple, comma-separated values: 200 202 10m, 401 5m. defaults to 200 202 401 5m.
Always set a cookie returned by auth request. By default, the cookie will be set only if an upstream reports with the code 200, 201, 204, 206, 301, 302, 303, 304, 307, or 308. default: false
A comma-separated list of User-Agent values, requests from which have to be blocked globally. It's possible to use full strings and regular expressions here. More details about valid patterns can be found in the map Nginx directive documentation.
A comma-separated list of Referer values, requests from which have to be blocked globally. It's possible to use full strings and regular expressions here. More details about valid patterns can be found in the map Nginx directive documentation.
Set if the service's Cluster IP and port should be used instead of a list of all endpoints. This can be overwritten by an annotation on an Ingress rule. default: \"false\"
Set to reject SSL handshake to an unknown virtualhost. This parameter helps to mitigate the fingerprinting using default certificate of ingress. default: \"false\"
Ingress objects contain a field called pathType that defines the proxy behavior. It can be Exact, Prefix or ImplementationSpecific.
When pathType is configured as Exact or Prefix, there should be stricter validation, allowing only paths starting with "/" and containing only alphanumeric characters and "-", "_" and additional "/".
When this option is enabled, the validation will happen on the Admission Webhook, denying any Ingress that does not use pathType ImplementationSpecific and contains invalid characters.
This means that Ingress objects that rely on paths containing regex characters should use ImplementationSpecific pathType.
The cluster admin should establish validation rules using mechanisms like Open Policy Agent to validate that only authorized users can use ImplementationSpecific pathType and that only the authorized characters can be used.
Please note the template is tied to the Go code. Do not change names in the variable $cfg.
For more information about the template syntax please check the Go template package. In addition to the built-in functions provided by the Go package the following functions are also available:
empty: returns true if the specified parameter (string) is empty
contains: strings.Contains
hasPrefix: strings.HasPrefix
hasSuffix: strings.HasSuffix
toUpper: strings.ToUpper
toLower: strings.ToLower
split: strings.Split
quote: wraps a string in double quotes
buildLocation: helps to build the NGINX Location section in each server
buildProxyPass: builds the reverse proxy configuration
buildRateLimit: helps to build a limit zone inside a location if contains a rate limit annotation
| Placeholder | Description |
| --- | --- |
| $proxy_protocol_addr | remote address if proxy protocol is enabled |
| $remote_addr | the source IP address of the client |
| $remote_user | user name supplied with the Basic authentication |
| $time_local | local time in the Common Log Format |
| $request | full original request line |
| $status | response status |
| $body_bytes_sent | number of bytes sent to a client, not counting the response header |
| $http_referer | value of the Referer header |
| $http_user_agent | value of the User-Agent header |
| $request_length | request length (including request line, header, and request body) |
| $request_time | time elapsed since the first bytes were read from the client |
| $proxy_upstream_name | name of the upstream. The format is upstream-<namespace>-<service name>-<service port> |
| $proxy_alternative_upstream_name | name of the alternative upstream. The format is upstream-<namespace>-<service name>-<service port> |
| $upstream_addr | the IP address and port (or the path to the domain socket) of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas. |
| $upstream_response_length | the length of the response obtained from the upstream server |
| $upstream_response_time | time spent on receiving the response from the upstream server as seconds with millisecond resolution |
| $upstream_status | status code of the response obtained from the upstream server |
| $req_id | value of the X-Request-ID HTTP header. If the header is not set, a randomly generated ID. |
Additional available variables:
| Placeholder | Description |
| --- | --- |
| $namespace | namespace of the ingress |
| $ingress_name | name of the ingress |
| $service_name | name of the service |
| $service_port | port of the service |
Sources:
Upstream variables
Embedded variables
"},{"location":"user-guide/third-party-addons/modsecurity/","title":"ModSecurity Web Application Firewall","text":"
ModSecurity is an open source, cross platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - https://www.modsecurity.org
The ModSecurity-nginx connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).
The default ModSecurity configuration file is located in /etc/nginx/modsecurity/modsecurity.conf. This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration. To enable the ModSecurity feature we need to specify enable-modsecurity: \"true\" in the configuration configmap.
Note: the default configuration uses detection only, because that minimizes the chances of post-installation disruption. Due to the value of the setting SecAuditLogType=Concurrent, the ModSecurity log is stored in multiple files inside the directory /var/log/audit. The default Serial value in SecAuditLogType can impact performance.
The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts. The directory /etc/nginx/owasp-modsecurity-crs contains the OWASP ModSecurity Core Rule Set repository. Using enable-owasp-modsecurity-crs: \"true\" we enable the use of the rules.
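Putting the two options together, a minimal sketch of the relevant ConfigMap data (the ConfigMap name is an assumption; match it to your install):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name; match your install
  namespace: ingress-nginx
data:
  # run libmodsecurity via the ModSecurity-nginx connector
  enable-modsecurity: "true"
  # also load the OWASP Core Rule Set shipped in /etc/nginx/owasp-modsecurity-crs
  enable-owasp-modsecurity-crs: "true"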
For more info on supported annotations, please see annotations/#modsecurity
"},{"location":"user-guide/third-party-addons/modsecurity/#example-of-using-modsecurity-with-plugins-via-the-helm-chart","title":"Example of using ModSecurity with plugins via the helm chart","text":"
Suppose you have a ConfigMap that contains the contents of the nextcloud-rule-exclusions plugin like this:
apiVersion: v1\nkind: ConfigMap\nmetadata:\n name: modsecurity-plugins\ndata:\n empty-after.conf: |\n # no data\n empty-before.conf: |\n # no data\n empty-config.conf: |\n # no data\n nextcloud-rule-exclusions-before.conf: |\n # this is just a snippet\n # find the full file at https://github.com/coreruleset/nextcloud-rule-exclusions-plugin\n #\n # [ File Manager ]\n # The web interface uploads files, and interacts with the user.\n SecRule REQUEST_FILENAME \"@contains /remote.php/webdav\" \\\n \"id:9508102,\\\n phase:1,\\\n pass,\\\n t:none,\\\n nolog,\\\n ver:'nextcloud-rule-exclusions-plugin/1.2.0',\\\n ctl:ruleRemoveById=920420,\\\n ctl:ruleRemoveById=920440,\\\n ctl:ruleRemoveById=941000-942999,\\\n ctl:ruleRemoveById=951000-951999,\\\n ctl:ruleRemoveById=953100-953130,\\\n ctl:ruleRemoveByTag=attack-injection-php\"\n
If you're using the helm chart, you can pass in the following parameters in your values.yaml:
controller:\n config:\n # Enables ModSecurity\n enable-modsecurity: \"true\"\n\n # Update ModSecurity config and rules\n modsecurity-snippet: |\n # this enables the mod security nextcloud plugin\n Include /etc/nginx/owasp-modsecurity-crs/plugins/nextcloud-rule-exclusions-before.conf\n\n # this enables the default OWASP Core Rule Set\n Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf\n\n # Enable prevention mode. Options: DetectionOnly,On,Off (default is DetectionOnly)\n SecRuleEngine On\n\n # Enable scanning of the request body\n SecRequestBodyAccess On\n\n # Enable XML and JSON parsing\n SecRule REQUEST_HEADERS:Content-Type \"(?:text|application(?:/soap\\+|/)|application/xml)/\" \\\n \"id:200000,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=XML\"\n\n SecRule REQUEST_HEADERS:Content-Type \"application/json\" \\\n \"id:200001,phase:1,t:none,t:lowercase,pass,nolog,ctl:requestBodyProcessor=JSON\"\n\n # Reject if larger (we could also let it pass with ProcessPartial)\n SecRequestBodyLimitAction Reject\n\n # Send ModSecurity audit logs to the stdout (only for rejected requests)\n SecAuditLog /dev/stdout\n\n # format the logs in JSON\n SecAuditLogFormat JSON\n\n # could be On/Off/RelevantOnly\n SecAuditEngine RelevantOnly\n\n # Add a volume for the plugins directory\n extraVolumes:\n - name: plugins\n configMap:\n name: modsecurity-plugins\n\n # override the /etc/nginx/owasp-modsecurity-crs/plugins directory with your ConfigMap\n extraVolumeMounts:\n - name: plugins\n mountPath: /etc/nginx/owasp-modsecurity-crs/plugins\n
Enables distributed telemetry for requests served by NGINX, via the OpenTelemetry project.
Using the third-party module opentelemetry-cpp-contrib/nginx, the Ingress-Nginx Controller can configure NGINX to enable OpenTelemetry instrumentation. By default, this feature is disabled.
Check out this demo showcasing OpenTelemetry in Ingress NGINX. The video provides an overview and practical demonstration of how OpenTelemetry can be utilized in Ingress NGINX for observability and monitoring purposes.
NOTE: While the option is called otlp-collector-host, you will need to point this to any backend that receives otlp-grpc.
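For example, a sketch of the controller ConfigMap entries to enable the feature and point it at an OTLP/gRPC endpoint (the collector Service name below is hypothetical, as is the ConfigMap name):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name; match your install
  namespace: ingress-nginx
data:
  enable-opentelemetry: "true"
  # any otlp-grpc receiver works here, not only the OpenTelemetry Collector
  otlp-collector-host: "otel-collector.observability.svc"   # hypothetical Service
  otlp-collector-port: "4317"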
Next you will need to deploy a distributed telemetry system which uses OpenTelemetry. opentelemetry-collector, Jaeger, Tempo, and Zipkin have been tested.
Other optional configuration options:
# specifies the name to use for the server span\nopentelemetry-operation-name\n\n# sets whether or not to trust incoming telemetry spans\nopentelemetry-trust-incoming-span\n\n# specifies the port to use when uploading traces, Default: 4317\notlp-collector-port\n\n# specifies the service name to use for any traces created, Default: nginx\notel-service-name\n\n# The maximum queue size. After the size is reached data are dropped.\notel-max-queuesize\n\n# The delay interval in milliseconds between two consecutive exports.\notel-schedule-delay-millis\n\n# The maximum batch size of every export. It must be smaller or equal to maxQueueSize.\notel-max-export-batch-size\n\n# specifies sample rate for any traces created, Default: 0.01\notel-sampler-ratio\n\n# specifies the sampler to be used when sampling traces.\n# The available samplers are: AlwaysOn, AlwaysOff, TraceIdRatioBased, Default: AlwaysOff\notel-sampler\n\n# Uses sampler implementation which by default will take a sample if parent Activity is sampled, Default: false\notel-sampler-parent-based\n
Note that you can also set whether to trust incoming spans (global default is true) per-location using annotations like the following:
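A sketch of such an annotation on an Ingress (only the metadata fragment is shown; the annotation key mirrors the global opentelemetry-trust-incoming-span option):
kind: Ingress
metadata:
  name: example-ingress   # hypothetical
  annotations:
    # per-location override of the global trust setting
    nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span: "false"
# ...rest of the Ingress spec unchanged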
"},{"location":"user-guide/third-party-addons/opentelemetry/#migration-from-opentracing-jaeger-zipkin-and-datadog","title":"Migration from OpenTracing, Jaeger, Zipkin and Datadog","text":"
If you are migrating from OpenTracing, Jaeger, Zipkin, or Datadog to OpenTelemetry, you may need to update various annotations and configurations. Here are the mappings for common annotations and configurations:
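An illustrative subset of the renames (not the complete mapping; verify against your controller version):
ConfigMap option enable-opentracing becomes enable-opentelemetry
ConfigMap options jaeger-collector-host, zipkin-collector-host and datadog-collector-host are replaced by otlp-collector-host
Annotation nginx.ingress.kubernetes.io/enable-opentracing becomes nginx.ingress.kubernetes.io/enable-opentelemetry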
"},{"location":"user-guide/cli-arguments/","title":"Command line arguments","text":"
The following command line arguments are accepted by the Ingress controller executable.
They are set in the container spec of the ingress-nginx-controller Deployment manifest.
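For orientation, a sketch of how a few of these flags appear in the container spec (the values are illustrative, taken from a typical default install):
containers:
  - name: controller
    image: registry.k8s.io/ingress-nginx/controller:v1.9.0   # illustrative tag
    args:
      - /nginx-ingress-controller
      - --election-id=ingress-controller-leader
      - --controller-class=k8s.io/ingress-nginx
      - --ingress-class=nginx
      - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      - --validating-webhook=:8443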
Argument
Description
--annotations-prefix
Prefix of the Ingress annotations specific to the NGINX controller. (default "nginx.ingress.kubernetes.io")
--apiserver-host
Address of the Kubernetes API server. Takes the form "protocol://address:port". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted.
--bucket-factor
Bucket factor for native histograms. Value must be > 1 for enabling native histograms. (default 0)
--certificate-authority
Path to a cert file for the certificate authority. This certificate is used only when the flag --apiserver-host is specified.
--configmap
Name of the ConfigMap containing custom global configurations for the controller.
--controller-class
Ingress Class Controller value this Ingress satisfies. The class of an Ingress object is set using the field IngressClassName in Kubernetes clusters version v1.19.0 or higher. The .spec.controller value of the IngressClass referenced in an Ingress Object should be the same value specified here to make this object be watched.
--deep-inspect
Enables ingress object security deep inspector. (default true)
--default-backend-service
Service used to serve HTTP requests not matching any known server name (catch-all). Takes the form "namespace/name". The controller configures NGINX to forward requests to the first port of this Service.
--default-server-port
Port to use for exposing the default server (catch-all). (default 8181)
--default-ssl-certificate
Secret containing a SSL certificate to be used by the default HTTPS server (catch-all). Takes the form "namespace/name".
--enable-annotation-validation
If true, enables the annotation validation feature. (default true)
--disable-catch-all
Disable support for catch-all Ingresses. (default false)
--disable-full-test
Disable the full test of all merged ingresses at the admission stage, testing only the template of the ingress being created or updated. (The full test of all ingresses is enabled by default.)
--disable-svc-external-name
Disable support for Services of type ExternalName. (default false)
--disable-sync-events
Disables the creation of 'Sync' Event resources, but still logs them
--dynamic-configuration-retries
Number of times to retry failed dynamic configuration before failing to sync an ingress. (default 15)
--election-id
Election id to use for Ingress status updates. (default "ingress-controller-leader")
--election-ttl
Duration for which a leader election is valid before a re-election occurs, e.g. 15s, 10m or 1h. (default 30s)
--enable-metrics
Enables the collection of NGINX metrics. (default true)
--enable-ssl-chain-completion
Autocomplete SSL certificate chains with missing intermediate CA certificates. Certificates uploaded to Kubernetes must have the "Authority Information Access" X.509 v3 extension for this to succeed. (default false)
--enable-ssl-passthrough
Enable SSL Passthrough. (default false)
--disable-leader-election
Disable Leader Election on Nginx Controller. (default false)
--enable-topology-aware-routing
Enable the topology aware routing feature; requires the Service object annotation service.kubernetes.io/topology-mode to be set to auto. (default false)
--exclude-socket-metrics
Set of socket request metrics to exclude; these won't be exported or calculated. The possible socket request metrics to exclude are documented in the monitoring guide, e.g. 'nginx_ingress_controller_request_duration_seconds,nginx_ingress_controller_response_size'.
--health-check-path
URL path of the health check endpoint. Configured inside the NGINX status server. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default "/healthz")
--health-check-timeout
Time limit, in seconds, for a probe to health-check-path to succeed. (default 10)
--healthz-port
Port to use for the healthz endpoint. (default 10254)
--healthz-host
Address to bind the healthz endpoint.
--http-port
Port to use for servicing HTTP traffic. (default 80)
--https-port
Port to use for servicing HTTPS traffic. (default 443)
--ingress-class
Name of the ingress class this controller satisfies. The class of an Ingress object is set using the field IngressClassName in Kubernetes clusters version v1.18.0 or higher or the annotation "kubernetes.io/ingress.class" (deprecated). If this parameter is not set, or set to the default value of "nginx", it will handle ingresses with either an empty or "nginx" class name.
--ingress-class-by-name
Define if Ingress Controller should watch for Ingress Class by Name together with Controller Class. (default false).
--internal-logger-address
Address to be used when binding internal syslogger. (default 127.0.0.1:11514)
--kubeconfig
Path to a kubeconfig file containing authorization and API server information.
--length-buckets
Set of buckets which will be used for prometheus histogram metrics such as RequestLength, ResponseLength. (default [10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
--max-buckets
Maximum number of buckets for native histograms. (default 100)
--maxmind-edition-ids
Maxmind edition ids to download GeoLite2 Databases. (default "GeoLite2-City,GeoLite2-ASN")
--maxmind-retries-timeout
Maxmind downloading delay between 1st and 2nd attempt, 0s - do not retry to download if something went wrong. (default 0s)
--maxmind-retries-count
Number of attempts to download the GeoIP DB. (default 1)
--maxmind-license-key
Maxmind license key to download GeoLite2 Databases. https://blog.maxmind.com/2019/12/significant-changes-to-accessing-and-using-geolite2-databases/ .
--metrics-per-undefined-host
Export metrics per-host even if the host is not defined in an ingress. Requires --metrics-per-host to be set to true. (default false)
--monitor-max-batch-size
Max batch size of NGINX metrics. (default 10000)
--post-shutdown-grace-period
Additional delay in seconds before controller container exits. (default 10)
--profiler-port
Port to use for exposing the ingress controller Go profiler when it is enabled. (default 10245)
--profiling
Enable profiling via web interface host:port/debug/pprof/ . (default true)
--publish-service
Service fronting the Ingress controller. Takes the form "namespace/name". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies.
--publish-status-address
Customized address (or addresses, separated by comma) to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter.
--report-node-internal-ip-address
Set the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter. (default false)
--report-status-classes
If true, report status classes in metrics (2xx, 3xx, 4xx and 5xx) instead of full status codes. (default false)
--ssl-passthrough-proxy-port
Port to use internally for SSL Passthrough. (default 442)
--status-port
Port to use for the lua HTTP endpoint configuration. (default 10246)
--status-update-interval
Time interval, in seconds, in which the status should check if an update is required. (default 60)
--stream-port
Port to use for the lua TCP/UDP endpoint configuration. (default 10247)
--sync-period
Period at which the controller forces the repopulation of its local object stores. Disabled by default.
--sync-rate-limit
Define the sync frequency upper limit. (default 0.3)
--tcp-services-configmap
Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form "namespace/name:port", where "port" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic.
--time-buckets
Set of buckets which will be used for prometheus histogram metrics such as RequestTime, ResponseTime. (default [0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 2.5, 5, 10])
--udp-services-configmap
Name of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form "namespace/name:port", where "port" can either be a port name or number.
--update-status
Update the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true)
--update-status-on-shutdown
Update the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. (default true)
--shutdown-grace-period
Seconds to wait after receiving the shutdown signal, before stopping the nginx process. (default 0)
--size-buckets
Set of buckets which will be used for prometheus histogram metrics such as BytesSent. (default [10, 100, 1000, 10000, 100000, 1e+06, 1e+07])
-v, --v Level
number for the log level verbosity
--validating-webhook
The address to start an admission controller on to validate incoming ingresses. Takes the form ":port". If not provided, no admission controller is started.
--validating-webhook-certificate
The path of the validating webhook certificate PEM.
--validating-webhook-key
The path of the validating webhook key PEM.
--version
Show release information about the Ingress-Nginx Controller and exit.
--watch-ingress-without-class
Define if Ingress Controller should also watch for Ingresses without an IngressClass or the annotation specified. (default false)
--watch-namespace
Namespace the controller watches for updates to Kubernetes objects. This includes Ingresses, Services and all configuration resources. All namespaces are watched if this parameter is left empty.
--watch-namespace-selector
The controller will watch namespaces whose labels match the given selector. This flag only takes effect when --watch-namespace is empty.
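To make the "namespace/name:port" convention used by --tcp-services-configmap and --udp-services-configmap concrete, here is a sketch of a TCP services ConfigMap exposing external port 9000 (the Service name is hypothetical):
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services        # pass as --tcp-services-configmap=ingress-nginx/tcp-services
  namespace: ingress-nginx
data:
  # external port 9000 forwards to port 8080 of Service "example-go" in namespace "default"
  "9000": "default/example-go:8080"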
"},{"location":"user-guide/monitoring/","title":"Monitoring","text":"
default-http-backend ClusterIP 10.103.59.201 <none> 80/TCP
ingress-nginx NodePort 10.97.44.72 <none> 80:30100/TCP,443:30154/TCP,10254:32049/TCP 5h
prometheus-server NodePort 10.98.233.86 <none> 9090:32630/TCP 10m
grafana NodePort 10.98.233.87 <none> 3000:31086/TCP 10m
Open your browser and visit the following URL: http://{node IP address}:{grafana-svc-nodeport} to load the Grafana Dashboard. According to the above example, this URL will be http://10.192.0.3:31086
The username and password are both admin
After logging in, you can import the Grafana dashboard from the official dashboards by following the steps below:
Navigate to the left-hand panel of Grafana
Hover over the gearwheel icon for Configuration and click "Data Sources"
Click "Add data source"
Select "Prometheus"
Enter the details (note: I used http://CLUSTER_IP_PROMETHEUS_SVC:9090)
Left menu (hover over +) -> Dashboard
Click "Import"
Paste the JSON from https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/grafana/dashboards/nginx.json
By default request metrics are labeled with the hostname. When you have a wildcard domain ingress, there will be no metrics for that ingress (to prevent the metrics from exploding in cardinality). To get metrics in this case you have two options:
Run the ingress controller with --metrics-per-host=false. You will lose labeling by hostname, but still have labeling by ingress.
Run the ingress controller with --metrics-per-undefined-host=true --metrics-per-host=true. You will get labeling by hostname even if the hostname is not explicitly defined on an ingress. Be warned that cardinality could explode due to many hostnames.
If you want to expose the Grafana dashboard using an Ingress resource, then you can:
change the service type of the prometheus-server service and the grafana service to "ClusterIP", like this:
kubectl -n ingress-nginx edit svc grafana