For private clusters, you will need to either add an additional firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to ports 80/tcp, 443/tcp and 10254/tcp to also allow access to port 8443/tcp.
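For example, assuming a GKE private cluster, a dedicated rule can be created with a gcloud command along these lines (the rule name, network, node tag and master CIDR are placeholders you must adapt):

```
gcloud compute firewall-rules create allow-master-to-webhook \
  --network=<cluster-network> \
  --source-ranges=<master-ipv4-cidr> \
  --target-tags=<worker-node-tag> \
  --allow=tcp:8443
```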
| Annotation | Description | Values |
| --- | --- | --- |
| nginx.ingress.kubernetes.io/session-cookie-expires | Legacy version of the previous annotation for compatibility with older browsers, generates an Expires cookie directive by adding the seconds to the current date | |
| nginx.ingress.kubernetes.io/session-cookie-change-on-failure | When set to false, nginx ingress will send requests to the upstream pointed to by the sticky cookie even if the previous attempt failed. When set to true and the previous attempt failed, the sticky cookie will be changed to point to another upstream. | |
| nginx.ingress.kubernetes.io/affinity-canary-behavior | Defines session affinity behavior of canaries. By default the behavior is sticky, and canaries respect session affinity configuration. Set this to legacy to restore the original canary behavior, when session affinity parameters were not respected. | "sticky" (default) or "legacy" |
| nginx.ingress.kubernetes.io/session-cookie-name | Name of the cookie that will be created | string (defaults to INGRESSCOOKIE) |
| nginx.ingress.kubernetes.io/session-cookie-path | Path that will be set on the cookie (required if your Ingress paths use regular expressions) | |
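A minimal Ingress using these cookie-affinity annotations might look like the sketch below (the host, service name and cookie name are placeholders, not values from this document):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sticky-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"       # placeholder cookie name
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"   # example: 48 hours
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # placeholder service
            port:
              number: 80
```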
You have a domain name such as example.com that is configured to route traffic to the ingress controller. Replace references to fortune-teller.stack.build (the domain name used in this example) with your own domain name (you're also responsible for provisioning an SSL certificate for the ingress).
You have a backend application running a gRPC server and listening for TCP traffic. If you prefer, you can use the fortune-teller application provided here as an example.
We're terminating TLS at the ingress and have configured an SSL certificate for fortune-teller.stack.build. The ingress matches traffic arriving as https://fortune-teller.stack.build:443 and routes unencrypted messages to our Kubernetes service.
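The manifests for this example are not reproduced in this extract; a minimal sketch of such an Ingress, assuming the app is exposed by a Service named fortune-teller-service on port 50051 and the certificate lives in a secret named fortune-teller-secret (both names are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fortune-ingress
  annotations:
    # route HTTP/2 (gRPC) traffic to the backend
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  tls:
  - hosts:
    - fortune-teller.stack.build
    secretName: fortune-teller-secret   # assumed TLS secret name
  rules:
  - host: fortune-teller.stack.build
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: fortune-teller-service   # assumed service name
            port:
              number: 50051                # assumed gRPC port
```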
Once we've applied our configuration to Kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility:
```
$ grpcurl fortune-teller.stack.build:443 build.stack.fortune.FortuneTeller/Predict
{
  "message": "Let us endeavor so to live that when we come to die even the undertaker will be sorry.\n\t\t-- Mark Twain, \"Pudd'nhead Wilson's Calendar\""
}
```
You're also responsible for provisioning an SSL certificate for the ingress. So you need to have a valid SSL certificate, deployed as a Kubernetes secret of type tls, in the same namespace as the gRPC application.
Step 1: Create a Kubernetes Deployment for the gRPC app
Make sure your gRPC application pod is running and listening for connections. For example, you can check with a kubectl command like the one below:
```
$ kubectl get po -A -o wide | grep go-grpc-greeter-server
```
If you already have a gRPC app deployed in your cluster, skip the rest of Step 1 and continue from Step 2 below.
To create a container image for this app, you can use this Dockerfile.
If you use the Dockerfile mentioned above to create an image, then the example Kubernetes manifest below creates a Deployment resource that uses that image. If needed, edit this manifest to suit your needs. Assume the name of this YAML file is deployment.go-grpc-greeter-server.yaml:
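The original manifest is not reproduced in this extract; a minimal sketch (the image reference is a placeholder for the image you built above, and the container port assumes the server listens on 50051):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-grpc-greeter-server
  labels:
    app: go-grpc-greeter-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-grpc-greeter-server
  template:
    metadata:
      labels:
        app: go-grpc-greeter-server
    spec:
      containers:
      - name: go-grpc-greeter-server
        image: <registry>/go-grpc-greeter-server:latest  # placeholder image
        ports:
        - containerPort: 50051  # assumed gRPC listen port
```

Create the deployment:

```
$ kubectl apply -f deployment.go-grpc-greeter-server.yaml
```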
Step 2: Create the Kubernetes Service for the gRPC app

You can save an example manifest like the one sketched below to a file named service.go-grpc-greeter-server.yaml and edit it to match your deployment/pod, if required. You can then create the Service resource with the kubectl command shown after the manifest.
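A minimal sketch of such a Service (the target port assumes the greeter pod listens on 50051):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: go-grpc-greeter-server
spec:
  selector:
    app: go-grpc-greeter-server
  ports:
  - port: 80           # port the Ingress will route to
    targetPort: 50051  # assumed container port of the gRPC server
    protocol: TCP
```

Create it with:

```
$ kubectl apply -f service.go-grpc-greeter-server.yaml
```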
Step 3: Create the Kubernetes Ingress resource for the gRPC app
Use the following example manifest of an Ingress resource to create an Ingress for your gRPC app. If required, edit it to match your app's details like name, namespace, service, secret etc. Make sure you have the required SSL certificate in your Kubernetes cluster, in the same namespace where the gRPC app is. The certificate must be available as a Kubernetes secret resource of type "kubernetes.io/tls" (see https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets). This is because we are terminating TLS on the ingress:
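A sketch of such an Ingress, assuming the Service from Step 2 and a TLS secret named wildcard.dev.mydomain.com (hostnames follow the notes further below; the secret name is an assumption):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-grpc-greeter-server
  annotations:
    # route HTTP/2 (gRPC) traffic to the backend
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - grpctest.dev.mydomain.com
    secretName: wildcard.dev.mydomain.com  # assumed TLS secret name
  rules:
  - host: grpctest.dev.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: go-grpc-greeter-server
            port:
              number: 80  # matches the Service port above
```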
If you save the above example manifest as a file named ingress.go-grpc-greeter-server.yaml and edit it to match your deployment and service, you can create the ingress like this:
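For example:

```
$ kubectl apply -f ingress.go-grpc-greeter-server.yaml
```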
The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, gRPC traffic will travel unencrypted inside the cluster and arrive "insecure").
For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your Pod and terminate TLS at the gRPC server itself, add the ingress annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPCS".
A few more things to note:
We've tagged the ingress with the annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPC". This is the magic ingredient that sets up the appropriate nginx configuration to route http/2 traffic to our service.
We're terminating TLS at the ingress and have configured an SSL certificate wildcard.dev.mydomain.com. The ingress matches traffic arriving as https://grpctest.dev.mydomain.com:443 and routes unencrypted messages to the backend Kubernetes service.
Once we've applied our configuration to Kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility:
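For example (the fully-qualified service and method name depend on your application's protos; helloworld.Greeter/SayHello below is an assumption based on the standard gRPC greeter example):

```
$ grpcurl grpctest.dev.mydomain.com:443 helloworld.Greeter/SayHello
```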
If you are developing public gRPC endpoints, check out https://proto.stack.build, a protocol buffer / gRPC build service that you can use to help make it easier for your users to consume your API.
See also the specific GRPC settings of NGINX: https://nginx.org/en/docs/http/ngx_http_grpc_module.html
- If your service does only response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the grpc_read_timeout to accommodate this.
- If your service does only request streaming and you expect a stream to be open longer than 60 seconds, you will have to change the grpc_send_timeout and the client_body_timeout.
- If you do both response and request streaming with an open stream longer than 60 seconds, you will have to change all three timeouts: grpc_read_timeout, grpc_send_timeout and client_body_timeout.
Values for the timeouts must be specified as e.g. "1200s".
On the most recent versions of nginx-ingress, changing these timeouts requires using the nginx.ingress.kubernetes.io/server-snippet annotation. There are plans for future releases to allow using the Kubernetes annotations to define each timeout separately.
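A sketch of such an annotation on the Ingress (the timeout values are examples only):

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      grpc_read_timeout 3600s;
      grpc_send_timeout 3600s;
      client_body_timeout 3600s;
```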
You will need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, and that you have an ingress controller running in your cluster.
Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.
Note
Captured groups are saved in numbered placeholders, chronologically, in the form $1, $2 ... $n. These placeholders can be used as parameters in the rewrite-target annotation.
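The ingress definition discussed below is not reproduced in this extract; a sketch consistent with the rewrites listed afterwards (the service name is a placeholder):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      # $1 captures (/|$), $2 captures (.*)
      - path: /something(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: http-svc  # placeholder service
            port:
              number: 80
```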
In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $2, which is then used as a parameter in the rewrite-target annotation.
For example, the ingress definition above will result in the following rewrites:
rewrite.bar.com/something rewrites to rewrite.bar.com/
rewrite.bar.com/something/ rewrites to rewrite.bar.com/
rewrite.bar.com/something/new rewrites to rewrite.bar.com/new
The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload NGINX on changes that impact only an upstream configuration (i.e., Endpoints change when you deploy your app). We use lua-nginx-module to achieve this. Check below to learn more about how it's done.
Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. For this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps, to generate a point-in-time configuration file that reflects the state of the cluster.
To get these objects from the cluster, we use Kubernetes Informers, in particular FilteredSharedInformer. These informers allow reacting to changes using callbacks when an object is added, modified or removed. Unfortunately, there is no way to know whether a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of the cluster and compare it to the current model. If the new model equals the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so, we send the new list of Endpoints to a Lua handler running inside NGINX using an HTTP POST request, and again avoid generating a new NGINX configuration and triggering a reload. If the difference between the running and new model is about more than just Endpoints, we create a new NGINX configuration based on the new model, replace the current model and trigger a reload.
One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions.
The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.
Building a model is an expensive operation; for this reason, the use of the synchronization loop is a must. By using a work queue it is possible not to lose changes and to remove the use of sync.Mutex to force a single execution of the sync loop. Additionally, it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller, which is one of the reasons for the work queue.
Operations to build the model:

- Order Ingress rules by CreationTimestamp field, i.e., old rules first.
- If the same path for the same host is defined in more than one Ingress, the oldest rule wins.
- If more than one Ingress contains a TLS section for the same host, the oldest rule wins.
- If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins.
- Create a list of NGINX Servers (per hostname).
- Create a list of NGINX Upstreams.
- If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions.
- Annotations are applied to all the paths in the Ingress.
- Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.
In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, basically, new changes do not replace worker processes.
On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside NGINX. The Lua code in turn stores those backends in a shared memory zone. Then for every request, Lua code running in the balancer_by_lua context detects which endpoints it should choose the upstream peer from and applies the configured load balancing algorithm to choose the peer. Then NGINX takes care of the rest. This way we avoid reloading NGINX on endpoint changes. Note that this includes annotation changes that affect only upstream configuration in NGINX as well.
In a relatively big cluster with frequently deployed apps this feature saves a significant number of NGINX reloads, which can otherwise affect response latency, load balancing quality (after every reload NGINX resets the state of load balancing) and so on.
Because the ingress controller works using the synchronization loop pattern, it applies the configuration for all matching objects. In case some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, NGINX does not reload, and hence no more Ingresses will be taken into account.
To prevent this situation from happening, the nginx ingress controller optionally exposes a validating admission webhook server to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress objects to the list of ingresses, generates the configuration and calls nginx to ensure the configuration has no syntax errors.
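Concretely, the webhook is enabled with the controller's command line flags (documented in the table further below); a sketch of the relevant container args, with placeholder certificate paths:

```yaml
containers:
- name: controller
  args:
  - /nginx-ingress-controller
  - --validating-webhook=:8443
  - --validating-webhook-certificate=/usr/local/certificates/cert  # placeholder path
  - --validating-webhook-key=/usr/local/certificates/key           # placeholder path
```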
The following command line arguments are accepted by the Ingress controller executable.
They are set in the container spec of the nginx-ingress-controller Deployment manifest.

| Argument | Description |
| --- | --- |
| --add_dir_header | If true, adds the file directory to the header |
| --alsologtostderr | log to standard error as well as files |
| --annotations-prefix | Prefix of the Ingress annotations specific to the NGINX controller. (default "nginx.ingress.kubernetes.io") |
| --apiserver-host | Address of the Kubernetes API server. Takes the form "protocol://address:port". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted. |
| --certificate-authority | Path to a cert file for the certificate authority. This certificate is used only when the flag --apiserver-host is specified. |
| --configmap | Name of the ConfigMap containing custom global configurations for the controller. |
| --default-backend-service | Service used to serve HTTP requests not matching any known server name (catch-all). Takes the form "namespace/name". The controller configures NGINX to forward requests to the first port of this Service. |
| --default-server-port | Port to use for exposing the default server (catch-all). (default 8181) |
| --default-ssl-certificate | Secret containing a SSL certificate to be used by the default HTTPS server (catch-all). Takes the form "namespace/name". |
| --disable-catch-all | Disable support for catch-all Ingresses |
| --election-id | Election id to use for Ingress status updates. (default "ingress-controller-leader") |
| --enable-metrics | Enables the collection of NGINX metrics (default true) |
| --enable-ssl-chain-completion | Autocomplete SSL certificate chains with missing intermediate CA certificates. Certificates uploaded to Kubernetes must have the "Authority Information Access" X.509 v3 extension for this to succeed. |
| --enable-ssl-passthrough | Enable SSL Passthrough. |
| --health-check-path | URL path of the health check endpoint. Configured inside the NGINX status server. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default "/healthz") |
| --health-check-timeout | Time limit, in seconds, for a probe to health-check-path to succeed. (default 10) |
| --healthz-port | Port to use for the healthz endpoint. (default 10254) |
| --http-port | Port to use for servicing HTTP traffic. (default 80) |
| --https-port | Port to use for servicing HTTPS traffic. (default 443) |
| --ingress-class | Name of the ingress class this controller satisfies. The class of an Ingress object is set using the field IngressClassName in Kubernetes clusters version v1.18.0 or higher or the annotation "kubernetes.io/ingress.class" (deprecated). If this parameter is not set, or set to the default value of "nginx", it will handle ingresses with either an empty or "nginx" class name. |
| --kubeconfig | Path to a kubeconfig file containing authorization and API server information. |
| --log_backtrace_at | when logging hits line file:N, emit a stack trace (default :0) |
| --log_dir | If non-empty, write log files in this directory |
| --log_file | If non-empty, use this log file |
| --log_file_max_size | Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800) |
| --logtostderr | log to standard error instead of files (default true) |
| --maxmind-edition-ids | Maxmind edition ids to download GeoLite2 Databases. (default "GeoLite2-City,GeoLite2-ASN") |
| --maxmind-retries-timeout | Maxmind downloading delay between 1st and 2nd attempt, 0s - do not retry to download if something went wrong. (default 0s) |
| --maxmind-retries-count | Number of attempts to download the GeoIP DB. (default 1) |
| --maxmind-license-key | Maxmind license key to download GeoLite2 Databases. https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases |
| --metrics-per-host | Export metrics per-host (default true) |
| --profiler-port | Port to use for exposing the ingress controller Go profiler when it is enabled. (default 10245) |
| --profiling | Enable profiling via web interface host:port/debug/pprof/ (default true) |
| --publish-service | Service fronting the Ingress controller. Takes the form "namespace/name". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies. |
| --publish-status-address | Customized address (or addresses, separated by comma) to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter. |
| --report-node-internal-ip-address | Set the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter. |
| --skip_headers | If true, avoid header prefixes in the log messages |
| --skip_log_headers | If true, avoid headers when opening log files |
| --ssl-passthrough-proxy-port | Port to use internally for SSL Passthrough. (default 442) |
| --status-port | Port to use for the lua HTTP endpoint configuration. (default 10246) |
| --status-update-interval | Time interval in seconds in which the status should check if an update is required. (default 60) |
| --stderrthreshold | logs at or above this threshold go to stderr (default 2) |
| --stream-port | Port to use for the lua TCP/UDP endpoint configuration. (default 10247) |
| --sync-period | Period at which the controller forces the repopulation of its local object stores. Disabled by default. |
| --sync-rate-limit | Define the sync frequency upper limit (default 0.3) |
| --tcp-services-configmap | Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form "namespace/name:port", where "port" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic. |
| --udp-services-configmap | Name of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form "namespace/name:port", where "port" can either be a port name or number. |
| --update-status | Update the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true) |
| --update-status-on-shutdown | Update the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. (default true) |
| --shutdown-grace-period | Seconds to wait after receiving the shutdown signal, before stopping the nginx process. |
| -v, --v Level | number for the log level verbosity |
| --validating-webhook | The address to start an admission controller on to validate incoming ingresses. Takes the form ":port". If not provided, no admission controller is started. |
| --validating-webhook-certificate | The path of the validating webhook certificate PEM. |
| --validating-webhook-key | The path of the validating webhook key PEM. |
| --version | Show release information about the NGINX Ingress controller and exit. |
| --vmodule | comma-separated list of pattern=N settings for file-filtered logging |
| --watch-namespace | Namespace the controller watches for updates to Kubernetes objects. This includes Ingresses, Services and all configuration resources. All namespaces are watched if this parameter is left empty. |
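For example, a few of these flags set in the args of the controller container (the values are illustrative only):

```yaml
containers:
- name: nginx-ingress-controller
  args:
  - /nginx-ingress-controller
  - --ingress-class=nginx
  - --configmap=$(POD_NAMESPACE)/nginx-configuration  # assumes POD_NAMESPACE is set via env
  - --watch-namespace=$(POD_NAMESPACE)
  - --enable-metrics=true
```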
Regular expressions and wild cards are not supported in the spec.rules.host field. Full hostnames must be used.
The ingress controller supports case insensitive regular expressions in the spec.rules.http.paths.path field. This can be enabled by setting the nginx.ingress.kubernetes.io/use-regex annotation to true (the default is false).
Hint
Kubernetes only accept expressions that comply with the RE2 engine syntax. It is possible that valid expressions accepted by NGINX cannot be used with ingress-nginx, because the PCRE library (used in NGINX) supports a wider syntax than RE2. See the RE2 Syntax documentation for differences.
See the description of the use-regex annotation for more details.
In NGINX, regular expressions follow a first match policy. In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks.
Please read the warning before using regular expressions in your ingress definitions.
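The example the matches below refer to is not reproduced in this extract; a sketch consistent with them, with use-regex enabled and placeholder service names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: test.com
    http:
      paths:
      # written as location ~* "^/foo/bar" after ordering by descending length
      - path: /foo/bar
        pathType: ImplementationSpecific
        backend:
          service:
            name: service1  # placeholder
            port:
              number: 80
      # written as location ~* "^/foo/bar/"
      - path: /foo/bar/
        pathType: ImplementationSpecific
        backend:
          service:
            name: service2  # placeholder
            port:
              number: 80
      # written as location ~* "^/foo/bar/.+", the longest, so it is matched first
      - path: /foo/bar/.+
        pathType: ImplementationSpecific
        backend:
          service:
            name: service3  # placeholder
            port:
              number: 80
```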
The following request URIs would match the corresponding location blocks:
test.com/foo/bar/1 matches ~* ^/foo/bar/.+ and will go to service 3.
test.com/foo/bar/ matches ~* ^/foo/bar/ and will go to service 2.
test.com/foo/bar matches ~* ^/foo/bar and will go to service 1.
IMPORTANT NOTES:
If the use-regex OR rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.
The Prometheus server must be configured so that it can discover endpoints of services. If a Prometheus server is already running in the cluster and if it is configured in a way that it can find the ingress controller pods, no extra configuration is needed.
If there is no existing Prometheus server running, the rest of this tutorial will guide you through the steps needed to deploy a properly configured Prometheus server.
Running the following command deploys Prometheus in Kubernetes:
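The command itself is missing from this extract; a sketch, assuming the kustomize bases in the deploy folder of kubernetes/ingress-nginx mentioned below:

```
kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/
```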
The easiest way to configure the controller for metrics is via helm upgrade. Assuming you have installed the ingress-nginx controller as a helm release named ingresscontroller0, you can simply type the command shown below:
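A sketch of that upgrade, enabling the controller's metrics endpoint and Prometheus scrape annotations (the chart value keys are assumptions based on the ingress-nginx chart):

```
helm upgrade ingresscontroller0 ingress-nginx/ingress-nginx \
  --set controller.metrics.enabled=true \
  --set-string controller.podAnnotations."prometheus\.io/scrape"="true" \
  --set-string controller.podAnnotations."prometheus\.io/port"="10254"
```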
Note that the kustomize bases used in this tutorial are stored in the deploy folder of the GitHub repository kubernetes/ingress-nginx.
```
$ kubectl get svc -n ingress-nginx
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE
default-http-backend   ClusterIP   10.103.59.201   <none>        80/TCP                                       3d
ingress-nginx          NodePort    10.97.44.72     <none>        80:30100/TCP,443:30154/TCP,10254:32049/TCP   5h
prometheus-server      NodePort    10.98.233.86    <none>        9090:32630/TCP                               10m
grafana                NodePort    10.98.233.87    <none>        3000:31086/TCP                               10m
```
Open your browser and visit the following URL: http://{node IP address}:{grafana-svc-nodeport} to load the Grafana Dashboard. According to the above example, this URL will be http://10.192.0.3:31086
The default username and password are both admin.
After logging in, you can import the Grafana dashboard from the official dashboards.
By default request metrics are labeled with the hostname. When you have a wildcard domain ingress, then there will be no metrics for that ingress (to prevent the metrics from exploding in cardinality). To get metrics in this case you need to run the ingress controller with --metrics-per-host=false (you will lose labeling by hostname, but still have labeling by ingress).
You can add these Kubernetes annotations to specific Ingress objects to customize their behavior.
Tip
Annotation keys and values can only be strings. Other types, such as boolean or numeric values must be quoted, i.e. "true", "false", "100".
Note
The annotation prefix can be changed using the --annotations-prefix command line argument, but the default is nginx.ingress.kubernetes.io, as described in the table below.
In some cases, you may want to "canary" a new set of changes by sending a small number of requests to a different service than the production service. The canary annotation enables the Ingress spec to act as an alternative service for requests to route to depending on the rules applied. The following annotations to configure canary can be enabled after nginx.ingress.kubernetes.io/canary: "true" is set:
nginx.ingress.kubernetes.io/canary-by-header: The header to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to always, it will be routed to the canary. When the header is set to never, it will never be routed to the canary. For any other value, the header will be ignored and the request compared against the other canary rules by precedence.
nginx.ingress.kubernetes.io/canary-by-header-value: The header value to match for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the request header is set to this value, it will be routed to the canary. For any other header value, the header will be ignored and the request compared against the other canary rules by precedence. This annotation has to be used together with nginx.ingress.kubernetes.io/canary-by-header. The annotation is an extension of nginx.ingress.kubernetes.io/canary-by-header that allows customizing the header value instead of using hardcoded values. It doesn't have any effect if the nginx.ingress.kubernetes.io/canary-by-header annotation is not defined.
nginx.ingress.kubernetes.io/canary-by-header-pattern: This works the same way as canary-by-header-value except it does PCRE Regex matching. Note that when canary-by-header-value is set this annotation will be ignored. When the given Regex causes error during request processing, the request will be considered as not matching.
nginx.ingress.kubernetes.io/canary-by-cookie: The cookie to use for notifying the Ingress to route the request to the service specified in the Canary Ingress. When the cookie value is set to always, it will be routed to the canary. When the cookie is set to never, it will never be routed to the canary. For any other value, the cookie will be ignored and the request compared against the other canary rules by precedence.
nginx.ingress.kubernetes.io/canary-weight: The integer based (0 - 100) percent of random requests that should be routed to the service specified in the canary Ingress. A weight of 0 implies that no requests will be sent to the service in the Canary ingress by this canary rule. A weight of 100 implies all requests will be sent to the alternative service specified in the Ingress.
Canary rules are evaluated in order of precedence. Precedence is as follows: canary-by-header -> canary-by-cookie -> canary-weight
Note that when you mark an ingress as canary, then all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except nginx.ingress.kubernetes.io/load-balance, nginx.ingress.kubernetes.io/upstream-hash-by, and annotations related to session affinity. If you want to restore the original behavior of canaries when session affinity was ignored, set nginx.ingress.kubernetes.io/affinity-canary-behavior annotation with value legacy on the non-canary ingress definition.
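Putting the canary annotations together, a second Ingress marked as canary might look like this sketch (the names and host are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-canary  # placeholder
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"  # 10% of requests go to the canary
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-canary-svc  # placeholder canary service
            port:
              number: 80
```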
Known Limitations
Currently a maximum of one canary ingress can be applied per Ingress rule.
In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. Set the annotation nginx.ingress.kubernetes.io/rewrite-target to the path expected by the service.
If the Application Root is exposed in a different path and needs to be redirected, set the annotation nginx.ingress.kubernetes.io/app-root to redirect requests for /.
The annotation nginx.ingress.kubernetes.io/affinity enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server. The only affinity type available for NGINX is cookie.
The annotation nginx.ingress.kubernetes.io/affinity-mode defines the stickiness of a session. Setting this to balanced (default) will redistribute some sessions if a deployment gets scaled up, therefore rebalancing the load on the servers. Setting this to persistent will not rebalance sessions to new servers, therefore providing maximum stickiness.
The annotation nginx.ingress.kubernetes.io/affinity-canary-behavior defines the behavior of canaries when session affinity is enabled. Setting this to sticky (default) will ensure that users that were served by canaries, will continue to be served by canaries. Setting this to legacy will restore original canary behavior, when session affinity was ignored.
Attention
If more than one Ingress is defined for a host and at least one Ingress uses nginx.ingress.kubernetes.io/affinity: cookie, then only paths on the Ingress using nginx.ingress.kubernetes.io/affinity will use session cookie affinity. All paths defined on other Ingresses for the host will be load balanced through the random selection of a backend server.
If you use the cookie affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation nginx.ingress.kubernetes.io/session-cookie-name. The default is to create a cookie named 'INGRESSCOOKIE'.
The NGINX annotation nginx.ingress.kubernetes.io/session-cookie-path defines the path that will be set on the cookie. This is optional unless the annotation nginx.ingress.kubernetes.io/use-regex is set to true; Session cookie paths do not support regex.
Use nginx.ingress.kubernetes.io/session-cookie-samesite to apply a SameSite attribute to the sticky cookie. Browser accepted values are None, Lax, and Strict. Some browsers reject cookies with SameSite=None, including those created before the SameSite=None specification (e.g. Chrome 5X). Other browsers mistakenly treat SameSite=None cookies as SameSite=Strict (e.g. Safari running on OSX 14). To omit SameSite=None from browsers with these incompatibilities, add the annotation nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none: "true".
It is possible to add authentication by adding additional annotations in the Ingress rule. The source of the authentication is a secret that contains usernames and passwords.
The name of the Secret that contains the usernames and passwords which are granted access to the paths defined in the Ingress rules. This annotation also accepts the alternative form "namespace/secretName", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.
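For example, after creating a secret from an htpasswd file, the relevant annotations look like this sketch (the secret name basic-auth and the realm text are placeholders):

```yaml
metadata:
  annotations:
    # type of authentication
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
```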
NGINX supports load balancing by client-server mapping based on consistent hashing for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The ketama consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes.
There is a special mode of upstream hashing called subset. In this mode, upstream servers are grouped into subsets, and stickiness works by mapping keys to a subset instead of individual upstream servers. A specific server is then chosen uniformly at random from the selected sticky subset. It provides a balance between stickiness and load distribution.
To enable consistent hashing for a backend:
nginx.ingress.kubernetes.io/upstream-hash-by: the nginx variable, text value or any combination thereof to use for consistent hashing. For example: nginx.ingress.kubernetes.io/upstream-hash-by: "$request_uri" or nginx.ingress.kubernetes.io/upstream-hash-by: "$request_uri$host" or nginx.ingress.kubernetes.io/upstream-hash-by: "${request_uri}-text-value" to consistently hash upstream requests by the current request URI.
"subset" hashing can be enabled setting nginx.ingress.kubernetes.io/upstream-hash-by-subset: "true". This maps requests to subset of nodes instead of a single one. upstream-hash-by-subset-size determines the size of each subset (default 3).
Note that nginx.ingress.kubernetes.io/upstream-hash-by takes preference over this. If this and nginx.ingress.kubernetes.io/upstream-hash-by are not set then we fallback to using globally configured load balancing algorithm.
The annotation nginx.ingress.kubernetes.io/upstream-vhost allows you to control the value for host in the following statement: proxy_set_header Host $host, which forms part of the location block. This is useful if you need to call the upstream server by something other than $host.
It is possible to enable Client Certificate Authentication using additional annotations in Ingress Rule.
Client Certificate Authentication is applied per host and it is not possible to specify rules that differ for individual paths.
The annotations are:
nginx.ingress.kubernetes.io/auth-tls-secret: secretName: The name of the Secret that contains the full Certificate Authority chain ca.crt that is enabled to authenticate against this Ingress. This annotation expects the Secret name in the form "namespace/secretName".
nginx.ingress.kubernetes.io/auth-tls-verify-depth: The validation depth between the provided client certificate and the Certification Authority chain.
nginx.ingress.kubernetes.io/auth-tls-verify-client: Enables verification of client certificates. Possible values are:
off: Don't request client certificates and don't do client certificate verification. (default)
on: Request a client certificate that must be signed by a certificate that is included in the secret key ca.crt of the secret specified by nginx.ingress.kubernetes.io/auth-tls-secret: secretName. Failed certificate verification will result in a status code 400 (Bad Request).
optional: Do optional client certificate validation against the CAs from auth-tls-secret. The request fails with status code 400 (Bad Request) when a certificate is provided that is not signed by the CA. When no or an otherwise invalid certificate is provided, the request does not fail, but instead the verification result is sent to the upstream service.
optional_no_ca: Do optional client certificate validation, but do not fail the request when the client certificate is not signed by the CAs from auth-tls-secret. Certificate verification result is sent to the upstream service.
nginx.ingress.kubernetes.io/auth-tls-error-page: The URL/page the user should be redirected to in case of a certificate authentication error.
nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: Indicates whether the received certificates should be passed to the upstream server in the header ssl-client-cert. Possible values are "true" or "false" (default).
The following headers are sent to the upstream service according to the auth-tls-* annotations:
ssl-client-issuer-dn: The issuer information of the client certificate. Example: "CN=My CA"
ssl-client-subject-dn: The subject information of the client certificate. Example: "CN=My Client"
ssl-client-verify: The result of the client verification. Possible values: "SUCCESS", "FAILED: <error reason>"
ssl-client-cert: The full client certificate in PEM format. Will only be sent when nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream is set to "true". Example: -----BEGIN%20CERTIFICATE-----%0A...---END%20CERTIFICATE-----%0A
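A minimal sketch tying the auth-tls-* annotations together; the Secret default/ca-secret, the TLS secret and the host are placeholders:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: client-cert-auth
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - mtls.example.com
    secretName: tls-secret
  rules:
  - host: mtls.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mtls-service
            port:
              number: 80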
It is possible to authenticate to a proxied HTTPS backend with a client certificate using additional annotations in the Ingress rule.
nginx.ingress.kubernetes.io/proxy-ssl-secret: secretName: Specifies a Secret with the certificate tls.crt, key tls.key in PEM format used for authentication to a proxied HTTPS server. It should also contain trusted CA certificates ca.crt in PEM format used to verify the certificate of the proxied HTTPS server. This annotation expects the Secret name in the form "namespace/secretName".
nginx.ingress.kubernetes.io/proxy-ssl-verify: Enables or disables verification of the proxied HTTPS server certificate. (default: off)
nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: Sets the verification depth in the proxied HTTPS server certificates chain. (default: 1)
nginx.ingress.kubernetes.io/proxy-ssl-ciphers: Specifies the enabled ciphers for requests to a proxied HTTPS server. The ciphers are specified in the format understood by the OpenSSL library.
nginx.ingress.kubernetes.io/proxy-ssl-name: Allows setting proxy_ssl_name. This allows overriding the server name used to verify the certificate of the proxied HTTPS server. This value is also passed through SNI when a connection is established to the proxied HTTPS server.
nginx.ingress.kubernetes.io/proxy-ssl-protocols: Enables the specified protocols for requests to a proxied HTTPS server.
nginx.ingress.kubernetes.io/proxy-ssl-server-name: Enables passing of the server name through TLS Server Name Indication extension (SNI, RFC 6066) when establishing a connection with the proxied HTTPS server.
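An illustrative annotations fragment (the Secret and server names are placeholders):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-ssl-secret: "default/proxy-ssl-secret"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "2"
    nginx.ingress.kubernetes.io/proxy-ssl-name: "backend.example.com"
    nginx.ingress.kubernetes.io/proxy-ssl-server-name: "on"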
Like the custom-http-errors value in the ConfigMap, this annotation will set the NGINX proxy_intercept_errors directive, but only for the NGINX location associated with this ingress. If a default backend annotation is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend). Different ingresses can specify different sets of error codes. Even if multiple ingress objects share the same hostname, this annotation can be used to intercept different error codes for each ingress (for example, different error codes to be intercepted for different paths on the same hostname, if each path is on a different ingress). If custom-http-errors is also specified globally, the error values specified in this annotation will override the global value for the given ingress' hostname and path.
This annotation is of the form nginx.ingress.kubernetes.io/default-backend: <svc name> to specify a custom default backend. This <svc name> is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend.
This service will handle the response when the service in the Ingress rule does not have active endpoints. It will also handle the error responses if both this annotation and the custom-http-errors annotation are set.
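A sketch combining the two annotations (the error codes and the error-pages Service are illustrative):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/custom-http-errors: "404,503"
    nginx.ingress.kubernetes.io/default-backend: error-pages

With this fragment, 404 and 503 responses from the backend are intercepted and served by the error-pages Service in the same namespace.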
To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation nginx.ingress.kubernetes.io/enable-cors: "true". This will add a section in the server location enabling this functionality.
CORS can be controlled with the following annotations:
nginx.ingress.kubernetes.io/cors-allow-methods controls which methods are accepted. This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case).
Default: GET, PUT, POST, DELETE, PATCH, OPTIONS
Example: nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-headers controls which headers are accepted. This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -.
nginx.ingress.kubernetes.io/cors-expose-headers controls which headers are exposed in the response. This is a multi-valued field, separated by ',' and accepts letters, numbers, _, - and *.
nginx.ingress.kubernetes.io/cors-allow-origin controls the accepted origin for CORS requests. This is a single-value field, with the following format: http(s)://origin-site.com or http(s)://origin-site.com:port
nginx.ingress.kubernetes.io/cors-max-age controls how long preflight requests can be cached. Default: 1728000. Example: nginx.ingress.kubernetes.io/cors-max-age: 600
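Putting the CORS annotations together, a sketch (the origin and header names are illustrative):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-headers: "X-Custom-Header,Content-Type"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://origin-site.com"
    nginx.ingress.kubernetes.io/cors-max-age: "600"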
Allows the definition of one or more aliases in the server definition of the NGINX configuration using the annotation nginx.ingress.kubernetes.io/server-alias: "<alias 1>,<alias 2>". This will create a server with the same configuration, but adding new values to the server_name directive.
Note
A server-alias name cannot conflict with the hostname of an existing server. If it does, the server-alias annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take precedence over the alias configuration.
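For example, assuming the aliases below do not collide with an existing server's hostname:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-alias: "www.example.com,api.example.com"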
ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.
The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system components for the nginx-controller.
In order to overwrite nginx-controller configuration values as seen in config.go, you can add key-value pairs to the data section of the config-map. For example:
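A minimal sketch; the ConfigMap name, namespace and sample keys are illustrative, not prescribed by this document:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  map-hash-bucket-size: "128"
  ssl-protocols: "TLSv1.2 TLSv1.3"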
The keys and values in a ConfigMap can only be strings. This means that if we want a value with a boolean meaning we need to quote it, like "true" or "false". The same applies to numbers, like "100".
"Slice" types (defined below as []string or []int) can be provided as a comma-delimited string.
Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value "true".
Sets the maximum number of requests (including push requests) that can be served through one HTTP/2 connection, after which the next client request will lead to connection closing and the need of establishing a new connection.
Enables or disables the HSTS header in servers running SSL. HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (an HTTP header) that tells browsers that the site should only be accessed using HTTPS instead of HTTP. It provides protection against protocol downgrade attacks and cookie theft.
If disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections at a time. default: true
Sets the maximum number of files that can be opened by each worker process. The default of 0 means "max open files (system's limit) - 1024". default: 0
If use-forwarded-headers or use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default IP/network address of your external load balancer. Can be a comma-separated list of CIDR blocks. default: "0.0.0.0/0"
Instructs NGINX to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option), allowing a kernel to distribute incoming connections between worker processes. default: true
Activates plugins installed in /etc/nginx/lua/plugins. Refer to ingress-nginx plugins README for more information on how to write and install a plugin.
Sets the ciphers list to enable. The ciphers are specified in the format understood by the OpenSSL library.
The default cipher list is: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384.
The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.
Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string. To create a ticket: openssl rand 80 | openssl enc -A -base64
Enables or disables the PROXY protocol to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).
Sets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection. default: 5s
Enables or disables "geoip" module that creates variables with values depending on the client IP address, using the precompiled MaxMind databases. default: true
Note: MaxMind legacy databases are discontinued and will not receive updates after 2019-01-02, cf. discontinuation notice. Consider use-geoip2 below.
Enables the geoip2 module for NGINX. Since 0.27.0 and due to a change in the MaxMind databases a license is required to have access to the databases. For this reason, it is required to define a new flag --maxmind-license-key in the ingress controller deployment to download the databases needed during the initialization of the ingress controller. Alternatively, it is possible to use a volume to mount the files /etc/nginx/geoip/GeoLite2-City.mmdb and /etc/nginx/geoip/GeoLite2-ASN.mmdb, avoiding the overhead of the download.
Important
If the feature is enabled but the files are missing, GeoIP2 will not be enabled.
Enables or disables compression of HTTP responses using the "brotli" module. The default mime type list to compress is: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component. default: is disabled
Sets the MIME types in addition to "text/html" to compress. The special value "*" matches any MIME type. Responses with the "text/html" type are always compressed if use-gzip is enabled. default: application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component.
Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. default: 320
Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed. default: 10000
Sets parameters for a shared memory zone that will keep states for various keys of limit_conn_zone. The default key is "$binary_remote_addr", whose size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.
Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.
Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.
If true, NGINX passes the incoming X-Forwarded-* headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers.
If false, NGINX ignores incoming X-Forwarded-* headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or it's behind a L3/packet-based load balancer that doesn't alter the source IP in the packets.
Append the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.
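As a hedged sketch of these settings in the controller ConfigMap's data section (use-forwarded-headers and proxy-real-ip-cidr are named above; the key compute-full-forwarded-for is an assumption for the append-to-X-Forwarded-For setting, and the CIDR is a placeholder):

data:
  use-forwarded-headers: "true"
  compute-full-forwarded-for: "true"
  proxy-real-ip-cidr: "10.0.0.0/8"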
Specifies the argument to be passed to the sampler constructor. Must be a number. For const this should be 0 to never sample and 1 to always sample. default: 1
Specifies the custom remote sampler host to be passed to the sampler constructor. Must be a valid URL. Leave blank to use default value (localhost). default: http://127.0.0.1
Specifies whether to use priority sampling instead of client-side sampling. If true, client-side sampling is disabled (sample_rate is ignored) and distributed priority sampling is enabled, where traces are sampled based on a combination of user-assigned priorities and configuration from the agent. default: true
Adds custom configuration to all the locations in the nginx configuration.
You cannot use this to add new locations that proxy to the Kubernetes pods, as the snippet does not have access to the Go template functions. If you want to add custom locations you will have to provide your own nginx.tmpl.
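In the stock ConfigMap this snippet is typically the location-snippet key (named here as an assumption, since the text above does not name the key); a sketch that adds a response header to every location:

data:
  location-snippet: |
    add_header X-Served-By "ingress-nginx";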
Sets the timeout in seconds for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response.
Sets the timeout in seconds for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request.
Sets the number of buffers used for reading the first part of the response received from the proxied server. This part usually contains a small response header.
Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header.
Sets a list of URLs that should not appear in the NGINX access log. This is useful with URLs like /health or health-check that make reading the logs harder. default: is empty
Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice the specified limit.
Sets the HTTP status code to be used in redirects. Supported codes are 301, 302, 307 and 308. default: 308
Why is the default code 308?
RFC 7238 was created to define the 308 (Permanent Redirect) status code, which is similar to 301 (Moved Permanently) but keeps the payload in the redirect. This is important if we send a redirect for methods like POST.
A comma-separated list of locations on which http requests will never get redirected to their https counterpart. default: "/.well-known/acme-challenge"
A URL to an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-url. Locations that should not get authenticated can be listed using no-auth-locations. In addition, each service can be excluded from authentication via the annotation enable-global-auth set to "false". default: ""
An HTTP method to use for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-method. default: ""
Sets the location of the error page for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-signin. default: ""
Sets the query parameter in the error page signin URL which contains the original URL of the request that failed authentication. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-signin-redirect-param. default: "rd"
Sets the headers to pass to backend once authentication request completes. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-response-headers. default: ""
Sets the X-Auth-Request-Redirect header value. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-request-redirect. default: ""
Sets a custom snippet to use with external authentication. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-snippet. default: ""
Set a caching time for auth responses based on their response codes, e.g. 200 202 30m. See proxy_cache_valid for details. You may specify multiple, comma-separated values: 200 202 10m, 401 5m. defaults to 200 202 401 5m.
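A sketch of these global-auth settings as ConfigMap data; the key names below (global-auth-url, global-auth-signin, global-auth-cache-duration) are assumptions mirroring the annotation names referenced above, and the URLs are placeholders:

data:
  global-auth-url: "https://auth.example.com/validate"
  global-auth-signin: "https://auth.example.com/signin"
  global-auth-cache-duration: "200 202 10m, 401 5m"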
A comma-separated list of User-Agents; requests from these will be blocked globally. Both full strings and regular expressions can be used. More details about valid patterns can be found in the map Nginx directive documentation.
A comma-separated list of Referers; requests from these will be blocked globally. Both full strings and regular expressions can be used. More details about valid patterns can be found in the map Nginx directive documentation.
global-rate-limit-memcached-host: IP/FQDN of memcached server to use. Required to enable Global Rate Limiting.
global-rate-limit-memcached-port: port of the memcached server to use. Defaults to the standard memcached port of 11211.
global-rate-limit-memcached-connect-timeout: configure timeout for connect, send and receive operations. Unit is millisecond. Defaults to 50ms.
global-rate-limit-memcached-max-idle-timeout: configure timeout for cleaning idle connections. Unit is millisecond. Defaults to 50ms.
global-rate-limit-memcached-pool-size: configure number of max connections to keep alive. Make sure your memcached server can handle global-rate-limit-memcached-pool-size * worker-processes * <number of ingress-nginx replicas> simultaneous connections.
These settings get used by lua-resty-global-throttle that ingress-nginx includes. Refer to the link to learn more about lua-resty-global-throttle.
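A sketch of the corresponding ConfigMap entries; the memcached address and pool size are placeholders:

data:
  global-rate-limit-memcached-host: "memcached.default.svc.cluster.local"
  global-rate-limit-memcached-port: "11211"
  global-rate-limit-memcached-pool-size: "50"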