For private clusters, you will need to either add an additional firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to ports 80/tcp, 443/tcp and 10254/tcp to also allow access to port 8443/tcp.
You're also responsible for provisioning an SSL certificate for the ingress. So you need to have a valid SSL certificate, deployed as a Kubernetes secret of type tls, in the same namespace as the gRPC application.
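For example, if you already have the certificate and key on disk, such a secret can be created with a command along these lines (the secret name, file names and namespace are placeholders):

```console
$ kubectl create secret tls wildcard.dev.mydomain.com --cert=cert.pem --key=key.pem -n default
```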
Step 1: Create a Kubernetes Deployment for the gRPC app
Make sure your gRPC application pod is running and listening for connections. For example, you can check with a kubectl command like the one below:
$ kubectl get po -A -o wide | grep go-grpc-greeter-server
If you already have a gRPC app deployed in your cluster, skip the rest of this step and continue from Step 2 below.
To create a container image for this app, you can use this Dockerfile.
If you use the Dockerfile mentioned above to build an image, then the example below shows a Kubernetes manifest for a Deployment resource that uses that image. Edit this manifest to suit your needs if required. This walkthrough assumes the manifest is saved to a file named deployment.go-grpc-greeter-server.yaml.
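A minimal sketch of such a Deployment, assuming the image you built has been pushed to a registry of your choice and that the greeter server listens on port 50051:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-grpc-greeter-server
  labels:
    app: go-grpc-greeter-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-grpc-greeter-server
  template:
    metadata:
      labels:
        app: go-grpc-greeter-server
    spec:
      containers:
      - name: go-grpc-greeter-server
        image: registry.example.com/go-grpc-greeter-server:latest  # placeholder: use the image you built
        ports:
        - containerPort: 50051  # assumed gRPC listen port
```

You can then create the Deployment with a command like kubectl create -f deployment.go-grpc-greeter-server.yaml.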
Step 2: Create the Kubernetes Service for the gRPC app
Save an example manifest like the one below to a file named service.go-grpc-greeter-server.yaml and edit it to match your deployment/pod, if required.
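A minimal sketch of such a Service, assuming the Deployment above and its port 50051:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: go-grpc-greeter-server
spec:
  type: ClusterIP
  selector:
    app: go-grpc-greeter-server
  ports:
  - name: grpc
    port: 50051
    targetPort: 50051
```

You can create the Service resource with a kubectl command like this:

```console
$ kubectl create -f service.go-grpc-greeter-server.yaml
```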
Step 3: Create the Kubernetes Ingress resource for the gRPC app
Use the following example manifest of an Ingress resource to create an ingress for your gRPC app. If required, edit it to match your app's details (name, namespace, service, secret, etc.). Make sure you have the required SSL certificate in your Kubernetes cluster, in the same namespace where the gRPC app is. The certificate must be available as a Kubernetes secret resource of type "kubernetes.io/tls" (see https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets). This is because we are terminating TLS on the ingress.
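A minimal sketch, assuming the Service above, the host grpctest.dev.mydomain.com and a TLS secret named wildcard.dev.mydomain.com (all of which you should replace with your own values):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-grpc-greeter-server
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"  # route HTTP/2 (gRPC) traffic to the backend
spec:
  ingressClassName: nginx
  rules:
  - host: grpctest.dev.mydomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: go-grpc-greeter-server
            port:
              number: 50051
  tls:
  - hosts:
    - grpctest.dev.mydomain.com
    secretName: wildcard.dev.mydomain.com
```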
If you save the above example manifest as a file named ingress.go-grpc-greeter-server.yaml and edit it to match your deployment and service, you can create the ingress like this:
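```console
$ kubectl create -f ingress.go-grpc-greeter-server.yaml
```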
The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, gRPC traffic will travel unencrypted inside the cluster and arrive "insecure").
For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your POD and terminate TLS at the gRPC server itself, add the ingress annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPCS".
A few more things to note:
We've tagged the ingress with the annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPC". This is the magic ingredient that sets up the appropriate nginx configuration to route http/2 traffic to our service.
We're terminating TLS at the ingress and have configured an SSL certificate wildcard.dev.mydomain.com. The ingress matches traffic arriving as https://grpctest.dev.mydomain.com:443 and routes unencrypted messages to the backend Kubernetes service.
You have a domain name such as example.com that is configured to route traffic to the ingress controller. Replace references to fortune-teller.stack.build (the domain name used in this example) with your own domain name (you're also responsible for provisioning an SSL certificate for the ingress).
You have a backend application running a gRPC server and listening for TCP traffic. If you prefer, you can use the fortune-teller application provided here as an example.
Once we've applied our configuration to Kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility:
$ grpcurl fortune-teller.stack.build:443 build.stack.fortune.FortuneTeller/Predict
{
  "message": "Let us endeavor so to live that when we come to die even the undertaker will be sorry.\n\t\t-- Mark Twain, \"Pudd'nhead Wilson's Calendar\""
}
If you are developing public gRPC endpoints, check out https://proto.stack.build, a protocol buffer / gRPC build service that you can use to make it easier for your users to consume your API.
See also the specific GRPC settings of NGINX: https://nginx.org/en/docs/http/ngx_http_grpc_module.html
If your server does only response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the grpc_read_timeout to accommodate this.
If your service does only request streaming and you expect a stream to be open longer than 60 seconds, you have to change the grpc_send_timeout and the client_body_timeout.
If you do both response and request streaming with an open stream longer than 60 seconds, you have to change all three timeouts: grpc_read_timeout, grpc_send_timeout and client_body_timeout.
Values for the timeouts must be specified as e.g. "1200s".
On the most recent versions of nginx-ingress, changing these timeouts requires using the nginx.ingress.kubernetes.io/server-snippet annotation. There are plans for future releases to allow using the Kubernetes annotations to define each timeout separately.
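For example, a server-snippet annotation along the following lines could raise all three timeouts (the 3600s value is illustrative; pick values that match your streams):

```yaml
nginx.ingress.kubernetes.io/server-snippet: |
  grpc_read_timeout 3600s;
  grpc_send_timeout 3600s;
  client_body_timeout 3600s;
```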
You will need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, and that you have an ingress controller running in your cluster.
Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.
Note
Captured groups are saved in numbered placeholders, chronologically, in the form $1, $2 ... $n. These placeholders can be used as parameters in the rewrite-target annotation.
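As an illustration, consider an Ingress like the following sketch (the service name and port are assumptions; the rewrite examples below assume this definition):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - path: /something(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: http-svc
            port:
              number: 80
```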
In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $2, which is then used as a parameter in the rewrite-target annotation.
For example, the ingress definition above will result in the following rewrites:
rewrite.bar.com/something rewrites to rewrite.bar.com/
rewrite.bar.com/something/ rewrites to rewrite.bar.com/
rewrite.bar.com/something/new rewrites to rewrite.bar.com/new
The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. It is important to note, though, that we don't reload NGINX on changes that impact only an upstream configuration (i.e. Endpoints changes when you deploy your app). We use lua-nginx-module to achieve this. Check below to learn more about how it's done.
Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. For this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps, to generate a point-in-time configuration file that reflects the state of the cluster.
To get these objects from the cluster, we use Kubernetes Informers, in particular, FilteredSharedInformer. These informers allow reacting to changes using callbacks when a new object is added, modified or removed. Unfortunately, there is no way to know whether a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of the cluster and compare it to the current model. If the new model equals the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check whether the difference is only about Endpoints. If so, we send the new list of Endpoints to a Lua handler running inside Nginx using an HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between the running and new model is about more than just Endpoints, we create a new NGINX configuration based on the new model, replace the current model and trigger a reload.
One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions.
The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.
Building a model is an expensive operation; for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to not lose changes, to remove the use of sync.Mutex (forcing a single execution of the sync loop), and to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller; this is one of the reasons for the work queue.
Operations to build the model:
Order Ingress rules by CreationTimestamp field, i.e., old rules first.
If the same path for the same host is defined in more than one Ingress, the oldest rule wins.
If more than one Ingress contains a TLS section for the same host, the oldest rule wins.
If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins.
Create a list of NGINX Servers (per hostname)
Create a list of NGINX Upstreams
If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions.
Annotations are applied to all the paths in the Ingress.
Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.
In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, so that applying new changes does not require replacing worker processes.
On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request the Lua code running in the balancer_by_lua context detects which endpoints it should choose an upstream peer from and applies the configured load balancing algorithm to choose the peer. Nginx then takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this also covers annotation changes that affect only the upstream configuration in Nginx.
In a relatively big cluster with frequently deploying apps this feature saves a significant number of Nginx reloads, which can otherwise affect response latency, load balancing quality (after every reload Nginx resets the state of load balancing) and so on.
Because the ingress controller works using the synchronization loop pattern, it applies the configuration for all matching objects. In case some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, does not reload, and hence no more Ingresses will be taken into account.
To prevent this situation from happening, the nginx ingress controller optionally exposes a validating admission webhook server to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress objects to the list of ingresses, generates the configuration and calls nginx to ensure the configuration has no syntax errors.
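A sketch of how this is typically wired into the controller's container args, using the validating-webhook flags documented below (the port and certificate paths are assumptions):

```yaml
args:
- /nginx-ingress-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
```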
The following command line arguments are accepted by the Ingress controller executable. They are set in the container spec of the nginx-ingress-controller Deployment manifest.

| Argument | Description |
| --- | --- |
| --add_dir_header | If true, adds the file directory to the header |
| --alsologtostderr | log to standard error as well as files |
| --annotations-prefix | Prefix of the Ingress annotations specific to the NGINX controller. (default "nginx.ingress.kubernetes.io") |
| --apiserver-host | Address of the Kubernetes API server. Takes the form "protocol://address:port". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted. |
| --certificate-authority | Path to a cert file for the certificate authority. This certificate is used only when the flag --apiserver-host is specified. |
| --configmap | Name of the ConfigMap containing custom global configurations for the controller. |
| --default-backend-service | Service used to serve HTTP requests not matching any known server name (catch-all). Takes the form "namespace/name". The controller configures NGINX to forward requests to the first port of this Service. |
| --default-server-port | Port to use for exposing the default server (catch-all). (default 8181) |
| --default-ssl-certificate | Secret containing an SSL certificate to be used by the default HTTPS server (catch-all). Takes the form "namespace/name". |
| --disable-catch-all | Disable support for catch-all Ingresses |
| --election-id | Election id to use for Ingress status updates. (default "ingress-controller-leader") |
| --enable-metrics | Enables the collection of NGINX metrics (default true) |
| --enable-ssl-chain-completion | Autocomplete SSL certificate chains with missing intermediate CA certificates. Certificates uploaded to Kubernetes must have the "Authority Information Access" X.509 v3 extension for this to succeed. |
| --enable-ssl-passthrough | Enable SSL Passthrough. |
| --health-check-path | URL path of the health check endpoint. Configured inside the NGINX status server. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default "/healthz") |
| --health-check-timeout | Time limit, in seconds, for a probe to health-check-path to succeed. (default 10) |
| --healthz-port | Port to use for the healthz endpoint. (default 10254) |
| --http-port | Port to use for servicing HTTP traffic. (default 80) |
| --https-port | Port to use for servicing HTTPS traffic. (default 443) |
| --ingress-class | Name of the ingress class this controller satisfies. The class of an Ingress object is set using the field IngressClassName in Kubernetes clusters version v1.18.0 or higher, or the annotation "kubernetes.io/ingress.class" (deprecated). If this parameter is not set, or is set to the default value of "nginx", it will handle ingresses with either an empty or "nginx" class name. |
| --kubeconfig | Path to a kubeconfig file containing authorization and API server information. |
| --log_backtrace_at | when logging hits line file:N, emit a stack trace (default :0) |
| --log_dir | If non-empty, write log files in this directory |
| --log_file | If non-empty, use this log file |
| --log_file_max_size | Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800) |
| --logtostderr | log to standard error instead of files (default true) |
| --maxmind-edition-ids | Maxmind edition ids to download GeoLite2 Databases. (default "GeoLite2-City,GeoLite2-ASN") |
| --maxmind-retries-timeout | Maxmind download delay between the first and second attempts; 0s means do not retry if the download fails. (default 0s) |
| --maxmind-retries-count | Number of attempts to download the GeoIP DB. (default 1) |
| --maxmind-license-key | Maxmind license key to download GeoLite2 Databases. https://blog.maxmind.com/2019/12/18/significant-changes-to-accessing-and-using-geolite2-databases |
| --metrics-per-host | Export metrics per-host (default true) |
| --profiler-port | Port to use for exposing the ingress controller Go profiler when it is enabled. (default 10245) |
| --profiling | Enable profiling via web interface host:port/debug/pprof/ (default true) |
| --publish-service | Service fronting the Ingress controller. Takes the form "namespace/name". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies. |
| --publish-status-address | Customized address (or addresses, separated by comma) to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter. |
| --report-node-internal-ip-address | Set the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter. |
| --skip_headers | If true, avoid header prefixes in the log messages |
| --skip_log_headers | If true, avoid headers when opening log files |
| --ssl-passthrough-proxy-port | Port to use internally for SSL Passthrough. (default 442) |
| --status-port | Port to use for the lua HTTP endpoint configuration. (default 10246) |
| --status-update-interval | Time interval, in seconds, in which the status should check if an update is required. (default 60) |
| --stderrthreshold | logs at or above this threshold go to stderr (default 2) |
| --stream-port | Port to use for the lua TCP/UDP endpoint configuration. (default 10247) |
| --sync-period | Period at which the controller forces the repopulation of its local object stores. Disabled by default. |
| --sync-rate-limit | Define the sync frequency upper limit (default 0.3) |
| --tcp-services-configmap | Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form "namespace/name:port", where "port" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic. |
| --udp-services-configmap | Name of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form "namespace/name:port", where "port" can either be a port name or number. |
| --update-status | Update the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true) |
| --update-status-on-shutdown | Update the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. (default true) |
| --shutdown-grace-period | Seconds to wait after receiving the shutdown signal, before stopping the nginx process. |
| -v, --v Level | number for the log level verbosity |
| --validating-webhook | The address to start an admission controller on to validate incoming ingresses. Takes the form ":port". If not provided, no admission controller is started. |
| --validating-webhook-certificate | The path of the validating webhook certificate PEM. |
| --validating-webhook-key | The path of the validating webhook key PEM. |
| --version | Show release information about the NGINX Ingress controller and exit. |
| --vmodule | comma-separated list of pattern=N settings for file-filtered logging |
| --watch-namespace | Namespace the controller watches for updates to Kubernetes objects. This includes Ingresses, Services and all configuration resources. All namespaces are watched if this parameter is left empty. |
Regular expressions and wild cards are not supported in the spec.rules.host field. Full hostnames must be used.
The ingress controller supports case insensitive regular expressions in the spec.rules.http.paths.path field. This can be enabled by setting the nginx.ingress.kubernetes.io/use-regex annotation to true (the default is false).
Hint
Kubernetes only accepts expressions that comply with the RE2 engine syntax. It is possible that valid expressions accepted by NGINX cannot be used with ingress-nginx, because the PCRE library (used in NGINX) supports a wider syntax than RE2. See the RE2 Syntax documentation for differences.
See the description of the use-regex annotation for more details.
In NGINX, regular expressions follow a first match policy. In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks.
Please read the warning before using regular expressions in your ingress definitions.
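The ingress definition these examples refer to can be sketched as follows (service names and ports are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: test.com
    http:
      paths:
      - path: /foo/bar
        pathType: ImplementationSpecific
        backend:
          service:
            name: service-1
            port:
              number: 80
      - path: /foo/bar/
        pathType: ImplementationSpecific
        backend:
          service:
            name: service-2
            port:
              number: 80
      - path: /foo/bar/.+
        pathType: ImplementationSpecific
        backend:
          service:
            name: service-3
            port:
              number: 80
```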
The following request URIs would match the corresponding location blocks:
test.com/foo/bar/1 matches ~* ^/foo/bar/.+ and will go to service 3.
test.com/foo/bar/ matches ~* ^/foo/bar/ and will go to service 2.
test.com/foo/bar matches ~* ^/foo/bar and will go to service 1.
IMPORTANT NOTES:
If the use-regex OR rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.
The easiest way to configure the controller for metrics is via helm upgrade. Assuming you have installed the ingress-nginx controller as a helm release named ingresscontroller0, you can simply type the command shown below:
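A sketch of such a command, assuming the chart's controller.metrics.enabled value and standard Prometheus scrape annotations (the release name matches the example above):

```console
$ helm upgrade ingresscontroller0 ingress-nginx/ingress-nginx --reuse-values \
    --set controller.metrics.enabled=true \
    --set controller.podAnnotations."prometheus\.io/scrape"="true" \
    --set controller.podAnnotations."prometheus\.io/port"="10254"
```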
Note that the kustomize bases used in this tutorial are stored in the deploy folder of the GitHub repository kubernetes/ingress-nginx.
The Prometheus server must be configured so that it can discover endpoints of services. If a Prometheus server is already running in the cluster and if it is configured in a way that it can find the ingress controller pods, no extra configuration is needed.
If there is no existing Prometheus server running, the rest of this tutorial will guide you through the steps needed to deploy a properly configured Prometheus server.
Running the following command deploys Prometheus in Kubernetes:
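Assuming the kustomize bases mentioned above, the command can look like this:

```console
$ kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/
```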
$ kubectl get svc -n ingress-nginx
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE
default-http-backend    ClusterIP   10.103.59.201   <none>        80/TCP                                       3d
ingress-nginx           NodePort    10.97.44.72     <none>        80:30100/TCP,443:30154/TCP,10254:32049/TCP   5h
prometheus-server       NodePort    10.98.233.86    <none>        9090:32630/TCP                               10m
grafana                 NodePort    10.98.233.87    <none>        3000:31086/TCP                               10m
Open your browser and visit the following URL: http://{node IP address}:{grafana-svc-nodeport} to load the Grafana Dashboard. According to the above example, this URL will be http://10.192.0.3:31086
The default username and password are both admin.
After the login you can import the Grafana dashboard from the official dashboards, by following the steps given below:
1. Navigate to the left-hand panel of Grafana
2. Hover on the gearwheel icon for Configuration and click "Data Sources"
3. Click "Add data source"
4. Select "Prometheus"
5. Enter the details (note: I used http://CLUSTER_IP_PROMETHEUS_SVC:9090)
6. Left menu (hover over +) -> Dashboard
7. Click "Import"
8. Paste the JSON from https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/grafana/dashboards/nginx.json
By default request metrics are labeled with the hostname. When you have a wildcard domain ingress, then there will be no metrics for that ingress (to prevent the metrics from exploding in cardinality). To get metrics in this case you need to run the ingress controller with --metrics-per-host=false (you will lose labeling by hostname, but still have labeling by ingress).
ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.
The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system components for the nginx-controller.
In order to overwrite nginx-controller configuration values as seen in config.go, you can add key-value pairs to the data section of the config-map. For Example:
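A sketch of such a ConfigMap (the name and namespace are placeholders and must match what the controller's --configmap flag points at; the keys shown are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  map-hash-bucket-size: "128"
  ssl-protocols: "TLSv1.2 TLSv1.3"
```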
The keys and values in a ConfigMap can only be strings. This means that if we want a value with boolean or numeric semantics we need to quote it, like "true", "false" or "100".
"Slice" types (defined below as []string or []int) can be provided as a comma-delimited string.
Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value "true".
Sets the maximum number of requests (including push requests) that can be served through one HTTP/2 connection, after which the next client request will lead to connection closing and the need of establishing a new connection.
Enables or disables the HSTS header in servers running SSL. HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be accessed using HTTPS instead of HTTP. It provides protection against protocol downgrade attacks and cookie theft.
If disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections at a time. default: true
Sets the maximum number of files that can be opened by each worker process. The default of 0 means "max open files (system's limit) / worker-processes - 1024". default: 0
If use-forwarded-headers or use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default IP/network address of your external load balancer. Can be a comma-separated list of CIDR blocks. default: "0.0.0.0/0"
Instructs NGINX to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option), allowing a kernel to distribute incoming connections between worker processes. default: true
Activates plugins installed in /etc/nginx/lua/plugins. Refer to ingress-nginx plugins README for more information on how to write and install a plugin.
Sets the ciphers list to enable. The ciphers are specified in the format understood by the OpenSSL library.
The default cipher list is: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384.
The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.
Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string. To create a ticket: openssl rand 80 | openssl enc -A -base64
Enables or disables the PROXY protocol to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).
Sets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection. default: 5s
Enables or disables "geoip" module that creates variables with values depending on the client IP address, using the precompiled MaxMind databases. default: true
Note: MaxMind legacy databases are discontinued and will not receive updates after 2019-01-02, cf. discontinuation notice. Consider use-geoip2 below.
Enables the geoip2 module for NGINX. Since 0.27.0 and due to a change in the MaxMind databases a license is required to have access to the databases. For this reason, it is required to define a new flag --maxmind-license-key in the ingress controller deployment to download the databases needed during the initialization of the ingress controller. Alternatively, it is possible to use a volume to mount the files /etc/nginx/geoip/GeoLite2-City.mmdb and /etc/nginx/geoip/GeoLite2-ASN.mmdb, avoiding the overhead of the download.
Important
If the feature is enabled but the files are missing, GeoIP2 will not be enabled.
Enables or disables compression of HTTP responses using the "brotli" module. The default mime type list to compress is: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component. default: is disabled
Sets the MIME types in addition to "text/html" to compress. The special value "*" matches any MIME type. Responses with the "text/html" type are always compressed if use-gzip is enabled. default: application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component.
Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. default: 320
Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed. default: 10000
Sets parameters for a shared memory zone that will keep states for various keys of limit_conn_zone. The "$binary_remote_addr" variable used by default has a size of 4 bytes for IPv4 addresses and 16 bytes for IPv6 addresses.
Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.
Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.
If true, NGINX passes the incoming X-Forwarded-* headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers.
If false, NGINX ignores incoming X-Forwarded-* headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or it's behind a L3/packet-based load balancer that doesn't alter the source IP in the packets.
Append the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.
Specifies the argument to be passed to the sampler constructor. Must be a number. For const this should be 0 to never sample and 1 to always sample. default: 1
Specifies the custom remote sampler host to be passed to the sampler constructor. Must be a valid URL. Leave blank to use default value (localhost). default: http://127.0.0.1
Specifies to use client-side sampling. If true disables client-side sampling (thus ignoring sample_rate) and enables distributed priority sampling, where traces are sampled based on a combination of user-assigned priorities and configuration from the agent. default: true
Adds custom configuration to all the locations in the nginx configuration.
You can not use this to add new locations that proxy to the Kubernetes pods, as the snippet does not have access to the Go template functions. If you want to add custom locations you will have to provide your own nginx.tmpl.
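As an illustration, assuming this setting is the location-snippet ConfigMap key, an entry adding a response header to every location could look like:

```yaml
data:
  location-snippet: |
    add_header X-Served-By ingress-nginx;
```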
Sets the timeout in seconds for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response.
Sets the timeout in seconds for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request.
Sets the number of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header.
Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header.
Sets a list of URLs that should not appear in the NGINX access log. This is useful with URLs like /health or health-check that make reading the logs unnecessarily complex. default: empty
Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice the specified limit.
Sets the HTTP status code to be used in redirects. Supported codes are 301, 302, 307 and 308. default: 308
Why is the default code 308?
RFC 7238 was created to define the 308 (Permanent Redirect) status code that is similar to 301 (Moved Permanently) but it keeps the payload in the redirect. This is important if we send a redirect in methods like POST.
A comma-separated list of locations on which http requests will never get redirected to their https counterpart. default: "/.well-known/acme-challenge"
A URL to an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-url. Locations that should not get authenticated can be listed using no-auth-locations (see no-auth-locations). In addition, each service can be excluded from authentication via the annotation enable-global-auth set to "false". default: ""
An HTTP method to use for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-method. default: ""
Sets the location of the error page for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-signin. default: ""
Sets the query parameter in the error page signin URL which contains the original URL of the request that failed authentication. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-signin-redirect-param. default: "rd"
Sets the headers to pass to backend once authentication request completes. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-response-headers. default: ""
Sets the X-Auth-Request-Redirect header value. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-request-redirect. default: ""
Sets a custom snippet to use with external authentication. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-snippet. default: ""
Set a caching time for auth responses based on their response codes, e.g. 200 202 30m. See proxy_cache_valid for details. You may specify multiple, comma-separated values: 200 202 10m, 401 5m. defaults to 200 202 401 5m.
A comma-separated list of User-Agents, requests from which have to be blocked globally. It's possible to use full strings and regular expressions here. More details about valid patterns can be found in the map Nginx directive documentation.
A comma-separated list of Referers, requests from which have to be blocked globally. It's possible to use full strings and regular expressions here. More details about valid patterns can be found in the map Nginx directive documentation.
global-rate-limit-memcached-host: IP/FQDN of memcached server to use. Required to enable Global Rate Limiting.
global-rate-limit-memcached-port: port of the memcached server to use. Defaults to the standard memcached port of 11211.
global-rate-limit-memcached-connect-timeout: configure timeout for connect, send and receive operations. Unit is millisecond. Defaults to 50ms.
global-rate-limit-memcached-max-idle-timeout: configure timeout for cleaning idle connections. Unit is millisecond. Defaults to 50ms.
global-rate-limit-memcached-pool-size: configure number of max connections to keep alive. Make sure your memcached server can handle global-rate-limit-memcached-pool-size * worker-processes * <number of ingress-nginx replicas> simultaneous connections.
These settings get used by lua-resty-global-throttle that ingress-nginx includes. Refer to the link to learn more about lua-resty-global-throttle.
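Putting these keys together, a ConfigMap data section enabling Global Rate Limiting might look like this (the host and tuning values are placeholders):

```yaml
data:
  global-rate-limit-memcached-host: "memcached.default.svc.cluster.local"
  global-rate-limit-memcached-port: "11211"
  global-rate-limit-memcached-connect-timeout: "50"
  global-rate-limit-memcached-pool-size: "50"
```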