Merge pull request #1130 from aledbf/improve-docs

[nginx-ingress-controller] Improve docs and examples
Prashanth B 2016-06-22 22:37:43 -07:00 committed by GitHub
commit b9740c96d9
6 changed files with 396 additions and 112 deletions


@ -2,19 +2,34 @@
This is a nginx Ingress controller that uses [ConfigMap](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/configmap.md) to store the nginx configuration. See [Ingress controller documentation](../README.md) for details on how it works.
## Contents
* [Conventions](#conventions)
* [Requirements](#requirements)
* [Dry running](#dry-running-the-ingress-controller)
* [Deployment](#deployment)
* [HTTP](#http)
* [HTTPS](#https)
* [HTTPS enforcement](#server-side-https-enforcement)
* [HSTS](#http-strict-transport-security)
* [TCP Services](#exposing-tcp-services)
* [UDP Services](#exposing-udp-services)
* [Proxy Protocol](#proxy-protocol)
* [NGINX customization](configuration.md)
* [NGINX status page](#nginx-status-page)
* [Debug & Troubleshooting](#debug--troubleshooting)
* [Limitations](#limitations)
## Conventions

Anytime we reference a TLS secret, we mean one that is x509, PEM encoded, RSA 2048, etc. You can generate such a certificate with:
`openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $(KEY) -out $(CERT) -subj "/CN=$(HOST)/O=$(HOST)"`
and create the secret via `kubectl create secret tls --key file --cert file`

## Requirements
- Default backend [404-server](https://github.com/kubernetes/contrib/tree/master/404-server)
## Dry running the Ingress controller
@ -27,8 +42,7 @@ $ mkdir /etc/nginx-ssl
$ ./nginx-ingress-controller --running-in-cluster=false --default-backend-service=kube-system/default-http-backend
```
## Deployment
First create a default backend:
```
@ -85,7 +99,7 @@ $ LBIP=$(kubectl get node `kubectl get po -l name=nginx-ingress-lb --template '{
$ curl $LBIP/foo -H 'Host: foo.bar.com'
```
## HTTPS
You can secure an Ingress by specifying a secret that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller supports SNI. The TLS secret must contain keys named tls.crt and tls.key that contain the certificate and private key to use for TLS, eg:
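A minimal sketch of such a secret (the name `foo-secret` and the placeholder values are illustrative; `kubectl create secret tls` produces an equivalent object):

```
apiVersion: v1
kind: Secret
metadata:
  name: foo-secret
  namespace: default
type: kubernetes.io/tls
data:
  # base64-encoded PEM certificate and key
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
```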
@ -119,36 +133,25 @@ Please follow [test.sh](https://github.com/bprashanth/Ingress/blob/master/exampl
Check the [example](examples/tls/README.md)
### Server-side HTTPS enforcement

By default the controller redirects (301) to HTTPS if TLS is enabled for an Ingress. If you want to disable that behaviour globally, you can use `ssl-redirect: "false"` in the NGINX config map.

To configure this feature for specific Ingress resources, you can use the `ingress.kubernetes.io/ssl-redirect: "false"` annotation in the particular resource.
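For example, a hypothetical Ingress that disables the redirect only for itself (host, secret and service names are illustrative):

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-ssl-redirect
  annotations:
    # disable the HTTP -> HTTPS redirect for this Ingress only
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: foo-secret
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoheaders
          servicePort: 80
```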
### HTTP Strict Transport Security
HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.
By default the controller redirects (301) to HTTPS if there is a TLS Ingress rule.
To disable this behavior use `hsts=false` in the NGINX config map.
## Exposing TCP services
Ingress does not support TCP services (yet). For this reason this Ingress controller uses the flag `--tcp-services-configmap` to point to an existing config map where the key is the external port to use and the value is `<namespace/service name>:<service port>`
It is possible to use a number or the name of the port.
The next example shows how to expose the service `example-go`, running in the namespace `default` on port `8080`, using the external port `9000`.
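A sketch of such a config map (the name `tcp-configmap-example` is illustrative; it is the config map referenced by `--tcp-services-configmap`):

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-example
data:
  # external port 9000 -> service example-go in namespace default, port 8080
  "9000": "default/example-go:8080"
```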
@ -168,8 +171,7 @@ Please check the [tcp services](examples/tcp/README.md) example
Since 1.9.13 NGINX provides [UDP Load Balancing](https://www.nginx.com/blog/announcing-udp-load-balancing/).
Ingress does not support UDP services (yet). For this reason this Ingress controller uses the flag `--udp-services-configmap` to point to an existing config map where the key is the external port to use and the value is `<namespace/service name>:<service port>`
It is possible to use a number or the name of the port.
The next example shows how to expose the service `kube-dns`, running in the namespace `kube-system` on port `53`, using the external port `53`.
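A sketch of the corresponding config map (the name `udp-configmap-example` is illustrative; it is the config map referenced by `--udp-services-configmap`):

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-configmap-example
data:
  # external port 53 -> service kube-dns in namespace kube-system, port 53
  "53": "kube-system/kube-dns:53"
```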
@ -185,73 +187,13 @@ data:
Please check the [udp services](examples/udp/README.md) example
## Proxy Protocol

If you are using an L4 proxy to forward the traffic to the NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you can use the [Proxy Protocol](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt) for forwarding traffic; it sends the connection details before forwarding the actual TCP connection itself.

Amongst others, [ELBs in AWS](http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html) and [HAProxy](http://www.haproxy.org/) support Proxy Protocol.

Please check the [proxy-protocol](examples/proxy-protocol/) example
### Custom errors
@ -262,14 +204,19 @@ In case of an error in a request the body of the response is obtained from the `
Using these two headers it is possible to use a custom backend service like [this one](https://github.com/aledbf/contrib/tree/nginx-debug-server/Ingress/images/nginx-error-server) that inspects each request and returns a custom error page with the format expected by the client. Please check the example [custom-errors](examples/custom-errors/README.md)
### NGINX status page

The ngx_http_stub_status_module module provides access to basic status information. This is the default module, active at the URL `/nginx_status`.
This controller provides an alternative to this module using the [nginx-module-vts](https://github.com/vozlt/nginx-module-vts) third-party module.
To use this module just provide a config map with the key `enable-vts-status=true`. The URL is exposed on port 8080.
Please check the example `example/rc-default.yaml`

![nginx-module-vts screenshot](https://cloud.githubusercontent.com/assets/3648408/10876811/77a67b70-8183-11e5-9924-6a6d0c5dc73a.png "screenshot with filter")

To extract the information in JSON format the module provides a custom URL: `/nginx_status/format/json`

### Debug & Troubleshooting

Using the flag `--v=XX` it is possible to increase the level of logging.
In particular:
@ -295,13 +242,14 @@ I0316 12:24:37.610073 1 command.go:69] change in configuration detected. R
*These issues were encountered in past versions of Kubernetes:*

[1.2.0-alpha7 deployment](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md):

* make setup-files.sh file in hypercube does not provide 10.0.0.1 IP to make-ca-certs, resulting in CA certs that are issued to the external cluster IP address rather than 10.0.0.1 -> this results in nginx-third-party-lb appearing to get stuck at "Utils.go:177 - Waiting for default/default-http-backend" in the docker logs. Kubernetes will eventually kill the container before nginx-third-party-lb times out with a message indicating that the CA certificate issuer is invalid (wrong IP). To verify this, add zeros to the end of initialDelaySeconds and timeoutSeconds and reload the RC; docker will log this error before Kubernetes kills the container.
* To fix the above, setup-files.sh must be patched before the cluster is inited (refer to https://github.com/kubernetes/kubernetes/pull/21504)

## Limitations

- Ingress rules for TLS require the definition of the field `host`
- The IP address in the status of loadBalancer could contain old values


@ -0,0 +1,336 @@
## Contents
* [Customizing NGINX](#customizing-nginx)
* [Custom NGINX configuration](#custom-nginx-configuration)
* [Custom NGINX template](#custom-nginx-template)
* [Annotations](#annotations)
* [Custom NGINX upstream checks](#custom-nginx-upstream-checks)
* [Authentication](#authentication)
* [Rewrite](#rewrite)
* [Rate limiting](#rate-limiting)
* [Secure backends](#secure-backends)
* [Whitelist source range](#whitelist-source-range)
* [Allowed parameters in configuration config map](#allowed-parameters-in-configuration-config-map)
* [Default configuration options](#default-configuration-options)
* [Websockets](#websockets)
* [Optimizing TLS Time To First Byte (TTTFB)](#optimizing-tls-time-to-first-byte-tttfb)
* [Retries in non-idempotent methods](#retries-in-non-idempotent-methods)
### Customizing NGINX

There are three ways to customize NGINX:

1. config map: create a standalone config map; use this if you want a different global configuration
2. annotations: [annotate the Ingress](#annotations); use this if you want a specific configuration for the site defined in the Ingress rule
3. custom template: use this when a specific setting is required, like [open_file_cache](http://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache), a custom [log_format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format), adjusting [listen](http://nginx.org/en/docs/http/ngx_http_core_module.html#listen) options such as `rcvbuf`, or when it is not possible to change the configuration through the config map
#### Custom NGINX configuration
It's possible to customize the defaults in NGINX using a config map.
Please check the [custom configuration](examples/custom-configuration/README.md) example
#### Annotations
The following annotations are supported:

|Name|Type|
|---------------------------|------|
|[ingress.kubernetes.io/rewrite-target](#rewrite)|URI|
|[ingress.kubernetes.io/add-base-url](#rewrite)|true or false|
|[ingress.kubernetes.io/limit-connections](#rate-limiting)|number|
|[ingress.kubernetes.io/limit-rps](#rate-limiting)|number|
|[ingress.kubernetes.io/auth-type](#authentication)|basic or digest|
|[ingress.kubernetes.io/auth-secret](#authentication)|string|
|[ingress.kubernetes.io/auth-realm](#authentication)|string|
|[ingress.kubernetes.io/ssl-redirect](#server-side-https-enforcement-through-redirect)|true or false|
|[ingress.kubernetes.io/upstream-max-fails](#custom-nginx-upstream-checks)|number|
|[ingress.kubernetes.io/upstream-fail-timeout](#custom-nginx-upstream-checks)|number|
|[ingress.kubernetes.io/secure-backends](#secure-backends)|true or false|
|[ingress.kubernetes.io/whitelist-source-range](#whitelist-source-range)|CIDR|
#### Custom NGINX template
The NGINX template is located in the file `/etc/nginx/template/nginx.tmpl`. It is possible to use a custom version by mounting a volume that replaces it.

Use the [custom-template](examples/custom-template/README.md) example as a guide.

**Please note the template is tied to the Go code. Be sure not to change the names used in the variable `$cfg`.**
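One possible way to do this, assuming the custom template is stored in a config map named `nginx-template` (both names and the pod spec fragment below are illustrative):

```
# fragment of the controller pod spec
spec:
  containers:
  - name: nginx-ingress-lb
    # image and remaining container fields omitted
    volumeMounts:
    - name: nginx-template
      mountPath: /etc/nginx/template
      readOnly: true
  volumes:
  - name: nginx-template
    configMap:
      name: nginx-template
      items:
      - key: nginx.tmpl
        path: nginx.tmpl
```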
### Custom NGINX upstream checks
NGINX exposes some flags in the [upstream configuration](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) that enable the configuration of each server in the upstream. The Ingress controller allows custom `max_fails` and `fail_timeout` parameters in a global context, using `upstream-max-fails` or `upstream-fail-timeout` in the NGINX config map, or in a particular Ingress rule. Both default to 0. This means NGINX will respect the `readinessProbe`, if it is defined. If there is no probe, NGINX will not mark a server inside an upstream as down.
**With the default values NGINX will not health check your backends, and whenever the endpoints controller notices a readiness probe failure that pod's ip will be removed from the list of endpoints, causing nginx to also remove it from the upstreams.**
To use custom values in an Ingress rule, define these annotations:

`ingress.kubernetes.io/upstream-max-fails`: number of unsuccessful attempts to communicate with the server that should happen in the duration set by the `fail_timeout` parameter to consider the server unavailable.

`ingress.kubernetes.io/upstream-fail-timeout`: time in seconds during which the specified number of unsuccessful attempts to communicate with the server should happen to consider the server unavailable. This is also the period of time the server will be considered unavailable.
**Important:**
The upstreams are shared, i.e. Ingress rules that use the same service use the same upstream.
This means only one of the rules should define annotations to configure the upstream servers.
Please check the [custom upstream check](examples/custom-upstream-check/README.md) example
### Authentication
It is possible to add authentication by adding additional annotations to the Ingress rule. The source of the authentication is a secret that contains usernames and passwords inside the key `auth`.
The annotations are:
```
ingress.kubernetes.io/auth-type:[basic|digest]
```
Indicates the [HTTP Authentication Type: Basic or Digest Access Authentication](https://tools.ietf.org/html/rfc2617).
```
ingress.kubernetes.io/auth-secret:secretName
```
Name of the secret that contains the usernames and passwords with access to the paths defined in the Ingress rule.
The secret must be created in the same namespace as the Ingress rule.
```
ingress.kubernetes.io/auth-realm:"realm string"
```
Please check the [auth](examples/custom-upstream-check/README.md) example
### Rewrite
In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404.
Set the annotation `ingress.kubernetes.io/rewrite-target` to the path expected by the service.
If the application contains relative links, it is possible to add an additional annotation, `ingress.kubernetes.io/add-base-url`, that will append a `base` tag in the header of the HTML returned by the backend.
Please check the [rewrite](examples/rewrite/README.md) example
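A sketch of an Ingress using the annotation (host, path and service names are illustrative):

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rewrite
  annotations:
    # requests to /something/... reach the service rewritten to /
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - path: /something
        backend:
          serviceName: echoheaders
          servicePort: 80
```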
### Rate limiting
The annotations `ingress.kubernetes.io/limit-connections` and `ingress.kubernetes.io/limit-rps` allow limiting the connections that can be opened by a single client IP address. This can be used to mitigate [DDoS attacks](https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus).

`ingress.kubernetes.io/limit-connections`: number of concurrent connections allowed from a single IP address.

`ingress.kubernetes.io/limit-rps`: number of connections per second allowed from a single IP address.

It is possible to specify both annotations in the same Ingress rule; in that case `limit-rps` takes precedence.
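For instance, a hypothetical Ingress limiting each client IP to 5 requests per second (names are illustrative):

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rate-limited
  annotations:
    # allow at most 5 requests per second from a single client IP
    ingress.kubernetes.io/limit-rps: "5"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoheaders
          servicePort: 80
```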
### Secure backends
By default NGINX uses `http` to reach the services. Adding the annotation `ingress.kubernetes.io/secure-backends: "true"` in the ingress rule changes the protocol to `https`.
### Whitelist source range
You can specify allowed client IP source ranges through the `ingress.kubernetes.io/whitelist-source-range` annotation, e.g. `10.0.0.0/24,172.10.0.1`.

For a global restriction (any URL) it is possible to use `whitelist-source-range` in the NGINX config map.

*Note:* adding an annotation to an Ingress rule overrides any global restriction.
Please check the [whitelist](examples/whitelist/README.md) example
### Allowed parameters in configuration config map
**body-size:** Sets the maximum allowed size of the client request body. See NGINX [client_max_body_size](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size)
**custom-http-errors:** Enables which HTTP codes should be passed for processing with the [error_page directive](http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page)
Setting at least one code also enables [proxy_intercept_errors](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors) (required to process error_page).
For instance setting `custom-http-errors: 404,415`
**enable-sticky-sessions:** Enables sticky sessions using cookies. This is provided by [nginx-sticky-module-ng](https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng) module
**enable-vts-status:** Allows the replacement of the default status page with a third party module named [nginx-module-vts](https://github.com/vozlt/nginx-module-vts)
**error-log-level:** Configures the logging level of errors. Log levels are listed in order of increasing severity.
http://nginx.org/en/docs/ngx_core_module.html#error_log
**retry-non-idempotent:** Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server.
The previous behavior can be restored using the value "true"
**hsts:** Enables or disables the header HSTS in servers running SSL.
HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be accessed using HTTPS instead of HTTP. It provides protection against protocol downgrade attacks and cookie theft.
https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server
**hsts-include-subdomains:** Enables or disables the use of HSTS in all the subdomains of the servername
**hsts-max-age:** Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.
**keep-alive:** Sets the time during which a keep-alive client connection will stay open on the server side.
The zero value disables keep-alive client connections
http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout
**max-worker-connections:** Sets the maximum number of simultaneous connections that can be opened by each [worker process](http://nginx.org/en/docs/ngx_core_module.html#worker_connections)
**proxy-connect-timeout:** Sets the timeout for [establishing a connection with a proxied server](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_connect_timeout). It should be noted that this timeout cannot usually exceed 75 seconds.
**proxy-read-timeout:** Sets the timeout in seconds for [reading a response from the proxied server](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout). The timeout is set only between two successive read operations, not for the transmission of the whole response
**proxy-send-timeout:** Sets the timeout in seconds for [transmitting a request to the proxied server](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_send_timeout). The timeout is set only between two successive write operations, not for the transmission of the whole request.
**resolver:** Configures name servers used to [resolve](http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver) names of upstream servers into addresses
**server-name-hash-max-size:** Sets the maximum size of the [server names hash tables](http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_max_size) used in server names, map directives values, MIME types, names of request header strings, etc.
http://nginx.org/en/docs/hash.html
**server-name-hash-bucket-size:** Sets the size of the bucket for the server names hash tables
http://nginx.org/en/docs/hash.html
http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size
**ssl-buffer-size:** Sets the size of the [SSL buffer](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) used for sending data.
4k helps NGINX to improve TLS Time To First Byte (TTTFB)
https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/
**ssl-ciphers:** Sets the [ciphers](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers) list to enable. The ciphers are specified in the format understood by the OpenSSL library
The default cipher list is: `ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA`
The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority.
The recommendation above prioritizes algorithms that provide perfect [forward secrecy](https://wiki.mozilla.org/Security/Server_Side_TLS#Forward_Secrecy)
Please check the [Mozilla SSL Configuration Generator](https://mozilla.github.io/server-side-tls/ssl-config-generator/)
**ssl-protocols:** Sets the [SSL protocols](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols) to use.
The default is: `TLSv1 TLSv1.1 TLSv1.2`
TLSv1 is enabled to allow old clients like:
- [IE 8-10 / Win 7](https://www.ssllabs.com/ssltest/viewClient.html?name=IE&version=8-10&platform=Win%207&key=113)
- [Java 7u25](https://www.ssllabs.com/ssltest/viewClient.html?name=Java&version=7u25&key=26)
If you don't need to support these clients, remove TLSv1.
Please check the result of the configuration using `https://ssllabs.com/ssltest/analyze.html` or `https://testssl.sh`
**ssl-dh-param:** Sets the Base64 string that contains the Diffie-Hellman key to help with "Perfect Forward Secrecy"
https://www.openssl.org/docs/manmaster/apps/dhparam.html
https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam
http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam
**ssl-session-cache:** Enables or disables the use of shared [SSL cache](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) among worker processes.
**ssl-session-cache-size:** Sets the size of the [SSL shared session cache](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) between all worker processes.
**ssl-session-tickets:** Enables or disables session resumption through [TLS session tickets](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_tickets)
**ssl-session-timeout:** Sets the time during which a client may [reuse the session](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_timeout) parameters stored in a cache.
**ssl-redirect:** Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule)
Default is true
**upstream-max-fails:** Sets the number of unsuccessful attempts to communicate with the [server](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) that should happen in the duration set by the fail_timeout parameter to consider the server unavailable
**upstream-fail-timeout:** Sets the time during which the specified number of unsuccessful attempts to communicate with the [server](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) should happen to consider the server unavailable
**use-proxy-protocol:** Enables or disables the use of the [PROXY protocol](https://www.nginx.com/resources/admin-guide/proxy-protocol/) to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAproxy and Amazon Elastic Load Balancer (ELB).
**use-gzip:** Enables or disables the use of the nginx module that compresses responses using the ["gzip" module](http://nginx.org/en/docs/http/ngx_http_gzip_module.html)
**use-http2:** Enables or disables the [HTTP/2](http://nginx.org/en/docs/http/ngx_http_v2_module.html) support in secure connections
**gzip-types:** Sets the MIME types, in addition to "text/html", to compress. The special value "*" matches any MIME type.
Responses with the "text/html" type are always compressed if `use-gzip` is enabled
**worker-processes:** Sets the number of [worker processes](http://nginx.org/en/docs/ngx_core_module.html#worker_processes). By default "auto" means number of available CPU cores
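As a minimal sketch, assuming the controller points at a config map named `nginx-load-balancer-conf` (both the name and the values below are illustrative), a few of these keys could be set like this:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
data:
  # all values are strings; these particular values are only examples
  body-size: "2m"
  hsts: "true"
  proxy-connect-timeout: "10"
  proxy-read-timeout: "120"
  proxy-send-timeout: "120"
  ssl-protocols: "TLSv1.1 TLSv1.2"
  use-gzip: "true"
```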
### Default configuration options
By running `/nginx-ingress-controller --dump-nginx-configuration` it is possible to get the values of the options that can be changed.
The next table shows the options and their default values:
|Name|Default|
|---------------------------|------|
|body-size|1m|
|custom-http-errors|" "|
|enable-sticky-sessions|"false"|
|enable-vts-status|"false"|
|error-log-level|notice|
|gzip-types||
|hsts|"true"|
|hsts-include-subdomains|"true"|
|hsts-max-age|"15724800"|
|keep-alive|"75"|
|max-worker-connections|"16384"|
|proxy-connect-timeout|"5"|
|proxy-read-timeout|"60"|
|proxy-real-ip-cidr|0.0.0.0/0|
|proxy-send-timeout|"60"|
|retry-non-idempotent|"false"|
|server-name-hash-bucket-size|"64"|
|server-name-hash-max-size|"512"|
|ssl-buffer-size|4k|
|ssl-ciphers||
|ssl-protocols|TLSv1 TLSv1.1 TLSv1.2|
|ssl-session-cache|"true"|
|ssl-session-cache-size|10m|
|ssl-session-tickets|"true"|
|ssl-session-timeout|10m|
|use-gzip|"true"|
|use-http2|"true"|
|vts-status-zone-size|10m|
|worker-processes|<number of CPUs>|
### Websockets
Support for websockets is provided by NGINX out of the box. No special configuration is required.

The only requirement to avoid closed connections is to increase the values of `proxy-read-timeout` and `proxy-send-timeout`. The default value of these settings is `60 seconds` (see the table above).
A more adequate value to support websockets is a value higher than one hour (`3600`).
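For example, in the same (illustrative) config map used above:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
data:
  # keep websocket connections open for up to one hour of inactivity
  proxy-read-timeout: "3600"
  proxy-send-timeout: "3600"
```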
#### Optimizing TLS Time To First Byte (TTTFB)
NGINX provides the configuration option [ssl_buffer_size](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) to allow the optimization of the TLS record size. This improves the [Time To First Byte](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/) (TTTFB). The default value in the Ingress controller is `4k` (nginx default is `16k`);
#### Retries in non-idempotent methods
Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error.
The previous behavior can be restored using `retry-non-idempotent=true` in the configuration config map


@ -35,11 +35,11 @@ metadata:
  name: ingress-with-auth
  annotations:
    # type of authentication
    ingress.kubernetes.io/auth-type: basic
    # name of the secret that contains the user/password definitions
    ingress.kubernetes.io/auth-secret: basic-auth
    # message to display with an appropriate context why the authentication is required
    ingress.kubernetes.io/auth-realm: "Authentication Required - foo"
spec:
  rules:
  - host: foo.bar.com


@ -7,7 +7,7 @@ kind: Ingress
metadata:
  name: echoheaders
  annotations:
    ingress.kubernetes.io/upstream-fail-timeout: "30"
spec:
  rules:
  - host: foo.bar.com


@ -24,7 +24,7 @@ import (
)

const (
	secureUpstream = "ingress.kubernetes.io/secure-backends"
)

type ingAnnotations map[string]string


@ -67,7 +67,7 @@ func TestAnnotations(t *testing.T) {
	su := ingAnnotations(ing.GetAnnotations()).secureUpstream()
	if !su {
		t.Errorf("Expected true in secure-backends but %v was returned", su)
	}
}