Split documentation

Manuel de Brito Fontes 2017-10-13 10:55:03 -03:00
parent a18daabc51
commit a9168f276e
144 changed files with 1780 additions and 3789 deletions


@ -36,7 +36,7 @@ Ingress collaborators may add "LGTM" (Looks Good To Me) or an equivalent comment
Whether you are a user or contributor, official support channels include:
- GitHub issues: https://github.com/kubernetes/ingress/issues/new
- GitHub issues: https://github.com/kubernetes/ingress-nginx/issues/new
- Slack: kubernetes-users room in the [Kubernetes Slack](http://slack.kubernetes.io/)
- Email: [kubernetes-users](https://groups.google.com/forum/#!forum/kubernetes-users) mailing list

README.md

@ -7,7 +7,6 @@ The GCE ingress controller was moved to [github.com/kubernetes/ingress-gce](http
[![Build Status](https://travis-ci.org/kubernetes/ingress-nginx.svg?branch=master)](https://travis-ci.org/kubernetes/ingress-nginx)
[![Coverage Status](https://coveralls.io/repos/github/kubernetes/ingress-nginx/badge.svg?branch=master)](https://coveralls.io/github/kubernetes/ingress-nginx?branch=master)
[![Go Report Card](https://goreportcard.com/badge/github.com/kubernetes/ingress-nginx)](https://goreportcard.com/report/github.com/kubernetes/ingress-nginx)
[![GoDoc](https://godoc.org/github.com/kubernetes/ingress-nginx?status.svg)](https://godoc.org/github.com/kubernetes/ingress-nginx)
## Description
@ -25,35 +24,34 @@ An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches th
## Contents
* [Conventions](#conventions)
* [Requirements](#requirements)
* [Contribute](#contribute)
* [Command line arguments](#command-line-arguments)
* [Deployment](#deployment)
* [HTTP](#http)
* [HTTPS](#https)
* [Default SSL Certificate](#default-ssl-certificate)
* [HTTPS enforcement](#server-side-https-enforcement)
* [HSTS](#http-strict-transport-security)
* [Kube-Lego](#automated-certificate-management-with-kube-lego)
* [Source IP address](#source-ip-address)
* [TCP Services](#exposing-tcp-services)
* [UDP Services](#exposing-udp-services)
* [Proxy Protocol](#proxy-protocol)
* [ModSecurity Web Application Firewall](#modsecurity-web-application-firewall)
* [Opentracing](#opentracing)
* [NGINX customization](configuration.md)
* [Custom errors](#custom-errors)
* [NGINX status page](#nginx-status-page)
* [Running multiple ingress controllers](#running-multiple-ingress-controllers)
* [Running on Cloudproviders](#running-on-cloudproviders)
* [Disabling NGINX ingress controller](#disabling-nginx-ingress-controller)
* [Log format](#log-format)
* [Local cluster](#local-cluster)
* [Debug & Troubleshooting](#debug--troubleshooting)
* [Limitations](#limitations)
* [Why endpoints and not services?](#why-endpoints-and-not-services)
* [NGINX Notes](#nginx-notes)
- [Conventions](#conventions)
- [Requirements](#requirements)
- [Deployment](deploy/README.md)
- [Command line arguments](docs/user-guide/cli-arguments.md)
- [Contribute](#contribute)
- [TLS](docs/user-guide/tls.md)
- [Customizing NGINX](#customizing-nginx)
- [Custom NGINX configuration](docs/user-guide/configmap.md)
- [Annotation ingress.class](#annotation-ingress.class)
- [Annotations](docs/user-guide/annotations.md)
- [Allowed parameters in configuration ConfigMap](docs/user-guide/configmap.md)
- [Retries in non-idempotent methods](#retries-in-non-idempotent-methods)
- [Source IP address](#source-ip-address)
- [Exposing TCP and UDP Services](docs/user-guide/exposing-tcp-udp-services.md)
- [Proxy Protocol](#proxy-protocol)
- [ModSecurity Web Application Firewall](docs/user-guide/modsecurity.md)
- [Opentracing](docs/user-guide/opentracing.md)
- [Custom errors](docs/user-guide/custom-errors.md)
- [NGINX status page](docs/user-guide/nginx-status-page.md)
- [Running multiple ingress controllers](#running-multiple-ingress-controllers)
- [Disabling NGINX ingress controller](#disabling-nginx-ingress-controller)
- [Log format](docs/user-guide/log-format.md)
- [Websockets](#websockets)
- [Optimizing TLS Time To First Byte (TTTFB)](#optimizing-tls-time-to-first-byte-tttfb)
- [Debug & Troubleshooting](docs/troubleshooting.md)
- [Limitations](#limitations)
- [Why endpoints and not services?](#why-endpoints-and-not-services)
- [External Articles](docs/user-guide/external-articles.md)
## Conventions
@ -63,292 +61,52 @@ and create the secret via `kubectl create secret tls ${CERT_NAME} --key ${KEY_FI
## Requirements
Default backend [404-server](https://github.com/kubernetes/ingress/tree/master/images/404-server)
The default backend is a service that handles all URL paths and hosts the nginx controller doesn't understand, i.e., all the requests that are not mapped to an Ingress
Basically a default backend exposes two URLs:
- /healthz that returns 200
- / that returns 404
The location [404-server](https://github.com/kubernetes/ingress-nginx/tree/master/images/404-server) contains the image of the default backend, and [custom-error-pages](https://github.com/kubernetes/ingress-nginx/tree/master/images/custom-error-pages) contains an example that shows how the error pages can be customized
## Contribute
See the [contributor guidelines](CONTRIBUTING.md)
## Command line arguments
```console
Usage of :
--alsologtostderr log to standard error as well as files
--apiserver-host string The address of the Kubernetes Apiserver to connect to in the format of protocol://address:port, e.g., http://localhost:8080. If not specified, the assumption is that the binary runs inside a Kubernetes cluster and local discovery is attempted.
--configmap string Name of the ConfigMap that contains the custom configuration to use
--default-backend-service string Service used to serve a 404 page for the default backend. Takes the form
namespace/name. The controller uses the first node port of this Service for
the default backend.
--default-server-port int Default port to use for exposing the default server (catch all) (default 8181)
--default-ssl-certificate string Name of the secret
that contains a SSL certificate to be used as default for a HTTPS catch-all server
--disable-node-list Disable querying nodes. If --force-namespace-isolation is true, this should also be set.
--election-id string Election id to use for status update. (default "ingress-controller-leader")
--enable-ssl-passthrough Enable SSL passthrough feature. Default is disabled
--force-namespace-isolation Force namespace isolation. This flag is required to avoid the reference of secrets or
configmaps located in a different namespace than the specified in the flag --watch-namespace.
--health-check-path string Defines
the URL to be used as health check inside the default server in NGINX. (default "/healthz")
--healthz-port int port for healthz endpoint. (default 10254)
--http-port int Indicates the port to use for HTTP traffic (default 80)
--https-port int Indicates the port to use for HTTPS traffic (default 443)
--ingress-class string Name of the ingress class to route through this controller.
--kubeconfig string Path to kubeconfig file with authorization and master location information.
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0)
--log_dir string If non-empty, write log files in this directory
--logtostderr log to standard error instead of files
--profiling Enable profiling via web interface host:port/debug/pprof/ (default true)
--publish-service string Service fronting the ingress controllers. Takes the form
namespace/name. The controller will set the endpoint records on the
ingress objects to reflect those on the service.
--sort-backends Defines if backends and its endpoints should be sorted
--ssl-passtrough-proxy-port int Default port to use internally for SSL when SSL Passthrough is enabled (default 442)
--status-port int Indicates the TCP port to use for exposing the nginx status page (default 18080)
--stderrthreshold severity logs at or above this threshold go to stderr (default 2)
--sync-period duration Relist and confirm cloud resources this often. Default is 10 minutes (default 10m0s)
--tcp-services-configmap string Name of the ConfigMap that contains the definition of the TCP services to expose.
The key in the map indicates the external port to be used. The value is the name of the
service with the format namespace/serviceName and the port of the service could be a
number or the name of the port.
The ports 80 and 443 are not allowed as external ports. These ports are reserved for the backend
--udp-services-configmap string Name of the ConfigMap that contains the definition of the UDP services to expose.
The key in the map indicates the external port to be used. The value is the name of the
service with the format namespace/serviceName and the port of the service could be a
number or the name of the port.
--update-status Indicates if the
ingress controller should update the Ingress status IP/hostname. Default is true (default true)
--update-status-on-shutdown Indicates if the
ingress controller should update the Ingress status IP/hostname when the controller
is being stopped. Default is true (default true)
-v, --v Level log level for V logs
--vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
--watch-namespace string Namespace to watch for Ingress. Default is to watch all namespaces
```
## Deployment
First create a default backend and its corresponding service:
```console
kubectl create -f examples/default-backend.yaml
```
Follow the [example-deployment](examples/deployment/README.md) steps to deploy nginx-ingress-controller in a Kubernetes cluster (you may prefer another type of workload, like a DaemonSet, in a production environment).
Loadbalancers are created via a ReplicationController or DaemonSet:
## HTTP
First we need to deploy some application to publish. To keep this simple we will use the [echoheaders app](https://github.com/kubernetes/contrib/blob/master/ingress/echoheaders/echo-app.yaml) that just returns information about the http request as output
```console
kubectl run echoheaders --image=gcr.io/google_containers/echoserver:1.8 --replicas=1 --port=8080
```
Now we expose the same application in two different services (so we can create different Ingress rules)
```console
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-y
```
Next we create a couple of Ingress rules
```console
kubectl create -f examples/ingress.yaml
```
We check that the Ingress rules are defined:
```console
$ kubectl get ing
NAME      RULE          BACKEND            ADDRESS
echomap   -
          foo.bar.com
          /foo          echoheaders-x:80
          bar.baz.com
          /bar          echoheaders-y:80
          /foo          echoheaders-x:80
```
Before deploying the Ingress controller we need a default backend [404-server](https://github.com/kubernetes/contrib/tree/master/404-server)
```console
kubectl create -f examples/default-backend.yaml
kubectl expose rc default-http-backend --port=80 --target-port=8080 --name=default-http-backend
```
Check that NGINX is running with the defined Ingress rules:
```console
$ LBIP=$(kubectl get node `kubectl get po -l name=nginx-ingress-lb --template '{{range .items}}{{.spec.nodeName}}{{end}}'` --template '{{range $i, $n := .status.addresses}}{{if eq $n.type "ExternalIP"}}{{$n.address}}{{end}}{{end}}')
$ curl $LBIP/foo -H 'Host: foo.bar.com'
```
## HTTPS
You can secure an Ingress by specifying a secret that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller supports SNI. The TLS secret must contain keys named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, e.g.:
```yaml
apiVersion: v1
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
kind: Secret
metadata:
  name: foo-secret
  namespace: default
type: kubernetes.io/tls
```
Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: foo-secret
  backend:
    serviceName: s1
    servicePort: 80
```
Please follow [PREREQUISITES](examples/PREREQUISITES.md) as a guide on how to generate secrets containing SSL certificates. The name of the secret can be different than the name of the certificate.
Check the [example](examples/tls-termination/nginx)
## Annotation ingress.class
If you have multiple Ingress controllers in a single cluster, you can pick one by specifying the `ingress.class`
annotation, e.g. creating an Ingress with an annotation like
```yaml
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "gce"
```
will target the GCE controller, forcing the nginx controller to ignore it, while an annotation like
```yaml
metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"
```
will target the nginx controller, forcing the GCE controller to ignore it.
__Note__: Deploying multiple Ingress controllers without specifying the annotation will result in both controllers fighting to satisfy the Ingress.
### Default SSL Certificate
NGINX provides the option [server name _](http://nginx.org/en/docs/http/server_names.html) as a catch-all in case of requests that do not match one of the configured server names. This configuration works without issues for HTTP traffic. In the case of HTTPS, NGINX requires a certificate. For this reason the Ingress controller provides the flag `--default-ssl-certificate`. The secret behind this flag contains the default certificate to be used in the mentioned case. If this flag is not provided NGINX will use a self-signed certificate.
Running without the flag `--default-ssl-certificate`:
```console
$ curl -v https://10.2.78.7:443 -k
* Rebuilt URL to: https://10.2.78.7:443/
* Trying 10.2.78.4...
* Connected to 10.2.78.7 (10.2.78.7) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=foo.bar.com
* start date: Apr 13 00:50:56 2016 GMT
* expire date: Apr 13 00:50:56 2017 GMT
* issuer: CN=foo.bar.com
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET / HTTP/1.1
> Host: 10.2.78.7
> User-Agent: curl/7.47.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.11.1
< Date: Thu, 21 Jul 2016 15:38:46 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
<
<span>The page you're looking for could not be found.</span>
* Connection #0 to host 10.2.78.7 left intact
```
Specifying `--default-ssl-certificate=default/foo-tls`:
```console
core@localhost ~ $ curl -v https://10.2.78.7:443 -k
* Rebuilt URL to: https://10.2.78.7:443/
* Trying 10.2.78.7...
* Connected to 10.2.78.7 (10.2.78.7) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: CN=foo.bar.com
* start date: Apr 13 00:50:56 2016 GMT
* expire date: Apr 13 00:50:56 2017 GMT
* issuer: CN=foo.bar.com
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET / HTTP/1.1
> Host: 10.2.78.7
> User-Agent: curl/7.47.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.11.1
< Date: Mon, 18 Jul 2016 21:02:59 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
<
<span>The page you're looking for could not be found.</span>
* Connection #0 to host 10.2.78.7 left intact
```
### Server-side HTTPS enforcement
By default the controller redirects (301) to HTTPS if TLS is enabled for that Ingress. If you want to disable this behavior globally, you can use `ssl-redirect: "false"` in the NGINX ConfigMap.
To configure this feature for specific ingress resources, you can use the `ingress.kubernetes.io/ssl-redirect: "false"` annotation in the particular resource.
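For illustration, a minimal ConfigMap sketch that disables the global redirect (the ConfigMap name `nginx-configuration` is an assumption; it must match whatever is passed to the controller's `--configmap` flag):
```yaml
# Sketch only: disables the global 301-to-HTTPS redirect.
# The name must match the controller's --configmap flag.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
data:
  ssl-redirect: "false"
```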
### HTTP Strict Transport Security
HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header, that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.
By default the controller redirects (301) to HTTPS if there is a TLS Ingress rule.
To disable this behavior use `hsts=false` in the NGINX config map.
### Automated Certificate Management with Kube-Lego
[Kube-Lego] automatically requests missing or expired certificates from [Let's Encrypt] by monitoring ingress resources and their referenced secrets. To enable this for an ingress resource you have to add an annotation:
```console
kubectl annotate ing ingress-demo kubernetes.io/tls-acme="true"
```
To set up Kube-Lego you can take a look at this [full example]. The first
version to fully support Kube-Lego is nginx Ingress controller 0.8.
[full example]:https://github.com/jetstack/kube-lego/tree/master/examples
[Kube-Lego]:https://github.com/jetstack/kube-lego
[Let's Encrypt]:https://letsencrypt.org
### Customizing NGINX
There are three ways to customize NGINX:
1. [ConfigMap](docs/user-guide/configmap.md): uses a ConfigMap to set global configurations in NGINX.
2. [Annotations](docs/user-guide/annotations.md): use this if you want a specific configuration for a particular Ingress rule.
3. Custom template: when more specific settings are required, like [open_file_cache](http://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache), a custom [log_format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format), adjusting [listen](http://nginx.org/en/docs/http/ngx_http_core_module.html#listen) options such as `rcvbuf`, or when it is not possible to change the configuration through the ConfigMap.
## Source IP address
@ -357,45 +115,6 @@ If the ingress controller is running in AWS we need to use the VPC IPv4 CIDR. Th
Another option is to enable proxy protocol using `use-proxy-protocol: "true"`.
In this mode NGINX does not use the content of the header to get the source IP address of the connection.
## Exposing TCP services
Ingress does not support TCP services (yet). For this reason this Ingress controller uses the flag `--tcp-services-configmap` to point to an existing ConfigMap where the key is the external port to use and the value is `<namespace/service name>:<service port>:[PROXY]:[PROXY]`.
It is possible to use a number or the name of the port. The last two fields are optional. By adding `PROXY` in either or both of the last two fields we can use Proxy Protocol decoding (listen) and/or encoding (proxy_pass) in a TCP service (https://www.nginx.com/resources/admin-guide/proxy-protocol/).
The next example shows how to expose the service `example-go` running in the namespace `default` on port `8080` using port `9000`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-example
data:
  9000: "default/example-go:8080"
```
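As a hedged sketch, the same service exposed with Proxy Protocol enabled in the two optional trailing fields (the service and ports simply reuse the example above):
```yaml
# Sketch only: same example-go service, but with PROXY protocol
# enabled for both the listen (decode) and proxy_pass (encode) sides.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-example
data:
  9000: "default/example-go:8080:PROXY:PROXY"
```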
Please check the [tcp services](examples/tcp/README.md) example
## Exposing UDP services
Since 1.9.13 NGINX provides [UDP Load Balancing](https://www.nginx.com/blog/announcing-udp-load-balancing/).
Ingress does not support UDP services (yet). For this reason this Ingress controller uses the flag `--udp-services-configmap` to point to an existing ConfigMap where the key is the external port to use and the value is `<namespace/service name>:<service port>`.
It is possible to use a number or the name of the port.
The next example shows how to expose the service `kube-dns` running in the namespace `kube-system` on port `53` using port `53`:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-configmap-example
data:
  53: "kube-system/kube-dns:53"
```
Please check the [udp services](examples/udp/README.md) example
## Proxy Protocol
If you are using a L4 proxy to forward the traffic to the NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you can use the [Proxy Protocol](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt) for forwarding traffic; this sends the connection details before forwarding the actual TCP connection itself.
@ -404,170 +123,32 @@ Amongst others [ELBs in AWS](http://docs.aws.amazon.com/ElasticLoadBalancing/lat
Please check the [proxy-protocol](examples/proxy-protocol/) example
## ModSecurity Web Application Firewall
ModSecurity is an open source, cross platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis (https://www.modsecurity.org).
The [ModSecurity-nginx](https://github.com/SpiderLabs/ModSecurity-nginx) connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).
The default modsecurity configuration file is located in `/etc/nginx/modsecurity/modsecurity.conf`. This is the only file located in this directory and it contains the default recommended configuration. Using a volume we can replace this file with the desired configuration.
To enable the modsecurity feature we need to specify `enable-modsecurity: "true"` in the configuration configmap.
The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts.
The directory `/etc/nginx/owasp-modsecurity-crs` contains the https://github.com/SpiderLabs/owasp-modsecurity-crs repository.
Using `enable-owasp-modsecurity-crs: "true"` we enable the use of these rules.
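A minimal ConfigMap sketch enabling both settings (the ConfigMap name is an assumption; it must match the one referenced by the controller's `--configmap` flag):
```yaml
# Sketch: turn on the ModSecurity module and the OWASP Core Rule Set.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
data:
  enable-modsecurity: "true"
  enable-owasp-modsecurity-crs: "true"
```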
## Opentracing
Using the third party module [rnburn/nginx-opentracing](https://github.com/rnburn/nginx-opentracing) the NGINX ingress controller can configure NGINX to enable [OpenTracing](http://opentracing.io) instrumentation.
By default this feature is disabled.
To enable the instrumentation we just need to enable it in the configuration ConfigMap and set the host where the traces should be sent.
In the [aledbf/zipkin-js-example](https://github.com/aledbf/zipkin-js-example) github repository it is possible to see a dockerized version of zipkin-js-example with the required Kubernetes descriptors.
To install the example and the zipkin collector we just need to run:
```console
kubectl create -f https://raw.githubusercontent.com/aledbf/zipkin-js-example/kubernetes/kubernetes/zipkin.yaml
kubectl create -f https://raw.githubusercontent.com/aledbf/zipkin-js-example/kubernetes/kubernetes/deployment.yaml
```
We also need to configure the NGINX controller ConfigMap with the required values:
```yaml
apiVersion: v1
data:
  enable-opentracing: "true"
  zipkin-collector-host: zipkin.default.svc.cluster.local
kind: ConfigMap
metadata:
  labels:
    k8s-app: nginx-ingress-controller
  name: nginx-custom-configuration
```
Using curl we can generate some traces:
```console
$ curl -v http://$(minikube ip)/api -H 'Host: zipkin-js-example'
$ curl -v http://$(minikube ip)/api -H 'Host: zipkin-js-example'
```
In the Zipkin interface we can see the details:
![zipkin screenshot](docs/images/zipkin-demo.png "zipkin collector screenshot")
### Custom errors
In case of an error in a request the body of the response is obtained from the `default backend`.
Each request to the default backend includes two headers:
- `X-Code` indicates the HTTP code to be returned to the client.
- `X-Format` the value of the `Accept` header.
**Important:** the custom backend must return the correct HTTP status code to be returned to the client. NGINX does not change the response from the custom default backend.
Using these two headers it is possible to use a custom backend service like [this one](https://github.com/kubernetes/ingress/tree/master/examples/customization/custom-errors/nginx) that inspects each request and returns a custom error page with the format expected by the client. Please check the example [custom-errors](examples/customization/custom-errors/README.md)
NGINX sends additional headers that can be used to build a custom response:
- X-Original-URI
- X-Namespace
- X-Ingress-Name
- X-Service-Name
### NGINX status page
The ngx_http_stub_status_module module provides access to basic status information. This is the default module, active on the URL `/nginx_status`.
This controller provides an alternative to this module using the third-party module [nginx-module-vts](https://github.com/vozlt/nginx-module-vts).
To use this module just provide a ConfigMap with the key `enable-vts-status: "true"`. The URL is exposed on port 18080.
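A hedged ConfigMap sketch (the name is an assumption, as above):
```yaml
# Sketch: replace the stub status page with nginx-module-vts on port 18080.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
data:
  enable-vts-status: "true"
```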
Please check the example `example/rc-default.yaml`
![nginx-module-vts screenshot](https://cloud.githubusercontent.com/assets/3648408/10876811/77a67b70-8183-11e5-9924-6a6d0c5dc73a.png "screenshot with filter")
To extract the information in JSON format the module provides a custom URL: `/nginx_status/format/json`
### Running multiple ingress controllers
If you're running multiple ingress controllers, or running on a cloudprovider that natively handles ingress, you need to specify the annotation `kubernetes.io/ingress.class: "nginx"` in all ingresses that you would like this controller to claim.
Not specifying the annotation will lead to multiple ingress controllers claiming the same ingress. Specifying the wrong value will result in all ingress controllers ignoring the ingress.
Multiple ingress controllers running in the same cluster was not supported in Kubernetes versions < 1.3.
### Running on Cloudproviders
If you're running this ingress controller on a cloudprovider, you should assume the provider also has a native Ingress controller and specify the `ingress.class` annotation as indicated in this section.
In addition to this, you will need to add a firewall rule for each port this controller is listening on, i.e. :80 and :443.
### Websockets
Support for websockets is provided by NGINX out of the box. No special configuration is required.
The only requirement to avoid connections being closed is to increase the values of `proxy-read-timeout` and `proxy-send-timeout`.
The default value of these settings is `60` seconds. A more adequate value to support websockets is a value higher than one hour (`3600`).
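For example, a ConfigMap sketch (name assumed) raising both timeouts to one hour:
```yaml
# Sketch: keep long-lived websocket connections open for up to an hour.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
data:
  proxy-read-timeout: "3600"
  proxy-send-timeout: "3600"
```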
### Optimizing TLS Time To First Byte (TTTFB)
NGINX provides the configuration option [ssl_buffer_size](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) to allow the optimization of the TLS record size. This improves the [Time To First Byte](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/) (TTTFB). The default value in the Ingress controller is `4k` (NGINX default is `16k`).
### Retries in non-idempotent methods
Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error.
The previous behavior can be restored using `retry-non-idempotent=true` in the configuration ConfigMap.
### Disabling NGINX ingress controller
Setting the annotation `kubernetes.io/ingress.class` to any value other than "nginx" or the empty string, will force the NGINX Ingress controller to ignore your Ingress. Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.
### Log format
The default configuration uses a custom logging format to add additional information about upstreams
```
log_format upstreaminfo '{{ if $cfg.useProxyProtocol }}$proxy_protocol_addr{{ else }}$remote_addr{{ end }} - '
'[$proxy_add_x_forwarded_for] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" '
'$request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status';
```
Sources:
- [upstream variables](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables)
- [embedded variables](http://nginx.org/en/docs/http/ngx_http_core_module.html#variables)
Description:
- `$proxy_protocol_addr`: if PROXY protocol is enabled
- `$remote_addr`: if PROXY protocol is disabled (default)
- `$proxy_add_x_forwarded_for`: the `X-Forwarded-For` client request header field with the $remote_addr variable appended to it, separated by a comma
- `$remote_user`: user name supplied with the Basic authentication
- `$time_local`: local time in the Common Log Format
- `$request`: full original request line
- `$status`: response status
- `$body_bytes_sent`: number of bytes sent to a client, not counting the response header
- `$http_referer`: value of the Referer header
- `$http_user_agent`: value of User-Agent header
- `$request_length`: request length (including request line, header, and request body)
- `$request_time`: time elapsed since the first bytes were read from the client
- `$proxy_upstream_name`: name of the upstream. The format is `upstream-<namespace>-<service name>-<service port>`
- `$upstream_addr`: keeps the IP address and port, or the path to the UNIX-domain socket of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas
- `$upstream_response_length`: keeps the length of the response obtained from the upstream server
- `$upstream_response_time`: keeps time spent on receiving the response from the upstream server; the time is kept in seconds with millisecond resolution
- `$upstream_status`: keeps status code of the response obtained from the upstream server
### Local cluster
Using [`hack/local-up-cluster.sh`](https://github.com/kubernetes/kubernetes/blob/master/hack/local-up-cluster.sh) it is possible to start a local Kubernetes cluster consisting of a master and a single node. Please read [running-locally.md](https://github.com/kubernetes/community/blob/master/contributors/devel/running-locally.md) for more details.
Use of `hostNetwork: true` in the ingress controller is required so that it can fall back to localhost:8080 for the apiserver if every other client creation check fails (e.g. service account not present, kubeconfig doesn't exist, no master env vars...)
### Debug & Troubleshooting
Using the flag `--v=XX` it is possible to increase the level of logging.
In particular:
- `--v=2` shows details using `diff` about the changes in the configuration in nginx
```console
I0316 12:24:37.581267 1 utils.go:148] NGINX configuration diff a//etc/nginx/nginx.conf b//etc/nginx/nginx.conf
I0316 12:24:37.581356 1 utils.go:149] --- /tmp/922554809 2016-03-16 12:24:37.000000000 +0000
+++ /tmp/079811012 2016-03-16 12:24:37.000000000 +0000
@@ -235,7 +235,6 @@
upstream default-echoheadersx {
least_conn;
- server 10.2.112.124:5000;
server 10.2.208.50:5000;
}
I0316 12:24:37.610073 1 command.go:69] change in configuration detected. Reloading...
```
- `--v=3` shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format
- `--v=5` configures NGINX in [debug mode](http://nginx.org/en/docs/debugging_log.html)
Peruse the [FAQ section](docs/faq/README.md)
Ask on one of the [user-support channels](CONTRIBUTING.md#support-channels)
### Limitations
- Ingress rules for TLS require the definition of the field `host`
@ -575,10 +156,3 @@ Ask on one of the [user-support channels](CONTRIBUTING.md#support-channels)
### Why endpoints and not services
The NGINX ingress controller does not use [Services](http://kubernetes.io/docs/user-guide/services) to route traffic to the pods. Instead it uses the Endpoints API in order to bypass [kube-proxy](http://kubernetes.io/docs/admin/kube-proxy/) to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.
### NGINX notes
Since `gcr.io/google_containers/nginx-slim:0.8` NGINX includes the following patches:
- Dynamic TLS record size [nginx__dynamic_tls_records.patch](https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency/)
NGINX provides the parameter `ssl_buffer_size` to adjust the size of the buffer. The default value in NGINX is 16KB. The ingress controller changes the default to 4KB. This improves the [TLS Time To First Byte (TTTFB)](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/), but the size is fixed. This patch adapts the size of the buffer to the content being served, helping to improve the perceived latency.
- [HTTP/2 header compression](https://raw.githubusercontent.com/cloudflare/sslconfig/master/patches/nginx_http2_hpack.patch)


@ -1,659 +0,0 @@
## Contents
* [Customizing NGINX](#customizing-nginx)
* [Custom NGINX configuration](#custom-nginx-configuration)
* [Custom NGINX template](#custom-nginx-template)
* [Annotations](#annotations)
* [Custom NGINX upstream checks](#custom-nginx-upstream-checks)
* [Custom NGINX upstream hashing](#custom-nginx-upstream-hashing)
* [Authentication](#authentication)
* [Rewrite](#rewrite)
* [Rate limiting](#rate-limiting)
* [SSL Passthrough](#ssl-passthrough)
* [Secure backends](#secure-backends)
* [Server-side HTTPS enforcement through redirect](#server-side-https-enforcement-through-redirect)
* [Whitelist source range](#whitelist-source-range)
* [Allowed parameters in configuration ConfigMap](#allowed-parameters-in-configuration-configmap)
* [Default configuration options](#default-configuration-options)
* [Websockets](#websockets)
* [Optimizing TLS Time To First Byte (TTTFB)](#optimizing-tls-time-to-first-byte-tttfb)
* [Retries in non-idempotent methods](#retries-in-non-idempotent-methods)
* [Custom max body size](#custom-max-body-size)
### Customizing NGINX
There are 3 ways to customize NGINX:
1. [ConfigMap](#allowed-parameters-in-configuration-configmap): create a stand alone ConfigMap, use this if you want a different global configuration.
2. [annotations](#annotations): use this if you want a specific configuration for the site defined in the Ingress rule.
3. custom template: when more specific settings are required, like [open_file_cache](http://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache), custom [log_format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format), adjusting [listen](http://nginx.org/en/docs/http/ngx_http_core_module.html#listen) options such as `rcvbuf`, or when it is not possible to apply a change through the ConfigMap.
#### Custom NGINX configuration
It is possible to customize the defaults in NGINX using a ConfigMap.
Please check the [custom configuration](../../examples/customization/custom-configuration/nginx/README.md) example.
#### Annotations
The following annotations are supported:
|Name |type|
|---------------------------|------|
|[ingress.kubernetes.io/add-base-url](#rewrite)|true or false|
|[ingress.kubernetes.io/app-root](#rewrite)|string|
|[ingress.kubernetes.io/affinity](#session-affinity)|cookie|
|[ingress.kubernetes.io/auth-realm](#authentication)|string|
|[ingress.kubernetes.io/auth-secret](#authentication)|string|
|[ingress.kubernetes.io/auth-type](#authentication)|basic or digest|
|[ingress.kubernetes.io/auth-url](#external-authentication)|string|
|[ingress.kubernetes.io/auth-tls-secret](#certificate-authentication)|string|
|[ingress.kubernetes.io/auth-tls-verify-depth](#certificate-authentication)|number|
|[ingress.kubernetes.io/auth-tls-verify-client](#certificate-authentication)|string|
|[ingress.kubernetes.io/auth-tls-error-page](#certificate-authentication)|string|
|[ingress.kubernetes.io/base-url-scheme](#rewrite)|string|
|[ingress.kubernetes.io/client-body-buffer-size](#client-body-buffer-size)|string|
|[ingress.kubernetes.io/configuration-snippet](#configuration-snippet)|string|
|[ingress.kubernetes.io/default-backend](#default-backend)|string|
|[ingress.kubernetes.io/enable-cors](#enable-cors)|true or false|
|[ingress.kubernetes.io/force-ssl-redirect](#server-side-https-enforcement-through-redirect)|true or false|
|[ingress.kubernetes.io/from-to-www-redirect](#redirect-from-to-www)|true or false|
|[ingress.kubernetes.io/limit-connections](#rate-limiting)|number|
|[ingress.kubernetes.io/limit-rps](#rate-limiting)|number|
|[ingress.kubernetes.io/proxy-body-size](#custom-max-body-size)|string|
|[ingress.kubernetes.io/proxy-connect-timeout](#custom-timeouts)|number|
|[ingress.kubernetes.io/proxy-send-timeout](#custom-timeouts)|number|
|[ingress.kubernetes.io/proxy-read-timeout](#custom-timeouts)|number|
|[ingress.kubernetes.io/proxy-request-buffering](#custom-timeouts)|string|
|[ingress.kubernetes.io/rewrite-target](#rewrite)|URI|
|[ingress.kubernetes.io/secure-backends](#secure-backends)|true or false|
|[ingress.kubernetes.io/server-alias](#server-alias)|string|
|[ingress.kubernetes.io/server-snippet](#server-snippet)|string|
|[ingress.kubernetes.io/service-upstream](#service-upstream)|true or false|
|[ingress.kubernetes.io/session-cookie-name](#cookie-affinity)|string|
|[ingress.kubernetes.io/session-cookie-hash](#cookie-affinity)|string|
|[ingress.kubernetes.io/ssl-redirect](#server-side-https-enforcement-through-redirect)|true or false|
|[ingress.kubernetes.io/ssl-passthrough](#ssl-passthrough)|true or false|
|[ingress.kubernetes.io/upstream-max-fails](#custom-nginx-upstream-checks)|number|
|[ingress.kubernetes.io/upstream-fail-timeout](#custom-nginx-upstream-checks)|number|
|[ingress.kubernetes.io/upstream-hash-by](#custom-nginx-upstream-hashing)|string|
|[ingress.kubernetes.io/whitelist-source-range](#whitelist-source-range)|CIDR|
#### Custom NGINX template
The NGINX template is located in the file `/etc/nginx/template/nginx.tmpl`. By mounting a volume it is possible to use a custom version.
Use the [custom-template](../../examples/customization/custom-template/README.md) example as a guide.
**Please note the template is tied to the Go code. Do not change names in the variable `$cfg`.**
For more information about the template syntax please check the [Go template package](https://golang.org/pkg/text/template/).
In addition to the built-in functions provided by the Go package the following functions are also available:
- empty: returns true if the specified parameter (string) is empty
- contains: [strings.Contains](https://golang.org/pkg/strings/#Contains)
- hasPrefix: [strings.HasPrefix](https://golang.org/pkg/strings/#HasPrefix)
- hasSuffix: [strings.HasSuffix](https://golang.org/pkg/strings/#HasSuffix)
- toUpper: [strings.ToUpper](https://golang.org/pkg/strings/#ToUpper)
- toLower: [strings.ToLower](https://golang.org/pkg/strings/#ToLower)
- buildLocation: helps to build the NGINX Location section in each server
- buildProxyPass: builds the reverse proxy configuration
- buildRateLimitZones: helps to build all the required rate limit zones
- buildRateLimit: helps to build a limit zone inside a location if it contains a rate limit annotation
### Custom NGINX upstream checks
NGINX exposes some flags in the [upstream configuration](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) that enable the configuration of each server in the upstream. The Ingress controller allows custom `max_fails` and `fail_timeout` parameters in a global context using `upstream-max-fails` and `upstream-fail-timeout` in the NGINX ConfigMap or in a particular Ingress rule. `upstream-max-fails` defaults to 0. This means NGINX will respect the container's `readinessProbe` if it is defined. If there is no probe and no values for `upstream-max-fails` NGINX will continue to send traffic to the container.
**With the default configuration NGINX will not health check your backends. Whenever the endpoints controller notices a readiness probe failure, that pod's IP will be removed from the list of endpoints. This will trigger the NGINX controller to also remove it from the upstreams.**
To use custom values in an Ingress rule define these annotations:
`ingress.kubernetes.io/upstream-max-fails`: number of unsuccessful attempts to communicate with the server that should occur in the duration set by the `upstream-fail-timeout` parameter to consider the server unavailable.
`ingress.kubernetes.io/upstream-fail-timeout`: time in seconds during which the specified number of unsuccessful attempts to communicate with the server should occur to consider the server unavailable. This is also the period of time the server will be considered unavailable.
In NGINX, backend server pools are called "[upstreams](http://nginx.org/en/docs/http/ngx_http_upstream_module.html)". Each upstream contains the endpoints for a service. An upstream is created for each service that has Ingress rules defined.
**Important:** All Ingress rules using the same service will use the same upstream. Only one of the Ingress rules should define annotations to configure the upstream servers.
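As a sketch, an Ingress metadata fragment carrying both annotations (the values shown are arbitrary examples, not recommendations):
```yaml
# Sketch: mark an upstream server unavailable after 3 failed attempts
# within a 10 second window.
metadata:
  annotations:
    ingress.kubernetes.io/upstream-max-fails: "3"
    ingress.kubernetes.io/upstream-fail-timeout: "10"
```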
Please check the [custom upstream check](../../examples/customization/custom-upstream-check/README.md) example.
### Custom NGINX upstream hashing
NGINX supports load balancing by client-server mapping based on [consistent hashing](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash) for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The [ketama](http://www.last.fm/user/RJ/journal/2007/04/10/392555/) consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes.
To enable consistent hashing for a backend:
`ingress.kubernetes.io/upstream-hash-by`: the nginx variable, text value or any combination thereof to use for consistent hashing. For example `ingress.kubernetes.io/upstream-hash-by: "$request_uri"` to consistently hash upstream requests by the current request URI.
### Authentication
It is possible to add authentication by adding additional annotations to the Ingress rule. The source of the authentication is a secret that contains usernames and passwords inside the key `auth`.
The annotations are:
```
ingress.kubernetes.io/auth-type: [basic|digest]
```
Indicates the [HTTP Authentication Type: Basic or Digest Access Authentication](https://tools.ietf.org/html/rfc2617).
```
ingress.kubernetes.io/auth-secret: secretName
```
The name of the secret that contains the usernames and passwords with access to the `path`s defined in the Ingress Rule.
The secret must be created in the same namespace as the Ingress rule.
```
ingress.kubernetes.io/auth-realm: "realm string"
```
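Putting the three annotations together, a hedged sketch of a protected Ingress (the host, service, and secret names are placeholders):
```yaml
# Sketch: basic auth backed by a secret named "basic-auth" that holds
# the htpasswd-style credentials in its "auth" key.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: protected-ingress
  annotations:
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: basic-auth
    ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoheaders-x
          servicePort: 80
```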
Please check the [auth](/examples/auth/basic/nginx/README.md) example.
### Certificate Authentication
It's possible to enable Certificate-Based Authentication (Mutual Authentication) using additional annotations in the Ingress rule.
The annotations are:
```
ingress.kubernetes.io/auth-tls-secret: secretName
```
The name of the secret that contains the full Certificate Authority chain `ca.crt` that is enabled to authenticate against this ingress. It's composed of namespace/secretName.
```
ingress.kubernetes.io/auth-tls-verify-depth
```
The validation depth between the provided client certificate and the Certification Authority chain.
```
ingress.kubernetes.io/auth-tls-verify-client
```
Enables verification of client certificates.
```
ingress.kubernetes.io/auth-tls-error-page
```
The URL/page to which the user should be redirected in case of a certificate authentication error.
Please check the [tls-auth](/examples/auth/client-certs/nginx/README.md) example.
### Configuration snippet
Using this annotation you can add additional configuration to the NGINX location. For example:
```
ingress.kubernetes.io/configuration-snippet: |
more_set_headers "Request-Id: $request_id";
```
### Default Backend
The ingress controller requires a default backend. This service handles the response when the service in the Ingress rule does not have endpoints.
This is a global configuration for the ingress controller. In some cases it may be required to return custom content or a custom format. In this scenario we can use the annotation `ingress.kubernetes.io/default-backend: <svc name>` to specify a custom default backend.
### Enable CORS
To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule add the annotation `ingress.kubernetes.io/enable-cors: "true"`. This will add a section in the server location enabling this functionality.
For more information please check https://enable-cors.org/server_nginx.html
### Server Alias
To add Server Aliases to an Ingress rule add the annotation `ingress.kubernetes.io/server-alias: "<alias>"`.
This will create a server with the same configuration, but a different `server_name`, than the provided host.
*Note:* A server-alias name cannot conflict with the hostname of an existing server. If it does, the server-alias
annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created,
the new server configuration will take precedence over the alias configuration.
For more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name
### Server snippet
Using the annotation `ingress.kubernetes.io/server-snippet` it is possible to add custom configuration in the server configuration block.
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/server-snippet: |
set $agentflag 0;
if ($http_user_agent ~* "(Mobile)" ){
set $agentflag 1;
}
if ( $agentflag = 1 ) {
return 301 https://m.example.com;
}
```
**Important:** This annotation can be used only once per host
### Client Body Buffer Size
Sets buffer size for reading client request body per location. In case the request body is larger than the buffer,
the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages.
This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is
applied to each location provided in the ingress rule.
*Note:* The annotation value must be given in a valid format, otherwise it will be ignored.
For example to set the client-body-buffer-size the following can be done:
* `ingress.kubernetes.io/client-body-buffer-size: "1000"` # 1000 bytes
* `ingress.kubernetes.io/client-body-buffer-size: 1k` # 1 kilobyte
* `ingress.kubernetes.io/client-body-buffer-size: 1K` # 1 kilobyte
* `ingress.kubernetes.io/client-body-buffer-size: 1m` # 1 megabyte
* `ingress.kubernetes.io/client-body-buffer-size: 1M` # 1 megabyte
For more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size
### External Authentication
To use an existing service that provides authentication the Ingress rule can be annotated with `ingress.kubernetes.io/auth-url` to indicate the URL where the HTTP request should be sent.
Additionally it is possible to set `ingress.kubernetes.io/auth-method` to specify the HTTP method to use (GET or POST) and `ingress.kubernetes.io/auth-send-body` to true or false (default).
```
ingress.kubernetes.io/auth-url: "URL to the authentication service"
```
Please check the [external-auth](/examples/auth/external-auth/nginx/README.md) example.
### Rewrite
In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404.
Set the annotation `ingress.kubernetes.io/rewrite-target` to the path expected by the service.
If the application contains relative links it is possible to add an additional annotation `ingress.kubernetes.io/add-base-url` that will prepend a [`base` tag](https://developer.mozilla.org/en/docs/Web/HTML/Element/base) in the header of the returned HTML from the backend.
If the scheme of the [`base` tag](https://developer.mozilla.org/en/docs/Web/HTML/Element/base) needs to be specified, set the annotation `ingress.kubernetes.io/base-url-scheme` to the desired scheme, such as `http` or `https`.
If the Application Root is exposed in a different path and needs to be redirected, set the annotation `ingress.kubernetes.io/app-root` to redirect requests for `/`.
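A sketch tying the rewrite annotation together (the host, path, and service names are placeholders):
```yaml
# Sketch: requests for foo.bar.com/something are rewritten to /
# before being proxied to the backend service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rewrite-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /something
        backend:
          serviceName: echoheaders-x
          servicePort: 80
```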
Please check the [rewrite](/examples/rewrite/nginx/README.md) example.
### Rate limiting
The annotations `ingress.kubernetes.io/limit-connections`, `ingress.kubernetes.io/limit-rps`, and `ingress.kubernetes.io/limit-rpm` define a limit on the connections that can be opened by a single client IP address. This can be used to mitigate [DDoS Attacks](https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus).
`ingress.kubernetes.io/limit-connections`: number of concurrent connections allowed from a single IP address.
`ingress.kubernetes.io/limit-rps`: number of connections that may be accepted from a given IP each second.
`ingress.kubernetes.io/limit-rpm`: number of connections that may be accepted from a given IP each minute.
You can specify the client IP source ranges to be excluded from rate-limiting through the `ingress.kubernetes.io/limit-whitelist` annotation. The value is a comma separated list of CIDRs.
If you specify multiple annotations in a single Ingress rule, `limit-rpm` takes precedence, and then `limit-rps`.
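For illustration, a metadata fragment combining the connection and request limits with a whitelist (all values are arbitrary examples):
```yaml
# Sketch: at most 10 concurrent connections and 5 requests per second
# per client IP, except for clients in 10.0.0.0/24.
metadata:
  annotations:
    ingress.kubernetes.io/limit-connections: "10"
    ingress.kubernetes.io/limit-rps: "5"
    ingress.kubernetes.io/limit-whitelist: "10.0.0.0/24"
```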
The annotations `ingress.kubernetes.io/limit-rate` and `ingress.kubernetes.io/limit-rate-after` define a limit on the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice the specified limit.
`ingress.kubernetes.io/limit-rate-after`: sets the initial amount after which the further transmission of a response to a client will be rate limited.
`ingress.kubernetes.io/limit-rate`: the rate, in bytes per second, at which the response is transmitted to a client.
To configure this setting globally for all Ingress rules, the `limit-rate-after` and `limit-rate` values may be set in the NGINX ConfigMap. A value set in an Ingress annotation overrides the global setting.
### SSL Passthrough
The annotation `ingress.kubernetes.io/ssl-passthrough` allows TLS termination to be configured in the pod instead of in NGINX.
**Important:**
- Using the annotation `ingress.kubernetes.io/ssl-passthrough` invalidates all the other available annotations. This is because SSL Passthrough works in L4 (TCP).
- The use of this annotation requires the flag `--enable-ssl-passthrough` (By default it is disabled)
### Secure backends
By default NGINX uses `http` to reach the services. Adding the annotation `ingress.kubernetes.io/secure-backends: "true"` in the Ingress rule changes the protocol to `https`.
### Service Upstream
By default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration. This annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port. This can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. See issue [#257](https://github.com/kubernetes/ingress/issues/257).
#### Known Issues
If the `service-upstream` annotation is specified the following things should be taken into consideration:
* Sticky Sessions will not work as only round-robin load balancing is supported.
* The `proxy_next_upstream` directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.
### Server-side HTTPS enforcement through redirect
By default the controller redirects (301) to `HTTPS` if TLS is enabled for that ingress. If you want to disable that behavior globally, you can use `ssl-redirect: "false"` in the NGINX config map.
To configure this feature for specific ingress resources, you can use the `ingress.kubernetes.io/ssl-redirect: "false"` annotation in the particular resource.
When using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to `HTTPS` even when there is no TLS certificate available. This can be achieved by using the `ingress.kubernetes.io/force-ssl-redirect: "true"` annotation in the particular resource.
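A metadata fragment sketching that setup:
```yaml
# Sketch: force a redirect to HTTPS even though TLS terminates at an
# external load balancer, so no TLS section is defined on the Ingress.
metadata:
  annotations:
    ingress.kubernetes.io/force-ssl-redirect: "true"
```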
### Redirect from to www
In some scenarios it is required to redirect from `www.domain.com` to `domain.com` or vice versa.
To enable this feature use the annotation `ingress.kubernetes.io/from-to-www-redirect: "true"`
**Important:**
If at some point a new Ingress is created with a host equal to one of the options (like `domain.com`) the annotation will be omitted.
### Whitelist source range
You can specify the allowed client IP source ranges through the `ingress.kubernetes.io/whitelist-source-range` annotation. The value is a comma separated list of [CIDRs](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing), e.g. `10.0.0.0/24,172.10.0.1`.
To configure this setting globally for all Ingress rules, the `whitelist-source-range` value may be set in the NGINX ConfigMap.
*Note:* Adding an annotation to an Ingress rule overrides any global restriction.
Please check the [whitelist](/examples/affinity/cookie/nginx/README.md) example.
### Session Affinity
The annotation `ingress.kubernetes.io/affinity` enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server.
The only affinity type available for NGINX is `cookie`.
### Cookie affinity
If you use the `cookie` type you can also specify the name of the cookie that will be used to route the requests with the annotation `ingress.kubernetes.io/session-cookie-name`. The default is to create a cookie named 'route'.
In the case of NGINX the annotation `ingress.kubernetes.io/session-cookie-hash` defines which algorithm will be used to hash the used upstream. The default value is `md5` and possible values are `md5`, `sha1` and `index`.
The `index` option is not hashed; an in-memory index is used instead. It is quicker and the overhead is shorter. *Warning:* the matching against the upstream servers list is inconsistent, so at reload, if upstream servers have changed, index values are not guaranteed to correspond to the same server as before. Use it with caution and only if you need to.
In NGINX this feature is implemented by the third party module [nginx-sticky-module-ng](https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng). The workflow used to define which upstream server will be used is explained [here](https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng/raw/08a395c66e425540982c00482f55034e1fee67b6/docs/sticky.pdf)
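A hedged metadata fragment enabling cookie affinity (the cookie name and hash shown are simply the documented options):
```yaml
# Sketch: route each client to the same upstream via a cookie named
# "route", hashing the chosen upstream with sha1.
metadata:
  annotations:
    ingress.kubernetes.io/affinity: cookie
    ingress.kubernetes.io/session-cookie-name: route
    ingress.kubernetes.io/session-cookie-hash: sha1
```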
### Custom timeouts
Using the configuration ConfigMap it is possible to set the default global timeout for connections to the upstream servers.
In some scenarios different values are required. To allow this we provide annotations that allow this customization:
- ingress.kubernetes.io/proxy-connect-timeout
- ingress.kubernetes.io/proxy-send-timeout
- ingress.kubernetes.io/proxy-read-timeout
- ingress.kubernetes.io/proxy-request-buffering
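As a sketch, the annotations side by side (the timeout values, in seconds, are chosen arbitrarily; the assumption here is that `proxy-request-buffering` takes `on` or `off`, following NGINX's `proxy_request_buffering` directive):
```yaml
# Sketch: per-Ingress overrides of the global proxy timeouts.
metadata:
  annotations:
    ingress.kubernetes.io/proxy-connect-timeout: "30"
    ingress.kubernetes.io/proxy-send-timeout: "60"
    ingress.kubernetes.io/proxy-read-timeout: "60"
    ingress.kubernetes.io/proxy-request-buffering: "off"
```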
### Allowed parameters in configuration ConfigMap
**proxy-body-size:** Sets the maximum allowed size of the client request body. See NGINX [client_max_body_size](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size).
**custom-http-errors:** Sets which HTTP codes should be passed for processing with the [error_page directive](http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page).
Setting at least one code also enables [proxy_intercept_errors](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors), which is required to process error_page.
Example usage: `custom-http-errors: 404,415`
**disable-access-log:** Disables the Access Log from the entire Ingress Controller. This is 'false' by default.
**access-log-path:** Access log path. Goes to '/var/log/nginx/access.log' by default. http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log
**error-log-path:** Error log path. Goes to '/var/log/nginx/error.log' by default. http://nginx.org/en/docs/ngx_core_module.html#error_log
**enable-modsecurity:** Enables the ModSecurity module for NGINX.
Disabled by default.
**enable-owasp-modsecurity-crs:** Enables the OWASP ModSecurity Core Rule Set (CRS).
Disabled by default.
**disable-ipv6:** Disables listening on IPv6. This is 'false' by default.
**enable-dynamic-tls-records:** Enables dynamically sized TLS records to improve time-to-first-byte. Enabled by default. See [CloudFlare's blog](https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency) for more information.
**enable-underscores-in-headers:** Enables underscores in header names. This is disabled by default.
**enable-vts-status:** Allows the replacement of the default status page with a third party module named [nginx-module-vts](https://github.com/vozlt/nginx-module-vts).
**error-log-level:** Configures the logging level of errors. Possible values are `debug`, `info`, `notice`, `warn`, `error`, `crit`, `alert` and `emerg`, in order of increasing severity.
http://nginx.org/en/docs/ngx_core_module.html#error_log
**gzip-types:** Sets the MIME types in addition to "text/html" to compress. The special value "\*" matches any MIME type.
Responses with the "text/html" type are always compressed if `use-gzip` is enabled.
**hsts:** Enables or disables the HSTS header in servers running SSL.
HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be accessed using HTTPS instead of HTTP. It provides protection against protocol downgrade attacks and cookie theft.
https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server
**hsts-include-subdomains:** Enables or disables the use of HSTS in all the subdomains of the server-name.
**hsts-max-age:** Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.
**hsts-preload:** Enables or disables the preload attribute in the HSTS feature (when it is enabled).
**ignore-invalid-headers:** Sets whether header fields with invalid names should be ignored. This is 'true' by default.
**keep-alive:** Sets the time during which a keep-alive client connection will stay open on the server side.
The zero value disables keep-alive client connections.
http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout
**load-balance:** Sets the algorithm to use for load balancing. The value can either be `round_robin` to
use the default round-robin load balancer, `least_conn` to use the least connected method, or
`ip_hash` to use a hash of the server for routing. The default is `least_conn`.
http://nginx.org/en/docs/http/load_balancing.html.
**log-format-upstream:** Sets the nginx [log format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format).
Example for json output:
```
log-format-upstream: '{ "time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr",
"x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$request_id", "remote_user":
"$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status":
$status, "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri",
"request_query": "$args", "request_length": $request_length, "duration": $request_time,
"method": "$request_method", "http_referrer": "$http_referer", "http_user_agent":
"$http_user_agent" }'
```
**log-format-stream:** Sets the nginx [stream format](https://nginx.org/en/docs/stream/ngx_stream_log_module.html#log_format).
**max-worker-connections:** Sets the maximum number of simultaneous connections that can be opened by each [worker process](http://nginx.org/en/docs/ngx_core_module.html#worker_connections).
**proxy-buffer-size:** Sets the size of the buffer used for [reading the first part of the response](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) received from the proxied server. This part usually contains a small response header.
**proxy-connect-timeout:** Sets the timeout for [establishing a connection with a proxied server](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_connect_timeout). It should be noted that this timeout cannot usually exceed 75 seconds.
**proxy-cookie-domain:** Sets a text that [should be changed in the domain attribute](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_domain) of the “Set-Cookie” header fields of a proxied server response.
**proxy-cookie-path:** Sets a text that [should be changed in the path attribute](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cookie_path) of the “Set-Cookie” header fields of a proxied server response.
**proxy-read-timeout:** Sets the timeout in seconds for [reading a response from the proxied server](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout). The timeout is set only between two successive read operations, not for the transmission of the whole response.
**proxy-send-timeout:** Sets the timeout in seconds for [transmitting a request to the proxied server](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_send_timeout). The timeout is set only between two successive write operations, not for the transmission of the whole request.
**proxy-next-upstream:** Specifies in [which cases](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_next_upstream) a request should be passed to the next server.
**proxy-request-buffering:** Enables or disables [buffering of a client request body](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering).
**retry-non-idempotent:** Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server.
The previous behavior can be restored using the value "true".
**server-name-hash-bucket-size:** Sets the size of the bucket for the server names hash tables.
http://nginx.org/en/docs/hash.html
http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size
**server-name-hash-max-size:** Sets the maximum size of the [server names hash tables](http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_max_size) used in server names, map directives values, MIME types, names of request header strings, etc.
http://nginx.org/en/docs/hash.html
**proxy-headers-hash-bucket-size:** Sets the size of the bucket for the proxy headers hash tables.
http://nginx.org/en/docs/hash.html
https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size
**proxy-headers-hash-max-size:** Sets the maximum size of the proxy headers hash tables.
http://nginx.org/en/docs/hash.html
https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size
**server-tokens:** Sends the NGINX Server header in responses and displays the NGINX version in error pages. Enabled by default.
**map-hash-bucket-size:** Sets the bucket size for the [map variables hash tables](http://nginx.org/en/docs/http/ngx_http_map_module.html#map_hash_bucket_size). The details of setting up hash tables are provided in a separate [document](http://nginx.org/en/docs/hash.html).
**ssl-buffer-size:** Sets the size of the [SSL buffer](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) used for sending data.
The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).
https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/
**ssl-ciphers:** Sets the [ciphers](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers) list to enable. The ciphers are specified in the format understood by the OpenSSL library.
The default cipher list is:
`ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256`.
The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority.
The recommendation above prioritizes algorithms that provide perfect [forward secrecy](https://wiki.mozilla.org/Security/Server_Side_TLS#Forward_Secrecy).
Please check the [Mozilla SSL Configuration Generator](https://mozilla.github.io/server-side-tls/ssl-config-generator/).
**ssl-dh-param:** Sets the name of the secret that contains the Diffie-Hellman key to help with "Perfect Forward Secrecy".
https://www.openssl.org/docs/manmaster/apps/dhparam.html
https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam
http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam
**ssl-protocols:** Sets the [SSL protocols](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols) to use.
The default is: `TLSv1 TLSv1.1 TLSv1.2` (see the default configuration table below).
TLSv1 is enabled to allow old clients like:
- [IE 8-10 / Win 7](https://www.ssllabs.com/ssltest/viewClient.html?name=IE&version=8-10&platform=Win%207&key=113)
- [Java 7u25](https://www.ssllabs.com/ssltest/viewClient.html?name=Java&version=7u25&key=26)
If you don't need to support these clients please remove `TLSv1` to improve security.
Please check the result of the configuration using `https://ssllabs.com/ssltest/analyze.html` or `https://testssl.sh`.
**ssl-redirect:** Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule)
Default is "true".
**ssl-session-cache:** Enables or disables the use of shared [SSL cache](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) among worker processes.
**ssl-session-cache-size:** Sets the size of the [SSL shared session cache](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) between all worker processes.
**ssl-session-tickets:** Enables or disables session resumption through [TLS session tickets](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_tickets).
**ssl-session-ticket-key:** Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string.
http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_tickets
By default, a randomly generated key is used.
To create a ticket: `openssl rand 80 | base64 -w0`
**ssl-session-timeout:** Sets the time during which a client may [reuse the session](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_timeout) parameters stored in a cache.
**upstream-max-fails:** Sets the number of unsuccessful attempts to communicate with the [server](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) that should happen in the duration set by the `fail_timeout` parameter to consider the server unavailable.
**upstream-fail-timeout:** Sets the time during which the specified number of unsuccessful attempts to communicate with the [server](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) should happen to consider the server unavailable.
**use-gzip:** Enables or disables compression of HTTP responses using the ["gzip" module](http://nginx.org/en/docs/http/ngx_http_gzip_module.html).
The default MIME type list to compress is: `application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component`.
**use-http2:** Enables or disables [HTTP/2](http://nginx.org/en/docs/http/ngx_http_v2_module.html) support in secure connections.
**use-proxy-protocol:** Enables or disables the [PROXY protocol](https://www.nginx.com/resources/admin-guide/proxy-protocol/) to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).
**whitelist-source-range:** Sets the default whitelisted IPs for each `server` block. This can be overwritten by an annotation on an Ingress rule. See [ngx_http_access_module](http://nginx.org/en/docs/http/ngx_http_access_module.html).
**worker-processes:** Sets the number of [worker processes](http://nginx.org/en/docs/ngx_core_module.html#worker_processes). The default of "auto" means number of available CPU cores.
**worker-shutdown-timeout:** Sets a timeout for NGINX to [wait for worker to gracefully shutdown](http://nginx.org/en/docs/ngx_core_module.html#worker_shutdown_timeout). The default is "10s".
**limit-conn-zone-variable:** Sets parameters for a shared memory zone that will keep states for various keys of [limit_conn_zone](http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn_zone). The default variable is `$binary_remote_addr`, whose size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.
**proxy-set-headers:** Sets custom headers from a configmap before sending traffic to backends. See [example](https://github.com/kubernetes/ingress/tree/master/examples/customization/custom-headers/nginx)
**add-headers:** Sets custom headers from a configmap before sending traffic to the client. See `proxy-set-headers` [example](https://github.com/kubernetes/ingress/tree/master/examples/customization/custom-headers/nginx)
**bind-address:** Sets the addresses on which the server will accept requests instead of `*`. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.
**enable-opentracing:** Enables the nginx OpenTracing extension. https://github.com/rnburn/nginx-opentracing
Default is "false".
**zipkin-collector-host:** Specifies the host to use when uploading traces. It must be a valid URL.
**zipkin-collector-port:** Specifies the port to use when uploading traces.
Default: 9411
**zipkin-service-name:** Specifies the service name to use for any traces created.
Default: nginx
**http-snippet:** Adds custom configuration to the http section of the nginx configuration.
Default: ""
**server-snippet:** Adds custom configuration to all the servers in the nginx configuration.
Default: ""
**location-snippet:** Adds custom configuration to all the locations in the nginx configuration.
Default: ""
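A sketch of a snippet key in the `nginx-configuration` ConfigMap used by the deployment guide below (the directive shown is illustrative; any valid NGINX configuration for the corresponding context can be used):
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  server-snippet: |
    # added to every server{} block; illustrative example only
    add_header X-Served-By $hostname;
```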
### Default configuration options
The following table shows the options and their default values.
|name |default|
|---------------------------|------|
|body-size|1m|
|custom-http-errors|" "|
|enable-dynamic-tls-records|"true"|
|enable-sticky-sessions|"false"|
|enable-underscores-in-headers|"false"|
|enable-vts-status|"false"|
|error-log-level|notice|
|gzip-types|see use-gzip description above|
|hsts|"true"|
|hsts-include-subdomains|"true"|
|hsts-max-age|"15724800"|
|hsts-preload|"false"|
|ignore-invalid-headers|"true"|
|keep-alive|"75"|
|log-format-stream|[$time_local] $protocol $status $bytes_sent $bytes_received $session_time|
|log-format-upstream|[$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status|
|map-hash-bucket-size|"64"|
|max-worker-connections|"16384"|
|proxy-body-size|same as body-size|
|proxy-buffer-size|"4k"|
|proxy-request-buffering|"on"|
|proxy-connect-timeout|"5"|
|proxy-cookie-domain|"off"|
|proxy-cookie-path|"off"|
|proxy-read-timeout|"60"|
|proxy-real-ip-cidr|0.0.0.0/0|
|proxy-send-timeout|"60"|
|retry-non-idempotent|"false"|
|server-name-hash-bucket-size|"64"|
|server-name-hash-max-size|"512"|
|server-tokens|"true"|
|ssl-buffer-size|4k|
|ssl-ciphers||
|ssl-dh-param|value from openssl|
|ssl-protocols|TLSv1 TLSv1.1 TLSv1.2|
|ssl-session-cache|"true"|
|ssl-session-cache-size|10m|
|ssl-session-tickets|"true"|
|ssl-session-timeout|10m|
|use-gzip|"true"|
|use-http2|"true"|
|upstream-keepalive-connections|"0" (disabled)|
|variables-hash-bucket-size|64|
|variables-hash-max-size|2048|
|vts-status-zone-size|10m|
|vts-default-filter-key|$geoip_country_code country::*|
|whitelist-source-range|permit all|
|worker-processes|number of CPUs|
|limit-conn-zone-variable|$binary_remote_addr|
|bind-address||
### Websockets
Support for websockets is provided by NGINX out of the box. No special configuration is required.
The only requirement to avoid connections being closed is to increase the values of `proxy-read-timeout` and `proxy-send-timeout`. The default value of these settings is `60` seconds.
A more adequate value to support websockets is a value higher than one hour (`3600`).
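For example, a sketch of the `nginx-configuration` ConfigMap raising both timeouts to one hour:
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-read-timeout: "3600"
  proxy-send-timeout: "3600"
```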
### Optimizing TLS Time To First Byte (TTTFB)
NGINX provides the configuration option [ssl_buffer_size](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) to allow the optimization of the TLS record size. This improves the [Time To First Byte](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/) (TTTFB). The default value in the Ingress controller is `4k` (NGINX default is `16k`).
### Retries in non-idempotent methods
Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error.
The previous behavior can be restored using `retry-non-idempotent=true` in the configuration ConfigMap.
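A sketch of the corresponding ConfigMap entry:
```yaml
data:
  retry-non-idempotent: "true"
```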
### Custom max body size
For NGINX, a 413 error will be returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter [`client_max_body_size`](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size).
To configure this setting globally for all Ingress rules, the `proxy-body-size` value may be set in the NGINX ConfigMap.
To use custom values in an Ingress rule, define this annotation:
```
ingress.kubernetes.io/proxy-body-size: 8m
```
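The global equivalent, sketched as a ConfigMap entry:
```yaml
data:
  proxy-body-size: "8m"
```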

166
deploy/README.md Normal file
View file

@ -0,0 +1,166 @@
# Installation Guide
## Contents
- [Mandatory commands](#mandatory-commands)
- [Install without RBAC roles](#install-without-rbac-roles)
- [Install with RBAC roles](#install-with-rbac-roles)
- [Custom Provider](#custom-provider)
- [minikube](#minikube)
- [AWS](#aws)
- [GCE - GKE](#gce-gke)
- [Azure](#azure)
- [Baremetal](#baremetal)
- [Using Helm](#using-helm)
- [Verify installation](#verify-installation)
- [Detect installed version](#detect-installed-version)
## Mandatory commands
```console
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml \
| kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml \
| kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml \
| kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml \
| kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml \
| kubectl apply -f -
```
## Install without RBAC roles
```console
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/without-rbac.yaml \
| kubectl apply -f -
```
## Install with RBAC roles
Please check the [RBAC](rbac.md) document.
```console
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml \
| kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml \
| kubectl apply -f -
```
## Custom Provider
There are cloud-provider-specific YAML files.
### minikube
```console
minikube addons enable ingress
```
### AWS
In AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of `Type=LoadBalancer`.
This setup requires choosing in which layer (L4 or L7) we want to configure the ELB:
- [Layer 4](https://en.wikipedia.org/wiki/OSI_model#Layer_4:_Transport_Layer): use TCP as the listener protocol for ports 80 and 443.
- [Layer 7](https://en.wikipedia.org/wiki/OSI_model#Layer_7:_Application_Layer): use HTTP as the listener protocol for port 80 and terminate TLS in the ELB
For L4:
```console
kubectl apply -f provider/aws/service-l4.yaml
kubectl apply -f provider/aws/patch-configmap-l4.yaml
```
For L7:
Edit the file `provider/aws/service-l7.yaml`, replacing the dummy id with a valid one: `"arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"`.
Then execute:
```console
kubectl apply -f provider/aws/service-l7.yaml
kubectl apply -f provider/aws/patch-configmap-l7.yaml
```
This example creates an ELB with just two listeners, one on port 80 and another on port 443.
![Listeners](../docs/images/listener.png)
If the ingress controller uses RBAC, run:
```console
kubectl apply -f provider/aws/patch-service-with-rbac.yaml
```
If not, run:
```console
kubectl apply -f provider/aws/patch-service-without-rbac.yaml
```
### GCE - GKE
```console
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/gce-gke/service.yaml \
| kubectl apply -f -
```
### Azure
```console
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/azure/service.yaml \
| kubectl apply -f -
```
### Baremetal
Using [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport):
```console
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml \
| kubectl apply -f -
```
Using HostPort:
```console
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-hostport.yaml \
| kubectl apply -f -
```
## Using Helm
The NGINX Ingress controller can be installed via [Helm](https://helm.sh/) using the chart [stable/nginx-ingress](https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress) from the official charts repository.
To install the chart with the release name `my-nginx`:
```console
helm install stable/nginx-ingress --name my-nginx
```
## Verify installation
To check if the ingress controller pods have started, run the following command:
```console
kubectl get pods --all-namespaces -l app=ingress-nginx --watch
```
Once the ingress controller pods are running, you can cancel the above command by typing `Ctrl+C`.
Now, you are ready to create your first ingress.
## Detect installed version
To detect which version of the ingress controller is running, exec into the pod and run the `nginx-ingress-controller version` command.
```console
POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app=ingress-nginx -o jsonpath={.items[0].metadata.name})
kubectl exec -it $POD_NAME -n $POD_NAMESPACE /nginx-ingress-controller version
```

7
deploy/configmap.yaml Normal file
View file

@ -0,0 +1,7 @@
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx

View file

@ -3,14 +3,14 @@ kind: Deployment
metadata:
name: default-http-backend
labels:
k8s-app: default-http-backend
namespace: default
app: default-http-backend
namespace: ingress-nginx
spec:
replicas: 1
template:
metadata:
labels:
k8s-app: default-http-backend
app: default-http-backend
spec:
terminationGracePeriodSeconds: 60
containers:
@ -36,16 +36,17 @@ spec:
cpu: 10m
memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
namespace: default
namespace: ingress-nginx
labels:
k8s-app: default-http-backend
app: default-http-backend
spec:
ports:
- port: 80
targetPort: 8080
selector:
k8s-app: default-http-backend
app: default-http-backend

4
deploy/namespace.yaml Normal file
View file

@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx

View file

@ -0,0 +1,9 @@
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  use-proxy-protocol: "true"

View file

@ -0,0 +1,9 @@
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  use-proxy-protocol: "false"

View file

@ -0,0 +1,40 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
      - name: nginx-ingress-controller
        image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443

View file

@ -0,0 +1,39 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
      - name: nginx-ingress-controller
        image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443

View file

@ -0,0 +1,20 @@
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https

View file

@ -0,0 +1,25 @@
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
  annotations:
    # replace with the correct value of the generated certificate in the AWS console
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"
    # the backend instances are HTTP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    # Map port 443
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: http

View file

@ -0,0 +1,19 @@
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: http

View file

@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  ports:
  - name: http
    hostPort: 80
    targetPort: 80
    protocol: TCP
  - name: https
    hostPort: 443
    targetPort: 443
    protocol: TCP
  selector:
    app: ingress-nginx

View file

@ -1,21 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: nginx-ingress
namespace: nginx-ingress
name: ingress-nginx
namespace: ingress-nginx
spec:
# Can also use LoadBalancer type
type: NodePort
ports:
- name: http
port: 8080
nodePort: 30080
targetPort: 80
protocol: TCP
- name: https
port: 443
nodePort: 30443
targetPort: 443
protocol: TCP
selector:
k8s-app: nginx-ingress-lb
app: ingress-nginx

View file

@ -0,0 +1,19 @@
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: http

View file

@ -1,23 +1,19 @@
# Role Based Access Control
This example demonstrates how to apply an nginx ingress controller with role based access control
# Role Based Access Control - RBAC
## Overview
This example applies to nginx-ingress-controllers being deployed in an
environment with RBAC enabled.
This example applies to nginx-ingress-controllers being deployed in an environment with RBAC enabled.
Role Based Access Control is comprised of four layers:
1. `ClusterRole` - permissions assigned to a role that apply to an entire cluster
2. `ClusterRoleBinding` - binding a ClusterRole to a specific account
3. `Role` - permissions assigned to a role that apply to a specific namespace
4. `RoleBinding` - binding a Role to a specific account
1. `ClusterRole` - permissions assigned to a role that apply to an entire cluster
2. `ClusterRoleBinding` - binding a ClusterRole to a specific account
3. `Role` - permissions assigned to a role that apply to a specific namespace
4. `RoleBinding` - binding a Role to a specific account
In order for RBAC to be applied to an nginx-ingress-controller, that controller
should be assigned to a `ServiceAccount`. That `ServiceAccount` should be
bound to the `Role`s and `ClusterRole`s defined for the
nginx-ingress-controller.
bound to the `Role`s and `ClusterRole`s defined for the nginx-ingress-controller.
## Service Accounts created in this example
@ -27,8 +23,7 @@ One ServiceAccount is created in this example, `nginx-ingress-serviceaccount`.
There are two sets of permissions defined in this example. Cluster-wide
permissions defined by the `ClusterRole` named `nginx-ingress-clusterrole`, and
namespace specific permissions defined by the `Role` named
`nginx-ingress-role`.
namespace specific permissions defined by the `Role` named `nginx-ingress-role`.
### Cluster Permissions
@ -76,41 +71,6 @@ nginx-ingress-controller.
The ServiceAccount `nginx-ingress-serviceaccount` is bound to the Role
`nginx-ingress-role` and the ClusterRole `nginx-ingress-clusterrole`.
## Namespace created in this example
The `Namespace` named `nginx-ingress` is defined in this example. The
namespace name can be changed arbitrarily as long as all of the references
change as well.
## Usage
1. Create the `Namespace`, `Service Account`, `ClusterRole`, `Role`,
`ClusterRoleBinding`, and `RoleBinding`.
```sh
kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress/master/examples/rbac/nginx/nginx-ingress-controller-rbac.yml
```
2. Create default backend
```sh
kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress/master/examples/rbac/nginx/default-backend.yml
```
3. Create the nginx-ingress-controller
For this example to work, the Service must be in the nginx-ingress namespace:
```sh
kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress/master/examples/rbac/nginx/nginx-ingress-controller.yml
```
The serviceAccountName associated with the containers in the deployment must
match the serviceAccount from nginx-ingress-controller-rbac.yml The namespace
references in the Deployment metadata, container arguments, and POD_NAMESPACE
should be in the nginx-ingress namespace.
4. Create ingress service
```sh
kubectl create -f https://raw.githubusercontent.com/kubernetes/ingress/master/examples/rbac/nginx/nginx-ingress-controller-service.yml
```
match the serviceAccount. The namespace references in the Deployment metadata,
container arguments, and POD_NAMESPACE should be in the nginx-ingress namespace.

View file

@ -1,14 +1,11 @@
apiVersion: v1
kind: Namespace
metadata:
name: nginx-ingress
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: nginx-ingress
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
@ -60,12 +57,14 @@ rules:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: nginx-ingress
namespace: ingress-nginx
rules:
- apiGroups:
- ""
@ -101,14 +100,14 @@ rules:
- endpoints
verbs:
- get
- create
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: nginx-ingress
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
@ -116,8 +115,10 @@ roleRef:
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: nginx-ingress
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
@ -129,4 +130,4 @@ roleRef:
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: nginx-ingress
namespace: ingress-nginx

View file

@ -0,0 +1,5 @@
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx

View file

@ -0,0 +1,5 @@
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx

39
deploy/with-rbac.yaml Normal file
View file

@ -0,0 +1,39 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
      - name: nginx-ingress-controller
        image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443

38
deploy/without-rbac.yaml Normal file
View file

@ -0,0 +1,38 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
      - name: nginx-ingress-controller
        image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.15
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443

View file

@ -1,21 +0,0 @@
# Ingress Documentation and Examples
This directory contains documentation.
## File naming convention
Try to create a README file in every directory containing documentation and index
out from there, that's what readers will notice first. Use lower case for other
file names unless you have a reason to draw someone's attention to it.
Avoid CamelCase.
Rationale:
* Files that are common to all controllers, or heavily index other files, are
named using ALL CAPS. This is done to indicate to the user that they should
visit these files first. Examples include PREREQUISITES and README.
* Files specific to a controller, or files that contain information about
various controllers, are named using all lower case. Examples include
configuration and catalog files.

View file

@ -1,55 +0,0 @@
# Ingress Admin Guide
This is a guide to the different deployment styles of an Ingress controller.
## Vanilla deployments
__GCP__: On GCE/GKE, the Ingress controller runs on the
master. If you wish to stop this controller and run another instance on your
nodes instead, you can do so by following this [example](/examples/deployment/gce).
__Generic__: You can deploy a generic (nginx or haproxy) Ingress controller by simply
running it as a pod in your cluster, as shown in the [examples](/examples/deployment).
Please note that you must specify the `ingress.class`
[annotation](/examples/PREREQUISITES.md#ingress-class) if you're running on a
cloudprovider, or the cloudprovider controller will fight the nginx controller
for the Ingress.
__AWS__: Until we have an AWS ALB Ingress controller, you can deploy the nginx
Ingress controller behind an ELB on AWS, as shows in the [next section](#stacked-deployments).
## Stacked deployments
__Behind a LoadBalancer Service__: You can deploy a generic controller behind a
Service of `Type=LoadBalancer`, by following this [example](/examples/static-ip/nginx#acquiring-an-ip).
More specifically, first create a LoadBalancer Service that selects the generic
controller pods, then start the generic controller with the `--publish-service`
flag.
__Behind another Ingress__: Sometimes it is desirable to deploy a stack of
Ingresses, like the GCE Ingress -> nginx Ingress -> application. You might
want to do this because the GCE HTTP lb offers some features that the GCE
network LB does not, like a global static IP or CDN, but doesn't offer all the
features of nginx, like url rewriting or redirects.
TODO: Write an example
## Daemonset
Neither a single pod nor a bank of generic controllers scales with the cluster size.
If you create a daemonset of generic Ingress controllers, every new node
automatically gets an instance of the controller listening on the specified
ports.
TODO: Write an example
## Intra-cluster Ingress
Since generic Ingress controllers run in pods, you can deploy them as intra-cluster
proxies by just not exposing them on a `hostPort` and putting them behind a
Service of `Type=ClusterIP`.
TODO: Write an example

View file

@ -1,18 +0,0 @@
# Ingress Development Guide
This directory is intended to be the canonical source of truth for things like
writing and hacking on Ingress controllers. If you find a requirement that this
doc does not capture, please submit an issue on github. If you find other docs
with references to requirements that are not simply links to this doc, please
submit an issue.
This document is intended to be relative to the branch in which it is found.
It is guaranteed that requirements will change over time for the development
branch, but release branches of Kubernetes should not change.
## Navigation
* [Build, test, release](getting-started.md) an existing controller
* [Setup a cluster](setup-cluster.md) to hack at an existing controller
* [Write your own](custom-controller.md) controller

View file

@ -1,4 +0,0 @@
# Writing Ingress controllers
This doc outlines the basic steps needed to write an Ingress controller.
If you want the tl;dr version, skip straight to the [example](/examples/custom-controller).

View file

@ -1,141 +0,0 @@
# Getting Started
This document explains how to get started with developing for Kubernetes Ingress.
It includes how to build, test, and release ingress controllers.
## Dependencies
The build uses dependencies in the `ingress/vendor` directory, which
must be installed before building a binary/image. Occasionally, you
might need to update the dependencies.
This guide requires you to install the [godep](https://github.com/tools/godep) dependency
tool.
Check the version of `godep` you are using and make sure it is up to date.
```console
$ godep version
godep v74 (linux/amd64/go1.6.1)
```
If you have an older version of `godep`, you can update it as follows:
```console
$ cd $GOPATH/src/ingress
$ go get github.com/tools/godep
$ cd $GOPATH/src/github.com/tools/godep
$ go build -o godep *.go
```
This will automatically save the dependencies to the `vendor/` directory.
```console
$ cd $GOPATH/src/ingress
$ godep save ./...
```
In general, you can follow [this guide](https://github.com/kubernetes/community/blob/master/contributors/devel/godep.md#using-godep-to-manage-dependencies) to update dependencies.
To update a particular dependency, eg: Kubernetes:
```console
$ cd $GOPATH/src/k8s.io/ingress
$ godep restore
$ go get -u k8s.io/kubernetes
$ cd $GOPATH/src/k8s.io/kubernetes
$ godep restore
$ cd $GOPATH/src/k8s.io/kubernetes/ingress
$ rm -rf Godeps
$ godep save ./...
$ git [add/remove] as needed
$ git commit
```
## Building
All ingress controllers are built through a Makefile. Depending on your
requirements you can build a raw server binary, a local container image,
or push an image to a remote repository.
In order to use your local Docker, you may need to set the following environment variables:
```console
# "gcloud docker" (default) or "docker"
$ export DOCKER=<docker>
# "gcr.io/google_containers" (default), "index.docker.io", or your own registry
$ export REGISTRY=<your-docker-registry>
```
To find the registry simply run: `docker system info | grep Registry`
### Nginx Controller
Build a raw server binary
```console
$ make controllers
```
[TODO](https://github.com/kubernetes/ingress/issues/387): add more specific instructions needed for raw server binary.
Build a local container image
```console
$ make docker-build TAG=<tag> PREFIX=$USER/ingress-controller
```
Push the container image to a remote repository
```console
$ make docker-push TAG=<tag> PREFIX=$USER/ingress-controller
```
### GCE Controller
[TODO](https://github.com/kubernetes/ingress/issues/387): add instructions on building gce controller.
## Deploying
There are several ways to deploy the ingress controller onto a cluster. If you don't have a cluster start by
creating one [here](setup-cluster.md).
* [nginx controller](../../examples/deployment/nginx/README.md)
* [gce controller](../../examples/deployment/gce/README.md)
## Testing
To run unit-tests, enter each directory in `controllers/`
```console
$ cd $GOPATH/src/k8s.io/ingress/controllers/<controller>
$ go test ./...
```
If you have access to a Kubernetes cluster, you can also run e2e tests using ginkgo.
```console
$ cd $GOPATH/src/k8s.io/kubernetes
$ ./hack/ginkgo-e2e.sh --ginkgo.focus=Ingress.* --delete-namespace-on-failure=false
```
See also [related FAQs](../faq#how-are-the-ingress-controllers-tested).
[TODO](https://github.com/kubernetes/ingress/issues/5): add instructions on running integration tests, or e2e against
local-up/minikube.
## Releasing
All Makefiles will produce a release binary, as shown above. To publish this
to a wider Kubernetes user base, push the image to a container registry, like
[gcr.io](https://cloud.google.com/container-registry/). All release images are hosted under `gcr.io/google_containers` and
tagged according to a [semver](http://semver.org/) scheme.
An example release might look like:
```
$ make push TAG=0.8.0 PREFIX=gcr.io/google_containers/glbc
```
Please follow these guidelines to cut a release:
* Update the [release](https://help.github.com/articles/creating-releases/)
page with a short description of the major changes that correspond to a given
image tag.
* Cut a release branch, if appropriate. Release branches follow the format of
`controller-release-version`. Typically, pre-releases are cut from HEAD.
All major feature work is done in HEAD. Specific bug fixes are
cherry-picked into a release branch.
* If you're not confident about the stability of the code,
[tag](https://help.github.com/articles/working-with-tags/) it as alpha or beta.
Typically, a release branch should have stable code.

View file

@ -1,115 +0,0 @@
# Cluster Getting Started
This doc outlines the steps needed to setup a local dev cluster within which you
can deploy/test an ingress controller. Note that you can also setup the ingress controller
locally.
## Deploy a Development cluster
### Single node local cluster
You can run the nginx ingress controller locally on any node with access to the
internet, and the following dependencies: [docker](https://docs.docker.com/engine/getstarted/step_one/), [etcd](https://github.com/coreos/etcd/releases), [golang](https://golang.org/doc/install), [cfssl](https://github.com/cloudflare/cfssl#installation), [openssl](https://www.openssl.org/), [make](https://www.gnu.org/software/make/), [gcc](https://gcc.gnu.org/), [git](https://git-scm.com/download/linux).
Clone the kubernetes repo:
```console
$ cd $GOPATH/src/k8s.io
$ git clone https://github.com/kubernetes/kubernetes.git
```
Add yourself to the docker group, if you haven't done so already (or give
local-up-cluster sudo)
```
$ sudo usermod -aG docker $USER
$ sudo reboot
..
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
```
**NB: the next step will bring up Kubernetes daemons directly on your dev
machine, no sandbox, iptables rules, routes, loadbalancers, network bridges
etc are created on the host.**
```console
$ cd $GOPATH/src/k8s.io/kubernetes
$ hack/local-up-cluster.sh
```
Check for Ready nodes
```console
$ kubectl get no --context=local
NAME STATUS AGE VERSION
127.0.0.1 Ready 5s v1.6.0-alpha.0.1914+8ccecf93aa6db5-dirty
```
### Minikube cluster
[Minikube](https://github.com/kubernetes/minikube) is a popular way to bring up
a sandboxed local cluster. You will first need to [install](https://github.com/kubernetes/minikube/releases)
the minikube binary, then bring up a cluster
```console
$ minikube start
```
Check for Ready nodes
```console
$ kubectl get no
NAME STATUS AGE VERSION
minikube Ready 42m v1.4.6
```
List the existing addons
```console
$ minikube addons list
- addon-manager: enabled
- dashboard: enabled
- kube-dns: enabled
- heapster: disabled
```
If this list already contains the ingress controller, you don't need to
redeploy it. If the addon controller is disabled, you can enable it with
```console
$ minikube addons enable ingress
```
If the list *does not* contain the ingress controller, you can either update
minikube, or deploy it yourself as shown in the next section.
You may want to consider [using the VM's docker
daemon](https://github.com/kubernetes/minikube/blob/master/README.md#reusing-the-docker-daemon)
when developing.
### CoreOS Kubernetes
[CoreOS Kubernetes](https://github.com/coreos/coreos-kubernetes/) repository has `Vagrantfile`
scripts to easily create a new Kubernetes cluster on VirtualBox, VMware or AWS.
Follow the CoreOS [doc](https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant-single.html)
for detailed instructions.
## Deploy the ingress controller
You can deploy an ingress controller on the cluster setup in the previous step
[like this](../../examples/deployment).
## Run against a remote cluster
If the controller you're interested in using supports a "dry-run" flag, you can
run it on any machine that has `kubectl` access to a remote cluster. Eg:
```console
$ cd $GOPATH/k8s.io/ingress/controllers/gce
$ glbc --help
--running-in-cluster Optional, if this controller is running in a kubernetes cluster, use the
pod secrets for creating a Kubernetes client. (default true)
$ ./glbc --running-in-cluster=false
I1210 17:49:53.202149 27767 main.go:179] Starting GLBC image: glbc:0.9.2, cluster name
```
Note that this is equivalent to running the ingress controller on your local
machine, so if you already have an ingress controller running in the remote
cluster, they will fight for the same ingress.

View file

@ -2,23 +2,6 @@
Many of the examples in this directory have common prerequisites.
## Deploying a controller
Unless you're running on a cloudprovider that supports Ingress out of the box
(eg: GCE/GKE), you will need to deploy a controller. You can do so following
[these instructions](/examples/deployment).
## Firewall rules
If you're using a generic controller (eg the nginx ingress controller), you
will need to create a firewall rule that targets port 80/443 on the specific VMs
the nginx controller is running on. On cloudproviders, the respective backend
will auto-create firewall rules for your Ingress.
If you'd like to auto-create firewall rules for an Ingress controller,
you can put it behind a Service of `Type=Loadbalancer` as shown in
[this example](/examples/static-ip/nginx#acquiring-an-ip).
## TLS certificates
Unless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA
@ -37,6 +20,7 @@ secret "tls-secret" created
```
## CA Authentication
You can act as your very own CA, or use an existing one. As an exercise / learning, we're going to generate our
own CA, and also generate a client certificate.
@ -72,12 +56,13 @@ This will generate two files: A private key (ca.key) and a public key (ca.crt).
The ca.crt can be used later in the step of creation of CA authentication secret.
### Generating the client certificate
The following steps generate a client certificate signed by the CA generated above. This client can be
used to authenticate in a tls-auth configured ingress.
First, we need to generate an 'openssl.cnf' file that will be used while signing the keys:
```
```console
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
@ -103,8 +88,8 @@ $ openssl x509 -req -in client1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -ou
Then, you'll have 3 files: the client.key (user's private key), client.crt (user's public key) and client.csr (disposable CSR).
### Creating the CA Authentication secret
If you're using the CA Authentication feature, you need to generate a secret containing
all the authorized CAs. You must download them from your CA site in PEM format (like the following):
@ -123,7 +108,6 @@ $ openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem
Then, you've to concatenate them all in only one file, named 'ca.crt' as the following:
```console
$ cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt
```
@ -160,6 +144,7 @@ http-svc 10.0.122.116 <pending> 80:30301/TCP 1d
```
You can test that the HTTP Service works by exposing it temporarily
```console
$ kubectl patch svc http-svc -p '{"spec":{"type": "LoadBalancer"}}'
"http-svc" patched
@ -209,31 +194,3 @@ BODY:
$ kubectl patch svc http-svc -p '{"spec":{"type": "NodePort"}}'
"http-svc" patched
```
## Ingress Class
If you have multiple Ingress controllers in a single cluster, you can pick one
by specifying the `ingress.class` annotation, eg creating an Ingress with an
annotation like
```yaml
metadata:
name: foo
annotations:
kubernetes.io/ingress.class: "gce"
```
will target the GCE controller, forcing the nginx controller to ignore it, while
an annotation like
```yaml
metadata:
name: foo
annotations:
kubernetes.io/ingress.class: "nginx"
```
will target the nginx controller, forcing the GCE controller to ignore it.
__Note__: Deploying multiple ingress controller and not specifying the
annotation will result in both controllers fighting to satisfy the Ingress.

View file

@ -1,15 +1,6 @@
# Sticky Session
This example demonstrates how to achieve session affinity using cookies
## Prerequisites
You will need to make sure you Ingress targets exactly one Ingress
controller by specifying the [ingress.class annotation](/examples/PREREQUISITES.md#ingress-class),
and that you have an ingress controller [running](/examples/deployment) in your cluster.
You will also need to deploy multiple replicas of your application that show up as endpoints for the Service referenced in the Ingress object, to test session stickiness.
Using a deployment with only one replica doesn't set the 'sticky' cookie.
This example demonstrates how to achieve session affinity using cookies
## Deployment
@ -24,7 +15,7 @@ Session stickyness is achieved through 3 annotations on the Ingress, as shown in
You can create the ingress to test this
```console
$ kubectl create -f sticky-ingress.yaml
kubectl create -f ingress.yaml
```
## Validation

View file

@ -3,7 +3,6 @@ kind: Ingress
metadata:
name: nginx-test
annotations:
kubernetes.io/ingress.class: "nginx"
ingress.kubernetes.io/affinity: "cookie"
ingress.kubernetes.io/session-cookie-name: "route"
ingress.kubernetes.io/session-cookie-hash: "sha1"
@ -14,6 +13,6 @@ spec:
http:
paths:
- backend:
serviceName: nginx-service
serviceName: http-svc
servicePort: 80
path: /

View file

@ -1,7 +1,8 @@
# Basic Authentication
This example shows how to add authentication in an Ingress rule using a secret that contains a file generated with `htpasswd`.
```
```console
$ htpasswd -c auth foo
New password: <bar>
New password:
@ -9,12 +10,12 @@ Re-type new password:
Adding password for user foo
```
```
```console
$ kubectl create secret generic basic-auth --from-file=auth
secret "basic-auth" created
```
```
```console
$ kubectl get secret basic-auth -o yaml
apiVersion: v1
data:
@ -26,7 +27,7 @@ metadata:
type: Opaque
```
```
```console
echo "
apiVersion: extensions/v1beta1
kind: Ingress
@ -46,7 +47,7 @@ spec:
paths:
- path: /
backend:
serviceName: echoheaders
serviceName: http-svc
servicePort: 80
" | kubectl create -f -
```

View file

@ -29,7 +29,7 @@ spec:
http:
paths:
- backend:
serviceName: echoheaders
serviceName: http-svc
servicePort: 80
path: /
status:
@ -40,7 +40,8 @@ $
```
Test 1: no username/password (expect code 401)
```
```console
$ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com'
* Rebuilt URL to: http://172.17.4.99/
* Trying 172.17.4.99...

View file

@ -10,6 +10,6 @@ spec:
http:
paths:
- backend:
serviceName: echoheaders
serviceName: http-svc
servicePort: 80
path: /

View file

@ -0,0 +1,12 @@
## Ingress
The Ingress in this example adds a custom header to Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at [this example](/examples/customization/custom-headers/nginx).
```console
$ kubectl apply -f ingress.yaml
```
## Test
Check if the contents of the annotation are present in the nginx.conf file using:
`kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf`

View file

@ -1,3 +1,5 @@
# Custom Upstream server checks
This example shows how it is possible to create a custom configuration for a particular upstream associated with an Ingress rule.
```
@ -5,7 +7,7 @@ echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: echoheaders
name: http-svc
annotations:
ingress.kubernetes.io/upstream-fail-timeout: "30"
spec:
@ -15,14 +17,14 @@ spec:
paths:
- path: /
backend:
serviceName: echoheaders
serviceName: http-svc
servicePort: 80
" | kubectl create -f -
```
Check the annotation is present in the Ingress rule:
```
kubectl get ingress echoheaders -o yaml
kubectl get ingress http-svc -o yaml
```
Check the NGINX configuration is updated using kubectl or the status page:
@ -33,7 +35,7 @@ $ kubectl exec nginx-ingress-controller-v1ppm cat /etc/nginx/nginx.conf
```
....
upstream default-echoheaders-x-80 {
upstream default-http-svc-x-80 {
least_conn;
server 10.2.92.2:8080 max_fails=5 fail_timeout=30;

View file


View file

@ -1,4 +1,4 @@
## External Authentication
# External Authentication
### Overview
@ -18,7 +18,7 @@ same endpoint.
Sample:
```
```yaml
...
metadata:
name: application
@ -33,7 +33,7 @@ metadata:
This example will show you how to deploy [`oauth2_proxy`](https://github.com/bitly/oauth2_proxy)
into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using github as oAuth2 provider
#### Prepare:
#### Prepare
1. Install the kubernetes dashboard
@ -45,14 +45,11 @@ kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addon
![Register OAuth2 Application](images/register-oauth-app.png)
- Homepage URL is the FQDN in the Ingress rule, like `https://foo.bar.com`
- Authorization callback URL is the same as the base FQDN plus `/oauth2`, like `https://foo.bar.com/oauth2`
![Register OAuth2 Application](images/register-oauth-app-2.png)
3. Configure oauth2_proxy values in the file oauth2-proxy.yaml with the values:
- OAUTH2_PROXY_CLIENT_ID with the github `<Client ID>`
@ -64,13 +61,13 @@ kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addon
Replace `__INGRESS_HOST__` with a valid FQDN and `__INGRESS_SECRET__` with a Secret with a valid SSL certificate.
5. Deploy the oauth2 proxy and the ingress rules running:
```console
$ kubectl create -f oauth2-proxy.yaml,dashboard-ingress.yaml
```
Test the oauth integration accessing the configured URL, like `https://foo.bar.com`
![Register OAuth2 Application](images/github-auth.png)
![Github authentication](images/oauth-login.png)


View file

@ -1,26 +1,28 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: echoheaders
name: http-svc
spec:
replicas: 1
template:
metadata:
labels:
app: echoheaders
app: http-svc
spec:
containers:
- name: echoheaders
- name: http-svc
image: gcr.io/google_containers/echoserver:1.8
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: echoheaders-x
name: http-svc
labels:
app: echoheaders-x
app: http-svc
spec:
ports:
- port: 80
@ -28,19 +30,4 @@ spec:
protocol: TCP
name: http
selector:
app: echoheaders
---
apiVersion: v1
kind: Service
metadata:
name: echoheaders-y
labels:
app: echoheaders-y
spec:
ports:
- port: 80
targetPort: 8080
protocol: TCP
name: http
selector:
app: echoheaders
app: http-svc

View file

@ -48,17 +48,17 @@ $ kubectl exec -it nginx-ingress-controller-6vwd1 -- cat /etc/nginx/nginx.conf |
proxy_http_version 1.1;
proxy_pass http://default-echoheaders-80;
proxy_pass http://default-http-svc-80;
}
```
And you should be able to reach your nginx service or echoheaders service using a hostname switch:
And you should be able to reach your nginx service or http-svc service using a hostname switch:
```console
$ kubectl get ing
NAME RULE BACKEND ADDRESS AGE
foo-tls - 104.154.30.67 13m
foo.bar.com
/ echoheaders:80
/ http-svc:80
bar.baz.com
/ nginx:80

View file

@ -33,9 +33,9 @@ spec:
apiVersion: v1
kind: Service
metadata:
name: echoheaders
name: http-svc
labels:
app: echoheaders
app: http-svc
spec:
ports:
- port: 80
@ -43,21 +43,21 @@ spec:
protocol: TCP
name: http
selector:
app: echoheaders
app: http-svc
---
apiVersion: v1
kind: ReplicationController
metadata:
name: echoheaders
name: http-svc
spec:
replicas: 1
template:
metadata:
labels:
app: echoheaders
app: http-svc
spec:
containers:
- name: echoheaders
- name: http-svc
image: gcr.io/google_containers/echoserver:1.8
ports:
- containerPort: 8080
@ -108,7 +108,7 @@ spec:
http:
paths:
- backend:
serviceName: echoheaders
serviceName: http-svc
servicePort: 80
path: /
- host: bar.baz.com

View file

@ -40,7 +40,7 @@ spec:
http:
paths:
- backend:
serviceName: echoheaders
serviceName: http-svc
servicePort: 80
path: /something
" | kubectl create -f -
@ -108,7 +108,7 @@ spec:
http:
paths:
- backend:
serviceName: echoheaders
serviceName: http-svc
servicePort: 80
path: /
" | kubectl create -f -

View file

@ -1,8 +1,6 @@
# Static IPs
This example demonstrates how to assign a static-ip to an Ingress on through
the Nginx controller.
This example demonstrates how to assign a static IP to an Ingress through the Nginx controller.
## Prerequisites

View file

@ -1,7 +1,7 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
name: ingress-nginx
annotations:
kubernetes.io/ingress.class: "nginx"
spec:

View file

@ -4,19 +4,15 @@ This example demonstrates how to terminate TLS through the nginx Ingress control
## Prerequisites
You need a [TLS cert](/examples/PREREQUISITES.md#tls-certificates) and a [test HTTP service](/examples/PREREQUISITES.md#test-http-service) for this example.
You will also need to make sure your Ingress targets exactly one Ingress
controller by specifying the [ingress.class annotation](/examples/PREREQUISITES.md#ingress-class),
and that you have an ingress controller [running](/examples/deployment) in your cluster.
You need a [TLS cert](../PREREQUISITES.md#tls-certificates) and a [test HTTP service](../PREREQUISITES.md#test-http-service) for this example.
## Deployment
The following command instructs the controller to terminate traffic using
the provided TLS cert, and forward un-encrypted HTTP traffic to the test
HTTP service.
The following command instructs the controller to terminate traffic using the provided
TLS cert, and forward un-encrypted HTTP traffic to the test HTTP service.
```console
$ kubectl create -f nginx-tls-ingress.yaml
kubectl apply -f ingress.yaml
```
## Validation

View file

@ -15,4 +15,3 @@ spec:
# This assumes http-svc exists and routes to healthy endpoints.
serviceName: http-svc
servicePort: 80

View file

@ -1,148 +0,0 @@
# Ingress FAQ
This page contains the general Ingress FAQ; there is also a per-backend FAQ
in this directory with site-specific information.
Table of Contents
=================
* [How is Ingress different from a Service?](#how-is-ingress-different-from-a-service)
* [I created an Ingress and nothing happens, what now?](#i-created-an-ingress-and-nothing-happens-what-now)
* [How do I deploy an Ingress controller?](#how-do-i-deploy-an-ingress-controller)
* [Are Ingress controllers namespaced?](#are-ingress-controllers-namespaced)
* [How do I disable an Ingress controller?](#how-do-i-disable-an-ingress-controller)
* [How do I run multiple Ingress controllers in the same cluster?](#how-do-i-run-multiple-ingress-controllers-in-the-same-cluster)
* [How do I contribute a backend to the generic Ingress controller?](#how-do-i-contribute-a-backend-to-the-generic-ingress-controller)
* [Is there a catalog of existing Ingress controllers?](#is-there-a-catalog-of-existing-ingress-controllers)
* [How are the Ingress controllers tested?](#how-are-the-ingress-controllers-tested)
* [An Ingress controller E2E is failing, what should I do?](#an-ingress-controller-e2e-is-failing-what-should-i-do)
* [Is there a roadmap for Ingress features?](#is-there-a-roadmap-for-ingress-features)
## How is Ingress different from a Service?
The Kubernetes Service is an abstraction over endpoints (pod-ip:port pairings).
The Ingress is an abstraction over Services. This doesn't mean all Ingress
controllers must route *through* a Service, but rather, that routing, security
and auth configuration is represented in the Ingress resource per Service, and
not per pod. As long as this configuration is respected, a given Ingress
controller is free to route to the DNS name of a Service, the VIP, a NodePort,
or directly to the Service's endpoints.
## I created an Ingress and nothing happens, what now?
Run `describe` on the Ingress. If you see create/add events, you have an Ingress
controller running in the cluster, otherwise, you either need to deploy or
restart your Ingress controller. If the events associated with an Ingress are
insufficient to debug, consult the controller specific FAQ.
## How do I deploy an Ingress controller?
The following platforms currently deploy an Ingress controller addon: GCE, GKE,
minikube. If you're running on any other platform, you can deploy an Ingress
controller by following [this](/examples/deployment) example.
## Are Ingress controllers namespaced?
Ingress is namespaced: this means two Ingress objects can have the same name in two
different namespaces, and each must only point to Services in its own namespace. An admin can
deploy an Ingress controller such that it only satisfies Ingress from a given
namespace, but by default, controllers will watch the entire Kubernetes cluster
for unsatisfied Ingress.
## How do I disable an Ingress controller?
Either shut down the controller satisfying the Ingress, or use the
`ingress.class` annotation:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
tls:
- secretName: tls-secret
backend:
serviceName: echoheaders-https
servicePort: 80
```
The GCE controller will only act on Ingresses with the annotation value of "gce" or empty string "" (the default value if the annotation is omitted).
The nginx controller will only act on Ingresses with the annotation value of "nginx" or empty string "" (the default value if the annotation is omitted).
To completely stop the Ingress controller on GCE/GKE, please see [this](gce.md#how-do-i-disable-the-gce-ingress-controller) faq.
## How do I run multiple Ingress controllers in the same cluster?
Multiple Ingress controllers can co-exist and key off the `ingress.class`
annotation, as shown in this faq, as well as in [this](/examples/daemonset/nginx) example.
## How do I contribute a backend to the generic Ingress controller?
First check the [catalog](#is-there-a-catalog-of-existing-ingress-controllers), to make sure you really need to write one.
1. Write a [generic backend](/examples/custom-controller)
2. Keep it in your own repo, make sure it passes the [conformance suite](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/ingress_utils.go#L129)
3. Submit an example(s) in the appropriate subdirectories [here](/examples/README.md)
4. Add it to the catalog
## Is there a catalog of existing Ingress controllers?
Yes, a non-comprehensive [catalog](/docs/catalog.md) exists.
## How are the Ingress controllers tested?
Testing for the Ingress controllers is divided between:
* Ingress repo: unit tests and pre-submit integration tests run via travis
* Kubernetes repo: [pre-submit e2e](https://k8s-testgrid.appspot.com/google-gce#gce&include-filter-by-regex=Loadbalancing),
[post-merge e2e](https://k8s-testgrid.appspot.com/google-gce#gci-gce-ingress),
[per release-branch e2e](https://k8s-testgrid.appspot.com/google-gce#gci-gce-ingress-1.5)
The configuration for jenkins e2e tests are located [here](https://github.com/kubernetes/test-infra).
The Ingress E2Es are located [here](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/network/ingress.go),
each controller added to that suite must consistently pass the [conformance suite](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/ingress_utils.go#L129).
## An Ingress controller E2E is failing, what should I do?
First, identify the reason for failure.
* Look at the build log, if there's nothing obvious, search for quota issues.
* Find events logged by the controller in the build log
* Ctrl+f "quota" in the build log
* If the failure is in the GCE controller:
* Navigate to the test artifacts for that run and look at glbc.log, [eg](http://gcsweb.k8s.io/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-ingress-release-1.5/1234/artifacts/bootstrap-e2e-master/)
* Look up the `PROJECT=` line in the build log, and navigate to that project
looking for quota issues (`gcloud compute project-info describe project-name`
or navigate to the cloud console > compute > quotas)
* If the failure is for a non-cloud controller (eg: nginx)
* Make sure the firewall rules required by the controller are opened on the
right ports (80/443), since the jenkins builders run *outside* the
Kubernetes cluster.
Note that you currently need help from a test-infra maintainer to access the GCE
test project. If you think the failures are related to project quota, cleanup
leaked resources and bump up quota before debugging the leak.
If the preceding identification process fails, it's likely that the Ingress api
is broken upstream. Try to set up a [dev environment](/docs/dev/setup-cluster.md) from
HEAD and create an Ingress. You should be deploying the [latest](https://github.com/kubernetes/ingress/releases)
release image to the local cluster.
If neither of these 2 strategies produces anything useful, you can either start
reverting images, or digging into the underlying infrastructure the e2es are
running on for more nefarious issues (like permission and scope changes for
some set of nodes on which an Ingress controller is running).
## Is there a roadmap for Ingress features?
The community is working on it. There are currently too many efforts in flight
to serialize into a flat roadmap. You might be interested in the following issues:
* Loadbalancing [umbrella issue](https://github.com/kubernetes/kubernetes/issues/24145)
* Service proxy [proposal](https://groups.google.com/forum/#!topic/kubernetes-sig-network/weni52UMrI8)
* Better [routing rules](https://github.com/kubernetes/kubernetes/issues/28443)
* Ingress [classes](https://github.com/kubernetes/kubernetes/issues/30151)
As well as the issues in this repo.

View file

@ -1,412 +0,0 @@
# GCE Ingress controller FAQ
This page contains general FAQ for the GCE Ingress controller.
Table of Contents
=================
* [How do I deploy an Ingress controller?](#how-do-i-deploy-an-ingress-controller)
* [I created an Ingress and nothing happens, now what?](#i-created-an-ingress-and-nothing-happens-now-what)
* [What are the cloud resources created for a single Ingress?](#what-are-the-cloud-resources-created-for-a-single-ingress)
* [The Ingress controller events complain about quota, how do I increase it?](#the-ingress-controller-events-complain-about-quota-how-do-i-increase-it)
* [Why does the Ingress need a different instance group than the GKE cluster?](#why-does-the-ingress-need-a-different-instance-group-than-the-gke-cluster)
* [Why does the cloud console show 0/N healthy instances?](#why-does-the-cloud-console-show-0n-healthy-instances)
* [Can I configure GCE health checks through the Ingress?](#can-i-configure-gce-health-checks-through-the-ingress)
* [Why does my Ingress have an ephemeral ip?](#why-does-my-ingress-have-an-ephemeral-ip)
* [Can I pre-allocate a static-ip?](#can-i-pre-allocate-a-static-ip)
* [Does updating a Kubernetes secret update the GCE TLS certs?](#does-updating-a-kubernetes-secret-update-the-gce-tls-certs)
* [Can I tune the loadbalancing algorithm?](#can-i-tune-the-loadbalancing-algorithm)
* [Is there a maximum number of Endpoints I can add to the Ingress?](#is-there-a-maximum-number-of-endpoints-i-can-add-to-the-ingress)
* [How do I match GCE resources to Kubernetes Services?](#how-do-i-match-gce-resources-to-kubernetes-services)
* [Can I change the cluster UID?](#can-i-change-the-cluster-uid)
* [Why do I need a default backend?](#why-do-i-need-a-default-backend)
* [How does Ingress work across 2 GCE clusters?](#how-does-ingress-work-across-2-gce-clusters)
* [I shutdown a cluster without deleting all Ingresses, how do I manually cleanup?](#i-shutdown-a-cluster-without-deleting-all-ingresses-how-do-i-manually-cleanup)
* [How do I disable the GCE Ingress controller?](#how-do-i-disable-the-gce-ingress-controller)
* [What GCE resources are shared between Ingresses?](#what-gce-resources-are-shared-between-ingresses)
* [How do I debug a controller spinloop?](#how-do-i-debug-a-controller-spinloop)
* [Creating an Internal Load Balancer without existing ingress](#creating-an-internal-load-balancer-without-existing-ingress)
* [Can I use websockets?](#can-i-use-websockets)
## How do I deploy an Ingress controller?
On GCP (either GCE or GKE), every Kubernetes cluster has an Ingress controller
running on the master, no deployment necessary. You can deploy a second,
different (i.e non-GCE) controller, like [this](README.md#how-do-i-deploy-an-ingress-controller).
If you wish to deploy a GCE controller as a pod in your cluster, make sure to
turn down the existing auto-deployed Ingress controller as shown in this
[example](/examples/deployment/gce/).
## I created an Ingress and nothing happens, now what?
Please check the following:
1. Output of `kubectl describe`, as shown [here](README.md#i-created-an-ingress-and-nothing-happens-what-now)
2. Do your Services all have a `NodePort`?
3. Do your Services either serve an HTTP status code 200 on `/`, or have a readiness probe
as described in [this section](#can-i-configure-gce-health-checks-through-the-ingress)?
4. Do you have enough GCP quota?
## What are the cloud resources created for a single Ingress?
__Terminology:__
* [Global Forwarding Rule](https://cloud.google.com/compute/docs/load-balancing/http/global-forwarding-rules): Manages the Ingress VIP
* [TargetHttpProxy](https://cloud.google.com/compute/docs/load-balancing/http/target-proxies): Manages SSL certs and proxies between the VIP and backend
* [Url Map](https://cloud.google.com/compute/docs/load-balancing/http/url-map): Routing rules
* [Backend Service](https://cloud.google.com/compute/docs/load-balancing/http/backend-service): Bridges various Instance Groups on a given Service NodePort
* [Instance Group](https://cloud.google.com/compute/docs/instance-groups/): Collection of Kubernetes nodes
The pipeline is as follows:
```
Global Forwarding Rule -> TargetHTTPProxy
| \ Instance Group (us-east1)
Static IP URL Map - Backend Service(s) - Instance Group (us-central1)
| / ...
Global Forwarding Rule -> TargetHTTPSProxy
ssl cert
```
In addition to this pipeline:
* Each Backend Service requires a HTTP or HTTPS health check to the NodePort of the Service
* Each port on the Backend Service has a matching port on the Instance Group
* Each port on the Backend Service is exposed through a firewall-rule open
to the GCE LB IP ranges (`130.211.0.0/22` and `35.191.0.0/16`)
## The Ingress controller events complain about quota, how do I increase it?
GLBC is not aware of your GCE quota. As of this writing users get 3
[GCE Backend Services](https://cloud.google.com/compute/docs/load-balancing/http/backend-service)
by default. If you plan on creating Ingresses for multiple Kubernetes Services,
remember that each one requires a backend service, and request quota accordingly. Should you
fail to do so, the controller will poll periodically and grab the first free
backend service slot it finds. You can view your quota:
```console
$ gcloud compute project-info describe --project myproject
```
See [GCE documentation](https://cloud.google.com/compute/docs/resource-quotas#checking_your_quota)
for how to request more.
## Why does the Ingress need a different instance group than the GKE cluster?
The controller adds/removes Kubernetes nodes that are `NotReady` from the lb
instance group. We cannot simply rely on health checks to achieve this for
a few reasons.
First, older Kubernetes versions (<=1.3) did not mark
endpoints on unreachable nodes as NotReady. Meaning if the Kubelet didn't
heartbeat for 10s, the node was marked NotReady, but there was no other signal
at the Service level to stop routing requests to endpoints on that node. In
later Kubernetes versions this is handled a little better: if the Kubelet
doesn't heartbeat for 10s it's marked NotReady, and if it stays NotReady
for 40s all its endpoints are marked NotReady. So it is still advantageous
to pull the node out of the GCE LB Instance Group within 10s, because we
save 30s of bad requests.
Second, continuing to send requests to NotReady nodes is not a great idea.
The NotReady condition is an aggregate of various factors. For example,
a NotReady node might still pass health checks but have the wrong
nodePort to endpoint mappings. The health check will pass as long as *something*
returns a HTTP 200.
## Why does the cloud console show 0/N healthy instances?
Some nodes are reporting negatively on the GCE HTTP health check.
Please check the following:
1. Try to access any node-ip:node-port/health-check-url
2. Try to access any public-ip:node-port/health-check-url
3. Make sure you have a firewall-rule allowing access to the GCE LB IP range
(created by the Ingress controller on your behalf)
4. Make sure the right NodePort is opened in the Backend Service, and
consequently, plugged into the lb instance group
## Can I configure GCE health checks through the Ingress?
Currently health checks are not exposed through the Ingress resource, they're
handled at the node level by Kubernetes daemons (kube-proxy and the kubelet).
However the GCE L7 lb still requires a HTTP(S) health check to measure node
health. By default, this health check points at `/` on the nodePort associated
with a given backend. Note that the purpose of this health check is NOT to
determine when endpoint pods are overloaded, but rather, to detect when a
given node is incapable of proxying requests for the Service:nodePort
altogether. Overloaded endpoints are removed from the working set of a
Service via readiness probes conducted by the kubelet.
If `/` doesn't work for your application, you can have the Ingress controller
program the GCE health check to point at a readiness probe as shown in [this](/examples/health-checks/)
example.
We plan to surface health checks through the API soon.
## Why does my Ingress have an ephemeral ip?
GCE has a concept of [ephemeral](https://cloud.google.com/compute/docs/instances-and-network#ephemeraladdress)
and [static](https://cloud.google.com/compute/docs/instances-and-network#reservedaddress) IPs. A production
website would always want a static IP, while ephemeral IPs are cheaper (both in terms of quota and cost) and
are therefore better suited for experimentation.
* Creating a HTTP Ingress (i.e an Ingress without a TLS section) allocates an ephemeral IP for 2 reasons:
* we want to encourage secure defaults
* static-ips have limited quota and pure HTTP ingress is often used for testing
* Creating an Ingress with a TLS section allocates a static IP
* Modifying an Ingress and adding a TLS section allocates a static IP, but the
IP *will* change. This is a beta limitation.
* You can [promote](https://cloud.google.com/compute/docs/instances-and-network#promote_ephemeral_ip)
an ephemeral to a static IP by hand, if required.
## Can I pre-allocate a static-ip?
Yes, please see [this](/examples/static-ip) example.
## Does updating a Kubernetes secret update the GCE TLS certs?
Yes, expect O(30s) delay.
The controller should create a second ssl certificate suffixed with `-1` and
atomically swap it with the ssl certificate in your target proxy, then delete
the obsolete ssl certificate.
## Can I tune the loadbalancing algorithm?
Right now, a kube-proxy nodePort is a necessary condition for Ingress on GCP.
This is because the cloud lb doesn't understand how to route directly to your
pods. Incorporating kube-proxy and cloud lb algorithms so they cooperate
toward a common goal is still a work in progress. If you really want fine
grained control over the algorithm, you should deploy the [nginx controller](/examples/deployment/nginx).
## Is there a maximum number of Endpoints I can add to the Ingress?
This limit is directly related to the maximum number of endpoints allowed in a
Kubernetes cluster, not the HTTP LB configuration, since the HTTP LB sends
packets to VMs. Ingress is not yet supported on single zone clusters of size >
1000 nodes ([issue](https://github.com/kubernetes/contrib/issues/1724)). If
you'd like to use Ingress on a large cluster, spread it across 2 or more zones
such that no single zone contains more than 1000 nodes. This is because there
is a [limit](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-managed-instances)
to the number of instances one can add to a single GCE Instance Group. In a
multi-zone cluster, each zone gets its own instance group.
## How do I match GCE resources to Kubernetes Services?
The format followed for creating resources in the cloud is:
`k8s-<resource-name>-<nodeport>-<cluster-hash>`, where `nodeport` is the output of
```console
$ kubectl get svc <svcname> --template '{{range $i, $e := .spec.ports}}{{$e.nodePort}},{{end}}'
```
`cluster-hash` is the output of:
```console
$ kubectl get configmap -o yaml --namespace=kube-system | grep -i " data:" -A 1
data:
uid: cad4ee813812f808
```
and `resource-name` is a short prefix for one of the resources mentioned [here](#what-are-the-cloud-resources-created-for-a-single-ingress)
(eg: `be` for backends, `hc` for health checks). If a given resource is not tied
to a single `node-port`, its name will not include one.
## Can I change the cluster UID?
The Ingress controller appends a UID to the cloud resources it creates; this UID is stored in a configmap in the `kube-system` namespace.
```console
$ kubectl --namespace=kube-system get configmaps
NAME DATA AGE
ingress-uid 1 12d
$ kubectl --namespace=kube-system get configmaps -o yaml
apiVersion: v1
items:
- apiVersion: v1
data:
uid: UID
kind: ConfigMap
...
```
You can pick a different UID, but this requires you to:
1. Delete existing Ingresses
2. Edit the configmap using `kubectl edit`
3. Recreate the same Ingress
After step 3 the Ingress should come up using the new UID as the suffix of all cloud resources. You can't simply change the UID if you have existing Ingresses, because
renaming a cloud resource requires a delete/create cycle that the Ingress controller does not currently automate. Note that the UID in step 1 might be an empty string,
if you had a working Ingress before upgrading to Kubernetes 1.3.
__A note on setting the UID__: The Ingress controller uses the token `--` to split a machine generated prefix from the UID itself. If the user supplied UID is found to
contain `--` the controller will take the token after the last `--`, and use an empty string if it ends with `--`. For example, if you insert `foo--bar` as the UID,
the controller will assume `bar` is the UID. You can either edit the configmap and set the UID to `bar` to match the controller, or delete existing Ingresses as described
above, and reset it to a string bereft of `--`.
## Why do I need a default backend?
All GCE URL maps require at least one [default backend](https://cloud.google.com/compute/docs/load-balancing/http/url-map#url_map_simplest_case), which handles all
requests that don't match a host/path. In Ingress, the default backend is
optional, since the resource is cross-platform and not all platforms require
a default backend. If you don't specify one in your yaml, the GCE ingress
controller will inject the default-http-backend Service that runs in the
`kube-system` namespace as the default backend for the GCE HTTP lb allocated
for that Ingress resource.
Some caveats concerning the default backend:
* It is the only Backend Service that doesn't directly map to a user specified
NodePort Service
* It's created when the first Ingress is created, and deleted when the last
Ingress is deleted, since we don't want to waste quota if the user is not going
to need L7 loadbalancing through Ingress
* It has a http health check pointing at `/healthz`, not the default `/`, because
`/` serves a 404 by design
## How does Ingress work across 2 GCE clusters?
See federation [documentation](http://kubernetes.io/docs/user-guide/federation/federated-ingress/).
## I shutdown a cluster without deleting all Ingresses, how do I manually cleanup?
If you kill a cluster without first deleting Ingresses, the resources will leak.
If you find yourself in such a situation, you can delete the resources by hand:
1. Navigate to the [cloud console](https://console.cloud.google.com/) and click on the "Networking" tab, then choose "LoadBalancing"
2. Find the loadbalancer you'd like to delete, it should have a name formatted as: k8s-um-ns-name--UUID
3. Delete it, check the boxes to also cascade the deletion down to associated resources (eg: backend-services)
4. Switch to the "Compute Engine" tab, then choose "Instance Groups"
5. Delete the Instance Group allocated for the leaked Ingress, it should have a name formatted as: k8s-ig-UUID
We plan to fix this [soon](https://github.com/kubernetes/kubernetes/issues/16337).
## How do I disable the GCE Ingress controller?
As of Kubernetes 1.3, GLBC runs as a static pod on the master.
If you want to disable it, you have 3 options:
### Soft disable
Option 1. Have it no-op for an Ingress resource based on the `ingress.class` annotation as shown [here](README.md#how-do-i-disable-an-ingress-controller).
This can also be used to use one of the other Ingress controllers at the same time as the GCE controller.
### Hard disable
Option 2. SSH into the GCE master node and delete the GLBC manifest file found at `/etc/kubernetes/manifests/glbc.manifest`.
Option 3. Disable the addon in GKE via `gcloud`:
#### Disabling GCE ingress on cluster creation
Disable the addon in GKE at cluster bring-up time through the `disable-addons` flag:
```console
gcloud container clusters create mycluster --network "default" --num-nodes 1 \
--machine-type n1-standard-2 \
--zone $ZONE \
--disk-size 50 \
--scopes storage-full \
--disable-addons HttpLoadBalancing
```
#### Disabling GCE ingress in an existing cluster
Disable the addon in GKE for an existing cluster through the `update-addons` flag:
```console
gcloud container clusters update mycluster --update-addons HttpLoadBalancing=DISABLED
```
## What GCE resources are shared between Ingresses?
Every Ingress creates a pipeline of GCE cloud resources behind an IP. Some of
these are shared between Ingresses out of necessity, while some are shared
because there was no perceived need for duplication (all resources consume
quota and usually cost money).
Shared:
* Backend Services: because of low quota and high reuse. A single Service in a
Kubernetes cluster has one NodePort, common throughout the cluster. GCE has
a hard limit of the number of allowed BackendServices, so if multiple Ingresses
all point to a single Service, that creates a single BackendService in GCE
pointing to that Service's NodePort.
* Instance Group: since an instance can only be part of a single loadbalanced
Instance Group, these must be shared. There is 1 Ingress Instance Group per
zone containing Kubernetes nodes.
* Health Checks: currently the health checks point at the NodePort
of a BackendService. They don't *need* to be shared, but they are since
BackendServices are shared.
* Firewall rule: In a non-federated cluster there is a single firewall rule
that covers health check traffic from the range of [GCE loadbalancer IPs](https://cloud.google.com/compute/docs/load-balancing/http/#troubleshooting)
to Service nodePorts.
Unique:
Currently, a single Ingress on GCE creates a unique IP and url map. In this
model the following resources cannot be shared:
* Url Map
* Target HTTP(S) Proxies
* SSL Certificates
* Static-ip
* Forwarding rules
## How do I debug a controller spinloop?
The most likely cause of a controller spin loop is some form of GCE validation
failure, eg:
* It's trying to delete a BackendService already in use, say in a UrlMap
* It's trying to add an Instance to more than 1 loadbalanced InstanceGroups
* It's trying to flip the loadbalancing algorithm on a BackendService to RATE,
when some other BackendService is pointing at the same InstanceGroup and asking
for UTILIZATION
In all such cases, the work queue will put a single key (ingress namespace/name)
that's getting continuously requeued into exponential backoff. However, currently
the Informers that watch the Kubernetes api are set up to periodically resync,
so even though a particular key is in backoff, we might end up syncing all other
keys every, say, 10m, which might trigger the same validation-error-condition
when syncing a shared resource.
## Creating an Internal Load Balancer without existing ingress
**How the GCE ingress controller Works**
To assemble an L7 Load Balancer, the ingress controller creates an [unmanaged instance-group](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances) named `k8s-ig--{UID}` and adds every known minion node to the group. For every service specified in all ingresses, a backend service is created to point to that instance group.
**How the Internal Load Balancer Works**
K8s does not yet assemble ILBs for you, but you can manually create one via the GCP Console. The ILB is composed of a regional forwarding rule and a regional backend service. Similar to the L7 LB, the backend-service points to an unmanaged instance-group containing your K8s nodes.
**The Complication**
GCP will only allow one load balanced unmanaged instance-group for a given instance.
If you manually created an instance group named something like `my-kubernetes-group` containing all your nodes and put an ILB in front of it, then you will probably encounter a GCP error when setting up an ingress resource. The controller doesn't know to use your `my-kubernetes-group` group and will create its own. Unfortunately, it won't be able to add any nodes to that group because they already belong to the ILB group.
As mentioned before, the instance group name is composed of a hard-coded prefix `k8s-ig--` and a cluster-specific UID. The ingress controller will check the K8s configmap for an existing UID value at process start. If it doesn't exist, the controller will create one randomly and update the configmap.
#### Solutions
**Want an ILB and Ingress?**
If you plan on creating both ingresses and internal load balancers, simply create the ingress resource first then use the GCP Console to create an ILB pointing to the existing instance group.
**Want just an ILB for now, ingress maybe later?**
Retrieve the UID via configmap, create an instance-group per used zone, then add all respective nodes to the group.
```shell
# Fetch instance group name from config map
GROUPNAME=`kubectl get configmaps ingress-uid -o jsonpath='k8s-ig--{.data.uid}' --namespace=kube-system`
# Create an instance group for every zone you have nodes. If you use GKE, this is probably a single zone.
gcloud compute instance-groups unmanaged create $GROUPNAME --zone {ZONE}
# Look at your list of your nodes
kubectl get nodes
# Add minion nodes that exist in zone X to the instance group in zone X. (Do not add the master!)
gcloud compute instance-groups unmanaged add-instances $GROUPNAME --zone {ZONE} --instances=A,B,C...
```
You can now follow the GCP Console wizard for creating an internal load balancer and point to the `k8s-ig--{UID}` instance group.
## Can I use websockets?
Yes!
The GCP HTTP(S) Load Balancer supports websockets. You do not need to change your http server or Kubernetes deployment. You will need to manually configure the created Backend Service's `timeout` setting. This value is interpreted as the max connection duration. The default value of 30 seconds is probably too small for you. You can increase it to the supported maximum: 86400 (a day) through the GCP Console or the gcloud CLI.
View the [example](/controllers/gce/examples/websocket/).

View file

@ -1,3 +0,0 @@
# Nginx Ingress controller FAQ
Placeholder


View file

@ -1,15 +1,42 @@
<!--
-----------------NOTICE------------------------
This file is referenced in code as
https://github.com/kubernetes/ingress/blob/master/docs/troubleshooting.md
https://github.com/kubernetes/ingress-nginx/blob/master/docs/troubleshooting.md
Do not move it without providing redirects.
-----------------------------------------------
-->
# Troubleshooting
# Debug & Troubleshooting
## Debug
Using the flag `--v=XX` it is possible to increase the level of logging.
In particular:
- `--v=2` shows details using `diff` about the changes in the configuration in nginx
```console
I0316 12:24:37.581267 1 utils.go:148] NGINX configuration diff a//etc/nginx/nginx.conf b//etc/nginx/nginx.conf
I0316 12:24:37.581356 1 utils.go:149] --- /tmp/922554809 2016-03-16 12:24:37.000000000 +0000
+++ /tmp/079811012 2016-03-16 12:24:37.000000000 +0000
@@ -235,7 +235,6 @@
upstream default-http-svcx {
least_conn;
- server 10.2.112.124:5000;
server 10.2.208.50:5000;
}
I0316 12:24:37.610073 1 command.go:69] change in configuration detected. Reloading...
```
- `--v=3` shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format
- `--v=5` configures NGINX in [debug mode](http://nginx.org/en/docs/debugging_log.html)
## Troubleshooting
## Authentication to the Kubernetes API Server
### Authentication to the Kubernetes API Server
A number of components are involved in the authentication process and the first step is to narrow
@ -60,8 +87,7 @@ Kubernetes Workstation
+---------------------------------------------------+ +------------------+
```
## Service Account
### Service Account
If using a service account to connect to the API server, Dashboard expects the file
`/var/run/secrets/kubernetes.io/serviceaccount/token` to be present. It provides a secret
token that is required to authenticate with the API server.
@ -153,13 +179,12 @@ More information:
* [User Guide: Service Accounts](http://kubernetes.io/docs/user-guide/service-accounts/)
* [Cluster Administrator Guide: Managing Service Accounts](http://kubernetes.io/docs/admin/service-accounts-admin/)
## Kubeconfig
### Kubeconfig
If you want to use a kubeconfig file for authentication, create a deployment file similar to the one below:
*Note:* the important part is the flag `--kubeconfig=/etc/kubernetes/kubeconfig.yaml`.
```
```yaml
kind: Service
apiVersion: v1
metadata:

View file

@ -0,0 +1,328 @@
# Annotations
The following annotations are supported:
|Name | type |
|---------------------------|------|
|[ingress.kubernetes.io/add-base-url](#rewrite)|true or false|
|[ingress.kubernetes.io/app-root](#rewrite)|string|
|[ingress.kubernetes.io/affinity](#session-affinity)|cookie|
|[ingress.kubernetes.io/auth-realm](#authentication)|string|
|[ingress.kubernetes.io/auth-secret](#authentication)|string|
|[ingress.kubernetes.io/auth-type](#authentication)|basic or digest|
|[ingress.kubernetes.io/auth-tls-secret](#certificate-authentication)|string|
|[ingress.kubernetes.io/auth-tls-verify-depth](#certificate-authentication)|number|
|[ingress.kubernetes.io/auth-tls-verify-client](#certificate-authentication)|string|
|[ingress.kubernetes.io/auth-tls-error-page](#certificate-authentication)|string|
|[ingress.kubernetes.io/auth-url](#external-authentication)|string|
|[ingress.kubernetes.io/base-url-scheme](#rewrite)|string|
|[ingress.kubernetes.io/client-body-buffer-size](#client-body-buffer-size)|string|
|[ingress.kubernetes.io/configuration-snippet](#configuration-snippet)|string|
|[ingress.kubernetes.io/default-backend](#default-backend)|string|
|[ingress.kubernetes.io/enable-cors](#enable-cors)|true or false|
|[ingress.kubernetes.io/force-ssl-redirect](#server-side-https-enforcement-through-redirect)|true or false|
|[ingress.kubernetes.io/from-to-www-redirect](#redirect-from-to-www)|true or false|
|[ingress.kubernetes.io/limit-connections](#rate-limiting)|number|
|[ingress.kubernetes.io/limit-rps](#rate-limiting)|number|
|[ingress.kubernetes.io/proxy-body-size](#custom-max-body-size)|string|
|[ingress.kubernetes.io/proxy-connect-timeout](#custom-timeouts)|number|
|[ingress.kubernetes.io/proxy-send-timeout](#custom-timeouts)|number|
|[ingress.kubernetes.io/proxy-read-timeout](#custom-timeouts)|number|
|[ingress.kubernetes.io/proxy-request-buffering](#custom-timeouts)|string|
|[ingress.kubernetes.io/rewrite-target](#rewrite)|URI|
|[ingress.kubernetes.io/secure-backends](#secure-backends)|true or false|
|[ingress.kubernetes.io/server-alias](#server-alias)|string|
|[ingress.kubernetes.io/server-snippet](#server-snippet)|string|
|[ingress.kubernetes.io/service-upstream](#service-upstream)|true or false|
|[ingress.kubernetes.io/session-cookie-name](#cookie-affinity)|string|
|[ingress.kubernetes.io/session-cookie-hash](#cookie-affinity)|string|
|[ingress.kubernetes.io/ssl-redirect](#server-side-https-enforcement-through-redirect)|true or false|
|[ingress.kubernetes.io/ssl-passthrough](#ssl-passthrough)|true or false|
|[ingress.kubernetes.io/upstream-max-fails](#custom-nginx-upstream-checks)|number|
|[ingress.kubernetes.io/upstream-fail-timeout](#custom-nginx-upstream-checks)|number|
|[ingress.kubernetes.io/upstream-hash-by](#custom-nginx-upstream-hashing)|string|
|[ingress.kubernetes.io/whitelist-source-range](#whitelist-source-range)|CIDR|
### Rewrite
In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404.
Set the annotation `ingress.kubernetes.io/rewrite-target` to the path expected by the service.
If the application contains relative links it is possible to add an additional annotation `ingress.kubernetes.io/add-base-url` that will prepend a [`base` tag](https://developer.mozilla.org/en/docs/Web/HTML/Element/base) in the header of the returned HTML from the backend.
If the scheme of the [`base` tag](https://developer.mozilla.org/en/docs/Web/HTML/Element/base) needs to be specific, set the annotation `ingress.kubernetes.io/base-url-scheme` to the desired scheme, such as `http` or `https`.
If the Application Root is exposed in a different path and needs to be redirected, set the annotation `ingress.kubernetes.io/app-root` to redirect requests for `/`.
Please check the [rewrite](../examples/rewrite/README.md) example.
### Session Affinity
The annotation `ingress.kubernetes.io/affinity` enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server.
The only affinity type available for NGINX is `cookie`.
Please check the [affinity](../examples/affinity/README.md) example.
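For illustration, a minimal sketch of an Ingress enabling cookie affinity (host and backend names are assumptions):
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sticky-ingress
  annotations:
    # Pin each client to one upstream server via a cookie named "route"
    ingress.kubernetes.io/affinity: "cookie"
    ingress.kubernetes.io/session-cookie-name: "route"
spec:
  rules:
  - host: stickyingress.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
```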
### Authentication
It is possible to add authentication by adding additional annotations to the Ingress rule. The source of the authentication is a secret that contains usernames and passwords inside the key `auth`.
The annotations are:
```
ingress.kubernetes.io/auth-type: [basic|digest]
```
Indicates the [HTTP Authentication Type: Basic or Digest Access Authentication](https://tools.ietf.org/html/rfc2617).
```
ingress.kubernetes.io/auth-secret: secretName
```
The name of the secret that contains the usernames and passwords with access to the `path`s defined in the Ingress Rule.
The secret must be created in the same namespace as the Ingress rule.
```
ingress.kubernetes.io/auth-realm: "realm string"
```
Please check the [auth](../examples/auth/basic/README.md) example.
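Putting the three annotations together, a minimal sketch (the `basic-auth` secret and the host are assumptions):
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-auth
  annotations:
    ingress.kubernetes.io/auth-type: basic
    # Secret in the same namespace, e.g. created from an htpasswd file with:
    #   kubectl create secret generic basic-auth --from-file=auth
    ingress.kubernetes.io/auth-secret: basic-auth
    ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
```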
### Custom NGINX upstream checks
NGINX exposes some flags in the [upstream configuration](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) that enable the configuration of each server in the upstream. The Ingress controller allows custom `max_fails` and `fail_timeout` parameters in a global context using `upstream-max-fails` and `upstream-fail-timeout` in the NGINX ConfigMap or in a particular Ingress rule. `upstream-max-fails` defaults to 0. This means NGINX will respect the container's `readinessProbe` if it is defined. If there is no probe and no values for `upstream-max-fails` NGINX will continue to send traffic to the container.
**With the default configuration NGINX will not health check your backends. Whenever the endpoints controller notices a readiness probe failure, that pod's IP will be removed from the list of endpoints. This will trigger the NGINX controller to also remove it from the upstreams.**
To use custom values in an Ingress rule define these annotations:
`ingress.kubernetes.io/upstream-max-fails`: number of unsuccessful attempts to communicate with the server that should occur in the duration set by the `upstream-fail-timeout` parameter to consider the server unavailable.
`ingress.kubernetes.io/upstream-fail-timeout`: time in seconds during which the specified number of unsuccessful attempts to communicate with the server should occur to consider the server unavailable. This is also the period of time the server will be considered unavailable.
In NGINX, backend server pools are called "[upstreams](http://nginx.org/en/docs/http/ngx_http_upstream_module.html)". Each upstream contains the endpoints for a service. An upstream is created for each service that has Ingress rules defined.
**Important:** All Ingress rules using the same service will use the same upstream. Only one of the Ingress rules should define annotations to configure the upstream servers.
Please check the [custom upstream check](../examples/customization/custom-upstream-check/README.md) example.
### Custom NGINX upstream hashing
NGINX supports load balancing by client-server mapping based on [consistent hashing](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash) for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The [ketama](http://www.last.fm/user/RJ/journal/2007/04/10/392555/) consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes.
To enable consistent hashing for a backend:
`ingress.kubernetes.io/upstream-hash-by`: the nginx variable, text value or any combination thereof to use for consistent hashing. For example `ingress.kubernetes.io/upstream-hash-by: "$request_uri"` to consistently hash upstream requests by the current request URI.
### Certificate Authentication
It is possible to enable Certificate-Based Authentication (Mutual Authentication) using additional annotations in the Ingress rule.
The annotations are:
```
ingress.kubernetes.io/auth-tls-secret: secretName
```
The name of the secret that contains the full Certificate Authority chain `ca.crt` that is enabled to authenticate against this Ingress. It is specified as `namespace/secretName`.
```
ingress.kubernetes.io/auth-tls-verify-depth
```
The validation depth between the provided client certificate and the Certification Authority chain.
```
ingress.kubernetes.io/auth-tls-verify-client
```
Enables verification of client certificates.
```
ingress.kubernetes.io/auth-tls-error-page
```
The URL/page the user should be redirected to in case of a certificate authentication error.
Please check the [tls-auth](../examples/auth/client-certs/README.md) example.
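Combined, the annotations might look like the following sketch (secret name, host and error page are assumptions):
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-client-certs
  annotations:
    # namespace/secretName holding the ca.crt used to verify client certificates
    ingress.kubernetes.io/auth-tls-secret: default/ca-secret
    ingress.kubernetes.io/auth-tls-verify-client: "on"
    ingress.kubernetes.io/auth-tls-verify-depth: "1"
    ingress.kubernetes.io/auth-tls-error-page: "http://foo.bar.com/error-cert.html"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
```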
### Configuration snippet
Using this annotation you can add additional configuration to the NGINX location. For example:
```yaml
ingress.kubernetes.io/configuration-snippet: |
more_set_headers "Request-Id: $request_id";
```
### Default Backend
The ingress controller requires a default backend. This service handles the response when the service in the Ingress rule does not have endpoints.
This is a global configuration for the ingress controller. In some cases it may be required to return custom content or a custom format. In this scenario we can use the annotation `ingress.kubernetes.io/default-backend: <svc name>` to specify a custom default backend.
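A sketch of the annotation in use (the `custom-default-backend` service is an assumption and must exist in the Ingress namespace):
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-default-backend
  annotations:
    # Answers requests whenever http-svc has no endpoints
    ingress.kubernetes.io/default-backend: custom-default-backend
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
```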
### Enable CORS
To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule add the annotation `ingress.kubernetes.io/enable-cors: "true"`. This will add a section in the server location enabling this functionality.
For more information please check https://enable-cors.org/server_nginx.html
### Server Alias
To add Server Aliases to an Ingress rule add the annotation `ingress.kubernetes.io/server-alias: "<alias>"`.
This will create a server with the same configuration, but a different server_name as the provided host.
*Note:* A server-alias name cannot conflict with the hostname of an existing server. If it does, the server-alias
annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created,
the new server configuration will take precedence over the alias configuration.
For more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name
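For example, a sketch serving `foo.bar.com` and the alias `www.foo.bar.com` from one rule (names are assumptions):
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-alias
  annotations:
    # Serve the same configuration under an additional server_name
    ingress.kubernetes.io/server-alias: "www.foo.bar.com"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
```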
### Server snippet
Using the annotation `ingress.kubernetes.io/server-snippet` it is possible to add custom configuration in the server configuration block.
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/server-snippet: |
set $agentflag 0;
if ($http_user_agent ~* "(Mobile)" ){
set $agentflag 1;
}
if ( $agentflag = 1 ) {
return 301 https://m.example.com;
}
```
**Important:** This annotation can be used only once per host
### Client Body Buffer Size
Sets buffer size for reading client request body per location. In case the request body is larger than the buffer,
the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages.
This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is
applied to each location provided in the ingress rule.
*Note:* The annotation value must be given in a valid format, otherwise it will be ignored.
For example, to set the client-body-buffer-size the following can be done:
* `ingress.kubernetes.io/client-body-buffer-size: "1000"` # 1000 bytes
* `ingress.kubernetes.io/client-body-buffer-size: 1k` # 1 kilobyte
* `ingress.kubernetes.io/client-body-buffer-size: 1K` # 1 kilobyte
* `ingress.kubernetes.io/client-body-buffer-size: 1m` # 1 megabyte
* `ingress.kubernetes.io/client-body-buffer-size: 1M` # 1 megabyte
For more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size
### External Authentication
To use an existing service that provides authentication the Ingress rule can be annotated with `ingress.kubernetes.io/auth-url` to indicate the URL where the HTTP request should be sent.
Additionally it is possible to set `ingress.kubernetes.io/auth-method` to specify the HTTP method to use (GET or POST) and `ingress.kubernetes.io/auth-send-body` to true or false (default).
```yaml
ingress.kubernetes.io/auth-url: "URL to the authentication service"
```
Please check the [external-auth](../examples/auth/external-auth/README.md) example.
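A minimal sketch tying the annotation to a rule (the httpbin URL is an assumed stand-in for a real authentication service):
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-auth
  annotations:
    # Each request is first sent here; a 2xx response lets it through,
    # a 401/403 response rejects it
    ingress.kubernetes.io/auth-url: "https://httpbin.org/basic-auth/user/passwd"
spec:
  rules:
  - host: external-auth-01.sample.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
```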
### Rate limiting
The annotations `ingress.kubernetes.io/limit-connections`, `ingress.kubernetes.io/limit-rps`, and `ingress.kubernetes.io/limit-rpm` define a limit on the connections that can be opened by a single client IP address. This can be used to mitigate [DDoS Attacks](https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus).
`ingress.kubernetes.io/limit-connections`: number of concurrent connections allowed from a single IP address.
`ingress.kubernetes.io/limit-rps`: number of connections that may be accepted from a given IP each second.
`ingress.kubernetes.io/limit-rpm`: number of connections that may be accepted from a given IP each minute.
You can specify the client IP source ranges to be excluded from rate-limiting through the `ingress.kubernetes.io/limit-whitelist` annotation. The value is a comma separated list of CIDRs.
If you specify multiple annotations in a single Ingress rule, limits are applied in the order `limit-rpm`, then `limit-rps`.
The annotations `ingress.kubernetes.io/limit-rate` and `ingress.kubernetes.io/limit-rate-after` limit the rate of response transmission to a client. The rate is specified in bytes per second. A value of zero disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections the overall rate will be twice the specified limit.
`ingress.kubernetes.io/limit-rate-after`: sets the initial amount after which the further transmission of a response to a client will be rate limited.
`ingress.kubernetes.io/limit-rate`: the maximum rate at which a response is transmitted to a client each second.
To configure this setting globally for all Ingress rules, the `limit-rate-after` and `limit-rate` values may be set in the NGINX ConfigMap. A value set in an Ingress annotation overrides the global setting.
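As an illustration, a sketch combining several of these annotations (the values and host are assumptions):
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rate-limited-ingress
  annotations:
    # Per client IP: at most 10 open connections and 5 new connections per second
    ingress.kubernetes.io/limit-connections: "10"
    ingress.kubernetes.io/limit-rps: "5"
    # CIDRs excluded from the limits
    ingress.kubernetes.io/limit-whitelist: "10.0.0.0/8"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
```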
### SSL Passthrough
The annotation `ingress.kubernetes.io/ssl-passthrough` allows TLS termination to happen in the pod instead of in NGINX.
**Important:**
- Using the annotation `ingress.kubernetes.io/ssl-passthrough` invalidates all the other available annotations. This is because SSL Passthrough works in L4 (TCP).
- The use of this annotation requires the flag `--enable-ssl-passthrough` (By default it is disabled)
### Secure backends
By default NGINX uses `http` to reach the services. Adding the annotation `ingress.kubernetes.io/secure-backends: "true"` in the Ingress rule changes the protocol to `https`.
### Service Upstream
By default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration. This annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port. This can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. See issue [#257](https://github.com/kubernetes/ingress/issues/257).
#### Known Issues
If the `service-upstream` annotation is specified the following things should be taken into consideration:
* Sticky Sessions will not work as only round-robin load balancing is supported.
* The `proxy_next_upstream` directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.
### Server-side HTTPS enforcement through redirect
By default the controller redirects (301) to `HTTPS` if TLS is enabled for that ingress. If you want to disable that behavior globally, you can use `ssl-redirect: "false"` in the NGINX config map.
To configure this feature for specific ingress resources, you can use the `ingress.kubernetes.io/ssl-redirect: "false"` annotation in the particular resource.
When using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to `HTTPS` even when there is no TLS cert available. This can be achieved by using the `ingress.kubernetes.io/force-ssl-redirect: "true"` annotation in the particular resource.
### Redirect from to www
In some scenarios it is required to redirect from `www.domain.com` to `domain.com`, or vice versa.
To enable this feature use the annotation `ingress.kubernetes.io/from-to-www-redirect: "true"`.
**Important:**
If at some point a new Ingress is created with a host equal to one of the options (like `domain.com`) the annotation will be omitted.
### Whitelist source range
You can specify the allowed client IP source ranges through the `ingress.kubernetes.io/whitelist-source-range` annotation. The value is a comma separated list of [CIDRs](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing), e.g. `10.0.0.0/24,172.10.0.1`.
To configure this setting globally for all Ingress rules, the `whitelist-source-range` value may be set in the NGINX ConfigMap.
*Note:* Adding an annotation to an Ingress rule overrides any global restriction.
Please check the [whitelist](../examples/whitelist/README.md) example.
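For instance, a sketch restricting a rule to the ranges from the example above (host and backend are assumptions):
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whitelisted-ingress
  annotations:
    # Only clients from these source ranges may reach the paths below
    ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24,172.10.0.1"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
```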
### Cookie affinity
If you use the ``cookie`` type you can also specify the name of the cookie that will be used to route the requests with the annotation `ingress.kubernetes.io/session-cookie-name`. The default is to create a cookie named 'route'.
In case of NGINX the annotation `ingress.kubernetes.io/session-cookie-hash` defines which algorithm will be used to 'hash' the used upstream. Default value is `md5` and possible values are `md5`, `sha1` and `index`.
The `index` option is not hashed; an in-memory index is used instead, which is quicker and has less overhead. **Warning:** the matching against the upstream servers list is inconsistent, so at reload, if the upstream servers have changed, index values are not guaranteed to correspond to the same server as before. Use it with caution and only if you need to.
In NGINX this feature is implemented by the third party module [nginx-sticky-module-ng](https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng). The workflow used to define which upstream server will be used is explained [here](https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng/raw/08a395c66e425540982c00482f55034e1fee67b6/docs/sticky.pdf)
### Custom timeouts
Using the configuration ConfigMap it is possible to set the default global timeout for connections to the upstream servers.
In some scenarios it is required to have different values. To allow this we provide annotations that allow this customization (see the sketch after this list):
- `ingress.kubernetes.io/proxy-connect-timeout`
- `ingress.kubernetes.io/proxy-send-timeout`
- `ingress.kubernetes.io/proxy-read-timeout`
- `ingress.kubernetes.io/proxy-request-buffering`
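A sketch with all four annotations on one rule (the timeout values, in seconds, and `"on"` for request buffering are assumptions):
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-timeouts
  annotations:
    # Override the global ConfigMap defaults for this Ingress only
    ingress.kubernetes.io/proxy-connect-timeout: "10"
    ingress.kubernetes.io/proxy-send-timeout: "120"
    ingress.kubernetes.io/proxy-read-timeout: "120"
    ingress.kubernetes.io/proxy-request-buffering: "on"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
```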
### Custom max body size
For NGINX, a 413 error will be returned to the client when the size of a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter [`client_max_body_size`](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size).
To configure this setting globally for all Ingress rules, the `proxy-body-size` value may be set in the NGINX ConfigMap.
To use a custom value in an Ingress rule, define this annotation:
```yaml
ingress.kubernetes.io/proxy-body-size: 8m
```

Some files were not shown because too many files have changed in this diff.