[![Build Status](https://travis-ci.org/aledbf/ingress-controller.svg?branch=master)](https://travis-ci.org/aledbf/ingress-controller) [![Coverage Status](https://coveralls.io/repos/github/aledbf/ingress-controller/badge.svg?branch=master)](https://coveralls.io/github/aledbf/ingress-controller?branch=master) [![Go Report Card](https://goreportcard.com/badge/github.com/aledbf/ingress-controller)](https://goreportcard.com/report/github.com/aledbf/ingress-controller)

# Nginx Ingress Controller

This is an nginx Ingress controller that uses a [ConfigMap](https://github.com/kubernetes/kubernetes/blob/master/docs/design/configmap.md) to store the nginx configuration. See the [Ingress controller documentation](../README.md) for details on how it works.

## Contents

* [Recent changes](#recent-changes)
* [Conventions](#conventions)
* [Requirements](#requirements)
* [Deployment](#deployment)
* [Health checks](#health-checks)
* [HTTP](#http)
* [HTTPS](#https)
* [Default SSL Certificate](#default-ssl-certificate)
* [HTTPS enforcement](#server-side-https-enforcement)
* [HSTS](#http-strict-transport-security)
* [Kube-Lego](#automated-certificate-management-with-kube-lego)
* [TCP Services](#exposing-tcp-services)
* [UDP Services](#exposing-udp-services)
* [Proxy Protocol](#proxy-protocol)
* [Service Integration](#service-integration)
* [NGINX customization](configuration.md)
* [NGINX status page](#nginx-status-page)
* [Running multiple ingress controllers](#running-multiple-ingress-controllers)
* [Running on Cloudproviders](#running-on-cloudproviders)
* [Disabling NGINX ingress controller](#disabling-nginx-ingress-controller)
* [Log format](#log-format)
* [Local cluster](#local-cluster)
* [Debug & Troubleshooting](#debug--troubleshooting)
* [Why endpoints and not services?](#why-endpoints-and-not-services)
* [Metrics](#metrics)
* [Limitations](#limitations)
* [NGINX Notes](#nginx-notes)

## Recent changes

Change history is available in [CHANGELOG.md](CHANGELOG.md).

## Conventions

Anytime we
reference a TLS secret, we mean a PEM-encoded, x509 certificate with an RSA 2048 key (etc.). You can generate such a certificate with:

```
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $(KEY) -out $(CERT) -subj "/CN=$(HOST)/O=$(HOST)"
```

and create the secret via `kubectl create secret tls --key file --cert file`.

## Requirements

- Default backend [404-server](https://github.com/kubernetes/contrib/tree/master/404-server)

## Deployment

First create a default backend:

```
$ kubectl create -f examples/default-backend.yaml
$ kubectl expose rc default-http-backend --port=80 --target-port=8080 --name=default-http-backend
```

Loadbalancers are created via a ReplicationController or DaemonSet:

```
$ kubectl create -f examples/default/rc-default.yaml
```

## Health checks

The provided examples for the Ingress controller use a `readiness` and a `liveness` probe. By default the URL is `/healthz` and the port is `18080`. Using the flag `--health-check-path` it is possible to specify a custom path. In some environments only port 80 can be used for health checks; for this reason the Ingress controller also exposes this path in the default server. If PROXY protocol is enabled, the health check must use the default port `18080`, because Kubernetes probes do not understand the PROXY protocol.

## HTTP

First we need to deploy some application to publish.
To keep this simple we will use the [echoheaders app](https://github.com/kubernetes/contrib/blob/master/ingress/echoheaders/echo-app.yaml), which just returns information about the HTTP request as output:

```
kubectl run echoheaders --image=gcr.io/google_containers/echoserver:1.4 --replicas=1 --port=8080
```

Now we expose the same application in two different services (so we can create different Ingress rules):

```
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-y
```

Next we create a couple of Ingress rules:

```
kubectl create -f examples/ingress.yaml
```

Check that the Ingress rules are defined:

```
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
echomap   -
          foo.bar.com
          /foo          echoheaders-x:80
          bar.baz.com
          /bar          echoheaders-y:80
          /foo          echoheaders-x:80
```

Before deploying the Ingress controller we need a default backend [404-server](https://github.com/kubernetes/contrib/tree/master/404-server):

```
kubectl create -f examples/default-backend.yaml
kubectl expose rc default-http-backend --port=80 --target-port=8080 --name=default-http-backend
```

Check that NGINX is running with the defined Ingress rules:

```
$ LBIP=$(kubectl get node `kubectl get po -l name=nginx-ingress-lb --template '{{range .items}}{{.spec.nodeName}}{{end}}'` --template '{{range $i, $n := .status.addresses}}{{if eq $n.type "ExternalIP"}}{{$n.address}}{{end}}{{end}}')
$ curl $LBIP/foo -H 'Host: foo.bar.com'
```

## HTTPS

You can secure an Ingress by specifying a secret that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller supports SNI.
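A secret like the one referenced here can be generated with `openssl` and `kubectl`, following the convention described earlier. This is a minimal sketch; the hostname, file names, and secret name are placeholders:

```shell
# Generate a self-signed certificate and key for the Ingress host (placeholder CN)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=foo.bar.com/O=foo.bar.com"

# Create the TLS secret from the generated files (requires a running cluster):
# kubectl create secret tls foo-secret --key tls.key --cert tls.crt
```

The secret name (`foo-secret` here) is what the Ingress `tls` section references.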
The TLS secret must contain keys named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, e.g.:

```
apiVersion: v1
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
kind: Secret
metadata:
  name: testsecret
  namespace: default
type: Opaque
```

Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS:

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: testsecret
  backend:
    serviceName: s1
    servicePort: 80
```

Please follow [test.sh](https://github.com/bprashanth/Ingress/blob/master/examples/sni/nginx/test.sh) as a guide on how to generate secrets containing SSL certificates. The name of the secret can be different than the name of the certificate. Check the [example](examples/tls/README.md).

### Default SSL Certificate

NGINX provides the option of a catch-all [server name](http://nginx.org/en/docs/http/server_names.html) for requests that do not match any of the configured server names. This configuration works without issues for HTTP traffic. For HTTPS, however, NGINX requires a certificate. For this reason the Ingress controller provides the flag `--default-ssl-certificate`. The secret behind this flag contains the default certificate to be used in that case. If this flag is not provided, NGINX will use a self-signed certificate.

Running without the flag `--default-ssl-certificate`:

```
$ curl -v https://10.2.78.7:443 -k
* Rebuilt URL to: https://10.2.78.7:443/
*   Trying 10.2.78.4...
* Connected to 10.2.78.7 (10.2.78.7) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
    CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use http/1.1
* Server certificate:
*  subject: CN=foo.bar.com
*  start date: Apr 13 00:50:56 2016 GMT
*  expire date: Apr 13 00:50:56 2017 GMT
*  issuer: CN=foo.bar.com
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET / HTTP/1.1
> Host: 10.2.78.7
> User-Agent: curl/7.47.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.11.1
< Date: Thu, 21 Jul 2016 15:38:46 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
<
The page you're looking for could not be found.
* Connection #0 to host 10.2.78.7 left intact
```

Specifying `--default-ssl-certificate=default/foo-tls`:

```
core@localhost ~ $ curl -v https://10.2.78.7:443 -k
* Rebuilt URL to: https://10.2.78.7:443/
*   Trying 10.2.78.7...
* Connected to 10.2.78.7 (10.2.78.7) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
    CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use http/1.1
* Server certificate:
*  subject: CN=foo.bar.com
*  start date: Apr 13 00:50:56 2016 GMT
*  expire date: Apr 13 00:50:56 2017 GMT
*  issuer: CN=foo.bar.com
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET / HTTP/1.1
> Host: 10.2.78.7
> User-Agent: curl/7.47.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.11.1
< Date: Mon, 18 Jul 2016 21:02:59 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
<
The page you're looking for could not be found.
* Connection #0 to host 10.2.78.7 left intact
```

### Server-side HTTPS enforcement

By default the controller redirects (301) to HTTPS if TLS is enabled for an ingress. If you want to disable that behaviour globally, you can use `ssl-redirect: "false"` in the NGINX config map. To configure this feature for specific ingress resources, you can use the `ingress.kubernetes.io/ssl-redirect: "false"` annotation on the particular resource.
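For reference, a config map entry that disables the global redirect might look like the following sketch. The config map name here is a placeholder — it must match the config map the controller is configured to watch:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
data:
  # Global setting: do not 301-redirect HTTP to HTTPS
  ssl-redirect: "false"
```

To keep the redirect globally but opt out a single resource, set the `ingress.kubernetes.io/ssl-redirect: "false"` annotation on that Ingress instead.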
### HTTP Strict Transport Security

HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header, it will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.

By default the controller redirects (301) to HTTPS if there is a TLS Ingress rule. To disable this behavior use `hsts=false` in the NGINX config map.

### Automated Certificate Management with Kube-Lego

[Kube-Lego] automatically requests missing or expired certificates from [Let's Encrypt] by monitoring ingress resources and their referenced secrets. To enable this for an ingress resource you have to add an annotation:

```
kubectl annotate ing ingress-demo kubernetes.io/tls-acme="true"
```

To set up Kube-Lego you can take a look at this [full example]. The first version to fully support Kube-Lego is nginx Ingress controller 0.8.

[full example]:https://github.com/jetstack/kube-lego/tree/master/examples
[Kube-Lego]:https://github.com/jetstack/kube-lego
[Let's Encrypt]:https://letsencrypt.org

## Exposing TCP services

Ingress does not support TCP services (yet). For this reason this Ingress controller uses the flag `--tcp-services-configmap` to point to an existing config map where the key is the external port to use and the value is the service to expose, in the form `namespace/servicename:port`. The port can be given as a number or as a name. The next example shows how to expose the service `example-go`, running in the namespace `default` on port `8080`, using the external port `9000`:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-example
data:
  9000: "default/example-go:8080"
```

Please check the [tcp services](examples/tcp/README.md) example.

## Exposing UDP services

Since 1.9.13 NGINX provides [UDP Load Balancing](https://www.nginx.com/blog/announcing-udp-load-balancing/). Ingress does not support UDP services (yet).
For this reason this Ingress controller uses the flag `--udp-services-configmap` to point to an existing config map where the key is the external port to use and the value is the service to expose, in the form `namespace/servicename:port`. The port can be given as a number or as a name. The next example shows how to expose the service `kube-dns`, running in the namespace `kube-system` on port `53`, using the external port `53`:

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-configmap-example
data:
  53: "kube-system/kube-dns:53"
```

Please check the [udp services](examples/udp/README.md) example.

## Proxy Protocol

If you are using an L4 proxy to forward the traffic to the NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you can use the [Proxy Protocol](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt) for forwarding traffic; this sends the connection details before forwarding the actual TCP connection itself. Amongst others, [ELBs in AWS](http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html) and [HAProxy](http://www.haproxy.org/) support Proxy Protocol.

Please check the [proxy-protocol](examples/proxy-protocol/) example.

## Service Integration

On clouds like AWS or GCE, using a service with `Type=LoadBalancer` enables the default kubernetes integration, which can save a lot of work. By passing the `--publish-service` argument to the controller, the ingress status will be updated with the load balancer configuration of the service, rather than the IP(s) of the node(s).

### Custom errors

In case of an error in a request, the body of the response is obtained from the `default backend`.
Each request to the default backend includes two headers:

- `X-Code` indicates the HTTP status code
- `X-Format` the value of the `Accept` header

Using these two headers it is possible to use a custom backend service like [this one](https://github.com/aledbf/contrib/tree/nginx-debug-server/Ingress/images/nginx-error-server) that inspects each request and returns a custom error page in the format expected by the client. Please check the [custom-errors](examples/custom-errors/README.md) example.

### NGINX status page

The ngx_http_stub_status_module module provides access to basic status information. This is the default module, active at the URL `/nginx_status`. This controller provides an alternative to this module using the [nginx-module-vts](https://github.com/vozlt/nginx-module-vts) third-party module. To use this module just provide a config map with the key `enable-vts-status=true`. The URL is exposed on port 8080. Please check the example `example/rc-default.yaml`.

![nginx-module-vts screenshot](https://cloud.githubusercontent.com/assets/3648408/10876811/77a67b70-8183-11e5-9924-6a6d0c5dc73a.png "screenshot with filter")

To extract the information in JSON format the module provides a custom URL: `/nginx_status/format/json`

### Running multiple ingress controllers

If you're running multiple ingress controllers, or running on a cloudprovider that natively handles ingress, you need to specify the annotation `kubernetes.io/ingress.class: "nginx"` in all ingresses that you would like this controller to claim. Not specifying the annotation will lead to multiple ingress controllers claiming the same ingress. Specifying the wrong value will result in all ingress controllers ignoring the ingress. Multiple ingress controllers running in the same cluster were not supported in Kubernetes versions < 1.3.
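As a sketch, the claim is made on the ingress resource itself; the resource name and backend below are placeholders:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echomap
  annotations:
    # Tell the NGINX controller (and no other) to handle this ingress
    kubernetes.io/ingress.class: "nginx"
spec:
  backend:
    serviceName: echoheaders-x
    servicePort: 80
```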
### Running on Cloudproviders

If you're running this ingress controller on a cloudprovider, you should assume the provider also has a native Ingress controller and specify the `ingress.class` annotation as indicated in the previous section. In addition, you will need to add a firewall rule for each port this controller is listening on, i.e. :80 and :443.

### Disabling NGINX ingress controller

Setting the annotation `kubernetes.io/ingress.class` to any value other than "nginx" or the empty string will cause the NGINX Ingress controller to ignore your Ingress. Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.

### Log format

The default configuration uses a custom logging format to add additional information about upstreams:

```
log_format upstreaminfo '{{ if $cfg.useProxyProtocol }}$proxy_protocol_addr{{ else }}$remote_addr{{ end }} - '
    '[$proxy_add_x_forwarded_for] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" '
    '$request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status';
```

Sources:

- [upstream variables](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables)
- [embedded variables](http://nginx.org/en/docs/http/ngx_http_core_module.html#variables)

Description:

- `$proxy_protocol_addr`: the client address, if PROXY protocol is enabled
- `$remote_addr`: the client address, if PROXY protocol is disabled (default)
- `$proxy_add_x_forwarded_for`: the `X-Forwarded-For` client request header field with the `$remote_addr` variable appended to it, separated by a comma
- `$remote_user`: user name supplied with the Basic authentication
- `$time_local`: local time in the Common Log Format
- `$request`: full original request line
- `$status`: response status
- `$body_bytes_sent`: number of bytes sent to a client, not counting the response header
- `$http_referer`: value of the Referer header
- `$http_user_agent`: value of the User-Agent header
- `$request_length`: request length (including request line, headers, and request body)
- `$request_time`: time elapsed since the first bytes were read from the client
- `$proxy_upstream_name`: name of the upstream (the format is `upstream-<namespace>-<service name>-<port>`)
- `$upstream_addr`: the IP address and port, or the path to the UNIX-domain socket, of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas
- `$upstream_response_length`: the length of the response obtained from the upstream server
- `$upstream_response_time`: time spent receiving the response from the upstream server, in seconds with millisecond resolution
- `$upstream_status`: status code of the response obtained from the upstream server

### Local cluster

Using [`hack/local-up-cluster.sh`](https://github.com/kubernetes/kubernetes/blob/master/hack/local-up-cluster.sh) it is possible to start a local kubernetes cluster consisting of a master and a single node. Please read [running-locally.md](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/running-locally.md) for more details.

`hostNetwork: true` is required in the ingress controller because the client falls back to `localhost:8080` for the apiserver if every other client creation check fails (e.g. service account not present, kubeconfig doesn't exist, no master env vars, ...).

### Debug & Troubleshooting

Using the flag `--v=XX` it is possible to increase the level of logging.
In particular:

- `--v=2` shows details, using `diff`, about changes in the nginx configuration:

```
I0316 12:24:37.581267       1 utils.go:148] NGINX configuration diff a//etc/nginx/nginx.conf b//etc/nginx/nginx.conf
I0316 12:24:37.581356       1 utils.go:149] --- /tmp/922554809  2016-03-16 12:24:37.000000000 +0000
+++ /tmp/079811012  2016-03-16 12:24:37.000000000 +0000
@@ -235,7 +235,6 @@
     upstream default-echoheadersx {
         least_conn;
-        server 10.2.112.124:5000;
         server 10.2.208.50:5000;
     }
I0316 12:24:37.610073       1 command.go:69] change in configuration detected. Reloading...
```

- `--v=3` shows details about the service, Ingress rule, and endpoint changes, and dumps the nginx configuration in JSON format
- `--v=5` configures NGINX in [debug mode](http://nginx.org/en/docs/debugging_log.html)

### Metrics

Following the doc [Instrumenting Kubernetes with a new metric](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/instrumentation.md#instrumenting-kubernetes-with-a-new-metric), the Ingress controller exposes its registered metrics via HTTP. Besides the default metrics provided by the Prometheus client, it is possible to get the number of reloads (`reload_operations`) and reloads with error (`reload_operations_errors`), i.e. errors validating the configuration file before the reload. The metrics are exposed on port `10254` at the path `/metrics`. Using curl: `curl -v :10254/metrics`

### Limitations

- Ingress rules for TLS require the definition of the field `host`

### Why endpoints and not services

The NGINX ingress controller does not use [Services](http://kubernetes.io/docs/user-guide/services) to route traffic to the pods. Instead it uses the Endpoints API, bypassing [kube-proxy](http://kubernetes.io/docs/admin/kube-proxy/) to allow NGINX features like session affinity and custom load balancing algorithms. This also removes some overhead, such as conntrack entries for iptables DNAT.
### NGINX notes

Since `gcr.io/google_containers/nginx-slim:0.8`, NGINX contains the following patches:

- Dynamic TLS record size: [nginx__dynamic_tls_records.patch](https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency/)

  NGINX provides the parameter `ssl_buffer_size` to adjust the size of the TLS record buffer. The default value in NGINX is 16KB; the ingress controller changes the default to 4KB. This improves the [TLS Time To First Byte (TTTFB)](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/), but the size is fixed. This patch adapts the size of the buffer to the content being served, helping to improve the perceived latency.
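For illustration, the fixed (unpatched) behavior corresponds to hand-writing the directive in `nginx.conf` — a sketch, not the controller's actual template:

```nginx
http {
    # Smaller TLS records reduce time-to-first-byte for small responses.
    # Stock NGINX defaults to 16k; this controller uses 4k.
    ssl_buffer_size 4k;
}
```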