Merge pull request #642 from aledbf/restrict-nginx-status

Improve nginx-ingress-controller documentation.
This commit is contained in:
Prashanth B 2016-03-29 15:17:35 -07:00
commit a06f0a707e
53 changed files with 473 additions and 490 deletions

View file

@ -1,434 +0,0 @@
# Nginx Ingress Controller
This is an nginx Ingress controller that uses [ConfigMap](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/configmap.md) to store the nginx configuration. See the [Ingress controller documentation](../README.md) for details on how it works.
## What does it provide?
- Ingress controller
- nginx 1.9.x with
- SSL support
- custom ssl_dhparam (optional). Just mount a secret with a file named `dhparam.pem`.
- support for TCP services (flag `--tcp-services-configmap`)
- custom nginx configuration using [ConfigMap](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/configmap.md)
- custom error pages. Using the flag `--custom-error-service` it is possible to use a custom [404-server](https://github.com/kubernetes/contrib/tree/master/404-server)-compatible image, such as [nginx-error-server](https://github.com/aledbf/contrib/tree/nginx-debug-server/Ingress/images/nginx-error-server), which provides an additional `/errors` route that returns custom content for a particular error code. **This is completely optional.**
## Requirements
- default backend [404-server](https://github.com/kubernetes/contrib/tree/master/404-server) (or a custom compatible image)
## TLS
You can secure an Ingress by specifying a secret that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller supports SNI. The TLS secret must contain keys named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, e.g.:
```
apiVersion: v1
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
kind: Secret
metadata:
  name: testsecret
  namespace: default
type: Opaque
```
Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS:
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: testsecret
  backend:
    serviceName: s1
    servicePort: 80
```
Please follow [test.sh](https://github.com/bprashanth/Ingress/blob/master/examples/sni/nginx/test.sh) as a guide on how to generate secrets containing SSL certificates. The name of the secret can be different from the name of the certificate.
#### Optimizing TLS Time To First Byte (TTTFB)
NGINX provides the configuration option [ssl_buffer_size](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) to allow the optimization of the TLS record size. This improves the [Time To First Byte](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/) (TTTFB). The default value in the Ingress controller is `4k` (the nginx default is `16k`).
## Examples
First we need to deploy an application to publish. To keep this simple we will use the [echoheaders app](https://github.com/kubernetes/contrib/blob/master/ingress/echoheaders/echo-app.yaml) that just returns information about the HTTP request as output:
```
kubectl run echoheaders --image=gcr.io/google_containers/echoserver:1.1 --replicas=1 --port=8080
```
Now we expose the same application in two different services (so we can create different Ingress rules)
```
kubectl expose rc echoheaders --port=80 --target-port=8080 --name=echoheaders-x
kubectl expose rc echoheaders --port=80 --target-port=8080 --name=echoheaders-y
```
Next we create a couple of Ingress rules
```
kubectl create -f examples/ingress.yaml
```
We check that the Ingress rules are defined:
```
$ kubectl get ing
NAME      RULE          BACKEND            ADDRESS
echomap   -
          foo.bar.com
          /foo          echoheaders-x:80
          bar.baz.com
          /bar          echoheaders-y:80
          /foo          echoheaders-x:80
```
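For reference, a hedged sketch of what `examples/ingress.yaml` presumably contains, reconstructed from the rules shown above (the actual file may differ):
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echomap
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: echoheaders-x
          servicePort: 80
  - host: bar.baz.com
    http:
      paths:
      - path: /bar
        backend:
          serviceName: echoheaders-y
          servicePort: 80
      - path: /foo
        backend:
          serviceName: echoheaders-x
          servicePort: 80
```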
Before deploying nginx we need a default backend [404-server](https://github.com/kubernetes/contrib/tree/master/404-server) (or a compatible custom image):
```
kubectl create -f examples/default-backend.yaml
kubectl expose rc default-http-backend --port=80 --target-port=8080 --name=default-http-backend
```
## Default configuration
The last step is to deploy the nginx Ingress replication controller (from the examples directory):
```
kubectl create -f examples/rc-default.yaml
```
To test if everything is working correctly:
`curl -v http://<node IP address>:80/foo -H 'Host: foo.bar.com'`
You should see an output similar to
```
* Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
> GET /foo HTTP/1.1
> Host: foo.bar.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.9.8
< Date: Tue, 15 Dec 2015 13:45:13 GMT
< Content-Type: text/plain
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
<
CLIENT VALUES:
client_address=10.2.84.43
command=GET
real path=/foo
query=nil
request_version=1.1
request_uri=http://foo.bar.com:8080/foo
SERVER VALUES:
server_version=nginx: 1.9.7 - lua: 9019
HEADERS RECEIVED:
accept=*/*
connection=close
host=foo.bar.com
user-agent=curl/7.43.0
x-forwarded-for=172.17.4.1
x-forwarded-host=foo.bar.com
x-forwarded-server=foo.bar.com
x-real-ip=172.17.4.1
BODY:
* Connection #0 to host 172.17.4.99 left intact
```
If we try to get a non-existing route like `/foobar` we should see:
```
$ curl -v 172.17.4.99/foobar -H 'Host: foo.bar.com'
* Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
> GET /foobar HTTP/1.1
> Host: foo.bar.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.9.8
< Date: Tue, 15 Dec 2015 13:48:18 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
<
default backend - 404
* Connection #0 to host 172.17.4.99 left intact
```
(this test checks that the default backend is working properly)
*By replacing the default backend with a custom one we can change the default error pages provided by nginx.*
## Exposing TCP services
First we need to remove the running replication controller:
```
kubectl delete rc nginx-ingress-3rdpartycfg
```
To configure which services and ports will be exposed:
```
kubectl create -f examples/tcp-configmap-example.yaml
```
The file `examples/tcp-configmap-example.yaml` uses a ConfigMap where the key is the external port to use and the value is
`<namespace/service name>:<service port>`.
It is possible to use either the number or the name of the port.
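A minimal sketch of such a ConfigMap, using the `tcp-configmap-example` name expected by `rc-tcp.yaml` and exposing a hypothetical service `example-go` (namespace `default`, port `8080`) on external port `9000`:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-example
data:
  9000: "default/example-go:8080"
```
With the ConfigMap in place, deploy the controller configured for TCP services: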
```
kubectl create -f examples/rc-tcp.yaml
```
Now we can test the new service:
```
$ (sleep 1; echo "GET / HTTP/1.1"; echo "Host: 172.17.4.99:9000"; echo;echo;sleep 2) | telnet 172.17.4.99 9000
Trying 172.17.4.99...
Connected to 172.17.4.99.
Escape character is '^]'.
HTTP/1.1 200 OK
Server: nginx/1.9.7
Date: Tue, 15 Dec 2015 14:46:28 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
f
CLIENT VALUES:
1a
client_address=10.2.84.45
c
command=GET
c
real path=/
a
query=nil
14
request_version=1.1
25
request_uri=http://172.17.4.99:8080/
1
f
SERVER VALUES:
28
server_version=nginx: 1.9.7 - lua: 9019
1
12
HEADERS RECEIVED:
16
host=172.17.4.99:9000
6
BODY:
14
-no body in request-
0
```
## SSL
First create a secret containing the SSL certificate and key. This example creates the certificate and the secret (JSON):
`SECRET_NAME=secret-echoheaders-1 HOSTS=foo.bar.com ./examples/certs.sh`
Create the secret:
```
kubectl create -f secret-secret-echoheaders-1-foo.bar.com.json
```
Check if the secret was created:
```
$ kubectl get secrets
NAME                   TYPE      DATA      AGE
secret-echoheaders-1   Opaque    2         9m
```
As before, we need to remove the running nginx rc:
```
kubectl delete rc nginx-ingress-3rdpartycfg
```
Next create a new rc that uses the secret
```
kubectl create -f examples/rc-ssl.yaml
```
*Note:* this example uses a self-signed certificate.
Example output:
```
$ curl -v https://172.17.4.99/foo -H 'Host: bar.baz.com' -k
* Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 4444 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: foo.bar.com
> GET /foo HTTP/1.1
> Host: bar.baz.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.9.8
< Date: Thu, 17 Dec 2015 14:57:03 GMT
< Content-Type: text/plain
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
<
CLIENT VALUES:
client_address=10.2.84.34
command=GET
real path=/foo
query=nil
request_version=1.1
request_uri=http://bar.baz.com:8080/foo
SERVER VALUES:
server_version=nginx: 1.9.7 - lua: 9019
HEADERS RECEIVED:
accept=*/*
connection=close
host=bar.baz.com
user-agent=curl/7.43.0
x-forwarded-for=172.17.4.1
x-forwarded-host=bar.baz.com
x-forwarded-server=bar.baz.com
x-real-ip=172.17.4.1
BODY:
* Connection #0 to host 172.17.4.99 left intact
-no body in request-
```
## Custom errors
The default backend provides a way to customize the default 404 page. This helps, but sometimes it is not enough.
Using the flag `--custom-error-service` it is possible to use an image that must be 404-server compatible and provide the route `/error`.
[Here](https://github.com/aledbf/contrib/tree/nginx-debug-server/Ingress/images/nginx-error-server) is an example of such an image.
The route `/error` expects two arguments: code and format
* code defines which error code is expected to be returned (502, 503, etc.)
* format defines the format that should be returned, for instance `/error?code=504&format=json` or `/error?code=502&format=html`
Using a volume pointing to the `/var/www/html` directory it is possible to serve custom error pages.
## Debug
Using the flag `--v=XX` it is possible to increase the level of logging.
In particular:
- `--v=2` shows, using `diff`, the changes in the nginx configuration
```
I0316 12:24:37.581267 1 utils.go:148] NGINX configuration diff a//etc/nginx/nginx.conf b//etc/nginx/nginx.conf
I0316 12:24:37.581356 1 utils.go:149] --- /tmp/922554809 2016-03-16 12:24:37.000000000 +0000
+++ /tmp/079811012 2016-03-16 12:24:37.000000000 +0000
@@ -235,7 +235,6 @@
upstream default-echoheadersx {
least_conn;
- server 10.2.112.124:5000;
server 10.2.208.50:5000;
}
I0316 12:24:37.610073 1 command.go:69] change in configuration detected. Reloading...
```
- `--v=3` shows details about the service, Ingress rule and endpoint changes, and dumps the nginx configuration in JSON format
- `--v=5` configures NGINX in [debug mode](http://nginx.org/en/docs/debugging_log.html)
## Custom NGINX configuration
Using a ConfigMap it is possible to customize the defaults in nginx.
The next command shows the defaults:
```
$ ./nginx-third-party-lb --dump-nginx-configuration
Example of ConfigMap to customize NGINX configuration:
data:
body-size: 1m
error-log-level: info
gzip-types: application/atom+xml application/javascript application/json application/rss+xml
application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json
application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon
text/css text/plain text/x-component
hts-include-subdomains: "true"
hts-max-age: "15724800"
keep-alive: "75"
max-worker-connections: "16384"
proxy-connect-timeout: "30"
proxy-read-timeout: "30"
proxy-real-ip-cidr: 0.0.0.0/0
proxy-send-timeout: "30"
server-name-hash-bucket-size: "64"
server-name-hash-max-size: "512"
ssl-buffer-size: 4k
ssl-ciphers: ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
ssl-protocols: TLSv1 TLSv1.1 TLSv1.2
ssl-session-cache: "true"
ssl-session-cache-size: 10m
ssl-session-tickets: "true"
ssl-session-timeout: 10m
use-gzip: "true"
use-hts: "true"
worker-processes: "8"
metadata:
name: custom-name
namespace: a-valid-namespace
```
For instance, if we want to change the timeouts we need to create a ConfigMap:
```
$ cat nginx-load-balancer-conf.yaml
apiVersion: v1
data:
  proxy-connect-timeout: "10"
  proxy-read-timeout: "120"
  proxy-send-timeout: "120"
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
```
```
$ kubectl create -f nginx-load-balancer-conf.yaml
```
Please check the example `rc-custom-configuration.yaml`.
If the ConfigMap is updated, NGINX will be reloaded with the new configuration.
## Troubleshooting
Problems encountered during [1.2.0-alpha7 deployment](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md):
* make setup-files.sh in hypercube does not provide the 10.0.0.1 IP to make-ca-certs, resulting in CA certs that are issued to the external cluster IP address rather than 10.0.0.1. This results in nginx-third-party-lb appearing to get stuck at "Utils.go:177 - Waiting for default/default-http-backend" in the docker logs. Kubernetes will eventually kill the container before nginx-third-party-lb times out with a message indicating that the CA certificate issuer is invalid (wrong IP). To verify this, add zeros to the end of initialDelaySeconds and timeoutSeconds and reload the RC; docker will log this error before Kubernetes kills the container.
* To fix the above, setup-files.sh must be patched before the cluster is initialized (refer to https://github.com/kubernetes/kubernetes/pull/21504)

View file

@ -19,7 +19,7 @@ RUN apt-get update && apt-get install -y \
--no-install-recommends \
&& rm -rf /var/lib/apt/lists/*
COPY nginx-third-party-lb /
COPY nginx-ingress-controller /
COPY nginx.tmpl /
COPY default.conf /etc/nginx/nginx.conf
@ -27,4 +27,4 @@ COPY lua /etc/nginx/lua/
WORKDIR /
CMD ["/nginx-third-party-lb"]
CMD ["/nginx-ingress-controller"]

View file

@ -2,10 +2,10 @@ all: push
# 0.0 shouldn't clobber any release builds
TAG = 0.4
PREFIX = gcr.io/google_containers/nginx-third-party
PREFIX = gcr.io/google_containers/nginx-ingress-controller
controller: controller.go clean
CGO_ENABLED=0 GOOS=linux godep go build -a -installsuffix cgo -ldflags '-w' -o nginx-third-party-lb
CGO_ENABLED=0 GOOS=linux godep go build -a -installsuffix cgo -ldflags '-w' -o nginx-ingress-controller
container: controller
docker build -t $(PREFIX):$(TAG) .
@ -14,4 +14,4 @@ push: container
gcloud docker push $(PREFIX):$(TAG)
clean:
rm -f nginx-third-party-lb
rm -f nginx-ingress-controller

199
controllers/nginx/README.md Normal file
View file

@ -0,0 +1,199 @@
# Nginx Ingress Controller
This is an nginx Ingress controller that uses [ConfigMap](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/configmap.md) to store the nginx configuration. See the [Ingress controller documentation](../README.md) for details on how it works.
## What does it provide?
- Ingress controller
- nginx 1.9.x with
- SSL support
- custom ssl_dhparam (optional). Just mount a secret with a file named `dhparam.pem` (see the secret sketch after this list).
- support for TCP services (flag `--tcp-services-configmap`)
- custom nginx configuration using [ConfigMap](https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/configmap.md)
- custom error pages. Using the flag `--custom-error-service` it is possible to use a custom [404-server](https://github.com/kubernetes/contrib/tree/master/404-server)-compatible image
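As referenced in the `ssl_dhparam` item above, a minimal sketch of such a secret, assuming it is named `dhparam-example` to match the volume used in the SSL example replication controller (the data value is the base64-encoded contents of `dhparam.pem`):
```
apiVersion: v1
kind: Secret
metadata:
  name: dhparam-example
data:
  dhparam.pem: base64 encoded dhparam
type: Opaque
```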
## Requirements
- default backend [404-server](https://github.com/kubernetes/contrib/tree/master/404-server)
## Deploy the Ingress controller
Load balancers are created via a ReplicationController or DaemonSet:
```
kubectl create -f examples/default/rc-default.yaml
```
## HTTP
First we need to deploy an application to publish. To keep this simple we will use the [echoheaders app](https://github.com/kubernetes/contrib/blob/master/ingress/echoheaders/echo-app.yaml) that just returns information about the HTTP request as output:
```
kubectl run echoheaders --image=gcr.io/google_containers/echoserver:1.3 --replicas=1 --port=8080
```
Now we expose the same application in two different services (so we can create different Ingress rules)
```
kubectl expose rc echoheaders --port=80 --target-port=8080 --name=echoheaders-x
kubectl expose rc echoheaders --port=80 --target-port=8080 --name=echoheaders-y
```
Next we create a couple of Ingress rules
```
kubectl create -f examples/ingress.yaml
```
We check that the Ingress rules are defined:
```
$ kubectl get ing
NAME      RULE          BACKEND            ADDRESS
echomap   -
          foo.bar.com
          /foo          echoheaders-x:80
          bar.baz.com
          /bar          echoheaders-y:80
          /foo          echoheaders-x:80
```
Before deploying the Ingress controller we need a default backend [404-server](https://github.com/kubernetes/contrib/tree/master/404-server):
```
kubectl create -f examples/default-backend.yaml
kubectl expose rc default-http-backend --port=80 --target-port=8080 --name=default-http-backend
```
Check that NGINX is running with the defined Ingress rules:
```
$ LBIP=$(kubectl get node `kubectl get po -l name=nginx-ingress-lb --template '{{range .items}}{{.spec.nodeName}}{{end}}'` --template '{{range $i, $n := .status.addresses}}{{if eq $n.type "ExternalIP"}}{{$n.address}}{{end}}{{end}}')
$ curl $LBIP/foo -H 'Host: foo.bar.com'
```
## TLS
You can secure an Ingress by specifying a secret that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller supports SNI. The TLS secret must contain keys named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, e.g.:
```
apiVersion: v1
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
kind: Secret
metadata:
  name: testsecret
  namespace: default
type: Opaque
```
Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS:
```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: testsecret
  backend:
    serviceName: s1
    servicePort: 80
```
Please follow [test.sh](https://github.com/bprashanth/Ingress/blob/master/examples/sni/nginx/test.sh) as a guide on how to generate secrets containing SSL certificates. The name of the secret can be different from the name of the certificate.
Check the [example](examples/tls/README.md)
#### Optimizing TLS Time To First Byte (TTTFB)
NGINX provides the configuration option [ssl_buffer_size](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) to allow the optimization of the TLS record size. This improves the [Time To First Byte](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/) (TTTFB). The default value in the Ingress controller is `4k` (the nginx default is `16k`).
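If you need to change this value, a hedged sketch of a ConfigMap overriding it (here back to the nginx default of `16k`); this is the same ConfigMap passed to the controller via `--nginx-configmap`, and `ssl-buffer-size` is one of the keys listed by `--dump-nginx-configuration`:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
data:
  ssl-buffer-size: 16k
```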
## Exposing TCP services
Ingress does not support TCP services (yet). For this reason this Ingress controller uses a ConfigMap where the key is the external port to use and the value is
`<namespace/service name>:<service port>`.
It is possible to use either the number or the name of the port.
The next example shows how to expose the service `example-go`, running on port `8080` in the namespace `default`, on the external port `9000`:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-example
data:
  9000: "default/example-go:8080"
```
Please check the [tcp services](examples/tcp/README.md) example
## Custom NGINX configuration
Using a ConfigMap it is possible to customize the defaults in nginx.
Please check the [custom configuration](examples/custom-configuration/README.md) example
### NGINX status page
The ngx_http_stub_status_module module provides access to basic status information. This is the default module, active at the URL `/nginx_status`.
This controller provides an alternative to this module using the third-party module [nginx-module-vts](https://github.com/vozlt/nginx-module-vts).
To use this module just provide a ConfigMap with the key `enable-vts-status` set to `"true"`. The URL is exposed on port 8080.
Please check the example `examples/default/rc-default.yaml`
![nginx-module-vts screenshot](https://cloud.githubusercontent.com/assets/3648408/10876811/77a67b70-8183-11e5-9924-6a6d0c5dc73a.png "screenshot with filter")
To extract the information in JSON format, the module provides a custom URL: `/nginx_status/format/json`
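A minimal sketch of a ConfigMap enabling the vts status page, using the `nginx-load-balancer-conf` name referenced by `--nginx-configmap` in the example rc (the optional `vts-status-zone-size` key defaults to `10m`):
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
data:
  enable-vts-status: "true"
```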
## Troubleshooting
Problems encountered during [1.2.0-alpha7 deployment](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md):
* make setup-files.sh in hypercube does not provide the 10.0.0.1 IP to make-ca-certs, resulting in CA certs that are issued to the external cluster IP address rather than 10.0.0.1. This results in nginx-third-party-lb appearing to get stuck at "Utils.go:177 - Waiting for default/default-http-backend" in the docker logs. Kubernetes will eventually kill the container before nginx-third-party-lb times out with a message indicating that the CA certificate issuer is invalid (wrong IP). To verify this, add zeros to the end of initialDelaySeconds and timeoutSeconds and reload the RC; docker will log this error before Kubernetes kills the container.
* To fix the above, setup-files.sh must be patched before the cluster is initialized (refer to https://github.com/kubernetes/kubernetes/pull/21504)
### Custom errors
The default backend provides a way to customize the default 404 page. This helps, but sometimes it is not enough.
Using the flag `--custom-error-service` it is possible to use an image that must be 404-server compatible and provide the route `/error`.
[Here](https://github.com/aledbf/contrib/tree/nginx-debug-server/Ingress/images/nginx-error-server) is an example of such an image.
The route `/error` expects two arguments: code and format
* code defines which error code is expected to be returned (502, 503, etc.)
* format defines the format that should be returned, for instance `/error?code=504&format=json` or `/error?code=502&format=html`
Using a volume pointing to the `/var/www/html` directory it is possible to serve custom error pages.
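As a sketch of that last point, assuming the error pages are supplied through a ConfigMap named `custom-error-pages` (an assumed name, not part of this PR), the volume could be wired into the error server pod like this:
```
# pod spec fragment (sketch); "custom-error-pages" is an assumed ConfigMap name
volumes:
- name: custom-error-pages
  configMap:
    name: custom-error-pages
containers:
- name: nginx-error-server
  image: <nginx-error-server image>   # placeholder for the error server image
  volumeMounts:
  - name: custom-error-pages
    mountPath: /var/www/html
```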
### Debug
Using the flag `--v=XX` it is possible to increase the level of logging.
In particular:
- `--v=2` shows, using `diff`, the changes in the nginx configuration
```
I0316 12:24:37.581267 1 utils.go:148] NGINX configuration diff a//etc/nginx/nginx.conf b//etc/nginx/nginx.conf
I0316 12:24:37.581356 1 utils.go:149] --- /tmp/922554809 2016-03-16 12:24:37.000000000 +0000
+++ /tmp/079811012 2016-03-16 12:24:37.000000000 +0000
@@ -235,7 +235,6 @@
upstream default-echoheadersx {
least_conn;
- server 10.2.112.124:5000;
server 10.2.208.50:5000;
}
I0316 12:24:37.610073 1 command.go:69] change in configuration detected. Reloading...
```
- `--v=3` shows details about the service, Ingress rule and endpoint changes, and dumps the nginx configuration in JSON format
- `--v=5` configures NGINX in [debug mode](http://nginx.org/en/docs/debugging_log.html)
## Limitations
TODO

View file

@ -36,7 +36,7 @@ import (
"k8s.io/kubernetes/pkg/util/intstr"
"k8s.io/kubernetes/pkg/watch"
"k8s.io/contrib/ingress/controllers/nginx-third-party/nginx"
"k8s.io/contrib/ingress/controllers/nginx/nginx"
)
const (

View file

@ -0,0 +1,57 @@
The next command shows the defaults:
```
$ ./nginx-third-party-lb --dump-nginx-configuration
Example of ConfigMap to customize NGINX configuration:
data:
body-size: 1m
error-log-level: info
gzip-types: application/atom+xml application/javascript application/json application/rss+xml
application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json
application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon
text/css text/plain text/x-component
hts-include-subdomains: "true"
hts-max-age: "15724800"
keep-alive: "75"
max-worker-connections: "16384"
proxy-connect-timeout: "30"
proxy-read-timeout: "30"
proxy-real-ip-cidr: 0.0.0.0/0
proxy-send-timeout: "30"
server-name-hash-bucket-size: "64"
server-name-hash-max-size: "512"
ssl-buffer-size: 4k
ssl-ciphers: ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
ssl-protocols: TLSv1 TLSv1.1 TLSv1.2
ssl-session-cache: "true"
ssl-session-cache-size: 10m
ssl-session-tickets: "true"
ssl-session-timeout: 10m
use-gzip: "true"
use-hts: "true"
worker-processes: "8"
metadata:
name: custom-name
namespace: a-valid-namespace
```
For instance, if we want to change the timeouts we need to create a ConfigMap:
```
$ cat nginx-load-balancer-conf.yaml
apiVersion: v1
data:
  proxy-connect-timeout: "10"
  proxy-read-timeout: "120"
  proxy-send-timeout: "120"
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
```
```
$ kubectl create -f nginx-load-balancer-conf.yaml
```
Please check the example `rc-custom-configuration.yaml`.
If the ConfigMap is updated, NGINX will be reloaded with the new configuration.
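As a hedged sketch, the controller consumes this ConfigMap through the `--nginx-configmap` flag in its container args (as in `examples/default/rc-default.yaml`); `rc-custom-configuration.yaml` presumably does the same:
```
# container args fragment (sketch); the flag value is <namespace>/<configmap name>
args:
- /nginx-ingress-controller
- --default-backend-service=default/default-http-backend
- --nginx-configmap=default/nginx-load-balancer-conf
```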

View file

@ -1,7 +1,7 @@
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx-ingress-3rdpartycfg
name: nginx-ingress-controller
labels:
k8s-app: nginx-ingress-lb
spec:
@ -15,7 +15,7 @@ spec:
name: nginx-ingress-lb
spec:
containers:
- image: gcr.io/google_containers/nginx-third-party:0.4
- image: gcr.io/google_containers/nginx-ingress-controller:0.4
name: nginx-ingress-lb
imagePullPolicy: Always
livenessProbe:
@ -44,11 +44,7 @@ spec:
hostPort: 80
- containerPort: 443
hostPort: 4444
# we expose 8080 to access nginx stats in url /nginx-status
# this is optional
- containerPort: 8080
hostPort: 8081
args:
- /nginx-third-party-lb
- /nginx-ingress-controller
- --default-backend-service=default/default-http-backend
- --nginx-configmap=default/nginx-load-balancer-conf

View file

@ -0,0 +1,8 @@
In some cases it could be required to run the Ingress controller on all the nodes of the cluster.
Using a [DaemonSet](https://github.com/kubernetes/kubernetes/blob/master/docs/design/daemon.md) it is possible to do this.
The file `as-daemonset.yaml` contains an example:
```
kubectl create -f as-daemonset.yaml
```
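A hedged sketch of what `as-daemonset.yaml` likely contains, mirroring the replication controller example (the actual file may differ):
```
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ingress-lb
spec:
  template:
    metadata:
      labels:
        name: nginx-ingress-lb
    spec:
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.4
        name: nginx-ingress-lb
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 4444
        args:
        - /nginx-ingress-controller
        - --default-backend-service=default/default-http-backend
```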

View file

@ -9,7 +9,7 @@ spec:
name: nginx-ingress-lb
spec:
containers:
- image: gcr.io/google_containers/nginx-third-party::0.4
- image: gcr.io/google_containers/nginx-ingress-controller:0.4
name: nginx-ingress-lb
imagePullPolicy: Always
livenessProbe:
@ -38,10 +38,6 @@ spec:
hostPort: 80
- containerPort: 443
hostPort: 4444
# we expose 8080 to access nginx stats in url /nginx-status
# this is optional
- containerPort: 8080
hostPort: 8081
args:
- /nginx-third-party-lb
- /nginx-ingress-controller-lb
- --default-backend-service=default/default-http-backend

View file

@ -0,0 +1,76 @@
Create the Ingress controller
```
kubectl create -f rc-default.yaml
```
To test if everything is working correctly:
`curl -v http://<node IP address>:80/foo -H 'Host: foo.bar.com'`
You should see an output similar to
```
* Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
> GET /foo HTTP/1.1
> Host: foo.bar.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.9.8
< Date: Tue, 15 Dec 2015 13:45:13 GMT
< Content-Type: text/plain
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
<
CLIENT VALUES:
client_address=10.2.84.43
command=GET
real path=/foo
query=nil
request_version=1.1
request_uri=http://foo.bar.com:8080/foo
SERVER VALUES:
server_version=nginx: 1.9.7 - lua: 9019
HEADERS RECEIVED:
accept=*/*
connection=close
host=foo.bar.com
user-agent=curl/7.43.0
x-forwarded-for=172.17.4.1
x-forwarded-host=foo.bar.com
x-forwarded-server=foo.bar.com
x-real-ip=172.17.4.1
BODY:
* Connection #0 to host 172.17.4.99 left intact
```
If we try to get a non-existing route like `/foobar` we should see:
```
$ curl -v 172.17.4.99/foobar -H 'Host: foo.bar.com'
* Trying 172.17.4.99...
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
> GET /foobar HTTP/1.1
> Host: foo.bar.com
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.9.8
< Date: Tue, 15 Dec 2015 13:48:18 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< Vary: Accept-Encoding
<
default backend - 404
* Connection #0 to host 172.17.4.99 left intact
```
(this test checks that the default backend is working properly)
*By replacing the default backend with a custom one we can change the default error pages provided by nginx.*

View file

@ -1,7 +1,7 @@
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx-ingress-3rdpartycfg
name: nginx-ingress-controller
labels:
k8s-app: nginx-ingress-lb
spec:
@ -15,7 +15,7 @@ spec:
name: nginx-ingress-lb
spec:
containers:
- image: gcr.io/google_containers/nginx-third-party:0.4
- image: gcr.io/google_containers/nginx-ingress-controller:0.4
name: nginx-ingress-lb
imagePullPolicy: Always
livenessProbe:
@ -44,10 +44,6 @@ spec:
hostPort: 80
- containerPort: 443
hostPort: 4444
# we expose 8080 to access nginx stats in url /nginx-status
# this is optional
- containerPort: 8080
hostPort: 8081
args:
- /nginx-third-party-lb
- /nginx-ingress-controller
- --default-backend-service=default/default-http-backend

View file

@ -2,7 +2,7 @@
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx-ingress-3rdpartycfg
name: nginx-ingress-controller
labels:
k8s-app: nginx-ingress-lb
spec:
@ -15,17 +15,12 @@ spec:
k8s-app: nginx-ingress-lb
name: nginx-ingress-lb
spec:
# A secret for each nginx host that requires SSL. These secrets need to
# exist before hand, see README.
# The secret must contains 2 variables: cert and key.
# Follow this https://github.com/bprashanth/Ingress/blob/master/examples/sni/nginx/test.sh
# as a guide on how to generate secrets containing SSL certificates.
volumes:
- name: dhparam-example
secret:
secretName: dhparam-example
containers:
- image: gcr.io/google_containers/nginx-third-party:0.4
- image: gcr.io/google_containers/nginx-ingress-controller:0.4
name: nginx-ingress-lb
imagePullPolicy: Always
livenessProbe:
@ -59,10 +54,7 @@ spec:
volumeMounts:
- mountPath: /etc/nginx-ssl/dhparam
name: dhparam-example
# the flags tcp-services is required because Ingress do not support TCP rules
# if no namespace is specified "default" is used. Example: nodefaultns/example-go:8080
# containerPort 8080 is mapped to 9000 in the node.
args:
- /nginx-third-party-lb
- /nginx-ingress-controller
- --tcp-services-configmap=default/tcp-configmap-example
- --default-backend-service=default/default-http-backend

View file

@ -0,0 +1,74 @@
To configure which services and ports will be exposed:
```
kubectl create -f tcp-configmap-example.yaml
```
The file `tcp-configmap-example.yaml` uses a ConfigMap where the key is the external port to use and the value is
`<namespace/service name>:<service port>`.
It is possible to use either the number or the name of the port.
```
kubectl create -f rc-tcp.yaml
```
Now we can test the new service:
```
$ (sleep 1; echo "GET / HTTP/1.1"; echo "Host: 172.17.4.99:9000"; echo;echo;sleep 2) | telnet 172.17.4.99 9000
Trying 172.17.4.99...
Connected to 172.17.4.99.
Escape character is '^]'.
HTTP/1.1 200 OK
Server: nginx/1.9.7
Date: Tue, 15 Dec 2015 14:46:28 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Connection: keep-alive
f
CLIENT VALUES:
1a
client_address=10.2.84.45
c
command=GET
c
real path=/
a
query=nil
14
request_version=1.1
25
request_uri=http://172.17.4.99:8080/
1
f
SERVER VALUES:
28
server_version=nginx: 1.9.7 - lua: 9019
1
12
HEADERS RECEIVED:
16
host=172.17.4.99:9000
6
BODY:
14
-no body in request-
0
```

View file

@ -1,7 +1,7 @@
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx-ingress-3rdpartycfg
name: nginx-ingress-controller
labels:
k8s-app: nginx-ingress-lb
spec:
@ -15,7 +15,7 @@ spec:
name: nginx-ingress-lb
spec:
containers:
- image: gcr.io/google_containers/nginx-third-party:0.4
- image: gcr.io/google_containers/nginx-ingress-controller:0.4
name: nginx-ingress-lb
imagePullPolicy: Always
livenessProbe:
@ -53,6 +53,6 @@ spec:
- containerPort: 9000
hostPort: 9000
args:
- /nginx-third-party-lb
- /nginx-ingress-controller
- --default-backend-service=default/default-http-backend
- --tcp-services-configmap=default/tcp-configmap-example

View file

View file

@ -1,7 +1,7 @@
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx-ingress-3rdpartycfg
name: nginx-ingress-controller
labels:
k8s-app: nginx-ingress-lb
spec:
@ -15,7 +15,7 @@ spec:
name: nginx-ingress-lb
spec:
containers:
- image: gcr.io/google_containers/nginx-third-party:0.4
- image: gcr.io/google_containers/nginx-ingress-controller:0.4
name: nginx-ingress-lb
imagePullPolicy: Always
livenessProbe:
@ -46,9 +46,6 @@ spec:
hostPort: 4444
- containerPort: 8080
hostPort: 9000
# to configure ssl_dhparam http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam
# use the dhparam.sh file to generate and mount a secret that containing the key dhparam.pem or
# create a configuration with the content of dhparam.pem in the field sslDHParam.
args:
- /nginx-third-party-lb
- /nginx-ingress-controller
- --default-backend-service=default/default-http-backend

View file

@ -27,7 +27,7 @@ import (
"github.com/golang/glog"
"github.com/spf13/pflag"
"k8s.io/contrib/ingress/controllers/nginx-third-party/nginx"
"k8s.io/contrib/ingress/controllers/nginx/nginx"
"k8s.io/kubernetes/pkg/api"
"k8s.io/kubernetes/pkg/client/unversioned"

View file

@ -16,7 +16,7 @@ events {
}
http {
#vhost_traffic_status_zone shared:vhost_traffic_status:10m;
{{ if $cfg.enableVtsStatus}}vhost_traffic_status_zone shared:vhost_traffic_status:{{ $cfg.vtsStatusZoneSize }};{{ end }}
# lus sectrion to return proper error codes when custom pages are used
lua_package_path '.?.lua;./etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/lua-resty-http/lib/?.lua;';
@ -75,12 +75,17 @@ http {
}
# trust http_x_forwarded_proto headers correctly indicate ssl offloading
map $http_x_forwarded_proto $access_scheme {
map $http_x_forwarded_proto $pass_access_scheme {
default $http_x_forwarded_proto;
'' $scheme;
}
map $access_scheme $sts {
map $http_x_forwarded_proto $pass_forwarded_for {
default $http_x_forwarded_for;
'' $proxy_add_x_forwarded_for;
}
map $pass_access_scheme $sts {
'https' 'max-age={{ $cfg.htsMaxAge }}{{ if $cfg.htsIncludeSubdomains }}; includeSubDomains{{ end }}; preload';
}
@ -150,6 +155,14 @@ http {
return 200;
}
location /nginx_status {
allow 127.0.0.1;
deny all;
access_log off;
stub_status on;
}
{{ template "CUSTOM_ERRORS" $cfg }}
}
@ -167,6 +180,9 @@ http {
{{ if $server.SSL }}listen 443 ssl http2;
ssl_certificate {{ $server.SSLCertificate }};
ssl_certificate_key {{ $server.SSLCertificateKey }};{{ end }}
{{ if $cfg.enableVtsStatus }}
vhost_traffic_status_filter_by_set_key {{ $server.Name }} application::*;
{{ end }}
server_name {{ $server.Name }};
@ -186,10 +202,10 @@ http {
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-For $pass_forwarded_for;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Server $host;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_connect_timeout {{ $cfg.proxyConnectTimeout }}s;
proxy_send_timeout {{ $cfg.proxySendTimeout }}s;
@ -210,7 +226,6 @@ http {
# default server, including healthcheck
server {
listen 8080 default_server{{ if $cfg.useProxyProtocol }} proxy_protocol{{ end }} reuseport;
#vhost_traffic_status_filter_by_host on;
location /healthz {
access_log off;
@ -222,11 +237,14 @@ http {
proxy_pass http://127.0.0.1:10249/healthz;
}
location /nginx-status {
#vhost_traffic_status_display;
#vhost_traffic_status_display_format html;
location /nginx_status {
{{ if $cfg.enableVtsStatus }}
vhost_traffic_status_display;
vhost_traffic_status_display_format html;
{{ else }}
access_log off;
stub_status on;
{{ end }}
}
location / {

View file

@ -87,6 +87,13 @@ type nginxConfiguration struct {
// Sets the maximum allowed size of the client request body
BodySize string `structs:"body-size,omitempty"`
// EnableVtsStatus allows the replacement of the default status page with a third party module named
// nginx-module-vts - https://github.com/vozlt/nginx-module-vts
// By default this is disabled
EnableVtsStatus bool `structs:"enable-vts-status,omitempty"`
VtsStatusZoneSize string `structs:"vts-status-zone-size,omitempty"`
// http://nginx.org/en/docs/ngx_core_module.html#error_log
// Configures logging level [debug | info | notice | warn | error | crit | alert | emerg]
// Log levels above are listed in the order of increasing severity
@ -250,6 +257,7 @@ func newDefaultNginxCfg() nginxConfiguration {
UseProxyProtocol: false,
UseGzip: true,
WorkerProcesses: strconv.Itoa(runtime.NumCPU()),
VtsStatusZoneSize: "10m",
}
if glog.V(5) {