Add docs/examples for proxy_protocol

Christian Simon 2016-05-06 09:23:52 +01:00
parent 2db2324c6c
commit ca53e1efb4
5 changed files with 112 additions and 0 deletions

@@ -132,6 +132,14 @@ To disable this behavior use `hsts=false` in the NGINX ConfigMap.
NGINX provides the configuration option [ssl_buffer_size](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) to allow the optimization of the TLS record size. This improves the [TLS Time To First Byte](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/) (TTTFB). The default value in the Ingress controller is `4k` (the NGINX default is `16k`).
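For illustration, a ConfigMap entry along these lines would set it back to the NGINX default (a sketch only; the `ssl-buffer-size` key name is an assumption here, so check the controller's configuration options):
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  # assumed key name for the ssl_buffer_size directive; 16k is the plain NGINX default
  ssl-buffer-size: "16k"
```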
## Proxy Protocol
If you are using an L4 proxy to forward traffic to the NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you can use the [Proxy Protocol](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt) for forwarding traffic: it sends the original connection details before forwarding the actual TCP connection itself.
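In version 1 of the protocol this is a single human-readable line (terminated by CRLF) sent ahead of the payload, carrying the original source and destination addresses and ports, for example (illustrative addresses; see the linked spec for the exact format):
```
PROXY TCP4 203.0.113.7 192.0.2.10 56324 443
```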
Amongst others, [ELBs in AWS](http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html) and [HAProxy](http://www.haproxy.org/) support the Proxy Protocol.
Please check the [proxy-protocol](examples/proxy-protocol/) example.
## Exposing TCP services
Ingress does not support TCP services (yet). For this reason this Ingress controller uses a ConfigMap where the key is the external port to use and the value is

@@ -0,0 +1,34 @@
# Nginx ingress controller using Proxy Protocol
To use the Proxy Protocol in a load balancing solution, both the load balancer and its backend need to enable it.
To enable it for NGINX, you have to set the corresponding option in a [ConfigMap](nginx-configmap.yaml).
## HAProxy
This HAProxy snippet forwards HTTP(S) traffic to a two-worker Kubernetes cluster, with NGINX listening on the node ports defined in this example's [service](nginx-svc.yaml). The `send-proxy` option is what makes HAProxy speak the Proxy Protocol towards these backends, and `check-send-proxy` does the same for its health checks.
```
listen kube-nginx-http
    bind :::80 v6only
    bind 0.0.0.0:80
    mode tcp
    option tcplog
    balance leastconn
    server node1 <node-ip1>:32080 check-send-proxy inter 10s send-proxy
    server node2 <node-ip2>:32080 check-send-proxy inter 10s send-proxy

listen kube-nginx-https
    bind :::443 v6only
    bind 0.0.0.0:443
    mode tcp
    option tcplog
    balance leastconn
    server node1 <node-ip1>:32443 check-send-proxy inter 10s send-proxy
    server node2 <node-ip2>:32443 check-send-proxy inter 10s send-proxy
```
## ELBs in AWS
See this [documentation](http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html) for how to enable the Proxy Protocol on ELBs.
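Depending on your Kubernetes version, the AWS cloud provider may also be able to enable it for you when it provisions the ELB for a `LoadBalancer` Service, via the `service.beta.kubernetes.io/aws-load-balancer-proxy-protocol` annotation. This is a sketch, not part of this example's manifests, so verify support in your cluster first:
```
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  annotations:
    # ask the AWS cloud provider to enable Proxy Protocol towards all backend ports
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    k8s-app: nginx-ingress-lb
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```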

@@ -0,0 +1,6 @@
apiVersion: v1
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
kind: ConfigMap

@@ -0,0 +1,45 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
      name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.61
        name: nginx-ingress-lb
        imagePullPolicy: Always
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10249
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        # use downward API
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 80
        - containerPort: 443
        args:
        - /nginx-ingress-controller
        - --default-backend-service=default/default-http-backend
        - --nginx-configmap=default/nginx-ingress-controller

@@ -0,0 +1,19 @@
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 32080
    protocol: TCP
    name: http
  - port: 443
    targetPort: 443
    nodePort: 32443
    protocol: TCP
    name: https
  selector:
    k8s-app: nginx-ingress-lb