GCE Ingress docs update

Prashanth Balasubramanian 2016-07-06 13:23:38 -07:00
parent 9b762b7d54
commit 94ea4ab247
8 changed files with 294 additions and 149 deletions


@@ -4,9 +4,6 @@ As of the Kubernetes 1.2 release, the GCE L7 Loadbalancer controller is still a
This is a list of beta limitations:
* [Firewalls](#creating-the-firewall-rule-for-glbc-health-checks): You must create the firewall-rule required for GLBC's health checks to succeed.
* [UIDs](#running-multiple-loadbalanced-clusters-in-the-same-gce-project): If you're creating multiple clusters that will use Ingress within a single GCE project, you must assign a UID to GLBC so it doesn't stomp on resources from another cluster.
* [Health Checks](#health-checks): All Kubernetes services must serve a 200 page on '/', or whatever custom value you've specified through GLBC's `--health-check-path` argument.
* [IPs](#static-and-ephemeral-ips): Creating a simple HTTP Ingress will allocate an ephemeral IP. Creating an Ingress with a TLS section will allocate a static IP.
* [Latency](#latency): GLBC is not built for performance. Creating many Ingresses at a time can overwhelm it. It won't fall over, but will take its own time to churn through the Ingress queue.
* [Quota](#quota): By default, GCE projects are granted a quota of 3 Backend Services. This is insufficient for most Kubernetes clusters.
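You can check where a project stands against this quota with `gcloud`; a sketch, where `$PROJECT` is your GCE project ID and the output shape is illustrative:
```console
$ gcloud compute project-info describe --project $PROJECT | grep -B 1 -A 1 BACKEND_SERVICES
- limit: 3.0
  metric: BACKEND_SERVICES
  usage: 0.0
```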
@@ -87,67 +84,6 @@ Events:
```
## Health checks
Currently, all service backends must respond with a 200 on '/'. The content does not matter. If they fail to do so, they will be deemed unhealthy by the GCE L7. This limitation exists because there are 2 sets of health checks:
1. From the Kubernetes endpoints, taking the form of liveness/readiness probes
2. From the GCE L7, which periodically pings '/'
We really want (1) to control the health of an instance, but (2) is a GCE requirement. Ideally, we would point (2) at (1), but we still need (2) for pods that don't have a defined health check. This will probably get resolved when Ingress grows up.
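A quick way to check that a backend satisfies the GCE L7 is to curl its node port directly; a sketch, where `$NODE_IP` is any node's external IP and `$NODE_PORT` is the Service's node port:
```console
$ curl -s -o /dev/null -w "%{http_code}\n" http://$NODE_IP:$NODE_PORT/
200
```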
## Running multiple loadbalanced clusters in the same GCE project
If you're creating multiple clusters that will use Ingress within a single GCE project, you MUST assign a UID to GLBC so it doesn't stomp on resources from another cluster. You can do so by:
```console
$ kubectl get rc --namespace=kube-system
NAME                             DESIRED   CURRENT   AGE
elasticsearch-logging-v1         2         2         26m
heapster-v1.0.0                  1         1         26m
kibana-logging-v1                1         1         26m
kube-dns-v11                     1         1         26m
kubernetes-dashboard-v1.0.0      1         1         26m
l7-lb-controller-v0.6.0          1         1         26m
monitoring-influxdb-grafana-v3   1         1         26m
$ kubectl edit rc l7-lb-controller-v0.6.0 --namespace=kube-system
```
And modify the args passed to the controller:
```yaml
- args:
  - --default-backend-service=kube-system/default-http-backend
  - --sync-period=300s
  - --cluster-uid=uid
```
Saving the file should update the RC but not the existing pod. To update the pod, delete it, and the RC will create a new one with the `--cluster-uid` args.
```console
$ kubectl delete pod -l name=glbc --namespace=kube-system
pod "l7-lb-controller-v0.6.0-ud9ix" deleted
$ kubectl get pod --namespace=kube-system -l name=glbc -o yaml | grep cluster-uid
- --cluster-uid=uid
```
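The UID becomes a suffix on every GCE resource the controller creates, which is what keeps two clusters in the same project from colliding. You can observe this with `gcloud`; illustrative output, assuming a url-map created for a `test` Ingress in the `default` namespace:
```console
$ gcloud compute url-maps list
NAME                       DEFAULT_SERVICE
k8s-um-default-test--uid   backendServices/k8s-be-31644--uid
```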
## Creating the firewall rule for GLBC health checks
A default GKE/GCE cluster needs at least 1 firewall rule for GLBC to function. You can create it thus:
```console
$ gcloud compute firewall-rules create allow-130-211-0-0-22 \
--source-ranges 130.211.0.0/22 \
--target-tags $TAG \
--allow tcp:$NODE_PORT
```
Where `130.211.0.0/22` is the source range of the GCE L7, and `$NODE_PORT` is the node port your Service is exposed on, which you can look up with:
```console
$ kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services ${SERVICE_NAME}
```
and `$TAG` is an optional list of GKE instance tags, which you can derive with:
```console
$ kubectl get nodes | awk '{print $1}' | tail -n +2 | grep -Po 'gke-[0-9,a-z]+-[0-9,a-z]+-node' | uniq
```
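To confirm the rule exists, describe it by the name used above (a sketch):
```console
$ gcloud compute firewall-rules describe allow-130-211-0-0-22
```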
## Static and Ephemeral IPs
GCE has a concept of [ephemeral](https://cloud.google.com/compute/docs/instances-and-network#ephemeraladdress) and [static](https://cloud.google.com/compute/docs/instances-and-network#reservedaddress) IPs. A production website would always want a static IP, while ephemeral IPs are cheaper (both in terms of quota and cost), and are therefore better suited for experimentation.
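If you started with an ephemeral IP and decide you want to keep it, you can promote it to a static IP in place; a sketch, where `my-static-ip` is a name of your choosing and the address is your Ingress's current IP:
```console
$ gcloud compute addresses create my-static-ip --addresses 130.211.21.233 --global
```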
@@ -158,23 +94,29 @@ GCE has a concept of [ephemeral](https://cloud.google.com/compute/docs/instances
## Disabling GLBC
Setting the annotation `kubernetes.io/ingress.class` to any value other than "gce" or the empty string forces the GCE Ingress controller to ignore your Ingress. Do this if you wish to use one of the other Ingress controllers at the same time as the GCE controller, e.g.:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: tls-secret
  backend:
    serviceName: echoheaders-https
    servicePort: 80
```
As of Kubernetes 1.3, GLBC runs as a static pod on the master. If you want to totally disable it, you can ssh into the master node and delete the GLBC manifest file found at `/etc/kubernetes/manifests/glbc.manifest`. You can also disable it on GKE at cluster bring-up time through the `--disable-addons` flag, e.g.:
```console
$ gcloud container clusters create mycluster --network "default" --num-nodes 1 \
--machine-type n1-standard-2 --zone $ZONE \
--disable-addons HttpLoadBalancing \
--disk-size 50 --scopes storage-full
```
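For the static-pod route, a minimal sketch, assuming the default GCE master name `kubernetes-master`; the kubelet notices the manifest's removal and stops the pod:
```console
$ gcloud compute ssh kubernetes-master --zone $ZONE
$ sudo rm /etc/kubernetes/manifests/glbc.manifest
```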


@@ -360,28 +360,29 @@ You just instructed the loadbalancer controller to quit, however if it had done
#### Health checks
Currently, all service backends must satisfy *either* of the following requirements to pass the HTTP health checks sent to them from the GCE loadbalancer:
1. Respond with a 200 on '/'. The content does not matter.
2. Expose an arbitrary url as a `readiness` probe on the pods backing the Service.
The Ingress controller looks for a compatible readiness probe first; if it finds one, it adopts it as the GCE loadbalancer's HTTP health check. If there's no readiness probe, or the readiness probe requires special HTTP headers or HTTPS, the Ingress controller points the GCE loadbalancer's HTTP health check at '/'. [This is an example](examples/health_check/README.md) of an Ingress that adopts the readiness probe from the endpoints as its health check.
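To see which probe the controller will pick up, inspect the backing pods; a sketch assuming pods labeled `app=echoheaders` with an HTTP readiness probe on `/healthz`, as in the linked example:
```console
$ kubectl get pods -l app=echoheaders -o jsonpath='{.items[0].spec.containers[0].readinessProbe.httpGet.path}'
/healthz
```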
## TLS
You can secure an Ingress by specifying a [secret](http://kubernetes.io/docs/user-guide/secrets) that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller does not support SNI, so it will ignore all but the first cert in the TLS configuration section. The TLS secret must [contain keys](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L2696) named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, eg:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: testsecret
  namespace: default
type: Opaque
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
```
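If you're assembling such a secret by hand, the data values are just the base64 of the PEM files; a sketch using GNU `base64` (the `kubectl create secret tls` command shown below does this for you):
```console
$ base64 -w0 /tmp/tls.crt
$ base64 -w0 /tmp/tls.key
```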
Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS.
```yaml
apiVersion: extensions/v1beta1
@@ -409,95 +410,121 @@ if ($http_x_forwarded_proto = "http") {
}
```
Here's an example that demonstrates it. First, let's create a self-signed certificate valid for up to a year:
```console
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN=foobar.in"
$ kubectl create secret tls tls-secret --key=/tmp/tls.key --cert=/tmp/tls.crt
secret "tls-secret" created
```
Verify the secret:
```console
$ kubectl describe secret tls-secret
Name: tls-secret
Namespace: default
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
tls.key: 1704 bytes
tls.crt: 1159 bytes
```

Then the Services/Ingress to use it:

```yaml
$ echo "
apiVersion: v1
kind: Service
metadata:
  name: echoheaders-https
  labels:
    app: echoheaders-https
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders-https
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: echoheaders-https
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: echoheaders-https
    spec:
      containers:
      - name: echoheaders-https
        image: gcr.io/google_containers/echoserver:1.3
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  tls:
  - secretName: tls-secret
  backend:
    serviceName: echoheaders-https
    servicePort: 80
" | kubectl create -f -
```
This creates 2 GCE forwarding rules that use a single static IP. Port `80` redirects to port `443`, which terminates TLS and sends traffic to your backend.
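You can see both rules on the GCE side; a sketch, where the names follow the `k8s-fw-`/`k8s-fws-` pattern visible in the Ingress annotations below:
```console
$ gcloud compute forwarding-rules list
```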
```console
$ kubectl get ing
NAME      HOSTS     ADDRESS   PORTS     AGE
test      *                   80, 443   5s
$ kubectl describe ing
Name: test
Namespace: default
Address: 130.211.21.233
Default backend: echoheaders-https:80 (10.180.1.7:8080,10.180.2.3:8080)
TLS:
tls-secret terminates
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     echoheaders-https:80 (10.180.1.7:8080,10.180.2.3:8080)
Annotations:
url-map: k8s-um-default-test--7d2d86e772b6c246
backends: {"k8s-be-32327--7d2d86e772b6c246":"HEALTHY"}
forwarding-rule: k8s-fw-default-test--7d2d86e772b6c246
https-forwarding-rule: k8s-fws-default-test--7d2d86e772b6c246
https-target-proxy: k8s-tps-default-test--7d2d86e772b6c246
static-ip: k8s-fw-default-test--7d2d86e772b6c246
target-proxy: k8s-tp-default-test--7d2d86e772b6c246
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
12m 12m 1 {loadbalancer-controller } Normal ADD default/test
4m 4m 1 {loadbalancer-controller } Normal CREATE ip: 130.211.21.233
```
Testing reachability:
```console
$ curl 130.211.21.233 -kL
CLIENT VALUES:
client_address=10.240.0.4
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://130.211.21.233:8080/
...
x-forwarded-for=104.132.0.73, 130.211.21.233
x-forwarded-proto=https
$ curl --resolve foobar.in:443:130.211.21.233 https://foobar.in --cacert /tmp/tls.crt
CLIENT VALUES:
client_address=10.240.0.4
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://foobar.in:8080/
...
x-forwarded-for=104.132.0.73, 130.211.21.233
x-forwarded-proto=https
$ curl --resolve bitrot.in:443:130.211.21.233 https://bitrot.in --cacert /tmp/tls.crt
curl: (51) SSL: certificate subject name 'foobar.in' does not match target host name 'bitrot.in'
```
Note that the GCLB health checks *do not* get the `301` because they don't include `x-forwarded-proto`.
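To see just the redirect without following it, a sketch against the IP from this example:
```console
$ curl -I http://130.211.21.233/
HTTP/1.1 301 Moved Permanently
```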
@@ -617,6 +644,26 @@ glbc-fjtlq 0/1 CrashLoopBackOff 17 1h
```
If you hit that, it means the controller isn't even starting. Re-check your input flags, especially the required ones.
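A good first step is to pull the logs of the crashed container; a sketch, substituting your pod's name:
```console
$ kubectl logs glbc-fjtlq --previous
```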
## Creating the firewall rule for GLBC health checks
A default GKE/GCE cluster needs at least 1 firewall rule for GLBC to function. The Ingress controller should create this for you automatically. You can also create it thus:
```console
$ gcloud compute firewall-rules create allow-130-211-0-0-22 \
--source-ranges 130.211.0.0/22 \
--target-tags $TAG \
--allow tcp:$NODE_PORT
```
Where `130.211.0.0/22` is the source range of the GCE L7, and `$NODE_PORT` is the node port your Service is exposed on, which you can look up with:
```console
$ kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services ${SERVICE_NAME}
```
and `$TAG` is an optional list of GKE instance tags, which you can derive with:
```console
$ kubectl get nodes | awk '{print $1}' | tail -n +2 | grep -Po 'gke-[0-9,a-z]+-[0-9,a-z]+-node' | uniq
```
## GLBC Implementation Details
For the curious, here is a high level overview of how the GCE LoadBalancer controller manages cloud resources.


@@ -0,0 +1,74 @@
# Simple HTTP health check example
The GCE Ingress controller adopts the readiness probe from the matching endpoints, provided the readiness probe doesn't require HTTPS or special headers.
Create the following app:
```console
$ kubectl create -f health_check_app.yaml
replicationcontroller "echoheaders" created
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:31165) to serve traffic.
See http://releases.k8s.io/HEAD/docs/user-guide/services-firewalls.md for more details.
service "echoheadersx" created
You have exposed your service on an external port on all nodes in your
cluster. If you want to expose this service to the external internet, you may
need to set up firewall rules for the service port(s) (tcp:31020) to serve traffic.
See http://releases.k8s.io/HEAD/docs/user-guide/services-firewalls.md for more details.
service "echoheadersy" created
ingress "echomap" created
```
You should soon find an Ingress that is backed by a GCE Loadbalancer.
```console
$ kubectl describe ing echomap
Name: echomap
Namespace: default
Address: 107.178.255.228
Default backend: default-http-backend:80 (10.180.0.9:8080,10.240.0.2:8080)
Rules:
  Host         Path  Backends
  ----         ----  --------
  foo.bar.com
               /foo  echoheadersx:80 (<none>)
  bar.baz.com
               /bar  echoheadersy:80 (<none>)
               /foo  echoheadersx:80 (<none>)
Annotations:
target-proxy: k8s-tp-default-echomap--a9d60e8176d933ee
url-map: k8s-um-default-echomap--a9d60e8176d933ee
backends: {"k8s-be-31020--a9d60e8176d933ee":"HEALTHY","k8s-be-31165--a9d60e8176d933ee":"HEALTHY","k8s-be-31686--a9d60e8176d933ee":"HEALTHY"}
forwarding-rule: k8s-fw-default-echomap--a9d60e8176d933ee
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
17m 17m 1 {loadbalancer-controller } Normal ADD default/echomap
15m 15m 1 {loadbalancer-controller } Normal CREATE ip: 107.178.255.228
$ curl 107.178.255.228/foo -H 'Host:foo.bar.com'
CLIENT VALUES:
client_address=10.240.0.5
command=GET
real path=/foo
query=nil
request_version=1.1
request_uri=http://foo.bar.com:8080/foo
...
```
You can confirm the health check endpoint it's using in one of 2 ways:
* Through the cloud console: compute > health checks > lookup your health check. It takes the form `k8s-be-<nodePort>-<hash>`, where `<nodePort>` in the example above is 31165 and 31020, as shown by the kubectl output.
* Through gcloud: Run `gcloud compute http-health-checks list`
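For instance, illustrative output assuming the health checks adopted the `/healthz` readiness probe from the app above:
```console
$ gcloud compute http-health-checks list
NAME                             HOST   PORT   REQUEST_PATH
k8s-be-31165--a9d60e8176d933ee          31165  /healthz
k8s-be-31020--a9d60e8176d933ee          31020  /healthz
```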
## Limitations
A few points to note:
* The readiness probe must be exposed on the port matching the `servicePort` specified in the Ingress
* The readiness probe cannot have special requirements, like headers or HTTPS
* The probe timeouts are translated to GCE health check timeouts
* You must create the pods backing the endpoints with the given readiness probe. This *will not* work if you update the replication controller with a different readiness probe.


@@ -0,0 +1,82 @@
apiVersion: v1
kind: ReplicationController
metadata:
  name: echoheaders
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: echoheaders
    spec:
      containers:
      - name: echoheaders
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          periodSeconds: 1
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 10
---
apiVersion: v1
kind: Service
metadata:
  name: echoheadersx
  labels:
    app: echoheaders
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders
---
apiVersion: v1
kind: Service
metadata:
  name: echoheadersy
  labels:
    app: echoheaders
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echomap
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: echoheadersx
          servicePort: 80
  - host: bar.baz.com
    http:
      paths:
      - path: /bar
        backend:
          serviceName: echoheadersy
          servicePort: 80
      - path: /foo
        backend:
          serviceName: echoheadersx
          servicePort: 80