From 52bc74d315449fd93069b5bf69c145feb25e91c2 Mon Sep 17 00:00:00 2001
From: Nick Sardo
Date: Thu, 13 Apr 2017 15:31:51 -0700
Subject: [PATCH] Updated more documentation

---
 controllers/gce/README.md                 | 32 ++++++++++--
 .../gce/examples/backside_https/app.yaml  | 50 +++++++++++++++++++
 .../gce/examples/health_checks/README.md  |  2 +-
 docs/faq/gce.md                           | 13 +++--
 4 files changed, 84 insertions(+), 13 deletions(-)
 create mode 100644 controllers/gce/examples/backside_https/app.yaml

diff --git a/controllers/gce/README.md b/controllers/gce/README.md
index aa9684344..cf45d037d 100644
--- a/controllers/gce/README.md
+++ b/controllers/gce/README.md
@@ -360,15 +360,14 @@ You just instructed the loadbalancer controller to quit, however if it had done
 
 #### Health checks
 
-Currently, all service backends must satisfy *either* of the following requirements to pass the HTTP health checks sent to it from the GCE loadbalancer:
+Currently, all service backends must satisfy *either* of the following requirements to pass the HTTP(S) health checks sent to them from the GCE loadbalancer:
 1. Respond with a 200 on '/'. The content does not matter.
 2. Expose an arbitrary url as a `readiness` probe on the pods backing the Service.
 
-The Ingress controller looks for a compatible readiness probe first, if it finds one, it adopts it as the GCE loadbalancer's HTTP health check. If there's no readiness probe, or the readiness probe requires special HTTP headers, or HTTPS, the Ingress controller points the GCE loadbalancer's HTTP health check at '/'. [This is an example](examples/health_checks/README.md) of an Ingress that adopts the readiness probe from the endpoints as its health check.
+The Ingress controller looks for a compatible readiness probe first; if it finds one, it adopts it as the GCE loadbalancer's HTTP(S) health check. If there's no readiness probe, or the readiness probe requires special HTTP headers, the Ingress controller points the GCE loadbalancer's HTTP health check at '/'. [This is an example](examples/health_checks/README.md) of an Ingress that adopts the readiness probe from the endpoints as its health check.
 
-## TLS
-
-You can secure an Ingress by specifying a [secret](http://kubernetes.io/docs/user-guide/secrets) that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller does not support SNI, so it will ignore all but the first cert in the TLS configuration section. The TLS secret must [contain keys](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L2696) named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, eg:
+## Frontend HTTPS
+For encrypted communication between the client and the load balancer, you can secure an Ingress by specifying a [secret](http://kubernetes.io/docs/user-guide/secrets) that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller does not support SNI, so it will ignore all but the first cert in the TLS configuration section. The TLS secret must [contain keys](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L2696) named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, eg:
 
 ```yaml
 apiVersion: v1
@@ -399,6 +398,29 @@ spec:
 This creates 2 GCE forwarding rules that use a single static ip.
 Both `:80` and `:443` will direct traffic to your backend, which serves HTTP requests on the target port mentioned in the Service associated with the Ingress.
 
+## Backend HTTPS
+For encrypted communication between the load balancer and your Kubernetes service, you need to decorate the service's port as expecting HTTPS. There's an alpha [Service annotation](examples/backside_https/app.yaml) for specifying the expected protocol per service port. Upon seeing HTTPS as the protocol, the ingress controller will assemble a GCP L7 load balancer with an HTTPS backend service and an HTTPS health check.
+
+The annotation value is a stringified JSON map of port name to "HTTPS" or "HTTP". Any port not named in the map is assumed to be "HTTP".
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-echo-svc
+  annotations:
+    service.alpha.kubernetes.io/app-protocols: '{"my-https-port":"HTTPS"}'
+  labels:
+    app: echo
+spec:
+  type: NodePort
+  ports:
+  - port: 443
+    protocol: TCP
+    name: my-https-port
+  selector:
+    app: echo
+```
+
 #### Redirecting HTTP to HTTPS
 
 To redirect traffic from `:80` to `:443` you need to examine the `x-forwarded-proto` header inserted by the GCE L7, since the Ingress does not support redirect rules. In nginx, this is as simple as adding the following lines to your config:
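The Frontend HTTPS hunk above references a TLS secret, but the hunk ends before the full example. As a rough sketch only (not part of this patch; the names and the base64 payloads are placeholders), a secret with the required `tls.crt`/`tls.key` keys and an Ingress consuming it could look like:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret                        # placeholder name
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>      # key must be named tls.crt; value is a placeholder
  tls.key: <base64-encoded private key>      # key must be named tls.key; value is a placeholder
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-tls-ingress                       # placeholder name
spec:
  tls:
  - secretName: my-tls-secret                # only the first entry is used; SNI is not supported
  backend:
    serviceName: my-svc                      # placeholder NodePort Service
    servicePort: 80
```

Create the secret first, then reference it from the Ingress `tls` section; the controller terminates TLS on port 443 as described above.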
diff --git a/controllers/gce/examples/backside_https/app.yaml b/controllers/gce/examples/backside_https/app.yaml
new file mode 100644
index 000000000..6a01803a7
--- /dev/null
+++ b/controllers/gce/examples/backside_https/app.yaml
@@ -0,0 +1,50 @@
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: my-echo-deploy
+spec:
+  replicas: 2
+  template:
+    metadata:
+      labels:
+        app: echo
+    spec:
+      containers:
+      - name: echoserver
+        image: nicksardo/echoserver:latest
+        imagePullPolicy: Always
+        ports:
+        - name: echo-443
+          containerPort: 443
+        # readinessProbe:         # Health check settings can be retrieved from an HTTPS readinessProbe as well
+        #   httpGet:
+        #     path: /healthcheck  # Custom health check path for testing
+        #     scheme: HTTPS
+        #     port: echo-443
+---
+apiVersion: v1
+kind: Service
+metadata:
+  name: my-echo-svc
+  annotations:
+    service.alpha.kubernetes.io/app-protocols: '{"my-https-port":"HTTPS"}' # Must map port-name to HTTPS for the GCP ingress controller
+  labels:
+    app: echo
+spec:
+  type: NodePort
+  ports:
+  - port: 12345 # The port value doesn't matter; the Ingress uses the Service's NodePort
+    targetPort: echo-443
+    protocol: TCP
+    name: my-https-port
+  selector:
+    app: echo
+---
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: my-echo-ingress
+spec:
+  backend:
+    serviceName: my-echo-svc
+    servicePort: my-https-port
diff --git a/controllers/gce/examples/health_checks/README.md b/controllers/gce/examples/health_checks/README.md
index a2d6e710a..e7a4b35bf 100644
--- a/controllers/gce/examples/health_checks/README.md
+++ b/controllers/gce/examples/health_checks/README.md
@@ -1,6 +1,6 @@
 # Simple HTTP health check example
 
-The GCE Ingress controller adopts the readiness probe from the matching endpoints, provided the readiness probe doesn't require HTTPS or special headers.
+The GCE Ingress controller adopts the readiness probe from the matching endpoints, provided the readiness probe doesn't require special headers.
 
 Create the following app:
 ```console
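The health check example above refers to an app whose readiness probe the controller adopts; the app itself lies outside this hunk. Purely as an illustrative sketch (image, path, and names are assumptions, not taken from the example), such a Deployment could look like:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: echoheaders                                      # illustrative name
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: echoheaders
    spec:
      containers:
      - name: echoheaders
        image: gcr.io/google_containers/echoserver:1.4   # assumed HTTP echo image
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz                               # the controller adopts this path for the GCE health check
            port: 8080
```

With a probe like this in place, the controller points the backend's GCE health check at `/healthz` instead of the default `/`.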
diff --git a/docs/faq/gce.md b/docs/faq/gce.md
index 392943c70..cf585ae46 100644
--- a/docs/faq/gce.md
+++ b/docs/faq/gce.md
@@ -42,7 +42,7 @@ Please check the following:
 
 1. Output of `kubectl describe`, as shown [here](README.md#i-created-an-ingress-and-nothing-happens-what-now)
 2. Do your Services all have a `NodePort`?
-3. Do your Services either serve a http 200 on `/`, or have a readiness probe
+3. Do your Services either serve an HTTP status code 200 on `/`, or have a readiness probe
    as described in [this section](#can-i-configure-gce-health-checks-through-the-ingress)?
 4. Do you have enough GCP quota?
@@ -68,8 +68,7 @@ Global Forwarding Rule -> TargetHTTPSProxy
 ```
 
 In addition to this pipeline:
-* Each Backend Service requires a HTTP health check to the NodePort of the
-  Service
+* Each Backend Service requires an HTTP or HTTPS health check to the NodePort of the Service
 * Each port on the Backend Service has a matching port on the Instance Group
 * Each port on the Backend Service is exposed through a firewall-rule open
   to the GCE LB IP ranges (`130.211.0.0/22` and `35.191.0.0/16`)
@@ -126,12 +125,12 @@ Please check the following:
 
 Currently health checks are not exposed through the Ingress resource, they're
 handled at the node level by Kubernetes daemons (kube-proxy and the kubelet).
-However the GCE HTTP lb still requires a HTTP health check to measure node
+However, the GCE L7 lb still requires an HTTP(S) health check to measure node
 health. By default, this health check points at `/` on the nodePort associated
 with a given backend. Note that the purpose of this health check is NOT to
 determine when endpoint pods are overloaded, but rather, to detect when a
 given node is incapable of proxying requests for the Service:nodePort
-alltogether. Overloaded endpoints are removed from the working set of a
+altogether. Overloaded endpoints are removed from the working set of a
 Service via readiness probes conducted by the kubelet.
 
 If `/` doesn't work for your application, you can have the Ingress controller
@@ -311,12 +310,12 @@ pointing to that Service's NodePort.
 Instance Group, these must be shared. There is 1 Ingress Instance Group per
 zone containing Kubernetes nodes.
 
-* HTTP Health Checks: currently the http health checks point at the NodePort
+* Health Checks: currently the health checks point at the NodePort
 of a BackendService. They don't *need* to be shared, but they are since
 BackendServices are shared.
 
 * Firewall rule: In a non-federated cluster there is a single firewall rule
-that covers HTTP health check traffic from the range of [GCE loadbalancer IPs](https://cloud.google.com/compute/docs/load-balancing/http/#troubleshooting)
+that covers health check traffic from the range of [GCE loadbalancer IPs](https://cloud.google.com/compute/docs/load-balancing/http/#troubleshooting)
 to Service nodePorts.
 
 Unique: