Add TLS section to GLBC docs, and BETA_LIMITATIONS
parent 4409bed106 · commit d0a15b1267 · 7 changed files with 539 additions and 18 deletions
@@ -26,7 +26,6 @@ http {
  server {
    listen 80;
    server_name {{$rule.Host}};
    resolver 127.0.0.1;
    {{ range $path := $rule.HTTP.Paths }}
    location {{$path.Path}} {
      proxy_set_header Host $host;
@@ -37,7 +36,7 @@ http {
)
```

-You can take a similar approach to denormalize the Ingress to a [haproxy config](https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/template.cfg) or use it to configure a cloud loadbalancer such as a [GCE L7](https://github.com/kubernetes/contrib/blob/master/Ingress/controllers/gce/README.md).
+You can take a similar approach to denormalize the Ingress to a [haproxy config](https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/template.cfg) or use it to configure a cloud loadbalancer such as a [GCE L7](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/README.md).

And here is the Ingress controller's control loop:
@@ -58,7 +57,7 @@ for {
```

All this is doing is:
-* List Ingresses, optionally you can watch for changes (see [GCE Ingress controller](https://github.com/kubernetes/contrib/blob/master/Ingress/controllers/gce/controller.go) for an example)
+* List Ingresses, optionally you can watch for changes (see [GCE Ingress controller](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/controller.go) for an example)
* Executes the template and writes results to `/etc/nginx/nginx.conf`
* Reloads nginx
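To make the shape of that loop concrete, here is a minimal, hypothetical Go sketch of such a list/template/reload cycle. It is not the code elided by the hunk above: `Ingress` and `listIngresses` are stand-ins for the real API types and client calls.

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"text/template"
	"time"
)

// Ingress is a stand-in for the real extensions/v1beta1 Ingress object.
type Ingress struct {
	Host, Path string
}

// listIngresses would normally hit the Kubernetes API; stubbed for the sketch.
func listIngresses() ([]Ingress, error) {
	return []Ingress{{Host: "foo.bar.com", Path: "/foo"}}, nil
}

func main() {
	tmpl := template.Must(template.New("nginx").Parse(
		"{{range .}}server { server_name {{.Host}}; location {{.Path}} { } }\n{{end}}"))
	for {
		ingresses, err := listIngresses()
		if err != nil {
			log.Printf("list: %v", err)
		} else if f, err := os.Create("/etc/nginx/nginx.conf"); err != nil {
			log.Printf("write config: %v", err)
		} else {
			// Execute the template and write the result to nginx's config.
			if err := tmpl.Execute(f, ingresses); err != nil {
				log.Printf("template: %v", err)
			}
			f.Close()
			// Reload nginx so it picks up the rewritten config.
			if out, err := exec.Command("nginx", "-s", "reload").CombinedOutput(); err != nil {
				log.Printf("reload: %v (%s)", err, out)
			}
		}
		time.Sleep(30 * time.Second)
	}
}
```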
@@ -95,7 +94,6 @@ http {
  server {
    listen 80;
    server_name foo.bar.com;
    resolver 127.0.0.1;

    location /foo {
      proxy_pass http://fooSvc;

@@ -104,7 +102,6 @@ http {
  server {
    listen 80;
    server_name bar.baz.com;
    resolver 127.0.0.1;

    location /bar {
      proxy_pass http://barSvc;
@@ -128,8 +125,8 @@ $ curl --resolve foo.bar.com:80:104.197.203.179 foo.bar.com/foo

## Future work

-This section can also bear the title "why anyone would want to write an Ingress controller instead of directly configuring Services". There is more to Ingress than webserver configuration. *Real* HA usually involves the configuration of gateways and packet forwarding devices, which most cloud providers allow you to do through an API. See the GCE Loadbalancer Controller, which is deployed as a [cluster addon](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc) in GCE and GKE clusters for more advanced Ingress configuration examples. Post 1.1 the Ingress resource will support at least the following:
-* TLS options (edge, passthrough, SNI etc)
+This section can also bear the title "why anyone would want to write an Ingress controller instead of directly configuring Services". There is more to Ingress than webserver configuration. *Real* HA usually involves the configuration of gateways and packet forwarding devices, which most cloud providers allow you to do through an API. See the GCE Loadbalancer Controller, which is deployed as a [cluster addon](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc) in GCE and GKE clusters for more advanced Ingress configuration examples. Post 1.2 the Ingress resource will support at least the following:
+* More TLS options (SNI, re-encrypt etc)
* L4 and L7 loadbalancing (it currently only supports HTTP rules)
* Ingress Rules that are not limited to a simple path regex (eg: redirect rules, session persistence)
controllers/gce/BETA_LIMITATIONS.md (new file, 180 lines)
@@ -0,0 +1,180 @@
# GLBC: Beta limitations

As of the Kubernetes 1.2 release, the GCE L7 Loadbalancer controller is still a *beta* product. We expect it to go GA in 1.3.

This is a list of beta limitations:

* [Firewalls](#creating-the-firewall-rule-for-glbc-health-checks): You must create the firewall rule required for GLBC's health checks to succeed.
* [UIDs](#running-multiple-loadbalanced-clusters-in-the-same-gce-project): If you're creating multiple clusters that will use Ingress within a single GCE project, you must assign a UID to GLBC so it doesn't stomp on resources from another cluster.
* [Health Checks](#health-checks): All Kubernetes services must serve a 200 page on '/', or whatever custom value you've specified through GLBC's `--health-check-path` argument.
* [IPs](#static-and-ephemeral-ips): Creating a simple HTTP Ingress will allocate an ephemeral IP. Creating an Ingress with a TLS section will allocate a static IP.
* [Latency](#latency): GLBC is not built for performance. Creating many Ingresses at a time can overwhelm it. It won't fall over, but will take its own time to churn through the Ingress queue.
* [Quota](#quota): By default, GCE projects are granted a quota of 3 Backend Services. This is insufficient for most Kubernetes clusters.
* [Oauth scopes](https://cloud.google.com/compute/docs/authentication): By default GKE/GCE clusters are granted "compute/rw" permissions. If you set up a cluster without these permissions, GLBC is useless and you should delete the controller as described in the [section below](#disabling-glbc). If you don't delete the controller it will keep restarting.
* [Default backends](https://cloud.google.com/compute/docs/load-balancing/http/url-map#url_map_simplest_case): All L7 Loadbalancers created by GLBC have a default backend. If you don't specify one in your Ingress, GLBC will assign the 404 default backend mentioned above.
* [Teardown](README.md#deletion): The recommended way to tear down a cluster with active Ingresses is to either delete each Ingress, or hit the `/delete-all-and-quit` endpoint on GLBC, before invoking a cluster teardown script (eg: kube-down.sh). You will have to manually clean up GCE resources through the [cloud console](https://cloud.google.com/compute/docs/console#access) or [gcloud CLI](https://cloud.google.com/compute/docs/gcloud-compute/) if you simply tear down the cluster with active Ingresses.

## Prerequisites

Before you can receive traffic through the GCE L7 Loadbalancer Controller you need:
* A working Kubernetes cluster >= 1.1
* At least 1 Kubernetes [NodePort Service](../../../../docs/user-guide/services.md#type-nodeport) (this is the endpoint for your Ingress)
* A single instance of the L7 Loadbalancer Controller pod (if you're using the default GCE setup, this should already be running in the `kube-system` namespace)
## Quota

GLBC is not aware of your GCE quota. As of this writing users get 3 [GCE Backend Services](https://cloud.google.com/compute/docs/load-balancing/http/backend-service) by default. If you plan on creating Ingresses for multiple Kubernetes Services, remember that each one requires a backend service, and request quota accordingly. If you fail to do so, the controller will poll periodically and grab the first free backend service slot it finds. You can view your quota:

```console
$ gcloud compute project-info describe --project myproject
```
See [GCE documentation](https://cloud.google.com/compute/docs/resource-quotas#checking_your_quota) for how to request more.
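If you only care about the backend-service quota, you can filter that output. The `BACKEND_SERVICES` metric name and the output shape below are assumptions based on typical `gcloud` quota listings:

```console
$ gcloud compute project-info describe --project myproject | grep -B 1 -A 1 BACKEND_SERVICES
- limit: 3.0
  metric: BACKEND_SERVICES
  usage: 1.0
```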
## Latency

It takes ~1m to spin up a loadbalancer (this includes acquiring the public IP), and ~5-6m before the GCE API starts health checking backends. So as far as latency goes, here's what to expect:

Assume one creates the following simple Ingress:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    # This will just loopback to the default backend of GLBC
    serviceName: default-http-backend
    servicePort: 80
```

* time, t=0
```console
$ kubectl get ing
NAME           RULE      BACKEND                   ADDRESS
test-ingress   -         default-http-backend:80
$ kubectl describe ing
No events.
```

* time, t=1m
```console
$ kubectl get ing
NAME           RULE      BACKEND                   ADDRESS
test-ingress   -         default-http-backend:80   130.211.5.27

$ kubectl describe ing
target-proxy:     k8s-tp-default-test-ingress
url-map:          k8s-um-default-test-ingress
backends:         {"k8s-be-32342":"UNKNOWN"}
forwarding-rule:  k8s-fw-default-test-ingress
Events:
  FirstSeen  LastSeen  Count  From                        SubobjectPath  Reason   Message
  ─────────  ────────  ─────  ────                        ─────────────  ──────   ───────
  46s        46s       1      {loadbalancer-controller }                 Success  Created loadbalancer 130.211.5.27
```

* time, t=5m
```console
$ kubectl describe ing
target-proxy:     k8s-tp-default-test-ingress
url-map:          k8s-um-default-test-ingress
backends:         {"k8s-be-32342":"HEALTHY"}
forwarding-rule:  k8s-fw-default-test-ingress
Events:
  FirstSeen  LastSeen  Count  From                        SubobjectPath  Reason   Message
  ─────────  ────────  ─────  ────                        ─────────────  ──────   ───────
  46s        46s       1      {loadbalancer-controller }                 Success  Created loadbalancer 130.211.5.27
```
## Health checks

Currently, all service backends must respond with a 200 on '/'. The content does not matter. If they fail to do so they will be deemed unhealthy by the GCE L7. This limitation is because there are 2 sets of health checks:
1. From the kubernetes endpoints, taking the form of liveness/readiness probes
2. From the GCE L7, which periodically pings '/'

We really want (1) to control the health of an instance, but (2) is a GCE requirement. Ideally, we would point (2) at (1), but we still need (2) for pods that don't have a defined health check. This will probably get resolved when Ingress grows up.
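As a sketch, a backend pod that keeps both systems happy might look like this, assuming the app already returns 200 on `/` (the echoserver image below does; the other names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: echoheaders
  labels:
    app: echoheaders
spec:
  containers:
  - name: echoheaders
    image: gcr.io/google_containers/echoserver:1.3
    ports:
    - containerPort: 8080
    # (1) The Kubernetes-level check, used for endpoint membership.
    readinessProbe:
      httpGet:
        path: /
        port: 8080
    # (2) The GCE L7 independently pings '/' on the node port, so the
    # container must answer 200 there whether or not this probe exists.
```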
## Running multiple loadbalanced clusters in the same GCE project

If you're creating multiple clusters that will use Ingress within a single GCE project, you MUST assign a UID to GLBC so it doesn't stomp on resources from another cluster. You can do so by:
```console
$ kubectl get rc --namespace=kube-system
NAME                             DESIRED   CURRENT   AGE
elasticsearch-logging-v1         2         2         26m
heapster-v1.0.0                  1         1         26m
kibana-logging-v1                1         1         26m
kube-dns-v11                     1         1         26m
kubernetes-dashboard-v1.0.0      1         1         26m
l7-lb-controller-v0.6.0          1         1         26m
monitoring-influxdb-grafana-v3   1         1         26m

$ kubectl edit rc l7-lb-controller-v0.6.0 --namespace=kube-system
```

And modify the args passed to the controller:
```yaml
- args:
  - --default-backend-service=kube-system/default-http-backend
  - --sync-period=300s
  - --cluster-uid=uid
```

Saving the file should update the RC but not the existing pod. To update the pod as well, just delete it, and the RC will create a new one with the `--cluster-uid` arg.
```console
$ kubectl delete pod -l name=glbc --namespace=kube-system
pod "l7-lb-controller-v0.6.0-ud9ix" deleted
$ kubectl get pod --namespace=kube-system -l name=glbc -o yaml | grep cluster-uid
    - --cluster-uid=uid
```

## Creating the firewall rule for GLBC health checks

A default GKE/GCE cluster needs at least 1 firewall rule for GLBC to function. You can create it thus:
```console
$ gcloud compute firewall-rules create allow-130-211-0-0-22 \
  --source-ranges 130.211.0.0/22 \
  --target-tags $TAG \
  --allow tcp:$NODE_PORT
```

Where `130.211.0.0/22` is the source range of the GCE L7, `$NODE_PORT` is the node port your Service is exposed on, i.e.:
```console
$ export NODE_PORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services echoheadersx)
```

and `$TAG` is a list of GKE instance tags, i.e.:
```console
$ export TAG=$(basename `gcloud container clusters describe ${CLUSTER_NAME} --zone ${ZONE} | grep gke | awk '{print $2}'` | sed -e s/group/node/)
```

## Static and Ephemeral IPs

GCE has a concept of [ephemeral](https://cloud.google.com/compute/docs/instances-and-network#ephemeraladdress) and [static](https://cloud.google.com/compute/docs/instances-and-network#reservedaddress) IPs. A production website would always want a static IP, while ephemeral IPs are cheaper (both in terms of quota and cost), and are therefore better suited for experimentation.
* Creating an HTTP Ingress (i.e. an Ingress without a TLS section) allocates an ephemeral IP, because we don't believe HTTP is the right way to deploy an app.
* Creating an Ingress with a TLS section allocates a static IP, because GLBC assumes you mean business.
* Modifying an Ingress and adding a TLS section allocates a static IP, but the IP *will* change. This is a beta limitation.
* You can [promote](https://cloud.google.com/compute/docs/instances-and-network#promote_ephemeral_ip) an ephemeral to a static IP by hand, if required, as sketched below.
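If you do promote an IP by hand, the flow might look like the following sketch. The address name and IP are placeholders, and the `--global` flag is an assumption (GCE L7 forwarding rules use global addresses):

```console
$ # Find the ephemeral IP the forwarding rule is currently using.
$ gcloud compute forwarding-rules list
$ # Reserve that exact IP as a static (global) address.
$ gcloud compute addresses create my-ingress-ip --addresses 130.211.5.27 --global
```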
## Disabling GLBC

Since GLBC runs as a cluster addon, you cannot simply delete the RC. The easiest way to disable it is to do as follows:

* IFF you want to tear down existing L7 loadbalancers, hit the /delete-all-and-quit endpoint on the pod:

```console
$ kubectl get pods --namespace=kube-system
NAME                     READY     STATUS    RESTARTS   AGE
l7-lb-controller-7bb21   1/1       Running   0          1h
$ kubectl exec l7-lb-controller-7bb21 -c l7-lb-controller curl http://localhost:8081/delete-all-and-quit --namespace=kube-system
$ kubectl logs l7-lb-controller-7bb21 -c l7-lb-controller --follow
...
I1007 00:30:00.322528       1 main.go:160] Handled quit, awaiting pod deletion.
```

* Nullify the RC (but don't delete it or the addon controller will "fix" it for you)
```console
$ kubectl scale rc l7-lb-controller --replicas=0 --namespace=kube-system
```
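To verify the controller is really gone, you can reuse the `name=glbc` label selector from the UID section of this doc (a sketch; it assumes your controller pod carries that label):

```console
$ kubectl get pods --namespace=kube-system -l name=glbc
$ # No output: the RC still exists with 0 replicas, so the addon
$ # manager leaves it alone and no controller pod runs.
```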
@@ -2,9 +2,12 @@

GLBC is a GCE L7 load balancer controller that manages external loadbalancers configured through the Kubernetes Ingress API.

-## Disclaimer
+## A word to the wise

+Please read the [beta limitations](BETA_LIMITATIONS.md) doc before using this controller. In summary:
+
- This is a **work in progress**.
-- It relies on an experimental Kubernetes resource.
+- It relies on a beta Kubernetes resource.
- The loadbalancer controller pod is not aware of your GCE quota.

## Overview
|
@ -324,7 +327,7 @@ So simply delete the replication controller:
|
|||
$ kubectl get rc glbc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
|
||||
glbc default-http-backend gcr.io/google_containers/defaultbackend:1.0 k8s-app=glbc,version=v0.5 1 2m
|
||||
l7-lb-controller gcr.io/google_containers/glbc:0.5
|
||||
l7-lb-controller gcr.io/google_containers/glbc:0.6.0
|
||||
|
||||
$ kubectl delete rc glbc
|
||||
replicationcontroller "glbc" deleted
|
||||
@@ -362,6 +365,200 @@ Currently, all service backends must respond with a 200 on '/'. The content does
* From the GCE L7, which periodically pings '/'
We really want (1) to control the health of an instance but (2) is a GCE requirement. Ideally, we would point (2) at (1), but we still need (2) for pods that don't have a defined health check. This will probably get resolved when Ingress grows up.

## TLS

You can secure an Ingress by specifying a [secret](http://kubernetes.io/docs/user-guide/secrets) that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller does not support SNI, so it will ignore all but the first cert in the TLS configuration section. The TLS secret must contain keys named `tls.crt` and `tls.key`, holding the certificate and private key to use for TLS, eg:

```yaml
apiVersion: v1
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
kind: Secret
metadata:
  name: testsecret
  namespace: default
type: Opaque
```
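If you already have the certificate and key as files, one way to produce an equivalent secret without hand-encoding base64 is `kubectl create secret generic` (a sketch with illustrative paths; the https_example later in this doc uses a small Go program instead):

```console
$ kubectl create secret generic testsecret \
    --from-file=tls.crt=/tmp/tls.crt \
    --from-file=tls.key=/tmp/tls.key
```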
Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: testsecret
  backend:
    serviceName: s1
    servicePort: 80
```

This creates 2 GCE forwarding rules that use a single static IP. Both `:80` and `:443` will direct traffic to your backend, which serves HTTP requests on the target port mentioned in the Service associated with the Ingress.
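You can confirm both rules exist via gcloud. The names below follow the `k8s-fw-`/`k8s-fws-` convention visible in the Ingress annotations later in this doc, but the exact values shown are illustrative:

```console
$ gcloud compute forwarding-rules list
NAME                        REGION  IP_ADDRESS    IP_PROTOCOL  TARGET
k8s-fw-default-test--uid            130.211.5.76  TCP          k8s-tp-default-test--uid
k8s-fws-default-test--uid           130.211.5.76  TCP          k8s-tps-default-test--uid
```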
#### Redirecting HTTP to HTTPS

To redirect traffic from `:80` to `:443` you need to examine the `x-forwarded-proto` header inserted by the GCE L7, since the Ingress does not support redirect rules. In nginx, this is as simple as adding the following lines to your config:
```nginx
# Replace '_' with your hostname.
server_name _;
if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
}
```

And you can try it out with the [https_example](https_example/README.md):
```console
$ cd https_example
$ make keys secret
# The CName used here is specific to the service specified in nginx-app.yaml.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN=example.com/O=example.com"
Generating a 2048 bit RSA private key
...........+++
....................................................+++
writing new private key to '/tmp/tls.key'
-----
godep go run make_secret.go -crt /tmp/tls.crt -key /tmp/tls.key > /tmp/tls.json
```
This generates a secret in `/tmp/tls.json`. First create the secret:

```console
$ kubectl create -f /tmp/tls.json
$ kubectl describe secret tls-secret
Name:         tls-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:         Opaque

Data
====
tls.key: 1704 bytes
tls.crt: 1159 bytes
```

Then create the HTTPS app:
```console
$ kubectl create -f tls-app.yaml
$ kubectl get ing
NAME      RULE      BACKEND                ADDRESS   AGE
test      -         echoheaders-https:80             3s

...

$ kubectl describe ing
Name:             test
Namespace:        default
Address:          130.211.5.76
Default backend:  echoheaders-https:80 ()
TLS:
  tls-secret terminates
Rules:
  Host  Path  Backends
  ----  ----  --------
Annotations:
  url-map:                 k8s-um-default-test--uid
  backends:                {"k8s-be-31644--uid":"HEALTHY"}
  forwarding-rule:         k8s-fw-default-test--uid
  https-forwarding-rule:   k8s-fws-default-test--uid
  https-target-proxy:      k8s-tps-default-test--uid
  static-ip:               k8s-fw-default-test--uid
  target-proxy:            k8s-tp-default-test--uid
Events:
  FirstSeen  LastSeen  Count  From                       SubobjectPath  Type    Reason  Message
  ---------  --------  -----  ----                       -------------  ------  -------
  5m         5m        1      {loadbalancer-controller }                Normal  ADD     default/test
  4m         4m        1      {loadbalancer-controller }                Normal  CREATE  ip: 130.211.5.76

```

Now you can perform 3 curl tests (`:80`, `:443`, and `:80` following the redirect):
```console
$ curl 130.211.5.76
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.9.11</center>
</body>
</html>

$ curl https://130.211.5.76 -k
CLIENT VALUES:
...
x-forwarded-for=104.132.0.73, 130.211.5.76
x-forwarded-proto=https

$ curl -L 130.211.5.76 -k
CLIENT VALUES:
...
x-forwarded-for=104.132.0.73, 130.211.5.76
x-forwarded-proto=https
```

Note that the GCLB health checks *do not* get the `301` because they don't include `x-forwarded-proto`.
#### Blocking HTTP

You can block traffic on `:80` through an annotation. You might want to do this if all your clients are only going to hit the loadbalancer through HTTPS and you don't want to waste the extra GCE forwarding rule, eg:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.allowHTTP: "false"
spec:
  tls:
  # This assumes tls-secret exists.
  # To generate it run the make in this directory.
  - secretName: tls-secret
  backend:
    serviceName: echoheaders-https
    servicePort: 80
```

Upon describing it you should only see a single GCE forwarding rule:
```console
$ kubectl describe ing
Name:             test
Namespace:        default
Address:          130.211.10.121
Default backend:  echoheaders-https:80 (10.245.2.4:8080,10.245.3.4:8080)
TLS:
  tls-secret terminates
Rules:
  Host  Path  Backends
  ----  ----  --------
Annotations:
  https-target-proxy:      k8s-tps-default-test--uid
  url-map:                 k8s-um-default-test--uid
  backends:                {"k8s-be-31644--uid":"Unknown"}
  https-forwarding-rule:   k8s-fws-default-test--uid
Events:
  FirstSeen  LastSeen  Count  From                       SubobjectPath  Type    Reason  Message
  ---------  --------  -----  ----                       -------------  ------  -------
  13m        13m       1      {loadbalancer-controller }                Normal  ADD     default/test
  12m        12m       1      {loadbalancer-controller }                Normal  CREATE  ip: 130.211.10.121
```

And curling `:80` should just `404`:
```console
$ curl 130.211.10.121
...
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
<p><b>404.</b> <ins>That’s an error.</ins>

$ curl https://130.211.10.121 -k
...
SERVER VALUES:
server_version=nginx: 1.9.11 - lua: 10001
```

## Troubleshooting:

This controller is complicated because it exposes a tangled set of external resources as a single logical abstraction. It's recommended that you are at least *aware* of how one creates a GCE L7 [without a kubernetes Ingress](https://cloud.google.com/container-engine/docs/tutorials/http-balancer). If weird things happen, here are some basic debugging guidelines:
@@ -420,7 +617,7 @@ glbc-fjtlq             0/1       CrashLoopBackOff   17         1h
```
If you hit that it means the controller isn't even starting. Re-check your input flags, especially the required ones.

-## GCELBC Implementation Details
+## GLBC Implementation Details

For the curious, here is a high level overview of how the GCE LoadBalancer controller manages cloud resources.
@@ -434,7 +631,7 @@ Periodically, each pool checks that it has a valid connection to the next hop in

## Wishlist:

-* E2e, integration tests
+* More E2e, integration tests
* Better events
* Detect leaked resources even if the Ingress has been deleted when the controller isn't around
* Specify health checks (currently we just rely on kubernetes service/pod liveness probes and force pods to have a `/` endpoint that responds with 200 for GCE)
@@ -442,7 +639,5 @@ Periodically, each pool checks that it has a valid connection to the next hop in
* Async pool management of backends/L7s etc
-* Retry back-off when GCE Quota is done
+* GCE Quota integration
* HTTP support as the Ingress grows
* More aggressive resource sharing

[]()
controllers/gce/https_example/Makefile (new file, 32 lines)
@@ -0,0 +1,32 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

all:

KEY = /tmp/tls.key
CERT = /tmp/tls.crt
SECRET = /tmp/tls.json
HOST=example.com
NAME=tls-secret

keys:
	# The CName used here is specific to the service specified in nginx-app.yaml.
	openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $(KEY) -out $(CERT) -subj "/CN=$(HOST)/O=$(HOST)"

secret:
	godep go run make_secret.go -crt $(CERT) -key $(KEY) -name $(NAME) > $(SECRET)

clean:
	rm $(KEY)
	rm $(CERT)
controllers/gce/https_example/make_secret.go (new file, 71 lines)
@@ -0,0 +1,71 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// A small script that converts the given openssl public/private keys to
// a secret that it writes to stdout as json. The most common use case is to
// create a secret from self-signed certificates used to authenticate with
// a devserver. Usage: go run make_secret.go -crt ca.crt -key priv.key > secret.json
package main

import (
	"flag"
	"fmt"
	"io/ioutil"
	"log"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/apimachinery/registered"
	"k8s.io/kubernetes/pkg/runtime"

	// This installs the legacy v1 API
	_ "k8s.io/kubernetes/pkg/api/install"
)

// TODO:
// Add a -o flag that writes to the specified destination file.
// Teach the script to create crt and key if -crt and -key aren't specified.
var (
	crt  = flag.String("crt", "", "path to tls certificate.")
	key  = flag.String("key", "", "path to tls private key.")
	name = flag.String("name", "tls-secret", "name of the secret.")
)

func read(file string) []byte {
	b, err := ioutil.ReadFile(file)
	if err != nil {
		log.Fatalf("Cannot read file %v, %v", file, err)
	}
	return b
}

func main() {
	flag.Parse()
	if *crt == "" || *key == "" {
		log.Fatalf("Need to specify both -crt and -key")
	}
	tlsCrt := read(*crt)
	tlsKey := read(*key)
	secret := &api.Secret{
		ObjectMeta: api.ObjectMeta{
			Name: *name,
		},
		Data: map[string][]byte{
			api.TLSCertKey:       tlsCrt,
			api.TLSPrivateKeyKey: tlsKey,
		},
	}
	fmt.Printf(runtime.EncodeOrDie(api.Codecs.LegacyCodec(registered.EnabledVersions()...), secret))
}
controllers/gce/https_example/tls-app.yaml (new file, 46 lines)
@@ -0,0 +1,46 @@
apiVersion: v1
kind: Service
metadata:
  name: echoheaders-https
  labels:
    app: echoheaders-https
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders-https
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: echoheaders-https
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: echoheaders-https
    spec:
      containers:
      - name: echoheaders-https
        image: gcr.io/google_containers/echoserver:1.3
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  tls:
  # This assumes tls-secret exists.
  # To generate it run the make in this directory.
  - secretName: tls-secret
  backend:
    serviceName: echoheaders-https
    servicePort: 80
|
@ -24,18 +24,18 @@ metadata:
|
|||
name: l7-lb-controller
|
||||
labels:
|
||||
k8s-app: glbc
|
||||
version: v0.5.2
|
||||
version: v0.6.0
|
||||
spec:
|
||||
# There should never be more than 1 controller alive simultaneously.
|
||||
replicas: 1
|
||||
selector:
|
||||
k8s-app: glbc
|
||||
version: v0.5.2
|
||||
version: v0.6.0
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: glbc
|
||||
version: v0.5.2
|
||||
version: v0.6.0
|
||||
name: glbc
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 600
|
||||
|
@@ -61,7 +61,7 @@ spec:
        requests:
          cpu: 10m
          memory: 20Mi
-      - image: gcr.io/google_containers/glbc:0.5.2
+      - image: gcr.io/google_containers/glbc:0.6.0
        livenessProbe:
          httpGet:
            path: /healthz