Merge branch 'copy-history' of https://github.com/aledbf/contrib into history
commit 55acaabbd8
1887 changed files with 1009164 additions and 0 deletions

Godeps/Godeps.json (generated, new file, 1606 lines)
File diff suppressed because it is too large.

Godeps/Readme (generated, new file, 5 lines)
This directory tree is generated automatically by godep.
Please do not edit.
See https://github.com/tools/godep for more information.

# Ingress Controllers

Configuring a webserver or loadbalancer is harder than it should be. Most webserver configuration files are very similar. There are some applications that have weird little quirks that tend to throw a wrench in things, but for the most part you can apply the same logic to them and achieve a desired result. The Ingress resource embodies this idea, and an Ingress controller is meant to handle all the quirks associated with a specific "class" of Ingress (be it a single instance of a loadbalancer, or a more complicated setup of frontends that provide GSLB, DDoS protection etc.).

## What is an Ingress Controller?

An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the apiserver's `/ingresses` endpoint for updates to the [Ingress resource](https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/ingress.md). Its job is to satisfy requests for ingress.

## Writing an Ingress Controller

Writing an Ingress controller is simple. By way of example, the [nginx controller](nginx-alpha) does the following:
* Polls until the apiserver reports a new Ingress
* Writes the nginx config file based on a [go text/template](https://golang.org/pkg/text/template/)
* Reloads nginx

Pay attention to how it denormalizes the Kubernetes Ingress object into an nginx config:
```go
const (
	nginxConf = `
events {
  worker_connections 1024;
}
http {
{{range $ing := .Items}}
{{range $rule := $ing.Spec.Rules}}
  server {
    listen 80;
    server_name {{$rule.Host}};
{{ range $path := $rule.HTTP.Paths }}
    location {{$path.Path}} {
      proxy_set_header Host $host;
      proxy_pass http://{{$path.Backend.ServiceName}}.{{$ing.Namespace}}.svc.cluster.local:{{$path.Backend.ServicePort}};
    }{{end}}
  }{{end}}{{end}}
}`
)
```
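
Since the snippet above is only a fragment, here is a minimal, self-contained sketch of the same denormalization idea. The simplified types (`ingress`, `rule`, `path`, etc.) are hypothetical stand-ins for the real Kubernetes objects, and the template is a trimmed copy of the one above; it renders mock data to stdout rather than `/etc/nginx/nginx.conf`:

```go
package main

import (
	"os"
	"text/template"
)

// Hypothetical, simplified stand-ins for the Kubernetes Ingress types
// referenced by the template ({{.Items}}, .Spec.Rules, .HTTP.Paths).
type backend struct{ ServiceName, ServicePort string }
type path struct {
	Path    string
	Backend backend
}
type rule struct {
	Host string
	HTTP struct{ Paths []path }
}
type ingress struct {
	Namespace string
	Spec      struct{ Rules []rule }
}
type ingressList struct{ Items []ingress }

// A trimmed version of the nginx template shown above.
const demoConf = `
http {
{{range $ing := .Items}}{{range $rule := $ing.Spec.Rules}}
  server {
    listen 80;
    server_name {{$rule.Host}};
{{range $path := $rule.HTTP.Paths}}
    location {{$path.Path}} {
      proxy_pass http://{{$path.Backend.ServiceName}}.{{$ing.Namespace}}.svc.cluster.local:{{$path.Backend.ServicePort}};
    }{{end}}
  }{{end}}{{end}}
}`

func main() {
	// Build one mock Ingress with a single host/path rule.
	ing := ingress{Namespace: "default"}
	r := rule{Host: "foo.bar.com"}
	r.HTTP.Paths = []path{{Path: "/foo", Backend: backend{ServiceName: "fooSvc", ServicePort: "80"}}}
	ing.Spec.Rules = []rule{r}

	tmpl := template.Must(template.New("nginx").Parse(demoConf))
	// Write the rendered nginx config to stdout instead of /etc/nginx/nginx.conf.
	if err := tmpl.Execute(os.Stdout, ingressList{Items: []ingress{ing}}); err != nil {
		panic(err)
	}
}
```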

You can take a similar approach to denormalize the Ingress to a [haproxy config](https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/template.cfg) or use it to configure a cloud loadbalancer such as a [GCE L7](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/README.md).

And here is the Ingress controller's control loop:

```go
for {
	rateLimiter.Accept()
	ingresses, err := ingClient.List(labels.Everything(), fields.Everything())
	if err != nil || reflect.DeepEqual(ingresses.Items, known.Items) {
		continue
	}
	if w, err := os.Create("/etc/nginx/nginx.conf"); err != nil {
		log.Fatalf("Failed to open %v: %v", nginxConf, err)
	} else if err := tmpl.Execute(w, ingresses); err != nil {
		log.Fatalf("Failed to write template %v", err)
	}
	shellOut("nginx -s reload")
}
```

All this is doing is:
* Lists Ingresses; optionally you can watch for changes (see the [GCE Ingress controller](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/controller/controller.go) for an example)
* Executes the template and writes the result to `/etc/nginx/nginx.conf`
* Reloads nginx

You can deploy this controller to a Kubernetes cluster by [creating an RC](nginx-alpha/rc.yaml). After doing so, if you were to create an Ingress such as:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: fooSvc
          servicePort: 80
  - host: bar.baz.com
    http:
      paths:
      - path: /bar
        backend:
          serviceName: barSvc
          servicePort: 80
```

where `fooSvc` and `barSvc` are 2 services running in your Kubernetes cluster, the controller would satisfy the Ingress by writing a configuration file to `/etc/nginx/nginx.conf`:
```nginx
events {
  worker_connections 1024;
}
http {
  server {
    listen 80;
    server_name foo.bar.com;

    location /foo {
      proxy_pass http://fooSvc;
    }
  }
  server {
    listen 80;
    server_name bar.baz.com;

    location /bar {
      proxy_pass http://barSvc;
    }
  }
}
```

You can then reach the `/foo` and `/bar` endpoints on the public IP of the VM the nginx-ingress pod landed on:
```
$ kubectl get pods -o wide
NAME                  READY     STATUS    RESTARTS   AGE       NODE
nginx-ingress-tk7dl   1/1       Running   0          3m        e2e-test-beeps-minion-15p3

$ kubectl get nodes e2e-test-beeps-minion-15p3 -o yaml | grep -i externalip -B 1
  - address: 104.197.203.179
    type: ExternalIP

$ curl --resolve foo.bar.com:80:104.197.203.179 foo.bar.com/foo
```

## Future work

This section could also bear the title "why anyone would want to write an Ingress controller instead of directly configuring Services". There is more to Ingress than webserver configuration. *Real* HA usually involves the configuration of gateways and packet forwarding devices, which most cloud providers allow you to do through an API. See the GCE Loadbalancer Controller, which is deployed as a [cluster addon](https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/cluster-loadbalancing/glbc) in GCE and GKE clusters, for more advanced Ingress configuration examples. Post 1.2 the Ingress resource will support at least the following:
* More TLS options (SNI, re-encrypt etc.)
* L4 and L7 loadbalancing (it currently only supports HTTP rules)
* Ingress Rules that are not limited to a simple path regex (eg: redirect rules, session persistence)

The Ingress resource is expected to be the way one configures "frontends" that handle user traffic for a Kubernetes cluster.

controllers/gce/BETA_LIMITATIONS.md (new file, 178 lines)
# GLBC: Beta limitations
|
||||
|
||||
As of the Kubernetes 1.2 release, the GCE L7 Loadbalancer controller is still a *beta* product. We expect it to go GA in 1.3.
|
||||
|
||||
This is a list of beta limitations:
|
||||
|
||||
* [IPs](#static-and-ephemeral-ips): Creating a simple HTTP Ingress will allocate an ephemeral IP. Creating an Ingress with a TLS section will allocate a static IP.
|
||||
* [Latency](#latency): GLBC is not built for performance. Creating many Ingresses at a time can overwhelm it. It won't fall over, but will take its own time to churn through the Ingress queue.
|
||||
* [Quota](#quota): By default, GCE projects are granted a quota of 3 Backend Services. This is insufficient for most Kubernetes clusters.
|
||||
* [OAuth scopes](https://cloud.google.com/compute/docs/authentication): By default GKE/GCE clusters are granted "compute/rw" permissions. If you set up a cluster without these permissions, GLBC is useless and you should delete the controller as described in the [section below](#disabling-glbc). If you don't delete the controller, it will keep restarting.
|
||||
* [Default backends](https://cloud.google.com/compute/docs/load-balancing/http/url-map#url_map_simplest_case): All L7 Loadbalancers created by GLBC have a default backend. If you don't specify one in your Ingress, GLBC will assign the 404 default backend mentioned above.
|
||||
* [Load Balancing Algorithms](#load-balancing-algorithms): The ingress controller doesn't support fine grained control over loadbalancing algorithms yet.
|
||||
* [Large clusters](#large-clusters): Ingress on GCE isn't supported on large (>1000 nodes), single-zone clusters.
|
||||
* [Teardown](README.md#deletion): The recommended way to tear down a cluster with active Ingresses is to either delete each Ingress, or hit the `/delete-all-and-quit` endpoint on GLBC, before invoking a cluster teardown script (eg: kube-down.sh). You will have to manually clean up GCE resources through the [cloud console](https://cloud.google.com/compute/docs/console#access) or [gcloud CLI](https://cloud.google.com/compute/docs/gcloud-compute/) if you simply tear down the cluster with active Ingresses.
|
||||
* [Changing UIDs](#changing-the-cluster-uid): You can change the UID used as a suffix for all your GCE cloud resources, but this requires you to delete existing Ingresses first.
|
||||
* [Cleaning up](#cleaning-up-cloud-resources): You can delete loadbalancers that older clusters might've leaked due to premature teardown through the GCE console.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
Before you can receive traffic through the GCE L7 Loadbalancer Controller you need:
|
||||
* A Working Kubernetes cluster >= 1.1
|
||||
* At least 1 Kubernetes [NodePort Service](../../../../docs/user-guide/services.md#type-nodeport) (this is the endpoint for your Ingress)
|
||||
* A single instance of the L7 Loadbalancer Controller pod, if you're running Kubernetes < 1.3 (the GCP ingress controller runs on the master in later versions)
|
||||
|
||||
## Quota
|
||||
|
||||
GLBC is not aware of your GCE quota. As of this writing users get 3 [GCE Backend Services](https://cloud.google.com/compute/docs/load-balancing/http/backend-service) by default. If you plan on creating Ingresses for multiple Kubernetes Services, remember that each one requires a backend service, and request quota accordingly. If you fail to do so, the controller will poll periodically and grab the first free backend service slot it finds. You can view your quota:
|
||||
|
||||
```console
|
||||
$ gcloud compute project-info describe --project myproject
|
||||
```
|
||||
See [GCE documentation](https://cloud.google.com/compute/docs/resource-quotas#checking_your_quota) for how to request more.
|
||||
|
||||
## Latency
|
||||
|
||||
It takes ~1m to spin up a loadbalancer (this includes acquiring the public ip), and ~5-6m before the GCE api starts healthchecking backends. So as far as latency goes, here's what to expect:
|
||||
|
||||
Assume one creates the following simple Ingress:
|
||||
```yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: test-ingress
|
||||
spec:
|
||||
backend:
|
||||
# This will just loopback to the default backend of GLBC
|
||||
serviceName: default-http-backend
|
||||
servicePort: 80
|
||||
```
|
||||
|
||||
* time, t=0
|
||||
```console
|
||||
$ kubectl get ing
|
||||
NAME RULE BACKEND ADDRESS
|
||||
test-ingress - default-http-backend:80
|
||||
$ kubectl describe ing
|
||||
No events.
|
||||
```
|
||||
|
||||
* time, t=1m
|
||||
```console
|
||||
$ kubectl get ing
|
||||
NAME RULE BACKEND ADDRESS
|
||||
test-ingress - default-http-backend:80 130.211.5.27
|
||||
|
||||
$ kubectl describe ing
|
||||
target-proxy: k8s-tp-default-test-ingress
|
||||
url-map: k8s-um-default-test-ingress
|
||||
backends: {"k8s-be-32342":"UNKNOWN"}
|
||||
forwarding-rule: k8s-fw-default-test-ingress
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Reason Message
|
||||
───────── ──────── ───── ──── ───────────── ────── ───────
|
||||
46s 46s 1 {loadbalancer-controller } Success Created loadbalancer 130.211.5.27
|
||||
```
|
||||
|
||||
* time, t=5m
|
||||
```console
|
||||
$ kubectl describe ing
|
||||
target-proxy: k8s-tp-default-test-ingress
|
||||
url-map: k8s-um-default-test-ingress
|
||||
backends: {"k8s-be-32342":"HEALTHY"}
|
||||
forwarding-rule: k8s-fw-default-test-ingress
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Reason Message
|
||||
───────── ──────── ───── ──── ───────────── ────── ───────
|
||||
46s 46s 1 {loadbalancer-controller } Success Created loadbalancer 130.211.5.27
|
||||
|
||||
```
|
||||
|
||||
## Static and Ephemeral IPs
|
||||
|
||||
GCE has a concept of [ephemeral](https://cloud.google.com/compute/docs/instances-and-network#ephemeraladdress) and [static](https://cloud.google.com/compute/docs/instances-and-network#reservedaddress) IPs. A production website would always want a static IP, while ephemeral IPs are cheaper (both in terms of quota and cost) and therefore better suited for experimentation.
|
||||
* Creating an HTTP Ingress (i.e. an Ingress without a TLS section) allocates an ephemeral IP, because we don't believe HTTP is the right way to deploy an app.
|
||||
* Creating an Ingress with a TLS section allocates a static IP, because GLBC assumes you mean business.
|
||||
* Modifying an Ingress and adding a TLS section allocates a static IP, but the IP *will* change. This is a beta limitation.
|
||||
* You can [promote](https://cloud.google.com/compute/docs/instances-and-network#promote_ephemeral_ip) an ephemeral to a static IP by hand, if required.
|
||||
|
||||
## Load Balancing Algorithms
|
||||
|
||||
Right now, a kube-proxy nodePort is a necessary condition for Ingress on GCP. This is because the cloud lb doesn't understand how to route directly to your pods. Incorporating kube-proxy and cloud lb algorithms so they cooperate toward a common goal is still a work in progress. If you really want fine grained control over the algorithm, you should deploy the nginx ingress controller.
|
||||
|
||||
## Large clusters
|
||||
|
||||
Ingress is not yet supported on single zone clusters of size > 1000 nodes ([issue](https://github.com/kubernetes/contrib/issues/1724)). If you'd like to use Ingress on a large cluster, spread it across 2 or more zones such that no single zone contains more than a 1000 nodes. This is because there is a [limit](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-managed-instances) to the number of instances one can add to a single GCE Instance Group. In a multi-zone cluster, each zone gets its own instance group.
|
||||
|
||||
## Disabling GLBC
|
||||
|
||||
Setting the annotation `kubernetes.io/ingress.class` to any value other than "gce" or the empty string, will force the GCE Ingress controller to ignore your Ingress. Do this if you wish to use one of the other Ingress controllers at the same time as the GCE controller, eg:
|
||||
|
||||
```yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: test
|
||||
annotations:
|
||||
kubernetes.io/ingress.class: "nginx"
|
||||
spec:
|
||||
tls:
|
||||
- secretName: tls-secret
|
||||
backend:
|
||||
serviceName: echoheaders-https
|
||||
servicePort: 80
|
||||
```
|
||||
|
||||
As of Kubernetes 1.3, GLBC runs as a static pod on the master. If you want to totally disable it, you can ssh into the master node and delete the GLBC manifest file found at `/etc/kubernetes/manifests/glbc.manifest`. You can also disable it on GKE at cluster bring-up time through the `disable-addons` flag, eg:
|
||||
|
||||
```console
|
||||
gcloud container clusters create mycluster --network "default" --num-nodes 1 \
|
||||
--machine-type n1-standard-2 --zone $ZONE \
|
||||
--disable-addons HttpLoadBalancing \
|
||||
--disk-size 50 --scopes storage-full
|
||||
```
|
||||
|
||||
## Changing the cluster UID
|
||||
|
||||
The Ingress controller configures itself to add the UID it stores in a configmap in the `kube-system` namespace.
|
||||
|
||||
```console
|
||||
$ kubectl --namespace=kube-system get configmaps
|
||||
NAME DATA AGE
|
||||
ingress-uid 1 12d
|
||||
|
||||
$ kubectl --namespace=kube-system get configmaps -o yaml
|
||||
apiVersion: v1
|
||||
items:
|
||||
- apiVersion: v1
|
||||
data:
|
||||
uid: UID
|
||||
kind: ConfigMap
|
||||
...
|
||||
```
|
||||
|
||||
You can pick a different UID, but this requires you to:
|
||||
|
||||
1. Delete existing Ingresses
|
||||
2. Edit the configmap using `kubectl edit`
|
||||
3. Recreate the same Ingress
|
||||
|
||||
After step 3 the Ingress should come up using the new UID as the suffix of all cloud resources. You can't simply change the UID if you have existing Ingresses, because
|
||||
renaming a cloud resource requires a delete/create cycle that the Ingress controller does not currently automate. Note that the UID in step 1 might be an empty string,
|
||||
if you had a working Ingress before upgrading to Kubernetes 1.3.
|
||||
|
||||
__A note on setting the UID__: The Ingress controller uses the token `--` to split a machine generated prefix from the UID itself. If the user supplied UID is found to contain `--` the controller will take the token after the last `--`, and use an empty string if it ends with `--`. For example, if you insert `foo--bar` as the UID, the controller will assume `bar` is the UID. You can either edit the configmap and set the UID to `bar` to match the controller, or delete existing Ingresses as described above, and reset it to a string bereft of `--`.
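
For illustration only, here is a tiny sketch of the splitting rule described above (this is not the controller's actual code, just the documented behavior):

```go
package main

import (
	"fmt"
	"strings"
)

// uidFromConfigMapValue returns everything after the last "--", per the
// rule described above; the whole value is the UID when "--" is absent.
func uidFromConfigMapValue(v string) string {
	const sep = "--"
	if i := strings.LastIndex(v, sep); i != -1 {
		return v[i+len(sep):] // "" when the value ends with "--"
	}
	return v
}

func main() {
	fmt.Println(uidFromConfigMapValue("foo--bar")) // "bar"
	fmt.Println(uidFromConfigMapValue("foo--"))    // ""
	fmt.Println(uidFromConfigMapValue("myuid"))    // "myuid"
}
```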
|
||||
|
||||
## Cleaning up cloud resources
|
||||
|
||||
If you deleted a GKE/GCE cluster without first deleting the associated Ingresses, the controller would not have deleted the associated cloud resources. If you find yourself in such a situation, you can delete the resources by hand:
|
||||
|
||||
1. Navigate to the [cloud console](https://console.cloud.google.com/) and click on the "Networking" tab, then choose "LoadBalancing"
|
||||
2. Find the loadbalancer you'd like to delete, it should have a name formatted as: k8s-um-ns-name--UUID
|
||||
3. Delete it, checking the boxes to also cascade the deletion down to associated resources (eg: backend-services)
|
||||
4. Switch to the "Compute Engine" tab, then choose "Instance Groups"
|
||||
5. Delete the Instance Group allocated for the leaked Ingress, it should have a name formatted as: k8s-ig-UUID
|
||||
|
controllers/gce/Dockerfile (new file, 34 lines)
# Copyright 2015 The Kubernetes Authors. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# TODO: use radial/busyboxplus:curl or alpine instead
|
||||
FROM ubuntu:14.04
|
||||
MAINTAINER Prashanth B <beeps@google.com>
|
||||
|
||||
# so apt-get doesn't complain
|
||||
ENV DEBIAN_FRONTEND=noninteractive
|
||||
RUN sed -i 's/^exit 101/exit 0/' /usr/sbin/policy-rc.d
|
||||
|
||||
# TODO: Move to using haproxy:1.5 image instead. Honestly,
|
||||
# that image isn't much smaller and the convenience of having
|
||||
# an ubuntu container for dev purposes trumps the tiny amounts
|
||||
# of disk and bandwidth we'd save in doing so.
|
||||
RUN \
|
||||
apt-get update && \
|
||||
apt-get install -y ca-certificates && \
|
||||
apt-get install -y curl && \
|
||||
rm -rf /var/lib/apt/lists/*
|
||||
|
||||
ADD glbc glbc
|
||||
ENTRYPOINT ["/glbc"]
|
controllers/gce/Makefile (new file, 17 lines)
all: push
|
||||
|
||||
# 0.0 shouldn't clobber any released builds
|
||||
TAG = 0.8.0
|
||||
PREFIX = gcr.io/google_containers/glbc
|
||||
|
||||
server:
|
||||
CGO_ENABLED=0 GOOS=linux godep go build -a -installsuffix cgo -ldflags '-w' -o glbc *.go
|
||||
|
||||
container: server
|
||||
docker build -t $(PREFIX):$(TAG) .
|
||||
|
||||
push: container
|
||||
gcloud docker push $(PREFIX):$(TAG)
|
||||
|
||||
clean:
|
||||
rm -f glbc
|
controllers/gce/README.md (new file, 690 lines)
# GLBC
|
||||
|
||||
GLBC is a GCE L7 load balancer controller that manages external loadbalancers configured through the Kubernetes Ingress API.
|
||||
|
||||
## A word to the wise
|
||||
|
||||
Please read the [beta limitations](BETA_LIMITATIONS.md) doc before using this controller. In summary:
|
||||
|
||||
- This is a **work in progress**.
|
||||
- It relies on a beta Kubernetes resource.
|
||||
- The loadbalancer controller pod is not aware of your GCE quota.
|
||||
|
||||
## Overview
|
||||
|
||||
__A reminder on GCE L7__: Google Compute Engine does not have a single resource that represents a L7 loadbalancer. When a user request comes in, it is first handled by the global forwarding rule, which sends the traffic to an HTTP proxy service that sends the traffic to a URL map that parses the URL to see which backend service will handle the request. Each backend service is assigned a set of virtual machine instances grouped into instance groups.
|
||||
|
||||
__A reminder on Services__: A Kubernetes Service defines a set of pods and a means by which to access them, such as single stable IP address and corresponding DNS name. This IP defaults to a cluster VIP in a private address range. You can direct ingress traffic to a particular Service by setting its `Type` to NodePort or LoadBalancer. NodePort opens up a port on *every* node in your cluster and proxies traffic to the endpoints of your service, while LoadBalancer allocates an L4 cloud loadbalancer.
|
||||
|
||||
### L7 Load balancing on Kubernetes
|
||||
|
||||
To achieve L7 loadbalancing through Kubernetes, we employ a resource called `Ingress`. The Ingress is consumed by this loadbalancer controller, which creates the following GCE resource graph:
|
||||
|
||||
[Global Forwarding Rule](https://cloud.google.com/compute/docs/load-balancing/http/global-forwarding-rules) -> [TargetHttpProxy](https://cloud.google.com/compute/docs/load-balancing/http/target-proxies) -> [Url Map](https://cloud.google.com/compute/docs/load-balancing/http/url-map) -> [Backend Service](https://cloud.google.com/compute/docs/load-balancing/http/backend-service) -> [Instance Group](https://cloud.google.com/compute/docs/instance-groups/)
|
||||
|
||||
The controller (glbc) manages the lifecycle of each component in the graph. It uses the Kubernetes resources as a spec for the desired state, and the GCE cloud resources as the observed state, and drives the observed to the desired. If an edge is disconnected, it fixes it. Each Ingress translates to a new GCE L7, and the rules on the Ingress become paths in the GCE Url Map. This allows you to route traffic to various backend Kubernetes Services through a single public IP, which is in contrast to `Type=LoadBalancer`, which allocates a public IP *per* Kubernetes Service. For this to work, the Kubernetes Service *must* have Type=NodePort.
|
||||
|
||||
### The Ingress
|
||||
|
||||
An Ingress in Kubernetes is a REST object, similar to a Service. A minimal Ingress might look like:
|
||||
|
||||
```yaml
|
||||
01. apiVersion: extensions/v1beta1
|
||||
02. kind: Ingress
|
||||
03. metadata:
|
||||
04. name: hostlessendpoint
|
||||
05. spec:
|
||||
06. rules:
|
||||
07. - http:
|
||||
08. paths:
|
||||
09. - path: /hostless
|
||||
10. backend:
|
||||
11. serviceName: test
|
||||
12. servicePort: 80
|
||||
```
|
||||
|
||||
POSTing this to the Kubernetes API server would result in glbc creating a GCE L7 that routes all traffic sent to `http://ip-of-loadbalancer/hostless` to :80 of the service named `test`. If the service doesn't exist yet, or doesn't have a nodePort, glbc will allocate an IP and wait till it does. Once the Service shows up, it will create the required path rules to route traffic to it.
|
||||
|
||||
__Lines 1-4__: Resource metadata used to tag GCE resources. For example, if you go to the console you would see a url map called: k8-fw-default-hostlessendpoint, where default is the namespace and hostlessendpoint is the name of the resource. The Kubernetes API server ensures that namespace/name is unique so there will never be any collisions.
|
||||
|
||||
__Lines 5-7__: Ingress Spec has all the information needed to configure a GCE L7. Most importantly, it contains a list of `rules`. A rule can take many forms, but the only rule relevant to glbc is the `http` rule.
|
||||
|
||||
__Lines 8-9__: Each http rule contains the following information: A host (eg: foo.bar.com, defaults to `*` in this example), a list of paths (eg: `/hostless`) each of which has an associated backend (`test:80`). Both the `host` and `path` must match the content of an incoming request before the L7 directs traffic to the `backend`.
|
||||
|
||||
__Lines 10-12__: A `backend` is a service:port combination. It selects a group of pods capable of servicing traffic sent to the path specified in the parent rule.
|
||||
|
||||
__Global Parameters__: For the sake of simplicity the example Ingress has no global parameters. However, one can specify a default backend (see examples below); in its absence, requests that don't match a path in the spec are sent to the default backend of glbc. Though glbc doesn't support HTTPS yet, security configs would also be global.
|
||||
|
||||
|
||||
## Load Balancer Management
|
||||
|
||||
You can manage a GCE L7 by creating/updating/deleting the associated Kubernetes Ingress.
|
||||
|
||||
### Creation
|
||||
|
||||
Before you can start creating Ingress you need to start up glbc. We can use the rc.yaml in this directory:
|
||||
```shell
|
||||
$ kubectl create -f rc.yaml
|
||||
replicationcontroller "glbc" created
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
glbc-6m6b6 2/2 Running 0 21s
|
||||
|
||||
```
|
||||
|
||||
A couple of things to note about this controller:
|
||||
* It needs a service with a node port to use as the default backend. This is the backend that's used when an Ingress does not specify the default.
|
||||
* It has an intentionally long terminationGracePeriod, this is only required with the --delete-all-on-quit flag (see [Deletion](#deletion))
|
||||
* Don't start 2 instances of the controller in a single cluster, they will fight each other.
|
||||
|
||||
The loadbalancer controller will watch for Services, Nodes and Ingress. Nodes already exist (the nodes in your cluster). We need to create the other 2. You can do so using the ingress-app.yaml in this directory.
|
||||
|
||||
A couple of things to note about the Ingress:
|
||||
* It creates a Replication Controller for a simple echoserver application, with 1 replica.
|
||||
* It creates 3 services for the same application pod: echoheaders[x, y, default]
|
||||
* It creates an Ingress with 2 hostnames and 3 endpoints (foo.bar.com{/foo} and bar.baz.com{/foo, /bar}) that access the given service
|
||||
|
||||
```shell
|
||||
$ kubectl create -f ingress-app.yaml
|
||||
$ kubectl get svc
|
||||
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
|
||||
echoheadersdefault 10.0.43.119 nodes 80/TCP app=echoheaders 16m
|
||||
echoheadersx 10.0.126.10 nodes 80/TCP app=echoheaders 16m
|
||||
echoheadersy 10.0.134.238 nodes 80/TCP app=echoheaders 16m
|
||||
Kubernetes 10.0.0.1 <none> 443/TCP <none> 21h
|
||||
|
||||
$ kubectl get ing
|
||||
NAME RULE BACKEND ADDRESS
|
||||
echomap - echoheadersdefault:80
|
||||
foo.bar.com
|
||||
/foo echoheadersx:80
|
||||
bar.baz.com
|
||||
/bar echoheadersy:80
|
||||
/foo echoheadersx:80
|
||||
```
|
||||
|
||||
You can tail the logs of the controller to observe its progress:
|
||||
```
|
||||
$ kubectl logs --follow glbc-6m6b6 l7-lb-controller
|
||||
I1005 22:11:26.731845 1 instances.go:48] Creating instance group k8-ig-foo
|
||||
I1005 22:11:34.360689 1 controller.go:152] Created new loadbalancer controller
|
||||
I1005 22:11:34.360737 1 controller.go:172] Starting loadbalancer controller
|
||||
I1005 22:11:34.380757 1 controller.go:206] Syncing default/echomap
|
||||
I1005 22:11:34.380763 1 loadbalancer.go:134] Syncing loadbalancers [default/echomap]
|
||||
I1005 22:11:34.380810 1 loadbalancer.go:100] Creating l7 default-echomap
|
||||
I1005 22:11:34.385161 1 utils.go:83] Syncing e2e-test-beeps-minion-ugv1
|
||||
...
|
||||
```
|
||||
|
||||
When it's done, it will update the status of the Ingress with the ip of the L7 it created:
|
||||
```shell
|
||||
$ kubectl get ing
|
||||
NAME RULE BACKEND ADDRESS
|
||||
echomap - echoheadersdefault:80 107.178.254.239
|
||||
foo.bar.com
|
||||
/foo echoheadersx:80
|
||||
bar.baz.com
|
||||
/bar echoheadersy:80
|
||||
/foo echoheadersx:80
|
||||
```
|
||||
|
||||
Go to your GCE console and confirm that the following resources have been created through the HTTPLoadbalancing panel:
|
||||
* A Global Forwarding Rule
|
||||
* An UrlMap
|
||||
* A TargetHTTPProxy
|
||||
* BackendServices (one for each Kubernetes nodePort service)
|
||||
* An Instance Group (with ports corresponding to the BackendServices)
|
||||
|
||||
The HTTPLoadBalancing panel will also show you if your backends have responded to the health checks; wait till they do. This can take a few minutes. If you see `Health status will display here once configuration is complete.` the L7 is still bootstrapping. Wait till you have `Healthy instances: X`. Even though the GCE L7 is driven by our controller, which notices the Kubernetes health checks of a pod, we still need to wait on the first GCE L7 health check to complete. Once your backends are up and healthy:
|
||||
|
||||
```shell
|
||||
$ curl --resolve foo.bar.com:80:107.178.245.239 http://foo.bar.com/foo
|
||||
CLIENT VALUES:
|
||||
client_address=('10.240.29.196', 56401) (10.240.29.196)
|
||||
command=GET
|
||||
path=/echoheadersx
|
||||
real path=/echoheadersx
|
||||
query=
|
||||
request_version=HTTP/1.1
|
||||
|
||||
SERVER VALUES:
|
||||
server_version=BaseHTTP/0.6
|
||||
sys_version=Python/3.4.3
|
||||
protocol_version=HTTP/1.0
|
||||
|
||||
HEADERS RECEIVED:
|
||||
Accept=*/*
|
||||
Connection=Keep-Alive
|
||||
Host=107.178.254.239
|
||||
User-Agent=curl/7.35.0
|
||||
Via=1.1 google
|
||||
X-Forwarded-For=216.239.45.73, 107.178.254.239
|
||||
X-Forwarded-Proto=http
|
||||
```
|
||||
|
||||
You can also edit `/etc/hosts` instead of using `--resolve`.
|
||||
|
||||
#### Updates
|
||||
|
||||
Say you don't want a default backend and you'd like to allow all traffic hitting your loadbalancer at /foo to reach your echoheaders backend service, not just the traffic for foo.bar.com. You can modify the Ingress Spec:
|
||||
|
||||
```yaml
|
||||
spec:
|
||||
rules:
|
||||
- http:
|
||||
paths:
|
||||
- path: /foo
|
||||
..
|
||||
```
|
||||
|
||||
and replace the existing Ingress (ignore errors about replacing the Service, we're using the same .yaml file but we only care about the Ingress):
|
||||
|
||||
```
|
||||
$ kubectl replace -f ingress-app.yaml
|
||||
ingress "echomap" replaced
|
||||
|
||||
$ curl http://107.178.254.239/foo
|
||||
CLIENT VALUES:
|
||||
client_address=('10.240.143.179', 59546) (10.240.143.179)
|
||||
command=GET
|
||||
path=/foo
|
||||
real path=/foo
|
||||
...
|
||||
|
||||
$ curl http://107.178.254.239/
|
||||
<pre>
|
||||
INTRODUCTION
|
||||
============
|
||||
This is an nginx webserver for simple loadbalancer testing. It works well
|
||||
for me but it might not have some of the features you want. If you would
|
||||
...
|
||||
```
|
||||
|
||||
A couple of things to note about this particular update:
|
||||
* An Ingress without a default backend inherits the backend of the Ingress controller.
|
||||
* An IngressRule without a host gets the wildcard. This is controller specific; some loadbalancer controllers do not respect anything but a DNS subdomain as the host. You *cannot* set the host to a regex.
|
||||
* You never want to delete then re-create an Ingress, as it will result in the controller tearing down and recreating the loadbalancer.
|
||||
|
||||
__Unexpected updates__: Since glbc constantly runs a control loop it won't allow you to break links that black hole traffic. An easy link to break is the url map itself, but you can also disconnect a target proxy from the urlmap, or remove an instance from the instance group (note this is different from *deleting* the instance, the loadbalancer controller will not recreate it if you do so). Modify one of the url links in the map to point to another backend through the GCE Control Panel UI, and wait till the controller sync (this happens as frequently as you tell it to, via the --resync-period flag). The same goes for the Kubernetes side of things, the API server will validate against obviously bad updates, but if you relink an Ingress so it points to the wrong backends the controller will blindly follow.
|
||||
|
||||
### Paths
|
||||
|
||||
Till now, our examples were simplified in that they hit an endpoint with a catch-all path regex. Most real world backends have subresources. Let's create a service to test how the loadbalancer handles paths:
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginxtest
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginxtest
|
||||
spec:
|
||||
containers:
|
||||
- name: nginxtest
|
||||
image: bprashanth/nginxtest:1.0
|
||||
ports:
|
||||
- containerPort: 80
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: nginxtest
|
||||
labels:
|
||||
app: nginxtest
|
||||
spec:
|
||||
type: NodePort
|
||||
ports:
|
||||
- port: 80
|
||||
targetPort: 80
|
||||
protocol: TCP
|
||||
name: http
|
||||
selector:
|
||||
app: nginxtest
|
||||
```
|
||||
|
||||
Running kubectl create against this manifest will give you a service with multiple endpoints:
|
||||
```shell
|
||||
$ kubectl get svc nginxtest -o yaml | grep -i nodeport:
|
||||
nodePort: 30404
|
||||
$ curl nodeip:30404/
|
||||
ENDPOINTS
|
||||
=========
|
||||
<a href="hostname">hostname</a>: An endpoint to query the hostname.
|
||||
<a href="stress">stress</a>: An endpoint to stress the host.
|
||||
<a href="fs/index.html">fs</a>: A file system for static content.
|
||||
|
||||
```
|
||||
You can put the nodeip:port into your browser and play around with the endpoints so you're familiar with what to expect. We will test the `/hostname` and `/fs/files/nginx.html` endpoints. Modify/create your Ingress:
|
||||
```yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: nginxtest-ingress
|
||||
spec:
|
||||
rules:
|
||||
- http:
|
||||
paths:
|
||||
- path: /hostname
|
||||
backend:
|
||||
serviceName: nginxtest
|
||||
servicePort: 80
|
||||
```
|
||||
|
||||
And check the endpoint (you will have to wait till the update takes effect, this could be a few minutes):
|
||||
```shell
|
||||
$ kubectl replace -f ingress.yaml
|
||||
$ curl loadbalancerip/hostname
|
||||
nginx-tester-pod-name
|
||||
```
|
||||
|
||||
Note what just happened, the endpoint exposes /hostname, and the loadbalancer forwarded the entire matching url to the endpoint. This means if you had '/foo' in the Ingress and tried accessing /hostname, your endpoint would've received /foo/hostname and not known how to route it. Now update the Ingress to access static content via the /fs endpoint:
|
||||
```
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: nginxtest-ingress
|
||||
spec:
|
||||
rules:
|
||||
- http:
|
||||
paths:
|
||||
- path: /fs/*
|
||||
backend:
|
||||
serviceName: nginxtest
|
||||
servicePort: 80
|
||||
```
|
||||
|
||||
As before, wait a while for the update to take effect, and try accessing `loadbalancerip/fs/files/nginx.html`.
|
||||
|
||||
#### Deletion
|
||||
|
||||
Most production loadbalancers live as long as the nodes in the cluster and are torn down when the nodes are destroyed. That said, there are plenty of use cases for deleting an Ingress, deleting a loadbalancer controller, or just purging external loadbalancer resources altogether. Deleting a loadbalancer controller pod will not affect the loadbalancers themselves; this way your backends won't suffer a loss of availability if the scheduler pre-empts your controller pod. Deleting a single loadbalancer is as easy as deleting an Ingress via kubectl:
|
||||
```shell
|
||||
$ kubectl delete ing echomap
|
||||
$ kubectl logs --follow glbc-6m6b6 l7-lb-controller
|
||||
I1007 00:25:45.099429 1 loadbalancer.go:144] Deleting lb default-echomap
|
||||
I1007 00:25:45.099432 1 loadbalancer.go:437] Deleting global forwarding rule k8-fw-default-echomap
|
||||
I1007 00:25:54.885823 1 loadbalancer.go:444] Deleting target proxy k8-tp-default-echomap
|
||||
I1007 00:25:58.446941 1 loadbalancer.go:451] Deleting url map k8-um-default-echomap
|
||||
I1007 00:26:02.043065 1 backends.go:176] Deleting backends []
|
||||
I1007 00:26:02.043188 1 backends.go:134] Deleting backend k8-be-30301
|
||||
I1007 00:26:05.591140 1 backends.go:134] Deleting backend k8-be-30284
|
||||
I1007 00:26:09.159016 1 controller.go:232] Finished syncing default/echomap
|
||||
```
|
||||
Note that it takes ~30 seconds to purge cloud resources; the API calls to create and delete are a one-time cost. GCE BackendServices are ref-counted and deleted by the controller as you delete Kubernetes Ingresses. This is not sufficient for cleanup, because you might have deleted the Ingress while glbc was down, in which case it would leak cloud resources. You can delete the glbc and purge cloud resources in 2 more ways:
|
||||
|
||||
__The dev/test way__: If you want to delete everything in the cloud when the loadbalancer controller pod dies, start it with the --delete-all-on-quit flag. When a pod is killed it's first sent a SIGTERM, followed by a grace period (set to 10minutes for loadbalancer controllers), followed by a SIGKILL. The controller pod uses this time to delete cloud resources. Be careful with --delete-all-on-quit, because if you're running a production glbc and the scheduler re-schedules your pod for some reason, it will result in a loss of availability. You can do this because your rc.yaml has:
|
||||
```yaml
|
||||
args:
|
||||
# auto quit requires a high termination grace period.
|
||||
- --delete-all-on-quit=true
|
||||
```
|
||||
|
||||
So simply delete the replication controller:
|
||||
```shell
|
||||
$ kubectl get rc glbc
|
||||
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
|
||||
glbc default-http-backend gcr.io/google_containers/defaultbackend:1.0 k8s-app=glbc,version=v0.5 1 2m
|
||||
l7-lb-controller gcr.io/google_containers/glbc:0.8.0
|
||||
|
||||
$ kubectl delete rc glbc
|
||||
replicationcontroller "glbc" deleted
|
||||
|
||||
$ kubectl get pods
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
glbc-6m6b6 1/1 Terminating 0 13m
|
||||
```
|
||||
|
||||
__The prod way__: If you didn't start the controller with `--delete-all-on-quit`, you can execute a GET on the `/delete-all-and-quit` endpoint. This endpoint is deliberately not exported.
|
||||
|
||||
```
|
||||
$ kubectl exec -it glbc-6m6b6 -- curl http://localhost:8081/delete-all-and-quit
|
||||
..Hangs till quit is done..
|
||||
|
||||
$ kubectl logs glbc-6m6b6 --follow
|
||||
I1007 00:26:09.159016 1 controller.go:232] Finished syncing default/echomap
|
||||
I1007 00:29:30.321419 1 controller.go:192] Shutting down controller queues.
|
||||
I1007 00:29:30.321970 1 controller.go:199] Shutting down cluster manager.
|
||||
I1007 00:29:30.321574 1 controller.go:178] Shutting down Loadbalancer Controller
|
||||
I1007 00:29:30.322378 1 main.go:160] Handled quit, awaiting pod deletion.
|
||||
I1007 00:29:30.321977 1 loadbalancer.go:154] Creating loadbalancers []
|
||||
I1007 00:29:30.322617 1 loadbalancer.go:192] Loadbalancer pool shutdown.
|
||||
I1007 00:29:30.322622 1 backends.go:176] Deleting backends []
|
||||
I1007 00:30:00.322528 1 main.go:160] Handled quit, awaiting pod deletion.
|
||||
I1007 00:30:30.322751 1 main.go:160] Handled quit, awaiting pod deletion
|
||||
```
|
||||
|
||||
You just instructed the loadbalancer controller to quit; however, if it had done so, the replication controller would've just created another pod, so it waits around till you delete the rc.
|
||||
|
||||
#### Health checks
|
||||
|
||||
Currently, all service backends must satisfy *either* of the following requirements to pass the HTTP health checks sent to it from the GCE loadbalancer:
|
||||
1. Respond with a 200 on '/'. The content does not matter.
|
||||
2. Expose an arbitrary url as a `readiness` probe on the pods backing the Service.
|
||||
|
||||
The Ingress controller looks for a compatible readiness probe first; if it finds one, it adopts it as the GCE loadbalancer's HTTP health check. If there's no readiness probe, if the readiness probe requires special HTTP headers, or if it uses HTTPS, the Ingress controller points the GCE loadbalancer's HTTP health check at '/'. [This is an example](examples/health_checks/README.md) of an Ingress that adopts the readiness probe from the endpoints as its health check.
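
The selection logic can be summarized with a short sketch. The `readinessProbe` type below is a hypothetical stand-in for the pod's HTTPGet readiness probe, not the real Kubernetes API type:

```go
package main

import "fmt"

// readinessProbe is a hypothetical stand-in for a pod's HTTPGet readiness
// probe; the real controller reads the Kubernetes Probe API type.
type readinessProbe struct {
	Path        string
	Scheme      string            // "HTTP" or "HTTPS"
	HTTPHeaders map[string]string // custom request headers, if any
}

// healthCheckPath mirrors the rule described above: adopt the probe's path
// only if it is a plain HTTP probe with no special headers, else use "/".
func healthCheckPath(probe *readinessProbe) string {
	if probe == nil || probe.Scheme == "HTTPS" || len(probe.HTTPHeaders) > 0 {
		return "/"
	}
	return probe.Path
}

func main() {
	fmt.Println(healthCheckPath(nil))                                              // "/"
	fmt.Println(healthCheckPath(&readinessProbe{Path: "/healthz", Scheme: "HTTP"})) // "/healthz"
}
```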
|
||||
|
||||
## TLS
|
||||
|
||||
You can secure an Ingress by specifying a [secret](http://kubernetes.io/docs/user-guide/secrets) that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller does not support SNI, so it will ignore all but the first cert in the TLS configuration section. The TLS secret must [contain keys](https://github.com/kubernetes/kubernetes/blob/master/pkg/api/types.go#L2696) named `tls.crt` and `tls.key` that contain the certificate and private key to use for TLS, eg:
|
||||
|
||||
```yaml
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: testsecret
|
||||
namespace: default
|
||||
type: Opaque
|
||||
data:
|
||||
tls.crt: base64 encoded cert
|
||||
tls.key: base64 encoded key
|
||||
```
|
||||
|
||||
Referencing this secret in an Ingress will tell the Ingress controller to secure the channel from the client to the loadbalancer using TLS.
|
||||
|
||||
```yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: no-rules-map
|
||||
spec:
|
||||
tls:
|
||||
- secretName: testsecret
|
||||
backend:
|
||||
serviceName: s1
|
||||
servicePort: 80
|
||||
```
|
||||
|
||||
This creates 2 GCE forwarding rules that use a single static ip. Both `:80` and `:443` will direct traffic to your backend, which serves HTTP requests on the target port mentioned in the Service associated with the Ingress.
|
||||
|
||||
#### Redirecting HTTP to HTTPS
|
||||
|
||||
To redirect traffic from `:80` to `:443` you need to examine the `x-forwarded-proto` header inserted by the GCE L7, since the Ingress does not support redirect rules. In nginx, this is as simple as adding the following lines to your config:
|
||||
```nginx
|
||||
# Replace '_' with your hostname.
|
||||
server_name _;
|
||||
if ($http_x_forwarded_proto = "http") {
|
||||
return 301 https://$host$request_uri;
|
||||
}
|
||||
```
|
||||
|
||||
Here's an example that demonstrates it. First, let's create a self-signed certificate valid for up to a year:
|
||||
|
||||
```console
|
||||
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN=foobar.com"
|
||||
$ kubectl create secret tls tls-secret --key=/tmp/tls.key --cert=/tmp/tls.crt
|
||||
secret "tls-secret" created
|
||||
```
|
||||
|
||||
Then the Services/Ingress to use it:
|
||||
|
||||
```yaml
|
||||
$ echo "
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: echoheaders-https
|
||||
labels:
|
||||
app: echoheaders-https
|
||||
spec:
|
||||
type: NodePort
|
||||
ports:
|
||||
- port: 80
|
||||
targetPort: 8080
|
||||
protocol: TCP
|
||||
name: http
|
||||
selector:
|
||||
app: echoheaders-https
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: echoheaders-https
|
||||
spec:
|
||||
replicas: 2
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: echoheaders-https
|
||||
spec:
|
||||
containers:
|
||||
- name: echoheaders-https
|
||||
image: gcr.io/google_containers/echoserver:1.3
|
||||
ports:
|
||||
- containerPort: 8080
|
||||
---
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: test
|
||||
spec:
|
||||
tls:
|
||||
- secretName: tls-secret
|
||||
backend:
|
||||
serviceName: echoheaders-https
|
||||
servicePort: 80
|
||||
" | kubectl create -f -
|
||||
```
|
||||
|
||||
This creates 2 GCE forwarding rules that use a single static ip. Port `80` redirects to port `443` which terminates TLS and sends traffic to your backend.
|
||||
|
||||
```console
|
||||
$ kubectl get ing
|
||||
NAME HOSTS ADDRESS PORTS AGE
|
||||
test * 80, 443 5s
|
||||
|
||||
$ kubectl describe ing
|
||||
Name: test
|
||||
Namespace: default
|
||||
Address: 130.211.21.233
|
||||
Default backend: echoheaders-https:80 (10.180.1.7:8080,10.180.2.3:8080)
|
||||
TLS:
|
||||
tls-secret terminates
|
||||
Rules:
|
||||
Host Path Backends
|
||||
---- ---- --------
|
||||
* * echoheaders-https:80 (10.180.1.7:8080,10.180.2.3:8080)
|
||||
Annotations:
|
||||
url-map: k8s-um-default-test--7d2d86e772b6c246
|
||||
backends: {"k8s-be-32327--7d2d86e772b6c246":"HEALTHY"}
|
||||
forwarding-rule: k8s-fw-default-test--7d2d86e772b6c246
|
||||
https-forwarding-rule: k8s-fws-default-test--7d2d86e772b6c246
|
||||
https-target-proxy: k8s-tps-default-test--7d2d86e772b6c246
|
||||
static-ip: k8s-fw-default-test--7d2d86e772b6c246
|
||||
target-proxy: k8s-tp-default-test--7d2d86e772b6c246
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
|
||||
--------- -------- ----- ---- ------------- -------- ------ -------
|
||||
12m 12m 1 {loadbalancer-controller } Normal ADD default/test
|
||||
4m 4m 1 {loadbalancer-controller } Normal CREATE ip: 130.211.21.233
|
||||
```
|
||||
|
||||
Testing reachability:
|
||||
```console
|
||||
$ curl 130.211.21.233 -kL
|
||||
CLIENT VALUES:
|
||||
client_address=10.240.0.4
|
||||
command=GET
|
||||
real path=/
|
||||
query=nil
|
||||
request_version=1.1
|
||||
request_uri=http://130.211.21.233:8080/
|
||||
...
|
||||
|
||||
$ curl --resolve foobar.in:443:130.211.21.233 https://foobar.in --cacert /tmp/tls.crt
|
||||
CLIENT VALUES:
|
||||
client_address=10.240.0.4
|
||||
command=GET
|
||||
real path=/
|
||||
query=nil
|
||||
request_version=1.1
|
||||
request_uri=http://bitrot.com:8080/
|
||||
...
|
||||
|
||||
$ curl --resolve bitrot.in:443:130.211.21.233 https://foobar.in --cacert /tmp/tls.crt
|
||||
curl: (51) SSL: certificate subject name 'foobar.in' does not match target host name 'foobar.in'
|
||||
```
|
||||
|
||||
Note that the GCLB health checks *do not* get the `301` because they don't include `x-forwarded-proto`.
|
||||
|
||||
#### Blocking HTTP
|
||||
|
||||
You can block traffic on `:80` through an annotation. You might want to do this if all your clients are only going to hit the loadbalancer through https and you don't want to waste the extra GCE forwarding rule, eg:
|
||||
```yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: test
|
||||
annotations:
|
||||
kubernetes.io/ingress.allow-http: "false"
|
||||
spec:
|
||||
tls:
|
||||
# This assumes tls-secret exists.
|
||||
# To generate it run the make in this directory.
|
||||
- secretName: tls-secret
|
||||
backend:
|
||||
serviceName: echoheaders-https
|
||||
servicePort: 80
|
||||
```
|
||||
|
||||
Upon describing it you should only see a single GCE forwarding rule:
|
||||
```console
|
||||
$ kubectl describe ing
|
||||
Name: test
|
||||
Namespace: default
|
||||
Address: 130.211.10.121
|
||||
Default backend: echoheaders-https:80 (10.245.2.4:8080,10.245.3.4:8080)
|
||||
TLS:
|
||||
tls-secret terminates
|
||||
Rules:
|
||||
Host Path Backends
|
||||
---- ---- --------
|
||||
Annotations:
|
||||
https-target-proxy: k8s-tps-default-test--uid
|
||||
url-map: k8s-um-default-test--uid
|
||||
backends: {"k8s-be-31644--uid":"Unknown"}
|
||||
https-forwarding-rule: k8s-fws-default-test--uid
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
|
||||
--------- -------- ----- ---- ------------- -------- ------ -------
|
||||
13m 13m 1 {loadbalancer-controller } Normal ADD default/test
|
||||
12m 12m 1 {loadbalancer-controller } Normal CREATE ip: 130.211.10.121
|
||||
```
|
||||
|
||||
And curling `:80` should just `404`:
|
||||
```console
|
||||
$ curl 130.211.10.121
|
||||
...
|
||||
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
|
||||
<p><b>404.</b> <ins>That’s an error.</ins>
|
||||
|
||||
$ curl https://130.211.10.121 -k
|
||||
...
|
||||
SERVER VALUES:
|
||||
server_version=nginx: 1.9.11 - lua: 10001
|
||||
```
|
||||
|
||||
## Troubleshooting:
|
||||
|
||||
This controller is complicated because it exposes a tangled set of external resources as a single logical abstraction. It's recommended that you are at least *aware* of how one creates a GCE L7 [without a kubernetes Ingress](https://cloud.google.com/container-engine/docs/tutorials/http-balancer). If weird things happen, here are some basic debugging guidelines:
|
||||
|
||||
* Check loadbalancer controller pod logs via kubectl
|
||||
A typical sign of trouble is repeated retries in the logs:
|
||||
```shell
|
||||
I1006 18:58:53.451869 1 loadbalancer.go:268] Forwarding rule k8-fw-default-echomap already exists
|
||||
I1006 18:58:53.451955 1 backends.go:162] Syncing backends [30301 30284 30301]
|
||||
I1006 18:58:53.451998 1 backends.go:134] Deleting backend k8-be-30302
|
||||
E1006 18:58:57.029253 1 utils.go:71] Requeuing default/echomap, err googleapi: Error 400: The backendService resource 'projects/Kubernetesdev/global/backendServices/k8-be-30302' is already being used by 'projects/Kubernetesdev/global/urlMaps/k8-um-default-echomap'
|
||||
I1006 18:58:57.029336 1 utils.go:83] Syncing default/echomap
|
||||
```
|
||||
|
||||
This could be a bug or quota limitation. In the case of the former, please head over to slack or github.
|
||||
|
||||
* If you see a GET hanging, followed by a 502 with the following response:
|
||||
|
||||
```
|
||||
<html><head>
|
||||
<meta http-equiv="content-type" content="text/html;charset=utf-8">
|
||||
<title>502 Server Error</title>
|
||||
</head>
|
||||
<body text=#000000 bgcolor=#ffffff>
|
||||
<h1>Error: Server Error</h1>
|
||||
<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>
|
||||
<h2></h2>
|
||||
</body></html>
|
||||
```
|
||||
The loadbalancer is probably bootstrapping itself.
|
||||
|
||||
* If a GET responds with a 404 and the following response:
|
||||
```
|
||||
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
|
||||
<p><b>404.</b> <ins>That’s an error.</ins>
|
||||
<p>The requested URL <code>/hostless</code> was not found on this server. <ins>That’s all we know.</ins>
|
||||
```
|
||||
It means you have lost your IP somehow, or just typed in the wrong IP.
|
||||
|
||||
* If you see requests taking an abnormal amount of time, run the echoheaders pod and look for the client address
|
||||
```shell
|
||||
CLIENT VALUES:
|
||||
client_address=('10.240.29.196', 56401) (10.240.29.196)
|
||||
```
|
||||
|
||||
Then head over to the GCE node with internal ip 10.240.29.196 and check that the [Service is functioning](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/debugging-services.md) as expected. Remember that the GCE L7 is routing you through the NodePort service, and try to trace back.
|
||||
|
||||
* Check if you can access the backend service directly via nodeip:nodeport
|
||||
* Check the GCE console
|
||||
* Make sure you only have a single loadbalancer controller running
|
||||
* Make sure the initial GCE health checks have passed
|
||||
* A crash loop looks like:
|
||||
```shell
|
||||
$ kubectl get pods
|
||||
glbc-fjtlq 0/1 CrashLoopBackOff 17 1h
|
||||
```
|
||||
If you hit that it means the controller isn't even starting. Re-check your input flags, especially the required ones.
|
||||
|
||||
## Creating the firewall rule for GLBC health checks
|
||||
|
||||
A default GKE/GCE cluster needs at least 1 firewall rule for GLBC to function. The Ingress controller should create this for you automatically. You can also create it thus:
|
||||
```console
|
||||
$ gcloud compute firewall-rules create allow-130-211-0-0-22 \
|
||||
--source-ranges 130.211.0.0/22 \
|
||||
--target-tags $TAG \
|
||||
--allow tcp:$NODE_PORT
|
||||
```
|
||||
|
||||
Where `130.211.0.0/22` is the source range of the GCE L7, `$NODE_PORT` is the node port your Service is exposed on, i.e:
|
||||
```console
|
||||
$ kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services ${SERVICE_NAME}
|
||||
```
|
||||
|
||||
and `$TAG` is an optional list of GKE instance tags, i.e:
|
||||
```console
|
||||
$ kubectl get nodes | awk '{print $1}' | tail -n +2 | grep -Po 'gke-[0-9,a-z]+-[0-9,a-z]+-node' | uniq
|
||||
```
|
||||
|
||||
## GLBC Implementation Details
|
||||
|
||||
For the curious, here is a high level overview of how the GCE LoadBalancer controller manages cloud resources.
|
||||
|
||||
The controller manages cloud resources through a notion of pools. Each pool is the representation of the last known state of a logical cloud resource. Pools are periodically synced with the desired state, as reflected by the Kubernetes api. When you create a new Ingress, the following happens:
|
||||
* Create BackendServices for each Kubernetes backend in the Ingress, through the backend pool.
|
||||
* Add nodePorts for each BackendService to an Instance Group with all the instances in your cluster, through the instance pool.
|
||||
* Create a UrlMap, TargetHttpProxy, Global Forwarding Rule through the loadbalancer pool.
|
||||
* Update the loadbalancer's urlmap according to the Ingress.
|
||||
|
||||
Periodically, each pool checks that it has a valid connection to the next hop in the above resource graph. So for example, the backend pool will check that each backend is connected to the instance group and that the node ports match, the instance group will check that all the Kubernetes nodes are a part of the instance group, and so on. Since Backends are a limited resource, they're shared (well, everything is limited by your quota, this applies doubly to backend services). This means you can set up N Ingresses exposing M services through different paths and the controller will only create M backends. When all the Ingresses are deleted, the backend pool GCs the backends.
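
As a rough conceptual sketch (hypothetical names; the real interfaces live under `controllers/gce/` and operate on richer types than plain ports), each pool can be thought of as exposing a sync step and a GC step that are run against the same desired state:

```go
package main

import "fmt"

// pool is a conceptual sketch of the idea described above, not the
// controller's actual interface.
type pool interface {
	// Sync drives the cloud resource toward the desired state derived from
	// the Kubernetes API (here reduced to a set of service node ports).
	Sync(desiredPorts []int64) error
	// GC deletes cloud resources no longer referenced by any Ingress.
	GC(desiredPorts []int64) error
}

// reconcile runs every pool against the same desired state, so each layer
// (backends, instance groups, loadbalancers) re-checks its link to the
// next hop in the resource graph.
func reconcile(pools []pool, desiredPorts []int64) error {
	for _, p := range pools {
		if err := p.Sync(desiredPorts); err != nil {
			return err
		}
		if err := p.GC(desiredPorts); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	fmt.Println(reconcile(nil, []int64{30301, 30284})) // <nil>: nothing to sync yet
}
```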
|
||||
|
||||
## Wishlist:
|
||||
|
||||
* More E2e, integration tests
|
||||
* Better events
|
||||
* Detect leaked resources even if the Ingress has been deleted when the controller isn't around
|
||||
* Specify health checks (currently we just rely on kubernetes service/pod liveness probes and force pods to have a `/` endpoint that responds with 200 for GCE)
|
||||
* Alleviate the NodePort requirement for Service Type=LoadBalancer.
|
||||
* Async pool management of backends/L7s etc
|
||||
* Retry back-off when GCE Quota is done
|
||||
* GCE Quota integration
|
||||
|
||||
|
controllers/gce/backends/backends.go (new file, 307 lines)
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package backends
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
"strconv"
|
||||
"time"
|
||||
|
||||
"k8s.io/kubernetes/pkg/util/sets"
|
||||
|
||||
"github.com/golang/glog"
|
||||
compute "google.golang.org/api/compute/v1"
|
||||
"k8s.io/contrib/ingress/controllers/gce/healthchecks"
|
||||
"k8s.io/contrib/ingress/controllers/gce/instances"
|
||||
"k8s.io/contrib/ingress/controllers/gce/storage"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
)
|
||||
|
||||
// Backends implements BackendPool.
|
||||
type Backends struct {
|
||||
cloud BackendServices
|
||||
nodePool instances.NodePool
|
||||
healthChecker healthchecks.HealthChecker
|
||||
snapshotter storage.Snapshotter
|
||||
// ignoredPorts are a set of ports excluded from GC, even
|
||||
// after the Ingress has been deleted. Note that invoking
|
||||
// a Delete() on these ports will still delete the backend.
|
||||
ignoredPorts sets.String
|
||||
namer *utils.Namer
|
||||
}
|
||||
|
||||
func portKey(port int64) string {
|
||||
return fmt.Sprintf("%d", port)
|
||||
}
|
||||
|
||||
// NewBackendPool returns a new backend pool.
|
||||
// - cloud: implements BackendServices and syncs backends with a cloud provider
|
||||
// - healthChecker: is capable of producing health checks for backends.
|
||||
// - nodePool: implements NodePool, used to create/delete new instance groups.
|
||||
// - namer: produces names for backends.
|
||||
// - ignorePorts: is a set of ports to avoid syncing/GCing.
|
||||
// - resyncWithCloud: if true, periodically syncs with cloud resources.
|
||||
func NewBackendPool(
|
||||
cloud BackendServices,
|
||||
healthChecker healthchecks.HealthChecker,
|
||||
nodePool instances.NodePool,
|
||||
namer *utils.Namer,
|
||||
ignorePorts []int64,
|
||||
resyncWithCloud bool) *Backends {
|
||||
|
||||
ignored := []string{}
|
||||
for _, p := range ignorePorts {
|
||||
ignored = append(ignored, portKey(p))
|
||||
}
|
||||
backendPool := &Backends{
|
||||
cloud: cloud,
|
||||
nodePool: nodePool,
|
||||
healthChecker: healthChecker,
|
||||
namer: namer,
|
||||
ignoredPorts: sets.NewString(ignored...),
|
||||
}
|
||||
if !resyncWithCloud {
|
||||
backendPool.snapshotter = storage.NewInMemoryPool()
|
||||
return backendPool
|
||||
}
|
||||
backendPool.snapshotter = storage.NewCloudListingPool(
|
||||
func(i interface{}) (string, error) {
|
||||
bs := i.(*compute.BackendService)
|
||||
if !namer.NameBelongsToCluster(bs.Name) {
|
||||
return "", fmt.Errorf("Unrecognized name %v", bs.Name)
|
||||
}
|
||||
port, err := namer.BePort(bs.Name)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
return port, nil
|
||||
},
|
||||
backendPool,
|
||||
30*time.Second,
|
||||
)
|
||||
return backendPool
|
||||
}
|
||||
|
||||
// Get returns a single backend.
|
||||
func (b *Backends) Get(port int64) (*compute.BackendService, error) {
|
||||
be, err := b.cloud.GetBackendService(b.namer.BeName(port))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
b.snapshotter.Add(portKey(port), be)
|
||||
return be, nil
|
||||
}
|
||||
|
||||
func (b *Backends) create(igs []*compute.InstanceGroup, namedPort *compute.NamedPort, name string) (*compute.BackendService, error) {
|
||||
// Create a new health check
|
||||
if err := b.healthChecker.Add(namedPort.Port); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
hc, err := b.healthChecker.Get(namedPort.Port)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// Create a new backend
|
||||
backend := &compute.BackendService{
|
||||
Name: name,
|
||||
Protocol: "HTTP",
|
||||
Backends: getBackendsForIGs(igs),
|
||||
// Api expects one, means little to kubernetes.
|
||||
HealthChecks: []string{hc.SelfLink},
|
||||
Port: namedPort.Port,
|
||||
PortName: namedPort.Name,
|
||||
}
|
||||
if err := b.cloud.CreateBackendService(backend); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return b.Get(namedPort.Port)
|
||||
}
|
||||
|
||||
// Add will get or create a Backend for the given port.
|
||||
func (b *Backends) Add(port int64) error {
|
||||
// We must track the port even if creating the backend failed, because
|
||||
// we might've created a health-check for it.
|
||||
be := &compute.BackendService{}
|
||||
defer func() { b.snapshotter.Add(portKey(port), be) }()
|
||||
|
||||
igs, namedPort, err := b.nodePool.AddInstanceGroup(b.namer.IGName(), port)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
be, _ = b.Get(port)
|
||||
if be == nil {
|
||||
glog.Infof("Creating backend for %d instance groups, port %v named port %v",
|
||||
len(igs), port, namedPort)
|
||||
be, err = b.create(igs, namedPort, b.namer.BeName(port))
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
// we won't find any igs till the node pool syncs nodes.
|
||||
if len(igs) == 0 {
|
||||
return nil
|
||||
}
|
||||
if err := b.edgeHop(be, igs); err != nil {
|
||||
return err
|
||||
}
|
||||
return err
|
||||
}
|
||||
|
||||
// Delete deletes the Backend for the given port.
|
||||
func (b *Backends) Delete(port int64) (err error) {
|
||||
name := b.namer.BeName(port)
|
||||
glog.Infof("Deleting backend %v", name)
|
||||
defer func() {
|
||||
if utils.IsHTTPErrorCode(err, http.StatusNotFound) {
|
||||
err = nil
|
||||
}
|
||||
if err == nil {
|
||||
b.snapshotter.Delete(portKey(port))
|
||||
}
|
||||
}()
|
||||
// Try deleting health checks even if a backend is not found.
|
||||
if err = b.cloud.DeleteBackendService(name); err != nil &&
|
||||
!utils.IsHTTPErrorCode(err, http.StatusNotFound) {
|
||||
return err
|
||||
}
|
||||
if err = b.healthChecker.Delete(port); err != nil &&
|
||||
!utils.IsHTTPErrorCode(err, http.StatusNotFound) {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// List lists all backends.
|
||||
func (b *Backends) List() ([]interface{}, error) {
|
||||
// TODO: for consistency with the rest of this sub-package this method
|
||||
// should return a list of backend ports.
|
||||
interList := []interface{}{}
|
||||
be, err := b.cloud.ListBackendServices()
|
||||
if err != nil {
|
||||
return interList, err
|
||||
}
|
||||
for i := range be.Items {
|
||||
interList = append(interList, be.Items[i])
|
||||
}
|
||||
return interList, nil
|
||||
}
|
||||
|
||||
func getBackendsForIGs(igs []*compute.InstanceGroup) []*compute.Backend {
|
||||
backends := []*compute.Backend{}
|
||||
for _, ig := range igs {
|
||||
backends = append(backends, &compute.Backend{Group: ig.SelfLink})
|
||||
}
|
||||
return backends
|
||||
}
|
||||
|
||||
// edgeHop checks the links of the given backend by executing an edge hop.
|
||||
// It fixes broken links.
|
||||
func (b *Backends) edgeHop(be *compute.BackendService, igs []*compute.InstanceGroup) error {
|
||||
beIGs := sets.String{}
|
||||
for _, beToIG := range be.Backends {
|
||||
beIGs.Insert(beToIG.Group)
|
||||
}
|
||||
igLinks := sets.String{}
|
||||
for _, igToBE := range igs {
|
||||
igLinks.Insert(igToBE.SelfLink)
|
||||
}
|
||||
if beIGs.IsSuperset(igLinks) {
|
||||
return nil
|
||||
}
|
||||
glog.Infof("Backend %v has a broken edge, expected igs %+v, current igs %+v",
|
||||
be.Name, igLinks.List(), beIGs.List())
|
||||
|
||||
newBackends := []*compute.Backend{}
|
||||
for _, b := range getBackendsForIGs(igs) {
|
||||
if !beIGs.Has(b.Group) {
|
||||
newBackends = append(newBackends, b)
|
||||
}
|
||||
}
|
||||
be.Backends = append(be.Backends, newBackends...)
|
||||
if err := b.cloud.UpdateBackendService(be); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Sync syncs backend services corresponding to ports in the given list.
|
||||
func (b *Backends) Sync(svcNodePorts []int64) error {
|
||||
glog.V(3).Infof("Sync: backends %v", svcNodePorts)
|
||||
|
||||
// create backends for new ports, perform an edge hop for existing ports
|
||||
for _, port := range svcNodePorts {
|
||||
if err := b.Add(port); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// GC garbage collects services corresponding to ports in the given list.
|
||||
func (b *Backends) GC(svcNodePorts []int64) error {
|
||||
knownPorts := sets.NewString()
|
||||
for _, port := range svcNodePorts {
|
||||
knownPorts.Insert(portKey(port))
|
||||
}
|
||||
pool := b.snapshotter.Snapshot()
|
||||
for port := range pool {
|
||||
p, err := strconv.Atoi(port)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
nodePort := int64(p)
|
||||
if knownPorts.Has(portKey(nodePort)) || b.ignoredPorts.Has(portKey(nodePort)) {
|
||||
continue
|
||||
}
|
||||
glog.V(3).Infof("GCing backend for port %v", p)
|
||||
if err := b.Delete(nodePort); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if len(svcNodePorts) == 0 {
|
||||
glog.Infof("Deleting instance group %v", b.namer.IGName())
|
||||
if err := b.nodePool.DeleteInstanceGroup(b.namer.IGName()); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Shutdown deletes all backends and the default backend.
|
||||
// This will fail if one of the backends is being used by another resource.
|
||||
func (b *Backends) Shutdown() error {
|
||||
if err := b.GC([]int64{}); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Status returns the status of the given backend by name.
|
||||
func (b *Backends) Status(name string) string {
|
||||
backend, err := b.cloud.GetBackendService(name)
|
||||
if err != nil {
|
||||
return "Unknown"
|
||||
}
|
||||
// TODO: Include port, ip in the status, since it's in the health info.
|
||||
hs, err := b.cloud.GetHealth(name, backend.Backends[0].Group)
|
||||
if err != nil || len(hs.HealthStatus) == 0 || hs.HealthStatus[0] == nil {
|
||||
return "Unknown"
|
||||
}
|
||||
// TODO: State transition are important, not just the latest.
|
||||
return hs.HealthStatus[0].HealthState
|
||||
}
|
232
controllers/gce/backends/backends_test.go
Normal file
@@ -0,0 +1,232 @@
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package backends
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
compute "google.golang.org/api/compute/v1"
|
||||
"k8s.io/contrib/ingress/controllers/gce/healthchecks"
|
||||
"k8s.io/contrib/ingress/controllers/gce/instances"
|
||||
"k8s.io/contrib/ingress/controllers/gce/storage"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
"k8s.io/kubernetes/pkg/util/sets"
|
||||
)
|
||||
|
||||
const defaultZone = "zone-a"
|
||||
|
||||
func newBackendPool(f BackendServices, fakeIGs instances.InstanceGroups, syncWithCloud bool) BackendPool {
|
||||
namer := &utils.Namer{}
|
||||
nodePool := instances.NewNodePool(fakeIGs)
|
||||
nodePool.Init(&instances.FakeZoneLister{[]string{defaultZone}})
|
||||
healthChecks := healthchecks.NewHealthChecker(healthchecks.NewFakeHealthChecks(), "/", namer)
|
||||
healthChecks.Init(&healthchecks.FakeHealthCheckGetter{nil})
|
||||
return NewBackendPool(
|
||||
f, healthChecks, nodePool, namer, []int64{}, syncWithCloud)
|
||||
}
|
||||
|
||||
func TestBackendPoolAdd(t *testing.T) {
|
||||
f := NewFakeBackendServices()
|
||||
fakeIGs := instances.NewFakeInstanceGroups(sets.NewString())
|
||||
pool := newBackendPool(f, fakeIGs, false)
|
||||
namer := utils.Namer{}
|
||||
|
||||
// Add a backend for a port, then re-add the same port and
|
||||
// make sure it corrects a broken link from the backend to
|
||||
// the instance group.
|
||||
nodePort := int64(8080)
|
||||
pool.Add(nodePort)
|
||||
beName := namer.BeName(nodePort)
|
||||
|
||||
// Check that the new backend has the right port
|
||||
be, err := f.GetBackendService(beName)
|
||||
if err != nil {
|
||||
t.Fatalf("Did not find expected backend %v", beName)
|
||||
}
|
||||
if be.Port != nodePort {
|
||||
t.Fatalf("Backend %v has wrong port %v, expected %v", be.Name, be.Port, nodePort)
|
||||
}
|
||||
// Check that the instance group has the new port
|
||||
var found bool
|
||||
for _, port := range fakeIGs.Ports {
|
||||
if port == nodePort {
|
||||
found = true
|
||||
}
|
||||
}
|
||||
if !found {
|
||||
t.Fatalf("Port %v not added to instance group", nodePort)
|
||||
}
|
||||
|
||||
// Mess up the link between backend service and instance group.
|
||||
// This simulates a user doing foolish things through the UI.
|
||||
f.calls = []int{}
|
||||
be, err = f.GetBackendService(beName)
|
||||
be.Backends = []*compute.Backend{
|
||||
{Group: "test edge hop"},
|
||||
}
|
||||
f.UpdateBackendService(be)
|
||||
|
||||
pool.Add(nodePort)
|
||||
for _, call := range f.calls {
|
||||
if call == utils.Create {
|
||||
t.Fatalf("Unexpected create for existing backend service")
|
||||
}
|
||||
}
|
||||
gotBackend, err := f.GetBackendService(beName)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to find a backend with name %v: %v", beName, err)
|
||||
}
|
||||
gotGroup, err := fakeIGs.GetInstanceGroup(namer.IGName(), defaultZone)
|
||||
if err != nil {
|
||||
t.Fatalf("Failed to find instance group %v", namer.IGName())
|
||||
}
|
||||
backendLinks := sets.NewString()
|
||||
for _, be := range gotBackend.Backends {
|
||||
backendLinks.Insert(be.Group)
|
||||
}
|
||||
if !backendLinks.Has(gotGroup.SelfLink) {
|
||||
t.Fatalf(
|
||||
"Broken instance group link, got: %+v expected: %v",
|
||||
backendLinks.List(),
|
||||
gotGroup.SelfLink)
|
||||
}
|
||||
}
|
||||
|
||||
func TestBackendPoolSync(t *testing.T) {
|
||||
// Call sync on a backend pool with a list of ports, make sure the pool
|
||||
// creates/deletes required ports.
|
||||
svcNodePorts := []int64{81, 82, 83}
|
||||
f := NewFakeBackendServices()
|
||||
fakeIGs := instances.NewFakeInstanceGroups(sets.NewString())
|
||||
pool := newBackendPool(f, fakeIGs, true)
|
||||
pool.Add(81)
|
||||
pool.Add(90)
|
||||
pool.Sync(svcNodePorts)
|
||||
pool.GC(svcNodePorts)
|
||||
if _, err := pool.Get(90); err == nil {
|
||||
t.Fatalf("Did not expect to find port 90")
|
||||
}
|
||||
for _, port := range svcNodePorts {
|
||||
if _, err := pool.Get(port); err != nil {
|
||||
t.Fatalf("Expected to find port %v", port)
|
||||
}
|
||||
}
|
||||
|
||||
svcNodePorts = []int64{81}
|
||||
deletedPorts := []int64{82, 83}
|
||||
pool.GC(svcNodePorts)
|
||||
for _, port := range deletedPorts {
|
||||
if _, err := pool.Get(port); err == nil {
|
||||
t.Fatalf("Pool contains %v after deletion", port)
|
||||
}
|
||||
}
|
||||
|
||||
// All these backends should be ignored because they don't belong to the cluster.
|
||||
// foo - non k8s managed backend
|
||||
// k8s-be-foo - foo is not a nodeport
|
||||
// k8s--bar--foo - too many cluster delimiters
|
||||
// k8s-be-30001--uid - another cluster tagged with uid
|
||||
unrelatedBackends := sets.NewString([]string{"foo", "k8s-be-foo", "k8s--bar--foo", "k8s-be-30001--uid"}...)
|
||||
for _, name := range unrelatedBackends.List() {
|
||||
f.CreateBackendService(&compute.BackendService{Name: name})
|
||||
}
|
||||
|
||||
namer := &utils.Namer{}
|
||||
// This backend should get deleted again since it is managed by this cluster.
|
||||
f.CreateBackendService(&compute.BackendService{Name: namer.BeName(deletedPorts[0])})
|
||||
|
||||
// TODO: Avoid casting.
|
||||
// Repopulate the pool with a cloud list, which now includes the 82 port
|
||||
// backend. This would happen if, say, an ingress backend is removed
|
||||
// while the controller is restarting.
|
||||
pool.(*Backends).snapshotter.(*storage.CloudListingPool).ReplenishPool()
|
||||
|
||||
pool.GC(svcNodePorts)
|
||||
|
||||
currBackends, _ := f.ListBackendServices()
|
||||
currSet := sets.NewString()
|
||||
for _, b := range currBackends.Items {
|
||||
currSet.Insert(b.Name)
|
||||
}
|
||||
// Port 81 still exists because it's an in-use service NodePort.
|
||||
knownBe := namer.BeName(81)
|
||||
if !currSet.Has(knownBe) {
|
||||
t.Fatalf("Expected %v to exist in backend pool", knownBe)
|
||||
}
|
||||
currSet.Delete(knownBe)
|
||||
if !currSet.Equal(unrelatedBackends) {
|
||||
t.Fatalf("Some unrelated backends were deleted. Expected %+v, got %+v", unrelatedBackends, currSet)
|
||||
}
|
||||
}
|
||||
|
||||
func TestBackendPoolShutdown(t *testing.T) {
|
||||
f := NewFakeBackendServices()
|
||||
fakeIGs := instances.NewFakeInstanceGroups(sets.NewString())
|
||||
pool := newBackendPool(f, fakeIGs, false)
|
||||
namer := utils.Namer{}
|
||||
|
||||
pool.Add(80)
|
||||
pool.Shutdown()
|
||||
if _, err := f.GetBackendService(namer.BeName(80)); err == nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestBackendInstanceGroupClobbering(t *testing.T) {
|
||||
f := NewFakeBackendServices()
|
||||
fakeIGs := instances.NewFakeInstanceGroups(sets.NewString())
|
||||
pool := newBackendPool(f, fakeIGs, false)
|
||||
namer := utils.Namer{}
|
||||
|
||||
// This will add the instance group k8s-ig to the instance pool
|
||||
pool.Add(80)
|
||||
|
||||
be, err := f.GetBackendService(namer.BeName(80))
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
// Simulate another controller updating the same backend service with
|
||||
// a different instance group
|
||||
newGroups := []*compute.Backend{
|
||||
{Group: "k8s-ig-bar"},
|
||||
{Group: "k8s-ig-foo"},
|
||||
}
|
||||
be.Backends = append(be.Backends, newGroups...)
|
||||
if err := f.UpdateBackendService(be); err != nil {
|
||||
t.Fatalf("Failed to update backend service %v", be.Name)
|
||||
}
|
||||
|
||||
// Make sure repeated adds don't clobber the inserted instance group
|
||||
pool.Add(80)
|
||||
be, err = f.GetBackendService(namer.BeName(80))
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
gotGroups := sets.NewString()
|
||||
for _, g := range be.Backends {
|
||||
gotGroups.Insert(g.Group)
|
||||
}
|
||||
|
||||
// seed expectedGroups with the first group native to this controller
|
||||
expectedGroups := sets.NewString("k8s-ig")
|
||||
for _, newGroup := range newGroups {
|
||||
expectedGroups.Insert(newGroup.Group)
|
||||
}
|
||||
if !expectedGroups.Equal(gotGroups) {
|
||||
t.Fatalf("Expected %v Got %v", expectedGroups, gotGroups)
|
||||
}
|
||||
}
|
101
controllers/gce/backends/fakes.go
Normal file
@@ -0,0 +1,101 @@
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package backends
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
compute "google.golang.org/api/compute/v1"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
)
|
||||
|
||||
// NewFakeBackendServices creates a new fake backend services manager.
|
||||
func NewFakeBackendServices() *FakeBackendServices {
|
||||
return &FakeBackendServices{
|
||||
backendServices: []*compute.BackendService{},
|
||||
}
|
||||
}
|
||||
|
||||
// FakeBackendServices fakes out GCE backend services.
|
||||
type FakeBackendServices struct {
|
||||
backendServices []*compute.BackendService
|
||||
calls []int
|
||||
}
|
||||
|
||||
// GetBackendService fakes getting a backend service from the cloud.
|
||||
func (f *FakeBackendServices) GetBackendService(name string) (*compute.BackendService, error) {
|
||||
f.calls = append(f.calls, utils.Get)
|
||||
for i := range f.backendServices {
|
||||
if name == f.backendServices[i].Name {
|
||||
return f.backendServices[i], nil
|
||||
}
|
||||
}
|
||||
return nil, fmt.Errorf("Backend service %v not found", name)
|
||||
}
|
||||
|
||||
// CreateBackendService fakes backend service creation.
|
||||
func (f *FakeBackendServices) CreateBackendService(be *compute.BackendService) error {
|
||||
f.calls = append(f.calls, utils.Create)
|
||||
be.SelfLink = be.Name
|
||||
f.backendServices = append(f.backendServices, be)
|
||||
return nil
|
||||
}
|
||||
|
||||
// DeleteBackendService fakes backend service deletion.
|
||||
func (f *FakeBackendServices) DeleteBackendService(name string) error {
|
||||
f.calls = append(f.calls, utils.Delete)
|
||||
newBackends := []*compute.BackendService{}
|
||||
for i := range f.backendServices {
|
||||
if name != f.backendServices[i].Name {
|
||||
newBackends = append(newBackends, f.backendServices[i])
|
||||
}
|
||||
}
|
||||
f.backendServices = newBackends
|
||||
return nil
|
||||
}
|
||||
|
||||
// ListBackendServices fakes backend service listing.
|
||||
func (f *FakeBackendServices) ListBackendServices() (*compute.BackendServiceList, error) {
|
||||
return &compute.BackendServiceList{Items: f.backendServices}, nil
|
||||
}
|
||||
|
||||
// UpdateBackendService fakes updating a backend service.
|
||||
func (f *FakeBackendServices) UpdateBackendService(be *compute.BackendService) error {
|
||||
f.calls = append(f.calls, utils.Update)
|
||||
for i := range f.backendServices {
|
||||
if f.backendServices[i].Name == be.Name {
|
||||
f.backendServices[i] = be
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// GetHealth fakes getting backend service health.
|
||||
func (f *FakeBackendServices) GetHealth(name, instanceGroupLink string) (*compute.BackendServiceGroupHealth, error) {
|
||||
be, err := f.GetBackendService(name)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
states := []*compute.HealthStatus{
|
||||
{
|
||||
HealthState: "HEALTHY",
|
||||
IpAddress: "",
|
||||
Port: be.Port,
|
||||
},
|
||||
}
|
||||
return &compute.BackendServiceGroupHealth{
|
||||
HealthStatus: states}, nil
|
||||
}
|
44
controllers/gce/backends/interfaces.go
Normal file
@@ -0,0 +1,44 @@
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package backends
|
||||
|
||||
import (
|
||||
compute "google.golang.org/api/compute/v1"
|
||||
)
|
||||
|
||||
// BackendPool is an interface to manage a pool of kubernetes nodePort services
|
||||
// as gce backendServices, and sync them through the BackendServices interface.
|
||||
type BackendPool interface {
|
||||
Add(port int64) error
|
||||
Get(port int64) (*compute.BackendService, error)
|
||||
Delete(port int64) error
|
||||
Sync(ports []int64) error
|
||||
GC(ports []int64) error
|
||||
Shutdown() error
|
||||
Status(name string) string
|
||||
List() ([]interface{}, error)
|
||||
}
|
||||
|
||||
// BackendServices is an interface for managing gce backend services.
|
||||
type BackendServices interface {
|
||||
GetBackendService(name string) (*compute.BackendService, error)
|
||||
UpdateBackendService(bg *compute.BackendService) error
|
||||
CreateBackendService(bg *compute.BackendService) error
|
||||
DeleteBackendService(name string) error
|
||||
ListBackendServices() (*compute.BackendServiceList, error)
|
||||
GetHealth(name, instanceGroupLink string) (*compute.BackendServiceGroupHealth, error)
|
||||
}
|
281
controllers/gce/controller/cluster_manager.go
Normal file
@@ -0,0 +1,281 @@
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package controller
|
||||
|
||||
import (
|
||||
"io"
|
||||
"net/http"
|
||||
"os"
|
||||
"time"
|
||||
|
||||
"k8s.io/contrib/ingress/controllers/gce/backends"
|
||||
"k8s.io/contrib/ingress/controllers/gce/firewalls"
|
||||
"k8s.io/contrib/ingress/controllers/gce/healthchecks"
|
||||
"k8s.io/contrib/ingress/controllers/gce/instances"
|
||||
"k8s.io/contrib/ingress/controllers/gce/loadbalancers"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
"k8s.io/kubernetes/pkg/cloudprovider"
|
||||
gce "k8s.io/kubernetes/pkg/cloudprovider/providers/gce"
|
||||
|
||||
"github.com/golang/glog"
|
||||
)
|
||||
|
||||
const (
|
||||
defaultPort = 80
|
||||
defaultHealthCheckPath = "/"
|
||||
|
||||
// A backend is created per nodePort, tagged with the nodeport.
|
||||
// This allows sharing of backends across loadbalancers.
|
||||
backendPrefix = "k8s-be"
|
||||
|
||||
// A single target proxy/urlmap/forwarding rule is created per loadbalancer.
|
||||
// Tagged with the namespace/name of the Ingress.
|
||||
targetProxyPrefix = "k8s-tp"
|
||||
forwardingRulePrefix = "k8s-fw"
|
||||
urlMapPrefix = "k8s-um"
|
||||
|
||||
// Used in the test RunServer method to denote a delete request.
|
||||
deleteType = "del"
|
||||
|
||||
// port 0 is used as a signal for port not found/no such port etc.
|
||||
invalidPort = 0
|
||||
|
||||
// Names longer than this are truncated, because of GCE restrictions.
|
||||
nameLenLimit = 62
|
||||
|
||||
// Sleep interval to retry cloud client creation.
|
||||
cloudClientRetryInterval = 10 * time.Second
|
||||
)
|
||||
|
||||
// ClusterManager manages cluster resource pools.
|
||||
type ClusterManager struct {
|
||||
ClusterNamer *utils.Namer
|
||||
defaultBackendNodePort int64
|
||||
instancePool instances.NodePool
|
||||
backendPool backends.BackendPool
|
||||
l7Pool loadbalancers.LoadBalancerPool
|
||||
firewallPool firewalls.SingleFirewallPool
|
||||
|
||||
// TODO: Refactor so we simply init a health check pool.
|
||||
// Currently health checks are tied to backends because each backend needs
|
||||
// the link of the associated health, but both the backend pool and
|
||||
// loadbalancer pool manage backends, because the lifetime of the default
|
||||
// backend is tied to the last/first loadbalancer not the life of the
|
||||
// nodeport service or Ingress.
|
||||
healthCheckers []healthchecks.HealthChecker
|
||||
}
|
||||
|
||||
// Init initializes the cluster manager.
|
||||
func (c *ClusterManager) Init(tr *GCETranslator) {
|
||||
c.instancePool.Init(tr)
|
||||
for _, h := range c.healthCheckers {
|
||||
h.Init(tr)
|
||||
}
|
||||
// TODO: Initialize other members as needed.
|
||||
}
|
||||
|
||||
// IsHealthy returns an error if the cluster manager is unhealthy.
|
||||
func (c *ClusterManager) IsHealthy() (err error) {
|
||||
// TODO: Expand on this, for now we just want to detect when the GCE client
|
||||
// is broken.
|
||||
_, err = c.backendPool.List()
|
||||
|
||||
// If this container is scheduled on a node without compute/rw it is
|
||||
// effectively useless, but it is healthy. Reporting it as unhealthy
|
||||
// will lead to container crashlooping.
|
||||
if utils.IsHTTPErrorCode(err, http.StatusForbidden) {
|
||||
glog.Infof("Reporting cluster as healthy, but unable to list backends: %v", err)
|
||||
return nil
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (c *ClusterManager) shutdown() error {
|
||||
if err := c.l7Pool.Shutdown(); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := c.firewallPool.Shutdown(); err != nil {
|
||||
return err
|
||||
}
|
||||
// The backend pool will also delete instance groups.
|
||||
return c.backendPool.Shutdown()
|
||||
}
|
||||
|
||||
// Checkpoint performs a checkpoint with the cloud.
|
||||
// - lbNames are the names of L7 loadbalancers we wish to exist. If they already
|
||||
// exist, they should not have any broken links between say, a UrlMap and
|
||||
// TargetHttpProxy.
|
||||
// - nodeNames are the names of nodes we wish to add to all loadbalancer
|
||||
// instance groups.
|
||||
// - nodePorts are the ports for which we require BackendServices. Each of
|
||||
// these ports must also be opened on the corresponding Instance Group.
|
||||
// If in performing the checkpoint the cluster manager runs out of quota, a
|
||||
// googleapi 403 is returned.
|
||||
func (c *ClusterManager) Checkpoint(lbs []*loadbalancers.L7RuntimeInfo, nodeNames []string, nodePorts []int64) error {
|
||||
// Multiple ingress paths can point to the same service (and hence nodePort)
|
||||
// but each nodePort can only have one set of cloud resources behind it. So
|
||||
// don't waste time double validating GCE BackendServices.
|
||||
portMap := map[int64]struct{}{}
|
||||
for _, p := range nodePorts {
|
||||
portMap[p] = struct{}{}
|
||||
}
|
||||
nodePorts = []int64{}
|
||||
for p := range portMap {
|
||||
nodePorts = append(nodePorts, p)
|
||||
}
|
||||
if err := c.backendPool.Sync(nodePorts); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := c.instancePool.Sync(nodeNames); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := c.l7Pool.Sync(lbs); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// TODO: Manage default backend and its firewall rule in a centralized way.
|
||||
// DefaultBackend is managed in l7 pool, which doesn't understand instances,
|
||||
// which the firewall rule requires.
|
||||
fwNodePorts := nodePorts
|
||||
if len(fwNodePorts) != 0 {
|
||||
// If there are no Ingresses, we shouldn't be allowing traffic to the
|
||||
// default backend. Equally importantly if the cluster gets torn down
|
||||
// we shouldn't leak the firewall rule.
|
||||
fwNodePorts = append(fwNodePorts, c.defaultBackendNodePort)
|
||||
}
|
||||
if err := c.firewallPool.Sync(fwNodePorts, nodeNames); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// GC garbage collects unused resources.
|
||||
// - lbNames are the names of L7 loadbalancers we wish to exist. Those not in
|
||||
// this list are removed from the cloud.
|
||||
// - nodePorts are the ports for which we want BackendServices. BackendServices
|
||||
// for ports not in this list are deleted.
|
||||
// This method ignores googleapi 404 errors (StatusNotFound).
|
||||
func (c *ClusterManager) GC(lbNames []string, nodePorts []int64) error {
|
||||
|
||||
// On GC:
|
||||
// * Loadbalancers need to get deleted before backends.
|
||||
// * Backends are refcounted in a shared pool.
|
||||
// * We always want to GC backends even if there was an error in GCing
|
||||
// loadbalancers, because the next Sync could rely on the GC for quota.
|
||||
// * There are at least 2 cases for backend GC:
|
||||
// 1. The loadbalancer has been deleted.
|
||||
// 2. An update to the url map drops the refcount of a backend. This can
|
||||
// happen when an Ingress is updated, if we don't GC after the update
|
||||
// we'll leak the backend.
|
||||
|
||||
lbErr := c.l7Pool.GC(lbNames)
|
||||
beErr := c.backendPool.GC(nodePorts)
|
||||
if lbErr != nil {
|
||||
return lbErr
|
||||
}
|
||||
if beErr != nil {
|
||||
return beErr
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func getGCEClient(config io.Reader) *gce.GCECloud {
|
||||
// Creating the cloud interface involves resolving the metadata server to get
|
||||
// an oauth token. If this fails, the token provider assumes it's not on GCE.
|
||||
// No errors are thrown. So we need to keep retrying till it works because
|
||||
// we know we're on GCE.
|
||||
for {
|
||||
cloudInterface, err := cloudprovider.GetCloudProvider("gce", config)
|
||||
if err == nil {
|
||||
cloud := cloudInterface.(*gce.GCECloud)
|
||||
|
||||
// If this controller is scheduled on a node without compute/rw
|
||||
// it won't be allowed to list backends. We can assume that the
|
||||
// user has no need for Ingress in this case. If they grant
|
||||
// permissions to the node they will have to restart the controller
|
||||
// manually to re-create the client.
|
||||
if _, err = cloud.ListBackendServices(); err == nil || utils.IsHTTPErrorCode(err, http.StatusForbidden) {
|
||||
return cloud
|
||||
}
|
||||
glog.Warningf("Failed to list backend services, retrying: %v", err)
|
||||
} else {
|
||||
glog.Warningf("Failed to retrieve cloud interface, retrying: %v", err)
|
||||
}
|
||||
time.Sleep(cloudClientRetryInterval)
|
||||
}
|
||||
}
|
||||
|
||||
// NewClusterManager creates a cluster manager for shared resources.
|
||||
// - namer: is the namer used to tag cluster wide shared resources.
|
||||
// - defaultBackendNodePort: is the node port of glbc's default backend. This is
|
||||
// the kubernetes Service that serves the 404 page if no urls match.
|
||||
// - defaultHealthCheckPath: is the default path used for L7 health checks, e.g. "/healthz".
|
||||
func NewClusterManager(
|
||||
configFilePath string,
|
||||
namer *utils.Namer,
|
||||
defaultBackendNodePort int64,
|
||||
defaultHealthCheckPath string) (*ClusterManager, error) {
|
||||
|
||||
// TODO: Make this more resilient. Currently we create the cloud client
|
||||
// and pass it through to all the pools. This makes unittesting easier.
|
||||
// However if the cloud client suddenly fails, we should try to re-create it
|
||||
// and continue.
|
||||
var cloud *gce.GCECloud
|
||||
if configFilePath != "" {
|
||||
glog.Infof("Reading config from path %v", configFilePath)
|
||||
config, err := os.Open(configFilePath)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer config.Close()
|
||||
cloud = getGCEClient(config)
|
||||
glog.Infof("Successfully loaded cloudprovider using config %q", configFilePath)
|
||||
} else {
|
||||
// While you might be tempted to refactor so we simply assign nil to the
|
||||
// config and only invoke getGCEClient once, that will not do the right
|
||||
// thing because a nil check against an interface isn't true in golang.
|
||||
cloud = getGCEClient(nil)
|
||||
glog.Infof("Created GCE client without a config file")
|
||||
}
|
||||
|
||||
// Names are fundamental to the cluster, the uid allocator makes sure names don't collide.
|
||||
cluster := ClusterManager{ClusterNamer: namer}
|
||||
|
||||
// NodePool stores GCE vms that are in this Kubernetes cluster.
|
||||
cluster.instancePool = instances.NewNodePool(cloud)
|
||||
|
||||
// BackendPool creates GCE BackendServices and associated health checks.
|
||||
healthChecker := healthchecks.NewHealthChecker(cloud, defaultHealthCheckPath, cluster.ClusterNamer)
|
||||
// Loadbalancer pool manages the default backend and its health check.
|
||||
defaultBackendHealthChecker := healthchecks.NewHealthChecker(cloud, "/healthz", cluster.ClusterNamer)
|
||||
|
||||
cluster.healthCheckers = []healthchecks.HealthChecker{healthChecker, defaultBackendHealthChecker}
|
||||
|
||||
// TODO: This needs to change to a consolidated management of the default backend.
|
||||
cluster.backendPool = backends.NewBackendPool(
|
||||
cloud, healthChecker, cluster.instancePool, cluster.ClusterNamer, []int64{defaultBackendNodePort}, true)
|
||||
defaultBackendPool := backends.NewBackendPool(
|
||||
cloud, defaultBackendHealthChecker, cluster.instancePool, cluster.ClusterNamer, []int64{}, false)
|
||||
cluster.defaultBackendNodePort = defaultBackendNodePort
|
||||
|
||||
// L7 pool creates targetHTTPProxy, ForwardingRules, UrlMaps, StaticIPs.
|
||||
cluster.l7Pool = loadbalancers.NewLoadBalancerPool(
|
||||
cloud, defaultBackendPool, defaultBackendNodePort, cluster.ClusterNamer)
|
||||
cluster.firewallPool = firewalls.NewFirewallPool(cloud, cluster.ClusterNamer)
|
||||
return &cluster, nil
|
||||
}
|
475
controllers/gce/controller/controller.go
Normal file
@@ -0,0 +1,475 @@
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package controller
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
"reflect"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"k8s.io/contrib/ingress/controllers/gce/loadbalancers"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
"k8s.io/kubernetes/pkg/api"
|
||||
"k8s.io/kubernetes/pkg/apis/extensions"
|
||||
"k8s.io/kubernetes/pkg/client/cache"
|
||||
"k8s.io/kubernetes/pkg/client/record"
|
||||
client "k8s.io/kubernetes/pkg/client/unversioned"
|
||||
"k8s.io/kubernetes/pkg/controller/framework"
|
||||
"k8s.io/kubernetes/pkg/fields"
|
||||
"k8s.io/kubernetes/pkg/runtime"
|
||||
"k8s.io/kubernetes/pkg/watch"
|
||||
|
||||
"github.com/golang/glog"
|
||||
)
|
||||
|
||||
var (
|
||||
keyFunc = framework.DeletionHandlingMetaNamespaceKeyFunc
|
||||
|
||||
// DefaultClusterUID is the uid to use for clusters resources created by an
|
||||
// L7 controller created without specifying the --cluster-uid flag.
|
||||
DefaultClusterUID = ""
|
||||
|
||||
// Frequency to poll on local stores to sync.
|
||||
storeSyncPollPeriod = 5 * time.Second
|
||||
)
|
||||
|
||||
// LoadBalancerController watches the kubernetes api and adds/removes services
|
||||
// from the loadbalancer, via loadBalancerConfig.
|
||||
type LoadBalancerController struct {
|
||||
client *client.Client
|
||||
ingController *framework.Controller
|
||||
nodeController *framework.Controller
|
||||
svcController *framework.Controller
|
||||
podController *framework.Controller
|
||||
ingLister StoreToIngressLister
|
||||
nodeLister cache.StoreToNodeLister
|
||||
svcLister cache.StoreToServiceLister
|
||||
// Health checks are the readiness probes of containers on pods.
|
||||
podLister cache.StoreToPodLister
|
||||
// TODO: Watch secrets
|
||||
CloudClusterManager *ClusterManager
|
||||
recorder record.EventRecorder
|
||||
nodeQueue *taskQueue
|
||||
ingQueue *taskQueue
|
||||
tr *GCETranslator
|
||||
stopCh chan struct{}
|
||||
// stopLock is used to enforce only a single call to Stop is active.
|
||||
// Needed because we allow stopping through an http endpoint and
|
||||
// allowing concurrent stoppers leads to stack traces.
|
||||
stopLock sync.Mutex
|
||||
shutdown bool
|
||||
// tlsLoader loads secrets from the Kubernetes apiserver for Ingresses.
|
||||
tlsLoader tlsLoader
|
||||
// hasSynced returns true if all associated sub-controllers have synced.
|
||||
// Abstracted into a func for testing.
|
||||
hasSynced func() bool
|
||||
}
|
||||
|
||||
// NewLoadBalancerController creates a controller for gce loadbalancers.
|
||||
// - kubeClient: A kubernetes REST client.
|
||||
// - clusterManager: A ClusterManager capable of creating all cloud resources
|
||||
// required for L7 loadbalancing.
|
||||
// - resyncPeriod: Watchers relist from the Kubernetes API server this often.
|
||||
func NewLoadBalancerController(kubeClient *client.Client, clusterManager *ClusterManager, resyncPeriod time.Duration, namespace string) (*LoadBalancerController, error) {
|
||||
eventBroadcaster := record.NewBroadcaster()
|
||||
eventBroadcaster.StartLogging(glog.Infof)
|
||||
eventBroadcaster.StartRecordingToSink(kubeClient.Events(""))
|
||||
|
||||
lbc := LoadBalancerController{
|
||||
client: kubeClient,
|
||||
CloudClusterManager: clusterManager,
|
||||
stopCh: make(chan struct{}),
|
||||
recorder: eventBroadcaster.NewRecorder(
|
||||
api.EventSource{Component: "loadbalancer-controller"}),
|
||||
}
|
||||
lbc.nodeQueue = NewTaskQueue(lbc.syncNodes)
|
||||
lbc.ingQueue = NewTaskQueue(lbc.sync)
|
||||
lbc.hasSynced = lbc.storesSynced
|
||||
|
||||
// Ingress watch handlers
|
||||
pathHandlers := framework.ResourceEventHandlerFuncs{
|
||||
AddFunc: func(obj interface{}) {
|
||||
addIng := obj.(*extensions.Ingress)
|
||||
if !isGCEIngress(addIng) {
|
||||
glog.Infof("Ignoring add for ingress %v based on annotation %v", addIng.Name, ingressClassKey)
|
||||
return
|
||||
}
|
||||
lbc.recorder.Eventf(addIng, api.EventTypeNormal, "ADD", fmt.Sprintf("%s/%s", addIng.Namespace, addIng.Name))
|
||||
lbc.ingQueue.enqueue(obj)
|
||||
},
|
||||
DeleteFunc: func(obj interface{}) {
|
||||
delIng := obj.(*extensions.Ingress)
|
||||
if !isGCEIngress(delIng) {
|
||||
glog.Infof("Ignoring delete for ingress %v based on annotation %v", delIng.Name, ingressClassKey)
|
||||
return
|
||||
}
|
||||
glog.Infof("Delete notification received for Ingress %v/%v", delIng.Namespace, delIng.Name)
|
||||
lbc.ingQueue.enqueue(obj)
|
||||
},
|
||||
UpdateFunc: func(old, cur interface{}) {
|
||||
curIng := cur.(*extensions.Ingress)
|
||||
if !isGCEIngress(curIng) {
|
||||
return
|
||||
}
|
||||
if !reflect.DeepEqual(old, cur) {
|
||||
glog.V(3).Infof("Ingress %v changed, syncing", curIng.Name)
|
||||
}
|
||||
lbc.ingQueue.enqueue(cur)
|
||||
},
|
||||
}
|
||||
lbc.ingLister.Store, lbc.ingController = framework.NewInformer(
|
||||
&cache.ListWatch{
|
||||
ListFunc: ingressListFunc(lbc.client, namespace),
|
||||
WatchFunc: ingressWatchFunc(lbc.client, namespace),
|
||||
},
|
||||
&extensions.Ingress{}, resyncPeriod, pathHandlers)
|
||||
|
||||
// Service watch handlers
|
||||
svcHandlers := framework.ResourceEventHandlerFuncs{
|
||||
AddFunc: lbc.enqueueIngressForService,
|
||||
UpdateFunc: func(old, cur interface{}) {
|
||||
if !reflect.DeepEqual(old, cur) {
|
||||
lbc.enqueueIngressForService(cur)
|
||||
}
|
||||
},
|
||||
// Ingress deletes matter, service deletes don't.
|
||||
}
|
||||
|
||||
lbc.svcLister.Store, lbc.svcController = framework.NewInformer(
|
||||
cache.NewListWatchFromClient(
|
||||
lbc.client, "services", namespace, fields.Everything()),
|
||||
&api.Service{}, resyncPeriod, svcHandlers)
|
||||
|
||||
lbc.podLister.Indexer, lbc.podController = framework.NewIndexerInformer(
|
||||
cache.NewListWatchFromClient(lbc.client, "pods", namespace, fields.Everything()),
|
||||
&api.Pod{},
|
||||
resyncPeriod,
|
||||
framework.ResourceEventHandlerFuncs{},
|
||||
cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc},
|
||||
)
|
||||
|
||||
nodeHandlers := framework.ResourceEventHandlerFuncs{
|
||||
AddFunc: lbc.nodeQueue.enqueue,
|
||||
DeleteFunc: lbc.nodeQueue.enqueue,
|
||||
// Nodes are updated every 10s and we don't care, so no update handler.
|
||||
}
|
||||
// Node watch handlers
|
||||
lbc.nodeLister.Store, lbc.nodeController = framework.NewInformer(
|
||||
&cache.ListWatch{
|
||||
ListFunc: func(opts api.ListOptions) (runtime.Object, error) {
|
||||
return lbc.client.Get().
|
||||
Resource("nodes").
|
||||
FieldsSelectorParam(fields.Everything()).
|
||||
Do().
|
||||
Get()
|
||||
},
|
||||
WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
|
||||
return lbc.client.Get().
|
||||
Prefix("watch").
|
||||
Resource("nodes").
|
||||
FieldsSelectorParam(fields.Everything()).
|
||||
Param("resourceVersion", options.ResourceVersion).Watch()
|
||||
},
|
||||
},
|
||||
&api.Node{}, 0, nodeHandlers)
|
||||
|
||||
lbc.tr = &GCETranslator{&lbc}
|
||||
lbc.tlsLoader = &apiServerTLSLoader{client: lbc.client}
|
||||
glog.V(3).Infof("Created new loadbalancer controller")
|
||||
|
||||
return &lbc, nil
|
||||
}
|
||||
|
||||
func ingressListFunc(c *client.Client, ns string) func(api.ListOptions) (runtime.Object, error) {
|
||||
return func(opts api.ListOptions) (runtime.Object, error) {
|
||||
return c.Extensions().Ingress(ns).List(opts)
|
||||
}
|
||||
}
|
||||
|
||||
func ingressWatchFunc(c *client.Client, ns string) func(options api.ListOptions) (watch.Interface, error) {
|
||||
return func(options api.ListOptions) (watch.Interface, error) {
|
||||
return c.Extensions().Ingress(ns).Watch(options)
|
||||
}
|
||||
}
|
||||
|
||||
// enqueueIngressForService enqueues all the Ingress' for a Service.
|
||||
func (lbc *LoadBalancerController) enqueueIngressForService(obj interface{}) {
|
||||
svc := obj.(*api.Service)
|
||||
ings, err := lbc.ingLister.GetServiceIngress(svc)
|
||||
if err != nil {
|
||||
glog.V(5).Infof("ignoring service %v: %v", svc.Name, err)
|
||||
return
|
||||
}
|
||||
for _, ing := range ings {
|
||||
if !isGCEIngress(&ing) {
|
||||
continue
|
||||
}
|
||||
lbc.ingQueue.enqueue(&ing)
|
||||
}
|
||||
}
|
||||
|
||||
// Run starts the loadbalancer controller.
|
||||
func (lbc *LoadBalancerController) Run() {
|
||||
glog.Infof("Starting loadbalancer controller")
|
||||
go lbc.ingController.Run(lbc.stopCh)
|
||||
go lbc.nodeController.Run(lbc.stopCh)
|
||||
go lbc.svcController.Run(lbc.stopCh)
|
||||
go lbc.podController.Run(lbc.stopCh)
|
||||
go lbc.ingQueue.run(time.Second, lbc.stopCh)
|
||||
go lbc.nodeQueue.run(time.Second, lbc.stopCh)
|
||||
<-lbc.stopCh
|
||||
glog.Infof("Shutting down Loadbalancer Controller")
|
||||
}
|
||||
|
||||
// Stop stops the loadbalancer controller. It also deletes cluster resources
|
||||
// if deleteAll is true.
|
||||
func (lbc *LoadBalancerController) Stop(deleteAll bool) error {
|
||||
// Stop is invoked from the http endpoint.
|
||||
lbc.stopLock.Lock()
|
||||
defer lbc.stopLock.Unlock()
|
||||
|
||||
// Only try draining the workqueue if we haven't already.
|
||||
if !lbc.shutdown {
|
||||
close(lbc.stopCh)
|
||||
glog.Infof("Shutting down controller queues.")
|
||||
lbc.ingQueue.shutdown()
|
||||
lbc.nodeQueue.shutdown()
|
||||
lbc.shutdown = true
|
||||
}
|
||||
|
||||
// Deleting shared cluster resources is idempotent.
|
||||
if deleteAll {
|
||||
glog.Infof("Shutting down cluster manager.")
|
||||
return lbc.CloudClusterManager.shutdown()
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// storesSynced returns true if all the sub-controllers have finished their
|
||||
// first sync with apiserver.
|
||||
func (lbc *LoadBalancerController) storesSynced() bool {
|
||||
return (
|
||||
// wait for pods to sync so we don't allocate a default health check when
|
||||
// an endpoint has a readiness probe.
|
||||
lbc.podController.HasSynced() &&
|
||||
// wait for services so we don't thrash on backend creation.
|
||||
lbc.svcController.HasSynced() &&
|
||||
// wait for nodes so we don't disconnect a backend from an instance
|
||||
// group just because we don't realize there are nodes in that zone.
|
||||
lbc.nodeController.HasSynced() &&
|
||||
// Wait for ingresses as a safety measure. We don't really need this.
|
||||
lbc.ingController.HasSynced())
|
||||
}
|
||||
|
||||
// sync manages Ingress create/updates/deletes.
|
||||
func (lbc *LoadBalancerController) sync(key string) (err error) {
|
||||
if !lbc.hasSynced() {
|
||||
time.Sleep(storeSyncPollPeriod)
|
||||
return fmt.Errorf("Waiting for stores to sync")
|
||||
}
|
||||
glog.V(3).Infof("Syncing %v", key)
|
||||
|
||||
paths, err := lbc.ingLister.List()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
nodePorts := lbc.tr.toNodePorts(&paths)
|
||||
lbNames := lbc.ingLister.Store.ListKeys()
|
||||
lbs, err := lbc.ListRuntimeInfo()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
nodeNames, err := lbc.getReadyNodeNames()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
obj, ingExists, err := lbc.ingLister.Store.GetByKey(key)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
// This performs a 2 phase checkpoint with the cloud:
|
||||
// * Phase 1 creates/verifies resources are as expected. At the end of a
|
||||
// successful checkpoint we know that existing L7s are WAI, and the L7
|
||||
// for the Ingress associated with "key" is ready for a UrlMap update.
|
||||
// If this encounters an error, eg for quota reasons, we want to invoke
|
||||
// Phase 2 right away and retry checkpointing.
|
||||
// * Phase 2 performs GC by refcounting shared resources. This needs to
|
||||
// happen periodically whether or not stage 1 fails. At the end of a
|
||||
// successful GC we know that there are no dangling cloud resources that
|
||||
// don't have an associated Kubernetes Ingress/Service/Endpoint.
|
||||
|
||||
defer func() {
|
||||
if deferErr := lbc.CloudClusterManager.GC(lbNames, nodePorts); deferErr != nil {
|
||||
err = fmt.Errorf("Error during sync %v, error during GC %v", err, deferErr)
|
||||
}
|
||||
glog.V(3).Infof("Finished syncing %v", key)
|
||||
}()
|
||||
|
||||
// Record any errors during sync and throw a single error at the end. This
|
||||
// allows us to free up associated cloud resources ASAP.
|
||||
var syncError error
|
||||
if err := lbc.CloudClusterManager.Checkpoint(lbs, nodeNames, nodePorts); err != nil {
|
||||
// TODO: Implement proper backoff for the queue.
|
||||
eventMsg := "GCE"
|
||||
if utils.IsHTTPErrorCode(err, http.StatusForbidden) {
|
||||
eventMsg += " :Quota"
|
||||
}
|
||||
if ingExists {
|
||||
lbc.recorder.Eventf(obj.(*extensions.Ingress), api.EventTypeWarning, eventMsg, err.Error())
|
||||
} else {
|
||||
err = fmt.Errorf("%v Error: %v", eventMsg, err)
|
||||
}
|
||||
syncError = err
|
||||
}
|
||||
|
||||
if !ingExists {
|
||||
return syncError
|
||||
}
|
||||
// Update the UrlMap of the single loadbalancer that came through the watch.
|
||||
l7, err := lbc.CloudClusterManager.l7Pool.Get(key)
|
||||
if err != nil {
|
||||
return fmt.Errorf("%v, unable to get loadbalancer: %v", syncError, err)
|
||||
}
|
||||
|
||||
ing := *obj.(*extensions.Ingress)
|
||||
if urlMap, err := lbc.tr.toUrlMap(&ing); err != nil {
|
||||
syncError = fmt.Errorf("%v, convert to url map error %v", syncError, err)
|
||||
} else if err := l7.UpdateUrlMap(urlMap); err != nil {
|
||||
lbc.recorder.Eventf(&ing, api.EventTypeWarning, "UrlMap", err.Error())
|
||||
syncError = fmt.Errorf("%v, update url map error: %v", syncError, err)
|
||||
} else if err := lbc.updateIngressStatus(l7, ing); err != nil {
|
||||
lbc.recorder.Eventf(&ing, api.EventTypeWarning, "Status", err.Error())
|
||||
syncError = fmt.Errorf("%v, update ingress error: %v", syncError, err)
|
||||
}
|
||||
return syncError
|
||||
}
|
||||
|
||||
// updateIngressStatus updates the IP and annotations of a loadbalancer.
|
||||
// The annotations are parsed by kubectl describe.
|
||||
func (lbc *LoadBalancerController) updateIngressStatus(l7 *loadbalancers.L7, ing extensions.Ingress) error {
|
||||
ingClient := lbc.client.Extensions().Ingress(ing.Namespace)
|
||||
|
||||
// Update IP through update/status endpoint
|
||||
ip := l7.GetIP()
|
||||
currIng, err := ingClient.Get(ing.Name)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
currIng.Status = extensions.IngressStatus{
|
||||
LoadBalancer: api.LoadBalancerStatus{
|
||||
Ingress: []api.LoadBalancerIngress{
|
||||
{IP: ip},
|
||||
},
|
||||
},
|
||||
}
|
||||
if ip != "" {
|
||||
lbIPs := ing.Status.LoadBalancer.Ingress
|
||||
if len(lbIPs) == 0 || lbIPs[0].IP != ip {
|
||||
// TODO: If this update fails it's probably resource version related,
|
||||
// which means it's advantageous to retry right away vs requeuing.
|
||||
glog.Infof("Updating loadbalancer %v/%v with IP %v", ing.Namespace, ing.Name, ip)
|
||||
if _, err := ingClient.UpdateStatus(currIng); err != nil {
|
||||
return err
|
||||
}
|
||||
lbc.recorder.Eventf(currIng, api.EventTypeNormal, "CREATE", "ip: %v", ip)
|
||||
}
|
||||
}
|
||||
// Update annotations through /update endpoint
|
||||
currIng, err = ingClient.Get(ing.Name)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
currIng.Annotations = loadbalancers.GetLBAnnotations(l7, currIng.Annotations, lbc.CloudClusterManager.backendPool)
|
||||
if !reflect.DeepEqual(ing.Annotations, currIng.Annotations) {
|
||||
glog.V(3).Infof("Updating annotations of %v/%v", ing.Namespace, ing.Name)
|
||||
if _, err := ingClient.Update(currIng); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// ListRuntimeInfo lists L7RuntimeInfo as understood by the loadbalancer module.
|
||||
func (lbc *LoadBalancerController) ListRuntimeInfo() (lbs []*loadbalancers.L7RuntimeInfo, err error) {
|
||||
ingList, err := lbc.ingLister.List()
|
||||
if err != nil {
|
||||
return lbs, err
|
||||
}
|
||||
for _, ing := range ingList.Items {
|
||||
k, err := keyFunc(&ing)
|
||||
if err != nil {
|
||||
glog.Warningf("Cannot get key for Ingress %v/%v: %v", ing.Namespace, ing.Name, err)
|
||||
continue
|
||||
}
|
||||
tls, err := lbc.tlsLoader.load(&ing)
|
||||
if err != nil {
|
||||
glog.Warningf("Cannot get certs for Ingress %v/%v: %v", ing.Namespace, ing.Name, err)
|
||||
}
|
||||
annotations := ingAnnotations(ing.ObjectMeta.Annotations)
|
||||
lbs = append(lbs, &loadbalancers.L7RuntimeInfo{
|
||||
Name: k,
|
||||
TLS: tls,
|
||||
AllowHTTP: annotations.allowHTTP(),
|
||||
StaticIPName: annotations.staticIPName(),
|
||||
})
|
||||
}
|
||||
return lbs, nil
|
||||
}
|
||||
|
||||
// syncNodes manages the syncing of kubernetes nodes to gce instance groups.
|
||||
// The instancegroups are referenced by loadbalancer backends.
|
||||
func (lbc *LoadBalancerController) syncNodes(key string) error {
|
||||
nodeNames, err := lbc.getReadyNodeNames()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if err := lbc.CloudClusterManager.instancePool.Sync(nodeNames); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func getNodeReadyPredicate() cache.NodeConditionPredicate {
|
||||
return func(node *api.Node) bool {
|
||||
for ix := range node.Status.Conditions {
|
||||
condition := &node.Status.Conditions[ix]
|
||||
if condition.Type == api.NodeReady {
|
||||
return condition.Status == api.ConditionTrue
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
}
|
||||
|
||||
// getReadyNodeNames returns names of schedulable, ready nodes from the node lister.
|
||||
func (lbc *LoadBalancerController) getReadyNodeNames() ([]string, error) {
|
||||
nodeNames := []string{}
|
||||
nodes, err := lbc.nodeLister.NodeCondition(getNodeReadyPredicate()).List()
|
||||
if err != nil {
|
||||
return nodeNames, err
|
||||
}
|
||||
for _, n := range nodes {
|
||||
if n.Spec.Unschedulable {
|
||||
continue
|
||||
}
|
||||
nodeNames = append(nodeNames, n.Name)
|
||||
}
|
||||
return nodeNames, nil
|
||||
}
|
447
controllers/gce/controller/controller_test.go
Normal file
@@ -0,0 +1,447 @@
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package controller
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"math/rand"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
compute "google.golang.org/api/compute/v1"
|
||||
"k8s.io/contrib/ingress/controllers/gce/firewalls"
|
||||
"k8s.io/contrib/ingress/controllers/gce/loadbalancers"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
"k8s.io/kubernetes/pkg/api"
|
||||
"k8s.io/kubernetes/pkg/api/testapi"
|
||||
"k8s.io/kubernetes/pkg/apis/extensions"
|
||||
"k8s.io/kubernetes/pkg/client/restclient"
|
||||
client "k8s.io/kubernetes/pkg/client/unversioned"
|
||||
"k8s.io/kubernetes/pkg/util/intstr"
|
||||
"k8s.io/kubernetes/pkg/util/sets"
|
||||
"k8s.io/kubernetes/pkg/util/uuid"
|
||||
)
|
||||
|
||||
const testClusterName = "testcluster"
|
||||
|
||||
var (
|
||||
testPathMap = map[string]string{"/foo": defaultBackendName(testClusterName)}
|
||||
testIPManager = testIP{}
|
||||
)
|
||||
|
||||
// TODO: Use utils.Namer instead of this function.
|
||||
func defaultBackendName(clusterName string) string {
|
||||
return fmt.Sprintf("%v-%v", backendPrefix, clusterName)
|
||||
}
|
||||
|
||||
// newLoadBalancerController creates a loadbalancer controller.
|
||||
func newLoadBalancerController(t *testing.T, cm *fakeClusterManager, masterUrl string) *LoadBalancerController {
|
||||
client := client.NewOrDie(&restclient.Config{Host: masterUrl, ContentConfig: restclient.ContentConfig{GroupVersion: testapi.Default.GroupVersion()}})
|
||||
lb, err := NewLoadBalancerController(client, cm.ClusterManager, 1*time.Second, api.NamespaceAll)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
lb.hasSynced = func() bool { return true }
|
||||
return lb
|
||||
}
|
||||
|
||||
// toHTTPIngressPaths converts the given pathMap to a list of HTTPIngressPaths.
|
||||
func toHTTPIngressPaths(pathMap map[string]string) []extensions.HTTPIngressPath {
|
||||
httpPaths := []extensions.HTTPIngressPath{}
|
||||
for path, backend := range pathMap {
|
||||
httpPaths = append(httpPaths, extensions.HTTPIngressPath{
|
||||
Path: path,
|
||||
Backend: extensions.IngressBackend{
|
||||
ServiceName: backend,
|
||||
ServicePort: testBackendPort,
|
||||
},
|
||||
})
|
||||
}
|
||||
return httpPaths
|
||||
}
|
||||
|
||||
// toIngressRules converts the given ingressRule map to a list of IngressRules.
|
||||
func toIngressRules(hostRules map[string]utils.FakeIngressRuleValueMap) []extensions.IngressRule {
|
||||
rules := []extensions.IngressRule{}
|
||||
for host, pathMap := range hostRules {
|
||||
rules = append(rules, extensions.IngressRule{
|
||||
Host: host,
|
||||
IngressRuleValue: extensions.IngressRuleValue{
|
||||
HTTP: &extensions.HTTPIngressRuleValue{
|
||||
Paths: toHTTPIngressPaths(pathMap),
|
||||
},
|
||||
},
|
||||
})
|
||||
}
|
||||
return rules
|
||||
}
|
||||
|
||||
// newIngress returns a new Ingress with the given path map.
|
||||
func newIngress(hostRules map[string]utils.FakeIngressRuleValueMap) *extensions.Ingress {
|
||||
return &extensions.Ingress{
|
||||
ObjectMeta: api.ObjectMeta{
|
||||
Name: fmt.Sprintf("%v", uuid.NewUUID()),
|
||||
Namespace: api.NamespaceNone,
|
||||
},
|
||||
Spec: extensions.IngressSpec{
|
||||
Backend: &extensions.IngressBackend{
|
||||
ServiceName: defaultBackendName(testClusterName),
|
||||
ServicePort: testBackendPort,
|
||||
},
|
||||
Rules: toIngressRules(hostRules),
|
||||
},
|
||||
Status: extensions.IngressStatus{
|
||||
LoadBalancer: api.LoadBalancerStatus{
|
||||
Ingress: []api.LoadBalancerIngress{
|
||||
{IP: testIPManager.ip()},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
}
|
||||
|
||||
// validIngress returns a valid Ingress.
|
||||
func validIngress() *extensions.Ingress {
|
||||
return newIngress(map[string]utils.FakeIngressRuleValueMap{
|
||||
"foo.bar.com": testPathMap,
|
||||
})
|
||||
}
|
||||
|
||||
// getKey returns the key for an ingress.
|
||||
func getKey(ing *extensions.Ingress, t *testing.T) string {
|
||||
key, err := keyFunc(ing)
|
||||
if err != nil {
|
||||
t.Fatalf("Unexpected error getting key for Ingress %v: %v", ing.Name, err)
|
||||
}
|
||||
return key
|
||||
}
|
||||
|
||||
// nodePortManager is a helper to allocate ports to services and
|
||||
// remember the allocations.
|
||||
type nodePortManager struct {
|
||||
portMap map[string]int
|
||||
start int
|
||||
end int
|
||||
namer utils.Namer
|
||||
}
|
||||
|
||||
// getNodePort returns a pseudo-random node port for the given service, remembering the allocation.
|
||||
func (p *nodePortManager) getNodePort(svcName string) int {
|
||||
if port, ok := p.portMap[svcName]; ok {
|
||||
return port
|
||||
}
|
||||
p.portMap[svcName] = rand.Intn(p.end-p.start) + p.start
|
||||
return p.portMap[svcName]
|
||||
}
|
||||
|
||||
// toNodePortSvcNames converts all service names in the given map to gce node
|
||||
// port names, e.g. foo -> k8s-be-<foo nodeport>
|
||||
func (p *nodePortManager) toNodePortSvcNames(inputMap map[string]utils.FakeIngressRuleValueMap) map[string]utils.FakeIngressRuleValueMap {
|
||||
expectedMap := map[string]utils.FakeIngressRuleValueMap{}
|
||||
for host, rules := range inputMap {
|
||||
ruleMap := utils.FakeIngressRuleValueMap{}
|
||||
for path, svc := range rules {
|
||||
ruleMap[path] = p.namer.BeName(int64(p.portMap[svc]))
|
||||
}
|
||||
expectedMap[host] = ruleMap
|
||||
}
|
||||
return expectedMap
|
||||
}
|
||||
|
||||
func newPortManager(st, end int) *nodePortManager {
|
||||
return &nodePortManager{map[string]int{}, st, end, utils.Namer{}}
|
||||
}
|
||||
|
||||
// addIngress adds an ingress to the loadbalancer controller's ingress store. If
|
||||
// a nodePortManager is supplied, it also adds all backends to the service store
|
||||
// with a nodePort acquired through it.
|
||||
func addIngress(lbc *LoadBalancerController, ing *extensions.Ingress, pm *nodePortManager) {
|
||||
lbc.ingLister.Store.Add(ing)
|
||||
if pm == nil {
|
||||
return
|
||||
}
|
||||
for _, rule := range ing.Spec.Rules {
|
||||
for _, path := range rule.HTTP.Paths {
|
||||
svc := &api.Service{
|
||||
ObjectMeta: api.ObjectMeta{
|
||||
Name: path.Backend.ServiceName,
|
||||
Namespace: ing.Namespace,
|
||||
},
|
||||
}
|
||||
var svcPort api.ServicePort
|
||||
switch path.Backend.ServicePort.Type {
|
||||
case intstr.Int:
|
||||
svcPort = api.ServicePort{Port: path.Backend.ServicePort.IntVal}
|
||||
default:
|
||||
svcPort = api.ServicePort{Name: path.Backend.ServicePort.StrVal}
|
||||
}
|
||||
svcPort.NodePort = int32(pm.getNodePort(path.Backend.ServiceName))
|
||||
svc.Spec.Ports = []api.ServicePort{svcPort}
|
||||
lbc.svcLister.Store.Add(svc)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestLbCreateDelete(t *testing.T) {
|
||||
cm := NewFakeClusterManager(DefaultClusterUID)
|
||||
lbc := newLoadBalancerController(t, cm, "")
|
||||
inputMap1 := map[string]utils.FakeIngressRuleValueMap{
|
||||
"foo.example.com": {
|
||||
"/foo1": "foo1svc",
|
||||
"/foo2": "foo2svc",
|
||||
},
|
||||
"bar.example.com": {
|
||||
"/bar1": "bar1svc",
|
||||
"/bar2": "bar2svc",
|
||||
},
|
||||
}
|
||||
inputMap2 := map[string]utils.FakeIngressRuleValueMap{
|
||||
"baz.foobar.com": {
|
||||
"/foo": "foo1svc",
|
||||
"/bar": "bar1svc",
|
||||
},
|
||||
}
|
||||
pm := newPortManager(1, 65536)
|
||||
ings := []*extensions.Ingress{}
|
||||
for _, m := range []map[string]utils.FakeIngressRuleValueMap{inputMap1, inputMap2} {
|
||||
newIng := newIngress(m)
|
||||
addIngress(lbc, newIng, pm)
|
||||
ingStoreKey := getKey(newIng, t)
|
||||
lbc.sync(ingStoreKey)
|
||||
l7, err := cm.l7Pool.Get(ingStoreKey)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
cm.fakeLbs.CheckURLMap(t, l7, pm.toNodePortSvcNames(m))
|
||||
ings = append(ings, newIng)
|
||||
}
|
||||
lbc.ingLister.Store.Delete(ings[0])
|
||||
lbc.sync(getKey(ings[0], t))
|
||||
|
||||
// BackendServices associated with ports of deleted Ingress' should get gc'd
|
||||
// when the Ingress is deleted, regardless of the service. At the same time
|
||||
// we shouldn't pull shared backends out from existing loadbalancers.
|
||||
unexpected := []int{pm.portMap["foo2svc"], pm.portMap["bar2svc"]}
|
||||
expected := []int{pm.portMap["foo1svc"], pm.portMap["bar1svc"]}
|
||||
firewallPorts := sets.NewString()
|
||||
firewallName := pm.namer.FrName(pm.namer.FrSuffix())
|
||||
|
||||
if firewallRule, err := cm.firewallPool.(*firewalls.FirewallRules).GetFirewall(firewallName); err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
} else {
|
||||
if len(firewallRule.Allowed) != 1 {
|
||||
t.Fatalf("Expected a single firewall rule")
|
||||
}
|
||||
for _, p := range firewallRule.Allowed[0].Ports {
|
||||
firewallPorts.Insert(p)
|
||||
}
|
||||
}
|
||||
|
||||
for _, port := range expected {
|
||||
if _, err := cm.backendPool.Get(int64(port)); err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
if !firewallPorts.Has(fmt.Sprintf("%v", port)) {
|
||||
t.Fatalf("Expected a firewall rule for port %v", port)
|
||||
}
|
||||
}
|
||||
for _, port := range unexpected {
|
||||
if be, err := cm.backendPool.Get(int64(port)); err == nil {
|
||||
t.Fatalf("Found backend %+v for port %v", be, port)
|
||||
}
|
||||
}
|
||||
lbc.ingLister.Store.Delete(ings[1])
|
||||
lbc.sync(getKey(ings[1], t))
|
||||
|
||||
// No cluster resources (except the defaults used by the cluster manager)
|
||||
// should exist at this point.
|
||||
for _, port := range expected {
|
||||
if be, err := cm.backendPool.Get(int64(port)); err == nil {
|
||||
t.Fatalf("Found backend %+v for port %v", be, port)
|
||||
}
|
||||
}
|
||||
if len(cm.fakeLbs.Fw) != 0 || len(cm.fakeLbs.Um) != 0 || len(cm.fakeLbs.Tp) != 0 {
|
||||
t.Fatalf("Loadbalancer leaked resources")
|
||||
}
|
||||
for _, lbName := range []string{getKey(ings[0], t), getKey(ings[1], t)} {
|
||||
if l7, err := cm.l7Pool.Get(lbName); err == nil {
|
||||
t.Fatalf("Found unexpected loadbalandcer %+v: %v", l7, err)
|
||||
}
|
||||
}
|
||||
if firewallRule, err := cm.firewallPool.(*firewalls.FirewallRules).GetFirewall(firewallName); err == nil {
|
||||
t.Fatalf("Found unexpected firewall rule %v", firewallRule)
|
||||
}
|
||||
}
|
||||
|
||||
func TestLbFaultyUpdate(t *testing.T) {
|
||||
cm := NewFakeClusterManager(DefaultClusterUID)
|
||||
lbc := newLoadBalancerController(t, cm, "")
|
||||
inputMap := map[string]utils.FakeIngressRuleValueMap{
|
||||
"foo.example.com": {
|
||||
"/foo1": "foo1svc",
|
||||
"/foo2": "foo2svc",
|
||||
},
|
||||
"bar.example.com": {
|
||||
"/bar1": "bar1svc",
|
||||
"/bar2": "bar2svc",
|
||||
},
|
||||
}
|
||||
ing := newIngress(inputMap)
|
||||
pm := newPortManager(1, 65536)
|
||||
addIngress(lbc, ing, pm)
|
||||
|
||||
ingStoreKey := getKey(ing, t)
|
||||
lbc.sync(ingStoreKey)
|
||||
l7, err := cm.l7Pool.Get(ingStoreKey)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
cm.fakeLbs.CheckURLMap(t, l7, pm.toNodePortSvcNames(inputMap))
|
||||
|
||||
// Change the urlmap directly through the lb pool, resync, and
|
||||
// make sure the controller corrects it.
|
||||
l7.UpdateUrlMap(utils.GCEURLMap{
|
||||
"foo.example.com": {
|
||||
"/foo1": &compute.BackendService{SelfLink: "foo2svc"},
|
||||
},
|
||||
})
|
||||
|
||||
lbc.sync(ingStoreKey)
|
||||
cm.fakeLbs.CheckURLMap(t, l7, pm.toNodePortSvcNames(inputMap))
|
||||
}
|
||||
|
||||
func TestLbDefaulting(t *testing.T) {
|
||||
cm := NewFakeClusterManager(DefaultClusterUID)
|
||||
lbc := newLoadBalancerController(t, cm, "")
|
||||
// Make sure the controller plugs in the default values accepted by GCE.
|
||||
ing := newIngress(map[string]utils.FakeIngressRuleValueMap{"": {"": "foo1svc"}})
|
||||
pm := newPortManager(1, 65536)
|
||||
addIngress(lbc, ing, pm)
|
||||
|
||||
ingStoreKey := getKey(ing, t)
|
||||
lbc.sync(ingStoreKey)
|
||||
l7, err := cm.l7Pool.Get(ingStoreKey)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
expectedMap := map[string]utils.FakeIngressRuleValueMap{loadbalancers.DefaultHost: {loadbalancers.DefaultPath: "foo1svc"}}
|
||||
cm.fakeLbs.CheckURLMap(t, l7, pm.toNodePortSvcNames(expectedMap))
|
||||
}
|
||||
|
||||
func TestLbNoService(t *testing.T) {
|
||||
cm := NewFakeClusterManager(DefaultClusterUID)
|
||||
lbc := newLoadBalancerController(t, cm, "")
|
||||
inputMap := map[string]utils.FakeIngressRuleValueMap{
|
||||
"foo.example.com": {
|
||||
"/foo1": "foo1svc",
|
||||
},
|
||||
}
|
||||
ing := newIngress(inputMap)
|
||||
ing.Spec.Backend.ServiceName = "foo1svc"
|
||||
ingStoreKey := getKey(ing, t)
|
||||
|
||||
// Adds ingress to store, but doesn't create an associated service.
|
||||
// This will still create the associated loadbalancer, it will just
|
||||
// have empty rules. The rules will get corrected when the service
|
||||
// pops up.
|
||||
addIngress(lbc, ing, nil)
|
||||
lbc.sync(ingStoreKey)
|
||||
|
||||
l7, err := cm.l7Pool.Get(ingStoreKey)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
|
||||
// Creates the service, next sync should have complete url map.
|
||||
pm := newPortManager(1, 65536)
|
||||
addIngress(lbc, ing, pm)
|
||||
lbc.enqueueIngressForService(&api.Service{
|
||||
ObjectMeta: api.ObjectMeta{
|
||||
Name: "foo1svc",
|
||||
Namespace: ing.Namespace,
|
||||
},
|
||||
})
|
||||
// TODO: This will hang if the previous step failed to insert into queue
|
||||
key, _ := lbc.ingQueue.queue.Get()
|
||||
lbc.sync(key.(string))
|
||||
|
||||
inputMap[utils.DefaultBackendKey] = map[string]string{
|
||||
utils.DefaultBackendKey: "foo1svc",
|
||||
}
|
||||
expectedMap := pm.toNodePortSvcNames(inputMap)
|
||||
cm.fakeLbs.CheckURLMap(t, l7, expectedMap)
|
||||
}
|
||||
|
||||
func TestLbChangeStaticIP(t *testing.T) {
|
||||
cm := NewFakeClusterManager(DefaultClusterUID)
|
||||
lbc := newLoadBalancerController(t, cm, "")
|
||||
inputMap := map[string]utils.FakeIngressRuleValueMap{
|
||||
"foo.example.com": {
|
||||
"/foo1": "foo1svc",
|
||||
},
|
||||
}
|
||||
ing := newIngress(inputMap)
|
||||
ing.Spec.Backend.ServiceName = "foo1svc"
|
||||
cert := extensions.IngressTLS{SecretName: "foo"}
|
||||
ing.Spec.TLS = []extensions.IngressTLS{cert}
|
||||
|
||||
// Add some certs so we get 2 forwarding rules, the changed static IP
|
||||
// should be assigned to both the HTTP and HTTPS forwarding rules.
|
||||
lbc.tlsLoader = &fakeTLSSecretLoader{
|
||||
fakeCerts: map[string]*loadbalancers.TLSCerts{
|
||||
cert.SecretName: {Key: "foo", Cert: "bar"},
|
||||
},
|
||||
}
|
||||
|
||||
pm := newPortManager(1, 65536)
|
||||
addIngress(lbc, ing, pm)
|
||||
ingStoreKey := getKey(ing, t)
|
||||
|
||||
// First sync creates forwarding rules and allocates an IP.
|
||||
lbc.sync(ingStoreKey)
|
||||
|
||||
// First allocate a static ip, then specify a userip in annotations.
|
||||
// The forwarding rules should contain the user ip.
|
||||
// The static ip should get cleaned up on lb tear down.
|
||||
oldIP := ing.Status.LoadBalancer.Ingress[0].IP
|
||||
oldRules := cm.fakeLbs.GetForwardingRulesWithIPs([]string{oldIP})
|
||||
if len(oldRules) != 2 || oldRules[0].IPAddress != oldRules[1].IPAddress {
|
||||
t.Fatalf("Expected 2 forwarding rules with the same IP.")
|
||||
}
|
||||
|
||||
ing.Annotations = map[string]string{staticIPNameKey: "testip"}
|
||||
cm.fakeLbs.ReserveGlobalStaticIP("testip", "1.2.3.4")
|
||||
|
||||
// Second sync reassigns 1.2.3.4 to existing forwarding rule (by recreating it)
|
||||
lbc.sync(ingStoreKey)
|
||||
|
||||
newRules := cm.fakeLbs.GetForwardingRulesWithIPs([]string{"1.2.3.4"})
|
||||
if len(newRules) != 2 || newRules[0].IPAddress != newRules[1].IPAddress || newRules[1].IPAddress != "1.2.3.4" {
|
||||
t.Fatalf("Found unexpected forwaring rules after changing static IP annotation.")
|
||||
}
|
||||
}
|
||||
|
||||
type testIP struct {
|
||||
start int
|
||||
}
|
||||
|
||||
func (t *testIP) ip() string {
|
||||
t.start++
|
||||
return fmt.Sprintf("0.0.0.%v", t.start)
|
||||
}
|
||||
|
||||
// TODO: Test lb status update when annotation stabilize
|
controllers/gce/controller/doc.go (new file, 52 lines)
@@ -0,0 +1,52 @@
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
// This is the structure of the gce l7 controller:
|
||||
// apiserver <-> controller ---> pools --> cloud
|
||||
// | |
|
||||
// |-> Ingress |-> backends
|
||||
// |-> Services | |-> health checks
|
||||
// |-> Nodes |
|
||||
// |-> instance groups
|
||||
// | |-> port per backend
|
||||
// |
|
||||
// |-> loadbalancers
|
||||
// |-> http proxy
|
||||
// |-> forwarding rule
|
||||
// |-> urlmap
|
||||
// * apiserver: kubernetes api server.
|
||||
// * controller: gce l7 controller, watches apiserver and interacts
|
||||
// with sync pools. The controller doesn't know anything about the cloud.
|
||||
// Communication between the controller and pools is 1 way.
|
||||
// * pool: the controller tells each pool about desired state by inserting
|
||||
// into shared memory store. The pools sync this with the cloud. Pools are
|
||||
// also responsible for periodically checking the edge links between various
|
||||
// cloud resources.
|
||||
//
|
||||
// A note on sync pools: this package has 3 sync pools: for node, instances and
|
||||
// loadbalancer resources. A sync pool is meant to record all creates/deletes
|
||||
// performed by a controller and periodically verify that links are not broken.
|
||||
// For example, the controller might create a backend via backendPool.Add(),
|
||||
// the backend pool remembers this and continuously verifies that the backend
|
||||
// is connected to the right instance group, and that the instance group has
|
||||
// the right ports open.
|
||||
//
|
||||
// A note on naming convention: per golang style guide for Initialisms, Http
|
||||
// should be HTTP and Url should be URL, however because these interfaces
|
||||
// must match their siblings in the Kubernetes cloud provider, which are in turn
|
||||
// consistent with GCE compute API, there might be inconsistencies.
|
||||
|
||||
package controller
|
controllers/gce/controller/fakes.go (new file, 78 lines)
@@ -0,0 +1,78 @@
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package controller
|
||||
|
||||
import (
|
||||
"k8s.io/kubernetes/pkg/util/intstr"
|
||||
"k8s.io/kubernetes/pkg/util/sets"
|
||||
|
||||
"k8s.io/contrib/ingress/controllers/gce/backends"
|
||||
"k8s.io/contrib/ingress/controllers/gce/firewalls"
|
||||
"k8s.io/contrib/ingress/controllers/gce/healthchecks"
|
||||
"k8s.io/contrib/ingress/controllers/gce/instances"
|
||||
"k8s.io/contrib/ingress/controllers/gce/loadbalancers"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
)
|
||||
|
||||
const (
|
||||
testDefaultBeNodePort = int64(3000)
|
||||
)
|
||||
|
||||
var testBackendPort = intstr.IntOrString{Type: intstr.Int, IntVal: 80}
|
||||
|
||||
// ClusterManager fake
|
||||
type fakeClusterManager struct {
|
||||
*ClusterManager
|
||||
fakeLbs *loadbalancers.FakeLoadBalancers
|
||||
fakeBackends *backends.FakeBackendServices
|
||||
fakeIGs *instances.FakeInstanceGroups
|
||||
}
|
||||
|
||||
// NewFakeClusterManager creates a new fake ClusterManager.
|
||||
func NewFakeClusterManager(clusterName string) *fakeClusterManager {
|
||||
fakeLbs := loadbalancers.NewFakeLoadBalancers(clusterName)
|
||||
fakeBackends := backends.NewFakeBackendServices()
|
||||
fakeIGs := instances.NewFakeInstanceGroups(sets.NewString())
|
||||
fakeHCs := healthchecks.NewFakeHealthChecks()
|
||||
namer := utils.NewNamer(clusterName)
|
||||
|
||||
nodePool := instances.NewNodePool(fakeIGs)
|
||||
nodePool.Init(&instances.FakeZoneLister{[]string{"zone-a"}})
|
||||
|
||||
healthChecker := healthchecks.NewHealthChecker(fakeHCs, "/", namer)
|
||||
healthChecker.Init(&healthchecks.FakeHealthCheckGetter{nil})
|
||||
|
||||
backendPool := backends.NewBackendPool(
|
||||
fakeBackends,
|
||||
healthChecker, nodePool, namer, []int64{}, false)
|
||||
l7Pool := loadbalancers.NewLoadBalancerPool(
|
||||
fakeLbs,
|
||||
// TODO: change this
|
||||
backendPool,
|
||||
testDefaultBeNodePort,
|
||||
namer,
|
||||
)
|
||||
frPool := firewalls.NewFirewallPool(firewalls.NewFakeFirewallRules(), namer)
|
||||
cm := &ClusterManager{
|
||||
ClusterNamer: namer,
|
||||
instancePool: nodePool,
|
||||
backendPool: backendPool,
|
||||
l7Pool: l7Pool,
|
||||
firewallPool: frPool,
|
||||
}
|
||||
return &fakeClusterManager{cm, fakeLbs, fakeBackends, fakeIGs}
|
||||
}
|
controllers/gce/controller/tls.go (new file, 98 lines)
@@ -0,0 +1,98 @@
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package controller
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"k8s.io/contrib/ingress/controllers/gce/loadbalancers"
|
||||
"k8s.io/kubernetes/pkg/api"
|
||||
"k8s.io/kubernetes/pkg/apis/extensions"
|
||||
client "k8s.io/kubernetes/pkg/client/unversioned"
|
||||
|
||||
"github.com/golang/glog"
|
||||
)
|
||||
|
||||
// tlsLoader loads and validates the TLS certs of an Ingress.
|
||||
type tlsLoader interface {
|
||||
load(ing *extensions.Ingress) (*loadbalancers.TLSCerts, error)
|
||||
validate(certs *loadbalancers.TLSCerts) error
|
||||
}
|
||||
|
||||
// TODO: Add better cert validation.
|
||||
type noOPValidator struct{}
|
||||
|
||||
func (n *noOPValidator) validate(certs *loadbalancers.TLSCerts) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
// apiServerTLSLoader loads TLS certs from the apiserver.
|
||||
type apiServerTLSLoader struct {
|
||||
noOPValidator
|
||||
client *client.Client
|
||||
}
|
||||
|
||||
func (t *apiServerTLSLoader) load(ing *extensions.Ingress) (*loadbalancers.TLSCerts, error) {
|
||||
if len(ing.Spec.TLS) == 0 {
|
||||
return nil, nil
|
||||
}
|
||||
// GCE L7s currently only support a single cert.
|
||||
if len(ing.Spec.TLS) > 1 {
|
||||
glog.Warningf("Ignoring %d certs and taking the first for ingress %v/%v",
|
||||
len(ing.Spec.TLS)-1, ing.Namespace, ing.Name)
|
||||
}
|
||||
secretName := ing.Spec.TLS[0].SecretName
|
||||
// TODO: Replace this with a secret watcher.
|
||||
glog.V(3).Infof("Retrieving secret for ing %v with name %v", ing.Name, secretName)
|
||||
secret, err := t.client.Secrets(ing.Namespace).Get(secretName)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
cert, ok := secret.Data[api.TLSCertKey]
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("Secret %v has no private key", secretName)
|
||||
}
|
||||
key, ok := secret.Data[api.TLSPrivateKeyKey]
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("Secret %v has no cert", secretName)
|
||||
}
|
||||
certs := &loadbalancers.TLSCerts{Key: string(key), Cert: string(cert)}
|
||||
if err := t.validate(certs); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return certs, nil
|
||||
}
|
||||
|
||||
// TODO: Add support for file loading so we can support HTTPS default backends.
|
||||
|
||||
// fakeTLSSecretLoader fakes out TLS loading.
|
||||
type fakeTLSSecretLoader struct {
|
||||
noOPValidator
|
||||
fakeCerts map[string]*loadbalancers.TLSCerts
|
||||
}
|
||||
|
||||
func (f *fakeTLSSecretLoader) load(ing *extensions.Ingress) (*loadbalancers.TLSCerts, error) {
|
||||
if len(ing.Spec.TLS) == 0 {
|
||||
return nil, nil
|
||||
}
|
||||
for name, cert := range f.fakeCerts {
|
||||
if ing.Spec.TLS[0].SecretName == name {
|
||||
return cert, nil
|
||||
}
|
||||
}
|
||||
return nil, fmt.Errorf("Couldn't find secret for ingress %v", ing.Name)
|
||||
}
|
controllers/gce/controller/util_test.go (new file, 236 lines)
@@ -0,0 +1,236 @@
/*
|
||||
Copyright 2016 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package controller
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"k8s.io/kubernetes/pkg/api"
|
||||
"k8s.io/kubernetes/pkg/api/unversioned"
|
||||
"k8s.io/kubernetes/pkg/util/intstr"
|
||||
"k8s.io/kubernetes/pkg/util/sets"
|
||||
)
|
||||
|
||||
// Pods created in loops start from this time, for routines that
|
||||
// sort on timestamp.
|
||||
var firstPodCreationTime = time.Date(2006, 01, 02, 15, 04, 05, 0, time.UTC)
|
||||
|
||||
func TestZoneListing(t *testing.T) {
|
||||
cm := NewFakeClusterManager(DefaultClusterUID)
|
||||
lbc := newLoadBalancerController(t, cm, "")
|
||||
zoneToNode := map[string][]string{
|
||||
"zone-1": {"n1"},
|
||||
"zone-2": {"n2"},
|
||||
}
|
||||
addNodes(lbc, zoneToNode)
|
||||
zones, err := lbc.tr.ListZones()
|
||||
if err != nil {
|
||||
t.Errorf("Failed to list zones: %v", err)
|
||||
}
|
||||
for expectedZone := range zoneToNode {
|
||||
found := false
|
||||
for _, gotZone := range zones {
|
||||
if gotZone == expectedZone {
|
||||
found = true
|
||||
}
|
||||
}
|
||||
if !found {
|
||||
t.Fatalf("Expected zones %v; Got zones %v", zoneToNode, zones)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestInstancesAddedToZones(t *testing.T) {
|
||||
cm := NewFakeClusterManager(DefaultClusterUID)
|
||||
lbc := newLoadBalancerController(t, cm, "")
|
||||
zoneToNode := map[string][]string{
|
||||
"zone-1": {"n1", "n2"},
|
||||
"zone-2": {"n3"},
|
||||
}
|
||||
addNodes(lbc, zoneToNode)
|
||||
|
||||
// Create 2 igs, one per zone.
|
||||
testIG := "test-ig"
|
||||
testPort := int64(3001)
|
||||
lbc.CloudClusterManager.instancePool.AddInstanceGroup(testIG, testPort)
|
||||
|
||||
// node pool syncs kube-nodes, this will add them to both igs.
|
||||
lbc.CloudClusterManager.instancePool.Sync([]string{"n1", "n2", "n3"})
|
||||
gotZonesToNode := cm.fakeIGs.GetInstancesByZone()
|
||||
|
||||
i := 0
|
||||
for z, nodeNames := range zoneToNode {
|
||||
if ig, err := cm.fakeIGs.GetInstanceGroup(testIG, z); err != nil {
|
||||
t.Errorf("Failed to find ig %v in zone %v, found %+v: %v", testIG, z, ig, err)
|
||||
}
|
||||
if cm.fakeIGs.Ports[i] != testPort {
|
||||
t.Errorf("Expected the same node port on all igs, got ports %+v", cm.fakeIGs.Ports)
|
||||
}
|
||||
expNodes := sets.NewString(nodeNames...)
|
||||
gotNodes := sets.NewString(gotZonesToNode[z]...)
|
||||
if !gotNodes.Equal(expNodes) {
|
||||
t.Errorf("Nodes not added to zones, expected %+v got %+v", expNodes, gotNodes)
|
||||
}
|
||||
i++
|
||||
}
|
||||
}
|
||||
|
||||
func TestProbeGetter(t *testing.T) {
|
||||
cm := NewFakeClusterManager(DefaultClusterUID)
|
||||
lbc := newLoadBalancerController(t, cm, "")
|
||||
nodePortToHealthCheck := map[int64]string{
|
||||
3001: "/healthz",
|
||||
3002: "/foo",
|
||||
}
|
||||
addPods(lbc, nodePortToHealthCheck, api.NamespaceDefault)
|
||||
for p, exp := range nodePortToHealthCheck {
|
||||
got, err := lbc.tr.HealthCheck(p)
|
||||
if err != nil {
|
||||
t.Errorf("Failed to get health check for node port %v: %v", p, err)
|
||||
} else if got.RequestPath != exp {
|
||||
t.Errorf("Wrong health check for node port %v, got %v expected %v", p, got.RequestPath, exp)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func TestProbeGetterCrossNamespace(t *testing.T) {
|
||||
cm := NewFakeClusterManager(DefaultClusterUID)
|
||||
lbc := newLoadBalancerController(t, cm, "")
|
||||
|
||||
firstPod := &api.Pod{
|
||||
ObjectMeta: api.ObjectMeta{
|
||||
// labels match those added by "addPods", but ns and health check
|
||||
// path are different. If this pod was created in the same ns, it
|
||||
// would become the health check.
|
||||
Labels: map[string]string{"app-3001": "test"},
|
||||
Name: fmt.Sprintf("test-pod-new-ns"),
|
||||
Namespace: "new-ns",
|
||||
CreationTimestamp: unversioned.NewTime(firstPodCreationTime.Add(-time.Duration(time.Hour))),
|
||||
},
|
||||
Spec: api.PodSpec{
|
||||
Containers: []api.Container{
|
||||
{
|
||||
Ports: []api.ContainerPort{{ContainerPort: 80}},
|
||||
ReadinessProbe: &api.Probe{
|
||||
Handler: api.Handler{
|
||||
HTTPGet: &api.HTTPGetAction{
|
||||
Scheme: api.URISchemeHTTP,
|
||||
Path: "/badpath",
|
||||
Port: intstr.IntOrString{
|
||||
Type: intstr.Int,
|
||||
IntVal: 80,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
lbc.podLister.Indexer.Add(firstPod)
|
||||
nodePortToHealthCheck := map[int64]string{
|
||||
3001: "/healthz",
|
||||
}
|
||||
addPods(lbc, nodePortToHealthCheck, api.NamespaceDefault)
|
||||
|
||||
for p, exp := range nodePortToHealthCheck {
|
||||
got, err := lbc.tr.HealthCheck(p)
|
||||
if err != nil {
|
||||
t.Errorf("Failed to get health check for node port %v: %v", p, err)
|
||||
} else if got.RequestPath != exp {
|
||||
t.Errorf("Wrong health check for node port %v, got %v expected %v", p, got.RequestPath, exp)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func addPods(lbc *LoadBalancerController, nodePortToHealthCheck map[int64]string, ns string) {
|
||||
delay := time.Minute
|
||||
for np, u := range nodePortToHealthCheck {
|
||||
l := map[string]string{fmt.Sprintf("app-%d", np): "test"}
|
||||
svc := &api.Service{
|
||||
Spec: api.ServiceSpec{
|
||||
Selector: l,
|
||||
Ports: []api.ServicePort{
|
||||
{
|
||||
NodePort: int32(np),
|
||||
TargetPort: intstr.IntOrString{
|
||||
Type: intstr.Int,
|
||||
IntVal: 80,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
svc.Name = fmt.Sprintf("%d", np)
|
||||
svc.Namespace = ns
|
||||
lbc.svcLister.Store.Add(svc)
|
||||
|
||||
pod := &api.Pod{
|
||||
ObjectMeta: api.ObjectMeta{
|
||||
Labels: l,
|
||||
Name: fmt.Sprintf("%d", np),
|
||||
Namespace: ns,
|
||||
CreationTimestamp: unversioned.NewTime(firstPodCreationTime.Add(delay)),
|
||||
},
|
||||
Spec: api.PodSpec{
|
||||
Containers: []api.Container{
|
||||
{
|
||||
Ports: []api.ContainerPort{{ContainerPort: 80}},
|
||||
ReadinessProbe: &api.Probe{
|
||||
Handler: api.Handler{
|
||||
HTTPGet: &api.HTTPGetAction{
|
||||
Scheme: api.URISchemeHTTP,
|
||||
Path: u,
|
||||
Port: intstr.IntOrString{
|
||||
Type: intstr.Int,
|
||||
IntVal: 80,
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
},
|
||||
}
|
||||
lbc.podLister.Indexer.Add(pod)
|
||||
delay = 2 * delay
|
||||
}
|
||||
}
|
||||
|
||||
func addNodes(lbc *LoadBalancerController, zoneToNode map[string][]string) {
|
||||
for zone, nodes := range zoneToNode {
|
||||
for _, node := range nodes {
|
||||
n := &api.Node{
|
||||
ObjectMeta: api.ObjectMeta{
|
||||
Name: node,
|
||||
Labels: map[string]string{
|
||||
zoneKey: zone,
|
||||
},
|
||||
},
|
||||
Status: api.NodeStatus{
|
||||
Conditions: []api.NodeCondition{
|
||||
{Type: api.NodeReady, Status: api.ConditionTrue},
|
||||
},
|
||||
},
|
||||
}
|
||||
lbc.nodeLister.Store.Add(n)
|
||||
}
|
||||
}
|
||||
lbc.CloudClusterManager.instancePool.Init(lbc.tr)
|
||||
}
|
controllers/gce/controller/utils.go (new file, 511 lines)
@@ -0,0 +1,511 @@
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package controller
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"sort"
|
||||
"strconv"
|
||||
"time"
|
||||
|
||||
compute "google.golang.org/api/compute/v1"
|
||||
"k8s.io/contrib/ingress/controllers/gce/loadbalancers"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
"k8s.io/kubernetes/pkg/api"
|
||||
"k8s.io/kubernetes/pkg/apis/extensions"
|
||||
"k8s.io/kubernetes/pkg/client/cache"
|
||||
"k8s.io/kubernetes/pkg/labels"
|
||||
"k8s.io/kubernetes/pkg/util/intstr"
|
||||
"k8s.io/kubernetes/pkg/util/sets"
|
||||
"k8s.io/kubernetes/pkg/util/wait"
|
||||
"k8s.io/kubernetes/pkg/util/workqueue"
|
||||
|
||||
"github.com/golang/glog"
|
||||
)
|
||||
|
||||
const (
|
||||
// allowHTTPKey tells the Ingress controller to allow/block HTTP access.
|
||||
// If either unset or set to true, the controller will create a
|
||||
// forwarding-rule for port 80, and any additional rules based on the TLS
|
||||
// section of the Ingress. If set to false, the controller will only create
|
||||
// rules for port 443 based on the TLS section.
|
||||
allowHTTPKey = "kubernetes.io/ingress.allow-http"
|
||||
|
||||
// staticIPNameKey tells the Ingress controller to use a specific GCE
|
||||
// static ip for its forwarding rules. If specified, the Ingress controller
|
||||
// assigns the static ip by this name to the forwarding rules of the given
|
||||
// Ingress. The controller *does not* manage this ip; it is the user's
|
||||
// responsibility to create/delete it.
|
||||
staticIPNameKey = "kubernetes.io/ingress.global-static-ip-name"
|
||||
|
||||
// ingressClassKey picks a specific "class" for the Ingress. The controller
|
||||
// only processes Ingresses with this annotation either unset, or set
|
||||
// to either gceIngressClass or the empty string.
|
||||
ingressClassKey = "kubernetes.io/ingress.class"
|
||||
gceIngressClass = "gce"
|
||||
|
||||
// Label key to denote which GCE zone a Kubernetes node is in.
|
||||
zoneKey = "failure-domain.beta.kubernetes.io/zone"
|
||||
defaultZone = ""
|
||||
)
|
||||
|
||||
// ingAnnotations represents Ingress annotations.
|
||||
type ingAnnotations map[string]string
|
||||
|
||||
// allowHTTP returns the allowHTTP flag. True by default.
|
||||
func (ing ingAnnotations) allowHTTP() bool {
|
||||
val, ok := ing[allowHTTPKey]
|
||||
if !ok {
|
||||
return true
|
||||
}
|
||||
v, err := strconv.ParseBool(val)
|
||||
if err != nil {
|
||||
return true
|
||||
}
|
||||
return v
|
||||
}
|
||||
|
||||
func (ing ingAnnotations) staticIPName() string {
|
||||
val, ok := ing[staticIPNameKey]
|
||||
if !ok {
|
||||
return ""
|
||||
}
|
||||
return val
|
||||
}
|
||||
|
||||
func (ing ingAnnotations) ingressClass() string {
|
||||
val, ok := ing[ingressClassKey]
|
||||
if !ok {
|
||||
return ""
|
||||
}
|
||||
return val
|
||||
}
|
||||
|
||||
// isGCEIngress returns true if the given Ingress either doesn't specify the
|
||||
// ingress.class annotation, or it's set to "gce".
|
||||
func isGCEIngress(ing *extensions.Ingress) bool {
|
||||
class := ingAnnotations(ing.ObjectMeta.Annotations).ingressClass()
|
||||
return class == "" || class == gceIngressClass
|
||||
}
|
||||
|
||||
// errorNodePortNotFound is an implementation of error.
|
||||
type errorNodePortNotFound struct {
|
||||
backend extensions.IngressBackend
|
||||
origErr error
|
||||
}
|
||||
|
||||
func (e errorNodePortNotFound) Error() string {
|
||||
return fmt.Sprintf("Could not find nodeport for backend %+v: %v",
|
||||
e.backend, e.origErr)
|
||||
}
|
||||
|
||||
// taskQueue manages a work queue through an independent worker that
|
||||
// invokes the given sync function for every work item inserted.
|
||||
type taskQueue struct {
|
||||
// queue is the work queue the worker polls
|
||||
queue workqueue.RateLimitingInterface
|
||||
// sync is called for each item in the queue
|
||||
sync func(string) error
|
||||
// workerDone is closed when the worker exits
|
||||
workerDone chan struct{}
|
||||
}
|
||||
|
||||
func (t *taskQueue) run(period time.Duration, stopCh <-chan struct{}) {
|
||||
wait.Until(t.worker, period, stopCh)
|
||||
}
|
||||
|
||||
// enqueue enqueues ns/name of the given api object in the task queue.
|
||||
func (t *taskQueue) enqueue(obj interface{}) {
|
||||
key, err := keyFunc(obj)
|
||||
if err != nil {
|
||||
glog.Infof("Couldn't get key for object %+v: %v", obj, err)
|
||||
return
|
||||
}
|
||||
t.queue.Add(key)
|
||||
}
|
||||
|
||||
// worker processes work in the queue through sync.
|
||||
func (t *taskQueue) worker() {
|
||||
for {
|
||||
key, quit := t.queue.Get()
|
||||
if quit {
|
||||
close(t.workerDone)
|
||||
return
|
||||
}
|
||||
glog.V(3).Infof("Syncing %v", key)
|
||||
if err := t.sync(key.(string)); err != nil {
|
||||
glog.Errorf("Requeuing %v, err %v", key, err)
|
||||
t.queue.AddRateLimited(key)
|
||||
} else {
|
||||
t.queue.Forget(key)
|
||||
}
|
||||
t.queue.Done(key)
|
||||
}
|
||||
}
|
||||
|
||||
// shutdown shuts down the work queue and waits for the worker to ACK
|
||||
func (t *taskQueue) shutdown() {
|
||||
t.queue.ShutDown()
|
||||
<-t.workerDone
|
||||
}
|
||||
|
||||
// NewTaskQueue creates a new task queue with the given sync function.
|
||||
// The sync function is called for every element inserted into the queue.
|
||||
func NewTaskQueue(syncFn func(string) error) *taskQueue {
|
||||
return &taskQueue{
|
||||
queue: workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter()),
|
||||
sync: syncFn,
|
||||
workerDone: make(chan struct{}),
|
||||
}
|
||||
}
|
||||
|
||||
// compareLinks returns true if the 2 self links are equal.
|
||||
func compareLinks(l1, l2 string) bool {
|
||||
// TODO: These can be partial links
|
||||
return l1 == l2 && l1 != ""
|
||||
}
|
||||
|
||||
// StoreToIngressLister makes a Store that lists Ingress.
|
||||
// TODO: Move this to cache/listers post 1.1.
|
||||
type StoreToIngressLister struct {
|
||||
cache.Store
|
||||
}
|
||||
|
||||
// List lists all Ingress' in the store.
|
||||
func (s *StoreToIngressLister) List() (ing extensions.IngressList, err error) {
|
||||
for _, m := range s.Store.List() {
|
||||
newIng := m.(*extensions.Ingress)
|
||||
if isGCEIngress(newIng) {
|
||||
ing.Items = append(ing.Items, *newIng)
|
||||
}
|
||||
}
|
||||
return ing, nil
|
||||
}
|
||||
|
||||
// GetServiceIngress gets all the Ingress' that have rules pointing to a service.
|
||||
// Note that this ignores services without the right nodePorts.
|
||||
func (s *StoreToIngressLister) GetServiceIngress(svc *api.Service) (ings []extensions.Ingress, err error) {
|
||||
for _, m := range s.Store.List() {
|
||||
ing := *m.(*extensions.Ingress)
|
||||
if ing.Namespace != svc.Namespace {
|
||||
continue
|
||||
}
|
||||
for _, rules := range ing.Spec.Rules {
|
||||
if rules.IngressRuleValue.HTTP == nil {
|
||||
continue
|
||||
}
|
||||
for _, p := range rules.IngressRuleValue.HTTP.Paths {
|
||||
if p.Backend.ServiceName == svc.Name {
|
||||
ings = append(ings, ing)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
if len(ings) == 0 {
|
||||
err = fmt.Errorf("No ingress for service %v", svc.Name)
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
// GCETranslator helps with kubernetes -> gce api conversion.
|
||||
type GCETranslator struct {
|
||||
*LoadBalancerController
|
||||
}
|
||||
|
||||
// toUrlMap converts an ingress to a map of subdomain: url-regex: gce backend.
|
||||
func (t *GCETranslator) toUrlMap(ing *extensions.Ingress) (utils.GCEURLMap, error) {
|
||||
hostPathBackend := utils.GCEURLMap{}
|
||||
for _, rule := range ing.Spec.Rules {
|
||||
if rule.HTTP == nil {
|
||||
glog.Errorf("Ignoring non http Ingress rule")
|
||||
continue
|
||||
}
|
||||
pathToBackend := map[string]*compute.BackendService{}
|
||||
for _, p := range rule.HTTP.Paths {
|
||||
backend, err := t.toGCEBackend(&p.Backend, ing.Namespace)
|
||||
if err != nil {
|
||||
// If a service doesn't have a nodeport we can still forward traffic
|
||||
// to all other services under the assumption that the user will
|
||||
// modify nodeport.
|
||||
if _, ok := err.(errorNodePortNotFound); ok {
|
||||
glog.Infof("%v", err)
|
||||
continue
|
||||
}
|
||||
|
||||
// If a service doesn't have a backend, there's nothing the user
|
||||
// can do to correct this (the admin might've limited quota).
|
||||
// So keep requeuing the l7 till all backends exist.
|
||||
return utils.GCEURLMap{}, err
|
||||
}
|
||||
// The Ingress spec defines empty path as catch-all, so if a user
|
||||
// asks for a single host and multiple empty paths, all traffic is
|
||||
// sent to the last backend in the rules list.
|
||||
path := p.Path
|
||||
if path == "" {
|
||||
path = loadbalancers.DefaultPath
|
||||
}
|
||||
pathToBackend[path] = backend
|
||||
}
|
||||
// If multiple hostless rule sets are specified, last one wins
|
||||
host := rule.Host
|
||||
if host == "" {
|
||||
host = loadbalancers.DefaultHost
|
||||
}
|
||||
hostPathBackend[host] = pathToBackend
|
||||
}
|
||||
defaultBackend, _ := t.toGCEBackend(ing.Spec.Backend, ing.Namespace)
|
||||
hostPathBackend.PutDefaultBackend(defaultBackend)
|
||||
return hostPathBackend, nil
|
||||
}
|
||||
|
||||
func (t *GCETranslator) toGCEBackend(be *extensions.IngressBackend, ns string) (*compute.BackendService, error) {
|
||||
if be == nil {
|
||||
return nil, nil
|
||||
}
|
||||
port, err := t.getServiceNodePort(*be, ns)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
backend, err := t.CloudClusterManager.backendPool.Get(int64(port))
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf(
|
||||
"No GCE backend exists for port %v, kube backend %+v", port, be)
|
||||
}
|
||||
return backend, nil
|
||||
}
|
||||
|
||||
// getServiceNodePort looks in the svc store for a matching service:port,
|
||||
// and returns the nodeport.
|
||||
func (t *GCETranslator) getServiceNodePort(be extensions.IngressBackend, namespace string) (int, error) {
|
||||
obj, exists, err := t.svcLister.Store.Get(
|
||||
&api.Service{
|
||||
ObjectMeta: api.ObjectMeta{
|
||||
Name: be.ServiceName,
|
||||
Namespace: namespace,
|
||||
},
|
||||
})
|
||||
if !exists {
|
||||
return invalidPort, errorNodePortNotFound{be, fmt.Errorf(
|
||||
"Service %v/%v not found in store", namespace, be.ServiceName)}
|
||||
}
|
||||
if err != nil {
|
||||
return invalidPort, errorNodePortNotFound{be, err}
|
||||
}
|
||||
var nodePort int
|
||||
for _, p := range obj.(*api.Service).Spec.Ports {
|
||||
switch be.ServicePort.Type {
|
||||
case intstr.Int:
|
||||
if p.Port == be.ServicePort.IntVal {
|
||||
nodePort = int(p.NodePort)
|
||||
break
|
||||
}
|
||||
default:
|
||||
if p.Name == be.ServicePort.StrVal {
|
||||
nodePort = int(p.NodePort)
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
if nodePort != invalidPort {
|
||||
return nodePort, nil
|
||||
}
|
||||
return invalidPort, errorNodePortNotFound{be, fmt.Errorf(
|
||||
"Could not find matching nodeport from service.")}
|
||||
}
|
||||
|
||||
// toNodePorts converts a pathlist to a flat list of nodeports.
|
||||
func (t *GCETranslator) toNodePorts(ings *extensions.IngressList) []int64 {
|
||||
knownPorts := []int64{}
|
||||
for _, ing := range ings.Items {
|
||||
defaultBackend := ing.Spec.Backend
|
||||
if defaultBackend != nil {
|
||||
port, err := t.getServiceNodePort(*defaultBackend, ing.Namespace)
|
||||
if err != nil {
|
||||
glog.Infof("%v", err)
|
||||
} else {
|
||||
knownPorts = append(knownPorts, int64(port))
|
||||
}
|
||||
}
|
||||
for _, rule := range ing.Spec.Rules {
|
||||
if rule.HTTP == nil {
|
||||
glog.Errorf("Ignoring non http Ingress rule.")
|
||||
continue
|
||||
}
|
||||
for _, path := range rule.HTTP.Paths {
|
||||
port, err := t.getServiceNodePort(path.Backend, ing.Namespace)
|
||||
if err != nil {
|
||||
glog.Infof("%v", err)
|
||||
continue
|
||||
}
|
||||
knownPorts = append(knownPorts, int64(port))
|
||||
}
|
||||
}
|
||||
}
|
||||
return knownPorts
|
||||
}
|
||||
|
||||
func getZone(n *api.Node) string {
|
||||
zone, ok := n.Labels[zoneKey]
|
||||
if !ok {
|
||||
return defaultZone
|
||||
}
|
||||
return zone
|
||||
}
|
||||
|
||||
// GetZoneForNode returns the zone for a given node by looking up its zone label.
|
||||
func (t *GCETranslator) GetZoneForNode(name string) (string, error) {
|
||||
nodes, err := t.nodeLister.NodeCondition(getNodeReadyPredicate()).List()
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
for _, n := range nodes {
|
||||
if n.Name == name {
|
||||
// TODO: Make this more resilient to label changes by listing
|
||||
// cloud nodes and figuring out zone.
|
||||
return getZone(n), nil
|
||||
}
|
||||
}
|
||||
return "", fmt.Errorf("Node not found %v", name)
|
||||
}
|
||||
|
||||
// ListZones returns a list of zones this Kubernetes cluster spans.
|
||||
func (t *GCETranslator) ListZones() ([]string, error) {
|
||||
zones := sets.String{}
|
||||
readyNodes, err := t.nodeLister.NodeCondition(getNodeReadyPredicate()).List()
|
||||
if err != nil {
|
||||
return zones.List(), err
|
||||
}
|
||||
for _, n := range readyNodes {
|
||||
zones.Insert(getZone(n))
|
||||
}
|
||||
return zones.List(), nil
|
||||
}
|
||||
|
||||
// isPortEqual compares the given IntOrString ports
|
||||
func isPortEqual(port, targetPort intstr.IntOrString) bool {
|
||||
if targetPort.Type == intstr.Int {
|
||||
return port.IntVal == targetPort.IntVal
|
||||
}
|
||||
return port.StrVal == targetPort.StrVal
|
||||
}
|
||||
|
||||
// getHTTPProbe returns the http readiness probe from the first container
|
||||
// that matches targetPort, from the set of pods matching the given labels.
|
||||
func (t *GCETranslator) getHTTPProbe(svc api.Service, targetPort intstr.IntOrString) (*api.Probe, error) {
|
||||
l := svc.Spec.Selector
|
||||
|
||||
// Lookup any container with a matching targetPort from the set of pods
|
||||
// with a matching label selector.
|
||||
pl, err := t.podLister.List(labels.SelectorFromSet(labels.Set(l)))
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// If multiple endpoints have different health checks, take the first
|
||||
sort.Sort(PodsByCreationTimestamp(pl))
|
||||
|
||||
for _, pod := range pl {
|
||||
if pod.Namespace != svc.Namespace {
|
||||
continue
|
||||
}
|
||||
logStr := fmt.Sprintf("Pod %v matching service selectors %v (targetport %+v)", pod.Name, l, targetPort)
|
||||
for _, c := range pod.Spec.Containers {
|
||||
if !isSimpleHTTPProbe(c.ReadinessProbe) {
|
||||
continue
|
||||
}
|
||||
for _, p := range c.Ports {
|
||||
cPort := intstr.IntOrString{IntVal: p.ContainerPort, StrVal: p.Name}
|
||||
if isPortEqual(cPort, targetPort) {
|
||||
if isPortEqual(c.ReadinessProbe.Handler.HTTPGet.Port, targetPort) {
|
||||
return c.ReadinessProbe, nil
|
||||
}
|
||||
glog.Infof("%v: found matching targetPort on container %v, but not on readinessProbe (%+v)",
|
||||
logStr, c.Name, c.ReadinessProbe.Handler.HTTPGet.Port)
|
||||
}
|
||||
}
|
||||
}
|
||||
glog.V(4).Infof("%v: lacks a matching HTTP probe for use in health checks.", logStr)
|
||||
}
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// isSimpleHTTPProbe returns true if the given Probe is:
|
||||
// - an HTTPGet probe, as opposed to a tcp or exec probe
|
||||
// - has a scheme of HTTP, as opposed to HTTPS
|
||||
// - has no special host or headers fields
|
||||
func isSimpleHTTPProbe(probe *api.Probe) bool {
|
||||
return (probe != nil && probe.Handler.HTTPGet != nil && probe.Handler.HTTPGet.Host == "" &&
|
||||
probe.Handler.HTTPGet.Scheme == api.URISchemeHTTP && len(probe.Handler.HTTPGet.HTTPHeaders) == 0)
|
||||
}
|
||||
|
||||
// HealthCheck returns the http readiness probe for the endpoint backing the
|
||||
// given nodePort. If no probe is found it returns a health check with "" as
|
||||
// the request path; callers are responsible for swapping this out for the
|
||||
// appropriate default.
|
||||
func (t *GCETranslator) HealthCheck(port int64) (*compute.HttpHealthCheck, error) {
|
||||
sl, err := t.svcLister.List()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// Find the label and target port of the one service with the given nodePort
|
||||
for _, s := range sl.Items {
|
||||
for _, p := range s.Spec.Ports {
|
||||
if int32(port) == p.NodePort {
|
||||
rp, err := t.getHTTPProbe(s, p.TargetPort)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if rp == nil {
|
||||
glog.Infof("No pod in service %v with node port %v has declared a matching readiness probe for health checks.", s.Name, port)
|
||||
break
|
||||
}
|
||||
healthPath := rp.Handler.HTTPGet.Path
|
||||
// GCE requires a leading "/" for health check urls.
|
||||
if string(healthPath[0]) != "/" {
|
||||
healthPath = fmt.Sprintf("/%v", healthPath)
|
||||
}
|
||||
host := rp.Handler.HTTPGet.Host
|
||||
glog.Infof("Found custom health check for Service %v nodeport %v: %v%v", s.Name, port, host, healthPath)
|
||||
return &compute.HttpHealthCheck{
|
||||
Port: port,
|
||||
RequestPath: healthPath,
|
||||
Host: host,
|
||||
Description: "kubernetes L7 health check from readiness probe.",
|
||||
CheckIntervalSec: int64(rp.PeriodSeconds),
|
||||
TimeoutSec: int64(rp.TimeoutSeconds),
|
||||
HealthyThreshold: int64(rp.SuccessThreshold),
|
||||
UnhealthyThreshold: int64(rp.FailureThreshold),
|
||||
// TODO: include headers after updating compute godep.
|
||||
}, nil
|
||||
}
|
||||
}
|
||||
}
|
||||
return utils.DefaultHealthCheckTemplate(port), nil
|
||||
}
|
||||
|
||||
// PodsByCreationTimestamp sorts a list of Pods by creation timestamp, using their names as a tie breaker.
|
||||
type PodsByCreationTimestamp []*api.Pod
|
||||
|
||||
func (o PodsByCreationTimestamp) Len() int { return len(o) }
|
||||
func (o PodsByCreationTimestamp) Swap(i, j int) { o[i], o[j] = o[j], o[i] }
|
||||
|
||||
func (o PodsByCreationTimestamp) Less(i, j int) bool {
|
||||
if o[i].CreationTimestamp.Equal(o[j].CreationTimestamp) {
|
||||
return o[i].Name < o[j].Name
|
||||
}
|
||||
return o[i].CreationTimestamp.Before(o[j].CreationTimestamp)
|
||||
}
|
controllers/gce/examples/health_checks/README.md (new file, 74 lines)
@@ -0,0 +1,74 @@
# Simple HTTP health check example
|
||||
|
||||
The GCE Ingress controller adopts the readiness probe from the matching endpoints, provided the readiness probe doesn't require HTTPS or special headers.
|
||||
|
||||
Create the following app:
|
||||
```console
|
||||
$ kubectl create -f health_check_app.yaml
|
||||
replicationcontroller "echoheaders" created
|
||||
You have exposed your service on an external port on all nodes in your
|
||||
cluster. If you want to expose this service to the external internet, you may
|
||||
need to set up firewall rules for the service port(s) (tcp:31165) to serve traffic.
|
||||
|
||||
See http://releases.k8s.io/HEAD/docs/user-guide/services-firewalls.md for more details.
|
||||
service "echoheadersx" created
|
||||
You have exposed your service on an external port on all nodes in your
|
||||
cluster. If you want to expose this service to the external internet, you may
|
||||
need to set up firewall rules for the service port(s) (tcp:31020) to serve traffic.
|
||||
|
||||
See http://releases.k8s.io/HEAD/docs/user-guide/services-firewalls.md for more details.
|
||||
service "echoheadersy" created
|
||||
ingress "echomap" created
|
||||
```
|
||||
|
||||
You should soon find an Ingress that is backed by a GCE Loadbalancer.
|
||||
|
||||
```console
|
||||
$ kubectl describe ing echomap
|
||||
Name: echomap
|
||||
Namespace: default
|
||||
Address: 107.178.255.228
|
||||
Default backend: default-http-backend:80 (10.180.0.9:8080,10.240.0.2:8080)
|
||||
Rules:
|
||||
Host Path Backends
|
||||
---- ---- --------
|
||||
foo.bar.com
|
||||
/foo echoheadersx:80 (<none>)
|
||||
bar.baz.com
|
||||
/bar echoheadersy:80 (<none>)
|
||||
/foo echoheadersx:80 (<none>)
|
||||
Annotations:
|
||||
target-proxy: k8s-tp-default-echomap--a9d60e8176d933ee
|
||||
url-map: k8s-um-default-echomap--a9d60e8176d933ee
|
||||
backends: {"k8s-be-31020--a9d60e8176d933ee":"HEALTHY","k8s-be-31165--a9d60e8176d933ee":"HEALTHY","k8s-be-31686--a9d60e8176d933ee":"HEALTHY"}
|
||||
forwarding-rule: k8s-fw-default-echomap--a9d60e8176d933ee
|
||||
Events:
|
||||
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
|
||||
--------- -------- ----- ---- ------------- -------- ------ -------
|
||||
17m 17m 1 {loadbalancer-controller } Normal ADD default/echomap
|
||||
15m 15m 1 {loadbalancer-controller } Normal CREATE ip: 107.178.255.228
|
||||
|
||||
$ curl 107.178.255.228/foo -H 'Host:foo.bar.com'
|
||||
CLIENT VALUES:
|
||||
client_address=10.240.0.5
|
||||
command=GET
|
||||
real path=/foo
|
||||
query=nil
|
||||
request_version=1.1
|
||||
request_uri=http://foo.bar.com:8080/foo
|
||||
...
|
||||
```
|
||||
|
||||
You can confirm which health check endpoint it's using in one of two ways:
|
||||
* Through the cloud console: compute > health checks > look up your health check. It takes the form k8s-be-<nodePort>-<hash>, where nodePort in the example above is 31165 and 31020, as shown by the kubectl output.
|
||||
* Through gcloud: Run `gcloud compute http-health-checks list`
|
||||
|
||||
## Limitations
|
||||
|
||||
A few points to note:
|
||||
* The readiness probe must be exposed on the port matching the `servicePort` specified in the Ingress
|
||||
* The readiness probe cannot have special requirements, like headers or HTTPS
|
||||
* The probe timeouts are translated to GCE health check timeouts
|
||||
* You must create the pods backing the endpoints with the given readiness probe. This *will not* work if you update the replication controller with a different readiness probe.
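
For reference, here is a minimal sketch of a readiness probe the controller can adopt under these constraints: a plain HTTP GET on the container port backing the Service's `targetPort`, with no custom host or headers. Names and the `/healthz` path below are illustrative, not required.

```yaml
# Sketch of a pod template fragment; assumes the Service's targetPort is 8080.
containers:
- name: echoheaders
  image: gcr.io/google_containers/echoserver:1.4
  ports:
  - containerPort: 8080        # must match the Service's targetPort
  readinessProbe:
    httpGet:
      path: /healthz           # adopted as the GCE health check request path
      port: 8080               # plain HTTP on the serving port; no HTTPS, no custom headers
    periodSeconds: 1           # translated to the GCE check interval
    timeoutSeconds: 1          # translated to the GCE check timeout
```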
|
||||
|
||||
|
controllers/gce/examples/health_checks/health_check_app.yaml (new file, 82 lines)
@@ -0,0 +1,82 @@
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: echoheaders
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: echoheaders
|
||||
spec:
|
||||
containers:
|
||||
- name: echoheaders
|
||||
image: gcr.io/google_containers/echoserver:1.4
|
||||
ports:
|
||||
- containerPort: 8080
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 8080
|
||||
periodSeconds: 1
|
||||
timeoutSeconds: 1
|
||||
successThreshold: 1
|
||||
failureThreshold: 10
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: echoheadersx
|
||||
labels:
|
||||
app: echoheaders
|
||||
spec:
|
||||
type: NodePort
|
||||
ports:
|
||||
- port: 80
|
||||
targetPort: 8080
|
||||
protocol: TCP
|
||||
name: http
|
||||
selector:
|
||||
app: echoheaders
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: echoheadersy
|
||||
labels:
|
||||
app: echoheaders
|
||||
spec:
|
||||
type: NodePort
|
||||
ports:
|
||||
- port: 80
|
||||
targetPort: 8080
|
||||
protocol: TCP
|
||||
name: http
|
||||
selector:
|
||||
app: echoheaders
|
||||
---
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: echomap
|
||||
spec:
|
||||
rules:
|
||||
- host: foo.bar.com
|
||||
http:
|
||||
paths:
|
||||
- path: /foo
|
||||
backend:
|
||||
serviceName: echoheadersx
|
||||
servicePort: 80
|
||||
- host: bar.baz.com
|
||||
http:
|
||||
paths:
|
||||
- path: /bar
|
||||
backend:
|
||||
serviceName: echoheadersy
|
||||
servicePort: 80
|
||||
- path: /foo
|
||||
backend:
|
||||
serviceName: echoheadersx
|
||||
servicePort: 80
|
||||
|
controllers/gce/examples/https/Makefile (new file, 32 lines)
@@ -0,0 +1,32 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
all:
|
||||
|
||||
KEY = /tmp/tls.key
|
||||
CERT = /tmp/tls.crt
|
||||
SECRET = /tmp/tls.json
|
||||
HOST=example.com
|
||||
NAME=tls-secret
|
||||
|
||||
keys:
|
||||
# The CName used here is specific to the service specified in nginx-app.yaml.
|
||||
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $(KEY) -out $(CERT) -subj "/CN=$(HOST)/O=$(HOST)"
|
||||
|
||||
secret:
|
||||
godep go run make_secret.go -crt $(CERT) -key $(KEY) -name $(NAME) > $(SECRET)
|
||||
|
||||
clean:
|
||||
rm $(KEY)
|
||||
rm $(CERT)
|
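
The variables above are plain make defaults, so they can be overridden on the command line; for example, to issue the self-signed certificate for a different host (illustrative values):

```console
$ make keys secret HOST=myhost.example.com NAME=myhost-tls
$ kubectl create -f /tmp/tls.json
```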
20 controllers/gce/examples/https/README.md Normal file
@@ -0,0 +1,20 @@
# Simple TLS example

Create the secret:
```console
$ make keys secret
$ kubectl create -f /tmp/tls.json
```

Make sure you have the l7 controller running:
```console
$ kubectl --namespace=kube-system get pod -l name=glbc
NAME
l7-lb-controller-v0.6.0-1770t ...
```
Also make sure you have a [firewall rule](https://github.com/kubernetes/contrib/blob/master/ingress/controllers/gce/BETA_LIMITATIONS.md#creating-the-fir-glbc-health-checks) for the node port of the Service.

Create the Ingress:
```console
$ kubectl create -f tls-app.yaml
```
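
Once created, you can confirm that the controller provisioned an HTTPS forwarding rule alongside the HTTP one; a sketch (names and IPs will differ per cluster):

```console
$ kubectl get ing test
$ gcloud compute forwarding-rules list
```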
71 controllers/gce/examples/https/make_secret.go Normal file
@@ -0,0 +1,71 @@
/*
Copyright 2015 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// A small script that converts the given open ssl public/private keys to
// a secret that it writes to stdout as json. Most common use case is to
// create a secret from self signed certificates used to authenticate with
// a devserver. Usage: go run make_secret.go -crt ca.crt -key priv.key > secret.json
package main

import (
	"flag"
	"fmt"
	"io/ioutil"
	"log"

	"k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/apimachinery/registered"
	"k8s.io/kubernetes/pkg/runtime"

	// This installs the legacy v1 API
	_ "k8s.io/kubernetes/pkg/api/install"
)

// TODO:
// Add a -o flag that writes to the specified destination file.
// Teach the script to create crt and key if -crt and -key aren't specified.
var (
	crt  = flag.String("crt", "", "path to tls certificates.")
	key  = flag.String("key", "", "path to tls private key.")
	name = flag.String("name", "tls-secret", "name of the secret.")
)

func read(file string) []byte {
	b, err := ioutil.ReadFile(file)
	if err != nil {
		log.Fatalf("Cannot read file %v, %v", file, err)
	}
	return b
}

func main() {
	flag.Parse()
	if *crt == "" || *key == "" {
		log.Fatalf("Need to specify both -crt and -key")
	}
	tlsCrt := read(*crt)
	tlsKey := read(*key)
	secret := &api.Secret{
		ObjectMeta: api.ObjectMeta{
			Name: *name,
		},
		Data: map[string][]byte{
			api.TLSCertKey:       tlsCrt,
			api.TLSPrivateKeyKey: tlsKey,
		},
	}
	fmt.Printf(runtime.EncodeOrDie(api.Codecs.LegacyCodec(registered.EnabledVersions()...), secret))
}
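
The `secret` target in the Makefile above is just a thin wrapper around this script; invoked by hand with the same paths it looks like:

```console
$ godep go run make_secret.go -crt /tmp/tls.crt -key /tmp/tls.key -name tls-secret > /tmp/tls.json
```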
46 controllers/gce/examples/https/tls-app.yaml Normal file
@@ -0,0 +1,46 @@
apiVersion: v1
kind: Service
metadata:
  name: echoheaders-https
  labels:
    app: echoheaders-https
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders-https
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: echoheaders-https
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: echoheaders-https
    spec:
      containers:
      - name: echoheaders-https
        image: gcr.io/google_containers/echoserver:1.3
        ports:
        - containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  tls:
  # This assumes tls-secret exists.
  # To generate it run the make in this directory.
  - secretName: tls-secret
  backend:
    serviceName: echoheaders-https
    servicePort: 80
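
The `secretName` referenced above must exist before the controller can attach the certificate; a quick way to double-check:

```console
$ kubectl get secret tls-secret
```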
104
controllers/gce/firewalls/fakes.go
Normal file
104
controllers/gce/firewalls/fakes.go
Normal file
|
@ -0,0 +1,104 @@
|
|||
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package firewalls
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
compute "google.golang.org/api/compute/v1"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
netset "k8s.io/kubernetes/pkg/util/net/sets"
|
||||
)
|
||||
|
||||
type fakeFirewallRules struct {
|
||||
fw []*compute.Firewall
|
||||
namer utils.Namer
|
||||
}
|
||||
|
||||
func (f *fakeFirewallRules) GetFirewall(name string) (*compute.Firewall, error) {
|
||||
for _, rule := range f.fw {
|
||||
if rule.Name == name {
|
||||
return rule, nil
|
||||
}
|
||||
}
|
||||
return nil, fmt.Errorf("Firewall rule %v not found.", name)
|
||||
}
|
||||
|
||||
func (f *fakeFirewallRules) CreateFirewall(name, msgTag string, srcRange netset.IPNet, ports []int64, hosts []string) error {
|
||||
strPorts := []string{}
|
||||
for _, p := range ports {
|
||||
strPorts = append(strPorts, fmt.Sprintf("%v", p))
|
||||
}
|
||||
f.fw = append(f.fw, &compute.Firewall{
|
||||
// To accurately mimic the cloudprovider we need to add the k8s-fw
|
||||
// prefix to the given rule name.
|
||||
Name: f.namer.FrName(name),
|
||||
SourceRanges: srcRange.StringSlice(),
|
||||
Allowed: []*compute.FirewallAllowed{{Ports: strPorts}},
|
||||
})
|
||||
return nil
|
||||
}
|
||||
|
||||
func (f *fakeFirewallRules) DeleteFirewall(name string) error {
|
||||
firewalls := []*compute.Firewall{}
|
||||
exists := false
|
||||
// We need the full name for the same reason as CreateFirewall.
|
||||
name = f.namer.FrName(name)
|
||||
for _, rule := range f.fw {
|
||||
if rule.Name == name {
|
||||
exists = true
|
||||
continue
|
||||
}
|
||||
firewalls = append(firewalls, rule)
|
||||
}
|
||||
if !exists {
|
||||
return fmt.Errorf("Failed to find health check %v", name)
|
||||
}
|
||||
f.fw = firewalls
|
||||
return nil
|
||||
}
|
||||
|
||||
func (f *fakeFirewallRules) UpdateFirewall(name, msgTag string, srcRange netset.IPNet, ports []int64, hosts []string) error {
|
||||
var exists bool
|
||||
strPorts := []string{}
|
||||
for _, p := range ports {
|
||||
strPorts = append(strPorts, fmt.Sprintf("%v", p))
|
||||
}
|
||||
|
||||
// To accurately mimic the cloudprovider we need to add the k8s-fw
|
||||
// prefix to the given rule name.
|
||||
name = f.namer.FrName(name)
|
||||
for i := range f.fw {
|
||||
if f.fw[i].Name == name {
|
||||
exists = true
|
||||
f.fw[i] = &compute.Firewall{
|
||||
Name: name,
|
||||
SourceRanges: srcRange.StringSlice(),
|
||||
Allowed: []*compute.FirewallAllowed{{Ports: strPorts}},
|
||||
}
|
||||
}
|
||||
}
|
||||
if exists {
|
||||
return nil
|
||||
}
|
||||
return fmt.Errorf("Update failed for rule %v, srcRange %v ports %v, rule not found", name, srcRange, ports)
|
||||
}
|
||||
|
||||
// NewFakeFirewallRules creates a fake for firewall rules.
|
||||
func NewFakeFirewallRules() *fakeFirewallRules {
|
||||
return &fakeFirewallRules{fw: []*compute.Firewall{}, namer: utils.Namer{}}
|
||||
}
|
94 controllers/gce/firewalls/firewalls.go Normal file
@@ -0,0 +1,94 @@
/*
Copyright 2015 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package firewalls

import (
	"github.com/golang/glog"
	"strconv"

	compute "google.golang.org/api/compute/v1"
	"k8s.io/contrib/ingress/controllers/gce/utils"
	netset "k8s.io/kubernetes/pkg/util/net/sets"
	"k8s.io/kubernetes/pkg/util/sets"
)

// Src range from which the GCE L7 performs health checks.
const l7SrcRange = "130.211.0.0/22"

// FirewallRules manages firewall rules.
type FirewallRules struct {
	cloud    Firewall
	namer    *utils.Namer
	srcRange netset.IPNet
}

// NewFirewallPool creates a new firewall rule manager.
// cloud: the cloud object implementing Firewall.
// namer: cluster namer.
func NewFirewallPool(cloud Firewall, namer *utils.Namer) SingleFirewallPool {
	srcNetSet, err := netset.ParseIPNets(l7SrcRange)
	if err != nil {
		glog.Fatalf("Could not parse L7 src range %v for firewall rule: %v", l7SrcRange, err)
	}
	return &FirewallRules{cloud: cloud, namer: namer, srcRange: srcNetSet}
}

// Sync syncs firewall rules with the cloud.
func (fr *FirewallRules) Sync(nodePorts []int64, nodeNames []string) error {
	if len(nodePorts) == 0 {
		return fr.Shutdown()
	}
	// Firewall rule prefix must match that inserted by the gce library.
	suffix := fr.namer.FrSuffix()
	// TODO: Fix upstream gce cloudprovider lib so GET also takes the suffix
	// instead of the whole name.
	name := fr.namer.FrName(suffix)
	rule, _ := fr.cloud.GetFirewall(name)
	if rule == nil {
		glog.Infof("Creating global l7 firewall rule %v", name)
		return fr.cloud.CreateFirewall(suffix, "GCE L7 firewall rule", fr.srcRange, nodePorts, nodeNames)
	}

	requiredPorts := sets.NewString()
	for _, p := range nodePorts {
		requiredPorts.Insert(strconv.Itoa(int(p)))
	}
	existingPorts := sets.NewString()
	for _, allowed := range rule.Allowed {
		for _, p := range allowed.Ports {
			existingPorts.Insert(p)
		}
	}
	if requiredPorts.Equal(existingPorts) {
		return nil
	}
	glog.V(3).Infof("Firewall rule %v already exists, updating nodeports %v", name, nodePorts)
	return fr.cloud.UpdateFirewall(suffix, "GCE L7 firewall rule", fr.srcRange, nodePorts, nodeNames)
}

// Shutdown shuts down this firewall rules manager.
func (fr *FirewallRules) Shutdown() error {
	glog.Infof("Deleting firewall rule with suffix %v", fr.namer.FrSuffix())
	return fr.cloud.DeleteFirewall(fr.namer.FrSuffix())
}

// GetFirewall just returns the firewall object corresponding to the given name.
// TODO: Currently only used in testing. Modify so we don't leak compute
// objects out of this interface by returning just the (src, ports, error).
func (fr *FirewallRules) GetFirewall(name string) (*compute.Firewall, error) {
	return fr.cloud.GetFirewall(name)
}
39 controllers/gce/firewalls/interfaces.go Normal file
@@ -0,0 +1,39 @@
/*
Copyright 2015 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package firewalls

import (
	compute "google.golang.org/api/compute/v1"
	netset "k8s.io/kubernetes/pkg/util/net/sets"
)

// SingleFirewallPool syncs the firewall rule for L7 traffic.
type SingleFirewallPool interface {
	// TODO: Take a list of node ports for the firewall.
	Sync(nodePorts []int64, nodeNames []string) error
	Shutdown() error
}

// Firewall interfaces with the GCE firewall api.
// This interface is a little different from the rest because it dovetails into
// the same firewall methods used by the TCPLoadBalancer.
type Firewall interface {
	CreateFirewall(name, msgTag string, srcRange netset.IPNet, ports []int64, hosts []string) error
	GetFirewall(name string) (*compute.Firewall, error)
	DeleteFirewall(name string) error
	UpdateFirewall(name, msgTag string, srcRange netset.IPNet, ports []int64, hosts []string) error
}
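
The interfaces above are small enough that the intended call pattern is easy to show. The following is a minimal sketch, written as an in-package test so it can reuse the fake from `fakes.go` above; the node ports and node names are just the illustrative values from the health-check walkthrough earlier:

```go
package firewalls

import (
	"testing"

	"k8s.io/contrib/ingress/controllers/gce/utils"
)

func TestFirewallPoolSketch(t *testing.T) {
	// The fake stands in for whatever implements the Firewall interface in production.
	pool := NewFirewallPool(NewFakeFirewallRules(), &utils.Namer{})

	// Sync opens the L7 health-check source range to the given node ports.
	if err := pool.Sync([]int64{31165, 31020}, []string{"node-a", "node-b"}); err != nil {
		t.Fatalf("Sync: %v", err)
	}

	// Syncing an empty port list tears the rule back down via Shutdown.
	if err := pool.Sync([]int64{}, nil); err != nil {
		t.Fatalf("Sync to zero ports: %v", err)
	}
}
```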
101
controllers/gce/healthchecks/fakes.go
Normal file
101
controllers/gce/healthchecks/fakes.go
Normal file
|
@ -0,0 +1,101 @@
|
|||
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package healthchecks
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
compute "google.golang.org/api/compute/v1"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
)
|
||||
|
||||
// NewFakeHealthChecks returns a new FakeHealthChecks.
|
||||
func NewFakeHealthChecks() *FakeHealthChecks {
|
||||
return &FakeHealthChecks{hc: []*compute.HttpHealthCheck{}}
|
||||
}
|
||||
|
||||
// FakeHealthCheckGetter implements the healthCheckGetter interface for tests.
|
||||
type FakeHealthCheckGetter struct {
|
||||
DefaultHealthCheck *compute.HttpHealthCheck
|
||||
}
|
||||
|
||||
// HealthCheck returns the health check for the given port. If a health check
|
||||
// isn't stored under the DefaultHealthCheck member, it constructs one.
|
||||
func (h *FakeHealthCheckGetter) HealthCheck(port int64) (*compute.HttpHealthCheck, error) {
|
||||
if h.DefaultHealthCheck == nil {
|
||||
return utils.DefaultHealthCheckTemplate(port), nil
|
||||
}
|
||||
return h.DefaultHealthCheck, nil
|
||||
}
|
||||
|
||||
// FakeHealthChecks fakes out health checks.
|
||||
type FakeHealthChecks struct {
|
||||
hc []*compute.HttpHealthCheck
|
||||
}
|
||||
|
||||
// CreateHttpHealthCheck fakes out http health check creation.
|
||||
func (f *FakeHealthChecks) CreateHttpHealthCheck(hc *compute.HttpHealthCheck) error {
|
||||
f.hc = append(f.hc, hc)
|
||||
return nil
|
||||
}
|
||||
|
||||
// GetHttpHealthCheck fakes out getting a http health check from the cloud.
|
||||
func (f *FakeHealthChecks) GetHttpHealthCheck(name string) (*compute.HttpHealthCheck, error) {
|
||||
for _, h := range f.hc {
|
||||
if h.Name == name {
|
||||
return h, nil
|
||||
}
|
||||
}
|
||||
return nil, fmt.Errorf("Health check %v not found.", name)
|
||||
}
|
||||
|
||||
// DeleteHttpHealthCheck fakes out deleting a http health check.
|
||||
func (f *FakeHealthChecks) DeleteHttpHealthCheck(name string) error {
|
||||
healthChecks := []*compute.HttpHealthCheck{}
|
||||
exists := false
|
||||
for _, h := range f.hc {
|
||||
if h.Name == name {
|
||||
exists = true
|
||||
continue
|
||||
}
|
||||
healthChecks = append(healthChecks, h)
|
||||
}
|
||||
if !exists {
|
||||
return fmt.Errorf("Failed to find health check %v", name)
|
||||
}
|
||||
f.hc = healthChecks
|
||||
return nil
|
||||
}
|
||||
|
||||
// UpdateHttpHealthCheck sends the given health check as an update.
|
||||
func (f *FakeHealthChecks) UpdateHttpHealthCheck(hc *compute.HttpHealthCheck) error {
|
||||
healthChecks := []*compute.HttpHealthCheck{}
|
||||
found := false
|
||||
for _, h := range f.hc {
|
||||
if h.Name == hc.Name {
|
||||
healthChecks = append(healthChecks, hc)
|
||||
found = true
|
||||
} else {
|
||||
healthChecks = append(healthChecks, h)
|
||||
}
|
||||
}
|
||||
if !found {
|
||||
return fmt.Errorf("Cannot update a non-existent health check %v", hc.Name)
|
||||
}
|
||||
f.hc = healthChecks
|
||||
return nil
|
||||
}
|
93 controllers/gce/healthchecks/healthchecks.go Normal file
@@ -0,0 +1,93 @@
/*
Copyright 2015 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package healthchecks

import (
	compute "google.golang.org/api/compute/v1"

	"github.com/golang/glog"
	"k8s.io/contrib/ingress/controllers/gce/utils"
	"net/http"
)

// HealthChecks manages health checks.
type HealthChecks struct {
	cloud       SingleHealthCheck
	defaultPath string
	namer       *utils.Namer
	healthCheckGetter
}

// NewHealthChecker creates a new health checker.
// cloud: the cloud object implementing SingleHealthCheck.
// defaultHealthCheckPath: the HTTP path to use for health checks.
func NewHealthChecker(cloud SingleHealthCheck, defaultHealthCheckPath string, namer *utils.Namer) HealthChecker {
	return &HealthChecks{cloud, defaultHealthCheckPath, namer, nil}
}

// Init initializes the health checker.
func (h *HealthChecks) Init(r healthCheckGetter) {
	h.healthCheckGetter = r
}

// Add adds a healthcheck if one for the same port doesn't already exist.
func (h *HealthChecks) Add(port int64) error {
	wantHC, err := h.healthCheckGetter.HealthCheck(port)
	if err != nil {
		return err
	}
	if wantHC.RequestPath == "" {
		wantHC.RequestPath = h.defaultPath
	}
	name := h.namer.BeName(port)
	wantHC.Name = name
	hc, _ := h.Get(port)
	if hc == nil {
		// TODO: check if the readiness probe has changed and update the
		// health check.
		glog.Infof("Creating health check %v", name)
		if err := h.cloud.CreateHttpHealthCheck(wantHC); err != nil {
			return err
		}
	} else if wantHC.RequestPath != hc.RequestPath {
		// TODO: reconcile health checks, and compare headers interval etc.
		// Currently Ingress doesn't expose all the health check params
		// natively, so some users prefer to hand modify the check.
		glog.Infof("Unexpected request path on health check %v, has %v want %v, NOT reconciling",
			name, hc.RequestPath, wantHC.RequestPath)
	} else {
		glog.Infof("Health check %v already exists and has the expected path %v", hc.Name, hc.RequestPath)
	}
	return nil
}

// Delete deletes the health check by port.
func (h *HealthChecks) Delete(port int64) error {
	name := h.namer.BeName(port)
	glog.Infof("Deleting health check %v", name)
	if err := h.cloud.DeleteHttpHealthCheck(h.namer.BeName(port)); err != nil {
		if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
			return err
		}
	}
	return nil
}

// Get returns the given health check.
func (h *HealthChecks) Get(port int64) (*compute.HttpHealthCheck, error) {
	return h.cloud.GetHttpHealthCheck(h.namer.BeName(port))
}
44 controllers/gce/healthchecks/interfaces.go Normal file
@@ -0,0 +1,44 @@
/*
Copyright 2015 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package healthchecks

import (
	compute "google.golang.org/api/compute/v1"
)

// healthCheckGetter retrieves health checks.
type healthCheckGetter interface {
	// HealthCheck returns the HTTP readiness check for a node port.
	HealthCheck(nodePort int64) (*compute.HttpHealthCheck, error)
}

// SingleHealthCheck is an interface to manage a single GCE health check.
type SingleHealthCheck interface {
	CreateHttpHealthCheck(hc *compute.HttpHealthCheck) error
	UpdateHttpHealthCheck(hc *compute.HttpHealthCheck) error
	DeleteHttpHealthCheck(name string) error
	GetHttpHealthCheck(name string) (*compute.HttpHealthCheck, error)
}

// HealthChecker is an interface to manage cloud HTTPHealthChecks.
type HealthChecker interface {
	Init(h healthCheckGetter)

	Add(port int64) error
	Delete(port int64) error
	Get(port int64) (*compute.HttpHealthCheck, error)
}
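
As with the firewall pool, a short in-package sketch shows how these pieces fit together. In the real controller the backend pool acts as the `healthCheckGetter` (deriving the check from the pods' readiness probes); here the fake getter from `fakes.go` above is enough, and the port is just an illustrative node port:

```go
package healthchecks

import (
	"testing"

	"k8s.io/contrib/ingress/controllers/gce/utils"
)

func TestHealthCheckerSketch(t *testing.T) {
	checker := NewHealthChecker(NewFakeHealthChecks(), "/healthz", &utils.Namer{})
	// With a nil DefaultHealthCheck the fake getter falls back to the
	// default template for the port.
	checker.Init(&FakeHealthCheckGetter{})

	// Add creates an HTTP health check named via namer.BeName(port),
	// roughly the k8s-be-<nodePort>-... form mentioned in the README above.
	if err := checker.Add(31165); err != nil {
		t.Fatalf("Add: %v", err)
	}
	if hc, err := checker.Get(31165); err != nil || hc == nil {
		t.Fatalf("Get: %v, %v", hc, err)
	}
}
```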
87 controllers/gce/ingress-app.yaml Normal file
@@ -0,0 +1,87 @@
# This Service writes the HTTP request headers out to the response. Access it
# through its NodePort, LoadBalancer or Ingress endpoint.
apiVersion: v1
kind: Service
metadata:
  name: echoheadersx
  labels:
    app: echoheaders
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30301
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders
---
apiVersion: v1
kind: Service
metadata:
  name: echoheadersy
  labels:
    app: echoheaders
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30284
    targetPort: 8080
    protocol: TCP
    name: http
  selector:
    app: echoheaders
---
# This is a replication controller for the endpoint that services the 2
# Services above.
apiVersion: v1
kind: ReplicationController
metadata:
  name: echoheaders
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: echoheaders
    spec:
      containers:
      - name: echoheaders
        image: gcr.io/google_containers/echoserver:1.4
        ports:
        - containerPort: 8080
---
# This is the Ingress resource that creates a HTTP Loadbalancer configured
# according to the Ingress rules.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echomap
spec:
  backend:
    # Re-use echoheadersx as the default backend so we stay under the default
    # quota for gce BackendServices.
    serviceName: echoheadersx
    servicePort: 80
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: echoheadersx
          servicePort: 80
  - host: bar.baz.com
    http:
      paths:
      - path: /bar
        backend:
          serviceName: echoheadersy
          servicePort: 80
      - path: /foo
        backend:
          serviceName: echoheadersx
          servicePort: 80
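
This manifest can be created as-is, and the resulting loadbalancer then inspected through the Ingress resource it defines:

```console
$ kubectl create -f ingress-app.yaml
$ kubectl get ing echomap
```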
169
controllers/gce/instances/fakes.go
Normal file
169
controllers/gce/instances/fakes.go
Normal file
|
@ -0,0 +1,169 @@
|
|||
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package instances
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
compute "google.golang.org/api/compute/v1"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
"k8s.io/kubernetes/pkg/util/sets"
|
||||
)
|
||||
|
||||
// NewFakeInstanceGroups creates a new FakeInstanceGroups.
|
||||
func NewFakeInstanceGroups(nodes sets.String) *FakeInstanceGroups {
|
||||
return &FakeInstanceGroups{
|
||||
instances: nodes,
|
||||
listResult: getInstanceList(nodes),
|
||||
namer: utils.Namer{},
|
||||
zonesToInstances: map[string][]string{},
|
||||
}
|
||||
}
|
||||
|
||||
// InstanceGroup fakes
|
||||
|
||||
// FakeZoneLister records zones for nodes.
|
||||
type FakeZoneLister struct {
|
||||
Zones []string
|
||||
}
|
||||
|
||||
// ListZones returns the list of zones.
|
||||
func (z *FakeZoneLister) ListZones() ([]string, error) {
|
||||
return z.Zones, nil
|
||||
}
|
||||
|
||||
// GetZoneForNode returns the only zone stored in the fake zone lister.
|
||||
func (z *FakeZoneLister) GetZoneForNode(name string) (string, error) {
|
||||
// TODO: evolve as required, it's currently needed just to satisfy the
|
||||
// interface in unittests that don't care about zones. See unittests in
|
||||
// controller/util_test for actual zoneLister testing.
|
||||
return z.Zones[0], nil
|
||||
}
|
||||
|
||||
// FakeInstanceGroups fakes out the instance groups api.
|
||||
type FakeInstanceGroups struct {
|
||||
instances sets.String
|
||||
instanceGroups []*compute.InstanceGroup
|
||||
Ports []int64
|
||||
getResult *compute.InstanceGroup
|
||||
listResult *compute.InstanceGroupsListInstances
|
||||
calls []int
|
||||
namer utils.Namer
|
||||
zonesToInstances map[string][]string
|
||||
}
|
||||
|
||||
// GetInstanceGroup fakes getting an instance group from the cloud.
|
||||
func (f *FakeInstanceGroups) GetInstanceGroup(name, zone string) (*compute.InstanceGroup, error) {
|
||||
f.calls = append(f.calls, utils.Get)
|
||||
for _, ig := range f.instanceGroups {
|
||||
if ig.Name == name && ig.Zone == zone {
|
||||
return ig, nil
|
||||
}
|
||||
}
|
||||
// TODO: Return googleapi 404 error
|
||||
return nil, fmt.Errorf("Instance group %v not found", name)
|
||||
}
|
||||
|
||||
// CreateInstanceGroup fakes instance group creation.
|
||||
func (f *FakeInstanceGroups) CreateInstanceGroup(name, zone string) (*compute.InstanceGroup, error) {
|
||||
newGroup := &compute.InstanceGroup{Name: name, SelfLink: name, Zone: zone}
|
||||
f.instanceGroups = append(f.instanceGroups, newGroup)
|
||||
return newGroup, nil
|
||||
}
|
||||
|
||||
// DeleteInstanceGroup fakes instance group deletion.
|
||||
func (f *FakeInstanceGroups) DeleteInstanceGroup(name, zone string) error {
|
||||
newGroups := []*compute.InstanceGroup{}
|
||||
found := false
|
||||
for _, ig := range f.instanceGroups {
|
||||
if ig.Name == name {
|
||||
found = true
|
||||
continue
|
||||
}
|
||||
newGroups = append(newGroups, ig)
|
||||
}
|
||||
if !found {
|
||||
return fmt.Errorf("Instance Group %v not found", name)
|
||||
}
|
||||
f.instanceGroups = newGroups
|
||||
return nil
|
||||
}
|
||||
|
||||
// ListInstancesInInstanceGroup fakes listing instances in an instance group.
|
||||
func (f *FakeInstanceGroups) ListInstancesInInstanceGroup(name, zone string, state string) (*compute.InstanceGroupsListInstances, error) {
|
||||
return f.listResult, nil
|
||||
}
|
||||
|
||||
// AddInstancesToInstanceGroup fakes adding instances to an instance group.
|
||||
func (f *FakeInstanceGroups) AddInstancesToInstanceGroup(name, zone string, instanceNames []string) error {
|
||||
f.calls = append(f.calls, utils.AddInstances)
|
||||
f.instances.Insert(instanceNames...)
|
||||
if _, ok := f.zonesToInstances[zone]; !ok {
|
||||
f.zonesToInstances[zone] = []string{}
|
||||
}
|
||||
f.zonesToInstances[zone] = append(f.zonesToInstances[zone], instanceNames...)
|
||||
return nil
|
||||
}
|
||||
|
||||
// GetInstancesByZone returns the zone to instances map.
|
||||
func (f *FakeInstanceGroups) GetInstancesByZone() map[string][]string {
|
||||
return f.zonesToInstances
|
||||
}
|
||||
|
||||
// RemoveInstancesFromInstanceGroup fakes removing instances from an instance group.
|
||||
func (f *FakeInstanceGroups) RemoveInstancesFromInstanceGroup(name, zone string, instanceNames []string) error {
|
||||
f.calls = append(f.calls, utils.RemoveInstances)
|
||||
f.instances.Delete(instanceNames...)
|
||||
l, ok := f.zonesToInstances[zone]
|
||||
if !ok {
|
||||
return nil
|
||||
}
|
||||
newIns := []string{}
|
||||
delIns := sets.NewString(instanceNames...)
|
||||
for _, oldIns := range l {
|
||||
if delIns.Has(oldIns) {
|
||||
continue
|
||||
}
|
||||
newIns = append(newIns, oldIns)
|
||||
}
|
||||
f.zonesToInstances[zone] = newIns
|
||||
return nil
|
||||
}
|
||||
|
||||
// AddPortToInstanceGroup fakes adding ports to an Instance Group.
|
||||
func (f *FakeInstanceGroups) AddPortToInstanceGroup(ig *compute.InstanceGroup, port int64) (*compute.NamedPort, error) {
|
||||
f.Ports = append(f.Ports, port)
|
||||
return &compute.NamedPort{Name: f.namer.BeName(port), Port: port}, nil
|
||||
}
|
||||
|
||||
// getInstanceList returns an instance list based on the given names.
|
||||
// The names cannot contain a '.', the real gce api validates against this.
|
||||
func getInstanceList(nodeNames sets.String) *compute.InstanceGroupsListInstances {
|
||||
instanceNames := nodeNames.List()
|
||||
computeInstances := []*compute.InstanceWithNamedPorts{}
|
||||
for _, name := range instanceNames {
|
||||
instanceLink := fmt.Sprintf(
|
||||
"https://www.googleapis.com/compute/v1/projects/%s/zones/%s/instances/%s",
|
||||
"project", "zone", name)
|
||||
computeInstances = append(
|
||||
computeInstances, &compute.InstanceWithNamedPorts{
|
||||
Instance: instanceLink})
|
||||
}
|
||||
return &compute.InstanceGroupsListInstances{
|
||||
Items: computeInstances,
|
||||
}
|
||||
}
|
244
controllers/gce/instances/instances.go
Normal file
244
controllers/gce/instances/instances.go
Normal file
|
@ -0,0 +1,244 @@
|
|||
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package instances
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"net/http"
|
||||
"strings"
|
||||
|
||||
compute "google.golang.org/api/compute/v1"
|
||||
"k8s.io/contrib/ingress/controllers/gce/storage"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
"k8s.io/kubernetes/pkg/util/sets"
|
||||
|
||||
"github.com/golang/glog"
|
||||
)
|
||||
|
||||
const (
|
||||
// State string required by gce library to list all instances.
|
||||
allInstances = "ALL"
|
||||
)
|
||||
|
||||
// Instances implements NodePool.
|
||||
type Instances struct {
|
||||
cloud InstanceGroups
|
||||
// zones is a list of zones seeded by Kubernetes node zones.
|
||||
// TODO: we can figure this out.
|
||||
snapshotter storage.Snapshotter
|
||||
zoneLister
|
||||
}
|
||||
|
||||
// NewNodePool creates a new node pool.
|
||||
// - cloud: implements InstanceGroups, used to sync Kubernetes nodes with
|
||||
// members of the cloud InstanceGroup.
|
||||
func NewNodePool(cloud InstanceGroups) NodePool {
|
||||
return &Instances{cloud, storage.NewInMemoryPool(), nil}
|
||||
}
|
||||
|
||||
// Init initializes the instance pool. The given zoneLister is used to list
|
||||
// all zones that require an instance group, and to lookup which zone a
|
||||
// given Kubernetes node is in so we can add it to the right instance group.
|
||||
func (i *Instances) Init(zl zoneLister) {
|
||||
i.zoneLister = zl
|
||||
}
|
||||
|
||||
// AddInstanceGroup creates or gets an instance group if it doesn't exist
|
||||
// and adds the given port to it. Returns a list of one instance group per zone,
|
||||
// all of which have the exact same named port.
|
||||
func (i *Instances) AddInstanceGroup(name string, port int64) ([]*compute.InstanceGroup, *compute.NamedPort, error) {
|
||||
igs := []*compute.InstanceGroup{}
|
||||
namedPort := &compute.NamedPort{}
|
||||
|
||||
zones, err := i.ListZones()
|
||||
if err != nil {
|
||||
return igs, namedPort, err
|
||||
}
|
||||
|
||||
for _, zone := range zones {
|
||||
ig, _ := i.Get(name, zone)
|
||||
var err error
|
||||
if ig == nil {
|
||||
glog.Infof("Creating instance group %v in zone %v", name, zone)
|
||||
ig, err = i.cloud.CreateInstanceGroup(name, zone)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
} else {
|
||||
glog.V(3).Infof("Instance group %v already exists in zone %v, adding port %d to it", name, zone, port)
|
||||
}
|
||||
defer i.snapshotter.Add(name, struct{}{})
|
||||
namedPort, err = i.cloud.AddPortToInstanceGroup(ig, port)
|
||||
if err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
igs = append(igs, ig)
|
||||
}
|
||||
return igs, namedPort, nil
|
||||
}
|
||||
|
||||
// DeleteInstanceGroup deletes the given IG by name, from all zones.
|
||||
func (i *Instances) DeleteInstanceGroup(name string) error {
|
||||
defer i.snapshotter.Delete(name)
|
||||
errs := []error{}
|
||||
|
||||
zones, err := i.ListZones()
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
for _, zone := range zones {
|
||||
glog.Infof("Deleting instance group %v in zone %v", name, zone)
|
||||
if err := i.cloud.DeleteInstanceGroup(name, zone); err != nil {
|
||||
errs = append(errs, err)
|
||||
}
|
||||
}
|
||||
if len(errs) == 0 {
|
||||
return nil
|
||||
}
|
||||
return fmt.Errorf("%v", errs)
|
||||
}
|
||||
|
||||
// list lists all instances in all zones.
|
||||
func (i *Instances) list(name string) (sets.String, error) {
|
||||
nodeNames := sets.NewString()
|
||||
zones, err := i.ListZones()
|
||||
if err != nil {
|
||||
return nodeNames, err
|
||||
}
|
||||
|
||||
for _, zone := range zones {
|
||||
instances, err := i.cloud.ListInstancesInInstanceGroup(
|
||||
name, zone, allInstances)
|
||||
if err != nil {
|
||||
return nodeNames, err
|
||||
}
|
||||
for _, ins := range instances.Items {
|
||||
// TODO: If round trips weren't so slow one would be inclided
|
||||
// to GetInstance using this url and get the name.
|
||||
parts := strings.Split(ins.Instance, "/")
|
||||
nodeNames.Insert(parts[len(parts)-1])
|
||||
}
|
||||
}
|
||||
return nodeNames, nil
|
||||
}
|
||||
|
||||
// Get returns the Instance Group by name.
|
||||
func (i *Instances) Get(name, zone string) (*compute.InstanceGroup, error) {
|
||||
ig, err := i.cloud.GetInstanceGroup(name, zone)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
i.snapshotter.Add(name, struct{}{})
|
||||
return ig, nil
|
||||
}
|
||||
|
||||
// splitNodesByZones takes a list of node names and returns a map of zone:node names.
|
||||
// It figures out the zones by asking the zoneLister.
|
||||
func (i *Instances) splitNodesByZone(names []string) map[string][]string {
|
||||
nodesByZone := map[string][]string{}
|
||||
for _, name := range names {
|
||||
zone, err := i.GetZoneForNode(name)
|
||||
if err != nil {
|
||||
glog.Errorf("Failed to get zones for %v: %v, skipping", name, err)
|
||||
continue
|
||||
}
|
||||
if _, ok := nodesByZone[zone]; !ok {
|
||||
nodesByZone[zone] = []string{}
|
||||
}
|
||||
nodesByZone[zone] = append(nodesByZone[zone], name)
|
||||
}
|
||||
return nodesByZone
|
||||
}
|
||||
|
||||
// Add adds the given instances to the appropriately zoned Instance Group.
|
||||
func (i *Instances) Add(groupName string, names []string) error {
|
||||
errs := []error{}
|
||||
for zone, nodeNames := range i.splitNodesByZone(names) {
|
||||
glog.V(1).Infof("Adding nodes %v to %v in zone %v", nodeNames, groupName, zone)
|
||||
if err := i.cloud.AddInstancesToInstanceGroup(groupName, zone, nodeNames); err != nil {
|
||||
errs = append(errs, err)
|
||||
}
|
||||
}
|
||||
if len(errs) == 0 {
|
||||
return nil
|
||||
}
|
||||
return fmt.Errorf("%v", errs)
|
||||
}
|
||||
|
||||
// Remove removes the given instances from the appropriately zoned Instance Group.
|
||||
func (i *Instances) Remove(groupName string, names []string) error {
|
||||
errs := []error{}
|
||||
for zone, nodeNames := range i.splitNodesByZone(names) {
|
||||
glog.V(1).Infof("Adding nodes %v to %v in zone %v", nodeNames, groupName, zone)
|
||||
if err := i.cloud.RemoveInstancesFromInstanceGroup(groupName, zone, nodeNames); err != nil {
|
||||
errs = append(errs, err)
|
||||
}
|
||||
}
|
||||
if len(errs) == 0 {
|
||||
return nil
|
||||
}
|
||||
return fmt.Errorf("%v", errs)
|
||||
}
|
||||
|
||||
// Sync syncs kubernetes instances with the instances in the instance group.
|
||||
func (i *Instances) Sync(nodes []string) (err error) {
|
||||
glog.V(4).Infof("Syncing nodes %v", nodes)
|
||||
|
||||
defer func() {
|
||||
// The node pool is only responsible for syncing nodes to instance
|
||||
// groups. It never creates/deletes, so if an instance groups is
|
||||
// not found there's nothing it can do about it anyway. Most cases
|
||||
// this will happen because the backend pool has deleted the instance
|
||||
// group, however if it happens because a user deletes the IG by mistake
|
||||
// we should just wait till the backend pool fixes it.
|
||||
if utils.IsHTTPErrorCode(err, http.StatusNotFound) {
|
||||
glog.Infof("Node pool encountered a 404, ignoring: %v", err)
|
||||
err = nil
|
||||
}
|
||||
}()
|
||||
|
||||
pool := i.snapshotter.Snapshot()
|
||||
for igName := range pool {
|
||||
gceNodes := sets.NewString()
|
||||
gceNodes, err = i.list(igName)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
kubeNodes := sets.NewString(nodes...)
|
||||
|
||||
// A node deleted via kubernetes could still exist as a gce vm. We don't
|
||||
// want to route requests to it. Similarly, a node added to kubernetes
|
||||
// needs to get added to the instance group so we do route requests to it.
|
||||
|
||||
removeNodes := gceNodes.Difference(kubeNodes).List()
|
||||
addNodes := kubeNodes.Difference(gceNodes).List()
|
||||
if len(removeNodes) != 0 {
|
||||
if err = i.Remove(
|
||||
igName, gceNodes.Difference(kubeNodes).List()); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
|
||||
if len(addNodes) != 0 {
|
||||
if err = i.Add(
|
||||
igName, kubeNodes.Difference(gceNodes).List()); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
81 controllers/gce/instances/instances_test.go Normal file
@@ -0,0 +1,81 @@
/*
Copyright 2015 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package instances

import (
	"testing"

	"k8s.io/kubernetes/pkg/util/sets"
)

const defaultZone = "default-zone"

func newNodePool(f *FakeInstanceGroups, zone string) NodePool {
	pool := NewNodePool(f)
	pool.Init(&FakeZoneLister{[]string{zone}})
	return pool
}

func TestNodePoolSync(t *testing.T) {
	f := NewFakeInstanceGroups(sets.NewString(
		[]string{"n1", "n2"}...))
	pool := newNodePool(f, defaultZone)
	pool.AddInstanceGroup("test", 80)

	// KubeNodes: n1
	// GCENodes: n1, n2
	// Remove n2 from the instance group.

	f.calls = []int{}
	kubeNodes := sets.NewString([]string{"n1"}...)
	pool.Sync(kubeNodes.List())
	if f.instances.Len() != kubeNodes.Len() || !kubeNodes.IsSuperset(f.instances) {
		t.Fatalf("%v != %v", kubeNodes, f.instances)
	}

	// KubeNodes: n1, n2
	// GCENodes: n1
	// Try to add n2 to the instance group.

	f = NewFakeInstanceGroups(sets.NewString([]string{"n1"}...))
	pool = newNodePool(f, defaultZone)
	pool.AddInstanceGroup("test", 80)

	f.calls = []int{}
	kubeNodes = sets.NewString([]string{"n1", "n2"}...)
	pool.Sync(kubeNodes.List())
	if f.instances.Len() != kubeNodes.Len() ||
		!kubeNodes.IsSuperset(f.instances) {
		t.Fatalf("%v != %v", kubeNodes, f.instances)
	}

	// KubeNodes: n1, n2
	// GCENodes: n1, n2
	// Do nothing.

	f = NewFakeInstanceGroups(sets.NewString([]string{"n1", "n2"}...))
	pool = newNodePool(f, defaultZone)
	pool.AddInstanceGroup("test", 80)

	f.calls = []int{}
	kubeNodes = sets.NewString([]string{"n1", "n2"}...)
	pool.Sync(kubeNodes.List())
	if len(f.calls) != 0 {
		t.Fatalf(
			"Did not expect any calls, got %+v", f.calls)
	}
}
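
The test above runs like any other package test; from the repository root, something along these lines should work (the tree pins its dependencies with godep, as seen in the Makefile earlier):

```console
$ godep go test ./ingress/controllers/gce/instances/
```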
56 controllers/gce/instances/interfaces.go Normal file
@@ -0,0 +1,56 @@
/*
Copyright 2015 The Kubernetes Authors.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package instances

import (
	compute "google.golang.org/api/compute/v1"
)

// zoneLister manages lookups for GCE instance groups/instances to zones.
type zoneLister interface {
	ListZones() ([]string, error)
	GetZoneForNode(name string) (string, error)
}

// NodePool is an interface to manage a pool of kubernetes nodes synced with vm instances in the cloud
// through the InstanceGroups interface. It handles zones opaquely using the zoneLister.
type NodePool interface {
	Init(zl zoneLister)

	// The following 2 methods operate on instance groups.
	AddInstanceGroup(name string, port int64) ([]*compute.InstanceGroup, *compute.NamedPort, error)
	DeleteInstanceGroup(name string) error

	// TODO: Refactor for modularity
	Add(groupName string, nodeNames []string) error
	Remove(groupName string, nodeNames []string) error
	Sync(nodeNames []string) error
	Get(name, zone string) (*compute.InstanceGroup, error)
}

// InstanceGroups is an interface for managing gce instance groups, and the instances therein.
type InstanceGroups interface {
	GetInstanceGroup(name, zone string) (*compute.InstanceGroup, error)
	CreateInstanceGroup(name, zone string) (*compute.InstanceGroup, error)
	DeleteInstanceGroup(name, zone string) error

	// TODO: Refactor for modularity.
	ListInstancesInInstanceGroup(name, zone string, state string) (*compute.InstanceGroupsListInstances, error)
	AddInstancesToInstanceGroup(name, zone string, instanceNames []string) error
	RemoveInstancesFromInstanceGroup(name, zone string, instanceName []string) error
	AddPortToInstanceGroup(ig *compute.InstanceGroup, port int64) (*compute.NamedPort, error)
}
450
controllers/gce/loadbalancers/fakes.go
Normal file
450
controllers/gce/loadbalancers/fakes.go
Normal file
|
@ -0,0 +1,450 @@
|
|||
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package loadbalancers
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
|
||||
compute "google.golang.org/api/compute/v1"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
"k8s.io/kubernetes/pkg/util/sets"
|
||||
)
|
||||
|
||||
var testIPManager = testIP{}
|
||||
|
||||
type testIP struct {
|
||||
start int
|
||||
}
|
||||
|
||||
func (t *testIP) ip() string {
|
||||
t.start++
|
||||
return fmt.Sprintf("0.0.0.%v", t.start)
|
||||
}
|
||||
|
||||
// Loadbalancer fakes
|
||||
|
||||
// FakeLoadBalancers is a type that fakes out the loadbalancer interface.
|
||||
type FakeLoadBalancers struct {
|
||||
Fw []*compute.ForwardingRule
|
||||
Um []*compute.UrlMap
|
||||
Tp []*compute.TargetHttpProxy
|
||||
Tps []*compute.TargetHttpsProxy
|
||||
IP []*compute.Address
|
||||
Certs []*compute.SslCertificate
|
||||
name string
|
||||
}
|
||||
|
||||
// TODO: There is some duplication between these functions and the name mungers in
|
||||
// loadbalancer file.
|
||||
func (f *FakeLoadBalancers) fwName(https bool) string {
|
||||
if https {
|
||||
return fmt.Sprintf("%v-%v", httpsForwardingRulePrefix, f.name)
|
||||
}
|
||||
return fmt.Sprintf("%v-%v", forwardingRulePrefix, f.name)
|
||||
}
|
||||
|
||||
func (f *FakeLoadBalancers) umName() string {
|
||||
return fmt.Sprintf("%v-%v", urlMapPrefix, f.name)
|
||||
}
|
||||
|
||||
func (f *FakeLoadBalancers) tpName(https bool) string {
|
||||
if https {
|
||||
return fmt.Sprintf("%v-%v", targetHTTPSProxyPrefix, f.name)
|
||||
}
|
||||
return fmt.Sprintf("%v-%v", targetProxyPrefix, f.name)
|
||||
}
|
||||
|
||||
// String is the string method for FakeLoadBalancers.
|
||||
func (f *FakeLoadBalancers) String() string {
|
||||
msg := fmt.Sprintf(
|
||||
"Loadbalancer %v,\nforwarding rules:\n", f.name)
|
||||
for _, fw := range f.Fw {
|
||||
msg += fmt.Sprintf("\t%v\n", fw.Name)
|
||||
}
|
||||
msg += fmt.Sprintf("Target proxies\n")
|
||||
for _, tp := range f.Tp {
|
||||
msg += fmt.Sprintf("\t%v\n", tp.Name)
|
||||
}
|
||||
msg += fmt.Sprintf("UrlMaps\n")
|
||||
for _, um := range f.Um {
|
||||
msg += fmt.Sprintf("%v\n", um.Name)
|
||||
msg += fmt.Sprintf("\tHost Rules:\n")
|
||||
for _, hostRule := range um.HostRules {
|
||||
msg += fmt.Sprintf("\t\t%v\n", hostRule)
|
||||
}
|
||||
msg += fmt.Sprintf("\tPath Matcher:\n")
|
||||
for _, pathMatcher := range um.PathMatchers {
|
||||
msg += fmt.Sprintf("\t\t%v\n", pathMatcher.Name)
|
||||
for _, pathRule := range pathMatcher.PathRules {
|
||||
msg += fmt.Sprintf("\t\t\t%+v\n", pathRule)
|
||||
}
|
||||
}
|
||||
}
|
||||
return msg
|
||||
}
|
||||
|
||||
// Forwarding Rule fakes
|
||||
|
||||
// GetGlobalForwardingRule returns a fake forwarding rule.
|
||||
func (f *FakeLoadBalancers) GetGlobalForwardingRule(name string) (*compute.ForwardingRule, error) {
|
||||
for i := range f.Fw {
|
||||
if f.Fw[i].Name == name {
|
||||
return f.Fw[i], nil
|
||||
}
|
||||
}
|
||||
return nil, fmt.Errorf("Forwarding rule %v not found", name)
|
||||
}
|
||||
|
||||
// CreateGlobalForwardingRule fakes forwarding rule creation.
|
||||
func (f *FakeLoadBalancers) CreateGlobalForwardingRule(proxyLink, ip, name, portRange string) (*compute.ForwardingRule, error) {
|
||||
if ip == "" {
|
||||
ip = fmt.Sprintf(testIPManager.ip())
|
||||
}
|
||||
rule := &compute.ForwardingRule{
|
||||
Name: name,
|
||||
IPAddress: ip,
|
||||
Target: proxyLink,
|
||||
PortRange: portRange,
|
||||
IPProtocol: "TCP",
|
||||
SelfLink: name,
|
||||
}
|
||||
f.Fw = append(f.Fw, rule)
|
||||
return rule, nil
|
||||
}
|
||||
|
||||
// SetProxyForGlobalForwardingRule fakes setting a global forwarding rule.
|
||||
func (f *FakeLoadBalancers) SetProxyForGlobalForwardingRule(fw *compute.ForwardingRule, proxyLink string) error {
|
||||
for i := range f.Fw {
|
||||
if f.Fw[i].Name == fw.Name {
|
||||
f.Fw[i].Target = proxyLink
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// DeleteGlobalForwardingRule fakes deleting a global forwarding rule.
|
||||
func (f *FakeLoadBalancers) DeleteGlobalForwardingRule(name string) error {
|
||||
fw := []*compute.ForwardingRule{}
|
||||
for i := range f.Fw {
|
||||
if f.Fw[i].Name != name {
|
||||
fw = append(fw, f.Fw[i])
|
||||
}
|
||||
}
|
||||
f.Fw = fw
|
||||
return nil
|
||||
}
|
||||
|
||||
// GetForwardingRulesWithIPs returns all forwarding rules that match the given ips.
|
||||
func (f *FakeLoadBalancers) GetForwardingRulesWithIPs(ip []string) (fwRules []*compute.ForwardingRule) {
|
||||
ipSet := sets.NewString(ip...)
|
||||
for i := range f.Fw {
|
||||
if ipSet.Has(f.Fw[i].IPAddress) {
|
||||
fwRules = append(fwRules, f.Fw[i])
|
||||
}
|
||||
}
|
||||
return fwRules
|
||||
}
|
||||
|
||||
// UrlMaps fakes
|
||||
|
||||
// GetUrlMap fakes getting url maps from the cloud.
|
||||
func (f *FakeLoadBalancers) GetUrlMap(name string) (*compute.UrlMap, error) {
|
||||
for i := range f.Um {
|
||||
if f.Um[i].Name == name {
|
||||
return f.Um[i], nil
|
||||
}
|
||||
}
|
||||
return nil, fmt.Errorf("Url Map %v not found", name)
|
||||
}
|
||||
|
||||
// CreateUrlMap fakes url-map creation.
|
||||
func (f *FakeLoadBalancers) CreateUrlMap(backend *compute.BackendService, name string) (*compute.UrlMap, error) {
|
||||
urlMap := &compute.UrlMap{
|
||||
Name: name,
|
||||
DefaultService: backend.SelfLink,
|
||||
SelfLink: f.umName(),
|
||||
}
|
||||
f.Um = append(f.Um, urlMap)
|
||||
return urlMap, nil
|
||||
}
|
||||
|
||||
// UpdateUrlMap fakes updating url-maps.
|
||||
func (f *FakeLoadBalancers) UpdateUrlMap(urlMap *compute.UrlMap) (*compute.UrlMap, error) {
|
||||
for i := range f.Um {
|
||||
if f.Um[i].Name == urlMap.Name {
|
||||
f.Um[i] = urlMap
|
||||
return urlMap, nil
|
||||
}
|
||||
}
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// DeleteUrlMap fakes url-map deletion.
|
||||
func (f *FakeLoadBalancers) DeleteUrlMap(name string) error {
|
||||
um := []*compute.UrlMap{}
|
||||
for i := range f.Um {
|
||||
if f.Um[i].Name != name {
|
||||
um = append(um, f.Um[i])
|
||||
}
|
||||
}
|
||||
f.Um = um
|
||||
return nil
|
||||
}
|
||||
|
||||
// TargetProxies fakes
|
||||
|
||||
// GetTargetHttpProxy fakes getting target http proxies from the cloud.
|
||||
func (f *FakeLoadBalancers) GetTargetHttpProxy(name string) (*compute.TargetHttpProxy, error) {
|
||||
for i := range f.Tp {
|
||||
if f.Tp[i].Name == name {
|
||||
return f.Tp[i], nil
|
||||
}
|
||||
}
|
||||
return nil, fmt.Errorf("Targetproxy %v not found", name)
|
||||
}
|
||||
|
||||
// CreateTargetHttpProxy fakes creating a target http proxy.
|
||||
func (f *FakeLoadBalancers) CreateTargetHttpProxy(urlMap *compute.UrlMap, name string) (*compute.TargetHttpProxy, error) {
|
||||
proxy := &compute.TargetHttpProxy{
|
||||
Name: name,
|
||||
UrlMap: urlMap.SelfLink,
|
||||
SelfLink: name,
|
||||
}
|
||||
f.Tp = append(f.Tp, proxy)
|
||||
return proxy, nil
|
||||
}
|
||||
|
||||
// DeleteTargetHttpProxy fakes deleting a target http proxy.
|
||||
func (f *FakeLoadBalancers) DeleteTargetHttpProxy(name string) error {
|
||||
tp := []*compute.TargetHttpProxy{}
|
||||
for i := range f.Tp {
|
||||
if f.Tp[i].Name != name {
|
||||
tp = append(tp, f.Tp[i])
|
||||
}
|
||||
}
|
||||
f.Tp = tp
|
||||
return nil
|
||||
}
|
||||
|
||||
// SetUrlMapForTargetHttpProxy fakes setting an url-map for a target http proxy.
|
||||
func (f *FakeLoadBalancers) SetUrlMapForTargetHttpProxy(proxy *compute.TargetHttpProxy, urlMap *compute.UrlMap) error {
|
||||
for i := range f.Tp {
|
||||
if f.Tp[i].Name == proxy.Name {
|
||||
f.Tp[i].UrlMap = urlMap.SelfLink
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// TargetHttpsProxy fakes
|
||||
|
||||
// GetTargetHttpsProxy fakes getting target http proxies from the cloud.
|
||||
func (f *FakeLoadBalancers) GetTargetHttpsProxy(name string) (*compute.TargetHttpsProxy, error) {
|
||||
for i := range f.Tps {
|
||||
if f.Tps[i].Name == name {
|
||||
return f.Tps[i], nil
|
||||
}
|
||||
}
|
||||
return nil, fmt.Errorf("Targetproxy %v not found", name)
|
||||
}
|
||||
|
||||
// CreateTargetHttpsProxy fakes creating a target http proxy.
|
||||
func (f *FakeLoadBalancers) CreateTargetHttpsProxy(urlMap *compute.UrlMap, cert *compute.SslCertificate, name string) (*compute.TargetHttpsProxy, error) {
|
||||
proxy := &compute.TargetHttpsProxy{
|
||||
Name: name,
|
||||
UrlMap: urlMap.SelfLink,
|
||||
SslCertificates: []string{cert.SelfLink},
|
||||
SelfLink: name,
|
||||
}
|
||||
f.Tps = append(f.Tps, proxy)
|
||||
return proxy, nil
|
||||
}
|
||||
|
||||
// DeleteTargetHttpsProxy fakes deleting a target http proxy.
|
||||
func (f *FakeLoadBalancers) DeleteTargetHttpsProxy(name string) error {
|
||||
tp := []*compute.TargetHttpsProxy{}
|
||||
for i := range f.Tps {
|
||||
if f.Tps[i].Name != name {
|
||||
tp = append(tp, f.Tps[i])
|
||||
}
|
||||
}
|
||||
f.Tps = tp
|
||||
return nil
|
||||
}
|
||||
|
||||
// SetUrlMapForTargetHttpsProxy fakes setting a url map for a target https proxy.
|
||||
func (f *FakeLoadBalancers) SetUrlMapForTargetHttpsProxy(proxy *compute.TargetHttpsProxy, urlMap *compute.UrlMap) error {
|
||||
for i := range f.Tps {
|
||||
if f.Tps[i].Name == proxy.Name {
|
||||
f.Tps[i].UrlMap = urlMap.SelfLink
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// SetSslCertificateForTargetHttpsProxy fakes out setting certificates.
|
||||
func (f *FakeLoadBalancers) SetSslCertificateForTargetHttpsProxy(proxy *compute.TargetHttpsProxy, SSLCert *compute.SslCertificate) error {
|
||||
found := false
|
||||
for i := range f.Tps {
|
||||
if f.Tps[i].Name == proxy.Name {
|
||||
f.Tps[i].SslCertificates = []string{SSLCert.SelfLink}
|
||||
found = true
|
||||
}
|
||||
}
|
||||
if !found {
|
||||
return fmt.Errorf("Failed to find proxy %v", proxy.Name)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// UrlMap fakes
|
||||
|
||||
// CheckURLMap fails the test unless the given L7's UrlMap denormalizes to the expected host -> {path: backend} mapping.
|
||||
func (f *FakeLoadBalancers) CheckURLMap(t *testing.T, l7 *L7, expectedMap map[string]utils.FakeIngressRuleValueMap) {
|
||||
um, err := f.GetUrlMap(l7.um.Name)
|
||||
if err != nil || um == nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
// Check the default backend
|
||||
var d string
|
||||
if h, ok := expectedMap[utils.DefaultBackendKey]; ok {
|
||||
if d, ok = h[utils.DefaultBackendKey]; ok {
|
||||
delete(h, utils.DefaultBackendKey)
|
||||
}
|
||||
delete(expectedMap, utils.DefaultBackendKey)
|
||||
}
|
||||
// The urlmap should have a default backend, and each path matcher.
|
||||
if d != "" && l7.um.DefaultService != d {
|
||||
t.Fatalf("Expected default backend %v found %v",
|
||||
d, l7.um.DefaultService)
|
||||
}
|
||||
|
||||
for _, matcher := range l7.um.PathMatchers {
|
||||
var hostname string
|
||||
// There's a 1:1 mapping between pathmatchers and hosts
|
||||
for _, hostRule := range l7.um.HostRules {
|
||||
if matcher.Name == hostRule.PathMatcher {
|
||||
if len(hostRule.Hosts) != 1 {
|
||||
t.Fatalf("Unexpected hosts in hostrules %+v", hostRule)
|
||||
}
|
||||
if d != "" && matcher.DefaultService != d {
|
||||
t.Fatalf("Expected default backend %v found %v",
|
||||
d, matcher.DefaultService)
|
||||
}
|
||||
hostname = hostRule.Hosts[0]
|
||||
break
|
||||
}
|
||||
}
|
||||
// These are all pathrules for a single host, found above
|
||||
for _, rule := range matcher.PathRules {
|
||||
if len(rule.Paths) != 1 {
|
||||
t.Fatalf("Unexpected rule in pathrules %+v", rule)
|
||||
}
|
||||
pathRule := rule.Paths[0]
|
||||
if hostMap, ok := expectedMap[hostname]; !ok {
|
||||
t.Fatalf("Expected map for host %v: %v", hostname, hostMap)
|
||||
} else if svc, ok := expectedMap[hostname][pathRule]; !ok {
|
||||
t.Fatalf("Expected rule %v in host map", pathRule)
|
||||
} else if svc != rule.Service {
|
||||
t.Fatalf("Expected service %v found %v", svc, rule.Service)
|
||||
}
|
||||
delete(expectedMap[hostname], pathRule)
|
||||
if len(expectedMap[hostname]) == 0 {
|
||||
delete(expectedMap, hostname)
|
||||
}
|
||||
}
|
||||
}
|
||||
if len(expectedMap) != 0 {
|
||||
t.Fatalf("Untranslated entries %+v", expectedMap)
|
||||
}
|
||||
}
|
||||
|
||||
// Static IP fakes
|
||||
|
||||
// ReserveGlobalStaticIP fakes out static IP reservation.
|
||||
func (f *FakeLoadBalancers) ReserveGlobalStaticIP(name, IPAddress string) (*compute.Address, error) {
|
||||
ip := &compute.Address{
|
||||
Name: name,
|
||||
Address: IPAddress,
|
||||
}
|
||||
f.IP = append(f.IP, ip)
|
||||
return ip, nil
|
||||
}
|
||||
|
||||
// GetGlobalStaticIP fakes out static IP retrieval.
|
||||
func (f *FakeLoadBalancers) GetGlobalStaticIP(name string) (*compute.Address, error) {
|
||||
for i := range f.IP {
|
||||
if f.IP[i].Name == name {
|
||||
return f.IP[i], nil
|
||||
}
|
||||
}
|
||||
return nil, fmt.Errorf("Static IP %v not found", name)
|
||||
}
|
||||
|
||||
// DeleteGlobalStaticIP fakes out static IP deletion.
|
||||
func (f *FakeLoadBalancers) DeleteGlobalStaticIP(name string) error {
|
||||
ip := []*compute.Address{}
|
||||
for i := range f.IP {
|
||||
if f.IP[i].Name != name {
|
||||
ip = append(ip, f.IP[i])
|
||||
}
|
||||
}
|
||||
f.IP = ip
|
||||
return nil
|
||||
}
|
||||
|
||||
// SslCertificate fakes
|
||||
|
||||
// GetSslCertificate fakes out getting ssl certs.
|
||||
func (f *FakeLoadBalancers) GetSslCertificate(name string) (*compute.SslCertificate, error) {
|
||||
for i := range f.Certs {
|
||||
if f.Certs[i].Name == name {
|
||||
return f.Certs[i], nil
|
||||
}
|
||||
}
|
||||
return nil, fmt.Errorf("Cert %v not found", name)
|
||||
}
|
||||
|
||||
// CreateSslCertificate fakes out certificate creation.
|
||||
func (f *FakeLoadBalancers) CreateSslCertificate(cert *compute.SslCertificate) (*compute.SslCertificate, error) {
|
||||
cert.SelfLink = cert.Name
|
||||
f.Certs = append(f.Certs, cert)
|
||||
return cert, nil
|
||||
}
|
||||
|
||||
// DeleteSslCertificate fakes out certificate deletion.
|
||||
func (f *FakeLoadBalancers) DeleteSslCertificate(name string) error {
|
||||
certs := []*compute.SslCertificate{}
|
||||
for i := range f.Certs {
|
||||
if f.Certs[i].Name != name {
|
||||
certs = append(certs, f.Certs[i])
|
||||
}
|
||||
}
|
||||
f.Certs = certs
|
||||
return nil
|
||||
}
|
||||
|
||||
// NewFakeLoadBalancers creates a fake cloud client. Name is the name
|
||||
// inserted into the selfLink of the associated resources for testing.
|
||||
// e.g.: forwardingRule.SelfLink == k8s-fw-name.
|
||||
func NewFakeLoadBalancers(name string) *FakeLoadBalancers {
|
||||
return &FakeLoadBalancers{
|
||||
Fw: []*compute.ForwardingRule{},
|
||||
name: name,
|
||||
}
|
||||
}
|
74
controllers/gce/loadbalancers/interfaces.go
Normal file
@@ -0,0 +1,74 @@
|
|||
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package loadbalancers
|
||||
|
||||
import (
|
||||
compute "google.golang.org/api/compute/v1"
|
||||
)
|
||||
|
||||
// LoadBalancers is an interface for managing all the gce resources needed by L7
|
||||
// loadbalancers. We don't have individual pools for each of these resources
|
||||
// because none of them are usable (or acquirable) stand-alone, unlike backends
|
||||
// and instance groups. The dependency graph:
|
||||
// ForwardingRule -> UrlMaps -> TargetProxies
|
||||
type LoadBalancers interface {
|
||||
// Forwarding Rules
|
||||
GetGlobalForwardingRule(name string) (*compute.ForwardingRule, error)
|
||||
CreateGlobalForwardingRule(proxyLink, ip, name, portRange string) (*compute.ForwardingRule, error)
|
||||
DeleteGlobalForwardingRule(name string) error
|
||||
SetProxyForGlobalForwardingRule(fw *compute.ForwardingRule, proxy string) error
|
||||
|
||||
// UrlMaps
|
||||
GetUrlMap(name string) (*compute.UrlMap, error)
|
||||
CreateUrlMap(backend *compute.BackendService, name string) (*compute.UrlMap, error)
|
||||
UpdateUrlMap(urlMap *compute.UrlMap) (*compute.UrlMap, error)
|
||||
DeleteUrlMap(name string) error
|
||||
|
||||
// TargetProxies
|
||||
GetTargetHttpProxy(name string) (*compute.TargetHttpProxy, error)
|
||||
CreateTargetHttpProxy(urlMap *compute.UrlMap, name string) (*compute.TargetHttpProxy, error)
|
||||
DeleteTargetHttpProxy(name string) error
|
||||
SetUrlMapForTargetHttpProxy(proxy *compute.TargetHttpProxy, urlMap *compute.UrlMap) error
|
||||
|
||||
// TargetHttpsProxies
|
||||
GetTargetHttpsProxy(name string) (*compute.TargetHttpsProxy, error)
|
||||
CreateTargetHttpsProxy(urlMap *compute.UrlMap, SSLCerts *compute.SslCertificate, name string) (*compute.TargetHttpsProxy, error)
|
||||
DeleteTargetHttpsProxy(name string) error
|
||||
SetUrlMapForTargetHttpsProxy(proxy *compute.TargetHttpsProxy, urlMap *compute.UrlMap) error
|
||||
SetSslCertificateForTargetHttpsProxy(proxy *compute.TargetHttpsProxy, SSLCerts *compute.SslCertificate) error
|
||||
|
||||
// SslCertificates
|
||||
GetSslCertificate(name string) (*compute.SslCertificate, error)
|
||||
CreateSslCertificate(certs *compute.SslCertificate) (*compute.SslCertificate, error)
|
||||
DeleteSslCertificate(name string) error
|
||||
|
||||
// Static IP
|
||||
ReserveGlobalStaticIP(name, IPAddress string) (*compute.Address, error)
|
||||
GetGlobalStaticIP(name string) (*compute.Address, error)
|
||||
DeleteGlobalStaticIP(name string) error
|
||||
}
|
||||
|
||||
// LoadBalancerPool is an interface to manage the cloud resources associated
|
||||
// with a gce loadbalancer.
|
||||
type LoadBalancerPool interface {
|
||||
Get(name string) (*L7, error)
|
||||
Add(ri *L7RuntimeInfo) error
|
||||
Delete(name string) error
|
||||
Sync(ri []*L7RuntimeInfo) error
|
||||
GC(names []string) error
|
||||
Shutdown() error
|
||||
}
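// A minimal sketch (not part of the upstream source) of how a caller might
// drive a LoadBalancerPool each sync cycle: Sync creates or repairs the
// loadbalancers in the list, GC removes the rest. The function name and the
// lack of error aggregation are illustrative assumptions.
func syncLoadBalancerPool(pool LoadBalancerPool, lbs []*L7RuntimeInfo) error {
	if err := pool.Sync(lbs); err != nil {
		return err
	}
	names := []string{}
	for _, lb := range lbs {
		// GC expects the names the controller still wants to keep.
		names = append(names, lb.Name)
	}
	return pool.GC(names)
}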
|
884
controllers/gce/loadbalancers/loadbalancers.go
Normal file
@@ -0,0 +1,884 @@
|
|||
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package loadbalancers
|
||||
|
||||
import (
|
||||
"crypto/md5"
|
||||
"encoding/hex"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"reflect"
|
||||
"strings"
|
||||
|
||||
compute "google.golang.org/api/compute/v1"
|
||||
"k8s.io/contrib/ingress/controllers/gce/backends"
|
||||
"k8s.io/contrib/ingress/controllers/gce/storage"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
"k8s.io/kubernetes/pkg/util/sets"
|
||||
|
||||
"github.com/golang/glog"
|
||||
)
|
||||
|
||||
const (
|
||||
|
||||
// The gce api uses the name of a path rule to match a host rule.
|
||||
hostRulePrefix = "host"
|
||||
|
||||
// DefaultHost is the host used if none is specified. It is a valid value
|
||||
// for the "Host" field recognized by GCE.
|
||||
DefaultHost = "*"
|
||||
|
||||
// DefaultPath is the path used if none is specified. It is a valid path
|
||||
// recognized by GCE.
|
||||
DefaultPath = "/*"
|
||||
|
||||
// A single target proxy/urlmap/forwarding rule is created per loadbalancer.
|
||||
// Tagged with the namespace/name of the Ingress.
|
||||
// TODO: Move the namer to its own package out of utils and move the prefix
|
||||
// with it. Currently the construction of the loadbalancer resources names
|
||||
// are split between the namer and the loadbalancers package.
|
||||
targetProxyPrefix = "k8s-tp"
|
||||
targetHTTPSProxyPrefix = "k8s-tps"
|
||||
sslCertPrefix = "k8s-ssl"
|
||||
forwardingRulePrefix = "k8s-fw"
|
||||
httpsForwardingRulePrefix = "k8s-fws"
|
||||
urlMapPrefix = "k8s-um"
|
||||
httpDefaultPortRange = "80-80"
|
||||
httpsDefaultPortRange = "443-443"
|
||||
)
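// For illustration (assuming the default cluster UID; real names go through
// the Namer and may carry a "--<cluster>" suffix): an Ingress named "test"
// ends up with resources named roughly k8s-um-test, k8s-tp-test, k8s-tps-test,
// k8s-ssl-test, k8s-fw-test and k8s-fws-test, built from the prefixes above.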
|
||||
|
||||
// L7s implements LoadBalancerPool.
|
||||
type L7s struct {
|
||||
cloud LoadBalancers
|
||||
snapshotter storage.Snapshotter
|
||||
// TODO: Remove this field and always ask the BackendPool using the NodePort.
|
||||
glbcDefaultBackend *compute.BackendService
|
||||
defaultBackendPool backends.BackendPool
|
||||
defaultBackendNodePort int64
|
||||
namer *utils.Namer
|
||||
}
|
||||
|
||||
// NewLoadBalancerPool returns a new loadbalancer pool.
|
||||
// - cloud: implements LoadBalancers. Used to sync L7 loadbalancer resources
|
||||
// with the cloud.
|
||||
// - defaultBackendPool: a BackendPool used to manage the GCE BackendService for
|
||||
// the default backend.
|
||||
// - defaultBackendNodePort: The nodePort of the Kubernetes service representing
|
||||
// the default backend.
|
||||
func NewLoadBalancerPool(
|
||||
cloud LoadBalancers,
|
||||
defaultBackendPool backends.BackendPool,
|
||||
defaultBackendNodePort int64, namer *utils.Namer) LoadBalancerPool {
|
||||
return &L7s{cloud, storage.NewInMemoryPool(), nil, defaultBackendPool, defaultBackendNodePort, namer}
|
||||
}
|
||||
|
||||
func (l *L7s) create(ri *L7RuntimeInfo) (*L7, error) {
|
||||
// Lazily create a default backend so we don't tax users who don't care
|
||||
// about Ingress by consuming 1 of their 3 GCE BackendServices. This
|
||||
// BackendService is deleted when there are no more Ingresses, either
|
||||
// through Sync or Shutdown.
|
||||
if l.glbcDefaultBackend == nil {
|
||||
err := l.defaultBackendPool.Add(l.defaultBackendNodePort)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
l.glbcDefaultBackend, err = l.defaultBackendPool.Get(l.defaultBackendNodePort)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
return &L7{
|
||||
runtimeInfo: ri,
|
||||
Name: l.namer.LBName(ri.Name),
|
||||
cloud: l.cloud,
|
||||
glbcDefaultBackend: l.glbcDefaultBackend,
|
||||
namer: l.namer,
|
||||
sslCert: nil,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// Get returns the loadbalancer by name.
|
||||
func (l *L7s) Get(name string) (*L7, error) {
|
||||
name = l.namer.LBName(name)
|
||||
lb, exists := l.snapshotter.Get(name)
|
||||
if !exists {
|
||||
return nil, fmt.Errorf("Loadbalancer %v not in pool", name)
|
||||
}
|
||||
return lb.(*L7), nil
|
||||
}
|
||||
|
||||
// Add gets or creates a loadbalancer.
|
||||
// If the loadbalancer already exists, it checks that its edges are valid.
|
||||
func (l *L7s) Add(ri *L7RuntimeInfo) (err error) {
|
||||
name := l.namer.LBName(ri.Name)
|
||||
|
||||
lb, _ := l.Get(name)
|
||||
if lb == nil {
|
||||
glog.Infof("Creating l7 %v", name)
|
||||
lb, err = l.create(ri)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
} else {
|
||||
if !reflect.DeepEqual(lb.runtimeInfo, ri) {
|
||||
glog.Infof("LB %v runtime info changed, old %+v new %+v", lb.Name, lb.runtimeInfo, ri)
|
||||
lb.runtimeInfo = ri
|
||||
}
|
||||
}
|
||||
// Add the lb to the pool, in case we create an UrlMap but run out
|
||||
// of quota in creating the ForwardingRule we still need to cleanup
|
||||
// the UrlMap during GC.
|
||||
defer l.snapshotter.Add(name, lb)
|
||||
|
||||
// Why edge hop for the create?
|
||||
// The loadbalancer is a fictitious resource, it doesn't exist in gce. To
|
||||
// make it exist we need to create a collection of gce resources, done
|
||||
// through the edge hop.
|
||||
if err := lb.edgeHop(); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// Delete deletes a loadbalancer by name.
|
||||
func (l *L7s) Delete(name string) error {
|
||||
name = l.namer.LBName(name)
|
||||
lb, err := l.Get(name)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
glog.Infof("Deleting lb %v", name)
|
||||
if err := lb.Cleanup(); err != nil {
|
||||
return err
|
||||
}
|
||||
l.snapshotter.Delete(name)
|
||||
return nil
|
||||
}
|
||||
|
||||
// Sync loadbalancers with the given runtime info from the controller.
|
||||
func (l *L7s) Sync(lbs []*L7RuntimeInfo) error {
|
||||
glog.V(3).Infof("Creating loadbalancers %+v", lbs)
|
||||
|
||||
// The default backend is completely managed by the l7 pool.
|
||||
// This includes recreating it if it's deleted, or fixing broken links.
|
||||
if err := l.defaultBackendPool.Add(l.defaultBackendNodePort); err != nil {
|
||||
return err
|
||||
}
|
||||
// create new loadbalancers, perform an edge hop for existing
|
||||
for _, ri := range lbs {
|
||||
if err := l.Add(ri); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
// Tear down the default backend when there are no more loadbalancers
|
||||
// because the cluster could go down anytime and we'd leak it otherwise.
|
||||
if len(lbs) == 0 {
|
||||
if err := l.defaultBackendPool.Delete(l.defaultBackendNodePort); err != nil {
|
||||
return err
|
||||
}
|
||||
l.glbcDefaultBackend = nil
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// GC garbage collects loadbalancers not in the input list.
|
||||
func (l *L7s) GC(names []string) error {
|
||||
knownLoadBalancers := sets.NewString()
|
||||
for _, n := range names {
|
||||
knownLoadBalancers.Insert(l.namer.LBName(n))
|
||||
}
|
||||
pool := l.snapshotter.Snapshot()
|
||||
|
||||
// Delete unknown loadbalancers
|
||||
for name := range pool {
|
||||
if knownLoadBalancers.Has(name) {
|
||||
continue
|
||||
}
|
||||
glog.V(3).Infof("GCing loadbalancer %v", name)
|
||||
if err := l.Delete(name); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// Shutdown GCs all loadbalancers in the pool and shuts down the default backend pool.
|
||||
func (l *L7s) Shutdown() error {
|
||||
if err := l.GC([]string{}); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := l.defaultBackendPool.Shutdown(); err != nil {
|
||||
return err
|
||||
}
|
||||
glog.Infof("Loadbalancer pool shutdown.")
|
||||
return nil
|
||||
}
|
||||
|
||||
// TLSCerts encapsulates .pem encoded TLS information.
|
||||
type TLSCerts struct {
|
||||
// Key is private key.
|
||||
Key string
|
||||
// Cert is a public key.
|
||||
Cert string
|
||||
// Chain is a certificate chain.
|
||||
Chain string
|
||||
}
|
||||
|
||||
// L7RuntimeInfo is info passed to this module from the controller runtime.
|
||||
type L7RuntimeInfo struct {
|
||||
// Name is the name of a loadbalancer.
|
||||
Name string
|
||||
// IP is the desired ip of the loadbalancer, eg from a staticIP.
|
||||
IP string
|
||||
// TLS are the tls certs to use in termination.
|
||||
TLS *TLSCerts
|
||||
// AllowHTTP, when false, skips the :80 setup; if TLS is nil and AllowHTTP
|
||||
// is false, no forwarding rules are created for this loadbalancer.
|
||||
AllowHTTP bool
|
||||
// The name of a Global Static IP. If specified, the IP associated with
|
||||
// this name is used in the Forwarding Rules for this loadbalancer.
|
||||
StaticIPName string
|
||||
}
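// Example (mirrors the unit tests below): runtime info for an Ingress that
// terminates TLS and also serves plain HTTP. The key and cert values are
// placeholders.
//
//   &L7RuntimeInfo{
//   	Name:      "test",
//   	AllowHTTP: true,
//   	TLS:       &TLSCerts{Key: "key", Cert: "cert"},
//   }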
|
||||
|
||||
// L7 represents a single L7 loadbalancer.
|
||||
type L7 struct {
|
||||
Name string
|
||||
// runtimeInfo is non-cloudprovider information passed from the controller.
|
||||
runtimeInfo *L7RuntimeInfo
|
||||
// cloud is an interface to manage loadbalancers in the GCE cloud.
|
||||
cloud LoadBalancers
|
||||
// um is the UrlMap associated with this L7.
|
||||
um *compute.UrlMap
|
||||
// tp is the TargetHTTPProxy associated with this L7.
|
||||
tp *compute.TargetHttpProxy
|
||||
// tps is the TargetHTTPSProxy associated with this L7.
|
||||
tps *compute.TargetHttpsProxy
|
||||
// fw is the GlobalForwardingRule that points to the TargetHTTPProxy.
|
||||
fw *compute.ForwardingRule
|
||||
// fws is the GlobalForwardingRule that points to the TargetHTTPSProxy.
|
||||
fws *compute.ForwardingRule
|
||||
// ip is the static-ip associated with both GlobalForwardingRules.
|
||||
ip *compute.Address
|
||||
// sslCert is the ssl cert associated with the targetHTTPSProxy.
|
||||
// TODO: Make this a custom type that contains crt+key
|
||||
sslCert *compute.SslCertificate
|
||||
// oldSSLCert is the certificate that used to be hooked up to the
|
||||
// targetHTTPSProxy. We can't update a cert in place, so we need
|
||||
// to create - update - delete and storing the old cert in a field
|
||||
// prevents leakage if there's a failure along the way.
|
||||
oldSSLCert *compute.SslCertificate
|
||||
// glbcDefaultBackend is the backend to use if no path rules match.
|
||||
// TODO: Expose this to users.
|
||||
glbcDefaultBackend *compute.BackendService
|
||||
// namer is used to compute names of the various sub-components of an L7.
|
||||
namer *utils.Namer
|
||||
}
|
||||
|
||||
func (l *L7) checkUrlMap(backend *compute.BackendService) (err error) {
|
||||
if l.glbcDefaultBackend == nil {
|
||||
return fmt.Errorf("Cannot create urlmap without default backend.")
|
||||
}
|
||||
urlMapName := l.namer.Truncate(fmt.Sprintf("%v-%v", urlMapPrefix, l.Name))
|
||||
urlMap, _ := l.cloud.GetUrlMap(urlMapName)
|
||||
if urlMap != nil {
|
||||
glog.V(3).Infof("Url map %v already exists", urlMap.Name)
|
||||
l.um = urlMap
|
||||
return nil
|
||||
}
|
||||
|
||||
glog.Infof("Creating url map %v for backend %v", urlMapName, l.glbcDefaultBackend.Name)
|
||||
urlMap, err = l.cloud.CreateUrlMap(l.glbcDefaultBackend, urlMapName)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
l.um = urlMap
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *L7) checkProxy() (err error) {
|
||||
if l.um == nil {
|
||||
return fmt.Errorf("Cannot create proxy without urlmap.")
|
||||
}
|
||||
proxyName := l.namer.Truncate(fmt.Sprintf("%v-%v", targetProxyPrefix, l.Name))
|
||||
proxy, _ := l.cloud.GetTargetHttpProxy(proxyName)
|
||||
if proxy == nil {
|
||||
glog.Infof("Creating new http proxy for urlmap %v", l.um.Name)
|
||||
proxy, err = l.cloud.CreateTargetHttpProxy(l.um, proxyName)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
l.tp = proxy
|
||||
return nil
|
||||
}
|
||||
if !utils.CompareLinks(proxy.UrlMap, l.um.SelfLink) {
|
||||
glog.Infof("Proxy %v has the wrong url map, setting %v overwriting %v",
|
||||
proxy.Name, l.um, proxy.UrlMap)
|
||||
if err := l.cloud.SetUrlMapForTargetHttpProxy(proxy, l.um); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
l.tp = proxy
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *L7) deleteOldSSLCert() (err error) {
|
||||
if l.oldSSLCert == nil || l.sslCert == nil || l.oldSSLCert.Name == l.sslCert.Name {
|
||||
return nil
|
||||
}
|
||||
glog.Infof("Cleaning up old SSL Certificate %v, current name %v", l.oldSSLCert.Name, l.sslCert.Name)
|
||||
if err := l.cloud.DeleteSslCertificate(l.oldSSLCert.Name); err != nil {
|
||||
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
|
||||
return err
|
||||
}
|
||||
}
|
||||
l.oldSSLCert = nil
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *L7) checkSSLCert() (err error) {
|
||||
// TODO: Currently, GCE only supports a single certificate per static IP
|
||||
// so we don't need to bother with disambiguation. Naming the cert after
|
||||
// the loadbalancer is a simplification.
|
||||
|
||||
ingCert := l.runtimeInfo.TLS.Cert
|
||||
ingKey := l.runtimeInfo.TLS.Key
|
||||
|
||||
// The name of the cert for this lb flip-flops between these 2 on
|
||||
// every certificate update. We don't append the index at the end so we're
|
||||
// sure it isn't truncated.
|
||||
// TODO: Clean this code up into a ring buffer.
|
||||
primaryCertName := l.namer.Truncate(fmt.Sprintf("%v-%v", sslCertPrefix, l.Name))
|
||||
secondaryCertName := l.namer.Truncate(fmt.Sprintf("%v-%d-%v", sslCertPrefix, 1, l.Name))
|
||||
certName := primaryCertName
|
||||
if l.sslCert != nil {
|
||||
certName = l.sslCert.Name
|
||||
}
|
||||
cert, _ := l.cloud.GetSslCertificate(certName)
|
||||
|
||||
// PrivateKey is write only, so compare certs alone. We're assuming that
|
||||
// no one will change just the key. We can remember the key and compare,
|
||||
// but a bug could end up leaking it, which feels worse.
|
||||
if cert == nil || ingCert != cert.Certificate {
|
||||
|
||||
certChanged := cert != nil && (ingCert != cert.Certificate)
|
||||
if certChanged {
|
||||
if certName == primaryCertName {
|
||||
certName = secondaryCertName
|
||||
} else {
|
||||
certName = primaryCertName
|
||||
}
|
||||
}
|
||||
|
||||
glog.Infof("Creating new sslCertificates %v for %v", l.Name, certName)
|
||||
cert, err = l.cloud.CreateSslCertificate(&compute.SslCertificate{
|
||||
Name: certName,
|
||||
Certificate: ingCert,
|
||||
PrivateKey: ingKey,
|
||||
})
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
// Save the current cert for cleanup after we update the target proxy.
|
||||
l.oldSSLCert = l.sslCert
|
||||
}
|
||||
|
||||
l.sslCert = cert
|
||||
return nil
|
||||
}
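// A minimal sketch (not in the upstream file) of the name alternation
// described above, shown in isolation. The format strings match the ones
// used in checkSSLCert; the Namer's Truncate step is deliberately omitted.
func alternateCertName(current, lbName string) string {
	primary := fmt.Sprintf("%v-%v", sslCertPrefix, lbName)
	secondary := fmt.Sprintf("%v-%d-%v", sslCertPrefix, 1, lbName)
	if current == primary {
		return secondary
	}
	return primary
}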
|
||||
|
||||
func (l *L7) checkHttpsProxy() (err error) {
|
||||
if l.sslCert == nil {
|
||||
glog.V(3).Infof("No SSL certificates for %v, will not create HTTPS proxy.", l.Name)
|
||||
return nil
|
||||
}
|
||||
if l.um == nil {
|
||||
return fmt.Errorf("No UrlMap for %v, will not create HTTPS proxy.", l.Name)
|
||||
}
|
||||
proxyName := l.namer.Truncate(fmt.Sprintf("%v-%v", targetHTTPSProxyPrefix, l.Name))
|
||||
proxy, _ := l.cloud.GetTargetHttpsProxy(proxyName)
|
||||
if proxy == nil {
|
||||
glog.Infof("Creating new https proxy for urlmap %v", l.um.Name)
|
||||
proxy, err = l.cloud.CreateTargetHttpsProxy(l.um, l.sslCert, proxyName)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
l.tps = proxy
|
||||
return nil
|
||||
}
|
||||
if !utils.CompareLinks(proxy.UrlMap, l.um.SelfLink) {
|
||||
glog.Infof("Https proxy %v has the wrong url map, setting %v overwriting %v",
|
||||
proxy.Name, l.um, proxy.UrlMap)
|
||||
if err := l.cloud.SetUrlMapForTargetHttpsProxy(proxy, l.um); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
cert := proxy.SslCertificates[0]
|
||||
if !utils.CompareLinks(cert, l.sslCert.SelfLink) {
|
||||
glog.Infof("Https proxy %v has the wrong ssl certs, setting %v overwriting %v",
|
||||
proxy.Name, l.sslCert.SelfLink, cert)
|
||||
if err := l.cloud.SetSslCertificateForTargetHttpsProxy(proxy, l.sslCert); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
glog.V(3).Infof("Created target https proxy %v", proxy.Name)
|
||||
l.tps = proxy
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *L7) checkForwardingRule(name, proxyLink, ip, portRange string) (fw *compute.ForwardingRule, err error) {
|
||||
fw, _ = l.cloud.GetGlobalForwardingRule(name)
|
||||
if fw != nil && (ip != "" && fw.IPAddress != ip || fw.PortRange != portRange) {
|
||||
glog.Warningf("Recreating forwarding rule %v(%v), so it has %v(%v)",
|
||||
fw.IPAddress, fw.PortRange, ip, portRange)
|
||||
if err := l.cloud.DeleteGlobalForwardingRule(name); err != nil {
|
||||
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
fw = nil
|
||||
}
|
||||
if fw == nil {
|
||||
parts := strings.Split(proxyLink, "/")
|
||||
glog.Infof("Creating forwarding rule for proxy %v and ip %v:%v", parts[len(parts)-1:], ip, portRange)
|
||||
fw, err = l.cloud.CreateGlobalForwardingRule(proxyLink, ip, name, portRange)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
// TODO: If the port range and protocol don't match, recreate the rule
|
||||
if utils.CompareLinks(fw.Target, proxyLink) {
|
||||
glog.V(3).Infof("Forwarding rule %v already exists", fw.Name)
|
||||
} else {
|
||||
glog.Infof("Forwarding rule %v has the wrong proxy, setting %v overwriting %v",
|
||||
fw.Name, fw.Target, proxyLink)
|
||||
if err := l.cloud.SetProxyForGlobalForwardingRule(fw, proxyLink); err != nil {
|
||||
return nil, err
|
||||
}
|
||||
}
|
||||
return fw, nil
|
||||
}
|
||||
|
||||
// getEffectiveIP returns a string with the IP to use in the HTTP and HTTPS
|
||||
// forwarding rules, and a boolean indicating if this is an IP the controller
|
||||
// should manage or not.
|
||||
func (l *L7) getEffectiveIP() (string, bool) {
|
||||
|
||||
// A note on IP management:
|
||||
// User specifies a different IP on startup:
|
||||
// - We create a forwarding rule with the given IP.
|
||||
// - If this ip doesn't exist in GCE, we create another one in the hope
|
||||
// that they will rectify it later on.
|
||||
// - In the happy case, no static ip is created or deleted by this controller.
|
||||
// Controller allocates a staticIP/ephemeralIP, but user changes it:
|
||||
// - We still delete the old static IP, but only when we tear down the
|
||||
// Ingress in Cleanup(). Till then the static IP stays around, but
|
||||
// the forwarding rules get deleted/created with the new IP.
|
||||
// - There will be a period of downtime as we flip IPs.
|
||||
// User specifies the same static IP to 2 Ingresses:
|
||||
// - GCE will throw a 400, and the controller will keep trying to use
|
||||
// the IP in the hope that the user manually resolves the conflict
|
||||
// or deletes/modifies the Ingress.
|
||||
// TODO: Handle the last case better.
|
||||
|
||||
if l.runtimeInfo.StaticIPName != "" {
|
||||
// Existing static IPs allocated to forwarding rules will get orphaned
|
||||
// till the Ingress is torn down.
|
||||
if ip, err := l.cloud.GetGlobalStaticIP(l.runtimeInfo.StaticIPName); err != nil || ip == nil {
|
||||
glog.Warningf("The given static IP name %v doesn't translate to an existing global static IP, ignoring it and allocating a new IP: %v",
|
||||
l.runtimeInfo.StaticIPName, err)
|
||||
} else {
|
||||
return ip.Address, false
|
||||
}
|
||||
}
|
||||
if l.ip != nil {
|
||||
return l.ip.Address, true
|
||||
}
|
||||
return "", true
|
||||
}
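// Summary of getEffectiveIP results, for illustration:
//   user's StaticIPName resolves      -> (that address, false): controller must not manage it
//   controller already reserved l.ip  -> (l.ip.Address, true):  controller keeps managing it
//   neither                           -> ("", true): GCE assigns an ephemeral IP to the rule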
|
||||
|
||||
func (l *L7) checkHttpForwardingRule() (err error) {
|
||||
if l.tp == nil {
|
||||
return fmt.Errorf("Cannot create forwarding rule without proxy.")
|
||||
}
|
||||
name := l.namer.Truncate(fmt.Sprintf("%v-%v", forwardingRulePrefix, l.Name))
|
||||
address, _ := l.getEffectiveIP()
|
||||
fw, err := l.checkForwardingRule(name, l.tp.SelfLink, address, httpDefaultPortRange)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
l.fw = fw
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *L7) checkHttpsForwardingRule() (err error) {
|
||||
if l.tps == nil {
|
||||
glog.V(3).Infof("No https target proxy for %v, not created https forwarding rule", l.Name)
|
||||
return nil
|
||||
}
|
||||
name := l.namer.Truncate(fmt.Sprintf("%v-%v", httpsForwardingRulePrefix, l.Name))
|
||||
address, _ := l.getEffectiveIP()
|
||||
fws, err := l.checkForwardingRule(name, l.tps.SelfLink, address, httpsDefaultPortRange)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
l.fws = fws
|
||||
return nil
|
||||
}
|
||||
|
||||
// checkStaticIP reserves a static IP allocated to the Forwarding Rule.
|
||||
func (l *L7) checkStaticIP() (err error) {
|
||||
if l.fw == nil || l.fw.IPAddress == "" {
|
||||
return fmt.Errorf("Will not create static IP without a forwarding rule.")
|
||||
}
|
||||
// Don't manage staticIPs if the user has specified an IP.
|
||||
if address, manageStaticIP := l.getEffectiveIP(); !manageStaticIP {
|
||||
glog.V(3).Infof("Not managing user specified static IP %v", address)
|
||||
return nil
|
||||
}
|
||||
staticIPName := l.namer.Truncate(fmt.Sprintf("%v-%v", forwardingRulePrefix, l.Name))
|
||||
ip, _ := l.cloud.GetGlobalStaticIP(staticIPName)
|
||||
if ip == nil {
|
||||
glog.Infof("Creating static ip %v", staticIPName)
|
||||
ip, err = l.cloud.ReserveGlobalStaticIP(staticIPName, l.fw.IPAddress)
|
||||
if err != nil {
|
||||
if utils.IsHTTPErrorCode(err, http.StatusConflict) ||
|
||||
utils.IsHTTPErrorCode(err, http.StatusBadRequest) {
|
||||
glog.V(3).Infof("IP %v(%v) is already reserved, assuming it is OK to use.",
|
||||
l.fw.IPAddress, staticIPName)
|
||||
return nil
|
||||
}
|
||||
return err
|
||||
}
|
||||
}
|
||||
l.ip = ip
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *L7) edgeHop() error {
|
||||
if err := l.checkUrlMap(l.glbcDefaultBackend); err != nil {
|
||||
return err
|
||||
}
|
||||
if l.runtimeInfo.AllowHTTP {
|
||||
if err := l.edgeHopHttp(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
// Defer promoting an ephemeral IP to a static IP till it's really needed.
|
||||
if l.runtimeInfo.AllowHTTP && l.runtimeInfo.TLS != nil {
|
||||
if err := l.checkStaticIP(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
if l.runtimeInfo.TLS != nil {
|
||||
glog.V(3).Infof("Edge hopping https for %v", l.Name)
|
||||
if err := l.edgeHopHttps(); err != nil {
|
||||
return err
|
||||
}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *L7) edgeHopHttp() error {
|
||||
if err := l.checkProxy(); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := l.checkHttpForwardingRule(); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (l *L7) edgeHopHttps() error {
|
||||
if err := l.checkSSLCert(); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := l.checkHttpsProxy(); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := l.checkHttpsForwardingRule(); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := l.deleteOldSSLCert(); err != nil {
|
||||
return err
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// GetIP returns the ip associated with the forwarding rule for this l7.
|
||||
func (l *L7) GetIP() string {
|
||||
if l.fw != nil {
|
||||
return l.fw.IPAddress
|
||||
}
|
||||
if l.fws != nil {
|
||||
return l.fws.IPAddress
|
||||
}
|
||||
return ""
|
||||
}
|
||||
|
||||
// getNameForPathMatcher returns a name for a pathMatcher based on the given host rule.
|
||||
// The host rule can be a regex, the path matcher name used to associate the 2 cannot.
|
||||
func getNameForPathMatcher(hostRule string) string {
|
||||
hasher := md5.New()
|
||||
hasher.Write([]byte(hostRule))
|
||||
return fmt.Sprintf("%v%v", hostRulePrefix, hex.EncodeToString(hasher.Sum(nil)))
|
||||
}
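// Example, for illustration: getNameForPathMatcher("foo.example.com") yields
// "host" followed by the 32-character md5 hex digest of the host rule, which
// stays stable across syncs and is a legal GCE name even when the host rule
// itself is a pattern.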
|
||||
|
||||
// UpdateUrlMap translates the given hostname: endpoint->port mapping into a gce url map.
|
||||
//
|
||||
// HostRule: Conceptually contains all PathRules for a given host.
|
||||
// PathMatcher: Associates a path rule with a host rule. Mostly an optimization.
|
||||
// PathRule: Maps a single path regex to a backend.
|
||||
//
|
||||
// The GCE url map allows multiple hosts to share url->backend mappings without duplication, eg:
|
||||
// Host: foo(PathMatcher1), bar(PathMatcher1,2)
|
||||
// PathMatcher1:
|
||||
// /a -> b1
|
||||
// /b -> b2
|
||||
// PathMatcher2:
|
||||
// /c -> b1
|
||||
// This leads to a lot of complexity in the common case, where all we want is a mapping of
|
||||
// host->{/path: backend}.
|
||||
//
|
||||
// Consider some alternatives:
|
||||
// 1. Using a single backend per PathMatcher:
|
||||
// Host: foo(PathMatcher1,3) bar(PathMatcher1,2,3)
|
||||
// PathMatcher1:
|
||||
// /a -> b1
|
||||
// PathMatcher2:
|
||||
// /c -> b1
|
||||
// PathMatcher3:
|
||||
// /b -> b2
|
||||
// 2. Using a single host per PathMatcher:
|
||||
// Host: foo(PathMatcher1)
|
||||
// PathMatcher1:
|
||||
// /a -> b1
|
||||
// /b -> b2
|
||||
// Host: bar(PathMatcher2)
|
||||
// PathMatcher2:
|
||||
// /a -> b1
|
||||
// /b -> b2
|
||||
// /c -> b1
|
||||
// In the context of kubernetes services, 2 makes more sense, because we
|
||||
// rarely want to lookup backends (service:nodeport). When a service is
|
||||
// deleted, we need to find all host PathMatchers that have the backend
|
||||
// and remove the mapping. When a new path is added to a host (happens
|
||||
// more frequently than service deletion) we just need to lookup the 1
|
||||
// pathmatcher of the host.
|
||||
func (l *L7) UpdateUrlMap(ingressRules utils.GCEURLMap) error {
|
||||
if l.um == nil {
|
||||
return fmt.Errorf("Cannot add url without an urlmap.")
|
||||
}
|
||||
glog.V(3).Infof("Updating urlmap for l7 %v", l.Name)
|
||||
|
||||
// All UrlMaps must have a default backend. If the Ingress has a default
|
||||
// backend, it applies to all host rules as well as to the urlmap itself.
|
||||
// If it doesn't the urlmap might have a stale default, so replace it with
|
||||
// glbc's default backend.
|
||||
defaultBackend := ingressRules.GetDefaultBackend()
|
||||
if defaultBackend != nil {
|
||||
l.um.DefaultService = defaultBackend.SelfLink
|
||||
} else {
|
||||
l.um.DefaultService = l.glbcDefaultBackend.SelfLink
|
||||
}
|
||||
glog.V(3).Infof("Updating url map %+v", ingressRules)
|
||||
|
||||
// Every update replaces the entire urlmap.
|
||||
// TODO: when we have multiple loadbalancers point to a single gce url map
|
||||
// this needs modification. For now, there is a 1:1 mapping of urlmaps to
|
||||
// Ingresses, so if the given Ingress doesn't have a host rule we should
|
||||
// delete the path to that backend.
|
||||
l.um.HostRules = []*compute.HostRule{}
|
||||
l.um.PathMatchers = []*compute.PathMatcher{}
|
||||
|
||||
for hostname, urlToBackend := range ingressRules {
|
||||
// Create a host rule
|
||||
// Create a path matcher
|
||||
// Add all given endpoint:backends to pathRules in path matcher
|
||||
pmName := getNameForPathMatcher(hostname)
|
||||
l.um.HostRules = append(l.um.HostRules, &compute.HostRule{
|
||||
Hosts: []string{hostname},
|
||||
PathMatcher: pmName,
|
||||
})
|
||||
|
||||
pathMatcher := &compute.PathMatcher{
|
||||
Name: pmName,
|
||||
DefaultService: l.um.DefaultService,
|
||||
PathRules: []*compute.PathRule{},
|
||||
}
|
||||
|
||||
// Longest prefix wins. For equal rules, first hit wins, i.e. the second
|
||||
// /foo rule when the first is deleted.
|
||||
for expr, be := range urlToBackend {
|
||||
pathMatcher.PathRules = append(
|
||||
pathMatcher.PathRules, &compute.PathRule{Paths: []string{expr}, Service: be.SelfLink})
|
||||
}
|
||||
l.um.PathMatchers = append(l.um.PathMatchers, pathMatcher)
|
||||
}
|
||||
um, err := l.cloud.UpdateUrlMap(l.um)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
l.um = um
|
||||
return nil
|
||||
}
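// A minimal sketch (not part of the upstream file) of the input UpdateUrlMap
// consumes: a host -> {path: backend} mapping, mirroring the shape used in
// the unit tests. The SelfLinks are placeholders.
func exampleIngressRules() utils.GCEURLMap {
	return utils.GCEURLMap{
		"foo.example.com": {
			"/foo": &compute.BackendService{SelfLink: "foosvc"},
		},
		"bar.example.com": {
			"/bar": &compute.BackendService{SelfLink: "barsvc"},
		},
	}
}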
|
||||
|
||||
// Cleanup deletes resources specific to this l7 in the right order.
|
||||
// forwarding rule -> target proxy -> url map
|
||||
// This leaves backends and health checks, which are shared across loadbalancers.
|
||||
func (l *L7) Cleanup() error {
|
||||
if l.fw != nil {
|
||||
glog.Infof("Deleting global forwarding rule %v", l.fw.Name)
|
||||
if err := l.cloud.DeleteGlobalForwardingRule(l.fw.Name); err != nil {
|
||||
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
|
||||
return err
|
||||
}
|
||||
}
|
||||
l.fw = nil
|
||||
}
|
||||
if l.fws != nil {
|
||||
glog.Infof("Deleting global forwarding rule %v", l.fws.Name)
|
||||
if err := l.cloud.DeleteGlobalForwardingRule(l.fws.Name); err != nil {
|
||||
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
|
||||
return err
|
||||
}
|
||||
}
|
||||
l.fws = nil
|
||||
}
|
||||
if l.ip != nil {
|
||||
glog.Infof("Deleting static IP %v(%v)", l.ip.Name, l.ip.Address)
|
||||
if err := l.cloud.DeleteGlobalStaticIP(l.ip.Name); err != nil {
|
||||
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
|
||||
return err
|
||||
}
|
||||
}
|
||||
l.ip = nil
|
||||
}
|
||||
if l.tps != nil {
|
||||
glog.Infof("Deleting target https proxy %v", l.tps.Name)
|
||||
if err := l.cloud.DeleteTargetHttpsProxy(l.tps.Name); err != nil {
|
||||
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
|
||||
return err
|
||||
}
|
||||
}
|
||||
l.tps = nil
|
||||
}
|
||||
if l.sslCert != nil {
|
||||
glog.Infof("Deleting sslcert %v", l.sslCert.Name)
|
||||
if err := l.cloud.DeleteSslCertificate(l.sslCert.Name); err != nil {
|
||||
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
|
||||
return err
|
||||
}
|
||||
}
|
||||
l.sslCert = nil
|
||||
}
|
||||
if l.tp != nil {
|
||||
glog.Infof("Deleting target http proxy %v", l.tp.Name)
|
||||
if err := l.cloud.DeleteTargetHttpProxy(l.tp.Name); err != nil {
|
||||
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
|
||||
return err
|
||||
}
|
||||
}
|
||||
l.tp = nil
|
||||
}
|
||||
if l.um != nil {
|
||||
glog.Infof("Deleting url map %v", l.um.Name)
|
||||
if err := l.cloud.DeleteUrlMap(l.um.Name); err != nil {
|
||||
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
|
||||
return err
|
||||
}
|
||||
}
|
||||
l.um = nil
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// getBackendNames returns the names of backends in this L7 urlmap.
|
||||
func (l *L7) getBackendNames() []string {
|
||||
if l.um == nil {
|
||||
return []string{}
|
||||
}
|
||||
beNames := sets.NewString()
|
||||
for _, pathMatcher := range l.um.PathMatchers {
|
||||
for _, pathRule := range pathMatcher.PathRules {
|
||||
// This is gross, but the urlmap only has links to backend services.
|
||||
parts := strings.Split(pathRule.Service, "/")
|
||||
name := parts[len(parts)-1]
|
||||
if name != "" {
|
||||
beNames.Insert(name)
|
||||
}
|
||||
}
|
||||
}
|
||||
// The default Service recorded in the urlMap is a link to the backend.
|
||||
// Note that this can either be user specified, or the L7 controller's
|
||||
// global default.
|
||||
parts := strings.Split(l.um.DefaultService, "/")
|
||||
defaultBackendName := parts[len(parts)-1]
|
||||
if defaultBackendName != "" {
|
||||
beNames.Insert(defaultBackendName)
|
||||
}
|
||||
return beNames.List()
|
||||
}
|
||||
|
||||
// GetLBAnnotations returns the annotations of an l7. This includes its current status.
|
||||
func GetLBAnnotations(l7 *L7, existing map[string]string, backendPool backends.BackendPool) map[string]string {
|
||||
if existing == nil {
|
||||
existing = map[string]string{}
|
||||
}
|
||||
backends := l7.getBackendNames()
|
||||
backendState := map[string]string{}
|
||||
for _, beName := range backends {
|
||||
backendState[beName] = backendPool.Status(beName)
|
||||
}
|
||||
jsonBackendState := "Unknown"
|
||||
b, err := json.Marshal(backendState)
|
||||
if err == nil {
|
||||
jsonBackendState = string(b)
|
||||
}
|
||||
existing[fmt.Sprintf("%v/url-map", utils.K8sAnnotationPrefix)] = l7.um.Name
|
||||
// Forwarding rule and target proxy might not exist if allowHTTP == false
|
||||
if l7.fw != nil {
|
||||
existing[fmt.Sprintf("%v/forwarding-rule", utils.K8sAnnotationPrefix)] = l7.fw.Name
|
||||
}
|
||||
if l7.tp != nil {
|
||||
existing[fmt.Sprintf("%v/target-proxy", utils.K8sAnnotationPrefix)] = l7.tp.Name
|
||||
}
|
||||
// HTTPs resources might not exist if TLS == nil
|
||||
if l7.fws != nil {
|
||||
existing[fmt.Sprintf("%v/https-forwarding-rule", utils.K8sAnnotationPrefix)] = l7.fws.Name
|
||||
}
|
||||
if l7.tps != nil {
|
||||
existing[fmt.Sprintf("%v/https-target-proxy", utils.K8sAnnotationPrefix)] = l7.tps.Name
|
||||
}
|
||||
if l7.ip != nil {
|
||||
existing[fmt.Sprintf("%v/static-ip", utils.K8sAnnotationPrefix)] = l7.ip.Name
|
||||
}
|
||||
// TODO: We really want to know *when* a backend flipped states.
|
||||
existing[fmt.Sprintf("%v/backends", utils.K8sAnnotationPrefix)] = jsonBackendState
|
||||
return existing
|
||||
}
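// Illustration of the annotations written above (the key prefix comes from
// utils.K8sAnnotationPrefix, the values are Namer-chosen resource names and
// are placeholders here):
//   <prefix>/url-map:         k8s-um-test
//   <prefix>/forwarding-rule: k8s-fw-test
//   <prefix>/backends:        {"<backend-name>":"HEALTHY"}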
|
||||
|
||||
// GCEResourceName retrieves the name of the gce resource created for this
|
||||
// Ingress, of the given resource type, by inspecting the map of ingress
|
||||
// annotations.
|
||||
func GCEResourceName(ingAnnotations map[string]string, resourceName string) string {
|
||||
// Even though this function is trivial, it exists to keep the annotation
|
||||
// parsing logic in a single location.
|
||||
resourceName, _ = ingAnnotations[fmt.Sprintf("%v/%v", utils.K8sAnnotationPrefix, resourceName)]
|
||||
return resourceName
|
||||
}
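// Example usage (mirrors TestNameParsing in the tests): the forwarding rule
// name recorded by GetLBAnnotations is recovered with
//   GCEResourceName(ing.Annotations, "forwarding-rule")
// and the empty string comes back when the annotation is absent.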
|
286
controllers/gce/loadbalancers/loadbalancers_test.go
Normal file
@@ -0,0 +1,286 @@
|
|||
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package loadbalancers
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"testing"
|
||||
|
||||
compute "google.golang.org/api/compute/v1"
|
||||
"k8s.io/contrib/ingress/controllers/gce/backends"
|
||||
"k8s.io/contrib/ingress/controllers/gce/healthchecks"
|
||||
"k8s.io/contrib/ingress/controllers/gce/instances"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
"k8s.io/kubernetes/pkg/util/sets"
|
||||
)
|
||||
|
||||
const (
|
||||
testDefaultBeNodePort = int64(3000)
|
||||
defaultZone = "zone-a"
|
||||
)
|
||||
|
||||
func newFakeLoadBalancerPool(f LoadBalancers, t *testing.T) LoadBalancerPool {
|
||||
fakeBackends := backends.NewFakeBackendServices()
|
||||
fakeIGs := instances.NewFakeInstanceGroups(sets.NewString())
|
||||
fakeHCs := healthchecks.NewFakeHealthChecks()
|
||||
namer := &utils.Namer{}
|
||||
healthChecker := healthchecks.NewHealthChecker(fakeHCs, "/", namer)
|
||||
healthChecker.Init(&healthchecks.FakeHealthCheckGetter{nil})
|
||||
nodePool := instances.NewNodePool(fakeIGs)
|
||||
nodePool.Init(&instances.FakeZoneLister{[]string{defaultZone}})
|
||||
backendPool := backends.NewBackendPool(
|
||||
fakeBackends, healthChecker, nodePool, namer, []int64{}, false)
|
||||
return NewLoadBalancerPool(f, backendPool, testDefaultBeNodePort, namer)
|
||||
}
|
||||
|
||||
func TestCreateHTTPLoadBalancer(t *testing.T) {
|
||||
// This should NOT create the forwarding rule and target proxy
|
||||
// associated with the HTTPS branch of this loadbalancer.
|
||||
lbInfo := &L7RuntimeInfo{Name: "test", AllowHTTP: true}
|
||||
f := NewFakeLoadBalancers(lbInfo.Name)
|
||||
pool := newFakeLoadBalancerPool(f, t)
|
||||
pool.Add(lbInfo)
|
||||
l7, err := pool.Get(lbInfo.Name)
|
||||
if err != nil || l7 == nil {
|
||||
t.Fatalf("Expected l7 not created")
|
||||
}
|
||||
um, err := f.GetUrlMap(f.umName())
|
||||
if err != nil ||
|
||||
um.DefaultService != pool.(*L7s).glbcDefaultBackend.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
tp, err := f.GetTargetHttpProxy(f.tpName(false))
|
||||
if err != nil || tp.UrlMap != um.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
fw, err := f.GetGlobalForwardingRule(f.fwName(false))
|
||||
if err != nil || fw.Target != tp.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestCreateHTTPSLoadBalancer(t *testing.T) {
|
||||
// This should NOT create the forwarding rule and target proxy
|
||||
// associated with the HTTP branch of this loadbalancer.
|
||||
lbInfo := &L7RuntimeInfo{
|
||||
Name: "test",
|
||||
AllowHTTP: false,
|
||||
TLS: &TLSCerts{Key: "key", Cert: "cert"},
|
||||
}
|
||||
f := NewFakeLoadBalancers(lbInfo.Name)
|
||||
pool := newFakeLoadBalancerPool(f, t)
|
||||
pool.Add(lbInfo)
|
||||
l7, err := pool.Get(lbInfo.Name)
|
||||
if err != nil || l7 == nil {
|
||||
t.Fatalf("Expected l7 not created")
|
||||
}
|
||||
um, err := f.GetUrlMap(f.umName())
|
||||
if err != nil ||
|
||||
um.DefaultService != pool.(*L7s).glbcDefaultBackend.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
tps, err := f.GetTargetHttpsProxy(f.tpName(true))
|
||||
if err != nil || tps.UrlMap != um.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
fws, err := f.GetGlobalForwardingRule(f.fwName(true))
|
||||
if err != nil || fws.Target != tps.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestCreateBothLoadBalancers(t *testing.T) {
|
||||
// This should create 2 forwarding rules and target proxies
|
||||
// but they should use the same urlmap, and have the same
|
||||
// static ip.
|
||||
lbInfo := &L7RuntimeInfo{
|
||||
Name: "test",
|
||||
AllowHTTP: true,
|
||||
TLS: &TLSCerts{Key: "key", Cert: "cert"},
|
||||
}
|
||||
f := NewFakeLoadBalancers(lbInfo.Name)
|
||||
pool := newFakeLoadBalancerPool(f, t)
|
||||
pool.Add(lbInfo)
|
||||
l7, err := pool.Get(lbInfo.Name)
|
||||
if err != nil || l7 == nil {
|
||||
t.Fatalf("Expected l7 not created")
|
||||
}
|
||||
um, err := f.GetUrlMap(f.umName())
|
||||
if err != nil ||
|
||||
um.DefaultService != pool.(*L7s).glbcDefaultBackend.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
tps, err := f.GetTargetHttpsProxy(f.tpName(true))
|
||||
if err != nil || tps.UrlMap != um.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
tp, err := f.GetTargetHttpProxy(f.tpName(false))
|
||||
if err != nil || tp.UrlMap != um.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
fws, err := f.GetGlobalForwardingRule(f.fwName(true))
|
||||
if err != nil || fws.Target != tps.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
fw, err := f.GetGlobalForwardingRule(f.fwName(false))
|
||||
if err != nil || fw.Target != tp.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
ip, err := f.GetGlobalStaticIP(f.fwName(false))
|
||||
if err != nil || ip.Address != fw.IPAddress || ip.Address != fws.IPAddress {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestUpdateUrlMap(t *testing.T) {
|
||||
um1 := utils.GCEURLMap{
|
||||
"bar.example.com": {
|
||||
"/bar2": &compute.BackendService{SelfLink: "bar2svc"},
|
||||
},
|
||||
}
|
||||
um2 := utils.GCEURLMap{
|
||||
"foo.example.com": {
|
||||
"/foo1": &compute.BackendService{SelfLink: "foo1svc"},
|
||||
"/foo2": &compute.BackendService{SelfLink: "foo2svc"},
|
||||
},
|
||||
"bar.example.com": {
|
||||
"/bar1": &compute.BackendService{SelfLink: "bar1svc"},
|
||||
},
|
||||
}
|
||||
um2.PutDefaultBackend(&compute.BackendService{SelfLink: "default"})
|
||||
|
||||
lbInfo := &L7RuntimeInfo{Name: "test", AllowHTTP: true}
|
||||
f := NewFakeLoadBalancers(lbInfo.Name)
|
||||
pool := newFakeLoadBalancerPool(f, t)
|
||||
pool.Add(lbInfo)
|
||||
l7, err := pool.Get(lbInfo.Name)
|
||||
if err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
for _, ir := range []utils.GCEURLMap{um1, um2} {
|
||||
if err := l7.UpdateUrlMap(ir); err != nil {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
}
|
||||
// The final map doesn't contain /bar2
|
||||
expectedMap := map[string]utils.FakeIngressRuleValueMap{
|
||||
utils.DefaultBackendKey: {
|
||||
utils.DefaultBackendKey: "default",
|
||||
},
|
||||
"foo.example.com": {
|
||||
"/foo1": "foo1svc",
|
||||
"/foo2": "foo2svc",
|
||||
},
|
||||
"bar.example.com": {
|
||||
"/bar1": "bar1svc",
|
||||
},
|
||||
}
|
||||
f.CheckURLMap(t, l7, expectedMap)
|
||||
}
|
||||
|
||||
func TestNameParsing(t *testing.T) {
|
||||
clusterName := "123"
|
||||
namer := utils.NewNamer(clusterName)
|
||||
fullName := namer.Truncate(fmt.Sprintf("%v-%v", forwardingRulePrefix, namer.LBName("testlb")))
|
||||
annotationsMap := map[string]string{
|
||||
fmt.Sprintf("%v/forwarding-rule", utils.K8sAnnotationPrefix): fullName,
|
||||
}
|
||||
components := namer.ParseName(GCEResourceName(annotationsMap, "forwarding-rule"))
|
||||
t.Logf("%+v", components)
|
||||
if components.ClusterName != clusterName {
|
||||
t.Errorf("Failed to parse cluster name from %v, expected %v got %v", fullName, clusterName, components.ClusterName)
|
||||
}
|
||||
resourceName := "fw"
|
||||
if components.Resource != resourceName {
|
||||
t.Errorf("Failed to parse resource from %v, expected %v got %v", fullName, resourceName, components.Resource)
|
||||
}
|
||||
}
|
||||
|
||||
func TestClusterNameChange(t *testing.T) {
|
||||
lbInfo := &L7RuntimeInfo{
|
||||
Name: "test",
|
||||
TLS: &TLSCerts{Key: "key", Cert: "cert"},
|
||||
}
|
||||
f := NewFakeLoadBalancers(lbInfo.Name)
|
||||
pool := newFakeLoadBalancerPool(f, t)
|
||||
pool.Add(lbInfo)
|
||||
l7, err := pool.Get(lbInfo.Name)
|
||||
if err != nil || l7 == nil {
|
||||
t.Fatalf("Expected l7 not created")
|
||||
}
|
||||
um, err := f.GetUrlMap(f.umName())
|
||||
if err != nil ||
|
||||
um.DefaultService != pool.(*L7s).glbcDefaultBackend.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
tps, err := f.GetTargetHttpsProxy(f.tpName(true))
|
||||
if err != nil || tps.UrlMap != um.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
fws, err := f.GetGlobalForwardingRule(f.fwName(true))
|
||||
if err != nil || fws.Target != tps.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
newName := "newName"
|
||||
namer := pool.(*L7s).namer
|
||||
namer.SetClusterName(newName)
|
||||
f.name = fmt.Sprintf("%v--%v", lbInfo.Name, newName)
|
||||
|
||||
// Now the components should get renamed with the next suffix.
|
||||
pool.Add(lbInfo)
|
||||
l7, err = pool.Get(lbInfo.Name)
|
||||
if err != nil || namer.ParseName(l7.Name).ClusterName != newName {
|
||||
t.Fatalf("Expected L7 name to change.")
|
||||
}
|
||||
um, err = f.GetUrlMap(f.umName())
|
||||
if err != nil || namer.ParseName(um.Name).ClusterName != newName {
|
||||
t.Fatalf("Expected urlmap name to change.")
|
||||
}
|
||||
if err != nil ||
|
||||
um.DefaultService != pool.(*L7s).glbcDefaultBackend.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
|
||||
tps, err = f.GetTargetHttpsProxy(f.tpName(true))
|
||||
if err != nil || tps.UrlMap != um.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
fws, err = f.GetGlobalForwardingRule(f.fwName(true))
|
||||
if err != nil || fws.Target != tps.SelfLink {
|
||||
t.Fatalf("%v", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestInvalidClusterNameChange(t *testing.T) {
|
||||
namer := utils.NewNamer("test--123")
|
||||
if got := namer.GetClusterName(); got != "123" {
|
||||
t.Fatalf("Expected name 123, got %v", got)
|
||||
}
|
||||
// A name with `--` should take the last token
|
||||
for _, testCase := range []struct{ newName, expected string }{
|
||||
{"foo--bar", "bar"},
|
||||
{"--", ""},
|
||||
{"", ""},
|
||||
{"foo--bar--com", "com"},
|
||||
} {
|
||||
namer.SetClusterName(testCase.newName)
|
||||
if got := namer.GetClusterName(); got != testCase.expected {
|
||||
t.Fatalf("Expected %q got %q", testCase.expected, got)
|
||||
}
|
||||
}
|
||||
|
||||
}
|
346
controllers/gce/main.go
Normal file
@@ -0,0 +1,346 @@
|
|||
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
go_flag "flag"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"os"
|
||||
"os/signal"
|
||||
"strings"
|
||||
"syscall"
|
||||
"time"
|
||||
|
||||
flag "github.com/spf13/pflag"
|
||||
"k8s.io/contrib/ingress/controllers/gce/controller"
|
||||
"k8s.io/contrib/ingress/controllers/gce/loadbalancers"
|
||||
"k8s.io/contrib/ingress/controllers/gce/storage"
|
||||
"k8s.io/contrib/ingress/controllers/gce/utils"
|
||||
"k8s.io/kubernetes/pkg/api"
|
||||
client "k8s.io/kubernetes/pkg/client/unversioned"
|
||||
kubectl_util "k8s.io/kubernetes/pkg/kubectl/cmd/util"
|
||||
"k8s.io/kubernetes/pkg/labels"
|
||||
"k8s.io/kubernetes/pkg/util/wait"
|
||||
|
||||
"github.com/golang/glog"
|
||||
)
|
||||
|
||||
// Entrypoint of GLBC. Example invocation:
|
||||
// 1. In a pod:
|
||||
// glbc --delete-all-on-quit
|
||||
// 2. Dry run (on localhost):
|
||||
// $ kubectl proxy --api-prefix="/"
|
||||
// $ glbc --proxy="http://localhost:proxyport"
|
||||
|
||||
const (
|
||||
// lbApiPort is the port on which the loadbalancer controller serves a
|
||||
// minimal api (/healthz, /delete-all-and-quit etc).
|
||||
lbApiPort = 8081
|
||||
|
||||
// A delimiter used for clarity in naming GCE resources.
|
||||
clusterNameDelimiter = "--"
|
||||
|
||||
// Arbitrarily chosen alphanumeric character to use in constructing resource
|
||||
// names, eg: to avoid cases where we end up with a name ending in '-'.
|
||||
alphaNumericChar = "0"
|
||||
|
||||
// Current docker image version. Only used in debug logging.
|
||||
imageVersion = "glbc:0.8.0"
|
||||
|
||||
// Key used to persist UIDs to configmaps.
|
||||
uidConfigMapName = "ingress-uid"
|
||||
)
|
||||
|
||||
var (
|
||||
flags = flag.NewFlagSet(
|
||||
`glbc: glbc --running-in-cluster=false --default-backend-node-port=123`,
|
||||
flag.ExitOnError)
|
||||
|
||||
clusterName = flags.String("cluster-uid", controller.DefaultClusterUID,
|
||||
`Optional, used to tag cluster wide, shared loadbalancer resources such
|
||||
as instance groups. Use this flag if you'd like to continue using the
|
||||
same resources across a pod restart. Note that this does not need to
|
||||
match the name of your Kubernetes cluster, it's just an arbitrary name
|
||||
used to tag/lookup cloud resources.`)
|
||||
|
||||
inCluster = flags.Bool("running-in-cluster", true,
|
||||
`Optional, if this controller is running in a kubernetes cluster, use the
|
||||
pod secrets for creating a Kubernetes client.`)
|
||||
|
||||
// TODO: Consolidate this flag and running-in-cluster. People already use
|
||||
// the first one to mean "running in dev", unfortunately.
|
||||
useRealCloud = flags.Bool("use-real-cloud", false,
|
||||
`Optional, if set a real cloud client is created. Only matters with
|
||||
--running-in-cluster=false, i.e a real cloud is always used when this
|
||||
controller is running on a Kubernetes node.`)
|
||||
|
||||
resyncPeriod = flags.Duration("sync-period", 30*time.Second,
|
||||
`Relist and confirm cloud resources this often.`)
|
||||
|
||||
deleteAllOnQuit = flags.Bool("delete-all-on-quit", false,
|
||||
`If true, the controller will delete all Ingress and the associated
|
||||
external cloud resources as it's shutting down. Mostly used for
|
||||
testing. In normal environments the controller should only delete
|
||||
a loadbalancer if the associated Ingress is deleted.`)
|
||||
|
||||
defaultSvc = flags.String("default-backend-service", "kube-system/default-http-backend",
|
||||
`Service used to serve a 404 page for the default backend. Takes the form
|
||||
namespace/name. The controller uses the first node port of this Service for
|
||||
the default backend.`)
|
||||
|
||||
healthCheckPath = flags.String("health-check-path", "/",
|
||||
`Path used to health-check a backend service. All Services must serve
|
||||
a 200 page on this path. Currently this is only configurable globally.`)
|
||||
|
||||
watchNamespace = flags.String("watch-namespace", api.NamespaceAll,
|
||||
`Namespace to watch for Ingress/Services/Endpoints.`)
|
||||
|
||||
verbose = flags.Bool("verbose", false,
|
||||
`If true, logs are displayed at V(4), otherwise V(2).`)
|
||||
|
||||
configFilePath = flags.String("config-file-path", "",
|
||||
`Path to a file containing the gce config. If left unspecified this
|
||||
controller only works with default zones.`)
|
||||
|
||||
healthzPort = flags.Int("healthz-port", lbApiPort,
|
||||
`Port to run healthz server. Must match the health check port in yaml.`)
|
||||
)
|
||||
|
||||
func registerHandlers(lbc *controller.LoadBalancerController) {
|
||||
http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
|
||||
if err := lbc.CloudClusterManager.IsHealthy(); err != nil {
|
||||
w.WriteHeader(500)
|
||||
w.Write([]byte(fmt.Sprintf("Cluster unhealthy: %v", err)))
|
||||
return
|
||||
}
|
||||
w.WriteHeader(200)
|
||||
w.Write([]byte("ok"))
|
||||
})
|
||||
http.HandleFunc("/delete-all-and-quit", func(w http.ResponseWriter, r *http.Request) {
|
||||
// TODO: Retry failures during shutdown.
|
||||
lbc.Stop(true)
|
||||
})
|
||||
|
||||
glog.Fatal(http.ListenAndServe(fmt.Sprintf(":%v", *healthzPort), nil))
|
||||
}
|
||||
|
||||
func handleSigterm(lbc *controller.LoadBalancerController, deleteAll bool) {
|
||||
// Multiple SIGTERMs will get dropped
|
||||
signalChan := make(chan os.Signal, 1)
|
||||
signal.Notify(signalChan, syscall.SIGTERM)
|
||||
<-signalChan
|
||||
glog.Infof("Received SIGTERM, shutting down")
|
||||
|
||||
// TODO: Better retries than relying on restartPolicy.
|
||||
exitCode := 0
|
||||
if err := lbc.Stop(deleteAll); err != nil {
|
||||
glog.Infof("Error during shutdown %v", err)
|
||||
exitCode = 1
|
||||
}
|
||||
glog.Infof("Exiting with %v", exitCode)
|
||||
os.Exit(exitCode)
|
||||
}
|
||||
|
||||
// main function for GLBC.
|
||||
func main() {
|
||||
// TODO: Add a healthz endpoint
|
||||
var kubeClient *client.Client
|
||||
var err error
|
||||
var clusterManager *controller.ClusterManager
|
||||
|
||||
// TODO: We can simply parse all go flags with
|
||||
// flags.AddGoFlagSet(go_flag.CommandLine)
|
||||
// but that pollutes --help output with a ton of standard go flags.
|
||||
// We only really need a binary switch from light, v(2) logging to
|
||||
// heavier debug style V(4) logging, which we use --verbose for.
|
||||
flags.Parse(os.Args)
|
||||
clientConfig := kubectl_util.DefaultClientConfig(flags)
|
||||
|
||||
// Set glog verbosity levels, unconditionally set --logtostderr.
|
||||
go_flag.Lookup("logtostderr").Value.Set("true")
|
||||
if *verbose {
|
||||
go_flag.Set("v", "4")
|
||||
}
|
||||
glog.Infof("Starting GLBC image: %v, cluster name %v", imageVersion, *clusterName)
|
||||
if *defaultSvc == "" {
|
||||
glog.Fatalf("Please specify --default-backend")
|
||||
}
|
||||
|
||||
// Create kubeclient
|
||||
if *inCluster {
|
||||
if kubeClient, err = client.NewInCluster(); err != nil {
|
||||
glog.Fatalf("Failed to create client: %v.", err)
|
||||
}
|
||||
} else {
|
||||
config, err := clientConfig.ClientConfig()
|
||||
if err != nil {
|
||||
glog.Fatalf("error connecting to the client: %v", err)
|
||||
}
|
||||
kubeClient, err = client.New(config)
|
||||
}
|
||||
// Wait for the default backend Service. There's no pretty way to do this.
|
||||
parts := strings.Split(*defaultSvc, "/")
|
||||
if len(parts) != 2 {
|
||||
glog.Fatalf("Default backend should take the form namespace/name: %v",
|
||||
*defaultSvc)
|
||||
}
|
||||
defaultBackendNodePort, err := getNodePort(kubeClient, parts[0], parts[1])
|
||||
if err != nil {
|
||||
glog.Fatalf("Could not configure default backend %v: %v",
|
||||
*defaultSvc, err)
|
||||
}
|
||||
|
||||
if *inCluster || *useRealCloud {
|
||||
// Create cluster manager
|
||||
namer, err := newNamer(kubeClient, *clusterName)
|
||||
if err != nil {
|
||||
glog.Fatalf("%v", err)
|
||||
}
|
||||
clusterManager, err = controller.NewClusterManager(*configFilePath, namer, defaultBackendNodePort, *healthCheckPath)
|
||||
if err != nil {
|
||||
glog.Fatalf("%v", err)
|
||||
}
|
||||
} else {
|
||||
// Create fake cluster manager
|
||||
clusterManager = controller.NewFakeClusterManager(*clusterName).ClusterManager
|
||||
}
|
||||
|
||||
// Start loadbalancer controller
|
||||
lbc, err := controller.NewLoadBalancerController(kubeClient, clusterManager, *resyncPeriod, *watchNamespace)
|
||||
if err != nil {
|
||||
glog.Fatalf("%v", err)
|
||||
}
|
||||
if clusterManager.ClusterNamer.GetClusterName() != "" {
|
||||
glog.V(3).Infof("Cluster name %+v", clusterManager.ClusterNamer.GetClusterName())
|
||||
}
|
||||
clusterManager.Init(&controller.GCETranslator{lbc})
|
||||
go registerHandlers(lbc)
|
||||
go handleSigterm(lbc, *deleteAllOnQuit)
|
||||
|
||||
lbc.Run()
|
||||
for {
|
||||
glog.Infof("Handled quit, awaiting pod deletion.")
|
||||
time.Sleep(30 * time.Second)
|
||||
}
|
||||
}
|
||||
|
||||
func newNamer(kubeClient *client.Client, clusterName string) (*utils.Namer, error) {
|
||||
name, err := getClusterUID(kubeClient, clusterName)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
namer := utils.NewNamer(name)
|
||||
vault := storage.NewConfigMapVault(kubeClient, api.NamespaceSystem, uidConfigMapName)
|
||||
|
||||
// Start a goroutine to poll the cluster UID config map
|
||||
// We don't watch because we know exactly which configmap we want and this
|
||||
// controller already watches 5 other resources, so it isn't worth the cost
|
||||
// of another connection and complexity.
|
||||
go wait.Forever(func() {
|
||||
uid, found, err := vault.Get()
|
||||
existing := namer.GetClusterName()
|
||||
if found && uid != existing {
|
||||
glog.Infof("Cluster uid changed from %v -> %v", existing, uid)
|
||||
namer.SetClusterName(uid)
|
||||
} else if err != nil {
|
||||
glog.Errorf("Failed to reconcile cluster uid %v, currently set to %v", err, existing)
|
||||
}
|
||||
}, 5*time.Second)
|
||||
return namer, nil
|
||||
}
|
||||
|
||||
// getClusterUID returns the cluster UID. Rules for UID generation:
|
||||
// If the user specifies a --cluster-uid param it overwrites everything
|
||||
// else, check UID config map for a previously recorded uid
|
||||
// else, check if there are any working Ingresses
|
||||
// - remember that "" is the cluster uid
|
||||
// else, allocate a new uid
|
||||
func getClusterUID(kubeClient *client.Client, name string) (string, error) {
|
||||
cfgVault := storage.NewConfigMapVault(kubeClient, api.NamespaceSystem, uidConfigMapName)
|
||||
if name != "" {
|
||||
glog.Infof("Using user provided cluster uid %v", name)
|
||||
// Don't save the uid in the vault, so users can rollback through
|
||||
// --cluster-uid=""
|
||||
return name, nil
|
||||
}
|
||||
|
||||
existingUID, found, err := cfgVault.Get()
|
||||
if found {
|
||||
glog.Infof("Using saved cluster uid %q", existingUID)
|
||||
return existingUID, nil
|
||||
} else if err != nil {
|
||||
// This can fail because of:
|
||||
// 1. No such config map - found=false, err=nil
|
||||
// 2. No such key in config map - found=false, err=nil
|
||||
// 3. Apiserver flake - found=false, err!=nil
|
||||
// It is not safe to proceed in 3.
|
||||
return "", fmt.Errorf("Failed to retrieve current uid: %v, using %q as name", err, name)
|
||||
}
|
||||
|
||||
// Check if the cluster has an Ingress with ip
|
||||
ings, err := kubeClient.Extensions().Ingress(api.NamespaceAll).List(api.ListOptions{LabelSelector: labels.Everything()})
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
namer := utils.Namer{}
|
||||
for _, ing := range ings.Items {
|
||||
if len(ing.Status.LoadBalancer.Ingress) != 0 {
|
||||
c := namer.ParseName(loadbalancers.GCEResourceName(ing.Annotations, "forwarding-rule"))
|
||||
if c.ClusterName != "" {
|
||||
return c.ClusterName, cfgVault.Put(c.ClusterName)
|
||||
}
|
||||
glog.Infof("Found a working Ingress, assuming uid is empty string")
|
||||
return "", cfgVault.Put("")
|
||||
}
|
||||
}
|
||||
|
||||
// Allocate new uid
|
||||
f, err := os.Open("/dev/urandom")
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
defer f.Close()
|
||||
b := make([]byte, 8)
|
||||
if _, err := f.Read(b); err != nil {
|
||||
return "", err
|
||||
}
|
||||
uid := fmt.Sprintf("%x", b)
|
||||
return uid, cfgVault.Put(uid)
|
||||
}
|
||||
|
||||
// getNodePort waits for the Service, and returns its first node port.
|
||||
func getNodePort(client *client.Client, ns, name string) (nodePort int64, err error) {
|
||||
var svc *api.Service
|
||||
glog.V(3).Infof("Waiting for %v/%v", ns, name)
|
||||
wait.Poll(1*time.Second, 5*time.Minute, func() (bool, error) {
|
||||
svc, err = client.Services(ns).Get(name)
|
||||
if err != nil {
|
||||
return false, nil
|
||||
}
|
||||
for _, p := range svc.Spec.Ports {
|
||||
if p.NodePort != 0 {
|
||||
nodePort = int64(p.NodePort)
|
||||
glog.V(3).Infof("Node port %v", nodePort)
|
||||
break
|
||||
}
|
||||
}
|
||||
return true, nil
|
||||
})
|
||||
return
|
||||
}
|
82 controllers/gce/rc.yaml Normal file
@@ -0,0 +1,82 @@
|
|||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
# This must match the --default-backend-service argument of the l7 lb
|
||||
# controller and is required because GCE mandates a default backend.
|
||||
name: default-http-backend
|
||||
labels:
|
||||
k8s-app: glbc
|
||||
spec:
|
||||
# The default backend must be of type NodePort.
|
||||
type: NodePort
|
||||
ports:
|
||||
- port: 80
|
||||
targetPort: 8080
|
||||
protocol: TCP
|
||||
name: http
|
||||
selector:
|
||||
k8s-app: glbc
|
||||
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: l7-lb-controller
|
||||
labels:
|
||||
k8s-app: glbc
|
||||
version: v0.6.2
|
||||
spec:
|
||||
# There should never be more than 1 controller alive simultaneously.
|
||||
replicas: 1
|
||||
selector:
|
||||
k8s-app: glbc
|
||||
version: v0.6.2
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: glbc
|
||||
version: v0.6.2
|
||||
name: glbc
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 600
|
||||
containers:
|
||||
- name: default-http-backend
|
||||
# Any image is permissible as long as:
|
||||
# 1. It serves a 404 page at /
|
||||
# 2. It serves 200 on a /healthz endpoint
|
||||
image: gcr.io/google_containers/defaultbackend:1.0
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 8080
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 30
|
||||
timeoutSeconds: 5
|
||||
ports:
|
||||
- containerPort: 8080
|
||||
resources:
|
||||
limits:
|
||||
cpu: 10m
|
||||
memory: 20Mi
|
||||
requests:
|
||||
cpu: 10m
|
||||
memory: 20Mi
|
||||
- image: gcr.io/google_containers/glbc:0.8.0
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 8081
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 30
|
||||
timeoutSeconds: 5
|
||||
name: l7-lb-controller
|
||||
resources:
|
||||
limits:
|
||||
cpu: 100m
|
||||
memory: 100Mi
|
||||
requests:
|
||||
cpu: 100m
|
||||
memory: 50Mi
|
||||
args:
|
||||
- --default-backend-service=default/default-http-backend
|
||||
- --sync-period=300s
|
177 controllers/gce/storage/configmaps.go Normal file
@@ -0,0 +1,177 @@
|
|||
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package storage
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strings"
|
||||
|
||||
"github.com/golang/glog"
|
||||
"k8s.io/kubernetes/pkg/api"
|
||||
"k8s.io/kubernetes/pkg/api/errors"
|
||||
"k8s.io/kubernetes/pkg/client/cache"
|
||||
client "k8s.io/kubernetes/pkg/client/unversioned"
|
||||
)
|
||||
|
||||
// UIDVault stores UIDs.
|
||||
type UIDVault interface {
|
||||
Get() (string, bool, error)
|
||||
Put(string) error
|
||||
Delete() error
|
||||
}
|
||||
|
||||
// uidDataKey is the key used in config maps to store the UID.
|
||||
const uidDataKey = "uid"
|
||||
|
||||
// ConfigMapVault stores cluster UIDs in config maps.
|
||||
// It's a layer on top of ConfigMapStore that just implements the UIDVault
|
||||
// interface.
|
||||
type ConfigMapVault struct {
|
||||
ConfigMapStore cache.Store
|
||||
namespace string
|
||||
name string
|
||||
}
|
||||
|
||||
// Get retrieves the cluster UID from the cluster config map.
|
||||
// If this method returns an error, it's guaranteed to be apiserver flake.
|
||||
// If the error is a not found error it sets the boolean to false and
|
||||
// returns an error of nil instead.
|
||||
func (c *ConfigMapVault) Get() (string, bool, error) {
|
||||
key := fmt.Sprintf("%v/%v", c.namespace, c.name)
|
||||
item, found, err := c.ConfigMapStore.GetByKey(key)
|
||||
if err != nil || !found {
|
||||
return "", false, err
|
||||
}
|
||||
cfg := item.(*api.ConfigMap)
|
||||
if k, ok := cfg.Data[uidDataKey]; ok {
|
||||
return k, true, nil
|
||||
}
|
||||
return "", false, fmt.Errorf("Found config map %v but it doesn't contain uid key: %+v", key, cfg.Data)
|
||||
}
|
||||
|
||||
// Put stores the given UID in the cluster config map.
|
||||
func (c *ConfigMapVault) Put(uid string) error {
|
||||
apiObj := &api.ConfigMap{
|
||||
ObjectMeta: api.ObjectMeta{
|
||||
Name: c.name,
|
||||
Namespace: c.namespace,
|
||||
},
|
||||
Data: map[string]string{uidDataKey: uid},
|
||||
}
|
||||
cfgMapKey := fmt.Sprintf("%v/%v", c.namespace, c.name)
|
||||
|
||||
item, exists, err := c.ConfigMapStore.GetByKey(cfgMapKey)
|
||||
if err == nil && exists {
|
||||
data := item.(*api.ConfigMap).Data
|
||||
if k, ok := data[uidDataKey]; ok && k == uid {
|
||||
return nil
|
||||
} else if ok {
|
||||
glog.Infof("Configmap %v has key %v but wrong value %v, updating", cfgMapKey, k, uid)
|
||||
}
|
||||
|
||||
if err := c.ConfigMapStore.Update(apiObj); err != nil {
|
||||
return fmt.Errorf("Failed to update %v: %v", cfgMapKey, err)
|
||||
}
|
||||
} else if err := c.ConfigMapStore.Add(apiObj); err != nil {
|
||||
return fmt.Errorf("Failed to add %v: %v", cfgMapKey, err)
|
||||
}
|
||||
glog.Infof("Successfully stored uid %q in config map %v", uid, cfgMapKey)
|
||||
return nil
|
||||
}
|
||||
|
||||
// Delete deletes the cluster UID storing config map.
|
||||
func (c *ConfigMapVault) Delete() error {
|
||||
cfgMapKey := fmt.Sprintf("%v/%v", c.namespace, c.name)
|
||||
item, _, err := c.ConfigMapStore.GetByKey(cfgMapKey)
|
||||
if err == nil {
|
||||
return c.ConfigMapStore.Delete(item)
|
||||
}
|
||||
glog.Warningf("Couldn't find item %v in vault, unable to delete", cfgMapKey)
|
||||
return nil
|
||||
}
|
||||
|
||||
// NewConfigMapVault creates a config map client.
|
||||
// This client is essentially meant to abstract out the details of
|
||||
// configmaps and the API, and just store/retrieve a single value, the cluster uid.
|
||||
func NewConfigMapVault(c *client.Client, uidNs, uidConfigMapName string) *ConfigMapVault {
|
||||
return &ConfigMapVault{NewConfigMapStore(c), uidNs, uidConfigMapName}
|
||||
}
|
||||
|
||||
// NewFakeConfigMapVault is an implementation of the ConfigMapStore that doesn't
|
||||
// persist configmaps. Only used in testing.
|
||||
func NewFakeConfigMapVault(ns, name string) *ConfigMapVault {
|
||||
return &ConfigMapVault{cache.NewStore(cache.MetaNamespaceKeyFunc), ns, name}
|
||||
}
|
||||
|
||||
// ConfigMapStore wraps the store interface. Implementations usually persist
|
||||
// contents of the store transparently.
|
||||
type ConfigMapStore interface {
|
||||
cache.Store
|
||||
}
|
||||
|
||||
// ApiServerConfigMapStore only services Add and GetByKey from apiserver.
|
||||
// TODO: Implement all the other store methods and make this a write
|
||||
// through cache.
|
||||
type ApiServerConfigMapStore struct {
|
||||
ConfigMapStore
|
||||
client *client.Client
|
||||
}
|
||||
|
||||
// Add adds the given config map to the apiserver's store.
|
||||
func (a *ApiServerConfigMapStore) Add(obj interface{}) error {
|
||||
cfg := obj.(*api.ConfigMap)
|
||||
_, err := a.client.ConfigMaps(cfg.Namespace).Create(cfg)
|
||||
return err
|
||||
}
|
||||
|
||||
// Update updates the existing config map object.
|
||||
func (a *ApiServerConfigMapStore) Update(obj interface{}) error {
|
||||
cfg := obj.(*api.ConfigMap)
|
||||
_, err := a.client.ConfigMaps(cfg.Namespace).Update(cfg)
|
||||
return err
|
||||
}
|
||||
|
||||
// Delete deletes the existing config map object.
|
||||
func (a *ApiServerConfigMapStore) Delete(obj interface{}) error {
|
||||
cfg := obj.(*api.ConfigMap)
|
||||
return a.client.ConfigMaps(cfg.Namespace).Delete(cfg.Name)
|
||||
}
|
||||
|
||||
// GetByKey returns the config map for a given key.
|
||||
// The key must take the form namespace/name.
|
||||
func (a *ApiServerConfigMapStore) GetByKey(key string) (item interface{}, exists bool, err error) {
|
||||
nsName := strings.Split(key, "/")
|
||||
if len(nsName) != 2 {
|
||||
return nil, false, fmt.Errorf("Failed to get key %v, unexpecte format, expecting ns/name", key)
|
||||
}
|
||||
ns, name := nsName[0], nsName[1]
|
||||
cfg, err := a.client.ConfigMaps(ns).Get(name)
|
||||
if err != nil {
|
||||
// Translate not found errors to found=false, err=nil
|
||||
if errors.IsNotFound(err) {
|
||||
return nil, false, nil
|
||||
}
|
||||
return nil, false, err
|
||||
}
|
||||
return cfg, true, nil
|
||||
}
|
||||
|
||||
// NewConfigMapStore returns a config map store capable of persisting updates
|
||||
// to apiserver.
|
||||
func NewConfigMapStore(c *client.Client) ConfigMapStore {
|
||||
return &ApiServerConfigMapStore{ConfigMapStore: cache.NewStore(cache.MetaNamespaceKeyFunc), client: c}
|
||||
}
|
54 controllers/gce/storage/configmaps_test.go Normal file
@@ -0,0 +1,54 @@
|
|||
/*
|
||||
Copyright 2016 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package storage
|
||||
|
||||
import (
|
||||
"testing"
|
||||
|
||||
"k8s.io/kubernetes/pkg/api"
|
||||
)
|
||||
|
||||
func TestConfigMapUID(t *testing.T) {
|
||||
vault := NewFakeConfigMapVault(api.NamespaceSystem, "ingress-uid")
|
||||
uid := ""
|
||||
k, exists, err := vault.Get()
|
||||
if exists {
|
||||
t.Errorf("Got a key from an empyt vault")
|
||||
}
|
||||
vault.Put(uid)
|
||||
k, exists, err = vault.Get()
|
||||
if !exists || err != nil {
|
||||
t.Errorf("Failed to retrieve value from vault")
|
||||
}
|
||||
if k != "" {
|
||||
t.Errorf("Failed to store empty string as a key in the vault")
|
||||
}
|
||||
vault.Put("newuid")
|
||||
k, exists, err = vault.Get()
|
||||
if !exists || err != nil {
|
||||
t.Errorf("Failed to retrieve value from vault")
|
||||
}
|
||||
if k != "newuid" {
|
||||
t.Errorf("Failed to modify uid")
|
||||
}
|
||||
if err := vault.Delete(); err != nil {
|
||||
t.Errorf("Failed to delete uid %v", err)
|
||||
}
|
||||
if uid, exists, _ := vault.Get(); exists {
|
||||
t.Errorf("Found uid %v, expected none", uid)
|
||||
}
|
||||
}
|
30 controllers/gce/storage/doc.go Normal file
@@ -0,0 +1,30 @@
|
|||
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
// Storage backends used by the Ingress controller.
|
||||
// Ingress controllers require their own storage for the following reasons:
|
||||
// 1. There is only so much information we can pack into 64 chars allowed
|
||||
// by GCE for resource names.
|
||||
// 2. An Ingress controller cannot assume total control over a project, in
|
||||
// fact in a majority of cases (ubernetes, tests, multiple gke clusters in
|
||||
// same project) there *will* be multiple controllers in a project.
|
||||
// 3. If the Ingress controller pod is killed, an Ingress is deleted while
|
||||
// the pod is down, and then the controller is re-scheduled on another node,
|
||||
// it will leak resources. Note that this will happen today because
|
||||
// the only implemented storage backend is InMemoryPool.
|
||||
// 4. Listing from cloudproviders is really slow.
|
||||
|
||||
package storage
|
145 controllers/gce/storage/pools.go Normal file
@@ -0,0 +1,145 @@
|
|||
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package storage
|
||||
|
||||
import (
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/golang/glog"
|
||||
"k8s.io/kubernetes/pkg/client/cache"
|
||||
"k8s.io/kubernetes/pkg/util/wait"
|
||||
)
|
||||
|
||||
// Snapshotter is an interface capable of providing a consistent snapshot of
|
||||
// the underlying storage implementation of a pool. It does not guarantee
|
||||
// thread safety of snapshots, so they should be treated as read only unless
|
||||
// the implementation specifies otherwise.
|
||||
type Snapshotter interface {
|
||||
Snapshot() map[string]interface{}
|
||||
cache.ThreadSafeStore
|
||||
}
|
||||
|
||||
// InMemoryPool is used as a cache for cluster resource pools.
|
||||
type InMemoryPool struct {
|
||||
cache.ThreadSafeStore
|
||||
}
|
||||
|
||||
// Snapshot returns a read only copy of the k:v pairs in the store.
|
||||
// Caller beware: Violates traditional snapshot guarantees.
|
||||
func (p *InMemoryPool) Snapshot() map[string]interface{} {
|
||||
snap := map[string]interface{}{}
|
||||
for _, key := range p.ListKeys() {
|
||||
if item, ok := p.Get(key); ok {
|
||||
snap[key] = item
|
||||
}
|
||||
}
|
||||
return snap
|
||||
}
|
||||
|
||||
// NewInMemoryPool creates an InMemoryPool.
|
||||
func NewInMemoryPool() *InMemoryPool {
|
||||
return &InMemoryPool{
|
||||
cache.NewThreadSafeStore(cache.Indexers{}, cache.Indices{})}
|
||||
}
|
||||
|
||||
type keyFunc func(interface{}) (string, error)
|
||||
|
||||
type cloudLister interface {
|
||||
List() ([]interface{}, error)
|
||||
}
|
||||
|
||||
// CloudListingPool wraps InMemoryPool but relists from the cloud periodically.
|
||||
type CloudListingPool struct {
|
||||
// A lock to protect against concurrent mutation of the pool
|
||||
lock sync.Mutex
|
||||
// The pool that is re-populated via re-list from cloud, and written to
|
||||
// from controller
|
||||
*InMemoryPool
|
||||
// An interface that lists objects from the cloud.
|
||||
lister cloudLister
|
||||
// A function capable of producing a key for a given object.
|
||||
// This key must match the key used to store the same object in the user of
|
||||
// this cache.
|
||||
keyGetter keyFunc
|
||||
}
|
||||
|
||||
// ReplenishPool lists through the cloudLister and inserts into the pool. This
|
||||
// is especially useful in scenarios like deleting an Ingress while the
|
||||
// controller is restarting. As long as the resource exists in the shared
|
||||
// memory pool, it is visible to the caller and they can take corrective
|
||||
// actions, eg: backend pool deletes backends with non-matching node ports
|
||||
// in its sync method.
|
||||
func (c *CloudListingPool) ReplenishPool() {
|
||||
c.lock.Lock()
|
||||
defer c.lock.Unlock()
|
||||
glog.V(4).Infof("Replenishing pool")
|
||||
|
||||
// We must list with the lock, because the controller also lists through
|
||||
// Snapshot(). It's ok if the controller takes a snapshot, we list, we
|
||||
// delete, because we have delete based on the most recent state. Worst
|
||||
// case we thrash. It's not ok if we list, the controller lists and
|
||||
// creates a backend, and we delete that backend based on stale state.
|
||||
items, err := c.lister.List()
|
||||
if err != nil {
|
||||
glog.Warningf("Failed to list: %v", err)
|
||||
return
|
||||
}
|
||||
|
||||
for i := range items {
|
||||
key, err := c.keyGetter(items[i])
|
||||
if err != nil {
|
||||
glog.V(4).Infof("CloudListingPool: %v", err)
|
||||
continue
|
||||
}
|
||||
c.InMemoryPool.Add(key, items[i])
|
||||
}
|
||||
}
|
||||
|
||||
// Snapshot just snapshots the underlying pool.
|
||||
func (c *CloudListingPool) Snapshot() map[string]interface{} {
|
||||
c.lock.Lock()
|
||||
defer c.lock.Unlock()
|
||||
return c.InMemoryPool.Snapshot()
|
||||
}
|
||||
|
||||
// Add simply adds to the underlying pool.
|
||||
func (c *CloudListingPool) Add(key string, obj interface{}) {
|
||||
c.lock.Lock()
|
||||
defer c.lock.Unlock()
|
||||
c.InMemoryPool.Add(key, obj)
|
||||
}
|
||||
|
||||
// Delete just deletes from underlying pool.
|
||||
func (c *CloudListingPool) Delete(key string) {
|
||||
c.lock.Lock()
|
||||
defer c.lock.Unlock()
|
||||
c.InMemoryPool.Delete(key)
|
||||
}
|
||||
|
||||
// NewCloudListingPool replenishes the InMemoryPool through a background
|
||||
// goroutine that lists from the given cloudLister.
|
||||
func NewCloudListingPool(k keyFunc, lister cloudLister, relistPeriod time.Duration) *CloudListingPool {
|
||||
cl := &CloudListingPool{
|
||||
InMemoryPool: NewInMemoryPool(),
|
||||
lister: lister,
|
||||
keyGetter: k,
|
||||
}
|
||||
glog.V(4).Infof("Starting pool replenish goroutine")
|
||||
go wait.Until(cl.ReplenishPool, relistPeriod, make(chan struct{}))
|
||||
return cl
|
||||
}
|
21 controllers/gce/utils/doc.go Normal file
@@ -0,0 +1,21 @@
|
|||
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
// utils contains odd structs, constants etc that don't fit cleanly into any
|
||||
// sub-module because they're shared. Ideally this module wouldn't exist, but
|
||||
// sharing these odd bits reduces margin for error.
|
||||
|
||||
package utils
|
316 controllers/gce/utils/utils.go Normal file
@@ -0,0 +1,316 @@
|
|||
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package utils
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strconv"
|
||||
"strings"
|
||||
"sync"
|
||||
|
||||
"github.com/golang/glog"
|
||||
compute "google.golang.org/api/compute/v1"
|
||||
"google.golang.org/api/googleapi"
|
||||
"regexp"
|
||||
)
|
||||
|
||||
const (
|
||||
// Add used to record additions in a sync pool.
|
||||
Add = iota
|
||||
// Remove used to record removals from a sync pool.
|
||||
Remove
|
||||
// Sync used to record syncs of a sync pool.
|
||||
Sync
|
||||
// Get used to record Get from a sync pool.
|
||||
Get
|
||||
// Create used to record creations in a sync pool.
|
||||
Create
|
||||
// Update used to record updates in a sync pool.
|
||||
Update
|
||||
// Delete used to record deletions from a sync pool.
|
||||
Delete
|
||||
// AddInstances used to record a call to AddInstances.
|
||||
AddInstances
|
||||
// RemoveInstances used to record a call to RemoveInstances.
|
||||
RemoveInstances
|
||||
|
||||
// This allows sharing of backends across loadbalancers.
|
||||
backendPrefix = "k8s-be"
|
||||
backendRegex = "k8s-be-([0-9]+).*"
|
||||
|
||||
// Prefix used for instance groups involved in L7 balancing.
|
||||
igPrefix = "k8s-ig"
|
||||
|
||||
// Suffix used in the l7 firewall rule. There is currently only one.
|
||||
// Note that this name is used by the cloudprovider lib that inserts its
|
||||
// own k8s-fw prefix.
|
||||
globalFirewallSuffix = "l7"
|
||||
|
||||
// A delimiter used for clarity in naming GCE resources.
|
||||
clusterNameDelimiter = "--"
|
||||
|
||||
// Arbitrarily chosen alphanumeric character to use in constructing resource
|
||||
// names, eg: to avoid cases where we end up with a name ending in '-'.
|
||||
alphaNumericChar = "0"
|
||||
|
||||
// Names longer than this are truncated, because of GCE restrictions.
|
||||
nameLenLimit = 62
|
||||
|
||||
// DefaultBackendKey is the key used to transmit the defaultBackend through
|
||||
// a urlmap. It's not a valid subdomain, and it is a catch all path.
|
||||
// TODO: Find a better way to transmit this, once we've decided on default
|
||||
// backend semantics (i.e do we want a default per host, per lb etc).
|
||||
DefaultBackendKey = "DefaultBackend"
|
||||
|
||||
// K8sAnnotationPrefix is the prefix used in annotations used to record
|
||||
// debug information in the Ingress annotations.
|
||||
K8sAnnotationPrefix = "ingress.kubernetes.io"
|
||||
)
|
||||
|
||||
// Namer handles centralized naming for the cluster.
|
||||
type Namer struct {
|
||||
clusterName string
|
||||
nameLock sync.Mutex
|
||||
}
|
||||
|
||||
// NewNamer creates a new namer.
|
||||
func NewNamer(clusterName string) *Namer {
|
||||
namer := &Namer{}
|
||||
namer.SetClusterName(clusterName)
|
||||
return namer
|
||||
}
|
||||
|
||||
// NameComponents is a struct representing the components of a GCE resource
|
||||
// name constructed by the namer. The format of such a name is:
|
||||
// k8s-resource-<metadata, eg port>--uid
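// e.g. (hypothetical values): a backend for node port 30001 in a cluster with
// uid "uid1" is named "k8s-be-30001--uid1".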
|
||||
type NameComponents struct {
|
||||
ClusterName, Resource, Metadata string
|
||||
}
|
||||
|
||||
// SetClusterName sets the UID/name of this cluster.
|
||||
func (n *Namer) SetClusterName(name string) {
|
||||
n.nameLock.Lock()
|
||||
defer n.nameLock.Unlock()
|
||||
if strings.Contains(name, clusterNameDelimiter) {
|
||||
tokens := strings.Split(name, clusterNameDelimiter)
|
||||
glog.Warningf("Given name %v contains %v, taking last token in: %+v", name, clusterNameDelimiter, tokens)
|
||||
name = tokens[len(tokens)-1]
|
||||
}
|
||||
glog.Infof("Changing cluster name from %v to %v", n.clusterName, name)
|
||||
n.clusterName = name
|
||||
}
|
||||
|
||||
// GetClusterName returns the UID/name of this cluster.
|
||||
func (n *Namer) GetClusterName() string {
|
||||
n.nameLock.Lock()
|
||||
defer n.nameLock.Unlock()
|
||||
return n.clusterName
|
||||
}
|
||||
|
||||
// Truncate truncates the given key to a GCE length limit.
|
||||
func (n *Namer) Truncate(key string) string {
|
||||
if len(key) > nameLenLimit {
|
||||
// GCE requires names to end with an alphanumeric, but allows characters
|
||||
// like '-', so make sure the truncated name ends legally.
|
||||
return fmt.Sprintf("%v%v", key[:nameLenLimit], alphaNumericChar)
|
||||
}
|
||||
return key
|
||||
}
|
||||
|
||||
func (n *Namer) decorateName(name string) string {
|
||||
clusterName := n.GetClusterName()
|
||||
if clusterName == "" {
|
||||
return name
|
||||
}
|
||||
return n.Truncate(fmt.Sprintf("%v%v%v", name, clusterNameDelimiter, clusterName))
|
||||
}
|
||||
|
||||
// ParseName parses the name of a resource generated by the namer.
|
||||
func (n *Namer) ParseName(name string) *NameComponents {
|
||||
l := strings.Split(name, clusterNameDelimiter)
|
||||
var uid, resource string
|
||||
if len(l) >= 2 {
|
||||
uid = l[len(l)-1]
|
||||
}
|
||||
c := strings.Split(name, "-")
|
||||
if len(c) >= 2 {
|
||||
resource = c[1]
|
||||
}
|
||||
return &NameComponents{
|
||||
ClusterName: uid,
|
||||
Resource: resource,
|
||||
}
|
||||
}
|
||||
|
||||
// NameBelongsToCluster checks if a given name is tagged with this cluster's UID.
|
||||
func (n *Namer) NameBelongsToCluster(name string) bool {
|
||||
if !strings.HasPrefix(name, "k8s-") {
|
||||
glog.V(4).Infof("%v not part of cluster", name)
|
||||
return false
|
||||
}
|
||||
parts := strings.Split(name, clusterNameDelimiter)
|
||||
clusterName := n.GetClusterName()
|
||||
if len(parts) == 1 {
|
||||
if clusterName == "" {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
if len(parts) > 2 {
|
||||
glog.Warningf("Too many parts to name %v, ignoring", name)
|
||||
return false
|
||||
}
|
||||
return parts[1] == clusterName
|
||||
}
|
||||
|
||||
// BeName constructs the name for a backend.
|
||||
func (n *Namer) BeName(port int64) string {
|
||||
return n.decorateName(fmt.Sprintf("%v-%d", backendPrefix, port))
|
||||
}
|
||||
|
||||
// BePort retrieves the port from the given backend name.
|
||||
func (n *Namer) BePort(beName string) (string, error) {
|
||||
r, err := regexp.Compile(backendRegex)
|
||||
if err != nil {
|
||||
return "", err
|
||||
}
|
||||
match := r.FindStringSubmatch(beName)
|
||||
if len(match) < 2 {
|
||||
return "", fmt.Errorf("Unable to lookup port for %v", beName)
|
||||
}
|
||||
_, err = strconv.Atoi(match[1])
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("Unexpected regex match: %v", beName)
|
||||
}
|
||||
return match[1], nil
|
||||
}
|
||||
|
||||
// IGName constructs the name for an Instance Group.
|
||||
func (n *Namer) IGName() string {
|
||||
// Currently all ports are added to a single instance group.
|
||||
return n.decorateName(igPrefix)
|
||||
}
|
||||
|
||||
// FrSuffix constructs the glbc specific suffix for the FirewallRule.
|
||||
func (n *Namer) FrSuffix() string {
|
||||
clusterName := n.GetClusterName()
|
||||
// The entire cluster only needs a single firewall rule.
|
||||
if clusterName == "" {
|
||||
return globalFirewallSuffix
|
||||
}
|
||||
return n.Truncate(fmt.Sprintf("%v%v%v", globalFirewallSuffix, clusterNameDelimiter, clusterName))
|
||||
}
|
||||
|
||||
// FrName constructs the full firewall rule name, this is the name assigned by
|
||||
// the cloudprovider lib + suffix from glbc, so we don't mix this rule with a
|
||||
// rule created for L4 loadbalancing.
|
||||
func (n *Namer) FrName(suffix string) string {
|
||||
return fmt.Sprintf("k8s-fw-%s", suffix)
|
||||
}
|
||||
|
||||
// LBName constructs a loadbalancer name from the given key. The key is usually
|
||||
// the namespace/name of a Kubernetes Ingress.
|
||||
func (n *Namer) LBName(key string) string {
|
||||
// TODO: Pipe the clusterName through, for now it saves code churn to just
|
||||
// grab it globally, especially since we haven't decided how to handle
|
||||
// namespace conflicts in the Ubernetes context.
|
||||
parts := strings.Split(key, clusterNameDelimiter)
|
||||
scrubbedName := strings.Replace(key, "/", "-", -1)
|
||||
clusterName := n.GetClusterName()
|
||||
if clusterName == "" || parts[len(parts)-1] == clusterName {
|
||||
return scrubbedName
|
||||
}
|
||||
return n.Truncate(fmt.Sprintf("%v%v%v", scrubbedName, clusterNameDelimiter, clusterName))
|
||||
}
|
||||
|
||||
// GCEURLMap is a nested map of hostname->path regex->backend
|
||||
type GCEURLMap map[string]map[string]*compute.BackendService
|
||||
|
||||
// GetDefaultBackend performs a destructive read and returns the default
|
||||
// backend of the urlmap.
|
||||
func (g GCEURLMap) GetDefaultBackend() *compute.BackendService {
|
||||
var d *compute.BackendService
|
||||
var exists bool
|
||||
if h, ok := g[DefaultBackendKey]; ok {
|
||||
if d, exists = h[DefaultBackendKey]; exists {
|
||||
delete(h, DefaultBackendKey)
|
||||
}
|
||||
delete(g, DefaultBackendKey)
|
||||
}
|
||||
return d
|
||||
}
|
||||
|
||||
// String implements fmt.Stringer for GCEURLMap.
|
||||
func (g GCEURLMap) String() string {
|
||||
msg := ""
|
||||
for host, um := range g {
|
||||
msg += fmt.Sprintf("%v\n", host)
|
||||
for url, be := range um {
|
||||
msg += fmt.Sprintf("\t%v: ", url)
|
||||
if be == nil {
|
||||
msg += fmt.Sprintf("No backend\n")
|
||||
} else {
|
||||
msg += fmt.Sprintf("%v\n", be.Name)
|
||||
}
|
||||
}
|
||||
}
|
||||
return msg
|
||||
}
|
||||
|
||||
// PutDefaultBackend performs a destructive write replacing the
|
||||
// default backend of the url map with the given backend.
|
||||
func (g GCEURLMap) PutDefaultBackend(d *compute.BackendService) {
|
||||
g[DefaultBackendKey] = map[string]*compute.BackendService{
|
||||
DefaultBackendKey: d,
|
||||
}
|
||||
}
|
||||
|
||||
// IsHTTPErrorCode checks if the given error matches the given HTTP Error code.
|
||||
// For this to work the error must be a googleapi Error.
|
||||
func IsHTTPErrorCode(err error, code int) bool {
|
||||
apiErr, ok := err.(*googleapi.Error)
|
||||
return ok && apiErr.Code == code
|
||||
}
|
||||
|
||||
// CompareLinks returns true if the 2 self links are equal.
|
||||
func CompareLinks(l1, l2 string) bool {
|
||||
// TODO: These can be partial links
|
||||
return l1 == l2 && l1 != ""
|
||||
}
|
||||
|
||||
// FakeIngressRuleValueMap is a convenience type used by multiple submodules
|
||||
// that share the same testing methods.
|
||||
type FakeIngressRuleValueMap map[string]string
|
||||
|
||||
// DefaultHealthCheckTemplate simply returns the default health check template.
|
||||
func DefaultHealthCheckTemplate(port int64) *compute.HttpHealthCheck {
|
||||
return &compute.HttpHealthCheck{
|
||||
Port: port,
|
||||
// Empty string is used as a signal to the caller to use the appropriate
|
||||
// default.
|
||||
RequestPath: "",
|
||||
Description: "Default kubernetes L7 Loadbalancing health check.",
|
||||
// How often to health check.
|
||||
CheckIntervalSec: 1,
|
||||
// How long to wait before claiming failure of a health check.
|
||||
TimeoutSec: 1,
|
||||
// Number of healthchecks to pass for a vm to be deemed healthy.
|
||||
HealthyThreshold: 1,
|
||||
// Number of healthchecks to fail before the vm is deemed unhealthy.
|
||||
UnhealthyThreshold: 10,
|
||||
}
|
||||
}
|
18 controllers/nginx-alpha/Dockerfile Normal file
@@ -0,0 +1,18 @@
|
|||
# Copyright 2015 The Kubernetes Authors. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
FROM gcr.io/google_containers/nginx
|
||||
COPY controller /
|
||||
COPY default.conf /etc/nginx/nginx.conf
|
||||
CMD ["/controller"]
|
17 controllers/nginx-alpha/Makefile Normal file
@@ -0,0 +1,17 @@
|
|||
all: push
|
||||
|
||||
# 0.0 shouldn't clobber any release builds
|
||||
TAG = 0.0
|
||||
PREFIX = gcr.io/google_containers/nginx-ingress
|
||||
|
||||
controller: controller.go
|
||||
CGO_ENABLED=0 GOOS=linux godep go build -a -installsuffix cgo -ldflags '-w' -o controller ./controller.go
|
||||
|
||||
container: controller
|
||||
docker build -t $(PREFIX):$(TAG) .
|
||||
|
||||
push: container
|
||||
gcloud docker push $(PREFIX):$(TAG)
|
||||
|
||||
clean:
|
||||
rm -f controller
|
116 controllers/nginx-alpha/README.md Normal file
@@ -0,0 +1,116 @@
|
|||
# Nginx Ingress Controller
|
||||
|
||||
This is a simple nginx Ingress controller. Expect it to grow up. See [Ingress controller documentation](../README.md) for details on how it works.
|
||||
|
||||
## Deploying the controller
|
||||
|
||||
Deploying the controller is as easy as creating the RC in this directory. Having done so you can test it with the following echoheaders application:
|
||||
|
||||
```yaml
|
||||
# 3 Services for the 3 endpoints of the Ingress
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: echoheaders-x
|
||||
labels:
|
||||
app: echoheaders
|
||||
spec:
|
||||
type: NodePort
|
||||
ports:
|
||||
- port: 80
|
||||
nodePort: 30301
|
||||
targetPort: 8080
|
||||
protocol: TCP
|
||||
name: http
|
||||
selector:
|
||||
app: echoheaders
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: echoheaders-default
|
||||
labels:
|
||||
app: echoheaders
|
||||
spec:
|
||||
type: NodePort
|
||||
ports:
|
||||
- port: 80
|
||||
nodePort: 30302
|
||||
targetPort: 8080
|
||||
protocol: TCP
|
||||
name: http
|
||||
selector:
|
||||
app: echoheaders
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: echoheaders-y
|
||||
labels:
|
||||
app: echoheaders
|
||||
spec:
|
||||
type: NodePort
|
||||
ports:
|
||||
- port: 80
|
||||
nodePort: 30284
|
||||
targetPort: 8080
|
||||
protocol: TCP
|
||||
name: http
|
||||
selector:
|
||||
app: echoheaders
|
||||
---
|
||||
# A single RC matching all Services
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: echoheaders
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: echoheaders
|
||||
spec:
|
||||
containers:
|
||||
- name: echoheaders
|
||||
image: gcr.io/google_containers/echoserver:1.0
|
||||
ports:
|
||||
- containerPort: 8080
|
||||
---
|
||||
# An Ingress with 2 hosts and 3 endpoints
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: echomap
|
||||
spec:
|
||||
rules:
|
||||
- host: foo.bar.com
|
||||
http:
|
||||
paths:
|
||||
- path: /foo
|
||||
backend:
|
||||
serviceName: echoheaders-x
|
||||
servicePort: 80
|
||||
- host: bar.baz.com
|
||||
http:
|
||||
paths:
|
||||
- path: /bar
|
||||
backend:
|
||||
serviceName: echoheaders-y
|
||||
servicePort: 80
|
||||
- path: /foo
|
||||
backend:
|
||||
serviceName: echoheaders-x
|
||||
servicePort: 80
|
||||
```
|
||||
You should be able to access the Services on the public IP of the node the nginx pod lands on.
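As a quick smoke test (a sketch, assuming `$NODE_IP` is that node's public IP and the echoheaders Services above are running), you can exercise the two hosts with curl:

```console
# routes to echoheaders-x via the foo.bar.com rule
$ curl -H "Host: foo.bar.com" http://$NODE_IP/foo
# routes to echoheaders-y via the bar.baz.com rule
$ curl -H "Host: bar.baz.com" http://$NODE_IP/bar
```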
## Wishlist
|
||||
|
||||
* SSL/TLS
|
||||
* Production ready options
|
||||
* Dynamically adding backends
|
||||
* Varied loadbalancing algorithms
|
||||
|
||||
... this list goes on. If you feel you know nginx better than I do, please contribute.
|
||||
|
95 controllers/nginx-alpha/controller.go Normal file
@@ -0,0 +1,95 @@
|
|||
/*
|
||||
Copyright 2015 The Kubernetes Authors.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"log"
|
||||
"os"
|
||||
"os/exec"
|
||||
"reflect"
|
||||
"text/template"
|
||||
|
||||
"k8s.io/kubernetes/pkg/api"
|
||||
"k8s.io/kubernetes/pkg/apis/extensions"
|
||||
client "k8s.io/kubernetes/pkg/client/unversioned"
|
||||
"k8s.io/kubernetes/pkg/util/flowcontrol"
|
||||
)
|
||||
|
||||
const (
|
||||
nginxConf = `
|
||||
events {
|
||||
worker_connections 1024;
|
||||
}
|
||||
http {
|
||||
# http://nginx.org/en/docs/http/ngx_http_core_module.html
|
||||
types_hash_max_size 2048;
|
||||
server_names_hash_max_size 512;
|
||||
server_names_hash_bucket_size 64;
|
||||
|
||||
{{range $ing := .Items}}
|
||||
{{range $rule := $ing.Spec.Rules}}
|
||||
server {
|
||||
listen 80;
|
||||
server_name {{$rule.Host}};
|
||||
{{ range $path := $rule.HTTP.Paths }}
|
||||
location {{$path.Path}} {
|
||||
proxy_set_header Host $host;
|
||||
proxy_pass http://{{$path.Backend.ServiceName}}.{{$ing.Namespace}}.svc.cluster.local:{{$path.Backend.ServicePort}};
|
||||
}{{end}}
|
||||
}{{end}}{{end}}
|
||||
}`
|
||||
)
|
||||
|
||||
func shellOut(cmd string) {
|
||||
out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to execute %v: %v, err: %v", cmd, string(out), err)
|
||||
}
|
||||
}
|
||||
|
||||
func main() {
|
||||
var ingClient client.IngressInterface
|
||||
if kubeClient, err := client.NewInCluster(); err != nil {
|
||||
log.Fatalf("Failed to create client: %v.", err)
|
||||
} else {
|
||||
ingClient = kubeClient.Extensions().Ingress(api.NamespaceAll)
|
||||
}
|
||||
tmpl, _ := template.New("nginx").Parse(nginxConf)
|
||||
rateLimiter := flowcontrol.NewTokenBucketRateLimiter(0.1, 1)
|
||||
known := &extensions.IngressList{}
|
||||
|
||||
// Controller loop
|
||||
shellOut("nginx")
|
||||
for {
|
||||
rateLimiter.Accept()
|
||||
ingresses, err := ingClient.List(api.ListOptions{})
|
||||
if err != nil {
|
||||
log.Printf("Error retrieving ingresses: %v", err)
|
||||
continue
|
||||
}
|
||||
if reflect.DeepEqual(ingresses.Items, known.Items) {
|
||||
continue
|
||||
}
|
||||
known = ingresses
|
||||
if w, err := os.Create("/etc/nginx/nginx.conf"); err != nil {
|
||||
log.Fatalf("Failed to open %v: %v", nginxConf, err)
|
||||
} else if err := tmpl.Execute(w, ingresses); err != nil {
|
||||
log.Fatalf("Failed to write template %v", err)
|
||||
}
|
||||
shellOut("nginx -s reload")
|
||||
}
|
||||
}
|
4 controllers/nginx-alpha/default.conf Normal file
@@ -0,0 +1,4 @@
|
|||
# A very simple nginx configuration file that forces nginx to start as a daemon.
|
||||
events {}
|
||||
http {}
|
||||
daemon on;
|
22 controllers/nginx-alpha/rc.yaml Normal file
@@ -0,0 +1,22 @@
|
|||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx-ingress
|
||||
labels:
|
||||
app: nginx-ingress
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
app: nginx-ingress
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx-ingress
|
||||
spec:
|
||||
containers:
|
||||
- image: gcr.io/google_containers/nginx-ingress:0.1
|
||||
imagePullPolicy: Always
|
||||
name: nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
hostPort: 80
|
1 controllers/nginx/.gitignore vendored Normal file
@@ -0,0 +1 @@
|
|||
nginx-ingress-controller
|
56 controllers/nginx/Changelog.md Normal file
@@ -0,0 +1,56 @@
|
|||
Changelog
|
||||
|
||||
### 0.8.3
|
||||
|
||||
- [X] [#1450](https://github.com/kubernetes/contrib/pull/1450) Check for errors in nginx template
|
||||
- [ ] [#1498](https://github.com/kubernetes/contrib/pull/1498) Refactoring of template handling
|
||||
- [X] [#1467](https://github.com/kubernetes/contrib/pull/1467) Use ClientConfig to configure connection
|
||||
- [X] [#1575](https://github.com/kubernetes/contrib/pull/1575) Update nginx to 1.11.3
|
||||
|
||||
### 0.8.2
|
||||
|
||||
- [X] [#1336](https://github.com/kubernetes/contrib/pull/1336) Add annotation to skip ingress rule
|
||||
- [X] [#1338](https://github.com/kubernetes/contrib/pull/1338) Add HTTPS default backend
|
||||
- [X] [#1351](https://github.com/kubernetes/contrib/pull/1351) Avoid generation of invalid ssl certificates
|
||||
- [X] [#1379](https://github.com/kubernetes/contrib/pull/1379) improve nginx performance
|
||||
- [X] [#1350](https://github.com/kubernetes/contrib/pull/1350) Improve performance (listen backlog=net.core.somaxconn)
|
||||
- [X] [#1384](https://github.com/kubernetes/contrib/pull/1384) Unset Authorization header when proxying
|
||||
- [X] [#1398](https://github.com/kubernetes/contrib/pull/1398) Mitigate HTTPoxy Vulnerability
|
||||
|
||||
### 0.8.1
|
||||
|
||||
- [X] [#1317](https://github.com/kubernetes/contrib/pull/1317) Fix duplicated real_ip_header
|
||||
- [X] [#1315](https://github.com/kubernetes/contrib/pull/1315) Addresses #1314
|
||||
|
||||
### 0.8
|
||||
|
||||
- [X] [#1063](https://github.com/kubernetes/contrib/pull/1063) watches referenced tls secrets
|
||||
- [X] [#850](https://github.com/kubernetes/contrib/pull/850) adds configurable SSL redirect nginx controller
|
||||
- [X] [#1136](https://github.com/kubernetes/contrib/pull/1136) Fix nginx rewrite rule order
|
||||
- [X] [#1144](https://github.com/kubernetes/contrib/pull/1144) Add cidr whitelist support
|
||||
- [X] [#1230](https://github.com/kubernetes/contrib/pull/1130) Improve docs and examples
|
||||
- [X] [#1258](https://github.com/kubernetes/contrib/pull/1258) Avoid sync without a reachable
|
||||
- [X] [#1235](https://github.com/kubernetes/contrib/pull/1235) Fix stats by country in nginx status page
|
||||
- [X] [#1236](https://github.com/kubernetes/contrib/pull/1236) Update nginx to add dynamic TLS records and spdy
|
||||
- [X] [#1238](https://github.com/kubernetes/contrib/pull/1238) Add support for dynamic TLS records and spdy
|
||||
- [X] [#1239](https://github.com/kubernetes/contrib/pull/1239) Add support for conditional log of urls
|
||||
- [X] [#1253](https://github.com/kubernetes/contrib/pull/1253) Use delayed queue
|
||||
- [X] [#1296](https://github.com/kubernetes/contrib/pull/1296) Fix formatting
|
||||
- [X] [#1299](https://github.com/kubernetes/contrib/pull/1299) Fix formatting
|
||||
|
||||
### 0.7
|
||||
|
||||
- [X] [#898](https://github.com/kubernetes/contrib/pull/898) reorder locations. Location / must be the last one to avoid errors routing to subroutes
|
||||
- [X] [#946](https://github.com/kubernetes/contrib/pull/946) Add custom authentication (Basic or Digest) to ingress rules
|
||||
- [X] [#926](https://github.com/kubernetes/contrib/pull/926) Custom errors should be optional
|
||||
- [X] [#1002](https://github.com/kubernetes/contrib/pull/1002) Use k8s probes (disable NGINX checks)
|
||||
- [X] [#962](https://github.com/kubernetes/contrib/pull/962) Make optional http2
|
||||
- [X] [#1054](https://github.com/kubernetes/contrib/pull/1054) force reload if some certificate change
|
||||
- [X] [#958](https://github.com/kubernetes/contrib/pull/958) update NGINX to 1.11.0 and add digest module
|
||||
- [X] [#960](https://github.com/kubernetes/contrib/issues/960) https://trac.nginx.org/nginx/changeset/ce94f07d50826fcc8d48f046fe19d59329420fdb/nginx
|
||||
- [X] [#1057](https://github.com/kubernetes/contrib/pull/1057) Remove loadBalancer ip on shutdown
|
||||
- [X] [#1079](https://github.com/kubernetes/contrib/pull/1079) path rewrite
|
||||
- [X] [#1093](https://github.com/kubernetes/contrib/pull/1093) rate limiting
|
||||
- [X] [#1102](https://github.com/kubernetes/contrib/pull/1102) geolocation of traffic in stats
|
||||
- [X] [#884](https://github.com/kubernetes/contrib/issues/884) support services running ssl
|
||||
- [X] [#930](https://github.com/kubernetes/contrib/issues/930) detect changes in configuration configmaps
|
33 controllers/nginx/Dockerfile Normal file
@@ -0,0 +1,33 @@
|
|||
# Copyright 2015 The Kubernetes Authors. All rights reserved.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
FROM gcr.io/google_containers/nginx-slim:0.9
|
||||
|
||||
RUN DEBIAN_FRONTEND=noninteractive apt-get update && apt-get install -y \
|
||||
diffutils \
|
||||
ssl-cert \
|
||||
--no-install-recommends \
|
||||
&& rm -rf /var/lib/apt/lists/* \
|
||||
&& make-ssl-cert generate-default-snakeoil --force-overwrite
|
||||
|
||||
COPY nginx-ingress-controller /
|
||||
COPY nginx.tmpl /etc/nginx/template/nginx.tmpl
|
||||
COPY nginx.tmpl /etc/nginx/nginx.tmpl
|
||||
COPY default.conf /etc/nginx/nginx.conf
|
||||
|
||||
COPY lua /etc/nginx/lua/
|
||||
|
||||
WORKDIR /
|
||||
|
||||
CMD ["/nginx-ingress-controller"]
|
25 controllers/nginx/Makefile Normal file
@@ -0,0 +1,25 @@
|
|||
all: push

# 0.0 shouldn't clobber any release builds
TAG = 0.8.3
PREFIX = gcr.io/google_containers/nginx-ingress-controller

REPO_INFO=$(shell git config --get remote.origin.url)

ifndef VERSION
  VERSION := git-$(shell git rev-parse --short HEAD)
endif

controller: controller.go clean
	CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -ldflags \
		"-s -w -X main.version=${VERSION} -X main.gitRepo=${REPO_INFO}" \
		-o nginx-ingress-controller

container: controller
	docker build -t $(PREFIX):$(TAG) .

push: container
	gcloud docker push $(PREFIX):$(TAG)

clean:
	rm -f nginx-ingress-controller
462
controllers/nginx/README.md
Normal file
@ -0,0 +1,462 @@
# Nginx Ingress Controller

This is an nginx Ingress controller that uses [ConfigMap](https://github.com/kubernetes/kubernetes/blob/master/docs/design/configmap.md) to store the nginx configuration. See [Ingress controller documentation](../README.md) for details on how it works.

## Contents
* [Conventions](#conventions)
* [Requirements](#requirements)
* [Dry running](#dry-running-the-ingress-controller)
* [Deployment](#deployment)
* [HTTP](#http)
* [HTTPS](#https)
* [Default SSL Certificate](#default-ssl-certificate)
* [HTTPS enforcement](#server-side-https-enforcement)
* [HSTS](#http-strict-transport-security)
* [Kube-Lego](#automated-certificate-management-with-kube-lego)
* [TCP Services](#exposing-tcp-services)
* [UDP Services](#exposing-udp-services)
* [Proxy Protocol](#proxy-protocol)
* [NGINX customization](configuration.md)
* [NGINX status page](#nginx-status-page)
* [Running multiple ingress controllers](#running-multiple-ingress-controllers)
* [Running on Cloudproviders](#running-on-cloudproviders)
* [Disabling NGINX ingress controller](#disabling-nginx-ingress-controller)
* [Log format](#log-format)
* [Local cluster](#local-cluster)
* [Debug & Troubleshooting](#troubleshooting)
* [Why endpoints and not services?](#why-endpoints-and-not-services)
* [Limitations](#limitations)
* [NGINX Notes](#nginx-notes)

## Conventions

Anytime we reference a TLS secret, we mean a PEM-encoded x509 certificate and its matching RSA 2048 private key. You can generate such a certificate with:
`openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout $(KEY) -out $(CERT) -subj "/CN=$(HOST)/O=$(HOST)"`
and create the secret via `kubectl create secret tls <secret name> --key <key file> --cert <cert file>`


## Requirements
- Default backend [404-server](https://github.com/kubernetes/contrib/tree/master/404-server)


## Dry running the Ingress controller

Before deploying the controller to production you might want to run it outside the cluster and observe it.

```console
$ make controller
$ mkdir /etc/nginx-ssl
$ ./nginx-ingress-controller --running-in-cluster=false --default-backend-service=kube-system/default-http-backend
```

## Deployment

First create a default backend:
```
$ kubectl create -f examples/default-backend.yaml
$ kubectl expose rc default-http-backend --port=80 --target-port=8080 --name=default-http-backend
```

Loadbalancers are created via a ReplicationController or DaemonSet:

```
$ kubectl create -f examples/default/rc-default.yaml
```

## HTTP

First we need to deploy an application to publish. To keep this simple we will use the [echoheaders app](https://github.com/kubernetes/contrib/blob/master/ingress/echoheaders/echo-app.yaml), which just returns information about the HTTP request as output:
```
kubectl run echoheaders --image=gcr.io/google_containers/echoserver:1.4 --replicas=1 --port=8080
```

Now we expose the same application in two different services (so we can create different Ingress rules):
```
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-y
```

Next we create a couple of Ingress rules:
```
kubectl create -f examples/ingress.yaml
```
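
For reference, an Ingress with rules like the ones used here could look as follows. This is only a sketch built from the `kubectl get ing` output shown below; the actual contents of `examples/ingress.yaml` may differ slightly:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echomap
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: echoheaders-x
          servicePort: 80
  - host: bar.baz.com
    http:
      paths:
      - path: /bar
        backend:
          serviceName: echoheaders-y
          servicePort: 80
      - path: /foo
        backend:
          serviceName: echoheaders-x
          servicePort: 80
```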

Check that the Ingress rules are defined:
```
$ kubectl get ing
NAME      RULE          BACKEND   ADDRESS
echomap   -
          foo.bar.com
          /foo          echoheaders-x:80
          bar.baz.com
          /bar          echoheaders-y:80
          /foo          echoheaders-x:80
```

Before deploying the Ingress controller we need a default backend [404-server](https://github.com/kubernetes/contrib/tree/master/404-server):
```
kubectl create -f examples/default-backend.yaml
kubectl expose rc default-http-backend --port=80 --target-port=8080 --name=default-http-backend
```

Check that NGINX is running with the defined Ingress rules:

```
$ LBIP=$(kubectl get node `kubectl get po -l name=nginx-ingress-lb --template '{{range .items}}{{.spec.nodeName}}{{end}}'` --template '{{range $i, $n := .status.addresses}}{{if eq $n.type "ExternalIP"}}{{$n.address}}{{end}}{{end}}')
$ curl $LBIP/foo -H 'Host: foo.bar.com'
```

## HTTPS

You can secure an Ingress by specifying a secret that contains a TLS private key and certificate. Currently the Ingress only supports a single TLS port, 443, and assumes TLS termination. This controller supports SNI. The TLS secret must contain keys named tls.crt and tls.key that contain the certificate and private key to use for TLS, e.g.:

```
apiVersion: v1
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
kind: Secret
metadata:
  name: testsecret
  namespace: default
type: Opaque
```

Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the loadbalancer using TLS:

```
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: testsecret
  backend:
    serviceName: s1
    servicePort: 80
```
Please follow [test.sh](https://github.com/bprashanth/Ingress/blob/master/examples/sni/nginx/test.sh) as a guide on how to generate secrets containing SSL certificates. The name of the secret can be different than the name of the certificate.

Check the [example](examples/tls/README.md)

### Default SSL Certificate

NGINX supports a catch-all [server name](http://nginx.org/en/docs/http/server_names.html) for requests that do not match any of the configured server names. This configuration works without issues for HTTP traffic. For HTTPS, however, NGINX requires a certificate. For this reason the Ingress controller provides the flag `--default-ssl-certificate`. The secret behind this flag contains the default certificate to be used in that case.
If this flag is not provided NGINX will use a self-signed certificate.

Running without the flag `--default-ssl-certificate`:

```
$ curl -v https://10.2.78.7:443 -k
* Rebuilt URL to: https://10.2.78.7:443/
*   Trying 10.2.78.4...
* Connected to 10.2.78.7 (10.2.78.7) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use http/1.1
* Server certificate:
*  subject: CN=foo.bar.com
*  start date: Apr 13 00:50:56 2016 GMT
*  expire date: Apr 13 00:50:56 2017 GMT
*  issuer: CN=foo.bar.com
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET / HTTP/1.1
> Host: 10.2.78.7
> User-Agent: curl/7.47.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.11.1
< Date: Thu, 21 Jul 2016 15:38:46 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
<
<span>The page you're looking for could not be found.</span>

* Connection #0 to host 10.2.78.7 left intact
```

Specifying `--default-ssl-certificate=default/foo-tls`:

```
core@localhost ~ $ curl -v https://10.2.78.7:443 -k
* Rebuilt URL to: https://10.2.78.7:443/
*   Trying 10.2.78.7...
* Connected to 10.2.78.7 (10.2.78.7) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN, server accepted to use http/1.1
* Server certificate:
*  subject: CN=foo.bar.com
*  start date: Apr 13 00:50:56 2016 GMT
*  expire date: Apr 13 00:50:56 2017 GMT
*  issuer: CN=foo.bar.com
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET / HTTP/1.1
> Host: 10.2.78.7
> User-Agent: curl/7.47.1
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.11.1
< Date: Mon, 18 Jul 2016 21:02:59 GMT
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload
<
<span>The page you're looking for could not be found.</span>

* Connection #0 to host 10.2.78.7 left intact
```
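
The flag itself is passed as an argument to the controller binary. A sketch of how it might appear in the replication controller spec, assuming a TLS secret named `foo-tls` in the `default` namespace:

```yaml
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
# secret used as the catch-all certificate for HTTPS requests
# that do not match a configured server name
- --default-ssl-certificate=default/foo-tls
```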

### Server-side HTTPS enforcement

By default the controller redirects (301) to HTTPS if TLS is enabled for that Ingress. If you want to disable that behaviour globally, you can use `ssl-redirect: "false"` in the NGINX config map.

To configure this feature for specific Ingress resources, you can use the `ingress.kubernetes.io/ssl-redirect: "false"` annotation in the particular resource.


### HTTP Strict Transport Security

HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header, that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.

By default the controller adds this behaviour (along with the 301 redirect to HTTPS) whenever there is a TLS Ingress rule.

To disable it use `hsts: "false"` in the NGINX config map.
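
As a sketch, a ConfigMap disabling both the redirect and the HSTS header globally might look like this. The ConfigMap must be the one referenced by the controller's `--nginx-configmap` flag; the name below is only illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
data:
  # disable the global 301 redirect to HTTPS
  ssl-redirect: "false"
  # do not emit the Strict-Transport-Security header
  hsts: "false"
```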

### Automated Certificate Management with Kube-Lego

[Kube-Lego] automatically requests missing or expired certificates from
[Let's Encrypt] by monitoring Ingress resources and their referenced secrets. To
enable this for an Ingress resource you have to add an annotation:

```
kubectl annotate ing ingress-demo kubernetes.io/tls-acme="true"
```

To set up Kube-Lego you can take a look at this [full example]. The first
version to fully support Kube-Lego is nginx Ingress controller 0.8.

[full example]:https://github.com/jetstack/kube-lego/tree/master/examples
[Kube-Lego]:https://github.com/jetstack/kube-lego
[Let's Encrypt]:https://letsencrypt.org

## Exposing TCP services

Ingress does not support TCP services (yet). For this reason this Ingress controller uses the flag `--tcp-services-configmap` to point to an existing config map where the key is the external port to use and the value is `<namespace/service name>:<service port>`.
It is possible to use a number or the name of the port.

The following example shows how to expose the service `example-go`, running in the namespace `default` on port `8080`, using external port `9000`:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-example
data:
  9000: "default/example-go:8080"
```


Please check the [tcp services](examples/tcp/README.md) example

## Exposing UDP services

Since 1.9.13 NGINX provides [UDP Load Balancing](https://www.nginx.com/blog/announcing-udp-load-balancing/).

Ingress does not support UDP services (yet). For this reason this Ingress controller uses the flag `--udp-services-configmap` to point to an existing config map where the key is the external port to use and the value is `<namespace/service name>:<service port>`.
It is possible to use a number or the name of the port.

The following example shows how to expose the service `kube-dns`, running in the namespace `kube-system` on port `53`, using external port `53`:
```
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-configmap-example
data:
  53: "kube-system/kube-dns:53"
```


Please check the [udp services](examples/udp/README.md) example

## Proxy Protocol

If you are using a L4 proxy to forward the traffic to the NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you could use the [Proxy Protocol](http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt) for forwarding traffic; it sends the connection details before forwarding the actual TCP connection itself.

Amongst others, [ELBs in AWS](http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html) and [HAProxy](http://www.haproxy.org/) support Proxy Protocol.

Please check the [proxy-protocol](examples/proxy-protocol/) example
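
A minimal sketch of enabling it through the NGINX config map (the key name is taken from the configuration reference in configuration.md; the ConfigMap name is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
data:
  # expect the PROXY protocol header on incoming connections
  use-proxy-protocol: "true"
```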


### Custom errors

In case of an error in a request, the body of the response is obtained from the `default backend`. Each request to the default backend includes two headers:
- `X-Code` indicates the HTTP code
- `X-Format` the value of the `Accept` header

Using these two headers it is possible to use a custom backend service like [this one](https://github.com/aledbf/contrib/tree/nginx-debug-server/Ingress/images/nginx-error-server) that inspects each request and returns a custom error page in the format expected by the client. Please check the example [custom-errors](examples/custom-errors/README.md)

### NGINX status page

The ngx_http_stub_status_module module provides access to basic status information. It is active by default and exposed at the URL `/nginx_status`.
This controller provides an alternative to this module using the third-party [nginx-module-vts](https://github.com/vozlt/nginx-module-vts) module.
To use it, set `enable-vts-status: "true"` in the config map. The status page is exposed on port 8080.
Please check the example `examples/default/rc-default.yaml`



To extract the information in JSON format the module provides a custom URL: `/nginx_status/format/json`
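
As a sketch, the key can be set in the same NGINX config map used elsewhere in this document (the ConfigMap name is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
data:
  # replace the stub status page with nginx-module-vts on port 8080
  enable-vts-status: "true"
```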

### Running multiple ingress controllers

If you're running multiple ingress controllers, or running on a cloudprovider that natively handles ingress, you need to specify the annotation `kubernetes.io/ingress.class: "nginx"` in all ingresses that you would like this controller to claim. Not specifying the annotation will lead to multiple ingress controllers claiming the same ingress. Specifying the wrong value will result in all ingress controllers ignoring the ingress. Running multiple ingress controllers in the same cluster was not supported in Kubernetes versions < 1.3.
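
As a sketch, an Ingress claimed by this controller carries the annotation in its metadata (the resource and service names below are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # claimed by the NGINX ingress controller only
    kubernetes.io/ingress.class: "nginx"
spec:
  backend:
    serviceName: echoheaders-x
    servicePort: 80
```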

### Running on Cloudproviders

If you're running this ingress controller on a cloudprovider, you should assume the provider also has a native Ingress controller and specify the ingress.class annotation as indicated in this section.
In addition to this, you will need to add a firewall rule for each port this controller is listening on, i.e. :80 and :443.

### Disabling NGINX ingress controller

Setting the annotation `kubernetes.io/ingress.class` to any value other than "nginx" or the empty string will force the NGINX Ingress controller to ignore your Ingress. Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.

### Log format

The default configuration uses a custom logging format to add additional information about upstreams:

```
log_format upstreaminfo '{{ if $cfg.useProxyProtocol }}$proxy_protocol_addr{{ else }}$remote_addr{{ end }} - '
    '[$proxy_add_x_forwarded_for] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" '
    '$request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status';
```

Sources:
- [upstream variables](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#variables)
- [embedded variables](http://nginx.org/en/docs/http/ngx_http_core_module.html#variables)

Description:
- `$proxy_protocol_addr`: if PROXY protocol is enabled
- `$remote_addr`: if PROXY protocol is disabled (default)
- `$proxy_add_x_forwarded_for`: the `X-Forwarded-For` client request header field with the $remote_addr variable appended to it, separated by a comma
- `$remote_user`: user name supplied with the Basic authentication
- `$time_local`: local time in the Common Log Format
- `$request`: full original request line
- `$status`: response status
- `$body_bytes_sent`: number of bytes sent to a client, not counting the response header
- `$http_referer`: value of the Referer header
- `$http_user_agent`: value of the User-Agent header
- `$request_length`: request length (including request line, header, and request body)
- `$request_time`: time elapsed since the first bytes were read from the client
- `$proxy_upstream_name`: name of the upstream. The format is `upstream-<namespace>-<service name>-<service port>`
- `$upstream_addr`: keeps the IP address and port, or the path to the UNIX-domain socket of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas
- `$upstream_response_length`: keeps the length of the response obtained from the upstream server
- `$upstream_response_time`: keeps time spent on receiving the response from the upstream server; the time is kept in seconds with millisecond resolution
- `$upstream_status`: keeps the status code of the response obtained from the upstream server

### Local cluster

Using [`hack/local-up-cluster.sh`](https://github.com/kubernetes/kubernetes/blob/master/hack/local-up-cluster.sh) it is possible to start a local Kubernetes cluster consisting of a master and a single node. Please read [running-locally.md](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/running-locally.md) for more details.

The ingress controller must run with `hostNetwork: true` so that it can fall back to `localhost:8080` for the apiserver if every other client creation check fails (e.g. service account not present, kubeconfig doesn't exist, no master env vars...).


### Debug & Troubleshooting

Using the flag `--v=XX` it is possible to increase the level of logging.
In particular:
- `--v=2` shows details using `diff` about the changes in the nginx configuration

```
I0316 12:24:37.581267       1 utils.go:148] NGINX configuration diff a//etc/nginx/nginx.conf b//etc/nginx/nginx.conf
I0316 12:24:37.581356       1 utils.go:149] --- /tmp/922554809  2016-03-16 12:24:37.000000000 +0000
+++ /tmp/079811012  2016-03-16 12:24:37.000000000 +0000
@@ -235,7 +235,6 @@

    upstream default-echoheadersx {
        least_conn;
-       server 10.2.112.124:5000;
        server 10.2.208.50:5000;

    }
I0316 12:24:37.610073       1 command.go:69] change in configuration detected. Reloading...
```

- `--v=3` shows details about the service, Ingress rule, and endpoint changes, and dumps the nginx configuration in JSON format
- `--v=5` configures NGINX in [debug mode](http://nginx.org/en/docs/debugging_log.html)



*These issues were encountered in past versions of Kubernetes:*

[1.2.0-alpha7 deployment](https://github.com/kubernetes/kubernetes/blob/master/docs/getting-started-guides/docker.md):

* make setup-files.sh file in hypercube does not provide the 10.0.0.1 IP to make-ca-certs, resulting in CA certs that are issued to the external cluster IP address rather than 10.0.0.1 -> this results in nginx-third-party-lb appearing to get stuck at "Utils.go:177 - Waiting for default/default-http-backend" in the docker logs. Kubernetes will eventually kill the container before nginx-third-party-lb times out with a message indicating that the CA certificate issuer is invalid (wrong IP). To verify this, add zeros to the end of initialDelaySeconds and timeoutSeconds and reload the RC; docker will log this error before Kubernetes kills the container.
* To fix the above, setup-files.sh must be patched before the cluster is initialized (refer to https://github.com/kubernetes/kubernetes/pull/21504)


### Limitations

- Ingress rules for TLS require the definition of the field `host`


### Why endpoints and not services

The NGINX ingress controller does not use [Services](http://kubernetes.io/docs/user-guide/services) to route traffic to the pods. Instead it uses the Endpoints API in order to bypass [kube-proxy](http://kubernetes.io/docs/admin/kube-proxy/), to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.


### NGINX notes

Since `gcr.io/google_containers/nginx-slim:0.8` NGINX contains the following patches:
- Dynamic TLS record size [nginx__dynamic_tls_records.patch](https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency/)
NGINX provides the parameter `ssl_buffer_size` to adjust the size of the buffer. The default value in NGINX is 16KB. The ingress controller changes the default to 4KB. This improves the [TLS Time To First Byte (TTTFB)](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/) but the size is fixed. This patch adapts the size of the buffer to the content being served, helping to improve the perceived latency.

- Add SPDY support back to Nginx with HTTP/2 [nginx_1_9_15_http2_spdy.patch](https://github.com/cloudflare/sslconfig/pull/36)
When NGINX introduced HTTP/2, support for SPDY was removed. This patch adds support for SPDY without compromising HTTP/2 support, using the Application-Layer Protocol Negotiation (ALPN) or Next Protocol Negotiation (NPN) Transport Layer Security (TLS) extension to negotiate which protocol the server and client support:
```
openssl s_client -servername www.my-site.com -connect www.my-site.com:443 -nextprotoneg ''
CONNECTED(00000003)
Protocols advertised by server: h2, spdy/3.1, http/1.1
```
364
controllers/nginx/configuration.md
Normal file
@ -0,0 +1,364 @@
## Contents
* [Customizing NGINX](#customizing-nginx)
* [Custom NGINX configuration](#custom-nginx-configuration)
* [Custom NGINX template](#custom-nginx-template)
* [Annotations](#annotations)
* [Custom NGINX upstream checks](#custom-nginx-upstream-checks)
* [Authentication](#authentication)
* [Rewrite](#rewrite)
* [Rate limiting](#rate-limiting)
* [Secure backends](#secure-backends)
* [Whitelist source range](#whitelist-source-range)
* [Allowed parameters in configuration config map](#allowed-parameters-in-configuration-configmap)
* [Default configuration options](#default-configuration-options)
* [Websockets](#websockets)
* [Optimizing TLS Time To First Byte (TTTFB)](#optimizing-tls-time-to-first-byte-tttfb)
* [Retries in non-idempotent methods](#retries-in-non-idempotent-methods)


### Customizing NGINX

There are three ways to customize NGINX:

1. Config map: create a standalone config map; use this if you want a different global configuration.
2. Annotations: [annotate the Ingress](#annotations); use this if you want a specific configuration for the site defined in the Ingress rule.
3. Custom template: use this when a specific setting is required, like [open_file_cache](http://nginx.org/en/docs/http/ngx_http_core_module.html#open_file_cache), a custom [log_format](http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format), or adjusting [listen](http://nginx.org/en/docs/http/ngx_http_core_module.html#listen) options such as `rcvbuf`, or when it is not possible to change a setting through the config map.


#### Custom NGINX configuration

It's possible to customize the defaults in NGINX using a config map.

Please check the [custom configuration](examples/custom-configuration/README.md) example

#### Annotations

The following annotations are supported:

|Name|type|
|---------------------------|------|
|[ingress.kubernetes.io/add-base-url](#rewrite)|true or false|
|[ingress.kubernetes.io/auth-realm](#authentication)|string|
|[ingress.kubernetes.io/auth-secret](#authentication)|string|
|[ingress.kubernetes.io/auth-type](#authentication)|basic or digest|
|[ingress.kubernetes.io/auth-url](#external-authentication)|string|
|[ingress.kubernetes.io/limit-connections](#rate-limiting)|number|
|[ingress.kubernetes.io/limit-rps](#rate-limiting)|number|
|[ingress.kubernetes.io/rewrite-target](#rewrite)|URI|
|[ingress.kubernetes.io/secure-backends](#secure-backends)|true or false|
|[ingress.kubernetes.io/ssl-redirect](#server-side-https-enforcement-through-redirect)|true or false|
|[ingress.kubernetes.io/upstream-max-fails](#custom-nginx-upstream-checks)|number|
|[ingress.kubernetes.io/upstream-fail-timeout](#custom-nginx-upstream-checks)|number|
|[ingress.kubernetes.io/whitelist-source-range](#whitelist-source-range)|CIDR|


#### Custom NGINX template

The NGINX template is located in the file `/etc/nginx/template/nginx.tmpl`. Mounting a volume makes it possible to use a custom version (see the sketch at the end of this section).
Use the [custom-template](examples/custom-template/README.md) example as a guide.

**Please note the template is tied to the Go code. Be sure not to change the names in the variable `$cfg`.**

To know more about the template please check the [Go template package](https://golang.org/pkg/text/template/)
In addition to the built-in functions provided by the Go package, the following were added:
- empty: returns true if the specified parameter (string) is empty
- contains: [strings.Contains](https://golang.org/pkg/strings/#Contains)
- hasPrefix: [strings.HasPrefix](https://golang.org/pkg/strings/#HasPrefix)
- hasSuffix: [strings.HasSuffix](https://golang.org/pkg/strings/#HasSuffix)
- toUpper: [strings.ToUpper](https://golang.org/pkg/strings/#ToUpper)
- toLower: [strings.ToLower](https://golang.org/pkg/strings/#ToLower)
- buildLocation: helper to build the NGINX Location section in each server
- buildProxyPass: builds the reverse proxy configuration
- buildRateLimitZones: helper to build all the required rate limit zones
- buildRateLimit: helper to build a limit zone inside a location if it contains a rate limit annotation

### Custom NGINX upstream checks

NGINX exposes some flags in the [upstream configuration](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) that enable the configuration of each server in the upstream. The ingress controller allows custom `max_fails` and `fail_timeout` parameters in a global context, using `upstream-max-fails` and `upstream-fail-timeout` in the NGINX config map, or in a particular Ingress rule. Both default to 0. This means NGINX will respect the `readinessProbe`, if it is defined. If there is no probe, NGINX will not mark a server inside an upstream as down.

**With the default values NGINX will not health check your backends, and whenever the endpoints controller notices a readiness probe failure that pod's IP will be removed from the list of endpoints, causing nginx to also remove it from the upstreams.**

To use custom values in an Ingress rule define these annotations:

`ingress.kubernetes.io/upstream-max-fails`: number of unsuccessful attempts to communicate with the server that should happen in the duration set by the fail_timeout parameter to consider the server unavailable

`ingress.kubernetes.io/upstream-fail-timeout`: time in seconds during which the specified number of unsuccessful attempts to communicate with the server should happen to consider the server unavailable. Also the period of time the server will be considered unavailable.

**Important:**
The upstreams are shared, i.e. Ingress rules using the same service use the same upstream.
This means only one of the rules should define annotations to configure the upstream servers.


Please check the [custom upstream check](examples/custom-upstream-check/README.md) example
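
A sketch of an Ingress rule carrying both annotations (host, service, and values are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echoheaders
  annotations:
    # mark the endpoint down after 3 failed attempts...
    ingress.kubernetes.io/upstream-max-fails: "3"
    # ...within a 30 second window, and keep it down for 30 seconds
    ingress.kubernetes.io/upstream-fail-timeout: "30"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoheaders-x
          servicePort: 80
```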

### Authentication

It is possible to add authentication by adding additional annotations to the Ingress rule. The source of the authentication is a secret that contains usernames and passwords inside the key `auth`.

The annotations are:

```
ingress.kubernetes.io/auth-type: [basic|digest]
```

Indicates the [HTTP Authentication Type: Basic or Digest Access Authentication](https://tools.ietf.org/html/rfc2617).

```
ingress.kubernetes.io/auth-secret: secretName
```

Name of the secret that contains the usernames and passwords with access to the paths defined in the Ingress rule.
The secret must be created in the same namespace as the Ingress rule.

```
ingress.kubernetes.io/auth-realm: "realm string"
```

Please check the [auth](examples/auth/README.md) example


### External Authentication

To use an existing service that provides authentication, the Ingress rule can be annotated with `ingress.kubernetes.io/auth-url` to indicate the URL where the HTTP request should be sent.
Additionally it is possible to set `ingress.kubernetes.io/auth-method` to specify the HTTP method to use (GET or POST) and `ingress.kubernetes.io/auth-send-body` to true or false (default).

```
ingress.kubernetes.io/auth-url: "URL to the authentication service"
```

Please check the [external-auth](examples/external-auth/README.md) example
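
A sketch of an Ingress using an external authentication endpoint (the URL, host, and service names are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-auth
  annotations:
    # requests are checked against this URL before being proxied to the backend
    ingress.kubernetes.io/auth-url: "https://auth.example.com/check"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoheaders-x
          servicePort: 80
```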


### Rewrite

In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404.
Set the annotation `ingress.kubernetes.io/rewrite-target` to the path expected by the service.

If the application contains relative links, it is possible to add an additional annotation `ingress.kubernetes.io/add-base-url` that will append a `base` tag to the header of the HTML returned by the backend.


Please check the [rewrite](examples/rewrite/README.md) example
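
A sketch of an Ingress that exposes a service under `/something` while the application itself expects `/` (host and service names are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rewrite
  annotations:
    # the path expected by the backend service
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /something
        backend:
          serviceName: echoheaders-x
          servicePort: 80
```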


### Rate limiting

The annotations `ingress.kubernetes.io/limit-connections` and `ingress.kubernetes.io/limit-rps` allow limiting the connections that can be opened by a single client IP address. This can be used to mitigate [DDoS attacks](https://www.nginx.com/blog/mitigating-ddos-attacks-with-nginx-and-nginx-plus).

`ingress.kubernetes.io/limit-connections`: number of concurrent allowed connections from a single IP address

`ingress.kubernetes.io/limit-rps`: number of allowed connections per second from a single IP address


It is possible to specify both annotations in the same Ingress rule; in that case limit-rps takes precedence.
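
A sketch of an Ingress limiting each client IP to 5 requests per second (values, host, and service names are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rate-limited
  annotations:
    # maximum requests per second from a single client IP
    ingress.kubernetes.io/limit-rps: "5"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoheaders-x
          servicePort: 80
```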


### Secure backends

By default NGINX uses `http` to reach the services. Adding the annotation `ingress.kubernetes.io/secure-backends: "true"` in the Ingress rule changes the protocol to `https`.


### Whitelist source range

You can specify the allowed client IP source ranges through the `ingress.kubernetes.io/whitelist-source-range` annotation, e.g. `10.0.0.0/24,172.10.0.1`.
For a global restriction (any URL) it is possible to use `whitelist-source-range` in the NGINX config map.

*Note:* adding an annotation overrides any global restriction.

Please check the [whitelist](examples/whitelist/README.md) example
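
A sketch of an Ingress restricted to two source ranges (addresses, host, and service names are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: whitelisted
  annotations:
    # only these client source ranges may reach the backend
    ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24,172.10.0.1"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: echoheaders-x
          servicePort: 80
```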



### Allowed parameters in configuration config map

**body-size:** Sets the maximum allowed size of the client request body. See NGINX [client_max_body_size](http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size)


**custom-http-errors:** Sets which HTTP codes should be passed for processing with the [error_page directive](http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page)
Setting at least one code also enables [proxy_intercept_errors](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors) (required to process error_page)
For instance: `custom-http-errors: 404,415`


**enable-sticky-sessions:** Enables sticky sessions using cookies. This is provided by the [nginx-sticky-module-ng](https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng) module


**enable-vts-status:** Allows the replacement of the default status page with a third party module named [nginx-module-vts](https://github.com/vozlt/nginx-module-vts)


**error-log-level:** Configures the logging level of errors. See [error_log](http://nginx.org/en/docs/ngx_core_module.html#error_log) for the valid levels, listed in order of increasing severity


**retry-non-idempotent:** Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server.
The previous behavior can be restored using the value "true"


**hsts:** Enables or disables the HSTS header in servers running SSL.
HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be accessed using HTTPS, instead of HTTP. It provides protection against protocol downgrade attacks and cookie theft.
https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server


**hsts-include-subdomains:** Enables or disables the use of HSTS in all the subdomains of the servername


**hsts-max-age:** Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.


**keep-alive:** Sets the time during which a keep-alive client connection will stay open on the server side.
The zero value disables keep-alive client connections
http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout


**max-worker-connections:** Sets the maximum number of simultaneous connections that can be opened by each [worker process](http://nginx.org/en/docs/ngx_core_module.html#worker_connections)


**proxy-connect-timeout:** Sets the timeout for [establishing a connection with a proxied server](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_connect_timeout). It should be noted that this timeout cannot usually exceed 75 seconds.


**proxy-read-timeout:** Sets the timeout in seconds for [reading a response from the proxied server](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_read_timeout). The timeout is set only between two successive read operations, not for the transmission of the whole response


**proxy-send-timeout:** Sets the timeout in seconds for [transmitting a request to the proxied server](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_send_timeout). The timeout is set only between two successive write operations, not for the transmission of the whole request.


**proxy-buffer-size:** Sets the size of the buffer used for [reading the first part of the response](http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffer_size) received from the proxied server. This part usually contains a small response header.


**resolver:** Configures name servers used to [resolve](http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver) names of upstream servers into addresses


**server-name-hash-max-size:** Sets the maximum size of the [server names hash tables](http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_max_size) used in server names, map directive’s values, MIME types, names of request header strings, etc.
http://nginx.org/en/docs/hash.html


**server-name-hash-bucket-size:** Sets the size of the bucket for the server names hash tables
http://nginx.org/en/docs/hash.html
http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size

**ssl-buffer-size:** Sets the size of the [SSL buffer](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) used for sending data.
4k helps NGINX to improve TLS Time To First Byte (TTTFB)
https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/

**ssl-ciphers:** Sets the [ciphers](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers) list to enable. The ciphers are specified in the format understood by the OpenSSL library
The default cipher list is: `ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA`


The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority.
The recommendation above prioritizes algorithms that provide perfect [forward secrecy](https://wiki.mozilla.org/Security/Server_Side_TLS#Forward_Secrecy)

Please check the [Mozilla SSL Configuration Generator](https://mozilla.github.io/server-side-tls/ssl-config-generator/)


**ssl-protocols:** Sets the [SSL protocols](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols) to use.
The default is: `TLSv1 TLSv1.1 TLSv1.2`

TLSv1 is enabled to allow old clients like:
- [IE 8-10 / Win 7](https://www.ssllabs.com/ssltest/viewClient.html?name=IE&version=8-10&platform=Win%207&key=113)
- [Java 7u25](https://www.ssllabs.com/ssltest/viewClient.html?name=Java&version=7u25&key=26)

If you don't need to support these clients please remove TLSv1


Please check the result of the configuration using `https://ssllabs.com/ssltest/analyze.html` or `https://testssl.sh`


**ssl-dh-param:** Sets the Base64 string that contains a Diffie-Hellman key to help with "Perfect Forward Secrecy"
https://www.openssl.org/docs/manmaster/apps/dhparam.html
https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam
http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam


**ssl-session-cache:** Enables or disables the use of a shared [SSL cache](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) among worker processes.


**ssl-session-cache-size:** Sets the size of the [SSL shared session cache](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_cache) between all worker processes.


**ssl-session-tickets:** Enables or disables session resumption through [TLS session tickets](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_tickets)


**ssl-session-timeout:** Sets the time during which a client may [reuse the session](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_session_timeout) parameters stored in a cache.


**ssl-redirect:** Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule)
The default is "true"


**upstream-max-fails:** Sets the number of unsuccessful attempts to communicate with the [server](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) that should happen in the duration set by the fail_timeout parameter to consider the server unavailable


**upstream-fail-timeout:** Sets the time during which the specified number of unsuccessful attempts to communicate with the [server](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) should happen to consider the server unavailable


**use-proxy-protocol:** Enables or disables the use of the [PROXY protocol](https://www.nginx.com/resources/admin-guide/proxy-protocol/) to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).


**use-gzip:** Enables or disables compression of responses using the ["gzip" module](http://nginx.org/en/docs/http/ngx_http_gzip_module.html)
The default MIME type list to compress is: `application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component`

**use-http2:** Enables or disables [HTTP/2](http://nginx.org/en/docs/http/ngx_http_v2_module.html) support in secure connections


**gzip-types:** Sets the MIME types in addition to "text/html" to compress. The special value "*" matches any MIME type.
Responses with the "text/html" type are always compressed if `use-gzip` is enabled


**worker-processes:** Sets the number of [worker processes](http://nginx.org/en/docs/ngx_core_module.html#worker_processes). The default "auto" means the number of available CPU cores


### Default configuration options

The following table shows the options and their default values:

|name|default|
|---------------------------|------|
|body-size|1m|
|custom-http-errors|" "|
|enable-sticky-sessions|"false"|
|enable-vts-status|"false"|
|error-log-level|notice|
|gzip-types||
|hsts|"true"|
|hsts-include-subdomains|"true"|
|hsts-max-age|"15724800"|
|keep-alive|"75"|
|max-worker-connections|"16384"|
|proxy-connect-timeout|"5"|
|proxy-read-timeout|"60"|
|proxy-real-ip-cidr|0.0.0.0/0|
|proxy-send-timeout|"60"|
|retry-non-idempotent|"false"|
|server-name-hash-bucket-size|"64"|
|server-name-hash-max-size|"512"|
|ssl-buffer-size|4k|
|ssl-ciphers||
|ssl-protocols|TLSv1 TLSv1.1 TLSv1.2|
|ssl-session-cache|"true"|
|ssl-session-cache-size|10m|
|ssl-session-tickets|"true"|
|ssl-session-timeout|10m|
|use-gzip|"true"|
|use-http2|"true"|
|vts-status-zone-size|10m|
|worker-processes|<number of CPUs>|


### Websockets

Support for websockets is provided by NGINX out of the box. No special configuration is required.

The only requirement to avoid closed connections is to increase the values of `proxy-read-timeout` and `proxy-send-timeout`. The default value of these settings is `30 seconds`.
A more adequate value to support websockets is a value higher than one hour (`3600`).
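
A sketch of raising both timeouts in the NGINX config map (the ConfigMap name is illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf
data:
  # keep idle websocket connections open for up to an hour
  proxy-read-timeout: "3600"
  proxy-send-timeout: "3600"
```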


#### Optimizing TLS Time To First Byte (TTTFB)

NGINX provides the configuration option [ssl_buffer_size](http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size) to allow the optimization of the TLS record size. This improves the [Time To First Byte](https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/) (TTTFB). The default value in the Ingress controller is `4k` (the nginx default is `16k`).

#### Retries in non-idempotent methods

Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error.
The previous behavior can be restored using `retry-non-idempotent=true` in the configuration config map.
1244
controllers/nginx/controller.go
Normal file
File diff suppressed because it is too large
Load diff
6
controllers/nginx/default.conf
Normal file
@ -0,0 +1,6 @@
# A very simple nginx configuration file that forces nginx to start.
pid /run/nginx.pid;

events {}
http {}
daemon off;
8
controllers/nginx/examples/README.md
Normal file
@ -0,0 +1,8 @@

All the examples reference the services `echoheaders-x` and `echoheaders-y`:

```
kubectl run echoheaders --image=gcr.io/google_containers/echoserver:1.4 --replicas=1 --port=8080
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x
kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-y
```
126
controllers/nginx/examples/auth/README.md
Normal file
@ -0,0 +1,126 @@
|
||||
This example shows how to add authentication in an Ingress rule using a secret that contains a file generated with `htpasswd`.
|
||||
|
||||
|
||||
```
|
||||
$ htpasswd -c auth foo
|
||||
New password: <bar>
|
||||
New password:
|
||||
Re-type new password:
|
||||
Adding password for user foo
|
||||
```
|
||||
|
||||
```
|
||||
$ kubectl create secret generic basic-auth --from-file=auth
|
||||
secret "basic-auth" created
|
||||
```
|
||||
|
||||
```
|
||||
$ kubectl get secret basic-auth -o yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: basic-auth
|
||||
namespace: default
|
||||
type: Opaque
|
||||
```
|
||||
|
||||
```
|
||||
echo "
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: ingress-with-auth
|
||||
annotations:
|
||||
# type of authentication
|
||||
ingress.kubernetes.io/auth-type: basic
|
||||
# name of the secret that contains the user/password definitions
|
||||
ingress.kubernetes.io/auth-secret: basic-auth
|
||||
# message to display with an appropriate context why the authentication is required
|
||||
ingress.kubernetes.io/auth-realm: "Authentication Required - foo"
|
||||
spec:
|
||||
rules:
|
||||
- host: foo.bar.com
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
backend:
|
||||
serviceName: echoheaders
|
||||
servicePort: 80
|
||||
" | kubectl create -f -
|
||||
```
|
||||
|
||||
```
|
||||
$ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com'
|
||||
* Trying 10.2.29.4...
|
||||
* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)
|
||||
> GET / HTTP/1.1
|
||||
> Host: foo.bar.com
|
||||
> User-Agent: curl/7.43.0
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 401 Unauthorized
|
||||
< Server: nginx/1.10.0
|
||||
< Date: Wed, 11 May 2016 05:27:23 GMT
|
||||
< Content-Type: text/html
|
||||
< Content-Length: 195
|
||||
< Connection: keep-alive
|
||||
< WWW-Authenticate: Basic realm="Authentication Required - foo"
|
||||
<
|
||||
<html>
|
||||
<head><title>401 Authorization Required</title></head>
|
||||
<body bgcolor="white">
|
||||
<center><h1>401 Authorization Required</h1></center>
|
||||
<hr><center>nginx/1.10.0</center>
|
||||
</body>
|
||||
</html>
|
||||
* Connection #0 to host 10.2.29.4 left intact
|
||||
```
|
||||
|
||||
```
|
||||
$ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' -u 'foo:bar'
|
||||
* Trying 10.2.29.4...
|
||||
* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0)
|
||||
* Server auth using Basic with user 'foo'
|
||||
> GET / HTTP/1.1
|
||||
> Host: foo.bar.com
|
||||
> Authorization: Basic Zm9vOmJhcg==
|
||||
> User-Agent: curl/7.43.0
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 200 OK
|
||||
< Server: nginx/1.10.0
|
||||
< Date: Wed, 11 May 2016 06:05:26 GMT
|
||||
< Content-Type: text/plain
|
||||
< Transfer-Encoding: chunked
|
||||
< Connection: keep-alive
|
||||
< Vary: Accept-Encoding
|
||||
<
|
||||
CLIENT VALUES:
|
||||
client_address=10.2.29.4
|
||||
command=GET
|
||||
real path=/
|
||||
query=nil
|
||||
request_version=1.1
|
||||
request_uri=http://foo.bar.com:8080/
|
||||
|
||||
SERVER VALUES:
|
||||
server_version=nginx: 1.9.11 - lua: 10001
|
||||
|
||||
HEADERS RECEIVED:
|
||||
accept=*/*
|
||||
authorization=Basic Zm9vOmJhcg==
|
||||
connection=close
|
||||
host=foo.bar.com
|
||||
user-agent=curl/7.43.0
|
||||
x-forwarded-for=10.2.29.1
|
||||
x-forwarded-host=foo.bar.com
|
||||
x-forwarded-port=80
|
||||
x-forwarded-proto=http
|
||||
x-real-ip=10.2.29.1
|
||||
BODY:
|
||||
* Connection #0 to host 10.2.29.4 left intact
|
||||
-no body in request-
|
||||
```
|
57
controllers/nginx/examples/custom-configuration/README.md
Normal file
@ -0,0 +1,57 @@
The next command shows the defaults:
|
||||
```
|
||||
$ ./nginx-third-party-lb --dump-nginx-configuration
|
||||
Example of ConfigMap to customize NGINX configuration:
|
||||
data:
|
||||
body-size: 1m
|
||||
error-log-level: info
|
||||
gzip-types: application/atom+xml application/javascript application/json application/rss+xml
|
||||
application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json
|
||||
application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon
|
||||
text/css text/plain text/x-component
|
||||
hts-include-subdomains: "true"
|
||||
hts-max-age: "15724800"
|
||||
keep-alive: "75"
|
||||
max-worker-connections: "16384"
|
||||
proxy-connect-timeout: "30"
|
||||
proxy-read-timeout: "30"
|
||||
proxy-real-ip-cidr: 0.0.0.0/0
|
||||
proxy-send-timeout: "30"
|
||||
server-name-hash-bucket-size: "64"
|
||||
server-name-hash-max-size: "512"
|
||||
ssl-buffer-size: 4k
|
||||
ssl-ciphers: ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
|
||||
ssl-protocols: TLSv1 TLSv1.1 TLSv1.2
|
||||
ssl-session-cache: "true"
|
||||
ssl-session-cache-size: 10m
|
||||
ssl-session-tickets: "true"
|
||||
ssl-session-timeout: 10m
|
||||
use-gzip: "true"
|
||||
use-hts: "true"
|
||||
worker-processes: "8"
|
||||
metadata:
|
||||
name: custom-name
|
||||
namespace: a-valid-namespace
|
||||
```
|
||||
|
||||
For instance, if we want to change the timeouts we need to create a ConfigMap:
|
||||
```
|
||||
$ cat nginx-load-balancer-conf.yaml
|
||||
apiVersion: v1
|
||||
data:
|
||||
proxy-connect-timeout: "10"
|
||||
proxy-read-timeout: "120"
|
||||
proxy-send-timeout: "120"
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: nginx-load-balancer-conf
|
||||
|
||||
```
|
||||
|
||||
```
|
||||
$ kubectl create -f nginx-load-balancer-conf.yaml
|
||||
```
|
||||
|
||||
Please check the example `rc-custom-configuration.yaml`
|
||||
|
||||
If the ConfigMap is updated, NGINX will be reloaded with the new configuration.
|
|
@ -0,0 +1,52 @@
|
|||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx-ingress-controller
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
k8s-app: nginx-ingress-lb
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
name: nginx-ingress-lb
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 60
|
||||
containers:
|
||||
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
|
||||
name: nginx-ingress-lb
|
||||
imagePullPolicy: Always
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 10
|
||||
timeoutSeconds: 1
|
||||
# use downward API
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
ports:
|
||||
- containerPort: 80
|
||||
hostPort: 80
|
||||
- containerPort: 443
|
||||
hostPort: 443
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
|
||||
- --nginx-configmap=$(POD_NAMESPACE)/nginx-load-balancer-conf
|
80
controllers/nginx/examples/custom-errors/README.md
Normal file
80
controllers/nginx/examples/custom-errors/README.md
Normal file
|
@ -0,0 +1,80 @@
|
|||
|
||||
This example shows how it is possible to use a custom backend to render custom error pages. The code of this example is located in [nginx-debug-server](https://github.com/aledbf/contrib/tree/nginx-debug-server).
|
||||
|
||||
|
||||
The idea is to use the headers `X-Code` and `X-Format` that NGINX passes to the backend in case of an error, to find the best available representation of the response to return. For example, if the request contains an `Accept` header of type `json`, the error should be returned in that format and not in `html` (the default in NGINX).
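
As an illustration only (the exact values depend on the failing request), the error backend would receive headers along these lines for a JSON client hitting a missing page:

```
X-Code: 404
X-Format: application/json
```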
|
||||
|
||||
First, create the custom backend to use in the Ingress controller:
|
||||
|
||||
```
$ kubectl create -f custom-default-backend.yaml
|
||||
service "nginx-errors" created
|
||||
replicationcontroller "nginx-errors" created
|
||||
```
|
||||
|
||||
```
$ kubectl get svc
|
||||
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
|
||||
echoheaders 10.3.0.7 nodes 80/TCP 23d
|
||||
kubernetes 10.3.0.1 <none> 443/TCP 34d
|
||||
nginx-errors 10.3.0.102 <none> 80/TCP 11s
|
||||
```
|
||||
|
||||
```
$ kubectl get rc
|
||||
CONTROLLER REPLICAS AGE
|
||||
echoheaders 1 19d
|
||||
nginx-errors 1 19s
|
||||
```
|
||||
|
||||
Next, create the Ingress controller by executing:
|
||||
```
|
||||
$ kubectl create -f rc-custom-errors.yaml
|
||||
```
|
||||
|
||||
Now, to check that this is working, use curl:
|
||||
|
||||
```
|
||||
$ curl -v http://172.17.4.99/
|
||||
* Trying 172.17.4.99...
|
||||
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
|
||||
> GET / HTTP/1.1
|
||||
> Host: 172.17.4.99
|
||||
> User-Agent: curl/7.43.0
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 404 Not Found
|
||||
< Server: nginx/1.10.0
|
||||
< Date: Wed, 04 May 2016 02:53:45 GMT
|
||||
< Content-Type: text/html
|
||||
< Transfer-Encoding: chunked
|
||||
< Connection: keep-alive
|
||||
< Vary: Accept-Encoding
|
||||
<
|
||||
<span>The page you're looking for could not be found.</span>
|
||||
|
||||
* Connection #0 to host 172.17.4.99 left intact
|
||||
```
|
||||
|
||||
Specifying json as expected format:
|
||||
|
||||
```
|
||||
$ curl -v http://172.17.4.99/ -H 'Accept: application/json'
|
||||
* Trying 172.17.4.99...
|
||||
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
|
||||
> GET / HTTP/1.1
|
||||
> Host: 172.17.4.99
|
||||
> User-Agent: curl/7.43.0
|
||||
> Accept: application/json
|
||||
>
|
||||
< HTTP/1.1 404 Not Found
|
||||
< Server: nginx/1.10.0
|
||||
< Date: Wed, 04 May 2016 02:54:00 GMT
|
||||
< Content-Type: text/html
|
||||
< Transfer-Encoding: chunked
|
||||
< Connection: keep-alive
|
||||
< Vary: Accept-Encoding
|
||||
<
|
||||
{ "message": "The page you're looking for could not be found" }
|
||||
|
||||
* Connection #0 to host 172.17.4.99 left intact
|
||||
```
|
||||
|
||||
By default the Ingress controller provides support for `html`, `json` and `xml`.
|
|
@ -0,0 +1,31 @@
|
|||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: nginx-errors
|
||||
labels:
|
||||
app: nginx-errors
|
||||
spec:
|
||||
ports:
|
||||
- port: 80
|
||||
targetPort: 80
|
||||
protocol: TCP
|
||||
name: http
|
||||
selector:
|
||||
app: nginx-errors
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx-errors
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx-errors
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx-errors
|
||||
image: aledbf/nginx-error-server:0.1
|
||||
ports:
|
||||
- containerPort: 80
|
|
@ -0,0 +1,51 @@
|
|||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx-ingress-controller
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
k8s-app: nginx-ingress-lb
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
name: nginx-ingress-lb
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 60
|
||||
containers:
|
||||
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
|
||||
name: nginx-ingress-lb
|
||||
imagePullPolicy: Always
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 10
|
||||
timeoutSeconds: 1
|
||||
# use downward API
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
ports:
|
||||
- containerPort: 80
|
||||
hostPort: 80
|
||||
- containerPort: 443
|
||||
hostPort: 443
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --default-backend-service=$(POD_NAMESPACE)/nginx-errors
|
9
controllers/nginx/examples/custom-template/README.md
Normal file
9
controllers/nginx/examples/custom-template/README.md
Normal file
|
@ -0,0 +1,9 @@
|
|||
|
||||
This example shows how it is possible to use a custom NGINX template.
|
||||
|
||||
First, create a ConfigMap containing the template by running:
|
||||
```
|
||||
kubectl create configmap nginx-template --from-file=nginx.tmpl=../../nginx.tmpl
|
||||
```
|
||||
|
||||
Next, create the rc: `kubectl create -f custom-template.yaml`
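
To verify that the controller loaded the custom template, one option is to inspect the generated configuration inside the pod (the pod name below is only illustrative):

```
$ kubectl exec nginx-ingress-controller-v1ppm -- cat /etc/nginx/nginx.conf
```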
|
|
@ -0,0 +1,62 @@
|
|||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx-ingress-controller
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
k8s-app: nginx-ingress-lb
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
name: nginx-ingress-lb
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 60
|
||||
containers:
|
||||
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
|
||||
name: nginx-ingress-lb
|
||||
imagePullPolicy: Always
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 10
|
||||
timeoutSeconds: 1
|
||||
# use downward API
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
ports:
|
||||
- containerPort: 80
|
||||
hostPort: 80
|
||||
- containerPort: 443
|
||||
hostPort: 443
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
|
||||
volumeMounts:
|
||||
- mountPath: /etc/nginx/template
|
||||
name: nginx-template-volume
|
||||
readOnly: true
|
||||
volumes:
|
||||
- name: nginx-template-volume
|
||||
configMap:
|
||||
name: nginx-template
|
||||
items:
|
||||
- key: nginx.tmpl
|
||||
path: nginx.tmpl
|
45
controllers/nginx/examples/custom-upstream-check/README.md
Normal file
45
controllers/nginx/examples/custom-upstream-check/README.md
Normal file
|
@ -0,0 +1,45 @@
|
|||
This example shows how it is possible to create a custom configuration for a particular upstream associated with an Ingress rule.
|
||||
|
||||
```
|
||||
echo "
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: echoheaders
|
||||
annotations:
|
||||
ingress.kubernetes.io/upstream-fail-timeout: "30"
|
||||
spec:
|
||||
rules:
|
||||
- host: foo.bar.com
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
backend:
|
||||
serviceName: echoheaders
|
||||
servicePort: 80
|
||||
" | kubectl create -f -
|
||||
```
|
||||
|
||||
Check that the annotation is present in the Ingress rule:
|
||||
```
|
||||
kubectl get ingress echoheaders -o yaml
|
||||
```
|
||||
|
||||
Check that the NGINX configuration was updated, using kubectl or the status page:
|
||||
|
||||
```
|
||||
$ kubectl exec nginx-ingress-controller-v1ppm cat /etc/nginx/nginx.conf
|
||||
```
|
||||
|
||||
```
|
||||
....
|
||||
upstream default-echoheaders-x-80 {
|
||||
least_conn;
|
||||
server 10.2.92.2:8080 max_fails=5 fail_timeout=30;
|
||||
|
||||
}
|
||||
....
|
||||
```
|
||||
|
||||
|
||||

|
Binary file not shown.
After Width: | Height: | Size: 59 KiB |
8
controllers/nginx/examples/daemonset/README.md
Normal file
8
controllers/nginx/examples/daemonset/README.md
Normal file
|
@ -0,0 +1,8 @@
|
|||
|
||||
In some cases it may be required to run the Ingress controller on all the nodes of the cluster.
|
||||
Using [DaemonSet](https://github.com/kubernetes/kubernetes/blob/master/docs/design/daemon.md) it is possible to do this.
|
||||
The file `as-daemonset.yaml` contains an example:
|
||||
|
||||
```
|
||||
kubectl create -f as-daemonset.yaml
|
||||
```
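
After the DaemonSet is created, there should be one controller pod per schedulable node (a quick check, using the `name` label from `as-daemonset.yaml`):

```
kubectl get pods -o wide -l name=nginx-ingress-lb
```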
|
45
controllers/nginx/examples/daemonset/as-daemonset.yaml
Normal file
45
controllers/nginx/examples/daemonset/as-daemonset.yaml
Normal file
|
@ -0,0 +1,45 @@
|
|||
apiVersion: extensions/v1beta1
|
||||
kind: DaemonSet
|
||||
metadata:
|
||||
name: nginx-ingress-lb
|
||||
spec:
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
name: nginx-ingress-lb
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 60
|
||||
containers:
|
||||
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
|
||||
name: nginx-ingress-lb
|
||||
imagePullPolicy: Always
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 10
|
||||
timeoutSeconds: 1
|
||||
# use downward API
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
ports:
|
||||
- containerPort: 80
|
||||
hostPort: 80
|
||||
- containerPort: 443
|
||||
hostPort: 443
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
|
36
controllers/nginx/examples/default-backend.yaml
Normal file
36
controllers/nginx/examples/default-backend.yaml
Normal file
|
@ -0,0 +1,36 @@
|
|||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: default-http-backend
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
app: default-http-backend
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: default-http-backend
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 60
|
||||
containers:
|
||||
- name: default-http-backend
|
||||
# Any image is permissible as long as:
|
||||
# 1. It serves a 404 page at /
|
||||
# 2. It serves 200 on a /healthz endpoint
|
||||
image: gcr.io/google_containers/defaultbackend:1.0
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 8080
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 30
|
||||
timeoutSeconds: 5
|
||||
ports:
|
||||
- containerPort: 8080
|
||||
resources:
|
||||
limits:
|
||||
cpu: 10m
|
||||
memory: 20Mi
|
||||
requests:
|
||||
cpu: 10m
|
||||
memory: 20Mi
|
76
controllers/nginx/examples/default/README.md
Normal file
76
controllers/nginx/examples/default/README.md
Normal file
|
@ -0,0 +1,76 @@
|
|||
|
||||
Create the Ingress controller:
|
||||
```
|
||||
kubectl create -f rc-default.yaml
|
||||
```
|
||||
|
||||
To test if everything is working correctly:
|
||||
|
||||
`curl -v http://<node IP address>:80/foo -H 'Host: foo.bar.com'`
|
||||
|
||||
You should see output similar to:
|
||||
```
|
||||
* Trying 172.17.4.99...
|
||||
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
|
||||
> GET /foo HTTP/1.1
|
||||
> Host: foo.bar.com
|
||||
> User-Agent: curl/7.43.0
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 200 OK
|
||||
< Server: nginx/1.9.8
|
||||
< Date: Tue, 15 Dec 2015 13:45:13 GMT
|
||||
< Content-Type: text/plain
|
||||
< Transfer-Encoding: chunked
|
||||
< Connection: keep-alive
|
||||
< Vary: Accept-Encoding
|
||||
<
|
||||
CLIENT VALUES:
|
||||
client_address=10.2.84.43
|
||||
command=GET
|
||||
real path=/foo
|
||||
query=nil
|
||||
request_version=1.1
|
||||
request_uri=http://foo.bar.com:8080/foo
|
||||
|
||||
SERVER VALUES:
|
||||
server_version=nginx: 1.9.7 - lua: 9019
|
||||
|
||||
HEADERS RECEIVED:
|
||||
accept=*/*
|
||||
connection=close
|
||||
host=foo.bar.com
|
||||
user-agent=curl/7.43.0
|
||||
x-forwarded-for=172.17.4.1
|
||||
x-forwarded-host=foo.bar.com
|
||||
x-forwarded-server=foo.bar.com
|
||||
x-real-ip=172.17.4.1
|
||||
BODY:
|
||||
* Connection #0 to host 172.17.4.99 left intact
|
||||
```
|
||||
|
||||
If we try to get a non-existing route like `/foobar` we should see:
|
||||
```
|
||||
$ curl -v 172.17.4.99/foobar -H 'Host: foo.bar.com'
|
||||
* Trying 172.17.4.99...
|
||||
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
|
||||
> GET /foobar HTTP/1.1
|
||||
> Host: foo.bar.com
|
||||
> User-Agent: curl/7.43.0
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 404 Not Found
|
||||
< Server: nginx/1.9.8
|
||||
< Date: Tue, 15 Dec 2015 13:48:18 GMT
|
||||
< Content-Type: text/html
|
||||
< Transfer-Encoding: chunked
|
||||
< Connection: keep-alive
|
||||
< Vary: Accept-Encoding
|
||||
<
|
||||
default backend - 404
|
||||
* Connection #0 to host 172.17.4.99 left intact
|
||||
```
|
||||
|
||||
(this test checks that the default backend is working properly)
|
||||
|
||||
*By replacing the default backend with a custom one, we can change the default error pages provided by NGINX.*
|
51
controllers/nginx/examples/default/rc-default.yaml
Normal file
51
controllers/nginx/examples/default/rc-default.yaml
Normal file
|
@ -0,0 +1,51 @@
|
|||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx-ingress-controller
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
k8s-app: nginx-ingress-lb
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
name: nginx-ingress-lb
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 60
|
||||
containers:
|
||||
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
|
||||
name: nginx-ingress-lb
|
||||
imagePullPolicy: Always
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 10
|
||||
timeoutSeconds: 1
|
||||
# use downward API
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
ports:
|
||||
- containerPort: 80
|
||||
hostPort: 80
|
||||
- containerPort: 443
|
||||
hostPort: 443
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
|
148
controllers/nginx/examples/external-auth/README.md
Normal file
148
controllers/nginx/examples/external-auth/README.md
Normal file
|
@ -0,0 +1,148 @@
|
|||
# External authentication
|
||||
|
||||
### Example 1:
|
||||
|
||||
Use an external service (Basic Auth) located at `https://httpbin.org`:
|
||||
|
||||
```
|
||||
$ kubectl create -f ingress.yaml
|
||||
ingress "external-auth" created
|
||||
$ kubectl get ing external-auth
|
||||
NAME HOSTS ADDRESS PORTS AGE
|
||||
external-auth external-auth-01.sample.com 172.17.4.99 80 13s
|
||||
$ kubectl get ing external-auth -o yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
annotations:
|
||||
ingress.kubernetes.io/auth-url: https://httpbin.org/basic-auth/user/passwd
|
||||
creationTimestamp: 2016-10-03T13:50:35Z
|
||||
generation: 1
|
||||
name: external-auth
|
||||
namespace: default
|
||||
resourceVersion: "2068378"
|
||||
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/external-auth
|
||||
uid: 5c388f1d-8970-11e6-9004-080027d2dc94
|
||||
spec:
|
||||
rules:
|
||||
- host: external-auth-01.sample.com
|
||||
http:
|
||||
paths:
|
||||
- backend:
|
||||
serviceName: echoheaders
|
||||
servicePort: 80
|
||||
path: /
|
||||
status:
|
||||
loadBalancer:
|
||||
ingress:
|
||||
- ip: 172.17.4.99
|
||||
$
|
||||
```
|
||||
|
||||
Test 1: no username/password (expect code 401)
|
||||
```
|
||||
$ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com'
|
||||
* Rebuilt URL to: http://172.17.4.99/
|
||||
* Trying 172.17.4.99...
|
||||
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
|
||||
> GET / HTTP/1.1
|
||||
> Host: external-auth-01.sample.com
|
||||
> User-Agent: curl/7.50.1
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 401 Unauthorized
|
||||
< Server: nginx/1.11.3
|
||||
< Date: Mon, 03 Oct 2016 14:52:08 GMT
|
||||
< Content-Type: text/html
|
||||
< Content-Length: 195
|
||||
< Connection: keep-alive
|
||||
< WWW-Authenticate: Basic realm="Fake Realm"
|
||||
<
|
||||
<html>
|
||||
<head><title>401 Authorization Required</title></head>
|
||||
<body bgcolor="white">
|
||||
<center><h1>401 Authorization Required</h1></center>
|
||||
<hr><center>nginx/1.11.3</center>
|
||||
</body>
|
||||
</html>
|
||||
* Connection #0 to host 172.17.4.99 left intact
|
||||
```
|
||||
|
||||
Test 2: valid username/password (expect code 200)
|
||||
```
|
||||
$ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:passwd'
|
||||
* Rebuilt URL to: http://172.17.4.99/
|
||||
* Trying 172.17.4.99...
|
||||
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
|
||||
* Server auth using Basic with user 'user'
|
||||
> GET / HTTP/1.1
|
||||
> Host: external-auth-01.sample.com
|
||||
> Authorization: Basic dXNlcjpwYXNzd2Q=
|
||||
> User-Agent: curl/7.50.1
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 200 OK
|
||||
< Server: nginx/1.11.3
|
||||
< Date: Mon, 03 Oct 2016 14:52:50 GMT
|
||||
< Content-Type: text/plain
|
||||
< Transfer-Encoding: chunked
|
||||
< Connection: keep-alive
|
||||
<
|
||||
CLIENT VALUES:
|
||||
client_address=10.2.60.2
|
||||
command=GET
|
||||
real path=/
|
||||
query=nil
|
||||
request_version=1.1
|
||||
request_uri=http://external-auth-01.sample.com:8080/
|
||||
|
||||
SERVER VALUES:
|
||||
server_version=nginx: 1.9.11 - lua: 10001
|
||||
|
||||
HEADERS RECEIVED:
|
||||
accept=*/*
|
||||
authorization=Basic dXNlcjpwYXNzd2Q=
|
||||
connection=close
|
||||
host=external-auth-01.sample.com
|
||||
user-agent=curl/7.50.1
|
||||
x-forwarded-for=10.2.60.1
|
||||
x-forwarded-host=external-auth-01.sample.com
|
||||
x-forwarded-port=80
|
||||
x-forwarded-proto=http
|
||||
x-real-ip=10.2.60.1
|
||||
BODY:
|
||||
* Connection #0 to host 172.17.4.99 left intact
|
||||
-no body in request-
|
||||
```
|
||||
|
||||
Test 3: invalid username/password (expect code 401)
|
||||
```
|
||||
curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:user'
|
||||
* Rebuilt URL to: http://172.17.4.99/
|
||||
* Trying 172.17.4.99...
|
||||
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
|
||||
* Server auth using Basic with user 'user'
|
||||
> GET / HTTP/1.1
|
||||
> Host: external-auth-01.sample.com
|
||||
> Authorization: Basic dXNlcjp1c2Vy
|
||||
> User-Agent: curl/7.50.1
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 401 Unauthorized
|
||||
< Server: nginx/1.11.3
|
||||
< Date: Mon, 03 Oct 2016 14:53:04 GMT
|
||||
< Content-Type: text/html
|
||||
< Content-Length: 195
|
||||
< Connection: keep-alive
|
||||
* Authentication problem. Ignoring this.
|
||||
< WWW-Authenticate: Basic realm="Fake Realm"
|
||||
<
|
||||
<html>
|
||||
<head><title>401 Authorization Required</title></head>
|
||||
<body bgcolor="white">
|
||||
<center><h1>401 Authorization Required</h1></center>
|
||||
<hr><center>nginx/1.11.3</center>
|
||||
</body>
|
||||
</html>
|
||||
* Connection #0 to host 172.17.4.99 left intact
|
||||
```
|
15
controllers/nginx/examples/external-auth/ingress.yaml
Normal file
15
controllers/nginx/examples/external-auth/ingress.yaml
Normal file
|
@ -0,0 +1,15 @@
|
|||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
annotations:
|
||||
ingress.kubernetes.io/auth-url: "https://httpbin.org/basic-auth/user/passwd"
|
||||
name: external-auth
|
||||
spec:
|
||||
rules:
|
||||
- host: external-auth-01.sample.com
|
||||
http:
|
||||
paths:
|
||||
- backend:
|
||||
serviceName: echoheaders
|
||||
servicePort: 80
|
||||
path: /
|
62
controllers/nginx/examples/full/rc-full.yaml
Normal file
62
controllers/nginx/examples/full/rc-full.yaml
Normal file
|
@ -0,0 +1,62 @@
|
|||
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx-ingress-controller
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
k8s-app: nginx-ingress-lb
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
name: nginx-ingress-lb
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 60
|
||||
volumes:
|
||||
- name: dhparam-example
|
||||
secret:
|
||||
secretName: dhparam-example
|
||||
containers:
|
||||
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
|
||||
name: nginx-ingress-lb
|
||||
imagePullPolicy: Always
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 10
|
||||
timeoutSeconds: 1
|
||||
# use downward API
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
ports:
|
||||
- containerPort: 80
|
||||
hostPort: 80
|
||||
- containerPort: 443
|
||||
hostPort: 443
|
||||
- containerPort: 8080
|
||||
hostPort: 9000
|
||||
volumeMounts:
|
||||
- mountPath: /etc/nginx-ssl/dhparam
|
||||
name: dhparam-example
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-configmap-example
|
||||
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
|
25
controllers/nginx/examples/ingress.yaml
Normal file
25
controllers/nginx/examples/ingress.yaml
Normal file
|
@ -0,0 +1,25 @@
|
|||
# An Ingress with 2 hosts and 3 endpoints
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: echomap
|
||||
spec:
|
||||
rules:
|
||||
- host: foo.bar.com
|
||||
http:
|
||||
paths:
|
||||
- path: /foo
|
||||
backend:
|
||||
serviceName: echoheaders-x
|
||||
servicePort: 80
|
||||
- host: bar.baz.com
|
||||
http:
|
||||
paths:
|
||||
- path: /bar
|
||||
backend:
|
||||
serviceName: echoheaders-y
|
||||
servicePort: 80
|
||||
- path: /foo
|
||||
backend:
|
||||
serviceName: echoheaders-x
|
||||
servicePort: 80
|
94
controllers/nginx/examples/multi-tls/README.md
Normal file
94
controllers/nginx/examples/multi-tls/README.md
Normal file
|
@ -0,0 +1,94 @@
|
|||
# Multi TLS certificate termination
|
||||
|
||||
This example uses 2 different certificates to terminate SSL for 2 hostnames.
|
||||
|
||||
1. Deploy the controller by creating the rc in the parent dir
|
||||
2. Create tls secrets for foo.bar.com and bar.baz.com as indicated in the yaml (see the sketch after this list)
|
||||
3. Create multi-tls.yaml
|
||||
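
One possible way to create those secrets is with self-signed certificates (a sketch; the secret names `foobar` and `barbaz` match the ones referenced in `multi-tls.yaml`):

```console
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/foobar.key -out /tmp/foobar.crt -subj "/CN=foo.bar.com"
$ kubectl create secret tls foobar --cert=/tmp/foobar.crt --key=/tmp/foobar.key
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/barbaz.key -out /tmp/barbaz.crt -subj "/CN=bar.baz.com"
$ kubectl create secret tls barbaz --cert=/tmp/barbaz.crt --key=/tmp/barbaz.key
```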
|
||||
This should generate a segment like:
|
||||
```console
|
||||
$ kubectl exec -it nginx-ingress-controller-6vwd1 -- cat /etc/nginx/nginx.conf | grep "foo.bar.com" -B 7 -A 35
|
||||
server {
|
||||
listen 80;
|
||||
listen 443 ssl http2;
|
||||
ssl_certificate /etc/nginx-ssl/default-foobar.pem;
|
||||
ssl_certificate_key /etc/nginx-ssl/default-foobar.pem;
|
||||
|
||||
|
||||
server_name foo.bar.com;
|
||||
|
||||
|
||||
if ($scheme = http) {
|
||||
return 301 https://$host$request_uri;
|
||||
}
|
||||
|
||||
|
||||
|
||||
location / {
|
||||
proxy_set_header Host $host;
|
||||
|
||||
# Pass Real IP
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
|
||||
# Allow websocket connections
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection $connection_upgrade;
|
||||
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Host $host;
|
||||
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
|
||||
|
||||
proxy_connect_timeout 5s;
|
||||
proxy_send_timeout 60s;
|
||||
proxy_read_timeout 60s;
|
||||
|
||||
proxy_redirect off;
|
||||
proxy_buffering off;
|
||||
|
||||
proxy_http_version 1.1;
|
||||
|
||||
proxy_pass http://default-echoheaders-80;
|
||||
}
|
||||
```
|
||||
|
||||
And you should be able to reach your nginx service or echoheaders service using a hostname switch:
|
||||
```console
|
||||
$ kubectl get ing
|
||||
NAME RULE BACKEND ADDRESS AGE
|
||||
foo-tls - 104.154.30.67 13m
|
||||
foo.bar.com
|
||||
/ echoheaders:80
|
||||
bar.baz.com
|
||||
/ nginx:80
|
||||
|
||||
$ curl https://104.154.30.67 -H 'Host:foo.bar.com' -k
|
||||
CLIENT VALUES:
|
||||
client_address=10.245.0.6
|
||||
command=GET
|
||||
real path=/
|
||||
query=nil
|
||||
request_version=1.1
|
||||
request_uri=http://foo.bar.com:8080/
|
||||
|
||||
SERVER VALUES:
|
||||
server_version=nginx: 1.9.11 - lua: 10001
|
||||
|
||||
HEADERS RECEIVED:
|
||||
accept=*/*
|
||||
connection=close
|
||||
host=foo.bar.com
|
||||
user-agent=curl/7.35.0
|
||||
x-forwarded-for=10.245.0.1
|
||||
x-forwarded-host=foo.bar.com
|
||||
x-forwarded-proto=https
|
||||
|
||||
$ curl https://104.154.30.67 -H 'Host:bar.baz.com' -k
|
||||
<!DOCTYPE html>
|
||||
<html>
|
||||
<head>
|
||||
<title>Welcome to nginx on Debian!</title>
|
||||
|
||||
$ curl 104.154.30.67
|
||||
default backend - 404
|
||||
```
|
102
controllers/nginx/examples/multi-tls/multi-tls.yaml
Normal file
102
controllers/nginx/examples/multi-tls/multi-tls.yaml
Normal file
|
@ -0,0 +1,102 @@
|
|||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: nginx
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
ports:
|
||||
- port: 80
|
||||
targetPort: 80
|
||||
protocol: TCP
|
||||
name: http
|
||||
selector:
|
||||
app: nginx
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: nginx
|
||||
spec:
|
||||
containers:
|
||||
- name: nginx
|
||||
image: gcr.io/google_containers/nginx
|
||||
ports:
|
||||
- containerPort: 80
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: echoheaders
|
||||
labels:
|
||||
app: echoheaders
|
||||
spec:
|
||||
ports:
|
||||
- port: 80
|
||||
targetPort: 8080
|
||||
protocol: TCP
|
||||
name: http
|
||||
selector:
|
||||
app: echoheaders
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: echoheaders
|
||||
spec:
|
||||
replicas: 1
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: echoheaders
|
||||
spec:
|
||||
containers:
|
||||
- name: echoheaders
|
||||
image: gcr.io/google_containers/echoserver:1.4
|
||||
ports:
|
||||
- containerPort: 8080
|
||||
---
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: foo-tls
|
||||
namespace: default
|
||||
spec:
|
||||
tls:
|
||||
- hosts:
|
||||
- foo.bar.com
|
||||
# This secret must exist beforehand
|
||||
# The cert must also contain the subj-name foo.bar.com
|
||||
# You can create it via:
|
||||
# make keys secret SECRET=/tmp/foobar.json HOST=foo.bar.com NAME=foobar
|
||||
# https://github.com/kubernetes/contrib/tree/master/ingress/controllers/gce/https_example
|
||||
secretName: foobar
|
||||
- hosts:
|
||||
- bar.baz.com
|
||||
# This secret must exist beforehand
|
||||
# The cert must also contain the subj-name bar.baz.com
|
||||
# You can create it via:
|
||||
# make keys secret SECRET=/tmp/barbaz.json HOST=bar.baz.com NAME=barbaz
|
||||
# https://github.com/kubernetes/contrib/tree/master/ingress/controllers/gce/https_example
|
||||
secretName: barbaz
|
||||
rules:
|
||||
- host: foo.bar.com
|
||||
http:
|
||||
paths:
|
||||
- backend:
|
||||
serviceName: echoheaders
|
||||
servicePort: 80
|
||||
path: /
|
||||
- host: bar.baz.com
|
||||
http:
|
||||
paths:
|
||||
- backend:
|
||||
serviceName: nginx
|
||||
servicePort: 80
|
||||
path: /
|
34
controllers/nginx/examples/proxy-protocol/README.md
Normal file
34
controllers/nginx/examples/proxy-protocol/README.md
Normal file
|
@ -0,0 +1,34 @@
|
|||
# Nginx ingress controller using Proxy Protocol
|
||||
|
||||
To use the Proxy Protocol in a load balancing solution, both the load balancer and its backend need to enable Proxy Protocol.
|
||||
|
||||
To enable it for NGINX you have to set up a [configmap](nginx-configmap.yaml) option.
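
Assuming the manifests in this directory, the pieces can be created like this (the ConfigMap name must match the `--nginx-configmap` argument in `nginx-rc.yaml`):

```
kubectl create -f nginx-configmap.yaml
kubectl create -f nginx-rc.yaml
kubectl create -f nginx-svc.yaml
```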
|
||||
|
||||
## HAProxy
|
||||
|
||||
This HAProxy snippet forwards HTTP(S) traffic to a two-worker Kubernetes cluster, with NGINX running on the node ports defined in this example's [service](nginx-svc.yaml).
|
||||
|
||||
|
||||
```
|
||||
listen kube-nginx-http
|
||||
bind :::80 v6only
|
||||
bind 0.0.0.0:80
|
||||
mode tcp
|
||||
option tcplog
|
||||
balance leastconn
|
||||
server node1 <node-ip1>:32080 check-send-proxy inter 10s send-proxy
|
||||
server node2 <node-ip2>:32080 check-send-proxy inter 10s send-proxy
|
||||
|
||||
listen kube-nginx-https
|
||||
bind :::443 v6only
|
||||
bind 0.0.0.0:443
|
||||
mode tcp
|
||||
option tcplog
|
||||
balance leastconn
|
||||
server node1 <node-ip1>:32443 check-send-proxy inter 10s send-proxy
|
||||
server node2 <node-ip2>:32443 check-send-proxy inter 10s send-proxy
|
||||
```
|
||||
|
||||
## ELBs in AWS
|
||||
|
||||
See this [documentation](http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html) for how to enable Proxy Protocol on ELBs.
|
|
@ -0,0 +1,6 @@
|
|||
apiVersion: v1
|
||||
metadata:
|
||||
name: nginx-ingress-controller
|
||||
data:
|
||||
use-proxy-protocol: "true"
|
||||
kind: ConfigMap
|
50
controllers/nginx/examples/proxy-protocol/nginx-rc.yaml
Normal file
50
controllers/nginx/examples/proxy-protocol/nginx-rc.yaml
Normal file
|
@ -0,0 +1,50 @@
|
|||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx-ingress-controller
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
k8s-app: nginx-ingress-lb
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
name: nginx-ingress-lb
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 60
|
||||
containers:
|
||||
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
|
||||
name: nginx-ingress-lb
|
||||
imagePullPolicy: Always
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 10
|
||||
timeoutSeconds: 1
|
||||
# use downward API
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
ports:
|
||||
- containerPort: 80
|
||||
- containerPort: 443
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
|
||||
- --nginx-configmap=$(POD_NAMESPACE)/nginx-ingress-controller
|
19
controllers/nginx/examples/proxy-protocol/nginx-svc.yaml
Normal file
19
controllers/nginx/examples/proxy-protocol/nginx-svc.yaml
Normal file
|
@ -0,0 +1,19 @@
|
|||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: nginx-ingress-controller
|
||||
spec:
|
||||
type: NodePort
|
||||
ports:
|
||||
- port: 80
|
||||
targetPort: 80
|
||||
nodePort: 32080
|
||||
protocol: TCP
|
||||
name: http
|
||||
- port: 443
|
||||
targetPort: 443
|
||||
nodePort: 32443
|
||||
protocol: TCP
|
||||
name: https
|
||||
selector:
|
||||
k8s-app: nginx-ingress-lb
|
67
controllers/nginx/examples/rewrite/README.md
Normal file
67
controllers/nginx/examples/rewrite/README.md
Normal file
|
@ -0,0 +1,67 @@
|
|||
|
||||
Create an Ingress rule with a rewrite annotation:
|
||||
```
|
||||
$ echo "
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
annotations:
|
||||
ingress.kubernetes.io/rewrite-target: /
|
||||
name: rewrite
|
||||
namespace: default
|
||||
spec:
|
||||
rules:
|
||||
- host: rewrite.bar.com
|
||||
http:
|
||||
paths:
|
||||
- backend:
|
||||
serviceName: echoheaders
|
||||
servicePort: 80
|
||||
path: /something
|
||||
" | kubectl create -f -
|
||||
```
|
||||
|
||||
Check that the rewrite is working:
|
||||
|
||||
```
|
||||
$ curl -v http://172.17.4.99/something -H 'Host: rewrite.bar.com'
|
||||
* Trying 172.17.4.99...
|
||||
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
|
||||
> GET /something HTTP/1.1
|
||||
> Host: rewrite.bar.com
|
||||
> User-Agent: curl/7.43.0
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 200 OK
|
||||
< Server: nginx/1.11.0
|
||||
< Date: Tue, 31 May 2016 16:07:31 GMT
|
||||
< Content-Type: text/plain
|
||||
< Transfer-Encoding: chunked
|
||||
< Connection: keep-alive
|
||||
<
|
||||
CLIENT VALUES:
|
||||
client_address=10.2.56.9
|
||||
command=GET
|
||||
real path=/
|
||||
query=nil
|
||||
request_version=1.1
|
||||
request_uri=http://rewrite.bar.com:8080/
|
||||
|
||||
SERVER VALUES:
|
||||
server_version=nginx: 1.9.11 - lua: 10001
|
||||
|
||||
HEADERS RECEIVED:
|
||||
accept=*/*
|
||||
connection=close
|
||||
host=rewrite.bar.com
|
||||
user-agent=curl/7.43.0
|
||||
x-forwarded-for=10.2.56.1
|
||||
x-forwarded-host=rewrite.bar.com
|
||||
x-forwarded-port=80
|
||||
x-forwarded-proto=http
|
||||
x-real-ip=10.2.56.1
|
||||
BODY:
|
||||
* Connection #0 to host 172.17.4.99 left intact
|
||||
-no body in request-
|
||||
```
|
||||
|
128
controllers/nginx/examples/sysctl/change-proc-values-rc.yaml
Normal file
128
controllers/nginx/examples/sysctl/change-proc-values-rc.yaml
Normal file
|
@ -0,0 +1,128 @@
|
|||
apiVersion: v1
|
||||
kind: Service
|
||||
metadata:
|
||||
name: default-http-backend
|
||||
labels:
|
||||
k8s-app: default-http-backend
|
||||
spec:
|
||||
ports:
|
||||
- port: 80
|
||||
targetPort: 8080
|
||||
protocol: TCP
|
||||
name: http
|
||||
selector:
|
||||
k8s-app: default-http-backend
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: default-http-backend
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
k8s-app: default-http-backend
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: default-http-backend
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 60
|
||||
containers:
|
||||
- name: default-http-backend
|
||||
# Any image is permissible as long as:
|
||||
# 1. It serves a 404 page at /
|
||||
# 2. It serves 200 on a /healthz endpoint
|
||||
image: gcr.io/google_containers/defaultbackend:1.0
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 8080
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 30
|
||||
timeoutSeconds: 5
|
||||
ports:
|
||||
- containerPort: 8080
|
||||
resources:
|
||||
limits:
|
||||
cpu: 10m
|
||||
memory: 20Mi
|
||||
requests:
|
||||
cpu: 10m
|
||||
memory: 20Mi
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx-ingress-controller
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
k8s-app: nginx-ingress-lb
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
name: nginx-ingress-lb
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 60
|
||||
containers:
|
||||
- image: alpine:3.4
|
||||
name: sysctl-buddy
|
||||
# using kubectl exec you can check which other parameters it is possible to change
|
||||
# IPC Namespace: kernel.msgmax, kernel.msgmnb, kernel.msgmni, kernel.sem, kernel.shmall,
|
||||
# kernel.shmmax, kernel.shmmni, kernel.shm_rmid_forced and Sysctls
|
||||
# beginning with fs.mqueue.*
|
||||
# Network Namespace: Sysctls beginning with net.*
|
||||
#
|
||||
# kubectl exec <podname> -c sysctl-buddy -- sysctl -A | grep net
|
||||
command:
|
||||
- /bin/sh
|
||||
- -c
|
||||
- |
|
||||
while true; do
|
||||
sysctl -w net.core.somaxconn=32768
|
||||
sysctl -w net.ipv4.ip_local_port_range='1024 65535'
|
||||
sleep 10
|
||||
done
|
||||
imagePullPolicy: IfNotPresent
|
||||
securityContext:
|
||||
privileged: true
|
||||
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
|
||||
name: nginx-ingress-lb
|
||||
imagePullPolicy: Always
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 10
|
||||
timeoutSeconds: 1
|
||||
# use downward API
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
ports:
|
||||
- containerPort: 80
|
||||
hostPort: 80
|
||||
- containerPort: 443
|
||||
hostPort: 443
|
||||
# we expose 8080 to access nginx stats in url /nginx-status
|
||||
# this is optional
|
||||
- containerPort: 8080
|
||||
hostPort: 8080
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
|
74
controllers/nginx/examples/tcp/README.md
Normal file
74
controllers/nginx/examples/tcp/README.md
Normal file
|
@ -0,0 +1,74 @@
|
|||
|
||||
To configure which services and ports will be exposed:
|
||||
```
|
||||
kubectl create -f tcp-configmap-example.yaml
|
||||
```
|
||||
|
||||
The file `tcp-configmap-example.yaml` uses a ConfigMap where the key is the external port to use and the value is
|
||||
`<namespace/service name>:<service port>`.
|
||||
It is possible to use either the number or the name of the port.
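
For example, the ConfigMap in this directory exposes the `example-go` service on external port 9000 (content of `tcp-configmap-example.yaml`):

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-example
data:
  9000: "default/example-go:8080"
```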
|
||||
|
||||
```
|
||||
kubectl create -f rc-tcp.yaml
|
||||
```
|
||||
|
||||
Now we can test the new service:
|
||||
```
|
||||
$ (sleep 1; echo "GET / HTTP/1.1"; echo "Host: 172.17.4.99:9000"; echo;echo;sleep 2) | telnet 172.17.4.99 9000
|
||||
|
||||
Trying 172.17.4.99...
|
||||
Connected to 172.17.4.99.
|
||||
Escape character is '^]'.
|
||||
HTTP/1.1 200 OK
|
||||
Server: nginx/1.9.7
|
||||
Date: Tue, 15 Dec 2015 14:46:28 GMT
|
||||
Content-Type: text/plain
|
||||
Transfer-Encoding: chunked
|
||||
Connection: keep-alive
|
||||
|
||||
f
|
||||
CLIENT VALUES:
|
||||
|
||||
1a
|
||||
client_address=10.2.84.45
|
||||
|
||||
c
|
||||
command=GET
|
||||
|
||||
c
|
||||
real path=/
|
||||
|
||||
a
|
||||
query=nil
|
||||
|
||||
14
|
||||
request_version=1.1
|
||||
|
||||
25
|
||||
request_uri=http://172.17.4.99:8080/
|
||||
|
||||
1
|
||||
|
||||
|
||||
f
|
||||
SERVER VALUES:
|
||||
|
||||
28
|
||||
server_version=nginx: 1.9.7 - lua: 9019
|
||||
|
||||
1
|
||||
|
||||
|
||||
12
|
||||
HEADERS RECEIVED:
|
||||
|
||||
16
|
||||
host=172.17.4.99:9000
|
||||
|
||||
6
|
||||
BODY:
|
||||
|
||||
14
|
||||
-no body in request-
|
||||
0
|
||||
```
|
56
controllers/nginx/examples/tcp/rc-tcp.yaml
Normal file
56
controllers/nginx/examples/tcp/rc-tcp.yaml
Normal file
|
@ -0,0 +1,56 @@
|
|||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx-ingress-controller
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
k8s-app: nginx-ingress-lb
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
name: nginx-ingress-lb
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 60
|
||||
containers:
|
||||
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
|
||||
name: nginx-ingress-lb
|
||||
imagePullPolicy: Always
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 10
|
||||
timeoutSeconds: 1
|
||||
# use downward API
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
ports:
|
||||
- containerPort: 80
|
||||
hostPort: 80
|
||||
- containerPort: 443
|
||||
hostPort: 443
|
||||
# service echoheaders as TCP service default/echoheaders:9000
|
||||
# 9000 indicates the port used to expose the service
|
||||
- containerPort: 9000
|
||||
hostPort: 9000
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
|
||||
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-configmap-example
|
|
@ -0,0 +1,6 @@
|
|||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: tcp-configmap-example
|
||||
data:
|
||||
9000: "default/example-go:8080"
|
90
controllers/nginx/examples/tls/README.md
Normal file
90
controllers/nginx/examples/tls/README.md
Normal file
|
@ -0,0 +1,90 @@
|
|||
This is an example of using a TLS Ingress rule to enable SSL in NGINX.
|
||||
|
||||
# TLS certificate termination
|
||||
|
||||
This example uses a TLS certificate to terminate SSL for one hostname (`foo.bar.com`).
|
||||
|
||||
1. Deploy the controller by creating the rc in the parent dir
|
||||
2. Create tls secret for foo.bar.com
|
||||
3. Create rc-ssl.yaml
|
||||
|
||||
*Next, create an SSL certificate for the `foo.bar.com` host:*
|
||||
|
||||
```
|
||||
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN=foo.bar.com"
|
||||
```
|
||||
|
||||
*Now store the SSL certificate in a secret:*
|
||||
|
||||
```
|
||||
echo "
|
||||
apiVersion: v1
|
||||
kind: Secret
|
||||
metadata:
|
||||
name: foo-secret
|
||||
data:
|
||||
tls.crt: `base64 /tmp/tls.crt`
|
||||
tls.key: `base64 /tmp/tls.key`
|
||||
" | kubectl create -f -
|
||||
```
|
||||
|
||||
*Finally, create a TLS Ingress rule:*
|
||||
|
||||
```
|
||||
echo "
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: foo
|
||||
namespace: default
|
||||
spec:
|
||||
tls:
|
||||
- hosts:
|
||||
- foo.bar.com
|
||||
secretName: foo-secret
|
||||
rules:
|
||||
- host: foo.bar.com
|
||||
http:
|
||||
paths:
|
||||
- backend:
|
||||
serviceName: echoheaders-x
|
||||
servicePort: 80
|
||||
path: /
|
||||
" | kubectl create -f -
|
||||
```
|
||||
|
||||
You should be able to reach your nginx service or echoheaders service using a hostname:
|
||||
```
|
||||
$ kubectl get ing
|
||||
NAME RULE BACKEND ADDRESS
|
||||
foo - 10.4.0.3
|
||||
foo.bar.com
|
||||
/ echoheaders-x:80
|
||||
```
|
||||
|
||||
```
|
||||
$ curl https://10.4.0.3 -H 'Host:foo.bar.com' -k
|
||||
|
||||
CLIENT VALUES:
|
||||
client_address=10.2.48.4
|
||||
command=GET
|
||||
real path=/
|
||||
query=nil
|
||||
request_version=1.1
|
||||
request_uri=http://foo.bar.com:8080/
|
||||
|
||||
SERVER VALUES:
|
||||
server_version=nginx: 1.9.7 - lua: 9019
|
||||
|
||||
HEADERS RECEIVED:
|
||||
accept=*/*
|
||||
connection=close
|
||||
host=foo.bar.com
|
||||
user-agent=curl/7.43.0
|
||||
x-forwarded-for=10.2.48.1
|
||||
x-forwarded-host=foo.bar.com
|
||||
x-forwarded-proto=https
|
||||
x-real-ip=10.2.48.1
|
||||
BODY:
|
||||
-no body in request-
|
||||
```
|
35
controllers/nginx/examples/tls/dhparam.sh
Executable file
35
controllers/nginx/examples/tls/dhparam.sh
Executable file
|
@ -0,0 +1,35 @@
|
|||
#!/usr/bin/env bash
|
||||
|
||||
# Copyright 2015 The Kubernetes Authors.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
# https://www.openssl.org/docs/manmaster/apps/dhparam.html
|
||||
# this command generates a key used to get "Perfect Forward Secrecy" in nginx
|
||||
# https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam
|
||||
openssl dhparam -out dhparam.pem 4096
|
||||
|
||||
cat <<EOF > dhparam-example.yaml
|
||||
{
|
||||
"kind": "Secret",
|
||||
"apiVersion": "v1",
|
||||
"metadata": {
|
||||
"name": "dhparam-example"
|
||||
},
|
||||
"data": {
|
||||
"dhparam.pem": "$(cat ./dhparam.pem | base64)"
|
||||
}
|
||||
}
|
||||
|
||||
EOF
|
51
controllers/nginx/examples/tls/rc-ssl.yaml
Normal file
51
controllers/nginx/examples/tls/rc-ssl.yaml
Normal file
|
@ -0,0 +1,51 @@
|
|||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx-ingress-controller
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
k8s-app: nginx-ingress-lb
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
name: nginx-ingress-lb
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 60
|
||||
containers:
|
||||
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
|
||||
name: nginx-ingress-lb
|
||||
imagePullPolicy: Always
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 10
|
||||
timeoutSeconds: 1
|
||||
# use downward API
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
ports:
|
||||
- containerPort: 80
|
||||
hostPort: 80
|
||||
- containerPort: 443
|
||||
hostPort: 443
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
|
13
controllers/nginx/examples/udp/README.md
Normal file
13
controllers/nginx/examples/udp/README.md
Normal file
|
@ -0,0 +1,13 @@
|
|||
|
||||
To configure which services and ports will be exposed:
|
||||
```
|
||||
kubectl create -f udp-configmap-example.yaml
|
||||
```
|
||||
|
||||
The file `udp-configmap-example.yaml` uses a ConfigMap where the key is the external port to use and the value is
|
||||
`<namespace/service name>:<service port>`.
|
||||
It is possible to use either the number or the name of the port.
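
For example, the ConfigMap in this directory exposes the cluster DNS service over UDP port 53 (content of `udp-configmap-example.yaml`):

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-configmap-example
data:
  53: "kube-system/kube-dns:53"
```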
|
||||
|
||||
```
|
||||
kubectl create -f rc-udp.yaml
|
||||
```
|
54
controllers/nginx/examples/udp/rc-udp.yaml
Normal file
54
controllers/nginx/examples/udp/rc-udp.yaml
Normal file
|
@ -0,0 +1,54 @@
|
|||
apiVersion: v1
|
||||
kind: ReplicationController
|
||||
metadata:
|
||||
name: nginx-ingress-controller
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
spec:
|
||||
replicas: 1
|
||||
selector:
|
||||
k8s-app: nginx-ingress-lb
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
k8s-app: nginx-ingress-lb
|
||||
name: nginx-ingress-lb
|
||||
spec:
|
||||
terminationGracePeriodSeconds: 60
|
||||
containers:
|
||||
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
|
||||
name: nginx-ingress-lb
|
||||
imagePullPolicy: Always
|
||||
readinessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
livenessProbe:
|
||||
httpGet:
|
||||
path: /healthz
|
||||
port: 10254
|
||||
scheme: HTTP
|
||||
initialDelaySeconds: 10
|
||||
timeoutSeconds: 1
|
||||
# use downward API
|
||||
env:
|
||||
- name: POD_NAME
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.name
|
||||
- name: POD_NAMESPACE
|
||||
valueFrom:
|
||||
fieldRef:
|
||||
fieldPath: metadata.namespace
|
||||
ports:
|
||||
- containerPort: 80
|
||||
hostPort: 80
|
||||
- containerPort: 443
|
||||
hostPort: 443
|
||||
- containerPort: 53
|
||||
hostPort: 53
|
||||
args:
|
||||
- /nginx-ingress-controller
|
||||
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
|
||||
- --udp-services-configmap=$(POD_NAMESPACE)/udp-configmap-example
|
|
@ -0,0 +1,6 @@
|
|||
apiVersion: v1
|
||||
kind: ConfigMap
|
||||
metadata:
|
||||
name: udp-configmap-example
|
||||
data:
|
||||
53: "kube-system/kube-dns:53"
|
123
controllers/nginx/examples/whitelist/README.md
Normal file
123
controllers/nginx/examples/whitelist/README.md
Normal file
|
@ -0,0 +1,123 @@
|
|||
|
||||
This example shows how it is possible to restrict access using the `ingress.kubernetes.io/whitelist-source-range` annotation:
|
||||
|
||||
```
|
||||
echo "
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
name: whitelist
|
||||
annotations:
|
||||
ingress.kubernetes.io/whitelist-source-range: "1.1.1.1/24"
|
||||
spec:
|
||||
rules:
|
||||
- host: foo.bar.com
|
||||
http:
|
||||
paths:
|
||||
- path: /
|
||||
backend:
|
||||
serviceName: echoheaders
|
||||
servicePort: 80
|
||||
" | kubectl create -f -
|
||||
```
|
||||
|
||||
Check that the annotation is present in the Ingress rule:
|
||||
```
|
||||
$ kubectl get ingress whitelist -o yaml
|
||||
apiVersion: extensions/v1beta1
|
||||
kind: Ingress
|
||||
metadata:
|
||||
annotations:
|
||||
ingress.kubernetes.io/whitelist-source-range: 1.1.1.1/24
|
||||
creationTimestamp: 2016-06-09T21:39:06Z
|
||||
generation: 2
|
||||
name: whitelist
|
||||
namespace: default
|
||||
resourceVersion: "419363"
|
||||
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/whitelist
|
||||
uid: 97b74737-2e8a-11e6-90db-080027d2dc94
|
||||
spec:
|
||||
rules:
|
||||
- host: whitelist.bar.com
|
||||
http:
|
||||
paths:
|
||||
- backend:
|
||||
serviceName: echoheaders
|
||||
servicePort: 80
|
||||
path: /
|
||||
status:
|
||||
loadBalancer:
|
||||
ingress:
|
||||
- ip: 172.17.4.99
|
||||
```
|
||||
|
||||
Finally, test that it is not possible to access the URL:
|
||||
|
||||
```
|
||||
$ curl -v http://172.17.4.99/ -H 'Host: whitelist.bar.com'
|
||||
* Trying 172.17.4.99...
|
||||
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
|
||||
> GET / HTTP/1.1
|
||||
> Host: whitelist.bar.com
|
||||
> User-Agent: curl/7.43.0
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 403 Forbidden
|
||||
< Server: nginx/1.11.1
|
||||
< Date: Thu, 09 Jun 2016 21:56:17 GMT
|
||||
< Content-Type: text/html
|
||||
< Content-Length: 169
|
||||
< Connection: keep-alive
|
||||
<
|
||||
<html>
|
||||
<head><title>403 Forbidden</title></head>
|
||||
<body bgcolor="white">
|
||||
<center><h1>403 Forbidden</h1></center>
|
||||
<hr><center>nginx/1.11.1</center>
|
||||
</body>
|
||||
</html>
|
||||
* Connection #0 to host 172.17.4.99 left intact
|
||||
```
|
||||
|
||||
Removing the annotation removes the restriction:
|
||||
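
One way to remove it without editing the manifest is kubectl's annotation-removal syntax (a sketch):

```
$ kubectl annotate ingress whitelist ingress.kubernetes.io/whitelist-source-range-
```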
|
||||
```
|
||||
* Trying 172.17.4.99...
|
||||
* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0)
|
||||
> GET / HTTP/1.1
|
||||
> Host: whitelist.bar.com
|
||||
> User-Agent: curl/7.43.0
|
||||
> Accept: */*
|
||||
>
|
||||
< HTTP/1.1 200 OK
|
||||
< Server: nginx/1.11.1
|
||||
< Date: Thu, 09 Jun 2016 21:57:44 GMT
|
||||
< Content-Type: text/plain
|
||||
< Transfer-Encoding: chunked
|
||||
< Connection: keep-alive
|
||||
<
|
||||
CLIENT VALUES:
|
||||
client_address=10.2.89.7
|
||||
command=GET
|
||||
real path=/
|
||||
query=nil
|
||||
request_version=1.1
|
||||
request_uri=http://whitelist.bar.com:8080/
|
||||
|
||||
SERVER VALUES:
|
||||
server_version=nginx: 1.9.11 - lua: 10001
|
||||
|
||||
HEADERS RECEIVED:
|
||||
accept=*/*
|
||||
connection=close
|
||||
host=whitelist.bar.com
|
||||
user-agent=curl/7.43.0
|
||||
x-forwarded-for=10.2.89.1
|
||||
x-forwarded-host=whitelist.bar.com
|
||||
x-forwarded-port=80
|
||||
x-forwarded-proto=http
|
||||
x-real-ip=10.2.89.1
|
||||
BODY:
|
||||
* Connection #0 to host 172.17.4.99 left intact
|
||||
```
|
||||
|
Some files were not shown because too many files have changed in this diff.