This page contains a general FAQ for the GCE Ingress controller.
Table of Contents
=================
* [How do I deploy an Ingress controller?](#how-do-i-deploy-an-ingress-controller)
* [I created an Ingress and nothing happens, now what?](#i-created-an-ingress-and-nothing-happens-now-what)
* [What are the cloud resources created for a single Ingress?](#what-are-the-cloud-resources-created-for-a-single-ingress)
* [The Ingress controller events complain about quota, how do I increase it?](#the-ingress-controller-events-complain-about-quota-how-do-i-increase-it)
* [Why does the Ingress need a different instance group than the GKE cluster?](#why-does-the-ingress-need-a-different-instance-group-then-the-gke-cluster)
* [Why does the cloud console show 0/N healthy instances?](#why-does-the-cloud-console-show-0n-healthy-instances)
* [Can I configure GCE health checks through the Ingress?](#can-i-configure-gce-health-checks-through-the-ingress)
* [Why does my Ingress have an ephemeral ip?](#why-does-my-ingress-have-an-ephemeral-ip)
* [Can I pre-allocate a static-ip?](#can-i-pre-allocate-a-static-ip)
* [Does updating a Kubernetes secret update the GCE TLS certs?](#does-updating-a-kubernetes-secrete-update-the-gce-tls-certs)
* [Can I tune the loadbalancing algorithm?](#can-i-tune-the-loadbalancing-algorithm)
* [Is there a maximum number of Endpoints I can add to the Ingress?](#is-there-a-maximum-number-of-endpoints-i-can-add-to-the-ingress)
* [How do I match GCE resources to Kubernetes Services?](#how-do-i-match-gce-resources-to-kubernetes-services)
* [Can I change the cluster UID?](#can-i-change-the-cluster-uid)
* [Why do I need a default backend?](#why-do-i-need-a-default-backend)
* [How does Ingress work across 2 GCE clusters?](#how-does-ingress-work-across-2-gce-clusters)
* [I shutdown a cluster without deleting all Ingresses, how do I manually cleanup?](#i-shutdown-a-cluster-without-deleting-all-ingresses-how-do-i-manually-cleanup)
* [How do I disable the GCE Ingress controller?](#how-do-i-disable-the-gce-ingress-controller)
* [What GCE resources are shared between Ingresses?](#what-gce-resources-are-shared-between-ingresses)
* [How do I debug a controller spinloop?](#how-do-i-debug-a-controller-spinloop)
* [Creating an Internal Load Balancer without existing ingress](#creating-an-internal-load-balancer-without-existing-ingress)
## I created an Ingress and nothing happens, now what?
Please check the following:
1. Output of `kubectl describe`, as shown [here](README.md#i-created-an-ingress-and-nothing-happens-what-now)
2. Do your Services all have a `NodePort`?
3. Do your Services either serve an HTTP 200 on `/`, or have a readiness probe
as described in [this section](#can-i-configure-gce-health-checks-through-the-ingress)?
4. Do you have enough GCP quota?
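For the first two checks, commands along these lines can help (`<ingress-name>` and `<service-name>` are placeholders):
```shell
# Controller events for the Ingress show up in the describe output.
kubectl describe ing <ingress-name>
# Each backend Service needs a NodePort allocated.
kubectl get svc <service-name> -o jsonpath='{.spec.type} {.spec.ports[*].nodePort}'
```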
## What are the cloud resources created for a single Ingress?
__Terminology:__
* [Global Forwarding Rule](https://cloud.google.com/compute/docs/load-balancing/http/global-forwarding-rules): Manages the Ingress VIP
* [TargetHttpProxy](https://cloud.google.com/compute/docs/load-balancing/http/target-proxies): Manages SSL certs and proxies between the VIP and backend
* [Backend Service](https://cloud.google.com/compute/docs/load-balancing/http/backend-service): Bridges various Instance Groups on a given Service NodePort
* [Instance Group](https://cloud.google.com/compute/docs/instance-groups/): Collection of Kubernetes nodes
The pipeline is as follows:
```
Global Forwarding Rule -> TargetHTTPProxy
| \ Instance Group (us-east1)
Static IP URL Map - Backend Service(s) - Instance Group (us-central1)
| / ...
Global Forwarding Rule -> TargetHTTPSProxy
ssl cert
```
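If you want to inspect these pieces directly, the corresponding `gcloud` list commands are a quick way to walk the pipeline top-down (a sketch only; add `--filter` expressions to narrow the output to one Ingress):
```shell
gcloud compute forwarding-rules list      # VIPs (static or ephemeral)
gcloud compute target-http-proxies list
gcloud compute target-https-proxies list
gcloud compute url-maps list
gcloud compute backend-services list
gcloud compute instance-groups list       # one per zone containing nodes
```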
In addition to this pipeline:
* Each Backend Service requires an HTTP health check to the NodePort of the
Service
* Each port on the Backend Service has a matching port on the Instance Group
* Each port on the Backend Service is exposed through a firewall rule that allows
health check traffic from the range of GCE loadbalancer IPs to the Service NodePort
## How do I match GCE resources to Kubernetes Services?
Resources in the cloud are named `k8s-<resource-name>-<nodeport>--<cluster-uid>`,
where `nodeport` is the `NodePort` of the Kubernetes Service backing the resource,
`cluster-uid` is the cluster UID shown by:
```console
$ kubectl get configmap -o yaml --namespace=kube-system | grep -i " data:" -A 1
 data:
   uid: cad4ee813812f808
```
and `resource-name` is a short prefix for one of the resources mentioned [here](#what-are-the-cloud-resources-created-for-a-single-ingress)
(eg: `be` for backends, `hc` for health checks). If a given resource is not tied
to a single `node-port`, its name will not include one.
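For example, to list what the controller created for your cluster, you can filter by the UID (a sketch, assuming the UID lives in the `ingress-uid` configmap in `kube-system`, as shown in the next answer):
```shell
UID=$(kubectl --namespace=kube-system get configmap ingress-uid -o jsonpath='{.data.uid}')
gcloud compute backend-services list --filter="name ~ ${UID}"
gcloud compute http-health-checks list --filter="name ~ ${UID}"
gcloud compute instance-groups list --filter="name ~ ${UID}"
```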
## Can I change the cluster UID?
The Ingress controller suffixes the names of the cloud resources it creates with a UID, which it stores in a configmap in the `kube-system` namespace.
```console
$ kubectl --namespace=kube-system get configmaps
NAME          DATA      AGE
ingress-uid   1         12d
$ kubectl --namespace=kube-system get configmaps -o yaml
apiVersion: v1
items:
- apiVersion: v1
data:
uid: UID
kind: ConfigMap
...
```
You can pick a different UID, but this requires you to:
1. Delete existing Ingresses
2. Edit the configmap using `kubectl edit`
3. Recreate the same Ingress
After step 3 the Ingress should come up using the new UID as the suffix of all cloud resources. You can't simply change the UID if you have existing Ingresses, because
renaming a cloud resource requires a delete/create cycle that the Ingress controller does not currently automate. Note that the UID in step 1 might be an empty string,
if you had a working Ingress before upgrading to Kubernetes 1.3.
__A note on setting the UID__: The Ingress controller uses the token `--` to split a machine generated prefix from the UID itself. If the user supplied UID is found to
contain `--` the controller will take the token after the last `--`, and use an empty string if it ends with `--`. For example, if you insert `foo--bar` as the UID,
the controller will assume `bar` is the UID. You can either edit the configmap and set the UID to `bar` to match the controller, or delete existing Ingresses as described
above, and reset it to a string bereft of `--`.
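As a sketch of step 2, you can also patch the value directly (assuming no Ingresses exist at this point, per step 1; `newuid` is a placeholder):
```shell
# Set the uid key to a value that does not contain '--'.
kubectl --namespace=kube-system patch configmap ingress-uid --type merge -p '{"data":{"uid":"newuid"}}'
```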
## Why do I need a default backend?
All GCE URL maps require at least one [default backend](https://cloud.google.com/compute/docs/load-balancing/http/url-map#url_map_simplest_case), which handles all
requests that don't match a host/path. In Ingress, the default backend is
optional, since the resource is cross-platform and not all platforms require
a default backend. If you don't specify one in your yaml, the GCE ingress
controller will inject the default-http-backend Service that runs in the
`kube-system` namespace as the default backend for the GCE HTTP lb allocated for that Ingress resource.
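You can check that this fallback Service is present and has endpoints (the name below is the conventional one on GCE/GKE clusters and may differ in your setup):
```shell
kubectl --namespace=kube-system get svc default-http-backend
kubectl --namespace=kube-system get ep default-http-backend
```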
## What GCE resources are shared between Ingresses?
Every Ingress creates a pipeline of GCE cloud resources behind an IP. Some of
these are shared between Ingresses out of necessity, while some are shared
because there was no perceived need for duplication (all resources consume
quota and usually cost money).
Shared:
* Backend Services: because of low quota and high reuse. A single Service in a
Kubernetes cluster has one NodePort, common throughout the cluster. GCE has
a hard limit on the number of allowed BackendServices, so multiple Ingresses
pointing to the same Service share a single BackendService in GCE that targets
that Service's NodePort.
* Instance Group: since an instance can only be part of a single loadbalanced
Instance Group, these must be shared. There is 1 Ingress Instance Group per
zone containing Kubernetes nodes.
* HTTP Health Checks: currently the http health checks point at the NodePort
of a BackendService. They don't *need* to be shared, but they are since
BackendServices are shared.
* Firewall rule: In a non-federated cluster there is a single firewall rule
that covers HTTP health check traffic from the range of [GCE loadbalancer IPs](https://cloud.google.com/compute/docs/load-balancing/http/#troubleshooting)
to Service nodePorts.
Unique:
Currently, a single Ingress on GCE creates a unique IP and url map. In this
model the following resources cannot be shared:
* Url Map
* Target HTTP(S) Proxies
* SSL Certificates
* Static-ip
* Forwarding rules
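To see the shared pieces for your cluster, something like the following works; the `k8s-fw` and `k8s-ig` name prefixes reflect the naming convention described above, but treat them as assumptions about your controller version:
```shell
gcloud compute firewall-rules list --filter="name ~ k8s-fw"
gcloud compute instance-groups list --filter="name ~ k8s-ig"
```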
## How do I debug a controller spinloop?
The most likely cause of a controller spin loop is some form of GCE validation
failure, eg:
* It's trying to delete a BackendService already in use, say in a UrlMap
* It's trying to add an Instance to more than 1 loadbalanced InstanceGroup
* It's trying to flip the loadbalancing algorithm on a BackendService to RATE,
when some other BackendService is pointing at the same InstanceGroup and asking
for UTILIZATION
In all such cases, the offending key (the Ingress namespace/name) is continuously
requeued by the work queue into exponential backoff. However, the Informers that
watch the Kubernetes API are currently set up to resync periodically, so even
though a particular key is in backoff, all other keys might be synced every,
say, 10m, which can trigger the same validation error again.
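To find the offending key, check the Ingress events and the controller logs. The label selector below is an assumption; adjust it to however the controller pod is labeled in your cluster:
```shell
kubectl describe ing <ingress-name>
kubectl --namespace=kube-system logs -l k8s-app=glbc --tail=100
```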
## Creating an Internal Load Balancer without existing ingress
**How the GCE Ingress Controller Works**
To assemble an L7 Load Balancer, the ingress controller creates an [unmanaged instance-group](https://cloud.google.com/compute/docs/instance-groups/creating-groups-of-unmanaged-instances) named `k8s-ig--{UID}` and adds every known non-master node to the group. For every service specified in all ingresses, a backend service is created to point to that instance group.
**How the Internal Load Balancer Works**
K8s does not yet assemble ILBs for you, but you can manually create one via the GCP Console. The ILB is composed of a regional forwarding rule and a regional backend service. Similar to the L7 LB, the backend service points to an unmanaged instance group containing your K8s nodes.
**The Complication**
GCP will only allow one load balanced unmanaged instance-group for a given instance.
If you manually created an instance group named something like `my-kubernetes-group` containing all your nodes and put an ILB in front of it, you will probably encounter a GCP error when setting up an ingress resource. The controller doesn't know to use your `my-kubernetes-group` group and will create its own. Unfortunately, it won't be able to add any nodes to that group because they already belong to the ILB group.
As mentioned before, the instance group name is composed of a hard-coded prefix `k8s-ig--` and a cluster-specific UID. The ingress controller will check the K8s configmap for an existing UID value at process start. If it doesn't exist, the controller will create one randomly and update the configmap.
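To check for this conflict before creating an Ingress, you can list the unmanaged instance groups your nodes already belong to (`<group-name>` and `<zone>` are placeholders):
```shell
gcloud compute instance-groups unmanaged list
gcloud compute instance-groups unmanaged list-instances <group-name> --zone <zone>
```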
#### Solutions
**Want an ILB and Ingress?**
If you plan on creating both ingresses and internal load balancers, simply create the ingress resource first, then use the GCP Console to create an ILB pointing to the existing instance group.
**Want just an ILB for now, ingress maybe later?**
Retrieve the UID via configmap, create an instance-group per used zone, then add all respective nodes to the group.
```shell
# Fetch instance group name from config map
GROUPNAME=`kubectl get configmaps ingress-uid -o jsonpath='k8s-ig--{.data.uid}' --namespace=kube-system`
# Create an instance group in every zone where you have nodes. If you use GKE, this is probably a single zone.