2. Keep it in your own repo and make sure it passes the [conformance suite](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/ingress_utils.go#L112)
3. Submit example(s) in the appropriate subdirectories [here](/examples/README.md)
4. Add it to the catalog
## Is there a catalog of existing Ingress controllers?
Yes, a non-comprehensive [catalog](/docs/catalog.md) exists.
## How are the Ingress controllers tested?
Testing for the Ingress controllers is divided between:
* Ingress repo: unit tests and pre-submit integration tests run via Travis

The configuration for the Jenkins e2e tests is located [here](https://github.com/kubernetes/test-infra).
The Ingress E2Es are located [here](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/ingress.go);
each controller added to that suite must consistently pass the [conformance suite](https://github.com/kubernetes/kubernetes/blob/master/test/e2e/ingress_utils.go#L112).
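If you want to run just those tests against a cluster you already brought up, the sketch
below is one way to do it. The flags and the focus regex are assumptions that vary by
Kubernetes version; match the regex to the actual test names in ingress.go.

```shell
# From the kubernetes/kubernetes repo root, against a running cluster.
# The focus regex is an assumption; adjust it to the Ingress e2e test names.
go run hack/e2e.go -v --test --test_args="--ginkgo.focus=Ingress"
```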
## An Ingress controller E2E is failing, what should I do?
First, identify the reason for failure.
* Look at the build log; if there's nothing obvious, check for quota issues:
  * Find events logged by the controller in the build log
  * Ctrl+f "quota" in the build log
* If the failure is in the GCE controller:
  * Navigate to the test artifacts for that run and look at glbc.log, [eg](http://gcsweb.k8s.io/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-ingress-release-1.5/1234/artifacts/bootstrap-e2e-master/)
  * Look up the `PROJECT=` line in the build log and check that project for
    quota issues (`gcloud compute project-info describe --project <project-name>`,
    or navigate to the cloud console > compute > quotas); see the quota sketch
    after this list
* If the failure is for a non-cloud controller (eg: nginx):
  * Make sure the firewall rules required by the controller are open on the
    right ports (80/443), since the Jenkins builders run *outside* the
    Kubernetes cluster; see the firewall sketch after this list
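To check quota, a rough sequence is shown below; the project name is whatever the
`PROJECT=` line in the build log resolves to, and the log filename and region are
placeholders.

```shell
# Find the GCE project the run used, then inspect its quotas.
grep "PROJECT=" build-log.txt
gcloud compute project-info describe --project <project-name>
# Regional quotas (addresses, in-use IPs, etc.) live on the region itself.
gcloud compute regions describe us-central1 --project <project-name>
```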
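For the non-cloud controller case on a GCE-backed test cluster, opening 80/443 to the
Jenkins builders might look like the sketch below; the rule name, target tag, and
project are placeholders, so adjust them to your cluster.

```shell
# Allow external traffic to reach the controller's HTTP/HTTPS ports.
gcloud compute firewall-rules create allow-ingress-controller \
  --allow tcp:80,tcp:443 \
  --source-ranges 0.0.0.0/0 \
  --target-tags <node-tag> \
  --project <project-name>
```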
Note that you currently need help from a test-infra maintainer to access the GCE
test project. If you think the failures are related to project quota, clean up
leaked resources and bump up quota before debugging the leak.
If the preceding identification process fails, it's likely that the Ingress API
is broken upstream. Try to set up a [dev environment](/docs/dev/setup.md) from
HEAD and create an Ingress. You should be deploying the [latest](https://github.com/kubernetes/ingress/releases)
release image to the local cluster.
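A minimal smoke test of the API might look like the following; the backend Service
name and port are assumptions, so point them at any Service that already exists in
the cluster.

```shell
# Create a trivial Ingress and confirm the API server accepts it and the
# controller assigns it an address. Assumes a Service "echoheaders" on port 80.
kubectl create -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api-smoke-test
spec:
  backend:
    serviceName: echoheaders
    servicePort: 80
EOF
kubectl get ing api-smoke-test -o wide
```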
If neither of these two strategies produces anything useful, you can either start
reverting images or start digging into the underlying infrastructure the e2es are
running on for more nefarious issues (like permission and scope changes for
some set of nodes on which an Ingress controller is running).
## Is there a roadmap for Ingress features?
The community is working on it. There are currently too many efforts in flight
to serialize into a flat roadmap. You might be interested in the following issues: