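If you have Helm, you can deploy the ingress controller with a command along these lines (a sketch; the release name and namespace below are the ones the rest of this guide assumes):
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace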
It will install the controller in the ingress-nginx namespace, creating that namespace if it doesn't already exist.
Info
This command is idempotent: if the ingress controller is not installed, it will install it; if the ingress controller is already installed, it will upgrade it.
If you want a full list of values that you can set while installing with Helm, then run:
helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx
If you don't have Helm or if you prefer to use a YAML manifest, you can run the following command instead:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
Info
The YAML manifest in the command above was generated with helm template, so you will end up with almost the same resources as if you had used Helm to install the controller.
Attention
If you are running an old version of Kubernetes (1.18 or earlier), please read this paragraph for specific instructions. Because of API deprecations, the default manifest may not work on your cluster. Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider. Please feel free to use those subdirectories and get the manifest(s) related to your Kubernetes version.
Firewall configuration ¶
To check which ports are used by your installation of ingress-nginx, look at the output of kubectl -n ingress-nginx get pod -o yaml. In general, you need:
- Port 8443 open between all hosts on which the Kubernetes nodes are running. This is used for the ingress-nginx admission controller.
- Port 80 (for HTTP) and/or 443 (for HTTPS) open to the public on the Kubernetes nodes to which the DNS of your apps are pointing.
Pre-flight check ¶
A few pods should start in the ingress-nginx namespace:
kubectl get pods --namespace=ingress-nginx
After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready:
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=120s
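The ingress examples below route traffic to a Service named demo on port 80. If you don't already have such a backend, a minimal way to create one for this walkthrough (the httpd image here is just an assumption; its default page contains the "It works!" text referenced later) is:
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo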
Then create an ingress resource. The following example uses a host that maps to localhost:
:kubectl create ingress demo-localhost --class=nginx \
--rule="demo.localdev.me/*=demo:80"
Now, forward a local port to the ingress controller:
kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
Info
A note on DNS & network connectivity. This documentation assumes that the user is aware of the DNS and network-routing aspects involved in using ingress. The port-forwarding shown above is the easiest way to demo how ingress works. The "kubectl port-forward..." command forwards port 8080 on the local machine's TCP/IP stack, where the command was typed, to port 80 of the service created by the installation of the ingress-nginx controller. Traffic sent to port 8080 on localhost therefore reaches port 80 of the ingress controller's service. Port-forwarding is not meant for production use; here it is used to simulate an HTTP request originating from outside the cluster and reaching the service of the ingress-nginx controller, which is exposed to receive traffic from outside the cluster.
This issue shows a typical DNS problem and its solution.
At this point, you can access your deployment using curl:
curl --resolve demo.localdev.me:8080:127.0.0.1 http://demo.localdev.me:8080
You should see an HTML response containing text like "It works!".
If your Kubernetes cluster is a "real" cluster that supports services of type LoadBalancer
, it will have allocated an external IP address or FQDN to the ingress controller.
You can see that IP address or FQDN with the following command:
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
It will be the EXTERNAL-IP
field. If that field shows <pending>
, this means that your Kubernetes cluster wasn't able to provision the load balancer (generally, this is because it doesn't support services of type LoadBalancer
).
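If you want just the address for scripting, one way to extract it is with a JSONPath query (a sketch; depending on the provider the populated field is ip or hostname):
kubectl get service ingress-nginx-controller --namespace=ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'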
Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress resource. The following example assumes that you have set up a DNS record for www.demo.io
:
kubectl create ingress demo --class=nginx \
--rule="www.demo.io/*=demo:80"
Alternatively, the above command can be rewritten as follows, using a different form of the --rule flag:
kubectl create ingress demo --class=nginx \
--rule www.demo.io/=demo:80
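For reference, the imperative commands above produce an Ingress roughly equivalent to the following manifest (a sketch using the same host, class and backend):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx
  rules:
    - host: www.demo.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo
                port:
                  number: 80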
You should then be able to see the "It works!" page when you connect to http://www.demo.io/. Congratulations, you are serving a public website hosted on a Kubernetes cluster! 🎉
The ingress controller can be installed through minikube's addons system:
minikube addons enable ingress
The ingress controller can be installed through MicroK8s's addons system:
microk8s enable ingress
Please check the MicroK8s documentation page for details.
Kubernetes is available in Docker Desktop:
First, make sure that Kubernetes is enabled in the Docker settings. The command kubectl get nodes
should show a single node called docker-desktop
.
The ingress controller can be installed on Docker Desktop using the default quick start instructions.
On most systems, if you don't have any other service of type LoadBalancer
bound to port 80, the ingress controller will be assigned the EXTERNAL-IP
of localhost
, which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the kubectl port-forward
method described in the local testing section.
Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop.
Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use the Ingress-Nginx Controller in place of the default Traefik, disable Traefik from the Preferences > Kubernetes menu.
Once Traefik is disabled, the Ingress-Nginx Controller can be installed on Rancher Desktop using the default quick start instructions. Follow the instructions described in the local testing section to try a sample.
If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the externalTrafficPolicy
of the ingress controller Service to Local
(instead of the default Cluster
) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local
to the helm install
or helm upgrade
command.
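If you keep your Helm settings in a values file instead of --set flags, the equivalent entry would be (a sketch):
controller:
  service:
    externalTrafficPolicy: Local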
Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. --set controller.config.use-proxy-protocol=true
) and in the cloud provider's load balancer configuration to function correctly.
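If you install from the YAML manifest rather than Helm, the same setting goes into the controller ConfigMap; a sketch, assuming the ConfigMap name used by the default manifest:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"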
In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.
In AWS, we use a Network load balancer (NLB) to expose the Ingress-Nginx Controller behind a Service of Type=LoadBalancer
.
Info
The provided templates illustrate the setup for legacy in-tree service load balancer for AWS NLB. AWS provides the documentation on how to use Network load balancing on Amazon EKS with AWS Load Balancer Controller.
Helm
There have been too many attempts at using Helm to install the controller on AWS recently. The ingress-nginx-controller Helm chart is generic and not aimed at installing on AWS or any other infra provider. There are annotations and configurations that are applicable only to AWS. Please run helm template
and compare with the static YAML manifests you see below. A user is expected to use annotations for (a) internal/external, (b) type - nlb, (c) security-groups, (d) and other such requirements during their helm install
process.
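As an illustration only, such annotations are typically supplied through the chart's controller.service.annotations values. A sketch using the legacy in-tree NLB annotation (adapt internal/external, security groups and the rest to your environment):
controller:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb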
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/aws/deploy.yaml
By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB.
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml
Edit the file and change the VPC CIDR in use for the Kubernetes cluster:
proxy-real-ip-cidr: XXX.XXX.XXX/XX
Change the AWS Certificate Manager (ACM) ID as well:
arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX
Deploy the manifest:
kubectl apply -f deploy.yaml
The NLB idle timeout for TCP flows is 350 seconds and cannot be modified.
For this reason, you need to ensure that the keepalive_timeout value is configured to less than 350 seconds to work as expected.
By default, NGINX keepalive_timeout
is set to 75s
.
More information with regard to timeouts can be found in the official AWS documentation.
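If you need to change it explicitly, the controller exposes this NGINX setting through the keep-alive ConfigMap key; with Helm, a hedged example (the value 300 is arbitrary, chosen to stay below the 350-second limit):
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --set-string controller.config.keep-alive=300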
First, your user needs to have cluster-admin
permissions on the cluster. This can be done with the following command:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
Then, the ingress controller can be installed like this:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
Warning
For private clusters, you will need to either add a firewall rule that allows master nodes access to port 8443/tcp
on worker nodes, or change the existing rule that allows access to port 80/tcp
, 443/tcp
and 10254/tcp
to also allow access to port 8443/tcp
. More information can be found in the Official GCP Documentation.
See the GKE documentation on adding rules and the Kubernetes issue for more detail.
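For illustration, such a rule can be created with gcloud along these lines (a sketch; the rule name, control-plane CIDR and node tag are placeholders to be replaced with your cluster's values):
gcloud compute firewall-rules create allow-master-to-ingress-webhook \
  --source-ranges=<CONTROL_PLANE_IPV4_CIDR> \
  --target-tags=<NODE_NETWORK_TAG> \
  --allow=tcp:8443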
Proxy protocol is supported in GCE; check the official documentation on how to enable it.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
More information with regard to Azure annotations for ingress controller can be found in the official AKS documentation.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/do/deploy.yaml
The manifest above sets the annotation service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true". While this makes the service functional, it was reported that the DigitalOcean load balancer graphs show no data unless a few other annotations are also configured. Some of these other annotations require values that cannot be generic and hence are not forced in an out-of-the-box installation. These annotations, and a discussion of them, are well documented in this issue. Please refer to the issue to add annotations, with values specific to your setup, to get the graphs of the DO load balancer populated with data.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/scw/deploy.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml
The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager documentation.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation.
Ingress resources evolved over time. They started with apiVersion: extensions/v1beta1
, then moved to apiVersion: networking.k8s.io/v1beta1
and more recently to apiVersion: networking.k8s.io/v1
.
Here is how these Ingress versions are supported in Kubernetes:
- before Kubernetes 1.19, only v1beta1 Ingress resources are supported
- from Kubernetes 1.19 to 1.21, both v1beta1 and v1 Ingress resources are supported
- in Kubernetes 1.22 and above, only v1 Ingress resources are supported
And here is how these Ingress versions are supported in Ingress-Nginx Controller:
- before version 1.0, only v1beta1 Ingress resources are supported
- in version 1.0 and above, only v1 Ingress resources are supported
As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X of the Ingress-Nginx Controller (e.g. version 0.49).
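To check which Ingress API versions your own cluster serves, a quick sanity check is:
kubectl api-versions | grep networking.k8s.io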
The Helm chart of the Ingress-Nginx Controller switched to version 1 in version 4 of the chart. In other words, if you're running Kubernetes 1.18 or earlier, you should use version 3.X of the chart (this can be done by adding --version='<4'
to the helm install
command).
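For example, a pinned install could look like this (a sketch reusing the release and namespace names from the quick start above):
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --version '<4'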