If you have Helm, you can deploy the ingress controller with the following command:
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
It will install the controller in the ingress-nginx namespace, creating that namespace if it doesn't already exist.
Info
This command is idempotent: if the ingress controller is not installed, it will install it; if it is already installed, it will upgrade it.
If you don't have Helm or if you prefer to use a YAML manifest, you can run the following command instead:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/cloud/deploy.yaml
Info
The YAML manifest in the command above was generated with helm template, so you will end up with almost the same resources as if you had used Helm to install the controller.
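If you are curious how such a manifest is produced, a roughly equivalent invocation of helm template looks like this (the redirect to deploy.yaml is only illustrative; the published manifest is generated with additional provider-specific values):
helm template ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx > deploy.yaml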
Attention
If you are running an old version of Kubernetes (1.18 or earlier), please read this paragraph for specific instructions. Because of API deprecations, the default manifest may not work on your cluster. Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider.
A few pods should start in the ingress-nginx namespace:
kubectl get pods --namespace=ingress-nginx
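The output should look roughly like the following (pod names, ages and the exact set of pods will vary):
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-xxxxx        0/1     Completed   0          2m
ingress-nginx-admission-patch-xxxxx         0/1     Completed   0          2m
ingress-nginx-controller-xxxxxxxxxx-xxxxx   1/1     Running     0          2m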
After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready:
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=120s
Assuming you have created a demo Deployment and Service (e.g. an httpd pod exposed on port 80) and pointed a DNS record for www.demo.io at the ingress controller's external address, you can then create an Ingress for it:
kubectl create ingress demo --class=nginx \
--rule="www.demo.io/*=demo:80"
You should then be able to see the "It works!" page when you connect to http://www.demo.io/. Congratulations, you are serving a public website hosted on a Kubernetes cluster! 🎉
The ingress controller can be installed through minikube's addons system:
minikube addons enable ingress
The ingress controller can be installed through MicroK8s's addons system:
microk8s enable ingress
Please check the MicroK8s documentation page for details.
Kubernetes is available in Docker Desktop:
First, make sure that Kubernetes is enabled in the Docker settings. The command kubectl get nodes should show a single node called docker-desktop.
The ingress controller can be installed on Docker Desktop using the default quick start instructions.
On most systems, if you don't have any other service of type LoadBalancer bound to port 80, the ingress controller will be assigned the EXTERNAL-IP of localhost, which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the kubectl port-forward method described in the local testing section.
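For reference, that fallback looks roughly like this (the service name assumes a default installation), after which the controller is reachable on http://localhost:8080:
kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80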
Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop.
Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use the NGINX ingress controller in place of the default Traefik, disable Traefik from the Preferences > Kubernetes menu.
Once Traefik is disabled, the NGINX ingress controller can be installed on Rancher Desktop using the default quick start instructions. Follow the instructions described in the local testing section to try a sample.
If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the externalTrafficPolicy of the ingress controller Service to Local (instead of the default Cluster) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local to the helm install or helm upgrade command.
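For example, the quick start Helm command shown earlier would become:
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace \
--set controller.service.externalTrafficPolicy=Local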
Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. --set controller.config.use-proxy-protocol=true) and in the cloud provider's load balancer configuration to function correctly.
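Expressed as Helm values instead of --set flags, the two options above look roughly like this (the cloud load balancer side still needs its own provider-specific configuration):
controller:
  service:
    externalTrafficPolicy: Local
  config:
    use-proxy-protocol: "true"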
In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.
In AWS, we use a Network load balancer (NLB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer.
Info
The provided templates illustrate the setup for the legacy in-tree service load balancer for AWS NLB. AWS provides documentation on how to use Network load balancing on Amazon EKS with the AWS Load Balancer Controller.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/aws/deploy.yaml
By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB.
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml
Edit the file and change the VPC CIDR in use for the Kubernetes cluster:
proxy-real-ip-cidr: XXX.XXX.XXX/XX
Change the AWS Certificate Manager (ACM) ID as well:
arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX
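In the downloaded deploy.yaml, these placeholders live (roughly) in the controller ConfigMap and in the controller Service annotations, along these lines:
# in the ingress-nginx-controller ConfigMap
proxy-real-ip-cidr: XXX.XXX.XXX/XX
# in the ingress-nginx-controller Service annotations
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX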
Deploy the manifest:
kubectl apply -f deploy.yaml
The idle timeout value for TCP flows is 350 seconds and cannot be modified. For this reason, you need to ensure that the keepalive_timeout value is configured to less than 350 seconds to work as expected. By default, NGINX keepalive_timeout is set to 75s. More information with regard to timeouts can be found in the official AWS documentation.
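If you ever need to adjust it, the keepalive timeout can be set through the controller ConfigMap; one way to do this with Helm (assuming the keep-alive ConfigMap option; any value below 350 seconds, e.g. 300) is:
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace \
--set controller.config.keep-alive=300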
First, your user needs to have cluster-admin permissions on the cluster. This can be done with the following command:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user $(gcloud config get-value account)
Then, the ingress controller can be installed like this:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/cloud/deploy.yaml
Warning
For private clusters, you will need to either add a firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to ports 80/tcp, 443/tcp and 10254/tcp to also allow access to port 8443/tcp. More information can be found in the Official GCP Documentation.
See the GKE documentation on adding rules and the Kubernetes issue for more detail.
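A hypothetical gcloud command for the first option might look like this (the rule name, network, node tag and master CIDR are placeholders you need to adapt to your cluster):
gcloud compute firewall-rules create allow-master-to-ingress-webhook \
--network my-cluster-network \
--source-ranges MASTER_IPV4_CIDR_BLOCK \
--target-tags my-cluster-node-tag \
--allow tcp:8443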
Proxy protocol is supported in GCE; check the official documentation on how to enable it.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/cloud/deploy.yaml
More information with regard to Azure annotations for ingress controller can be found in the official AKS documentation.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/do/deploy.yaml
By default, the DigitalOcean deploy manifest only configures one annotation on the controller Service: service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true". While this makes the service functional, it was reported that the DigitalOcean load balancer graphs show no data unless a few other annotations are also configured. Some of these other annotations require values that cannot be generic and hence are not forced in an out-of-the-box installation. These annotations, and a discussion of them, are well documented in this issue. Please refer to the issue to add annotations, with values specific to your setup, to get the load balancer graphs populated with data.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/scw/deploy.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml
The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager documentation.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/cloud/deploy.yaml
A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --create-namespace
You can find the complete tutorial in the OVHcloud documentation.
This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as "raw" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu...)
For quick testing, you can use a NodePort. This should work on almost every cluster, but it will typically use a port in the range 30000-32767.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/baremetal/deploy.yaml
For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see bare-metal considerations.
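To see which NodePorts were allocated, inspect the controller Service (the service name assumes a default installation); the allocated ports appear in the PORT(S) column of the output:
kubectl get service ingress-nginx-controller --namespace=ingress-nginx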
Run /nginx-ingress-controller --version within the pod, for instance with kubectl exec:
POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name)
kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=120s
Ingress resources evolved over time. They started with apiVersion: extensions/v1beta1, then moved to apiVersion: networking.k8s.io/v1beta1 and more recently to apiVersion: networking.k8s.io/v1.
Here is how these Ingress versions are supported in Kubernetes:
- before Kubernetes 1.19, only v1beta1 Ingress resources are supported
- from Kubernetes 1.19 to 1.21, both v1beta1 and v1 Ingress resources are supported
- in Kubernetes 1.22 and above, only v1 Ingress resources are supported
And here is how these Ingress versions are supported in NGINX Ingress Controller:
- before version 1.0, only v1beta1 Ingress resources are supported
- in version 1.0 and above, only v1 Ingress resources are supported
As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X of the NGINX Ingress Controller (e.g. version 0.49).
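For reference, a minimal Ingress resource in the current networking.k8s.io/v1 schema looks roughly like this (the name, host and backend service are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx
  rules:
  - host: www.demo.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo
            port:
              number: 80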
The Helm chart of the NGINX Ingress Controller switched to version 1 in version 4 of the chart. In other words, if you're running Kubernetes 1.19 or earlier, you should use version 3.X of the chart (this can be done by adding --version='<4' to the helm install command).
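Putting that together, installing the older chart on such a cluster might look roughly like this:
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace \
--version='<4'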