diff --git a/deploy/index.html b/deploy/index.html
index 206ecd280..abd7b20f7 100644
--- a/deploy/index.html
+++ b/deploy/index.html
@@ -1,11 +1,23 @@
- Installation Guide - NGINX Ingress Controller

Installation Guide

Attention

The default configuration watches Ingress objects from all namespaces.

To change this behavior, use the flag --watch-namespace to limit the scope to a particular namespace.
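
For example, if you install with Helm, one way to pass that flag is through the chart's controller.extraArgs value. A sketch only: "my-apps" is a placeholder namespace, and the extraArgs key assumes the upstream chart layout.

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.extraArgs.watch-namespace=my-apps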

Warning

If multiple Ingresses define paths for the same host, the ingress controller merges the definitions.
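
For example, the two hypothetical Ingresses below both use the host demo.example.com but declare different paths; the controller combines them and routes each path to its own service (the Ingress names and services are placeholders):

kubectl create ingress app-one --class=nginx \
  --rule="demo.example.com/one*=service-one:80"

kubectl create ingress app-two --class=nginx \
  --rule="demo.example.com/two*=service-two:80"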

Danger

The admission webhook requires connectivity between Kubernetes API server and the ingress controller.

If there are network policies or additional firewalls in place, please allow access to port 8443.
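
If NetworkPolicy objects are in use, a minimal sketch of a policy that keeps the webhook port reachable could look like the following. The policy name is a placeholder, the label selector matches the controller labels used elsewhere in this guide, and you would still need whatever policies already admit regular HTTP/HTTPS traffic.

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-admission-webhook   # placeholder name
  namespace: ingress-nginx
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: controller
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 8443              # admission webhook port
EOF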

Attention

The first time the ingress controller starts, two Jobs create the SSL Certificate used by the admission webhook. For this reason, there is an initial delay of up to two minutes until it is possible to create and validate Ingress definitions.

You can wait until it is ready by running the following command:

 kubectl wait --namespace ingress-nginx \
-  --for=condition=ready pod \
-  --selector=app.kubernetes.io/component=controller \
-  --timeout=120s
-

Contents

Provider Specific Steps

Docker Desktop

Kubernetes is available in Docker Desktop.

Attention

Before running the command in your terminal, make sure Kubernetes is enabled in the Docker Desktop settings.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/cloud/deploy.yaml
-

minikube

For standard usage:

minikube addons enable ingress
-

microk8s

For standard usage:

microk8s enable ingress
-

Please check the microk8s documentation page

AWS

In AWS, we use a Network Load Balancer (NLB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer.

Info

The provided templates illustrate the setup for the legacy in-tree service load balancer for AWS NLB. AWS provides documentation on how to use Network load balancing on Amazon EKS with the AWS Load Balancer Controller.

Network Load Balancer (NLB)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/aws/deploy.yaml
+ Installation Guide - NGINX Ingress Controller      

Installation Guide

There are multiple ways to install the NGINX ingress controller:

  • with Helm, using the project repository chart;
  • with kubectl apply, using YAML manifests;
  • with specific addons (e.g. for minikube or MicroK8s).

On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to get started as fast as possible, you can check the quick start instructions. However, in many environments, you can improve the performance or get better logs by enabling extra features. We recommend that you check the environment-specific instructions for details about optimizing the ingress controller for your particular environment or cloud provider.

Contents

Quick start

You can deploy the ingress controller with the following command:

helm upgrade --install ingress-nginx ingress-nginx \
+  --repo https://kubernetes.github.io/ingress-nginx \
+  --namespace ingress-nginx --create-namespace
+

It will install the controller in the ingress-nginx namespace, creating that namespace if it doesn't already exist.

Info

This command is idempotent:

  • if the ingress controller is not installed, it will install it,
  • if the ingress controller is already installed, it will upgrade it.

This requires Helm version 3. If you prefer to use a YAML manifest, you can run the following command instead:

Attention

Before running the command in your terminal, make sure Kubernetes is enabled in the Docker Desktop settings.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/cloud/deploy.yaml
+

Info

The YAML manifest in the command above was generated with helm template, so you will end up with almost the same resources as if you had used Helm to install the controller.
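
If you want to inspect or customize that manifest yourself, you can render it locally with helm template (a sketch; the output file name is arbitrary):

helm template ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx > ingress-nginx.yaml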

If you are running an old version of Kubernetes (1.18 or earlier), please read this paragraph for specific instructions.

Pre-flight check

A few pods should start in the ingress-nginx namespace:

kubectl get pods --namespace=ingress-nginx
+

After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready:

kubectl wait --namespace ingress-nginx \
+  --for=condition=ready pod \
+  --selector=app.kubernetes.io/component=controller \
+  --timeout=120s
+

Local testing

Let's create a simple web server and the associated service:

kubectl create deployment demo --image=httpd --port=80
+kubectl expose deployment demo
+

Then create an ingress resource. The following example uses a host that maps to localhost:

kubectl create ingress demo-localhost --class=nginx \
+  --rule=demo.localdev.me/*=demo:80
+

Now, forward a local port to the ingress controller:

kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
+

At this point, if you access http://demo.localdev.me:8080/, you should see an HTML page telling you "It works!".
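
You can also check it from the command line while the port-forward above is running:

curl http://demo.localdev.me:8080/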

Online testing

If your Kubernetes cluster is a "real" cluster that supports services of type LoadBalancer, it will have allocated an external IP address or FQDN to the ingress controller.

You can see that IP address or FQDN with the following command:

kubectl get service ingress-nginx-controller --namespace=ingress-nginx
+

Set up a DNS record pointing to that IP address or FQDN; then create an ingress resource. The following example assumes that you have set up a DNS record for www.demo.io:

kubectl create ingress demo --class=nginx \
+  --rule=www.demo.io/*=demo:80
+

You should then be able to see the "It works!" page when you connect to http://www.demo.io/. Congratulations, you are serving a public web site hosted on a Kubernetes cluster! 🎉
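
If the DNS record has not propagated yet, you can still test the ingress by pointing curl directly at the load balancer. The address 203.0.113.10 is only a placeholder; substitute the EXTERNAL-IP or FQDN reported by the kubectl get service command above.

curl --resolve www.demo.io:80:203.0.113.10 http://www.demo.io/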

Environment-specific instructions

Local development clusters

minikube

The ingress controller can be installed through minikube's addons system:

minikube addons enable ingress
+

MicroK8s

The ingress controller can be installed through MicroK8s's addons system:

microk8s enable ingress
+

Please check the MicroK8s documentation page for details.

Docker Desktop

Kubernetes is available in Docker Desktop:

The ingress controller can be installed on Docker Desktop using the default quick start instructions.

On most systems, if you don't have any other service of type LoadBalancer bound to port 80, the ingress controller will be assigned the EXTERNAL-IP of localhost, which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the kubectl port-forward method described in the local testing section.

Cloud deployments

If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the externalTrafficPolicy of the ingress controller Service to Local (instead of the default Cluster) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local to the helm install or helm upgrade command.

Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. --set controller.config.use-proxy-protocol=true) and in the cloud provider's load balancer configuration to function correctly.
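
Put together, a Helm installation that enables both options could look like this sketch (whether the PROXY protocol actually works still depends on your provider's load balancer configuration):

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.externalTrafficPolicy=Local \
  --set controller.config.use-proxy-protocol=true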

In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.

AWS

In AWS, we use a Network Load Balancer (NLB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer.

Info

The provided templates illustrate the setup for the legacy in-tree service load balancer for AWS NLB. AWS provides documentation on how to use Network load balancing on Amazon EKS with the AWS Load Balancer Controller.

Network Load Balancer (NLB)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/aws/deploy.yaml
 
TLS termination in AWS Load Balancer (NLB)

In some scenarios it is required to terminate TLS in the load balancer and not in the ingress controller.

For this purpose, we provide a template:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/aws/deploy-tls-termination.yaml
 
  • Edit the file and change:

  • VPC CIDR in use for the Kubernetes cluster:

proxy-real-ip-cidr: XXX.XXX.XXX/XX

  • AWS Certificate Manager (ACM) ID

arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX

  • Deploy the manifest:
kubectl apply -f deploy-tls-termination.yaml
 
NLB Idle Timeouts

The idle timeout value for TCP flows is 350 seconds and cannot be modified.

For this reason, you need to ensure the keepalive_timeout value is configured to less than 350 seconds for connections to work as expected.

By default NGINX keepalive_timeout is set to 75s.
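
With Helm, one way to lower it is through the controller ConfigMap. This is a sketch that assumes your controller version supports the keep-alive configuration key (which maps to keepalive_timeout); 300 is just an example value below the 350-second limit.

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set-string controller.config.keep-alive=300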

More information regarding timeouts can be found in the official AWS documentation.

GCE-GKE

Info

Initialize your user as a cluster-admin with the following command:

kubectl create clusterrolebinding cluster-admin-binding \
@@ -17,20 +29,15 @@
 

Scaleway

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/scw/deploy.yaml
 

Exoscale

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml
 

The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager documentation.

Oracle Cloud Infrastructure

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/cloud/deploy.yaml
-

A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation.

Bare-metal

Using NodePort:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/baremetal/deploy.yaml
-

Tip

Applicable to Kubernetes clusters deployed on bare metal with a generic Linux distro (such as CentOS, Ubuntu, ...).

Info

For extended notes regarding deployments on bare-metal, see Bare-metal considerations.

Verify installation

To check if the ingress controller pods have started, run the following command:

kubectl get pods -n ingress-nginx \
-  -l app.kubernetes.io/name=ingress-nginx --watch
-

Once the ingress controller pods are running, you can cancel the command by typing Ctrl+C.

Now, you are ready to create your first ingress.

Detect installed version

To detect which version of the ingress controller is running, exec into the pod and run nginx-ingress-controller --version.

POD_NAMESPACE=ingress-nginx
-POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o jsonpath='{.items[0].metadata.name}')
-
-kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
-

Using Helm

Attention

Only Helm v3 is supported
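
You can check which version you have before proceeding:

helm version --short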

The NGINX Ingress controller can be installed via Helm using the chart from the project repository. To install the chart with the release name ingress-nginx:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
-helm repo update
-
-helm install ingress-nginx ingress-nginx/ingress-nginx
-

For multiple NGINX Ingress controllers
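
A second controller instance is typically installed under a different release name with its own IngressClass. The following is only a sketch, assuming your chart version exposes the controller.ingressClassResource values; all names below are placeholders.

helm install ingress-nginx-internal ingress-nginx/ingress-nginx \
  --namespace ingress-nginx-internal --create-namespace \
  --set controller.ingressClassResource.name=internal-nginx \
  --set controller.ingressClassResource.controllerValue="example.com/internal-ingress-nginx"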

Detect installed version:

POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
-kubectl exec -it $POD_NAME -- /nginx-ingress-controller --version
-