added info for aws helm install (#11390)

@@ -6,10 +6,10 @@ There are multiple ways to install the Ingress-Nginx Controller:
- with `kubectl apply`, using YAML manifests;
- with specific addons (e.g. for [minikube](#minikube) or [MicroK8s](#microk8s)).

On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to
get started as fast as possible, you can check the [quick start](#quick-start) instructions. However, in many
environments, you can improve the performance or get better logs by enabling extra features. We recommend that you
check the [environment-specific instructions](#environment-specific-instructions) for details about optimizing the
ingress controller for your particular environment or cloud provider.

## Contents

@@ -72,7 +72,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/cont
```

!!! info
    The YAML manifest in the command above was generated with `helm template`, so you will end up with almost the same
    resources as if you had used Helm to install the controller.

!!! attention

@@ -83,6 +83,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/cont

### Firewall configuration

To check which ports are used by your installation of ingress-nginx, look at the output of `kubectl -n ingress-nginx get pod -o yaml`. In general, you need:

- Port 8443 open between all hosts on which the Kubernetes nodes are running. This is used for the ingress-nginx [admission controller](https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/).
- Port 80 (for HTTP) and/or 443 (for HTTPS) open to the public on the Kubernetes nodes to which the DNS records of your apps point.

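For example, you can list the ports declared by the pods (a sketch; the pod names and ports depend on your installation):

```console
# Print each ingress-nginx pod with its declared container ports
kubectl -n ingress-nginx get pods \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].ports[*].containerPort}{"\n"}{end}'
```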

@@ -94,7 +95,7 @@ A few pods should start in the `ingress-nginx` namespace:

```console
kubectl get pods --namespace=ingress-nginx
```

After a while, they should all be running. The following command will wait for the ingress controller pod to be up,
running, and ready:

```console

@@ -104,7 +105,6 @@ kubectl wait --namespace ingress-nginx \
--timeout=120s
```

### Local testing

Let's create a simple web server and the associated service:

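A minimal way to do that (a sketch, assuming the `demo` names used in the rest of this section):

```console
# Create a deployment running the Apache httpd image and expose it as a service
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo
```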

@@ -135,6 +135,7 @@ kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller

[This issue](https://github.com/kubernetes/ingress-nginx/issues/10014#issuecomment-1567791549) shows a typical DNS problem and its solution.

At this point, you can access your deployment using curl:

```console
curl --resolve demo.localdev.me:8080:127.0.0.1 http://demo.localdev.me:8080
```

@@ -143,7 +144,7 @@ You should see an HTML response containing text like **"It works!"**.

### Online testing

If your Kubernetes cluster is a "real" cluster that supports services of type `LoadBalancer`, it will have allocated an
external IP address or FQDN to the ingress controller.

You can see that IP address or FQDN with the following command:

@@ -152,10 +153,10 @@ You can see that IP address or FQDN with the following command:

```console
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
```

It will be the `EXTERNAL-IP` field. If that field shows `<pending>`, this means that your Kubernetes cluster wasn't
able to provision the load balancer (generally, this is because it doesn't support services of type `LoadBalancer`).

Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress
resource. The following example assumes that you have set up a DNS record for `www.demo.io`:

```console

@@ -164,13 +165,13 @@ kubectl create ingress demo --class=nginx \
```

Alternatively, the above command can be rewritten as follows, with the `--rule` flag on a separate line:

```console
kubectl create ingress demo --class=nginx \
  --rule www.demo.io/=demo:80
```

You should then be able to see the "It works!" page when you connect to <http://www.demo.io/>. Congratulations,
you are serving a public website hosted on a Kubernetes cluster! 🎉

## Environment-specific instructions

@@ -202,19 +203,19 @@ Kubernetes is available in Docker Desktop:

- Mac, from [version 18.06.0-ce](https://docs.docker.com/docker-for-mac/release-notes/#stable-releases-of-2018)
- Windows, from [version 18.06.0-ce](https://docs.docker.com/docker-for-windows/release-notes/#docker-community-edition-18060-ce-win70-2018-07-25)

First, make sure that Kubernetes is enabled in the Docker settings. The command `kubectl get nodes` should show a
single node called `docker-desktop`.

The ingress controller can be installed on Docker Desktop using the default [quick start](#quick-start) instructions.

On most systems, if you don't have any other service of type `LoadBalancer` bound to port 80, the ingress controller
will be assigned the `EXTERNAL-IP` of `localhost`, which means that it will be reachable on localhost:80. If that
doesn't work, you might have to fall back to the `kubectl port-forward` method described in the
[local testing section](#local-testing).

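A quick way to check that it worked (a sketch; with no Ingress rules defined yet, the controller's default backend typically answers with a 404):

```console
curl --head http://localhost
```
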
#### Rancher Desktop

Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop.

Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use the Ingress-Nginx Controller in place of the default Traefik, disable Traefik from the Preferences > Kubernetes menu.


@@ -222,18 +223,18 @@ Once Traefik is disabled, the Ingress-Nginx Controller can be installed on Ranch

### Cloud deployments

If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the
`externalTrafficPolicy` of the ingress controller Service to `Local` (instead of the default `Cluster`) to save an
extra hop in some cases. If you're installing with Helm, this can be done by adding
`--set controller.service.externalTrafficPolicy=Local` to the `helm install` or `helm upgrade` command.

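For example, with Helm (a sketch, assuming the release and namespace are both named `ingress-nginx`):

```console
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.externalTrafficPolicy=Local
```
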
Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will
let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of
the upstream load balancer. This must be done both in the ingress controller
(with e.g. `--set controller.config.use-proxy-protocol=true`) and in the cloud provider's load balancer configuration
to function correctly.

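As a sketch, enabling the controller side with Helm could look like this (the release and namespace names are assumptions; your provider's load balancer must be configured separately):

```console
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.config.use-proxy-protocol=true
```
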
In the following sections, we provide YAML manifests that enable these options when possible, using the specific
options of various cloud providers.

#### AWS

@@ -242,10 +243,22 @@ In AWS, we use a Network load balancer (NLB) to expose the Ingress-Nginx Control

!!! info
    The provided templates illustrate the setup for the legacy in-tree service load balancer for AWS NLB.
    AWS provides documentation on how to use
    [Network load balancing on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html)
    with the [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller).

!!! info "Helm install on AWS"
    There have been many recent attempts at using Helm to install the controller on AWS.
    The ingress-nginx-controller Helm chart is generic and not aimed at AWS or any other infrastructure provider.
    There are annotations and configurations that are applicable only to AWS.
    Please run `helm template` and compare the output with the static YAML manifests you see below.
    A user is expected to use annotations for
    (a) internal/external scheme,
    (b) the load balancer type (NLB),
    (c) security groups,
    (d) and other such requirements
    during their `helm install` process.

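For illustration, a Helm install that sets typical NLB annotations might look like this (a sketch only; the type and scheme values shown are assumptions that must match your environment):

```console
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-type"=nlb \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-scheme"=internet-facing
```
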
##### Network Load Balancer (NLB)
```console

@@ -254,7 +267,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/cont

##### TLS termination in AWS Load Balancer (NLB)

By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer.
This section explains how to do that on AWS using an NLB.

1. Download the [deploy.yaml](https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml) template

@@ -264,32 +277,35 @@ This section explains how to do that on AWS using an NLB.

```console
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml
```

2. Edit the file and change the VPC CIDR in use for the Kubernetes cluster:

```
proxy-real-ip-cidr: XXX.XXX.XXX/XX
```

3. Change the AWS Certificate Manager (ACM) ID as well:

```
arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX
```

4. Deploy the manifest:

```console
kubectl apply -f deploy.yaml
```
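
After a minute or two, you can check that the NLB was provisioned by looking at the service's external hostname:

```console
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
```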

##### NLB Idle Timeouts

The idle timeout value for TCP flows is 350 seconds and
[cannot be modified](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#connection-idle-timeout).

For this reason, you need to ensure the
[keepalive_timeout](https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout)
value is configured to less than 350 seconds to work as expected.

By default, NGINX `keepalive_timeout` is set to `75s`.

More information about timeouts can be found in the
[official AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#connection-idle-timeout).

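If you change it, keep it below 350 seconds. With Helm, one way is through the controller's ConfigMap options (a sketch, assuming the `keep-alive` ConfigMap key, which maps to NGINX's `keepalive_timeout`):

```console
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --set-string controller.config.keep-alive=320
```
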
#### GCE-GKE

@@ -304,15 +320,14 @@ kubectl create clusterrolebinding cluster-admin-binding \

Then, the ingress controller can be installed like this:

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
```

!!! warning
    For private clusters, you will need to either add a firewall rule that allows master nodes access to
    port `8443/tcp` on worker nodes, or change the existing rule that allows access to port `80/tcp`, `443/tcp` and
    `10254/tcp` to also allow access to port `8443/tcp`. More information can be found in the
    [Official GCP Documentation](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#config-hc-firewall).

See the [GKE documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules)

@@ -333,8 +348,8 @@ More information with regard to Azure annotations for ingress controller can be
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/do/deploy.yaml
```

- By default, the service object of the ingress-nginx-controller for DigitalOcean only configures one annotation: `service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"`. While this makes the service functional, it has been reported that the DigitalOcean load-balancer graphs show `no data` unless a few other annotations are also configured. Some of those annotations require values that cannot be generic and are therefore not forced in an out-of-the-box installation. These annotations, and a discussion of them, are well documented in [this issue](https://github.com/kubernetes/ingress-nginx/issues/8965). Please refer to the issue to add annotations, with values specific to your setup, to get the DigitalOcean load-balancer graphs populated with data.

#### Scaleway

@@ -348,7 +363,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/cont

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml
```

The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager
[documentation](https://github.com/exoscale/exoscale-cloud-controller-manager/blob/master/docs/service-loadbalancer.md).

#### Oracle Cloud Infrastructure

@@ -357,8 +372,8 @@ The full list of annotations supported by Exoscale is available in the Exoscale

```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.10.1/deploy/static/provider/cloud/deploy.yaml
```

A
[complete list of available annotations for Oracle Cloud Infrastructure](https://github.com/oracle/oci-cloud-controller-manager/blob/master/docs/load-balancer-annotations.md)
can be found in the [OCI Cloud Controller Manager](https://github.com/oracle/oci-cloud-controller-manager) documentation.

#### OVHcloud

@@ -373,11 +388,11 @@ You can find the [complete tutorial](https://docs.ovh.com/gb/en/kubernetes/insta

### Bare metal clusters

This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as "raw" VMs where Kubernetes
was installed manually, using generic Linux distros (like CentOS, Ubuntu, ...).

For quick testing, you can use a
[NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport).
This should work on almost every cluster, but it will typically use a port in the range 30000-32767.

```console

@@ -401,20 +416,20 @@ kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version

### Scope

By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior,
use the flag `--watch-namespace` or check the Helm chart value `controller.scope` to limit the controller to a single
namespace.

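For example, with Helm (a sketch, assuming the chart's `controller.scope` values and a namespace named `my-apps`):

```console
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.scope.enabled=true \
  --set controller.scope.namespace=my-apps
```
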
See also
[“How to easily install multiple instances of the Ingress NGINX controller in the same cluster”](https://kubernetes.github.io/ingress-nginx/#how-to-easily-install-multiple-instances-of-the-ingress-nginx-controller-in-the-same-cluster)
for more details.

### Webhook network access

!!! warning
    The controller uses an [admission webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/)
    to validate Ingress definitions. Make sure that you don't have
    [Network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
    or additional firewalls preventing connections from the API server to the `ingress-nginx-controller-admission` service.

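One way to check that the API server can reach the webhook is a server-side dry run, which exercises the admission chain without persisting anything (a sketch; `demo:80` is a placeholder backend):

```console
kubectl create ingress smoke-test --class=nginx \
  --rule="smoke.example.com/=demo:80" --dry-run=server
```
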
### Certificate generation

@@ -435,22 +450,24 @@ You can wait until it is ready to run the next command:

### Running on Kubernetes versions older than 1.19

Ingress resources evolved over time. They started with `apiVersion: extensions/v1beta1`,
then moved to `apiVersion: networking.k8s.io/v1beta1` and more recently to `apiVersion: networking.k8s.io/v1`.

Here is how these Ingress versions are supported in Kubernetes:

- before Kubernetes 1.19, only `v1beta1` Ingress resources are supported
- from Kubernetes 1.19 to 1.21, both `v1beta1` and `v1` Ingress resources are supported
- in Kubernetes 1.22 and above, only `v1` Ingress resources are supported

And here is how these Ingress versions are supported in Ingress-Nginx Controller:

- before version 1.0, only `v1beta1` Ingress resources are supported
- in version 1.0 and above, only `v1` Ingress resources are supported
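
You can check which Ingress API versions your cluster serves with:

```console
kubectl api-versions | grep networking.k8s.io
```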

As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX
Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X
of the Ingress-Nginx Controller (e.g. version 0.49).

The Helm chart of the Ingress-Nginx Controller switched to version 1 in version 4 of the chart. In other words, if
you're running Kubernetes 1.18 or earlier, you should use version 3.X of the chart (this can be done by adding
`--version='<4'` to the `helm install` command).

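For example (a sketch, assuming the usual release and namespace names):

```console
helm install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --version='<4'
```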