Update gce docs (#8866)
* update GCE doc with proxy protocol and some fixes
* update GKE docs

Signed-off-by: James Strong <strong.james.e@gmail.com>
parent fe116d62cb
commit 91e6174556

2 changed files with 100 additions and 41 deletions
Makefile (7 changes)
@@ -56,7 +56,7 @@ endif
 MAC_OS = $(shell uname -s)

 ifeq ($(MAC_OS), Darwin)
-MAC_DOCKER_FLAGS=
+MAC_DOCKER_FLAGS="--load"
 else
 MAC_DOCKER_FLAGS=
 endif
@@ -220,7 +220,10 @@ dev-env-stop: ## Deletes local Kubernetes cluster created by kind.

 .PHONY: live-docs
 live-docs: ## Build and launch a local copy of the documentation website in http://localhost:8000
-	@docker build ${PLATFORM_FLAG} ${PLATFORM} -t ingress-nginx-docs .github/actions/mkdocs
+	@docker build ${PLATFORM_FLAG} ${PLATFORM} \
+		--no-cache \
+		$(MAC_DOCKER_FLAGS) \
+		-t ingress-nginx-docs .github/actions/mkdocs
 	@docker run ${PLATFORM_FLAG} ${PLATFORM} --rm -it \
 		-p 8000:8000 \
 		-v ${PWD}:/docs \
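The updated `live-docs` target can be exercised directly; this is a usage sketch assuming Docker is installed and running locally:

```shell
# Build the mkdocs image (now with --no-cache and, on macOS, --load)
# and serve the documentation with live reload on port 8000
make live-docs
# then browse to http://localhost:8000
```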
@@ -6,11 +6,15 @@ There are multiple ways to install the NGINX ingress controller:
 - with `kubectl apply`, using YAML manifests;
 - with specific addons (e.g. for [minikube](#minikube) or [MicroK8s](#microk8s)).

-On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to get started as fast as possible, you can check the [quick start](#quick-start) instructions. However, in many environments, you can improve the performance or get better logs by enabling extra features. we recommend that you check the [environment-specific instructions](#environment-specific-instructions) for details about optimizing the ingress controller for your particular environment or cloud provider.
+On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to
+get started as fast as possible, you can check the [quick start](#quick-start) instructions. However, in many
+environments, you can improve the performance or get better logs by enabling extra features. We recommend that you
+check the [environment-specific instructions](#environment-specific-instructions) for details about optimizing the
+ingress controller for your particular environment or cloud provider.

 ## Contents

-<!-- Quick tip: run `grep '^##' index.md` to check that the table of contents is up to date. -->
+<!-- Quick tip: run `grep '^##' index.md` to check that the table of contents is up-to-date. -->

 - [Quick start](#quick-start)
@@ -28,7 +32,12 @@ On most Kubernetes clusters, the ingress controller will work without requiring
 - ... [Bare-metal](#bare-metal-clusters)
 - [Miscellaneous](#miscellaneous)

-<!-- TODO: We have subdirectories for kubernetes versions now because of a PR https://github.com/kubernetes/ingress-nginx/pull/8162 . You can see this here https://github.com/kubernetes/ingress-nginx/tree/main/deploy/static/provider/cloud . We need to add documentation here that is clear and unambiguous in guiding users to pick the deployment manifest under a subdirectory, based on the K8S version being used. But until the explicit clear docs land here, users are recommended to feel free to use those subdirectories and get the manifest(s) related to their K8S version. -->
+<!-- TODO: We have subdirectories for kubernetes versions now because of a PR
+https://github.com/kubernetes/ingress-nginx/pull/8162 . You can see this here
+https://github.com/kubernetes/ingress-nginx/tree/main/deploy/static/provider/cloud .
+We need to add documentation here that is clear and unambiguous in guiding users to pick the deployment manifest
+under a subdirectory, based on the K8S version being used. But until the explicit clear docs land here, users are
+free to use those subdirectories and get the manifest(s) related to their K8S version. -->

 ## Quick start
@@ -55,13 +64,14 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/cont
 ```

 !!! info
-    The YAML manifest in the command above was generated with `helm template`, so you will end up with almost the same resources as if you had used Helm to install the controller.
+    The YAML manifest in the command above was generated with `helm template`, so you will end up with almost the same
+    resources as if you had used Helm to install the controller.

 !!! attention
     If you are running an old version of Kubernetes (1.18 or earlier), please read
     [this paragraph](#running-on-Kubernetes-versions-older-than-1.19) for specific instructions.
     Because of api deprecations, the default manifest may not work on your cluster.
-    Specific manifests for supported Kubernetes versions are available within a subfolder of each provider.
+    Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider.

 ### Pre-flight check
@@ -71,7 +81,8 @@ A few pods should start in the `ingress-nginx` namespace:
 kubectl get pods --namespace=ingress-nginx
 ```

-After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready:
+After a while, they should all be running. The following command will wait for the ingress controller pod to be up,
+running, and ready:

 ```console
 kubectl wait --namespace ingress-nginx \
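The `kubectl wait` invocation above is cut off by the hunk boundary; a complete sketch looks like the following (the pod selector and timeout values are assumptions, not taken from this diff):

```shell
# Block until the controller pod reports the Ready condition
# (selector and timeout are illustrative, verify against your deployment)
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
```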
@@ -89,7 +100,7 @@ kubectl create deployment demo --image=httpd --port=80
 kubectl expose deployment demo
 ```

-Then create an ingress resource. The following example uses an host that maps to `localhost`:
+Then create an ingress resource. The following example uses a host that maps to `localhost`:

 ```console
 kubectl create ingress demo-localhost --class=nginx \
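The `kubectl create ingress demo-localhost` command above is also truncated by the hunk boundary; one way to complete it, sketched from the surrounding text (the `demo.localdev.me` rule and the port-forward step are assumptions), is:

```shell
# Route demo.localdev.me to the demo service on port 80 (rule assumed)
kubectl create ingress demo-localhost --class=nginx \
  --rule="demo.localdev.me/*=demo:80"

# Forward local port 8080 to the controller so the host resolves locally
kubectl port-forward --namespace=ingress-nginx \
  service/ingress-nginx-controller 8080:80
```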
@@ -106,7 +117,8 @@ At this point, if you access http://demo.localdev.me:8080/, you should see an HT

 ### Online testing

-If your Kubernetes cluster is a "real" cluster that supports services of type `LoadBalancer`, it will have allocated an external IP address or FQDN to the ingress controller.
+If your Kubernetes cluster is a "real" cluster that supports services of type `LoadBalancer`, it will have allocated an
+external IP address or FQDN to the ingress controller.

 You can see that IP address or FQDN with the following command:

@@ -114,9 +126,11 @@ You can see that IP address or FQDN with the following command:
 kubectl get service ingress-nginx-controller --namespace=ingress-nginx
 ```

-It will be the `EXTERNAL-IP` field. If that field shows `<pending>`, this means that your Kubernetes cluster wasn't able to provision the load balancer (generally, this is because it doesn't support services of type `LoadBalancer`).
+It will be the `EXTERNAL-IP` field. If that field shows `<pending>`, this means that your Kubernetes cluster wasn't
+able to provision the load balancer (generally, this is because it doesn't support services of type `LoadBalancer`).

-Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress resource. The following example assumes that you have set up a DNS record for `www.demo.io`:
+Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress
+resource. The following example assumes that you have set up a DNS record for `www.demo.io`:

 ```console
 kubectl create ingress demo --class=nginx \
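The `kubectl create ingress demo` command above is truncated; a hedged completion, mirroring the local-testing example (the `www.demo.io` rule is an assumption based on the surrounding text), is:

```shell
# Route www.demo.io through the load balancer to the demo service (rule assumed)
kubectl create ingress demo --class=nginx \
  --rule="www.demo.io/*=demo:80"
```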
@@ -130,7 +144,8 @@ kubectl create ingress demo --class=nginx \
 ```


-You should then be able to see the "It works!" page when you connect to http://www.demo.io/. Congratulations, you are serving a public web site hosted on a Kubernetes cluster! 🎉
+You should then be able to see the "It works!" page when you connect to http://www.demo.io/. Congratulations,
+you are serving a public website hosted on a Kubernetes cluster! 🎉

 ## Environment-specific instructions

@@ -161,27 +176,41 @@ Kubernetes is available in Docker Desktop:
 - Mac, from [version 18.06.0-ce](https://docs.docker.com/docker-for-mac/release-notes/#stable-releases-of-2018)
 - Windows, from [version 18.06.0-ce](https://docs.docker.com/docker-for-windows/release-notes/#docker-community-edition-18060-ce-win70-2018-07-25)

-First, make sure that Kubernetes is enabled in the Docker settings. The command `kubectl get nodes` should show a single node called `docker-desktop`.
+First, make sure that Kubernetes is enabled in the Docker settings. The command `kubectl get nodes` should show a
+single node called `docker-desktop`.

 The ingress controller can be installed on Docker Desktop using the default [quick start](#quick-start) instructions.

-On most systems, if you don't have any other service of type `LoadBalancer` bound to port 80, the ingress controller will be assigned the `EXTERNAL-IP` of `localhost`, which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the `kubectl port-forward` method described in the [local testing section](#local-testing).
+On most systems, if you don't have any other service of type `LoadBalancer` bound to port 80, the ingress controller
+will be assigned the `EXTERNAL-IP` of `localhost`, which means that it will be reachable on localhost:80. If that
+doesn't work, you might have to fall back to the `kubectl port-forward` method described in the
+[local testing section](#local-testing).

 ### Cloud deployments

-If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the `externalTrafficPolicy` of the ingress controller Service to `Local` (instead of the default `Cluster`) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding `--set controller.service.externalTrafficPolicy=Local` to the `helm install` or `helm upgrade` command.
+If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the
+`externalTrafficPolicy` of the ingress controller Service to `Local` (instead of the default `Cluster`) to save an
+extra hop in some cases. If you're installing with Helm, this can be done by adding
+`--set controller.service.externalTrafficPolicy=Local` to the `helm install` or `helm upgrade` command.

-Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. `--set controller.config.use-proxy-protocol=true`) and in the cloud provider's load balancer configuration to function correctly.
+Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will
+let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of
+the upstream load balancer. This must be done both in the ingress controller
+(with e.g. `--set controller.config.use-proxy-protocol=true`) and in the cloud provider's load balancer configuration
+to function correctly.

-In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.
+In the following sections, we provide YAML manifests that enable these options when possible, using the specific
+options of various cloud providers.

 #### AWS

-In AWS we use a Network load balancer (NLB) to expose the NGINX Ingress controller behind a Service of `Type=LoadBalancer`.
+In AWS, we use a Network load balancer (NLB) to expose the NGINX Ingress controller behind a Service of `Type=LoadBalancer`.

 !!! info
     The provided templates illustrate the setup for legacy in-tree service load balancer for AWS NLB.
-    AWS provides the documentation on how to use [Network load balancing on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html) with [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller).
+    AWS provides the documentation on how to use
+    [Network load balancing on Amazon EKS](https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html)
+    with [AWS Load Balancer Controller](https://github.com/kubernetes-sigs/aws-load-balancer-controller).

 ##### Network Load Balancer (NLB)

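The two Helm flags discussed in the cloud-deployments hunk can be combined in a single command; this is a sketch, not a definitive recipe (the release name, namespace, and chart repository alias are assumptions):

```shell
# Preserve client source IPs: Local traffic policy plus PROXY protocol.
# Remember that PROXY protocol must ALSO be enabled on the cloud load balancer.
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.externalTrafficPolicy=Local \
  --set controller.config.use-proxy-protocol=true
```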
@@ -191,7 +220,8 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/cont

 ##### TLS termination in AWS Load Balancer (NLB)

-By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB.
+By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer.
+This section explains how to do that on AWS using an NLB.

 1. Download the [deploy.yaml](https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml) template

@@ -216,13 +246,17 @@ By default, TLS is terminated in the ingress controller. But it is also possible

 ##### NLB Idle Timeouts

-Idle timeout value for TCP flows is 350 seconds and [cannot be modified](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#connection-idle-timeout).
+Idle timeout value for TCP flows is 350 seconds and
+[cannot be modified](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#connection-idle-timeout).

-For this reason, you need to ensure the [keepalive_timeout](https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout) value is configured less than 350 seconds to work as expected.
+For this reason, you need to ensure the
+[keepalive_timeout](https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout)
+value is configured less than 350 seconds to work as expected.

-By default NGINX `keepalive_timeout` is set to `75s`.
+By default, NGINX `keepalive_timeout` is set to `75s`.

-More information with regards to timeouts can be found in the [official AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#connection-idle-timeout)
+More information with regard to timeouts can be found in the
+[official AWS documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#connection-idle-timeout)

 #### GCE-GKE

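One way to keep `keepalive_timeout` below the 350-second NLB limit is through the controller's ConfigMap; this is a hedged sketch (the ConfigMap name and the `keep-alive` key are assumptions, verify them against your installation and the controller's ConfigMap reference):

```shell
# Set nginx keepalive_timeout to 300s via the controller ConfigMap
# (ConfigMap name and key are assumed, not taken from this diff)
kubectl -n ingress-nginx patch configmap ingress-nginx-controller \
  --type merge -p '{"data":{"keep-alive":"300"}}'
```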
@@ -242,12 +276,15 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/cont
 ```

 !!! warning
-    For private clusters, you will need to either add an additional firewall rule that allows master nodes access to port `8443/tcp` on worker nodes, or change the existing rule that allows access to ports `80/tcp`, `443/tcp` and `10254/tcp` to also allow access to port `8443/tcp`.
+    For private clusters, you will need to either add a firewall rule that allows master nodes access to
+    port `8443/tcp` on worker nodes, or change the existing rule that allows access to ports `80/tcp`, `443/tcp` and
+    `10254/tcp` to also allow access to port `8443/tcp`. More information can be found in the
+    [Official GCP Documentation](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#config-hc-firewall).

-See the [GKE documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) on adding rules and the [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/79739) for more detail.
+See the [GKE documentation](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules)
+on adding rules and the [Kubernetes issue](https://github.com/kubernetes/kubernetes/issues/79739) for more detail.

 !!! warning
-    Proxy protocol is not supported in GCE/GKE.
+    Proxy protocol is supported in GCE; check the
+    [official documentation](https://cloud.google.com/load-balancing/docs/tcp/setting-up-tcp#proxy-protocol)
+    on how to enable it.

 #### Azure

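The private-cluster firewall rule described above can be sketched with `gcloud`; every name and CIDR below is an illustrative assumption (use your cluster's network, node tags, and control-plane range):

```shell
# Allow the GKE control plane to reach the admission webhook port 8443
# (rule name, network, source range, and target tags are illustrative)
gcloud compute firewall-rules create allow-master-to-webhook \
  --network my-cluster-network \
  --direction INGRESS \
  --source-ranges 172.16.0.0/28 \
  --target-tags my-cluster-nodes \
  --allow tcp:8443
```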
@@ -255,7 +292,7 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/cont
 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml
 ```

-More information with regards to Azure annotations for ingress controller can be found in the [official AKS documentation](https://docs.microsoft.com/en-us/azure/aks/ingress-internal-ip#create-an-ingress-controller).
+More information with regard to Azure annotations for the ingress controller can be found in the [official AKS documentation](https://docs.microsoft.com/en-us/azure/aks/ingress-internal-ip#create-an-ingress-controller).

 #### Digital Ocean

@@ -275,7 +312,8 @@ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/cont
 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml
 ```

-The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager [documentation](https://github.com/exoscale/exoscale-cloud-controller-manager/blob/master/docs/service-loadbalancer.md).
+The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager
+[documentation](https://github.com/exoscale/exoscale-cloud-controller-manager/blob/master/docs/service-loadbalancer.md).

 #### Oracle Cloud Infrastructure

@@ -283,19 +321,25 @@ The full list of annotations supported by Exoscale is available in the Exoscale
 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml
 ```

-A [complete list of available annotations for Oracle Cloud Infrastructure](https://github.com/oracle/oci-cloud-controller-manager/blob/master/docs/load-balancer-annotations.md) can be found in the [OCI Cloud Controller Manager](https://github.com/oracle/oci-cloud-controller-manager) documentation.
+A
+[complete list of available annotations for Oracle Cloud Infrastructure](https://github.com/oracle/oci-cloud-controller-manager/blob/master/docs/load-balancer-annotations.md)
+can be found in the [OCI Cloud Controller Manager](https://github.com/oracle/oci-cloud-controller-manager) documentation.

 ### Bare metal clusters

-This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as "raw" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu...)
+This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as "raw" VMs where Kubernetes
+was installed manually, using generic Linux distros (like CentOS, Ubuntu...)

-For quick testing, you can use a [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport). This should work on almost every cluster, but it will typically use a port in the range 30000-32767.
+For quick testing, you can use a
+[NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport).
+This should work on almost every cluster, but it will typically use a port in the range 30000-32767.

 ```console
 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/baremetal/deploy.yaml
 ```

-For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see [bare-metal considerations](./baremetal.md).
+For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range),
+see [bare-metal considerations](./baremetal.md).

 ## Miscellaneous

@@ -311,14 +355,21 @@ kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version

 ### Scope

-By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior, use the flag `--watch-namespace` or check the Helm chart value `controller.scope` to limit the controller to a single namespace.
+By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior,
+use the flag `--watch-namespace` or check the Helm chart value `controller.scope` to limit the controller to a single
+namespace.

-See also [“How to easily install multiple instances of the Ingress NGINX controller in the same cluster”](https://kubernetes.github.io/ingress-nginx/#how-to-easily-install-multiple-instances-of-the-ingress-nginx-controller-in-the-same-cluster) for more details.
+See also
+[“How to easily install multiple instances of the Ingress NGINX controller in the same cluster”](https://kubernetes.github.io/ingress-nginx/#how-to-easily-install-multiple-instances-of-the-ingress-nginx-controller-in-the-same-cluster)
+for more details.

 ### Webhook network access

 !!! warning
-    The controller uses an [admission webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) to validate Ingress definitions. Make sure that you don't have [Network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) or additional firewalls preventing connections from the API server to the `ingress-nginx-controller-admission` service.
+    The controller uses an [admission webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/)
+    to validate Ingress definitions. Make sure that you don't have
+    [Network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
+    or additional firewalls preventing connections from the API server to the `ingress-nginx-controller-admission` service.

 ### Certificate generation

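A hedged sketch of limiting the controller's scope with Helm, complementing the `--watch-namespace` / `controller.scope` note above (the specific `controller.scope.*` value names, release name, and namespaces are assumptions, check them against the chart's values):

```shell
# Watch Ingress objects in a single namespace only
# (value names assumed from the chart's controller.scope block)
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.scope.enabled=true \
  --set controller.scope.namespace=my-app-namespace
```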
@@ -338,7 +389,8 @@ You can wait until it is ready to run the next command:

 ### Running on Kubernetes versions older than 1.19

-Ingress resources evolved over time. They started with `apiVersion: extensions/v1beta1`, then moved to `apiVersion: networking.k8s.io/v1beta1` and more recently to `apiVersion: networking.k8s.io/v1`.
+Ingress resources evolved over time. They started with `apiVersion: extensions/v1beta1`,
+then moved to `apiVersion: networking.k8s.io/v1beta1` and more recently to `apiVersion: networking.k8s.io/v1`.

 Here is how these Ingress versions are supported in Kubernetes:
 - before Kubernetes 1.19, only `v1beta1` Ingress resources are supported
@@ -349,6 +401,10 @@ And here is how these Ingress versions are supported in NGINX Ingress Controller
 - before version 1.0, only `v1beta1` Ingress resources are supported
 - in version 1.0 and above, only `v1` Ingress resources are supported

-As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X of the NGINX Ingress Controller (e.g. version 0.49).
+As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX
+Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X
+of the NGINX Ingress Controller (e.g. version 0.49).

-The Helm chart of the NGINX Ingress Controller switched to version 1 in version 4 of the chart. In other words, if you're running Kubernetes 1.19 or earlier, you should use version 3.X of the chart (this can be done by adding `--version='<4'` to the `helm install` command).
+The Helm chart of the NGINX Ingress Controller switched to version 1 in version 4 of the chart. In other words, if
+you're running Kubernetes 1.18 or earlier, you should use version 3.X of the chart (this can be done by adding
+`--version='<4'` to the `helm install` command).
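The chart-version pin mentioned in the last hunk can be sketched as follows (release name, namespace, and repository alias are assumptions):

```shell
# Pin to the 3.x chart series for old Kubernetes versions
# (repo alias and release name are illustrative)
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --version='<4'
```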