Merge pull request #5563 from ebeigarts/docs-fix

Use ingress-nginx-* naming in docs to match the default deployments
Kubernetes Prow Robot 2020-05-17 13:29:36 -07:00 committed by GitHub
commit 84e5896299
11 changed files with 24 additions and 24 deletions

View file

@@ -170,14 +170,14 @@ field of the `ingress-nginx` Service spec to `Local` ([example][preserve-ip]).
host-3 Ready node 203.0.113.3
```
-with a `nginx-ingress-controller` Deployment composed of 2 replicas
+with an `ingress-nginx-controller` Deployment composed of 2 replicas
```console
$ kubectl -n ingress-nginx get pod -o wide
NAME READY STATUS IP NODE
default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2
-nginx-ingress-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3
-nginx-ingress-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2
+ingress-nginx-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3
+ingress-nginx-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2
```
Requests sent to `host-2` and `host-3` would be forwarded to NGINX and the original client's IP would be preserved,
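A minimal sketch of switching `externalTrafficPolicy` to `Local` non-interactively, assuming the Service exposing the controller is named `ingress-nginx` in the `ingress-nginx` namespace as in the text above:
```console
# Deliver external traffic only to node-local endpoints, which keeps the
# original client IP visible to NGINX instead of the node's SNAT address.
$ kubectl -n ingress-nginx patch service ingress-nginx \
    --type merge \
    -p '{"spec":{"externalTrafficPolicy":"Local"}}'
```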
@@ -279,15 +279,15 @@ template:
including the host's loopback. Please evaluate the impact this may have on the security of your system carefully.
!!! example
-Consider this `nginx-ingress-controller` Deployment composed of 2 replicas, NGINX Pods inherit from the IP address
+Consider this `ingress-nginx-controller` Deployment composed of 2 replicas: NGINX Pods inherit the IP address
of their host instead of an internal Pod IP.
```console
$ kubectl -n ingress-nginx get pod -o wide
NAME READY STATUS IP NODE
default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2
-nginx-ingress-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3
-nginx-ingress-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2
+ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3
+ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2
```
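The host IP addresses shown above come from running the controller Pods in the host network namespace. A minimal sketch of the Pod template fields involved (an illustrative excerpt, not a full manifest):
```yaml
# Sketch: excerpt of the controller Deployment's Pod template.
spec:
  template:
    spec:
      hostNetwork: true
      # Adjusted so Pods on the host network still resolve cluster-internal names.
      dnsPolicy: ClusterFirstWithHostNet
```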
One major limitation of this deployment approach is that only **a single NGINX Ingress controller Pod** may be scheduled
@@ -295,7 +295,7 @@ on each cluster node, because binding the same port multiple times on the same n
impossible. Pods that are unschedulable due to such a situation fail with the following event:
```console
-$ kubectl -n ingress-nginx describe pod <unschedulable-nginx-ingress-controller-pod>
+$ kubectl -n ingress-nginx describe pod <unschedulable-ingress-nginx-controller-pod>
...
Events:
Type Reason From Message
@@ -340,14 +340,14 @@ Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to
address of all nodes running the NGINX Ingress controller.
!!! example
-Given a `nginx-ingress-controller` DaemonSet composed of 2 replicas
+Given an `ingress-nginx-controller` DaemonSet composed of 2 replicas
```console
$ kubectl -n ingress-nginx get pod -o wide
NAME READY STATUS IP NODE
default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2
-nginx-ingress-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3
-nginx-ingress-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2
+ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3
+ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2
```
the controller sets the status of all Ingress objects it manages to the following value:

View file

@@ -56,12 +56,12 @@ minikube addons disable ingress
```
- Execute `make dev-env`
-- Confirm the `nginx-ingress-controller` deployment exists:
+- Confirm the `ingress-nginx-controller` deployment exists:
```console
$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
-nginx-ingress-controller-fdcdcd6dd-vvpgs 1/1 Running 0 11s
+ingress-nginx-controller-fdcdcd6dd-vvpgs 1/1 Running 0 11s
```
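If the Pod is still starting, one way to wait until the Deployment reports ready, using the names shown above, is:
```console
# Block until the controller Deployment becomes Available (or time out).
$ kubectl -n ingress-nginx wait --for=condition=Available \
    deployment/ingress-nginx-controller --timeout=120s
```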
#### AWS

View file

@@ -11,4 +11,4 @@ $ kubectl apply -f ingress.yaml
## Test
Check if the contents of the annotation are present in the nginx.conf file using:
-`kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf`
+`kubectl exec ingress-nginx-controller-873061567-4n3k2 -n kube-system -- cat /etc/nginx/nginx.conf`
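To avoid paging through the whole file, the rendered directive can also be grepped for directly; the directive name below is only an illustrative assumption and depends on the annotation under test:
```console
# Search the generated configuration for the directive the annotation renders.
$ kubectl -n kube-system exec ingress-nginx-controller-873061567-4n3k2 -- \
    grep "proxy_set_header" /etc/nginx/nginx.conf
```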

View file

@@ -13,7 +13,7 @@ data:
proxy-send-timeout: "120"
kind: ConfigMap
metadata:
-name: nginx-configuration
+name: ingress-nginx-controller
```
```

View file

@@ -1,7 +1,7 @@
apiVersion: v1
kind: ConfigMap
metadata:
-name: nginx-configuration
+name: ingress-nginx-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx

View file

@@ -28,10 +28,10 @@ service/nginx-errors ClusterIP 10.0.0.12 <none> 80/TCP 10s
If you do not already have an instance of the NGINX Ingress controller running, deploy it according to the
[deployment guide][deploy], then follow these steps:
-1. Edit the `nginx-ingress-controller` Deployment and set the value of the `--default-backend-service` flag to the name of the
+1. Edit the `ingress-nginx-controller` Deployment and set the value of the `--default-backend-service` flag to the name of the
newly created error backend.
-2. Edit the `nginx-configuration` ConfigMap and create the key `custom-http-errors` with a value of `404,503`.
+2. Edit the `ingress-nginx-controller` ConfigMap and create the key `custom-http-errors` with a value of `404,503`.
3. Take note of the IP address assigned to the NGINX Ingress controller Service.
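For step 2, a sketch of setting the key non-interactively with `kubectl patch`, assuming the ConfigMap lives in the `ingress-nginx` namespace:
```console
# Add (or overwrite) the custom-http-errors key on the controller ConfigMap.
$ kubectl -n ingress-nginx patch configmap ingress-nginx-controller \
    --type merge \
    -p '{"data":{"custom-http-errors":"404,503"}}'
```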
```

View file

@@ -21,4 +21,4 @@ The nginx ingress controller will read the `ingress-nginx/nginx-configuration` C
## Test
Check that the contents of the ConfigMaps are present in the nginx.conf file using:
-`kubectl exec nginx-ingress-controller-873061567-4n3k2 -n ingress-nginx cat /etc/nginx/nginx.conf`
+`kubectl exec ingress-nginx-controller-873061567-4n3k2 -n ingress-nginx -- cat /etc/nginx/nginx.conf`

View file

@@ -3,7 +3,7 @@ data:
proxy-set-headers: "ingress-nginx/custom-headers"
kind: ConfigMap
metadata:
-name: nginx-configuration
+name: ingress-nginx-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
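The `proxy-set-headers` value above is a `namespace/name` reference to a second ConfigMap whose data entries are set as headers on requests passed to the backends. A minimal sketch of such a ConfigMap, with purely illustrative header names and values:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  # Each key/value pair becomes a proxied request header.
  X-Custom-Header: "my-value"
```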

View file

@@ -13,7 +13,7 @@ data:
ssl-dh-param: "ingress-nginx/lb-dhparam"
kind: ConfigMap
metadata:
-name: nginx-configuration
+name: ingress-nginx-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
@@ -52,4 +52,4 @@ $ kubectl create -f ssl-dh-param.yaml
## Test
Check that the contents of the ConfigMap are present in the nginx.conf file using:
-`kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf`
+`kubectl exec ingress-nginx-controller-873061567-4n3k2 -n kube-system -- cat /etc/nginx/nginx.conf`
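The `ssl-dh-param` entry shown earlier points at a Secret named `lb-dhparam` in the `ingress-nginx` namespace. A hedged sketch of generating and creating that Secret, assuming the controller expects the key `dhparam.pem`:
```console
# Generate DH parameters (slow for 4096 bits) and store them in a Secret;
# the key name defaults to the file name, dhparam.pem.
$ openssl dhparam -out dhparam.pem 4096
$ kubectl -n ingress-nginx create secret generic lb-dhparam --from-file=dhparam.pem
```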

View file

@@ -3,7 +3,7 @@ data:
ssl-dh-param: "ingress-nginx/lb-dhparam"
kind: ConfigMap
metadata:
-name: nginx-configuration
+name: ingress-nginx-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx

View file

@@ -122,7 +122,7 @@ data:
enable-opentracing: "true"
zipkin-collector-host: zipkin.default.svc.cluster.local
metadata:
-name: nginx-configuration
+name: ingress-nginx-controller
namespace: kube-system
' | kubectl replace -f -
```
@@ -177,7 +177,7 @@ In the Zipkin interface we can see the details:
enable-opentracing: "true"
jaeger-collector-host: jaeger-agent.default.svc.cluster.local
metadata:
-name: nginx-configuration
+name: ingress-nginx-controller
namespace: kube-system
' | kubectl replace -f -
```
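One way to confirm the tracer was wired in after the `kubectl replace`, assuming the controller runs in the `kube-system` namespace as in the ConfigMaps above, is to look for the OpenTracing directives in the rendered configuration:
```console
# Replace the placeholder with the actual controller Pod name.
$ kubectl -n kube-system exec <ingress-nginx-controller-pod> -- \
    grep -i "opentracing" /etc/nginx/nginx.conf
```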