Minor documentation cleanup (#7826)

* clarify link

* Add section headers

* console blocks

* grpc example json was not valid

* multi-tls update text

The preceding point 1 related to 4f2cb51ef8/ingress/controllers/nginx/examples/ingress.yaml
and the deployments referenced in 4f2cb51ef8/ingress/controllers/nginx/examples/README.md

They are not relevant to the current instructions.

* add whitespace around parens

* grammar

setup would be a proper noun, but it is not the intended concept, which is a state

* grammar

* is-only
* via

* Use bullets for choices

* ingress-controller

nginx is a distinct brand.

generally this repo talks about ingress-controller, although it is quite inconsistent about how...

* drop stray paren

* OAuth is a brand and needs an article here

also GitHub is a brand

* Indent text under numbered lists

* use e.g.

* Document that custom header config map changes do not trigger updates

This should be removed if
https://github.com/kubernetes/ingress-nginx/issues/5238
is fixed.

* article

* period

* infinitive verb + period

* clarify that the gRPC server is responsible for listening for TCP traffic and not some other part of the backend application

* avoid using ; and reword

* whitespace

* brand: gRPC

* only-does is the right form

`for` adds nothing here

* spelling: GitHub

* punctuation

`;` is generally not the right punctuation...

* drop stray `to`

* sentence

* backticks

* fix link

* Improve readability of compare/vs

* Renumber list

* punctuation

* Favor Ingress-NGINX and Ingress NGINX

* Simplify custom header restart text

* Undo typo damage

Co-authored-by: Josh Soref <jsoref@users.noreply.github.com>
Josh Soref 2022-01-16 19:57:28 -05:00 committed by GitHub
parent 784f9c53bb
commit 1614027cd4
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
27 changed files with 208 additions and 169 deletions


@@ -1,4 +1,4 @@
-# NGINX Ingress Controller
+# Ingress NGINX Controller
 [![Go Report Card](https://goreportcard.com/badge/github.com/kubernetes/ingress-nginx)](https://goreportcard.com/report/github.com/kubernetes/ingress-nginx)
 [![GitHub license](https://img.shields.io/github/license/kubernetes/ingress-nginx.svg)](https://github.com/kubernetes/ingress-nginx/blob/main/LICENSE)
@@ -28,7 +28,7 @@ For detailed changes on the `ingress-nginx` helm chart, please check the followi
 ### Support Versions table
-| Ingress-nginx version | k8s supported version | Alpine Version | Nginx Version |
+| Ingress-NGINX version | k8s supported version | Alpine Version | Nginx Version |
 |-----------------------|------------------------------|----------------|---------------|
 | v1.1.1 | 1.23, 1.22, 1.21, 1.20, 1.19 | 3.14.2 | 1.19.9† |
 | v1.1.0 | 1.22, 1.21, 1.20, 1.19 | 3.14.2 | 1.19.9† |


@@ -1,6 +1,6 @@
-# e2e test suite for [NGINX Ingress Controller](https://github.com/kubernetes/ingress-nginx/tree/main/)
+# e2e test suite for [Ingress NGINX Controller](https://github.com/kubernetes/ingress-nginx/tree/main/)


@@ -14,13 +14,13 @@ Session affinity can be configured using the following annotations:
 |nginx.ingress.kubernetes.io/session-cookie-name|Name of the cookie that will be created|string (defaults to `INGRESSCOOKIE`)|
 |nginx.ingress.kubernetes.io/session-cookie-secure|Set the cookie as secure regardless the protocol of the incoming request|`"true"` or `"false"`|
 |nginx.ingress.kubernetes.io/session-cookie-path|Path that will be set on the cookie (required if your [Ingress paths][ingress-paths] use regular expressions)|string (defaults to the currently [matched path][ingress-paths])|
-|nginx.ingress.kubernetes.io/session-cookie-samesite|SameSite attribute to apply to the cookie|Browser accepted values are `None`, `Lax`, and `Strict`|
+|nginx.ingress.kubernetes.io/session-cookie-samesite|`SameSite` attribute to apply to the cookie|Browser accepted values are `None`, `Lax`, and `Strict`|
 |nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none|Will omit `SameSite=None` attribute for older browsers which reject the more-recently defined `SameSite=None` value|`"true"` or `"false"`
 |nginx.ingress.kubernetes.io/session-cookie-max-age|Time until the cookie expires, corresponds to the `Max-Age` cookie directive|number of seconds|
 |nginx.ingress.kubernetes.io/session-cookie-expires|Legacy version of the previous annotation for compatibility with older browsers, generates an `Expires` cookie directive by adding the seconds to the current date|number of seconds|
 |nginx.ingress.kubernetes.io/session-cookie-change-on-failure|When set to `false` nginx ingress will send request to upstream pointed by sticky cookie even if previous attempt failed. When set to `true` and previous attempt failed, sticky cookie will be changed to point to another upstream.|`true` or `false` (defaults to `false`)|
-You can create the [example Ingress](ingress.yaml) to test this:
+You can create the [session affinity example Ingress](ingress.yaml) to test this:
 ```console
 kubectl create -f ingress.yaml
@@ -66,13 +66,15 @@ Accept-Ranges: bytes
 ```
 In the example above, you can see that the response contains a `Set-Cookie` header with the settings we have defined.
-This cookie is created by NGINX, it contains a randomly generated key corresponding to the upstream used for that request (selected using [consistent hashing][consistent-hashing]) and has an `Expires` directive.
+This cookie is created by the NGINX Ingress Controller, it contains a randomly generated key corresponding to the upstream used for that request (selected using [consistent hashing][consistent-hashing]) and has an `Expires` directive.
-If the user changes this cookie, NGINX creates a new one and redirects the user to another upstream.
+If a client sends a cookie that doesn't correspond to an upstream, NGINX selects an upstream and creates a corresponding cookie.
 If the backend pool grows NGINX will keep sending the requests through the same server of the first request, even if it's overloaded.
 When the backend server is removed, the requests are re-routed to another upstream server. This does not require the cookie to be updated because the key's [consistent hash][consistent-hashing] will change.
+## Caveats
 When you have a Service pointing to more than one Ingress, with only one containing affinity configuration, the first created Ingress will be used.
 This means that you can face the situation that you've configured session affinity on one Ingress and it doesn't work because the Service is pointing to another Ingress that doesn't configure this.
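As a sketch of how the annotations in the table above fit together, here is a hypothetical Ingress metadata fragment; the resource name and the specific values are illustrative assumptions, not taken from this repository:

```yaml
# Hypothetical fragment enabling cookie-based session affinity.
# The name and the annotation values below are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sticky-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
    nginx.ingress.kubernetes.io/session-cookie-samesite: "Lax"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
```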


@@ -3,6 +3,8 @@
 This example shows how to add authentication in a Ingress rule using a secret that contains a file generated with `htpasswd`.
 It's important the file generated is named `auth` (actually - that the secret has a key `data.auth`), otherwise the ingress-controller returns a 503.
+## Create htpasswd file
 ```console
 $ htpasswd -c auth foo
 New password: <bar>
@@ -11,11 +13,15 @@ Re-type new password:
 Adding password for user foo
 ```
+## Convert htpasswd into a secret
 ```console
 $ kubectl create secret generic basic-auth --from-file=auth
 secret "basic-auth" created
 ```
+## Examine secret
 ```console
 $ kubectl get secret basic-auth -o yaml
 apiVersion: v1
@@ -28,8 +34,10 @@ metadata:
 type: Opaque
 ```
+## Using kubectl, create an ingress tied to the basic-auth secret
 ```console
-echo "
+$ echo "
 apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
@@ -57,6 +65,8 @@ spec:
 " | kubectl create -f -
 ```
+## Use curl to confirm authorization is required by the ingress
 ```
 $ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com'
 * Trying 10.2.29.4...
@@ -84,6 +94,8 @@ $ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com'
 * Connection #0 to host 10.2.29.4 left intact
 ```
+## Use curl with the correct credentials to connect to the ingress
 ```
 $ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' -u 'foo:bar'
 * Trying 10.2.29.4...


@@ -1,7 +1,8 @@
 # Client Certificate Authentication
 It is possible to enable Client-Certificate Authentication by adding additional annotations to your Ingress Resource.
-Before getting started you must have the following Certificates Setup:
+Before getting started you must have the following Certificates configured:
 1. CA certificate and Key (Intermediate Certs need to be in CA)
 2. Server Certificate (Signed by CA) and Key (CN should be equal the hostname you will use)
@@ -15,7 +16,7 @@ You can have as many certificates as you want. If they're in the binary DER form
 openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem
 ```
-Then, you can concatenate them all in only one file, named 'ca.crt' as the following:
+Then, you can concatenate them all into one file, named 'ca.crt' with the following:
 ```bash
 cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt
@@ -29,7 +30,7 @@ for each certificate generated. Otherwise you will receive an error.
 There are many different ways of configuring your secrets to enable Client-Certificate
 Authentication to work properly.
-1. You can create a secret containing just the CA certificate and another
+* You can create a secret containing just the CA certificate and another
 Secret containing the Server Certificate which is Signed by the CA.
 ```bash
@@ -37,14 +38,14 @@ Authentication to work properly.
 kubectl create secret generic tls-secret --from-file=tls.crt=server.crt --from-file=tls.key=server.key
 ```
-2. You can create a secret containing CA certificate along with the Server
+* You can create a secret containing CA certificate along with the Server
-Certificate, that can be used for both TLS and Client Auth.
+Certificate that can be used for both TLS and Client Auth.
 ```bash
 kubectl create secret generic ca-secret --from-file=tls.crt=server.crt --from-file=tls.key=server.key --from-file=ca.crt=ca.crt
 ```
-3. If you want to also enable Certificate Revocation List verification you can
+* If you want to also enable Certificate Revocation List verification you can
 create the secret also containing the CRL file in PEM format:
 ```bash
 kubectl create secret generic ca-secret --from-file=ca.crt=ca.crt --from-file=ca.crl=ca.crl
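A minimal sketch of the annotations that wire one of these secrets into an Ingress; the `default/ca-secret` reference is an assumption based on the commands above, so adjust the namespace and name to your own setup:

```yaml
# Hypothetical annotation fragment for client-certificate verification.
nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"  # namespace/name of the CA secret
nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"            # maximum chain depth to verify
```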


@@ -1,6 +1,6 @@
 # External Basic Authentication
-### Example 1:
+### Example 1
 Use an external service (Basic Auth) located in `https://httpbin.org`
@@ -44,7 +44,7 @@ status:
 $
 ```
-Test 1: no username/password (expect code 401)
+## Test 1: no username/password (expect code 401)
 ```console
 $ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com'
@@ -74,7 +74,8 @@ $ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com'
 * Connection #0 to host 172.17.4.99 left intact
 ```
-Test 2: valid username/password (expect code 200)
+## Test 2: valid username/password (expect code 200)
 ```
 $ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:passwd'
 * Rebuilt URL to: http://172.17.4.99/
@@ -121,7 +122,8 @@ BODY:
 -no body in request-
 ```
-Test 3: invalid username/password (expect code 401)
+## Test 3: invalid username/password (expect code 401)
 ```
 curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:user'
 * Rebuilt URL to: http://172.17.4.99/


@@ -6,7 +6,7 @@ The `auth-url` and `auth-signin` annotations allow you to use an external
 authentication provider to protect your Ingress resources.
 !!! Important
-This annotation requires `ingress-nginx-controller v0.9.0` or greater.)
+This annotation requires `ingress-nginx-controller v0.9.0` or greater.
 ### Key Detail
@@ -32,7 +32,7 @@ metadata:
 ### Example: OAuth2 Proxy + Kubernetes-Dashboard
 This example will show you how to deploy [`oauth2_proxy`](https://github.com/pusher/oauth2_proxy)
-into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using github as oAuth2 provider
+into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider.
 #### Prepare
@@ -42,7 +42,7 @@ into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using g
 kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.1.yaml
 ```
-2. Create a [custom Github OAuth application](https://github.com/settings/applications/new)
+2. Create a [custom GitHub OAuth application](https://github.com/settings/applications/new)
 ![Register OAuth2 Application](images/register-oauth-app.png)
@@ -67,10 +67,12 @@ Replace `__INGRESS_HOST__` with a valid FQDN and `__INGRESS_SECRET__` with a Sec
 $ kubectl create -f oauth2-proxy.yaml,dashboard-ingress.yaml
 ```
-Test the oauth integration accessing the configured URL, like `https://foo.bar.com`
+### Test
+Test the oauth integration accessing the configured URL, e.g. `https://foo.bar.com`
 ![Register OAuth2 Application](images/github-auth.png)
-![Github authentication](images/oauth-login.png)
+![GitHub authentication](images/oauth-login.png)
 ![Kubernetes dashboard](images/dashboard.png)


@@ -2,13 +2,16 @@
 ## Ingress
-The Ingress in [this example](ingress.yaml) adds a custom header to Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at [this example](../custom-headers/README.md).
+The Ingress in [this example](ingress.yaml) adds a custom header to Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at [an example of specifying custom headers](../custom-headers/README.md).
 ```console
-$ kubectl apply -f ingress.yaml
+kubectl apply -f ingress.yaml
 ```
 ## Test
 Check if the contents of the annotation are present in the nginx.conf file using:
-`kubectl exec ingress-nginx-controller-873061567-4n3k2 -n kube-system -- cat /etc/nginx/nginx.conf`
+```console
+kubectl exec ingress-nginx-controller-873061567-4n3k2 -n kube-system -- cat /etc/nginx/nginx.conf
+```


@@ -1,5 +1,15 @@
 # Custom Headers
+## Caveats
+Changes to the custom header config maps do not force a reload of the ingress-nginx-controllers.
+### Workaround
+To work around this limitation, perform a rolling restart of the deployment.
+## Example
 This example demonstrates configuration of the nginx ingress controller via
 a ConfigMap to pass a custom list of headers to the upstream
 server.
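The rolling restart mentioned in the workaround above can be performed with `kubectl rollout restart`; the deployment name and namespace below are assumptions based on a default installation, so adjust them to match yours:

```console
kubectl rollout restart deployment ingress-nginx-controller -n ingress-nginx
```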


@@ -1,7 +1,7 @@
 # External authentication, authentication service response headers propagation
 This example demonstrates propagation of selected authentication service response headers
-to backend service.
+to a backend service.
 Sample configuration includes:
@@ -37,7 +37,7 @@ public-demo-echo-service public-demo-echo-service.kube.local 80
 secure-demo-echo-service secure-demo-echo-service.kube.local 80 1m
 ```
-Test 1: public service with no auth header
+## Test 1: public service with no auth header
 ```console
 $ curl -H 'Host: public-demo-echo-service.kube.local' -v 192.168.99.100
@@ -60,7 +60,7 @@ $ curl -H 'Host: public-demo-echo-service.kube.local' -v 192.168.99.100
 UserID: , UserRole:
 ```
-Test 2: secure service with no auth header
+## Test 2: secure service with no auth header
 ```console
 $ curl -H 'Host: secure-demo-echo-service.kube.local' -v 192.168.99.100
@@ -89,7 +89,7 @@ $ curl -H 'Host: secure-demo-echo-service.kube.local' -v 192.168.99.100
 * Connection #0 to host 192.168.99.100 left intact
 ```
-Test 3: public service with valid auth header
+## Test 3: public service with valid auth header
 ```console
 $ curl -H 'Host: public-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100
@@ -113,7 +113,7 @@ $ curl -H 'Host: public-demo-echo-service.kube.local' -H 'User:internal' -v 192.
 UserID: 1443635317331776148, UserRole: admin
 ```
-Test 4: secure service with valid auth header
+## Test 4: secure service with valid auth header
 ```console
 $ curl -H 'Host: secure-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100


@@ -1,7 +1,7 @@
 # Custom DH parameters for perfect forward secrecy
 This example aims to demonstrate the deployment of an nginx ingress controller and
-use a ConfigMap to configure custom Diffie-Hellman parameters file to help with
+use a ConfigMap to configure a custom Diffie-Hellman parameters file to help with
 "Perfect Forward Secrecy".
 ## Custom configuration
@@ -27,7 +27,7 @@ $ kubectl create -f configmap.yaml
 ## Custom DH parameters secret
 ```console
-$> openssl dhparam 4096 2> /dev/null | base64
+$ openssl dhparam 4096 2> /dev/null | base64
 LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ...
 ```
@@ -52,4 +52,6 @@ $ kubectl create -f ssl-dh-param.yaml
 ## Test
 Check the contents of the configmap is present in the nginx.conf file using:
-`kubectl exec ingress-nginx-controller-873061567-4n3k2 -n kube-system -- cat /etc/nginx/nginx.conf`
+```console
+$ kubectl exec ingress-nginx-controller-873061567-4n3k2 -n kube-system -- cat /etc/nginx/nginx.conf
+```
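The base64 output of the `openssl dhparam` command above is then placed in a Secret. A sketch, assuming the secret name `lb-dhparam` and key `dhparam.pem` referenced by the `ssl-dh-param` ConfigMap option; the value stays truncated here because it is elided in the example above:

```yaml
# Hypothetical Secret holding the DH parameters; names assumed, value truncated.
apiVersion: v1
kind: Secret
metadata:
  name: lb-dhparam
data:
  dhparam.pem: "LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ..."
```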


@@ -1,6 +1,6 @@
 # Sysctl tuning
-This example aims to demonstrate the use of an Init Container to adjust sysctl default values using `kubectl patch`
+This example aims to demonstrate the use of an Init Container to adjust sysctl default values using `kubectl patch`.
 ```console
 kubectl patch deployment -n ingress-nginx ingress-nginx-controller \


@@ -1,6 +1,6 @@
 # Docker registry
-This example demonstrates how to deploy a [docker registry](https://github.com/docker/distribution) in the cluster and configure Ingress enable access from Internet
+This example demonstrates how to deploy a [docker registry](https://github.com/docker/distribution) in the cluster and configure Ingress to enable access from the Internet.
 ## Deployment


@ -1,19 +1,19 @@
# gRPC # gRPC
This example demonstrates how to route traffic to a gRPC service through the nginx controller. This example demonstrates how to route traffic to a gRPC service through the Ingress-NGINX controller.
## Prerequisites ## Prerequisites
1. You have a kubernetes cluster running. 1. You have a kubernetes cluster running.
2. You have a domain name such as `example.com` that is configured to route traffic to the ingress controller. 2. You have a domain name such as `example.com` that is configured to route traffic to the Ingress-NGINX controller.
3. You have the ingress-nginx-controller installed as per docs. 3. You have the ingress-nginx-controller installed as per docs.
4. You have a backend application running a gRPC server and listening for TCP traffic. If you want, you can use <https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go> as an example. 4. You have a backend application running a gRPC server listening for TCP traffic. If you want, you can use <https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go> as an example.
5. You're also responsible for provisioning an SSL certificate for the ingress. So you need to have a valid SSL certificate, deployed as a Kubernetes secret of type tls, in the same namespace as the gRPC application. 5. You're also responsible for provisioning an SSL certificate for the ingress. So you need to have a valid SSL certificate, deployed as a Kubernetes secret of type `tls`, in the same namespace as the gRPC application.
### Step 1: Create a Kubernetes `Deployment` for gRPC app ### Step 1: Create a Kubernetes `Deployment` for gRPC app
- Make sure your gRPC application pod is running and listening for connections. For example you can try a kubectl command like this below: - Make sure your gRPC application pod is running and listening for connections. For example you can try a kubectl command like this below:
``` ```console
$ kubectl get po -A -o wide | grep go-grpc-greeter-server $ kubectl get po -A -o wide | grep go-grpc-greeter-server
``` ```
- If you have a gRPC app deployed in your cluster, then skip further notes in this Step 1, and continue from Step 2 below. - If you have a gRPC app deployed in your cluster, then skip further notes in this Step 1, and continue from Step 2 below.
@ -22,7 +22,7 @@ This example demonstrates how to route traffic to a gRPC service through the ngi
- To create a container image for this app, you can use [this Dockerfile](https://github.com/kubernetes/ingress-nginx/blob/5a52d99ae85cfe5ef9535291b8326b0006e75066/images/go-grpc-greeter-server/rootfs/Dockerfile). - To create a container image for this app, you can use [this Dockerfile](https://github.com/kubernetes/ingress-nginx/blob/5a52d99ae85cfe5ef9535291b8326b0006e75066/images/go-grpc-greeter-server/rootfs/Dockerfile).
- If you use the Dockerfile mentioned above, to create a image, then given below is an example of a Kubernetes manifest, to create a deployment resource, that uses that image. If needed, then edit this manifest to suit your needs. Assuming the name of this yaml file is `deployment.go-grpc-greeter-server.yaml` ; - If you use the Dockerfile mentioned above, to create a image, then you can use the following example Kubernetes manifest to create a deployment resource that uses that image. If necessary edit this manifest to suit your needs.
``` ```
cat <<EOF | kubectl apply -f - cat <<EOF | kubectl apply -f -
@ -59,7 +59,7 @@ This example demonstrates how to route traffic to a gRPC service through the ngi
### Step 2: Create the Kubernetes `Service` for the gRPC app ### Step 2: Create the Kubernetes `Service` for the gRPC app
- You can use the following example manifest to create a service of type ClusterIP. Edit the name/namespace/label/port to match your deployment/pod ; - You can use the following example manifest to create a service of type ClusterIP. Edit the name/namespace/label/port to match your deployment/pod.
``` ```
cat <<EOF | kubectl apply -f - cat <<EOF | kubectl apply -f -
apiVersion: v1 apiVersion: v1
@ -78,7 +78,7 @@ This example demonstrates how to route traffic to a gRPC service through the ngi
type: ClusterIP type: ClusterIP
EOF EOF
``` ```
- You can save the above example manifest to a file with name `service.go-grpc-greeter-server.yaml` and edit it to match your deployment/pod, if required. You can create the service resource with a kubectl command like this ; - You can save the above example manifest to a file with name `service.go-grpc-greeter-server.yaml` and edit it to match your deployment/pod, if required. You can create the service resource with a kubectl command like this:
``` ```
$ kubectl create -f service.go-grpc-greeter-server.yaml $ kubectl create -f service.go-grpc-greeter-server.yaml
@ -86,7 +86,7 @@ This example demonstrates how to route traffic to a gRPC service through the ngi
### Step 3: Create the Kubernetes `Ingress` resource for the gRPC app ### Step 3: Create the Kubernetes `Ingress` resource for the gRPC app
- Use the following example manifest of a ingress resource to create a ingress for your grpc app. If required, edit it to match your app's details like name, namespace, service, secret etc. Make sure you have the required SSL-Certificate, existing in your Kubernetes cluster, in the same namespace where the gRPC app is. The certificate must be available as a kubernetes secret resource, of type "kubernete.io/tls" https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets. This is because we are terminating TLS on the ingress; - Use the following example manifest of a ingress resource to create a ingress for your grpc app. If required, edit it to match your app's details like name, namespace, service, secret etc. Make sure you have the required SSL-Certificate, existing in your Kubernetes cluster in the same namespace where the gRPC app is. The certificate must be available as a kubernetes secret resource, of type "kubernete.io/tls" https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets. This is because we are terminating TLS on the ingress.
``` ```
cat <<EOF | kubectl apply -f - cat <<EOF | kubectl apply -f -
@ -121,7 +121,7 @@ This example demonstrates how to route traffic to a gRPC service through the ngi
EOF EOF
``` ```
- If you save the above example manifest as a file named `ingress.go-grpc-greeter-server.yaml` and edit it to match your deployment and service, you can create the ingress like this ; - If you save the above example manifest as a file named `ingress.go-grpc-greeter-server.yaml` and edit it to match your deployment and service, you can create the ingress like this:
``` ```
$ kubectl create -f ingress.go-grpc-greeter-server.yaml $ kubectl create -f ingress.go-grpc-greeter-server.yaml
@ -144,7 +144,7 @@ This example demonstrates how to route traffic to a gRPC service through the ngi
```
$ grpcurl grpctest.dev.mydomain.com:443 helloworld.Greeter/SayHello
{
  "message": "Hello "
}
```
@ -162,12 +162,12 @@ This example demonstrates how to route traffic to a gRPC service through the ngi
> https://proto.stack.build, a protocol buffer / gRPC build service that you can use
> to help make it easier for your users to consume your API.
> See also the specific gRPC settings of NGINX: https://nginx.org/en/docs/http/ngx_http_grpc_module.html
### Notes on using response/request streams
1. If your server only does response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the `grpc_read_timeout` to accommodate this.
2. If your service only does request streaming and you expect a stream to be open longer than 60 seconds, you have to change the
`grpc_send_timeout` and the `client_body_timeout`.
3. If you do both response and request streaming with an open stream longer than 60 seconds, you have to change all three timeouts: `grpc_read_timeout`, `grpc_send_timeout` and `client_body_timeout`.
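One way to set these directives is via a `server-snippet` annotation on the Ingress. This is a sketch, not a definitive recipe: the 600-second value is an assumed example, and snippet annotations must be allowed by your controller (`allow-snippet-annotations`).

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-streaming-example   # illustrative name
  annotations:
    # Raise all three timeouts for long-lived streams (600s is an assumed value)
    nginx.ingress.kubernetes.io/server-snippet: |
      grpc_read_timeout 600s;
      grpc_send_timeout 600s;
      client_body_timeout 600s;
```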
@ -2,9 +2,8 @@
This example uses 2 different certificates to terminate SSL for 2 hostnames.
1. Create tls secrets for foo.bar.com and bar.baz.com as indicated in the yaml
2. Create [multi-tls.yaml](multi-tls.yaml)
This should generate a segment like:
```console
@ -1,17 +1,17 @@
# Pod Security Policy (PSP)
In most clusters today, by default, all resources (e.g. `Deployments` and `ReplicaSets`)
have permissions to create pods.
Kubernetes, however, provides a more fine-grained authorization policy called
[Pod Security Policy (PSP)](https://kubernetes.io/docs/concepts/policy/pod-security-policy/).
PSP allows the cluster owner to define the permission of each object, for example creating a pod.
If you have PSP enabled on the cluster, and you deploy ingress-nginx,
you will need to provide the `Deployment` with the permissions to create pods.
Before applying any objects, first apply the PSP permissions by running:
```console
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/psp/psp.yaml
```
Note: PSP permissions must be granted before the creation of the `Deployment` and the `ReplicaSet`.
@ -1,6 +1,6 @@
# Rewrite
This example demonstrates how to use `Rewrite` annotations.
## Prerequisites
@ -15,9 +15,9 @@ Rewriting can be controlled using the following annotations:
|Name|Description|Values|
| --- | --- | --- |
|nginx.ingress.kubernetes.io/rewrite-target|Target URI where the traffic must be redirected|string|
|nginx.ingress.kubernetes.io/ssl-redirect|Indicates if the location section is only accessible via SSL (defaults to True when Ingress contains a Certificate)|bool|
|nginx.ingress.kubernetes.io/force-ssl-redirect|Forces the redirection to HTTPS even if the Ingress is not TLS Enabled|bool|
|nginx.ingress.kubernetes.io/app-root|Defines the Application Root that the Controller must redirect if it's in `/` context|string|
|nginx.ingress.kubernetes.io/use-regex|Indicates if the paths defined on an Ingress use regular expressions|bool|
## Examples
@ -1,6 +1,6 @@
# Static IPs
This example demonstrates how to assign a static IP to an Ingress through the Ingress-NGINX controller.
## Prerequisites
@ -11,15 +11,15 @@ and that you have an ingress controller [running](../../deploy/) in your cluster
## Acquiring an IP
Since instances of the Ingress-NGINX controller actually run on nodes in your cluster,
by default nginx Ingresses will only get static IPs if your cloudprovider
supports static IP assignments to nodes. On GKE/GCE for example, even though
nodes get static IPs, the IPs are not retained across upgrades.
To acquire a static IP for the ingress-nginx-controller, simply put it
behind a Service of `Type=LoadBalancer`.
First, create a loadbalancer Service and wait for it to acquire an IP:
```console
$ kubectl create -f static-ip-svc.yaml
@ -30,7 +30,7 @@ NAME CLUSTER-IP EXTERNAL-IP PORT(S)
ingress-nginx-lb 10.0.138.113 104.154.109.191 80:31457/TCP,443:32240/TCP 15m
```
Then, update the ingress controller so it adopts the static IP of the Service
by passing the `--publish-service` flag (the example yaml used in the next step
already has it set to "ingress-nginx-lb").
@ -42,7 +42,7 @@ deployment "ingress-nginx-controller" created
## Assigning the IP to an Ingress
From here on every Ingress created with the `ingress.class` annotation set to
`nginx` will get the IP allocated in the previous step.
```console
$ kubectl create -f ingress-nginx.yaml
@ -65,7 +65,7 @@ request_uri=http://104.154.109.191:8080/
## Retaining the IP
You can test retention by deleting the Ingress:
```console
$ kubectl delete ing ingress-nginx
@ -85,16 +85,16 @@ ingress-nginx * 104.154.109.191 80, 443 13m
## Promote ephemeral to static IP
To promote the allocated IP to static, you can update the Service manifest:
```console
$ kubectl patch svc ingress-nginx-lb -p '{"spec": {"loadBalancerIP": "104.154.109.191"}}'
"ingress-nginx-lb" patched
```
and promote the IP to static (promotion works differently for cloudproviders,
provided example is for GKE/GCE):
```console
$ gcloud compute addresses create ingress-nginx-lb --addresses 104.154.109.191 --region us-central1
Created [https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/ingress-nginx-lb].
@ -114,4 +114,3 @@ users:
Now even if the Service is deleted, the IP will persist, so you can recreate the
Service with `spec.loadBalancerIP` set to `104.154.109.191`.
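A recreated Service manifest might look like the sketch below; the name, selector, and ports are illustrative and should match your installation, while the IP is the one promoted above.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-lb          # illustrative name
spec:
  type: LoadBalancer
  loadBalancerIP: 104.154.109.191  # the promoted static IP
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed selector
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```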
@ -1,6 +1,6 @@
# How it works
The objective of this document is to explain how the Ingress-NGINX controller works, in particular how the NGINX model is built and why we need one.
## NGINX configuration
@ -1,6 +1,6 @@
# Overview
This is the documentation for the Ingress NGINX Controller.
It is built around the [Kubernetes Ingress resource](https://kubernetes.io/docs/concepts/services-networking/ingress/), using a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) to store the controller configuration.
@ -26,7 +26,7 @@ Its important because until now, a default install of the Ingress-NGINX controll
On clusters with more than one instance of the Ingress-NGINX controller, all instances of the controllers must be aware of which Ingress objects they serve. The `ingressClassName` field of an Ingress is the way to let the controller know about that.
```console
kubectl explain ingressclass
```
```
@ -67,7 +67,9 @@ FIELDS:
There are 2 reasons primarily.
### Reason #1
Until K8s version 1.21, it was possible to create an Ingress resource using deprecated versions of the Ingress API, such as:
- `extensions/v1beta1`
- `networking.k8s.io/v1beta1`
@ -76,7 +78,9 @@ You would get a message about deprecation, but the Ingress resource would get cr
From K8s version 1.22 onwards, you can **only** access the Ingress API via the stable, `networking.k8s.io/v1` API. The reason is explained in the [official blog on deprecated ingress API versions](https://kubernetes.io/blog/2021/07/26/update-with-ingress-nginx/).
### Reason #2
If you are already using the Ingress-NGINX controller and then upgrade to K8s version v1.22, there are several scenarios where your existing Ingress objects will not work how you expect. Read this FAQ to check which scenario matches your use case.
## What is ingressClassName field ?
@ -85,7 +89,7 @@ _(Reason #2)_ if you are already using the Ingress-NGINX controller and then upg
```shell
kubectl explain ingress.spec.ingressClassName
```
```console
KIND: Ingress
VERSION: networking.k8s.io/v1
@ -112,7 +116,7 @@ The `.spec.ingressClassName` behavior has precedence over the deprecated `kubern
- If you have only one instance of the Ingress-NGINX controller running in your cluster, and you still want to use IngressClass, you should add the annotation `ingressclass.kubernetes.io/is-default-class` in your IngressClass, so that any new Ingress objects will have this one as default IngressClass.
In this case, you need to make your controller aware of the objects. If you have any Ingress objects that don't yet have either the [`.spec.ingressClassName`](https://kubernetes.io/docs/reference/kubernetes-api/service-resources/ingress-v1/#IngressSpec) field set in their manifest, or the ingress annotation (`kubernetes.io/ingress.class`), then you should start your Ingress-NGINX controller with the flag [--watch-ingress-without-class=true](#what-is-the-flag-watch-ingress-without-class).
You can configure your Helm chart installation's values file with `.controller.watchIngressWithoutClass: true`.
@ -130,7 +134,8 @@ metadata:
spec:
  controller: k8s.io/ingress-nginx
```
And add the value `spec.ingressClassName=nginx` in your Ingress objects.
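An Ingress referencing that class might look like the sketch below; the Ingress name, host, and backend service are illustrative placeholders, not part of any real installation.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress          # illustrative name
spec:
  ingressClassName: nginx        # matches the IngressClass defined above
  rules:
  - host: example.local          # illustrative host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service   # illustrative backend service
            port:
              number: 80
```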
## I have multiple ingress objects in my cluster. What should I do ?
@ -138,7 +143,7 @@ And add the value "spec.ingressClassName=nginx" in your Ingress objects
### What is the flag '--watch-ingress-without-class' ?
- It's a flag that is passed, as an argument, to the `nginx-ingress-controller` executable. In the configuration, it looks like this:
```
...
...
@ -209,7 +214,7 @@ If you start Ingress-Nginx B with the command line argument `--watch-ingress-wit
```
helm repo update
```
- Now, install an additional instance of the Ingress-NGINX controller like this:
```
helm install ingress-nginx-2 ingress-nginx/ingress-nginx \
--namespace ingress-nginx-2 \
@ -226,7 +226,9 @@ Use the `--service <service>` flag if your `ingress-nginx` `LoadBalancer` servic
### ingresses
`kubectl ingress-nginx ingresses`, alternately `kubectl ingress-nginx ing`, shows a more detailed view of the ingress definitions in a namespace.
Compare:
```console
$ kubectl get ingresses --all-namespaces
@ -235,7 +237,7 @@ default example-ingress1 testaddr.local,testaddr2.local localhost 80
default test-ingress-2 * localhost 80 5d
```
vs.
```console
$ kubectl ingress-nginx ingresses --all-namespaces
@ -272,7 +274,7 @@ Checking deployments...
https://github.com/kubernetes/ingress-nginx/issues/3808
```
To show the lints added **only** for a particular `ingress-nginx` release, use the `--from-version` and `--to-version` flags:
```console
$ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0.24.0 --to-version 0.24.0
@ -13,7 +13,7 @@ Do not move it without providing redirects.
There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting
methods to obtain more information.
### Check the Ingress Resource Events
```console
$ kubectl get ing -n <namespace-of-ingress-resource>
@ -41,7 +41,7 @@ Events:
Normal UPDATE 58s ingress-nginx-controller Ingress default/cafe-ingress
```
### Check the Ingress Controller Logs
```console
$ kubectl get pods -n <namespace-of-ingress-controller>
@ -58,7 +58,7 @@ NGINX Ingress controller
....
```
### Check the Nginx Configuration
```console
$ kubectl get pods -n <namespace-of-ingress-controller>
@ -80,7 +80,7 @@ http {
....
```
### Check if used Services Exist
```console
$ kubectl get svc --all-namespaces
@ -130,14 +130,14 @@ Both authentications must work:
**Service authentication**
The Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in a few ways:
* _Service Account:_ This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details.
* _Kubeconfig file:_ In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the `--kubeconfig` flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the `--kubeconfig` does not require the flag `--apiserver-host`.
The format of the file is identical to `~/.kube/config` which is used by kubectl to connect to the API server. See 'kubeconfig' section for details.
* _Using the flag `--apiserver-host`:_ Using this flag `--apiserver-host=http://localhost:8080` it is possible to specify an unsecured API server or reach a remote kubernetes cluster using [kubectl proxy](https://kubernetes.io/docs/user-guide/kubectl/kubectl_proxy/).
Please do not use this approach in production.
In the diagram below you can see the full authentication flow with all options, starting with the browser
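The kubeconfig option above can be sketched as follows; the file path is an illustrative assumption, and the flag must point at a file reachable from inside the controller pod.

```console
$ /nginx-ingress-controller --kubeconfig /etc/kubernetes/kubeconfig
```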
@ -284,7 +284,7 @@ nobody 107 21 0 20:23 ? 00:00:00 nginx: worker process
root 172 0 0 20:43 pts/0 00:00:00 bash
```
6. Attach gdb to the nginx master process
```console
$ gdb -p 21
@ -295,7 +295,7 @@ Reading symbols from /usr/sbin/nginx...done.
(gdb)
```
7. Copy and paste the following:
```console
set $cd = ngx_cycle->config_dump
@ -309,9 +309,9 @@ append memory nginx_conf.txt \
end
```
8. Quit GDB by pressing CTRL+D
9. Open nginx_conf.txt
```console
cat nginx_conf.txt
@ -1,6 +1,6 @@
# Default backend
The default backend is a service which handles all URL paths and hosts the Ingress-NGINX controller doesn't understand
(i.e., all the requests that are not mapped with an Ingress).
Basically a default backend exposes two URLs:
@ -22,11 +22,11 @@ This tutorial will show you how to install [Prometheus](https://prometheus.io/)
--set-string controller.podAnnotations."prometheus\.io/scrape"="true" \
--set-string controller.podAnnotations."prometheus\.io/port"="10254"
```
- You can validate that the controller is configured for metrics by looking at the values of the installed release, like this:
```
helm get values ingress-controller --namespace ingress-nginx
```
- You should be able to see the values shown below:
```
..
controller:
@ -82,7 +82,7 @@ metadata:
kubernetes.io/ingress.class: "gce"
```
will target the GCE controller, forcing the Ingress-NGINX controller to ignore it, while an annotation like:
```yaml
metadata:
@ -91,7 +91,7 @@ metadata:
kubernetes.io/ingress.class: "nginx"
```
will target the Ingress-NGINX controller, forcing the GCE controller to ignore it.
You can change the value "nginx" to something else by setting the `--ingress-class` flag:
@ -221,7 +221,7 @@ Enables the return of the header Server from the backend instead of the generic
## allow-snippet-annotations
Enables Ingress to parse and add *-snippet annotations/directives created by the user. _**default:**_ `true`
Warning: We recommend enabling this option only if you TRUST users with permission to create Ingress objects, as this
may allow a user to add restricted configurations to the final nginx.conf file.
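To disable snippet annotations cluster-wide, this key can be set in the controller ConfigMap; a minimal sketch, assuming the ConfigMap name and namespace of a default installation (adjust both to match yours):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name from the default installation
  namespace: ingress-nginx         # assumed namespace
data:
  allow-snippet-annotations: "false"
```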
@ -140,7 +140,7 @@ kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/ma
kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/deployment.yaml
```
Also we need to configure the Ingress-NGINX controller ConfigMap with the required values:
```
$ echo '