From a4777c531056cc0dcc6e39db1c9a52f80370fc73 Mon Sep 17 00:00:00 2001
From: k8s-ci-robot
Date: Wed, 28 Jul 2021 22:05:54 +0000
Subject: [PATCH] Deploy GitHub Pages

---
 OWNERS                                         |   4 -
 deploy/index.html                              |  16 +-
 developer-guide/getting-started/index.html     |   2 +-
 .../affinity/cookie/ingress-samesite.yaml      |   4 +-
 examples/affinity/cookie/ingress.yaml          |   2 +-
 examples/auth/basic/index.html                 |   2 +-
 examples/auth/client-certs/ingress.yaml        |   2 +-
 examples/auth/external-auth/index.html         |   4 +-
 examples/auth/external-auth/ingress.yaml       |   2 +-
 .../dashboard-ingress.yaml                     |   4 +-
 examples/chashsubset/deployment.yaml           |   2 +-
 .../configuration-snippets/ingress.yaml        |   2 +-
 .../deploy/echo-service.yaml                   |   4 +-
 .../docker-registry/ingress-with-tls.yaml      |   2 +-
 .../docker-registry/ingress-without-tls.yaml   |   2 +-
 examples/grpc/app.yaml                         |  23 ++++
 examples/grpc/cert.yaml                        |   7 ++
 examples/grpc/index.html                       |  96 +++--------------
 examples/grpc/ingress.yaml                     |  24 +++++
 examples/grpc/svc.yaml                         |  12 +++
 examples/multi-tls/multi-tls.yaml              |   2 +-
 examples/rewrite/index.html                    |   4 +-
 examples/static-ip/nginx-ingress.yaml          |   2 +-
 examples/tls-termination/index.html            |   2 +-
 examples/tls-termination/ingress.yaml          |   2 +-
 how-it-works/index.html                        |   2 +-
 search/search_index.json                       |   2 +-
 sitemap.xml                                    | 102 +++++++++---------
 sitemap.xml.gz                                 | Bin 711 -> 711 bytes
 troubleshooting/index.html                     |  77 ++++++++-----
 user-guide/basic-usage/index.html              |  14 +--
 user-guide/cli-arguments/index.html            |   2 +-
 user-guide/fcgi-services/index.html            |   2 +-
 user-guide/ingress-path-matching/index.html    |   8 +-
 user-guide/monitoring/index.html               |  42 +++-----
 .../annotations/index.html                     |   2 +-
 .../nginx-configuration/configmap/index.html   |   8 +-
 .../nginx-configuration/log-format/index.html  |   2 +-
 .../third-party-addons/opentracing/index.html  |   2 +-
 39 files changed, 242 insertions(+), 251 deletions(-)
 delete mode 100644 OWNERS
 create mode 100644 examples/grpc/app.yaml
 create mode 100644 examples/grpc/cert.yaml
 create mode 100644 examples/grpc/ingress.yaml
 create mode 100644 examples/grpc/svc.yaml

diff --git a/OWNERS b/OWNERS
deleted file mode 100644
index 1d3805a73..000000000
--- a/OWNERS
+++ /dev/null
@@ -1,4 +0,0 @@
-# See the OWNERS docs: https://github.com/kubernetes/community/blob/master/contributors/guide/owners.md
-
-labels:
-- area/docs
\ No newline at end of file
diff --git a/deploy/index.html b/deploy/index.html
index 268e5cf33..4ff17bc88 100644
--- a/deploy/index.html
+++ b/deploy/index.html
@@ -2,22 +2,22 @@
 --for=condition=ready pod \
 --selector=app.kubernetes.io/component=controller \
 --timeout=120s
-

Contents

Provider Specific Steps

Docker Desktop

Kubernetes is available in Docker Desktop

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/cloud/deploy.yaml
+

Contents

Provider Specific Steps

Docker Desktop

Kubernetes is available in Docker Desktop

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/cloud/deploy.yaml
 

minikube

For standard usage:

minikube addons enable ingress
 

microk8s

For standard usage:

microk8s enable ingress
-

Please check the microk8s documentation page

AWS

In AWS, we use a Network Load Balancer (NLB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer.

Network Load Balancer (NLB)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/aws/deploy.yaml
-
TLS termination in AWS Load Balancer (ELB)

In some scenarios it is required to terminate TLS at the Load Balancer and not in the ingress controller.

For this purpose we provide a template:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/aws/deploy-tls-termination.yaml
+

Please check the microk8s documentation page

AWS

In AWS, we use a Network Load Balancer (NLB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer.

Network Load Balancer (NLB)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/aws/deploy.yaml
+
TLS termination in AWS Load Balancer (ELB)

In some scenarios it is required to terminate TLS at the Load Balancer and not in the ingress controller.

For this purpose we provide a template:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/aws/deploy-tls-termination.yaml
 

proxy-real-ip-cidr: XXX.XXX.XXX/XX

arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX
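For orientation, here is a hedged sketch of where those two values end up in the downloaded template (resource names assume the stock manifest; the placeholder values are the ones you edit):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # VPC CIDR in use for the Kubernetes cluster
  proxy-real-ip-cidr: "XXX.XXX.XXX/XX"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # ACM certificate the ELB uses to terminate TLS
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"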

kubectl apply -f deploy-tls-termination.yaml
 
NLB Idle Timeouts

The idle timeout value for TCP flows is 350 seconds and cannot be modified.

For this reason, you need to ensure the keepalive_timeout value is configured to less than 350 seconds for this to work as expected.

By default NGINX keepalive_timeout is set to 75s.

More information with regards to timeouts can be found in the official AWS documentation
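One way to stay below that limit is the keep-alive option in the controller ConfigMap, which maps to the NGINX keepalive_timeout directive; a minimal sketch, assuming the resource names of a default installation:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # keep idle client connections open for less than the 350s NLB limit
  keep-alive: "300"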

GCE-GKE

Info

Initialize your user as a cluster-admin with the following command:

kubectl create clusterrolebinding cluster-admin-binding \
   --clusterrole cluster-admin \
   --user $(gcloud config get-value account)
-

Danger

For private clusters, you will need to either add an additional firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to ports 80/tcp, 443/tcp and 10254/tcp to also allow access to port 8443/tcp.

See the GKE documentation on adding rules and the Kubernetes issue for more detail.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/cloud/deploy.yaml
-

Failure

Proxy protocol is not supported in GCE/GKE

Azure

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/cloud/deploy.yaml
-

More information with regards to Azure annotations for ingress controller can be found in the official AKS documentation.

Digital Ocean

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/do/deploy.yaml
-

Scaleway

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/scw/deploy.yaml
+

Danger

For private clusters, you will need to either add an additional firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to ports 80/tcp, 443/tcp and 10254/tcp to also allow access to port 8443/tcp.

See the GKE documentation on adding rules and the Kubernetes issue for more detail.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/cloud/deploy.yaml
+

Failure

Proxy protocol is not supported in GCE/GKE

Azure

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/cloud/deploy.yaml
+

More information with regards to Azure annotations for ingress controller can be found in the official AKS documentation.

Digital Ocean

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/do/deploy.yaml
+

Scaleway

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/scw/deploy.yaml
 

Exoscale

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/exoscale/deploy.yaml
 

The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager documentation.

Oracle Cloud Infrastructure

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/cloud/deploy.yaml
-

A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation.

Bare-metal

Using NodePort:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.48.1/deploy/static/provider/baremetal/deploy.yaml
+

A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation.

Bare-metal

Using NodePort:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/baremetal/deploy.yaml
 

Tip

Applicable to Kubernetes clusters deployed on bare metal with a generic Linux distro (such as CentOS, Ubuntu, ...).

Info

For extended notes regarding deployments on bare-metal, see Bare-metal considerations.

Verify installation

To check if the ingress controller pods have started, run the following command:

kubectl get pods -n ingress-nginx \
   -l app.kubernetes.io/name=ingress-nginx --watch
 

Once the ingress controller pods are running, you can cancel the command by typing Ctrl+C.

Now, you are ready to create your first ingress.
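As an illustration, a first Ingress can be as small as the sketch below; the host, Service name and port are placeholders for your own application:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: hello.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world   # an existing Service in the same namespace
            port:
              number: 80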

Detect installed version

To detect which version of the ingress controller is running, exec into the pod and run nginx-ingress-controller --version.

POD_NAMESPACE=ingress-nginx
diff --git a/developer-guide/getting-started/index.html b/developer-guide/getting-started/index.html
index b14962f6e..b0fe1a7c7 100644
--- a/developer-guide/getting-started/index.html
+++ b/developer-guide/getting-started/index.html
@@ -3,7 +3,7 @@
 

Run unit-tests for lua code

make lua-test
 

Lua tests are located in the directory rootfs/etc/nginx/lua/test

Important

Test files must follow the naming convention <mytest>_test.lua or they will be ignored

Run e2e test suite

make kind-e2e-test
 

To limit the scope of the tests to execute, we can use the environment variable FOCUS

FOCUS="no-auth-locations" make kind-e2e-test
-

Note

The variable FOCUS defines Ginkgo Focused Specs

Valid values are defined in the describe definition of the e2e tests like Default Backend

The complete list of tests can be found here

Custom docker image

In some cases, it can be useful to build a docker image and publish such an image to a private or custom registry location.

This can be done by setting two environment variables, REGISTRY and TAG

export TAG="dev"
+

Note

The variable FOCUS defines Ginkgo Focused Specs

Valid values are defined in the describe definition of the e2e tests like Default Backend

The complete list of tests can be found here

Custom docker image

In some cases, it can be useful to build a docker image and publish such an image to a private or custom registry location.

This can be done by setting two environment variables, REGISTRY and TAG

export TAG="dev"
 export REGISTRY="$USER"
 
 make build image
diff --git a/examples/affinity/cookie/ingress-samesite.yaml b/examples/affinity/cookie/ingress-samesite.yaml
index 42d1c2e2d..b3f8f4b20 100644
--- a/examples/affinity/cookie/ingress-samesite.yaml
+++ b/examples/affinity/cookie/ingress-samesite.yaml
@@ -1,4 +1,4 @@
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: cookie-samesite-none
@@ -19,7 +19,7 @@ spec:
           servicePort: 80
         path: /
 ---
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: cookie-samesite-strict
diff --git a/examples/affinity/cookie/ingress.yaml b/examples/affinity/cookie/ingress.yaml
index 57edbdbd3..eac973fde 100644
--- a/examples/affinity/cookie/ingress.yaml
+++ b/examples/affinity/cookie/ingress.yaml
@@ -1,4 +1,4 @@
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: nginx-test
diff --git a/examples/auth/basic/index.html b/examples/auth/basic/index.html
index a2c1a3ce8..8ecaedae8 100644
--- a/examples/auth/basic/index.html
+++ b/examples/auth/basic/index.html
@@ -15,7 +15,7 @@
   namespace: default
 type: Opaque
 
echo "
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: ingress-with-auth
diff --git a/examples/auth/client-certs/ingress.yaml b/examples/auth/client-certs/ingress.yaml
index cf5f701b2..7172081b4 100644
--- a/examples/auth/client-certs/ingress.yaml
+++ b/examples/auth/client-certs/ingress.yaml
@@ -1,4 +1,4 @@
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   annotations:
diff --git a/examples/auth/external-auth/index.html b/examples/auth/external-auth/index.html
index 17525daec..6a44f31bf 100644
--- a/examples/auth/external-auth/index.html
+++ b/examples/auth/external-auth/index.html
@@ -6,7 +6,7 @@ NAME            HOSTS                         ADDRESS       PORTS     AGE
 external-auth   external-auth-01.sample.com   172.17.4.99   80        13s
 
 $ kubectl get ing external-auth -o yaml
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   annotations:
@@ -16,7 +16,7 @@ metadata:
   name: external-auth
   namespace: default
   resourceVersion: "2068378"
-  selfLink: /apis/networking/v1beta1/namespaces/default/ingresses/external-auth
+  selfLink: /apis/networking/v1/namespaces/default/ingresses/external-auth
   uid: 5c388f1d-8970-11e6-9004-080027d2dc94
 spec:
   rules:
diff --git a/examples/auth/external-auth/ingress.yaml b/examples/auth/external-auth/ingress.yaml
index c7a87a240..2a58ca2e3 100644
--- a/examples/auth/external-auth/ingress.yaml
+++ b/examples/auth/external-auth/ingress.yaml
@@ -1,4 +1,4 @@
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   annotations:
diff --git a/examples/auth/oauth-external-auth/dashboard-ingress.yaml b/examples/auth/oauth-external-auth/dashboard-ingress.yaml
index ade56a9e6..725bf1dc5 100644
--- a/examples/auth/oauth-external-auth/dashboard-ingress.yaml
+++ b/examples/auth/oauth-external-auth/dashboard-ingress.yaml
@@ -1,4 +1,4 @@
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   annotations:
@@ -18,7 +18,7 @@ spec:
 
 ---
 
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: oauth2-proxy
diff --git a/examples/chashsubset/deployment.yaml b/examples/chashsubset/deployment.yaml
index 9b1bafcb1..82fdc7ac0 100644
--- a/examples/chashsubset/deployment.yaml
+++ b/examples/chashsubset/deployment.yaml
@@ -54,7 +54,7 @@ spec:
       targetPort: 8080
 
 ---
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   annotations:
diff --git a/examples/customization/configuration-snippets/ingress.yaml b/examples/customization/configuration-snippets/ingress.yaml
index 07af3552f..70d9042c7 100644
--- a/examples/customization/configuration-snippets/ingress.yaml
+++ b/examples/customization/configuration-snippets/ingress.yaml
@@ -1,4 +1,4 @@
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: nginx-configuration-snippet
diff --git a/examples/customization/external-auth-headers/deploy/echo-service.yaml b/examples/customization/external-auth-headers/deploy/echo-service.yaml
index 1c3667c7c..075421807 100644
--- a/examples/customization/external-auth-headers/deploy/echo-service.yaml
+++ b/examples/customization/external-auth-headers/deploy/echo-service.yaml
@@ -43,7 +43,7 @@ spec:
   selector:
     k8s-app: demo-echo-service
 ---
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: public-demo-echo-service
@@ -61,7 +61,7 @@ spec:
           servicePort: 80
         path: /
 ---
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: secure-demo-echo-service
diff --git a/examples/docker-registry/ingress-with-tls.yaml b/examples/docker-registry/ingress-with-tls.yaml
index fc277b20f..11ccf6627 100644
--- a/examples/docker-registry/ingress-with-tls.yaml
+++ b/examples/docker-registry/ingress-with-tls.yaml
@@ -1,4 +1,4 @@
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   annotations:
diff --git a/examples/docker-registry/ingress-without-tls.yaml b/examples/docker-registry/ingress-without-tls.yaml
index 1ce1b98fb..2d713cb8c 100644
--- a/examples/docker-registry/ingress-without-tls.yaml
+++ b/examples/docker-registry/ingress-without-tls.yaml
@@ -1,4 +1,4 @@
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   annotations:
diff --git a/examples/grpc/app.yaml b/examples/grpc/app.yaml
new file mode 100644
index 000000000..acc4060d0
--- /dev/null
+++ b/examples/grpc/app.yaml
@@ -0,0 +1,23 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: fortune-teller-app
+  labels:
+    k8s-app: fortune-teller-app
+  namespace: default
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      k8s-app: fortune-teller-app
+  template:
+    metadata:
+      labels:
+        k8s-app: fortune-teller-app
+    spec:
+      containers:
+      - name: fortune-teller-app
+        image: quay.io/kubernetes-ingress-controller/grpc-fortune-teller:0.1
+        ports:
+        - containerPort: 50051
+          name: grpc
diff --git a/examples/grpc/cert.yaml b/examples/grpc/cert.yaml
new file mode 100644
index 000000000..562c30313
--- /dev/null
+++ b/examples/grpc/cert.yaml
@@ -0,0 +1,7 @@
+apiVersion: "stable.k8s.psg.io/v1"
+kind: "Certificate"
+metadata:
+  name: fortune-teller.stack.build
+  namespace: default
+spec:
+  domain: "fortune-teller.stack.build"
diff --git a/examples/grpc/index.html b/examples/grpc/index.html
index a48a2868e..da484e60f 100644
--- a/examples/grpc/index.html
+++ b/examples/grpc/index.html
@@ -1,86 +1,16 @@
- gRPC - NGINX Ingress Controller     

gRPC

This example demonstrates how to route traffic to a gRPC service through the nginx controller.

Prerequisites

  1. You have a kubernetes cluster running.
  2. You have a domain name such as example.com that is configured to route traffic to the ingress controller.
  3. You have the nginx-ingress controller installed as per docs, with gRPC support.
  4. You have a backend application running a gRPC server and listening for TCP traffic. If you want, you can use https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go as an example.
  5. You're also responsible for provisioning an SSL certificate for the ingress. So you need to have a valid SSL certificate, deployed as a Kubernetes secret of type tls, in the same namespace as the gRPC application.

Step 1: Create a Kubernetes Deployment for gRPC app

  • Make sure your gRPC application pod is running and listening for connections. For example, you can check with a kubectl command like the one below:
    $ kubectl get po -A -o wide | grep go-grpc-greeter-server
    -
  • If you have a gRPC app deployed in your cluster, then skip further notes in this Step 1, and continue from Step 2 below.

  • As an example gRPC application, we can use this app https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go .

  • To create a container image for this app, you can use this Dockerfile.

  • If you use the Dockerfile mentioned above to create an image, then the example Kubernetes manifest below creates a Deployment resource that uses that image. If needed, edit this manifest to suit your needs. Assuming the name of this yaml file is deployment.go-grpc-greeter-server.yaml ;

cat <<EOF | kubectl apply -f -
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  labels:
-    app: go-grpc-greeter-server
-  name: go-grpc-greeter-server
-spec:
-  replicas: 1
-  selector:
-    matchLabels:
-      app: go-grpc-greeter-server
-  template:
-    metadata:
-      labels:
-        app: go-grpc-greeter-server
-    spec:
-      containers:
-      - image: <reponame>/go-grpc-greeter-server   # Edit this for your reponame
-        resources:
-          limits:
-            cpu: 100m
-            memory: 100Mi
-          requests:
-            cpu: 50m
-            memory: 50Mi
-        name: go-grpc-greeter-server
-        ports:
-        - containerPort: 50051
-EOF
-

Step 2: Create the Kubernetes Service for the gRPC app

  • You can use the following example manifest to create a service of type ClusterIP. Edit the name/namespace/label/port to match your deployment/pod ;
    cat <<EOF | kubectl apply -f -
    -apiVersion: v1
    -kind: Service
    -metadata:
    -  labels:
    -    app: go-grpc-greeter-server
    -  name: go-grpc-greeter-server
    -spec:
    -  ports:
    -  - port: 80
    -    protocol: TCP
    -    targetPort: 50051
    -  selector:
    -    app: go-grpc-greeter-server
    -  type: ClusterIP
    -EOF
    -
  • You can save the above example manifest to a file with name service.go-grpc-greeter-server.yaml and edit it to match your deployment/pod, if required. You can create the service resource with a kubectl command like this ;
$ kubectl create -f service.go-grpc-greeter-server.yaml
-

Step 3: Create the Kubernetes Ingress resource for the gRPC app

  • Use the following example manifest of an Ingress resource to create an Ingress for your gRPC app. If required, edit it to match your app's details like name, namespace, service, secret etc. Make sure you have the required SSL certificate in your Kubernetes cluster, in the same namespace where the gRPC app is. The certificate must be available as a Kubernetes secret resource of type "kubernetes.io/tls" (https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets), because we are terminating TLS on the ingress;
cat <<EOF | kubectl apply -f -
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
-  annotations:
-    kubernetes.io/ingress.class: "nginx"
-    nginx.ingress.kubernetes.io/ssl-redirect: "true"
-    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
-  name: fortune-ingress
-  namespace: default
-spec:
-  rules:
-  - host: grpctest.dev.mydomain.com
-    http:
-      paths:
-      - path: /
-        pathType: Prefix
-        backend:
-          service:
-            name: go-grpc-greeter-server
-            port:
-              number: 80
-  tls:
-  # This secret must exist beforehand
-  # The cert must also contain the subj-name grpctest.dev.mydomain.com
-  # https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/PREREQUISITES.md#tls-certificates
-  - secretName: wildcard.dev.mydomain.com
-    hosts:
-      - grpctest.dev.mydomain.com
-EOF
-
  • If you save the above example manifest as a file named ingress.go-grpc-greeter-server.yaml and edit it to match your deployment and service, you can create the ingress like this ;
$ kubectl create -f ingress.go-grpc-greeter-server.yaml
-
  • The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, gRPC traffic will travel unencrypted inside the cluster and arrive "insecure").

  • For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your POD and terminate TLS at the gRPC server itself, add the ingress annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPCS".

  • A few more things to note:

  • We've tagged the ingress with the annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPC". This is the magic ingredient that sets up the appropriate nginx configuration to route http/2 traffic to our service.

  • We're terminating TLS at the ingress and have configured an SSL certificate wildcard.dev.mydomain.com. The ingress matches traffic arriving as https://grpctest.dev.mydomain.com:443 and routes unencrypted messages to the backend Kubernetes service.

Step 4: test the connection

  • Once we've applied our configuration to Kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility:
$ grpcurl grpctest.dev.mydomain.com:443 helloworld.Greeter/SayHello
-{
-  "message": "
-}
+ gRPC - NGINX Ingress Controller      

gRPC

This example demonstrates how to route traffic to a gRPC service through the nginx controller.

Prerequisites

  1. You have a kubernetes cluster running.
  2. You have a domain name such as example.com that is configured to route traffic to the ingress controller. Replace references to fortune-teller.stack.build (the domain name used in this example) with your own domain name (you're also responsible for provisioning an SSL certificate for the ingress).
  3. You have the nginx-ingress controller installed in typical fashion (must be at least quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0 for gRPC support).
  4. You have a backend application running a gRPC server and listening for TCP traffic. If you prefer, you can use the fortune-teller application provided here as an example.

Step 1: kubernetes Deployment

$ kubectl create -f app.yaml
+

This is a standard kubernetes deployment object. It is running a gRPC service listening on port 50051.

The sample application fortune-teller-app is a gRPC server implemented in Go. Here's the stripped-down implementation:

func main() {
+    grpcServer := grpc.NewServer()
+    fortune.RegisterFortuneTellerServer(grpcServer, &FortuneTeller{})
+    lis, _ := net.Listen("tcp", ":50051")
+    grpcServer.Serve(lis)
+}
+

The takeaway is that we are not doing any TLS configuration on the server (as we are terminating TLS at the ingress level, gRPC traffic will travel unencrypted inside the cluster and arrive "insecure").

For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your POD and terminate TLS at the gRPC server itself, add the ingress annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPCS".

Step 2: the kubernetes Service

$ kubectl create -f svc.yaml
+

Here we have a typical service. Nothing special, just routing traffic to the backend application on port 50051.
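If you want a picture of that Service without opening svc.yaml, it is essentially the following (the Service name here is an assumption; svc.yaml in this example is the authoritative manifest):

apiVersion: v1
kind: Service
metadata:
  name: fortune-teller-service
  namespace: default
spec:
  selector:
    k8s-app: fortune-teller-app
  ports:
  - name: grpc
    port: 50051
    targetPort: 50051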

Step 3: the kubernetes Ingress

$ kubectl create -f ingress.yaml
+

A few things to note:

  1. We've tagged the ingress with the annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPC". This is the magic ingredient that sets up the appropriate nginx configuration to route http/2 traffic to our service.
  2. We're terminating TLS at the ingress and have configured an SSL certificate fortune-teller.stack.build. The ingress matches traffic arriving as https://fortune-teller.stack.build:443 and routes unencrypted messages to our kubernetes service.
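For reference, an Ingress along the lines of these notes might look as follows; this is a sketch only (the Service and Secret names are assumptions), and ingress.yaml in this example is the authoritative manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fortune-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  rules:
  - host: fortune-teller.stack.build
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: fortune-teller-service   # assumed Service name
            port:
              number: 50051
  tls:
  # the secret must exist beforehand and cover fortune-teller.stack.build
  - secretName: fortune-teller.stack.build
    hosts:
    - fortune-teller.stack.build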

Step 4: test the connection

Once we've applied our configuration to kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility:

$ grpcurl fortune-teller.stack.build:443 build.stack.fortune.FortuneTeller/Predict
+{
+  "message": "Let us endeavor so to live that when we come to die even the undertaker will be sorry.\n\t\t-- Mark Twain, \"Pudd'nhead Wilson's Calendar\""
+}
 

Debugging Hints

  1. Obviously, watch the logs on your app.
  2. Watch the logs for the nginx-ingress-controller (increasing verbosity as needed).
  3. Double-check your address and ports.
  4. Set the GODEBUG=http2debug=2 environment variable to get detailed http/2 logging on the client and/or server.
  5. Study RFC 7540 (http/2) https://tools.ietf.org/html/rfc7540.

If you are developing public gRPC endpoints, check out https://proto.stack.build, a protocol buffer / gRPC build service that you can use to make it easier for your users to consume your API.

See also the specific GRPC settings of NGINX: https://nginx.org/en/docs/http/ngx_http_grpc_module.html

Notes on using response/request streams

  1. If your server does only response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the grpc_read_timeout to accommodate this.
  2. If your service does only request streaming and you expect a stream to be open longer than 60 seconds, you have to change the grpc_send_timeout and the client_body_timeout.
  3. If you do both response and request streaming with an open stream longer than 60 seconds, you have to change all three timeouts: grpc_read_timeout, grpc_send_timeout and client_body_timeout.

Values for the timeouts must be specified as e.g. "1200s".

On the most recent versions of nginx-ingress, changing these timeouts requires using the nginx.ingress.kubernetes.io/server-snippet annotation. There are plans for future releases to allow using the Kubernetes annotations to define each timeout separately.
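A hedged sketch of that annotation on an Ingress metadata block (the 1200s values are arbitrary; pick timeouts that fit your streams):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      grpc_read_timeout 1200s;
      grpc_send_timeout 1200s;
      client_body_timeout 1200s;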

Rewrite

This example demonstrates how to use the Rewrite annotations

Prerequisites

You will need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, and that you have an ingress controller running in your cluster.

Deployment

Rewriting can be controlled using the following annotations:

Name Description Values
nginx.ingress.kubernetes.io/rewrite-target Target URI where the traffic must be redirected string
nginx.ingress.kubernetes.io/ssl-redirect Indicates if the location section is accessible SSL only (defaults to True when Ingress contains a Certificate) bool
nginx.ingress.kubernetes.io/force-ssl-redirect Forces the redirection to HTTPS even if the Ingress is not TLS Enabled bool
nginx.ingress.kubernetes.io/app-root Defines the Application Root that the Controller must redirect if it's in '/' context string
nginx.ingress.kubernetes.io/use-regex Indicates if the paths defined on an Ingress use regular expressions bool

Examples

Rewrite Target

Attention

Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.

Note

Captured groups are saved in numbered placeholders, chronologically, in the form $1, $2 ... $n. These placeholders can be used as parameters in the rewrite-target annotation.

Create an Ingress rule with a rewrite annotation:

$ echo '
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   annotations:
@@ -17,7 +17,7 @@
         path: /something(/|$)(.*)
 ' | kubectl create -f -
 

In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $2, which is then used as a parameter in the rewrite-target annotation.

For example, the ingress definition above will result in the following rewrites:

  • rewrite.bar.com/something rewrites to rewrite.bar.com/
  • rewrite.bar.com/something/ rewrites to rewrite.bar.com/
  • rewrite.bar.com/something/new rewrites to rewrite.bar.com/new
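Putting the pieces together, a complete Ingress of the kind described above might look like the sketch below (the host rewrite.bar.com and the backend Service http-svc are illustrative placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - path: /something(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: http-svc
            port:
              number: 80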

App Root

Create an Ingress rule with an app-root annotation:

$ echo "
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   annotations:
diff --git a/examples/static-ip/nginx-ingress.yaml b/examples/static-ip/nginx-ingress.yaml
index aa4877e56..358942f5c 100644
--- a/examples/static-ip/nginx-ingress.yaml
+++ b/examples/static-ip/nginx-ingress.yaml
@@ -1,4 +1,4 @@
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: ingress-nginx
diff --git a/examples/tls-termination/index.html b/examples/tls-termination/index.html
index da2cf0b67..0d8e52707 100644
--- a/examples/tls-termination/index.html
+++ b/examples/tls-termination/index.html
@@ -1,4 +1,4 @@
- TLS termination - NGINX Ingress Controller      

TLS termination

This example demonstrates how to terminate TLS through the nginx Ingress controller.

Prerequisites

You need a TLS cert and a test HTTP service for this example.

Deployment

Create an ingress.yaml file.

apiVersion: networking.k8s.io/v1beta1
+ TLS termination - NGINX Ingress Controller      

TLS termination

This example demonstrates how to terminate TLS through the nginx Ingress controller.

Prerequisites

You need a TLS cert and a test HTTP service for this example.
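For reference, the TLS cert typically lives in a Secret of type kubernetes.io/tls; a minimal sketch with placeholder name and data (kubectl create secret tls can generate the same Secret from existing cert and key files):

apiVersion: v1
kind: Secret
metadata:
  name: tls-secret
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>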

Deployment

Create an ingress.yaml file.

apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: nginx-test
diff --git a/examples/tls-termination/ingress.yaml b/examples/tls-termination/ingress.yaml
index fc97b3707..2e989d1b0 100644
--- a/examples/tls-termination/ingress.yaml
+++ b/examples/tls-termination/ingress.yaml
@@ -1,4 +1,4 @@
-apiVersion: networking.k8s.io/v1beta1
+apiVersion: networking.k8s.io/v1
 kind: Ingress
 metadata:
   name: nginx-test
diff --git a/how-it-works/index.html b/how-it-works/index.html
index c84d7dd59..049465e65 100644
--- a/how-it-works/index.html
+++ b/how-it-works/index.html
@@ -1,4 +1,4 @@
- How it works - NGINX Ingress Controller      


How it works

The objective of this document is to explain how the NGINX Ingress controller works, in particular how the NGINX model is built and why we need one.

NGINX configuration

The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. It is important to note, though, that we don't reload Nginx on changes that impact only an upstream configuration (i.e., when Endpoints change because you deploy your app). We use lua-nginx-module to achieve this. Check below to learn more about how it's done.

NGINX model

Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. For this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps, to generate a point-in-time configuration file that reflects the state of the cluster.

To get these objects from the cluster, we use Kubernetes Informers, in particular FilteredSharedInformer. These informers allow reacting to changes using callbacks when a new object is added, modified or removed. Unfortunately, there is no way to know if a particular change is going to affect the final configuration file. Therefore, on every change, we have to rebuild a new model from scratch based on the state of the cluster and compare it to the current model. If the new model equals the current one, we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so, we send the new list of Endpoints to a Lua handler running inside Nginx using an HTTP POST request and, again, avoid generating a new NGINX configuration and triggering a reload. If the difference between the running and the new model is about more than just Endpoints, we create a new NGINX configuration based on the new model, replace the current model and trigger a reload.

One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions.

The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.

Building the NGINX model

Building a model is an expensive operation; for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to not lose changes and to remove the use of sync.Mutex to force a single execution of the sync loop, and additionally it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller; this is one of the reasons for the work queue.

Operations to build the model:

  • Order Ingress rules by CreationTimestamp field, i.e., old rules first.

  • If the same path for the same host is defined in more than one Ingress, the oldest rule wins.

  • If more than one Ingress contains a TLS section for the same host, the oldest rule wins.
  • If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins.

  • Create a list of NGINX Servers (per hostname)

  • Create a list of NGINX Upstreams
  • If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions.
  • Annotations are applied to all the paths in the Ingress.
  • Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.

When a reload is required

The next list describes the scenarios when a reload is required:

  • New Ingress Resource Created.
  • TLS section is added to existing Ingress.
  • Change in Ingress annotations that impacts more than just upstream configuration. For instance, the load-balance annotation does not require a reload.
  • A path is added/removed from an Ingress.
  • An Ingress, Service, Secret is removed.
  • A previously missing object referenced from the Ingress becomes available, like a Service or Secret.
  • A Secret is updated.

Avoiding reloads

In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point would make no sense. This can change only if NGINX changes the way new configurations are read, i.e., if applying new changes no longer replaced the worker processes.

Avoiding reloads on Endpoints changes

On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request, Lua code running in the balancer_by_lua context detects which endpoints it should choose the upstream peer from and applies the configured load balancing algorithm to choose the peer. Then Nginx takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this also covers annotation changes that affect only the upstream configuration in Nginx.

In a relatively big cluster with frequently deploying apps, this feature saves a significant number of Nginx reloads, which can otherwise affect response latency, load balancing quality (after every reload Nginx resets the state of load balancing) and so on.

Avoiding outage from wrong configuration

Because the ingress controller works using the synchronization loop pattern, it applies the configuration for all matching objects. If some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, NGINX does not reload, and hence no further Ingress changes will be taken into account.

To prevent this situation from happening, the nginx ingress controller optionally exposes a validating admission webhook server to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress object to the list of ingresses, generates the configuration and calls nginx to ensure the configuration has no syntax errors.