Users with a specific Kubernetes version are free to use those subdirectories and get the manifest(s) related to their K8S version.

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

It will install the controller in the ingress-nginx namespace, creating that namespace if it doesn't already exist.

Info

This command is idempotent: if the ingress controller is not installed, it will install it; if the ingress controller is already installed, it will upgrade it.

If you want a full list of values that you can set while installing with Helm, then run:

helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx
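As a minimal sketch of overriding some of those values at install time (the values.yaml file name and the controller.replicaCount key below are only illustrative; check the helm show values output for the keys available in your chart version):

# save the defaults, edit what you need, then install/upgrade with them
helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx > values.yaml
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --values values.yaml \
  --set controller.replicaCount=2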

If you don't have Helm or if you prefer to use a YAML manifest, you can run the following command instead:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
 

Info

The YAML manifest in the command above was generated with helm template, so you will end up with almost the same resources as if you had used Helm to install the controller.
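If you want to reproduce such a manifest yourself, a helm template invocation along these lines should work (a sketch; the output file name is arbitrary):

helm template ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  > ingress-nginx-manifest.yaml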

Attention

If you are running an old version of Kubernetes (1.18 or earlier), please read this paragraph for specific instructions. Because of API deprecations, the default manifest may not work on your cluster. Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider.

Pre-flight check

A few pods should start in the ingress-nginx namespace:

kubectl get pods --namespace=ingress-nginx
 

After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready:

kubectl wait --namespace ingress-nginx \
   --for=condition=ready pod \
   --selector=app.kubernetes.io/component=controller \
   --timeout=120s

kubectl create ingress demo --class=nginx \
  --rule www.demo.io/=demo:80
 

You should then be able to see the "It works!" page when you connect to http://www.demo.io/. Congratulations, you are serving a public website hosted on a Kubernetes cluster! 🎉
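If DNS for www.demo.io does not resolve to your load balancer yet, you can still exercise the ingress by supplying the Host header yourself (a sketch, assuming the default ingress-nginx-controller Service name and an external IP on the Service):

# look up the controller's external IP and send a request with the demo host header
LB_IP=$(kubectl get service ingress-nginx-controller --namespace ingress-nginx \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl --header "Host: www.demo.io" "http://$LB_IP/"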

Environment-specific instructions

Local development clusters

minikube

The ingress controller can be installed through minikube's addons system:

minikube addons enable ingress
 

MicroK8s

The ingress controller can be installed through MicroK8s's addons system:

microk8s enable ingress

Please check the MicroK8s documentation page for details.

Docker Desktop

Kubernetes is available in Docker Desktop.

First, make sure that Kubernetes is enabled in the Docker settings. The command kubectl get nodes should show a single node called docker-desktop.

The ingress controller can be installed on Docker Desktop using the default quick start instructions.

On most systems, if you don't have any other service of type LoadBalancer bound to port 80, the ingress controller will be assigned the EXTERNAL-IP of localhost, which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the kubectl port-forward method described in the local testing section.
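A sketch of that fallback (8080 is an arbitrary local port; the Service name assumes the default chart installation):

kubectl port-forward --namespace ingress-nginx service/ingress-nginx-controller 8080:80
# the controller is then reachable on http://localhost:8080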

Rancher Desktop

Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop.

Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use the Ingress-Nginx Controller in place of the default Traefik, disable Traefik in the Preferences > Kubernetes menu.

Once Traefik is disabled, the Ingress-Nginx Controller can be installed on Rancher Desktop using the default quick start instructions. Follow the instructions described in the local testing section to try a sample.

Cloud deployments

If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the externalTrafficPolicy of the ingress controller Service to Local (instead of the default Cluster) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local to the helm install or helm upgrade command.
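For example, reusing the release and namespace names from the quick start above (a sketch):

helm upgrade ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values \
  --set controller.service.externalTrafficPolicy=Local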

Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. --set controller.config.use-proxy-protocol=true) and in the cloud provider's load balancer configuration to function correctly.
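The controller side of this can also be expressed as Helm values, for instance (a sketch; the matching PROXY protocol setting on the cloud load balancer itself still has to be enabled separately):

# values.yaml
controller:
  config:
    use-proxy-protocol: "true"   # tells NGINX to expect the PROXY protocol header from the load balancer
  service:
    externalTrafficPolicy: Local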

In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.

AWS

In AWS, we use a Network load balancer (NLB) to expose the Ingress-Nginx Controller behind a Service of Type=LoadBalancer.

Info

The provided templates illustrate the setup for the legacy in-tree service load balancer for AWS NLB. AWS provides documentation on how to use Network Load Balancing on Amazon EKS with the AWS Load Balancer Controller.

Network Load Balancer (NLB)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/aws/deploy.yaml
TLS termination in AWS Load Balancer (NLB)

By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB.

  1. Download the deploy.yaml template
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml
 
  2. Edit the file and change the VPC CIDR in use for the Kubernetes cluster:

    proxy-real-ip-cidr: XXX.XXX.XXX/XX
     

  3. Change the AWS Certificate Manager (ACM) ID as well (the resulting edits are sketched after these steps):

    arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX
     

  4. Deploy the manifest:

    kubectl apply -f deploy.yaml
     

NLB Idle Timeouts

The idle timeout value for TCP flows is 350 seconds and cannot be modified.

For this reason, you need to ensure the keepalive_timeout value is configured to less than 350 seconds to work as expected.

By default, NGINX keepalive_timeout is set to 75s.
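If you need a different value, it can be set through the controller ConfigMap, for example via Helm values (a sketch; keep the value below 350 seconds):

controller:
  config:
    keep-alive: "300"   # ConfigMap key that sets the NGINX keepalive_timeout, in seconds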

More information about timeouts can be found in the official AWS documentation.

GCE-GKE

First, your user needs to have cluster-admin permissions on the cluster. This can be done with the following command:

kubectl create clusterrolebinding cluster-admin-binding \
   --clusterrole cluster-admin \
   --user $(gcloud config get-value account)

Then, the ingress controller can be installed like this:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

Warning

For private clusters, you will need to either add a firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to port 80/tcp, 443/tcp and 10254/tcp to also allow access to port 8443/tcp. More information can be found in the Official GCP Documentation.

See the GKE documentation on adding rules and the Kubernetes issue for more detail.
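A sketch of such a firewall rule with gcloud (every value in angle brackets is a placeholder specific to your cluster):

gcloud compute firewall-rules create allow-master-to-ingress-nginx-webhook \
  --network <cluster-network> \
  --source-ranges <master-ipv4-cidr> \
  --target-tags <node-network-tag> \
  --allow tcp:8443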

Proxy protocol is supported in GCE. Check the official documentation on how to enable it.

Azure

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

More information about Azure annotations for the ingress controller can be found in the official AKS documentation.

Digital Ocean

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/do/deploy.yaml
- By default, the Service object of the ingress-nginx-controller for Digital Ocean only configures one annotation: service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true". While this makes the Service functional, it has been reported that the Digital Ocean load-balancer graphs show no data unless a few other annotations are also configured. Some of these other annotations require values that cannot be generic and are therefore not forced in an out-of-the-box installation. These annotations, and a discussion of them, are well documented in this issue. Please refer to the issue to add annotations, with values specific to your setup, to get the DO-LB graphs populated with data.
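If you install with Helm, extra Service annotations can be supplied through values, for instance (a sketch; the additional do-loadbalancer-* annotations and their values come from the issue mentioned above):

controller:
  service:
    annotations:
      service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
      # add further do-loadbalancer-* annotations here, with values specific to your account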

Scaleway

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/scw/deploy.yaml
 

Exoscale

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml

The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager documentation.

Oracle Cloud Infrastructure

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
 

A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation.

OVHcloud

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
 helm repo update
 helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --create-namespace

You can find the complete tutorial in the OVHcloud documentation.

Bare metal clusters

This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as "raw" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu...)

For quick testing, you can use a NodePort. This should work on almost every cluster, but it will typically use a port in the range 30000-32767.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/baremetal/deploy.yaml
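Once applied, you can check which ports from that range were allocated (a sketch, assuming the default Service name):

kubectl get service ingress-nginx-controller --namespace ingress-nginx
# the PORT(S) column shows mappings such as 80:3XXXX/TCP and 443:3XXXX/TCP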
 

For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see bare-metal considerations.

Miscellaneous

Checking ingress controller version

Run /nginx-ingress-controller --version within the pod, for instance with kubectl exec:

POD_NAMESPACE=ingress-nginx
 POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name)
 kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version

e2e test suite for Ingress NGINX Controller

[Admission] admission controller

affinitymode

server-alias

app-root

auth-*

auth-tls-*

backend-protocol

canary-*

client-body-buffer-size

connection-proxy-header

cors-*

custom-http-errors

default-backend

disable-access-log disable-http-access-log disable-stream-access-log

backend-protocol - FastCGI

force-ssl-redirect

from-to-www-redirect

annotation-global-rate-limit

backend-protocol - GRPC

http2-push-preload

denylist-source-range

whitelist-source-range

Annotation - limit-connections

limit-rate

enable-access-log enable-rewrite-log

mirror-*

modsecurity owasp

preserve-trailing-slash

proxy-*

proxy-ssl-*

permanent-redirect permanent-redirect-code

rewrite-target use-regex enable-rewrite-log

satisfy

server-snippet

service-upstream

configuration-snippet

ssl-ciphers

stream-snippet

upstream-hash-by-*

upstream-vhost

x-forwarded-prefix

Debug CLI

[Default Backend] custom service

[Default Backend]

[Default Backend] SSL

[Default Backend] change default settings

[Endpointslices] long service name

[TopologyHints] topology aware routing

[Setting]

[Shutdown] Grace period shutdown

[Shutdown] ingress controller

[Shutdown] Graceful shutdown with pending request

[Ingress] DeepInspection

single ingress - multiple hosts

[Ingress] [PathType] exact

[Ingress] [PathType] mix Exact and Prefix paths

[Ingress] [PathType] prefix checks

[Ingress] definition without host

[Memory Leak] Dynamic Certificates

[Load Balancer] load-balance

[Load Balancer] EWMA

[Load Balancer] round-robin

[Lua] dynamic certificates

[Lua] dynamic configuration

[metrics] exported prometheus metrics

nginx-configuration

[Security] request smuggling

[Service] backend status code 503

[Service] Type ExternalName

[Service] Nil Service Backend

access-log

Bad annotation values

brotli

Configmap change

add-headers

[SSL] [Flag] default-ssl-certificate

[Flag] disable-catch-all

[Flag] disable-service-external-name

[Flag] disable-sync-events

enable-real-ip

use-forwarded-headers

Geoip2

[Security] block-*

[Security] global-auth-url

global-options

settings-global-rate-limit

gzip

hash size

[Flag] ingress-class

keep-alive keep-alive-requests

Configmap - limit-rate

[Flag] custom HTTP and HTTPS ports

log-format-*

[Lua] lua-shared-dicts

main-snippet

[Security] modsecurity-snippet

enable-multi-accept

[Flag] watch namespace selector

[Security] no-auth-locations

Add no tls redirect locations

OCSP

Configure Opentelemetry

Configure OpenTracing

plugins

[Security] Pod Security Policies

[Security] Pod Security Policies with volumes

proxy-connect-timeout

Dynamic $proxy_host

proxy-next-upstream

use-proxy-protocol

proxy-read-timeout

proxy-send-timeout

reuse-port

configmap server-snippet

server-tokens

ssl-ciphers

With enable-ssl-passthrough enabled

configmap stream-snippet

[SSL] TLS protocols, ciphers and headers)

[SSL] redirect to HTTPS

[SSL] secret update

[Status] status update

[TCP] tcp-services