    Bare-metal considerations

    In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers.

[Diagrams: cloud environment vs. bare-metal environment]

    The rest of this document describes a few recommended approaches to deploying the NGINX Ingress controller inside a Kubernetes cluster running on bare-metal.

    A pure software solution: MetalLB

    MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.

    This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes. In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details.

[Diagram: MetalLB in L2 mode]

    Note

The description of other supported configuration modes is out of scope for this document.

    Warning

    MetalLB is currently in beta. Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly.

    MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions.

MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.

    Example

Given the following 3-node Kubernetes cluster (the external IP is added as an example; in most bare-metal environments this value is <None>):

    $ kubectl get node
 NAME     STATUS   ROLES    EXTERNAL-IP
 host-1   Ready    master   203.0.113.1
 host-2   Ready    node     203.0.113.2
 host-3   Ready    node     203.0.113.3
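
MetalLB is then given ownership of a range of addresses via the config ConfigMap. The following is a minimal Layer 2 sketch, assuming MetalLB was deployed to its default metallb-system namespace; the address range is an example and must be replaced with addresses that are free in your own network:

 apiVersion: v1
 kind: ConfigMap
 metadata:
   namespace: metallb-system   # default namespace used by the MetalLB manifests
   name: config
 data:
   config: |
     address-pools:
     - name: default
       protocol: layer2              # the Layer 2 mode described above
       addresses:
       - 203.0.113.10-203.0.113.15   # example range, adapt to your network

After such a ConfigMap is applied, MetalLB assigns an address from the pool to the ingress-nginx Service of type LoadBalancer.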

    Hardening Guide

    Overview

There are several ways of hardening and securing nginx. This documentation draws on two guides that overlap in some points: the CIS NGINX Benchmark and the recommendations published at cipherlist.eu (both are referenced throughout the table below).

This guide describes which of the configurations from those guides are already implemented as defaults in the nginx implementation of kubernetes ingress, which need to be configured, which are obsolete because nginx runs as a container (the CIS benchmark relates to a non-containerized installation), and which are difficult or not possible.

Be aware that this is only a guide and you are responsible for your own implementation. Some of the configurations may leave specific clients unable to reach your site, or have similar consequences.

This guide refers to chapters in the CIS Benchmark. For a full explanation, you should refer to the benchmark document itself.

    Configuration Guide

Chapter in CIS benchmark | Status | Default | Action to do if not default
1 Initial Setup
1.1 Installation
1.1.1 Ensure NGINX is installed (Scored) | OK | done through helm charts / following documentation to deploy nginx ingress |
1.1.2 Ensure NGINX is installed from source (Not Scored) | OK | done through helm charts / following documentation to deploy nginx ingress |
1.2 Configure Software Updates
1.2.1 Ensure package manager repositories are properly configured (Not Scored) | OK | done via helm; the nginx version could be overwritten, however compatibility is not ensured then |
1.2.2 Ensure the latest software package is installed (Not Scored) | ACTION NEEDED | done via helm; the nginx version could be overwritten, however compatibility is not ensured then | Plan for periodic updates
2 Basic Configuration
2.1 Minimize NGINX Modules
2.1.1 Ensure only required modules are installed (Not Scored) | OK | Only needed modules are installed already; proposals for further reduction are welcome |
2.1.2 Ensure HTTP WebDAV module is not installed (Scored) | OK | |
2.1.3 Ensure modules with gzip functionality are disabled (Scored) | OK | |
2.1.4 Ensure the autoindex module is disabled (Scored) | OK | No autoindex configs so far in ingress defaults |
2.2 Account Security
2.2.1 Ensure that NGINX is run using a non-privileged, dedicated service account (Not Scored) | OK | Pod configured as user www-data: see this line in helm chart values. Compiled with user www-data: see this line in build script |
2.2.2 Ensure the NGINX service account is locked (Scored) | OK | Docker design ensures this |
2.2.3 Ensure the NGINX service account has an invalid shell (Scored) | OK | Shell is nologin: see this line in build script |
2.3 Permissions and Ownership
2.3.1 Ensure NGINX directories and files are owned by root (Scored) | OK | Obsolete through docker design; the ingress controller needs to update the configs dynamically |
2.3.2 Ensure access to NGINX directories and files is restricted (Scored) | OK | See previous answer |
2.3.3 Ensure the NGINX process ID (PID) file is secured (Scored) | OK | No PID file due to docker design |
2.3.4 Ensure the core dump directory is secured (Not Scored) | OK | No working_directory configured by default |
2.4 Network Configuration
2.4.1 Ensure NGINX only listens for network connections on authorized ports (Not Scored) | OK | Ensured by automatic nginx.conf configuration |
2.4.2 Ensure requests for unknown host names are rejected (Not Scored) | OK | They are not rejected but sent to the "default backend" delivering appropriate errors (mostly 404) |
2.4.3 Ensure keepalive_timeout is 10 seconds or less, but not 0 (Scored) | ACTION NEEDED | Default is 75s | Configure keep-alive to 10 seconds according to this documentation
2.4.4 Ensure send_timeout is set to 10 seconds or less, but not 0 (Scored) | RISK TO BE ACCEPTED | Not configured, however the nginx default is 60s | Not configurable
2.5 Information Disclosure
2.5.1 Ensure server_tokens directive is set to off (Scored) | OK | server_tokens is configured to off by default |
2.5.2 Ensure default error and index.html pages do not reference NGINX (Scored) | ACTION NEEDED | 404 shows no version at all; 503 and 403 show "nginx", which is hardcoded, see this line in nginx source code | Configure custom error pages at least for 403, 404, 500 and 503
2.5.3 Ensure hidden file serving is disabled (Not Scored) | ACTION NEEDED | config not set | Configure a config.server-snippet, but beware of .well-known challenges or similar. Refer to the benchmark here please
2.5.4 Ensure the NGINX reverse proxy does not enable information disclosure (Scored) | ACTION NEEDED | hide not configured | Configure hide-headers with an array of "X-Powered-By" and "Server", according to this documentation
3 Logging
3.1 Ensure detailed logging is enabled (Not Scored) | OK | nginx ingress has a very detailed log format by default |
3.2 Ensure access logging is enabled (Scored) | OK | Access log is enabled by default |
3.3 Ensure error logging is enabled and set to the info logging level (Scored) | OK | Error log is configured by default. The log level does not matter, because it is all sent to STDOUT anyway |
3.4 Ensure log files are rotated (Scored) | OBSOLETE | Log file handling is not part of the nginx ingress and should be handled separately |
3.5 Ensure error logs are sent to a remote syslog server (Not Scored) | OBSOLETE | See previous answer |
3.6 Ensure access logs are sent to a remote syslog server (Not Scored) | OBSOLETE | See previous answer |
3.7 Ensure proxies pass source IP information (Scored) | OK | Headers are set by default |
4 Encryption
4.1 TLS / SSL Configuration
4.1.1 Ensure HTTP is redirected to HTTPS (Scored) | OK | Redirect to TLS is default |
4.1.2 Ensure a trusted certificate and trust chain is installed (Not Scored) | ACTION NEEDED | For installing certs there are enough manuals on the web; a good way is to use Let's Encrypt through cert-manager | Install proper certificates or use Let's Encrypt with cert-manager
4.1.3 Ensure private key permissions are restricted (Scored) | ACTION NEEDED | See previous answer |
4.1.4 Ensure only modern TLS protocols are used (Scored) | OK/ACTION NEEDED | Default is TLS 1.2 + 1.3; while this is okay for the CIS Benchmark, cipherlist.eu only recommends 1.3. This may cut off older operating systems | Set controller.config.ssl-protocols to "TLSv1.3"
4.1.5 Disable weak ciphers (Scored) | ACTION NEEDED | Default ciphers are already good, but cipherlist.eu recommends even stronger ciphers | Set controller.config.ssl-ciphers to "EECDH+AESGCM:EDH+AESGCM"
4.1.6 Ensure custom Diffie-Hellman parameters are used (Scored) | ACTION NEEDED | No custom DH parameters are generated | Generate DH parameters for each ingress deployment you use - see here for a how-to
4.1.7 Ensure Online Certificate Status Protocol (OCSP) stapling is enabled (Scored) | ACTION NEEDED | Not enabled | Set via this configuration parameter
4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored) | OK | HSTS is enabled by default |
4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored) | ACTION NEEDED / RISK TO BE ACCEPTED | HPKP not enabled by default | If Let's Encrypt is not used, set the correct HPKP header. There are several ways to implement this - with the helm charts it works via controller.add-headers. If Let's Encrypt is used, this is complicated; a solution here is yet unknown
4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) | DEPENDS ON BACKEND | Highly dependent on backends; not every backend allows configuring this, and it can also be mitigated via a service mesh | If the backend allows it, the manual is here
4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) | DEPENDS ON BACKEND | Highly dependent on backends; not every backend allows configuring this, and it can also be mitigated via a service mesh | If the backend allows it, see the configuration here
4.1.12 Ensure your domain is preloaded (Not Scored) | ACTION NEEDED | Preload is not active by default | Set controller.config.hsts-preload to true
4.1.13 Ensure session resumption is disabled to enable perfect forward secrecy (Scored) | OK | Session tickets are disabled by default |
4.1.14 Ensure HTTP/2.0 is used (Not Scored) | OK | http2 is set by default |
5 Request Filtering and Restrictions
5.1 Access Control
5.1.1 Ensure allow and deny filters limit access to specific IP addresses (Not Scored) | OK/ACTION NEEDED | Depends on use case; the geoip module is compiled into the nginx ingress controller and there are several ways to use it | If needed, set IP restrictions via annotations or work with config snippets (be careful with the lets-encrypt HTTP challenge!)
5.1.2 Ensure only whitelisted HTTP methods are allowed (Not Scored) | OK/ACTION NEEDED | Depends on use case | If required, it can be set via a config snippet
5.2 Request Limits
5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) | ACTION NEEDED | Default timeout is 60s | Set via this configuration parameter and the respective body equivalent
5.2.2 Ensure the maximum request body size is set correctly (Scored) | ACTION NEEDED | Default is 1m | Set via this configuration parameter
5.2.3 Ensure the maximum buffer size for URIs is defined (Scored) | ACTION NEEDED | Default is 4 8k | Set via this configuration parameter
5.2.4 Ensure the number of connections per IP address is limited (Not Scored) | OK/ACTION NEEDED | No limit set | Depends on use case; a limit can be set via these annotations
5.2.5 Ensure rate limits by IP address are set (Not Scored) | OK/ACTION NEEDED | No limit set | Depends on use case; a limit can be set via these annotations
5.3 Browser Security
5.3.1 Ensure X-Frame-Options header is configured and enabled (Scored) | ACTION NEEDED | Header not set by default | Several ways to implement this - with the helm charts it works via controller.add-headers
5.3.2 Ensure X-Content-Type-Options header is configured and enabled (Scored) | ACTION NEEDED | See previous answer | See previous answer
5.3.3 Ensure the X-XSS-Protection header is enabled and configured properly (Scored) | ACTION NEEDED | See previous answer | See previous answer
5.3.4 Ensure that Content Security Policy (CSP) is enabled and configured properly (Not Scored) | ACTION NEEDED | See previous answer | See previous answer
5.3.5 Ensure the Referrer Policy is enabled and configured properly (Not Scored) | ACTION NEEDED | Depends on application; it should be handled in the application's webserver itself, not in the load-balancing ingress | Check the backend webserver
6 Mandatory Access Control | n/a | too high level, depends on backends |
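
To make a few of the ACTION NEEDED items concrete, here is a hedged sketch of Helm values that addresses some of them; the keys mirror the ConfigMap options referenced in the table, but verify them against the chart version you deploy:

 controller:
   config:
     keep-alive: "10"                          # 2.4.3: keepalive_timeout of 10s or less
     ssl-protocols: "TLSv1.3"                  # 4.1.4: only modern TLS protocols
     ssl-ciphers: "EECDH+AESGCM:EDH+AESGCM"    # 4.1.5: stronger cipher list
     hsts-preload: "true"                      # 4.1.12: HSTS preloading
     hide-headers: "X-Powered-By,Server"       # 2.5.4: reduce information disclosure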


    Installation Guide

    Attention

The default configuration watches Ingress objects in all namespaces.

To change this behavior, use the flag --watch-namespace to limit the scope to a particular namespace.
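
As an illustration, the flag is passed as a container argument in the controller Deployment (a sketch; POD_NAMESPACE is assumed to be injected via the downward API, as the published manifests do):

 args:
   - /nginx-ingress-controller
   - --watch-namespace=$(POD_NAMESPACE)   # only watch Ingresses in the pod's own namespace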

    Warning

    If multiple Ingresses define paths for the same host, the ingress controller merges the definitions.

    Danger

    The admission webhook requires connectivity between Kubernetes API server and the ingress controller.

In case network policies or additional firewalls are in place, please allow access to port 8443.

    Attention

    The first time the ingress controller starts, two Jobs create the SSL Certificate used by the admission webhook. For this reason, there is an initial delay of up to two minutes until it is possible to create and validate Ingress definitions.

    You can wait until it is ready to run the next command:

     kubectl wait --namespace ingress-nginx \
       --for=condition=ready pod \
       --selector=app.kubernetes.io/component=controller \
       --timeout=120s
If you use Helm, the chart can be installed with:
     helm install ingress-nginx ingress-nginx/ingress-nginx
     

    Detect installed version:

    POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
     kubectl exec -it $POD_NAME -- /nginx-ingress-controller --version

    Role Based Access Control (RBAC)

    Overview

    This example applies to nginx-ingress-controllers being deployed in an environment with RBAC enabled.

Role Based Access Control consists of four layers:

    1. ClusterRole - permissions assigned to a role that apply to an entire cluster
    2. ClusterRoleBinding - binding a ClusterRole to a specific account
    3. Role - permissions assigned to a role that apply to a specific namespace
    4. RoleBinding - binding a Role to a specific account

    In order for RBAC to be applied to an nginx-ingress-controller, that controller should be assigned to a ServiceAccount. That ServiceAccount should be bound to the Roles and ClusterRoles defined for the nginx-ingress-controller.

    Service Accounts created in this example

    One ServiceAccount is created in this example, nginx-ingress-serviceaccount.

    Permissions Granted in this example

There are two sets of permissions defined in this example: cluster-wide permissions defined by the ClusterRole named nginx-ingress-clusterrole, and namespace-specific permissions defined by the Role named nginx-ingress-role.

    Cluster Permissions

These permissions are granted in order for the nginx-ingress-controller to be able to function as an ingress across the cluster. They are granted to the ClusterRole named nginx-ingress-clusterrole:

    • configmaps, endpoints, nodes, pods, secrets: list, watch
    • nodes: get
    • services, ingresses: get, list, watch
    • events: create, patch
    • ingresses/status: update
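
As an illustration, the permissions above correspond to a ClusterRole of roughly the following shape (a minimal sketch; the manifests shipped in the deploy directory are authoritative, including the exact apiGroups for Ingress resources):

 apiVersion: rbac.authorization.k8s.io/v1
 kind: ClusterRole
 metadata:
   name: nginx-ingress-clusterrole
 rules:
   - apiGroups: [""]
     resources: ["configmaps", "endpoints", "nodes", "pods", "secrets"]
     verbs: ["list", "watch"]
   - apiGroups: [""]
     resources: ["nodes"]
     verbs: ["get"]
   - apiGroups: [""]
     resources: ["services"]
     verbs: ["get", "list", "watch"]
   - apiGroups: ["networking.k8s.io"]
     resources: ["ingresses"]
     verbs: ["get", "list", "watch"]
   - apiGroups: [""]
     resources: ["events"]
     verbs: ["create", "patch"]
   - apiGroups: ["networking.k8s.io"]
     resources: ["ingresses/status"]
     verbs: ["update"]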

    Namespace Permissions

These permissions are specific to the nginx-ingress namespace. They are granted to the Role named nginx-ingress-role:

    • configmaps, pods, secrets: get
    • endpoints: get

Furthermore, to support leader election, the nginx-ingress-controller needs access to a ConfigMap using the resourceName ingress-controller-leader-nginx.

    Note that resourceNames can NOT be used to limit requests using the “create” verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a “create” request are part of the request body).

    • configmaps: get, update (for resourceName ingress-controller-leader-nginx)
    • configmaps: create

    This resourceName is the concatenation of the election-id and the ingress-class as defined by the ingress-controller, which defaults to:

    • election-id: ingress-controller-leader
    • ingress-class: nginx
    • resourceName : <election-id>-<ingress-class>

    Please adapt accordingly if you overwrite either parameter when launching the nginx-ingress-controller.
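
Putting the namespace permissions and the leader-election rules together, the Role might look roughly like this (a sketch assuming the default election-id and ingress-class):

 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
   name: nginx-ingress-role
   namespace: nginx-ingress
 rules:
   - apiGroups: [""]
     resources: ["configmaps", "pods", "secrets"]
     verbs: ["get"]
   - apiGroups: [""]
     resources: ["endpoints"]
     verbs: ["get"]
   - apiGroups: [""]
     resources: ["configmaps"]
     resourceNames: ["ingress-controller-leader-nginx"]   # <election-id>-<ingress-class>
     verbs: ["get", "update"]
   - apiGroups: [""]
     resources: ["configmaps"]   # create cannot be limited by resourceName
     verbs: ["create"]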

    Bindings

    The ServiceAccount nginx-ingress-serviceaccount is bound to the Role nginx-ingress-role and the ClusterRole nginx-ingress-clusterrole.

    The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the nginx-ingress namespace.
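
A sketch of the corresponding RoleBinding follows (the binding name is illustrative); an analogous ClusterRoleBinding binds the same ServiceAccount to nginx-ingress-clusterrole:

 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
 metadata:
   name: nginx-ingress-role-binding   # illustrative name
   namespace: nginx-ingress
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: Role
   name: nginx-ingress-role
 subjects:
   - kind: ServiceAccount
     name: nginx-ingress-serviceaccount
     namespace: nginx-ingress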


    Upgrading

    Important

    No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx.

    Without Helm

    To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment.

For example, if your deployment resource looks like this (partial example):

    kind: Deployment
     metadata:
       name: nginx-ingress-controller
       namespace: ingress-nginx
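
(The example above is truncated by the page; the relevant part is the controller image in the pod template.) The upgrade then amounts to bumping the controller image tag, for example (registry path, container name, and tag are illustrative and must match your actual deployment):

 spec:
   template:
     spec:
       containers:
         - name: nginx-ingress-controller
           image: k8s.gcr.io/ingress-nginx/controller:v0.46.0   # bump this tag and re-apply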

    Ingress NGINX - Code Overview

    This document provides an overview of Ingress NGINX code.

    Core Golang code

This part of the code is responsible for the main logic of Ingress NGINX. It contains all the logic that parses Ingress objects and annotations, watches Endpoints, and turns them into a usable nginx.conf configuration.

Core Sync Logic:

Ingress-nginx has an internal model of the ingresses, secrets and endpoints in a given cluster. It maintains two copies of that model: (1) the currently running configuration model and (2) the one generated in response to some change in the cluster.

The sync logic diffs the two models and, if there is a change, tries to converge the running configuration to the new one.

    There are static and dynamic configuration changes.

    All endpoints and certificate changes are handled dynamically by posting the payload to an internal NGINX endpoint that is handled by Lua.


The main parts of the code are the following:

    Entrypoint

This is the main package, responsible for starting the ingress-nginx program.

It can be found in the cmd/nginx directory.

    Version

This package is responsible for adding the version subcommand and can be found in the version directory.

    Internal code

This part of the code contains the internal logic that composes the Ingress NGINX Controller. It is split into:

    Admission Controller

Contains the code of the Kubernetes Admission Controller, which validates the syntax of Ingress objects before accepting them.

    This code can be found in internal/admission/controller directory.

    File functions

Contains auxiliary code that deals with files, such as generating the SHA1 checksum of a file or creating required directories.

    This code can be found in internal/file directory.

    Ingress functions

Contains the core logic of the NGINX Ingress Controller.

Other parts of the code will be documented here in the future.

    K8s functions

    Contains helper functions for parsing Kubernetes objects.

    This part of the code can be found in internal/k8s directory.

    Networking functions

    Contains helper functions for networking, such as IPv4 and IPv6 parsing, SSL certificate parsing, etc.

    This part of the code can be found in internal/net directory.

    NGINX functions

Contains helper functions to deal with NGINX, such as verifying whether it is running and reading parts of its configuration file.

    This part of the code can be found in internal/nginx directory.

    Tasks / Queue

    Contains the functions responsible for the sync queue part of the controller.

    This part of the code can be found in internal/task directory.

    Other parts of internal

Other parts of the internal code might not be covered here, like runtime and watch, but they may be added in the future.

    E2E Test

The e2e test code is in the test directory.

    Other programs

The kubectl plugin, dbg, waitshutdown, and the hack scripts are still to be described here.

    Deploy files

This directory contains the YAML deploy files used as examples or references in the docs to deploy Ingress NGINX and other components.

    Those files are in deploy directory.

    Helm Chart

Used to generate the published Helm chart.

    Code is in charts/ingress-nginx.

    Documentation/Website

The documentation used to generate the website https://kubernetes.github.io/ingress-nginx/.

This code is available in docs, and its main "language" is Markdown, which mkdocs uses to generate the static pages.

    Container Images

    Container images used to run ingress-nginx, or to build the final image.

    Base Images

Contains the Dockerfiles and scripts used to build base images that are used in other parts of the repo. They are present in the images directory. Some examples:

• nginx - The base NGINX image ingress-nginx uses is not a vanilla NGINX. It bundles many libraries together, and it is a job in itself to maintain that and keep things up-to-date.
• custom-error-pages - Used in the custom error page examples.

    There are other images inside this directory.

    Ingress Controller Image

    The image used to build the final ingress controller, used in deploy scripts and Helm charts.

This is NGINX with some Lua enhancements. Dynamic certificate handling, endpoint handling, canary traffic splitting, custom load balancing, etc. all happen in this component. One can also add new functionality using the Lua plugin system.

The files are in the rootfs directory and contain, among others, the components described below.

    Ingress NGINX Lua Scripts

Ingress NGINX uses Lua scripts to enable features like hot reloading, rate limiting and monitoring. Some are written using the OpenResty helper.

    The directory containing Lua scripts is rootfs/etc/nginx/lua.

    Nginx Go template file

One of the functions of Ingress NGINX is to turn Ingress objects into an nginx.conf file.

To do so, the final step is to render the configuration through the Go template file nginx.tmpl, turning it into the final nginx.conf file.


    Developing for NGINX Ingress Controller

    This document explains how to get started with developing for NGINX Ingress controller.

    Prerequisites

    Install Go 1.14 or later.

    Note

    The project uses Go Modules

Install Docker (v19.03.0 or later, with the experimental features turned on).

    Important

    The majority of make tasks run as docker containers

    Quick Start

    1. Fork the repository
2. Clone the repository to any location on your workstation
    3. Add a GO111MODULE environment variable with export GO111MODULE=on
    4. Run go mod download to install dependencies

    Local build

Start a local Kubernetes cluster using kind, then build and deploy the ingress controller:

    make dev-env

    Testing

    Run go unit tests

    make test

    Run unit-tests for lua code

    make lua-test

    Lua tests are located in the directory rootfs/etc/nginx/lua/test

    Important

    Test files must follow the naming convention <mytest>_test.lua or it will be ignored

    Run e2e test suite

    make kind-e2e-test

    To limit the scope of the tests to execute, we can use the environment variable FOCUS

    FOCUS="no-auth-locations" make kind-e2e-test

    Note

    The variable FOCUS defines Ginkgo Focused Specs

    Valid values are defined in the describe definition of the e2e tests like Default Backend

    The complete list of tests can be found here

    Custom docker image

In some cases, it can be useful to build a docker image and publish it to a private or custom registry location.

This can be done by setting two environment variables, REGISTRY and TAG:

    export TAG="dev"
export REGISTRY="$USER"

make build image

and then publish that version with:

    docker push $REGISTRY/controller:$TAG

    e2e test suite for NGINX Ingress Controller

    [Default Backend] change default settings

    [Default Backend]

    [Default Backend] custom service

    [Default Backend] SSL

    [TCP] tcp-services

    auth-*

    affinitymode

    proxy-*

    mirror-*

    canary-*

    limit-rate

    force-ssl-redirect

    http2-push-preload

    proxy-ssl-*

    modsecurity owasp

    backend-protocol - GRPC

    cors-*

    influxdb-*

    Annotation - limit-connections

    client-body-buffer-size

    default-backend

    connection-proxy-header

    upstream-vhost

    custom-http-errors

    disable-access-log disable-http-access-log disable-stream-access-log

    server-snippet

    rewrite-target use-regex enable-rewrite-log

    app-root

    whitelist-source-range

    enable-access-log enable-rewrite-log

    x-forwarded-prefix

    configuration-snippet

    backend-protocol - FastCGI

    from-to-www-redirect

    permanent-redirect permanent-redirect-code

    upstream-hash-by-*

    annotation-global-rate-limit

    backend-protocol

    satisfy

    server-alias

    ssl-ciphers

    auth-tls-*

    [Status] status update

    Debug CLI

    [Memory Leak] Dynamic Certificates

    [Ingress] [PathType] mix Exact and Prefix paths

    [Ingress] definition without host

    single ingress - multiple hosts

    [Ingress] [PathType] exact

    [Ingress] [PathType] prefix checks

    [Security] request smuggling

    [SSL] [Flag] default-ssl-certificate

    enable-real-ip

    access-log

    [Lua] lua-shared-dicts

    server-tokens

    use-proxy-protocol

    [Flag] custom HTTP and HTTPS ports

    [Security] no-auth-locations

    Dynamic $proxy_host

    proxy-connect-timeout

    [Security] Pod Security Policies

    Geoip2

    [Security] Pod Security Policies with volumes

    enable-multi-accept

    log-format-*

    [Flag] ingress-class

    ssl-ciphers

    proxy-next-upstream

    [Security] global-auth-url

    [Security] block-*

    plugins

    Configmap - limit-rate

    Configure OpenTracing

    use-forwarded-headers

    proxy-send-timeout

    Add no tls redirect locations

    settings-global-rate-limit

    add-headers

    hash size

    keep-alive keep-alive-requests

    [Flag] disable-catch-all

    main-snippet

    [SSL] TLS protocols, ciphers and headers)

    Configmap change

    proxy-read-timeout

    [Security] modsecurity-snippet

    OCSP

    reuse-port

    [Shutdown] Graceful shutdown with pending request

    [Shutdown] ingress controller

    [Service] backend status code 503

    [Service] Type ExternalName


    Remove static SSL configuration mode

    Table of Contents

    Summary

Since release 0.19.0 it is possible to configure SSL certificates without the need for NGINX reloads (thanks to Lua), and since release 0.24.0 the dynamic mode is enabled by default.

    Motivation

    The static configuration implies reloads, something that affects the majority of the users.

    Goals

    • Deprecation of the flag --enable-dynamic-certificates.
    • Cleanup of the codebase.

    Non-Goals

    • Features related to certificate authentication are not changed in any way.

    Proposal

    • Remove static SSL configuration

    Implementation Details/Notes/Constraints

• Deprecate the flag --enable-dynamic-certificates.
• Move the directives ssl_certificate and ssl_certificate_key from each server block to the http section. These settings are required to avoid NGINX errors in the logs.
• Remove any action of the flag --enable-dynamic-certificates.
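
For illustration, in dynamic mode a single fallback certificate pair in the http section is enough to keep NGINX happy, while Lua selects the real certificate per SNI. A sketch (the fake-certificate path is shown as an assumption about what the controller generates at startup):

 http {
     # fallback pair; the actual certificate is picked per SNI in Lua
     ssl_certificate     /etc/ingress-controller/ssl/default-fake-certificate.pem;
     ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;
     # server blocks no longer need their own ssl_certificate directives
 }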

    Drawbacks

    Alternatives

    Keep both implementations


    Availability zone aware routing

    Table of Contents

    Summary

Teach ingress-nginx about the availability zones where endpoints are running. This way the ingress-nginx pod will do its best to proxy to a zone-local endpoint.

    Motivation

When users run their services across multiple availability zones, they usually pay for egress traffic between zones; providers such as GCP and Amazon EC2 charge extra for cross-zone traffic. When picking an endpoint to route a request to, ingress-nginx does not consider whether the endpoint is in a different zone or the same one. That means it is at least equally likely to pick an endpoint from another zone and proxy the request to it. In this situation the response from the endpoint to the ingress-nginx pod is considered inter-zone traffic and usually costs extra money.

At the time of this writing, GCP charges $0.01 per GB of inter-zone egress traffic according to https://cloud.google.com/compute/network-pricing. According to https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/, Amazon charges the same amount as GCP for cross-zone egress traffic.

This can be a lot of money depending on one's traffic. By teaching ingress-nginx about zones we can eliminate, or at least decrease, this cost.

Arguably, intra-zone network latency should also be lower than cross-zone latency.

    Goals

• Given a regional cluster running ingress-nginx, ingress-nginx should make a best effort to pick a zone-local endpoint when proxying
• This should not impact the canary feature
    • ingress-nginx should be able to operate successfully if there are no zonal endpoints

    Non-Goals

    • This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress-nginx pod(s) in that zone
    • This feature will be relying on https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone, it is not this KEP's goal to support other cases

    Proposal

The idea here is to have the controller part of ingress-nginx (1) detect what zone its current pod is running in and (2) detect the zone for every endpoint it knows about. After that, it will post that data as part of the endpoints to Lua land. When picking an endpoint, the Lua balancer will try to pick a zone-local endpoint first, and if there is no zone-local endpoint it will fall back to the current behavior.

    Initially, this feature should be optional since it is going to make it harder to reason about the load balancing and not everyone might want that.

How does the controller know what zone it runs in? We can have the pod spec pass the node name using the downward API as an environment variable. Upon startup, the controller can get node details from the API based on the node name. Once the node details are obtained, we can extract the zone from the failure-domain.beta.kubernetes.io/zone label. Then we can pass that value to Lua land through the NGINX configuration when loading the lua_ingress.lua module in the init_by_lua phase.
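
A minimal sketch of that downward API wiring in the controller's pod spec (the env var name is illustrative):

    env:
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName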

How do we extract zones for endpoints? We can have the controller watch create events on nodes in the entire cluster and, based on that, keep a map of nodes to zones in memory. When we generate the endpoints list, we can access the node name using .subsets.addresses[i].nodeName, fetch the zone from the in-memory map based on it, and store it as a field on the endpoint. This solution assumes the failure-domain.beta.kubernetes.io/zone label does not change until the end of the node's life. Otherwise, we would have to watch update events on the nodes as well, and that would add even more overhead.
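
For reference, the node-to-zone mapping the controller builds is the same one kubectl can read directly (the node name is illustrative):

$ kubectl get node my-node-1 \
      -o jsonpath='{.metadata.labels.failure-domain\.beta\.kubernetes\.io/zone}'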

Alternatively, we can fetch the list of nodes only when there is no entry in memory for the given node name. This is probably a better solution because then we would avoid watching for API changes on node resources. We can eagerly fetch all the nodes and build the node name to zone mapping on start. From there on, it will sync during endpoint building in the main event loop if there's no existing entry for the node of an endpoint. This means an extra API call in case the cluster has expanded.

How do we make sure we do our best to choose a zone-local endpoint? This will be done on the Lua side. For every backend, we will initialize two balancer instances: (1) with all endpoints and (2) with only the endpoints corresponding to the current zone for the backend. Then, given a request, once we choose which backend needs to serve it, we will first try to use the zonal balancer for that backend. If the zonal balancer does not exist (i.e. there's no zonal endpoint) then we will use the general balancer. In case of zonal outages, we assume that the readiness probe will fail and the controller will see no endpoints for the backend, and therefore we will use the general balancer.

We can enable the feature using a ConfigMap setting. Doing it this way makes it easier to roll back in case of a problem.
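
A sketch of what enabling it could look like in the controller's ConfigMap (the key name is hypothetical; the KEP does not fix it):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    data:
      zone-aware-routing: "true"    # hypothetical key name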

    Implementation History

    • initial version of KEP is shipped
    • proposal and implementation details are done

    Drawbacks [optional]

    More load on the Kubernetes API server.


    Title

    This is the title of the KEP. Keep it simple and descriptive. A good title can help communicate what the KEP is and should be considered as part of any review.

    The title should be lowercased and spaces/punctuation should be replaced with -.
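
For example, under this convention a KEP titled "Availability zone aware routing" drafted on 2019-08-15 would live in a file named:

    20190815-availability-zone-aware-routing.md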

    To get started with this template:

    1. Make a copy of this template. Create a copy of this template and name it YYYYMMDD-my-title.md, where YYYYMMDD is the date the KEP was first drafted.
    2. Fill out the "overview" sections. This includes the Summary and Motivation sections. These should be easy if you've preflighted the idea of the KEP in an issue.
    3. Create a PR. Assign it to folks that are sponsoring this process.
4. Create an issue. When filing an enhancement tracking issue, please ensure you complete all fields in the template.
5. Merge early. Avoid getting hung up on specific details and instead aim to get the goal of the KEP merged quickly. The best way to do this is to just start with the "Overview" sections and fill out details incrementally in follow-on PRs. Treat anything marked as provisional as a working document subject to change. Aim for single-topic PRs to keep discussions focused. If you disagree with what is already in a document, open a new PR with suggested changes.

    The canonical place for the latest set of instructions (and the likely source of this file) is here.

    The Metadata section above is intended to support the creation of tooling around the KEP process. This will be a YAML section that is fenced as a code block. See the KEP process for details on each of these items.

    Table of Contents

    A table of contents is helpful for quickly jumping to sections of a KEP and for highlighting any additional information provided beyond the standard KEP template.

Ensure the TOC is wrapped with <!-- toc --><!-- /toc --> tags, and then generate it with hack/update-toc.sh.

    Summary

    The Summary section is incredibly important for producing high quality user-focused documentation such as release notes or a development roadmap. It should be possible to collect this information before implementation begins in order to avoid requiring implementers to split their attention between writing release notes and implementing the feature itself.

    A good summary is probably at least a paragraph in length.

    Motivation

    This section is for explicitly listing the motivation, goals and non-goals of this KEP. Describe why the change is important and the benefits to users. The motivation section can optionally provide links to experience reports to demonstrate the interest in a KEP within the wider Kubernetes community.

    Goals

    List the specific goals of the KEP. How will we know that this has succeeded?

    Non-Goals

    What is out of scope for this KEP? Listing non-goals helps to focus discussion and make progress.

    Proposal

    This is where we get down to the nitty gritty of what the proposal actually is.

    User Stories [optional]

    Detail the things that people will be able to do if this KEP is implemented. Include as much detail as possible so that people can understand the "how" of the system. The goal here is to make this feel real for users without getting bogged down.

    Story 1

    Story 2

    Implementation Details/Notes/Constraints [optional]

What are the caveats to the implementation? What are some important details that didn't come across above? Go into as much detail as necessary here. This might be a good place to talk about core concepts and how they relate.

    Risks and Mitigations

What are the risks of this proposal, and how do we mitigate them? Think broadly. For example, consider both security and how this will impact the larger Kubernetes ecosystem.

    How will security be reviewed and by whom? How will UX be reviewed and by whom?

Consider including folks that also work outside the project.

    Design Details

    Test Plan

    Note: Section not required until targeted at a release.

    Consider the following in developing a test plan for this enhancement:

    • Will there be e2e and integration tests, in addition to unit tests?
    • How will it be tested in isolation vs with other components?

    No need to outline all of the test cases, just the general strategy. Anything that would count as tricky in the implementation and anything particularly challenging to test should be called out.

    All code is expected to have adequate tests (eventually with coverage expectations). Please adhere to the Kubernetes testing guidelines when drafting this test plan.

    Removing a deprecated flag

    • Announce deprecation and support policy of the existing flag
• Two versions have passed since introducing the functionality that deprecates the flag (to address version skew)
    • Address feedback on usage/changed behavior, provided on GitHub issues
    • Deprecate the flag

    Implementation History

Major milestones in the life cycle of a KEP should be tracked in Implementation History. Major milestones might include:

    • the Summary and Motivation sections being merged signaling acceptance
    • the Proposal section being merged signaling agreement on a proposed design
    • the date implementation started
    • the first Kubernetes release where an initial version of the KEP was available
    • the version of Kubernetes where the KEP graduated to general availability
    • when the KEP was retired or superseded

    Drawbacks [optional]

Why should this KEP not be implemented?

    Alternatives [optional]

    Similar to the Drawbacks section the Alternatives section is used to highlight and record other possible approaches to delivering the value proposed by a KEP.


    Kubernetes Enhancement Proposals (KEPs)

    A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project. For this reason, the ingress-nginx project is adopting it.

    Quick start for the KEP process

    Follow the process outlined in the KEP template

    Do I have to use the KEP process?

    No... but we hope that you will. Over time having a rich set of KEPs in one place will make it easier for people to track what is going on in the community and find a structured historic record.

KEPs are only required when the changes are wide-ranging and impact most of the project.

    Why would I want to use the KEP process?

Our aim with KEPs is to clearly communicate new efforts to the Kubernetes contributor community. As such, we want to build a well-curated set of clear proposals in a common format with useful metadata.

    Benefits to KEP users (in the limit):

• Exposure on a Kubernetes-blessed web site that is findable via web search engines.
    • Cross indexing of KEPs so that users can find connections and the current status of any KEP.
    • A clear process with approvers and reviewers for making decisions. This will lead to more structured decisions that stick as there is a discoverable record around the decisions.

    We are inspired by IETF RFCs, Python PEPs, and Rust RFCs.


    Prerequisites

    Many of the examples in this directory have common prerequisites.

    TLS certificates

Unless otherwise mentioned, the TLS secret used in examples is a 2048-bit RSA key/cert pair with an arbitrarily chosen hostname, created as follows:

    $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
     Generating a 2048 bit RSA private key
     ................+++
     ................+++
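
The resulting key/cert pair is then typically loaded into the cluster as a TLS secret (the secret name is illustrative):

$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt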
    diff --git a/examples/affinity/cookie/index.html b/examples/affinity/cookie/index.html
    index 0e5d73e1c..e610f96b6 100644
    --- a/examples/affinity/cookie/index.html
    +++ b/examples/affinity/cookie/index.html
    @@ -1,4 +1,4 @@

    Sticky sessions

    This example demonstrates how to achieve session affinity using cookies.

    Deployment

    Session affinity can be configured using the following annotations:

Name | Description | Value
nginx.ingress.kubernetes.io/affinity | Type of the affinity; set this to cookie to enable session affinity | string (NGINX only supports cookie)
nginx.ingress.kubernetes.io/affinity-mode | The affinity mode defines how sticky a session is. Use balanced to redistribute some sessions when scaling pods, or persistent for maximum stickiness. | balanced (default) or persistent
nginx.ingress.kubernetes.io/session-cookie-name | Name of the cookie that will be created | string (defaults to INGRESSCOOKIE)
nginx.ingress.kubernetes.io/session-cookie-path | Path that will be set on the cookie (required if your Ingress paths use regular expressions) | string (defaults to the currently matched path)
nginx.ingress.kubernetes.io/session-cookie-samesite | SameSite attribute to apply to the cookie | browser-accepted values are None, Lax, and Strict
nginx.ingress.kubernetes.io/session-cookie-conditional-samesite-none | Will omit the SameSite=None attribute for older browsers which reject the more-recently defined SameSite=None value | "true" or "false"
nginx.ingress.kubernetes.io/session-cookie-max-age | Time until the cookie expires; corresponds to the Max-Age cookie directive | number of seconds
nginx.ingress.kubernetes.io/session-cookie-expires | Legacy version of the previous annotation for compatibility with older browsers; generates an Expires cookie directive by adding the seconds to the current date | number of seconds
nginx.ingress.kubernetes.io/session-cookie-change-on-failure | When set to false, nginx ingress will send requests to the upstream pointed to by the sticky cookie even if a previous attempt failed. When set to true and a previous attempt failed, the sticky cookie will be changed to point to another upstream. | true or false (defaults to false)

    You can create the example Ingress to test this:

    kubectl create -f ingress.yaml
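
A minimal ingress.yaml for this test might look like the following sketch (host and backend names are illustrative; the name nginx-test matches the validation step below):

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: nginx-test
      annotations:
        nginx.ingress.kubernetes.io/affinity: "cookie"
        nginx.ingress.kubernetes.io/session-cookie-name: "INGRESSCOOKIE"
        nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    spec:
      rules:
      - host: stickyingress.example.com
        http:
          paths:
          - path: /
            backend:
              serviceName: http-svc
              servicePort: 80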
     

    Validation

    You can confirm that the Ingress works:

    $ kubectl describe ing nginx-test
     Name:           nginx-test
     Namespace:      default
    diff --git a/examples/auth/basic/index.html b/examples/auth/basic/index.html
    index 90f62d4f1..f8d43602e 100644
    --- a/examples/auth/basic/index.html
    +++ b/examples/auth/basic/index.html
    @@ -1,4 +1,4 @@

    Basic Authentication

This example shows how to add authentication to an Ingress rule using a secret that contains a file generated with htpasswd. It's important that the generated file is named auth (actually, that the secret has a key data.auth); otherwise the ingress controller returns a 503.

    $ htpasswd -c auth foo
     New password: <bar>
     New password:
     Re-type new password:
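
The generated file is then stored in a generic secret whose key is the file name (the secret name is illustrative):

$ kubectl create secret generic basic-auth --from-file=auth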
    diff --git a/examples/auth/client-certs/index.html b/examples/auth/client-certs/index.html
    index 7c057792b..9e4f6ba54 100644
    --- a/examples/auth/client-certs/index.html
    +++ b/examples/auth/client-certs/index.html
    @@ -1,4 +1,4 @@

    Client Certificate Authentication

It is possible to enable Client-Certificate Authentication by adding additional annotations to your Ingress resource. Before getting started you must have the following certificates set up:

1. CA certificate and key (intermediate certs need to be in the CA)
2. Server certificate (signed by the CA) and key (the CN should equal the hostname you will use)
3. Client certificate (signed by the CA) and key

For more details on the generation process, check out the Prerequisite docs.

You can have as many certificates as you want. If they're in the binary DER format, you can convert them as follows:

    openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem
     

Then, you can concatenate them all into a single file named 'ca.crt', as follows:

    cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt
     

Note: Make sure that the key size is greater than 1024 bits and the hashing algorithm (digest) is something better than MD5 for each certificate generated. Otherwise you will receive an error.

    Creating Certificate Secrets

    There are many different ways of configuring your secrets to enable Client-Certificate Authentication to work properly.

1. You can create a secret containing just the CA certificate and another secret containing the server certificate, which is signed by the CA.

      kubectl create secret generic ca-secret --from-file=ca.crt=ca.crt
       kubectl create secret generic tls-secret --from-file=tls.crt=server.crt --from-file=tls.key=server.key
      diff --git a/examples/auth/external-auth/index.html b/examples/auth/external-auth/index.html
      index 7a116d571..f2bb4f62c 100644
      --- a/examples/auth/external-auth/index.html
      +++ b/examples/auth/external-auth/index.html
      @@ -1,4 +1,4 @@

      External Basic Authentication

      Example 1:

Use an external service (Basic Auth) located at https://httpbin.org

      $ kubectl create -f ingress.yaml
       ingress "external-auth" created
       
       $ kubectl get ing external-auth
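
The ingress.yaml in this example relies on the auth-url annotation pointing at the external service; a sketch of the relevant metadata (the auth path is illustrative):

    metadata:
      name: external-auth
      annotations:
        nginx.ingress.kubernetes.io/auth-url: "https://httpbin.org/basic-auth/user/passwd"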
      diff --git a/examples/auth/oauth-external-auth/index.html b/examples/auth/oauth-external-auth/index.html
      index 66794f427..147177ebd 100644
      --- a/examples/auth/oauth-external-auth/index.html
      +++ b/examples/auth/oauth-external-auth/index.html
      @@ -1,4 +1,4 @@

      External OAUTH Authentication

      Overview

      The auth-url and auth-signin annotations allow you to use an external authentication provider to protect your Ingress resources.

      Important

This annotation requires nginx-ingress-controller v0.9.0 or greater.

      Key Detail

      This functionality is enabled by deploying multiple Ingress objects for a single host. One Ingress object has no special annotations and handles authentication.

Other Ingress objects can then be annotated in such a way that requires the user to authenticate against the first Ingress's endpoint, and can redirect 401s to the same endpoint.
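
Concretely, a protected Ingress carries annotations along these lines, following the oauth2-proxy pattern (the URLs are illustrative):

    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"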

      Sample:

      ...
       metadata:
         name: application
         annotations:
      diff --git a/examples/customization/configuration-snippets/index.html b/examples/customization/configuration-snippets/index.html
      index 55827448e..957a02c88 100644
      --- a/examples/customization/configuration-snippets/index.html
      +++ b/examples/customization/configuration-snippets/index.html
      @@ -1,4 +1,4 @@

      Configuration Snippets

      Ingress

      The Ingress in this example adds a custom header to Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at this example.

      $ kubectl apply -f ingress.yaml
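
A sketch of the annotation that ingress.yaml can carry (the header name and value are illustrative):

    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "Request-Id: $req_id";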
       

      Test

      Check if the contents of the annotation are present in the nginx.conf file using: kubectl exec ingress-nginx-controller-873061567-4n3k2 -n kube-system -- cat /etc/nginx/nginx.conf

      Custom Configuration

Using a ConfigMap it is possible to customize the NGINX configuration.

      For example, if we want to change the timeouts we need to create a ConfigMap:

      $ cat configmap.yaml
       apiVersion: v1
       data:
         proxy-connect-timeout: "10"
      diff --git a/examples/customization/custom-errors/index.html b/examples/customization/custom-errors/index.html
      index d7668870b..bc99bec8d 100644
      --- a/examples/customization/custom-errors/index.html
      +++ b/examples/customization/custom-errors/index.html
      @@ -1,4 +1,4 @@

      Custom Errors

      This example demonstrates how to use a custom backend to render custom error pages.

      Customized default backend

      First, create the custom default-backend. It will be used by the Ingress controller later on.

      $ kubectl create -f custom-default-backend.yaml
       service "nginx-errors" created
       deployment.apps "nginx-errors" created
       

      This should have created a Deployment and a Service with the name nginx-errors.

      $ kubectl get deploy,svc
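
Later in the example, the controller is pointed at this backend and told which status codes to intercept. In brief, that involves the controller flag and ConfigMap key below (namespace and status codes are illustrative):

    --default-backend-service=default/nginx-errors    (controller CLI flag)
    custom-http-errors: "404,503"                     (ConfigMap key)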
      diff --git a/examples/customization/custom-headers/index.html b/examples/customization/custom-headers/index.html
      index 34712f6c7..78b42ff3a 100644
      --- a/examples/customization/custom-headers/index.html
      +++ b/examples/customization/custom-headers/index.html
      @@ -1,4 +1,4 @@

      Custom Headers

      This example demonstrates configuration of the nginx ingress controller via a ConfigMap to pass a custom list of headers to the upstream server.

      custom-headers.yaml defines a ConfigMap in the ingress-nginx namespace named custom-headers, holding several custom X-prefixed HTTP headers.

      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/custom-headers/custom-headers.yaml
       

      configmap.yaml defines a ConfigMap in the ingress-nginx namespace named ingress-nginx-controller. This controls the global configuration of the ingress controller, and already exists in a standard installation. The key proxy-set-headers is set to cite the previously-created ingress-nginx/custom-headers ConfigMap.
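
A sketch of that configmap.yaml (proxy-set-headers names the namespace/name of the headers ConfigMap):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: ingress-nginx-controller
      namespace: ingress-nginx
    data:
      proxy-set-headers: "ingress-nginx/custom-headers"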

      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/custom-headers/configmap.yaml
       

      The nginx ingress controller will read the ingress-nginx/ingress-nginx-controller ConfigMap, find the proxy-set-headers key, read HTTP headers from the ingress-nginx/custom-headers ConfigMap, and include those HTTP headers in all requests flowing from nginx to the backends.

      Test

      Check the contents of the ConfigMaps are present in the nginx.conf file using: kubectl exec ingress-nginx-controller-873061567-4n3k2 -n ingress-nginx -- cat /etc/nginx/nginx.conf

      External authentication, authentication service response headers propagation

This example demonstrates propagation of selected authentication service response headers to the backend service.

      Sample configuration includes:

• Sample authentication service producing several response headers
  • Authentication logic is based on an HTTP header: requests with the header User containing the string internal are considered authenticated
  • After successful authentication, the service generates the response headers UserID and UserRole (forwarded to the backend as shown in the sketch after this list)
• Sample echo service displaying header information
• Two Ingress objects pointing to the echo service
  • Public, which allows access from unauthenticated users
  • Private, which allows access from authenticated users only
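
The private Ingress selects the forwarded headers with the auth-response-headers annotation; a sketch (the service URL is illustrative):

    nginx.ingress.kubernetes.io/auth-url: "http://demo-auth-service.default.svc.cluster.local"
    nginx.ingress.kubernetes.io/auth-response-headers: "UserID, UserRole"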

      You can deploy the controller as follows:

      $ kubectl create -f deploy/
       deployment "demo-auth-service" created
       service "demo-auth-service" created
       ingress "demo-auth-service" created
      diff --git a/examples/customization/ssl-dh-param/index.html b/examples/customization/ssl-dh-param/index.html
      index 752ef83cc..d655edc4b 100644
      --- a/examples/customization/ssl-dh-param/index.html
      +++ b/examples/customization/ssl-dh-param/index.html
      @@ -1,4 +1,4 @@

      Custom DH parameters for perfect forward secrecy

This example aims to demonstrate the deployment of an nginx ingress controller and the use of a ConfigMap to configure a custom Diffie-Hellman parameters file to help with "Perfect Forward Secrecy".

      Custom configuration

      $ cat configmap.yaml
       apiVersion: v1
       data:
         ssl-dh-param: "ingress-nginx/lb-dhparam"
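
The value ingress-nginx/lb-dhparam refers to the namespace/name of a secret holding the generated parameters. One way to create it (a sketch; the dhparam.pem key name is an assumption):

$ openssl dhparam -out dhparam.pem 4096
$ kubectl create secret generic lb-dhparam --from-file=dhparam.pem -n ingress-nginx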
      diff --git a/examples/customization/sysctl/index.html b/examples/customization/sysctl/index.html
      index 748ae8dda..cfec7dce0 100644
      --- a/examples/customization/sysctl/index.html
      +++ b/examples/customization/sysctl/index.html
      @@ -1,4 +1,4 @@

      Sysctl tuning

This example aims to demonstrate the use of an init container to adjust sysctl default values using kubectl patch:

      kubectl patch deployment -n ingress-nginx nginx-ingress-controller \
           --patch="$(curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/sysctl/patch.json)"
       

      Changes:

      • Backlog Queue setting net.core.somaxconn from 128 to 32768
      • Ephemeral Ports setting net.ipv4.ip_local_port_range from 32768 60999 to 1024 65000
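
Inside the init container, the patch boils down to two sysctl writes (a sketch of the commands it runs):

    sysctl -w net.core.somaxconn=32768
    sysctl -w net.ipv4.ip_local_port_range="1024 65000"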

A post on the NGINX blog explains the reasoning behind these changes.

      Docker registry

This example demonstrates how to deploy a Docker registry in the cluster and configure Ingress to enable access from the Internet.

      Deployment

      First we deploy the docker registry in the cluster:

      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/deployment.yaml
       

      Important

      DO NOT RUN THIS IN PRODUCTION

This deployment uses an emptyDir volume, which means the contents of the registry will be deleted when the pod dies.

      The next required step is creation of the ingress rules. To do this we have two options: with and without TLS

      Without TLS

      Download and edit the yaml deployment replacing registry.<your domain> with a valid DNS name pointing to the ingress controller:

      wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-without-tls.yaml
       

      Important

Running a Docker registry without TLS requires configuring the local Docker daemon with the insecure registry flag.

      Please check deploy a plain http registry

      With TLS

      Download and edit the yaml deployment replacing registry.<your domain> with a valid DNS name pointing to the ingress controller:

      wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-with-tls.yaml
       

Deploy kube-lego to use Let's Encrypt certificates, or edit the ingress rule to use a secret with an existing SSL certificate.

      Testing

To test that the registry is working correctly, we download a known image from Docker Hub, create a tag pointing to the new registry, and upload the image:

      docker pull ubuntu:16.04
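
The pull is followed by the tag and push steps described above (a sketch using the same placeholder domain):

    docker tag ubuntu:16.04 registry.<your domain>/ubuntu:16.04
    docker push registry.<your domain>/ubuntu:16.04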
      diff --git a/examples/grpc/index.html b/examples/grpc/index.html
      index fb01f2be7..17ef9e748 100644
      --- a/examples/grpc/index.html
      +++ b/examples/grpc/index.html
      @@ -1,4 +1,4 @@

      gRPC

      This example demonstrates how to route traffic to a gRPC service through the nginx controller.

      Prerequisites

1. You have a Kubernetes cluster running.
2. You have a domain name such as example.com that is configured to route traffic to the ingress controller. Replace references to fortune-teller.stack.build (the domain name used in this example) with your own domain name (you're also responsible for provisioning an SSL certificate for the ingress).
3. You have the nginx-ingress controller installed in typical fashion (must be at least quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0 for gRPC support).
      4. You have a backend application running a gRPC server and listening for TCP traffic. If you prefer, you can use the fortune-teller application provided here as an example.
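
The piece that switches the upstream protocol is the backend-protocol annotation on the Ingress (a sketch):

    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"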

      Step 1: kubernetes Deployment

      $ kubectl create -f app.yaml
       

This is a standard Kubernetes Deployment object. It is running a gRPC service listening on port 50051.

The sample application fortune-teller-app is a gRPC server implemented in Go. Here's the stripped-down implementation:

      func main() {
           grpcServer := grpc.NewServer()
           fortune.RegisterFortuneTellerServer(grpcServer, &FortuneTeller{})
      diff --git a/examples/index.html b/examples/index.html
      index 3cc522e5e..908c90fc5 100644
      --- a/examples/index.html
      +++ b/examples/index.html
      @@ -1,4 +1,4 @@

      Ingress examples

      This directory contains a catalog of examples on how to run, configure and scale Ingress.
      Please review the prerequisites before trying them.

Category | Name | Description | Complexity Level
Apps | Docker Registry | TODO | TODO
Auth | Basic authentication | password protect your website | Intermediate
Auth | Client certificate authentication | secure your website with client certificate authentication | Intermediate
Auth | External authentication plugin | defer to an external authentication service | Intermediate
Auth | OAuth external auth | TODO | TODO
Customization | Configuration snippets | customize nginx location configuration using annotations | Advanced
Customization | Custom configuration | TODO | TODO
Customization | Custom DH parameters for perfect forward secrecy | TODO | TODO
Customization | Custom errors | serve custom error pages from the default backend | Intermediate
Customization | Custom headers | set custom headers before sending traffic to backends | Advanced
Customization | External authentication with response header propagation | TODO | TODO
Customization | Sysctl tuning | TODO | TODO
Features | Rewrite | TODO | TODO
Features | Session stickiness | route requests consistently to the same endpoint | Advanced
Scaling | Static IP | a single ingress gets a single static IP | Intermediate
TLS | Multi TLS certificate termination | TODO | TODO
TLS | TLS termination | TODO | TODO


      Multi TLS certificate termination

      This example uses 2 different certificates to terminate SSL for 2 hostnames.

      1. Deploy the controller by creating the rc in the parent dir
      2. Create tls secrets for foo.bar.com and bar.baz.com as indicated in the yaml
      3. Create multi-tls.yaml
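
For step 2, one way to create such secrets from existing key/cert pairs (the names are illustrative):

$ kubectl create secret tls foo-bar-com-tls --key foo.bar.com.key --cert foo.bar.com.crt
$ kubectl create secret tls bar-baz-com-tls --key bar.baz.com.key --cert bar.baz.com.crt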

      This should generate a segment like:

      $ kubectl exec -it nginx-ingress-controller-6vwd1 -- cat /etc/nginx/nginx.conf | grep "foo.bar.com" -B 7 -A 35
           server {
               listen 80;
               listen 443 ssl http2;
      diff --git a/examples/psp/index.html b/examples/psp/index.html
      index 5189c2f4c..c82e47219 100644
      --- a/examples/psp/index.html
      +++ b/examples/psp/index.html
      @@ -1,5 +1,5 @@

      Pod Security Policy (PSP)

In most clusters today, by default, all resources (e.g. Deployments and ReplicaSets) have permissions to create pods. Kubernetes however provides a more fine-grained authorization policy called Pod Security Policy (PSP).

PSP allows the cluster owner to define the permissions of each object, for example creating a pod. If you have PSP enabled on the cluster, and you deploy ingress-nginx, you will need to provide the Deployment with the permissions to create pods.

      Before applying any objects, first apply the PSP permissions by running:

      kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/psp/psp.yaml

Note: PSP permissions must be granted before the creation of the Deployment and the ReplicaSet.


      + Rewrite - NGINX Ingress Controller      

      Rewrite

This example demonstrates how to use the rewrite annotations.

      Prerequisites

      You will need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, and that you have an ingress controller running in your cluster.

      Deployment

      Rewriting can be controlled using the following annotations:

Name | Description | Values
nginx.ingress.kubernetes.io/rewrite-target | Target URI where the traffic must be redirected | string
nginx.ingress.kubernetes.io/ssl-redirect | Indicates if the location section is accessible over SSL only (defaults to true when the Ingress contains a certificate) | bool
nginx.ingress.kubernetes.io/force-ssl-redirect | Forces the redirection to HTTPS even if the Ingress is not TLS-enabled | bool
nginx.ingress.kubernetes.io/app-root | Defines the application root that the controller must redirect to if it's in the '/' context | string
nginx.ingress.kubernetes.io/use-regex | Indicates if the paths defined on an Ingress use regular expressions | bool

      Examples

      Rewrite Target

      Attention

Starting in version 0.22.0, Ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must be explicitly defined in a capture group.

      Note

Captured groups are saved in numbered placeholders, in order, in the form $1, $2 ... $n. These placeholders can be used as parameters in the rewrite-target annotation.

      Create an Ingress rule with a rewrite annotation:

      $ echo '
       apiVersion: networking.k8s.io/v1beta1
       kind: Ingress
       metadata:
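
For example, a complete rule of this shape, using a capture group together with rewrite-target, might look like the following; the host, service name, and port are placeholders:

$ echo '
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: rewrite
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - path: /something(/|$)(.*)
        backend:
          serviceName: http-svc
          servicePort: 80
' | kubectl create -f -

With this rule, a request for /something/new is rewritten to /new before it reaches the backend: $1 captures the slash (or end of string) and $2 captures everything after /something/.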
      diff --git a/examples/static-ip/index.html b/examples/static-ip/index.html
      index 94c569260..a27291f17 100644
      --- a/examples/static-ip/index.html
      +++ b/examples/static-ip/index.html
      @@ -1,4 +1,4 @@
      - Static IPs - NGINX Ingress Controller      

      + Static IPs - NGINX Ingress Controller      

      Static IPs

This example demonstrates how to assign a static IP to an Ingress through the Nginx controller.

      Prerequisites

      You need a TLS cert and a test HTTP service for this example. You will also need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, and that you have an ingress controller running in your cluster.

      Acquiring an IP

Since instances of the nginx controller actually run on nodes in your cluster, by default nginx Ingresses will only get static IPs if your cloud provider supports static IP assignments to nodes. On GKE/GCE, for example, even though nodes get static IPs, the IPs are not retained across upgrades.

      To acquire a static IP for the nginx ingress controller, simply put it behind a Service of Type=LoadBalancer.

First, create a LoadBalancer Service and wait for it to acquire an IP:

      $ kubectl create -f static-ip-svc.yaml
       service "nginx-ingress-lb" created
       
       $ kubectl get svc nginx-ingress-lb
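
A sketch of what static-ip-svc.yaml might contain; the selector labels are an assumption and must match your controller pods:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
spec:
  type: LoadBalancer
  # On GCE/GKE, a pre-allocated regional address can be pinned here:
  # loadBalancerIP: 203.0.113.10
  selector:
    app: nginx-ingress-controller   # must match the controller pods' labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443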
      diff --git a/examples/tls-termination/index.html b/examples/tls-termination/index.html
      index 9afff511d..f7563ea93 100644
      --- a/examples/tls-termination/index.html
      +++ b/examples/tls-termination/index.html
      @@ -1,4 +1,4 @@
      - TLS termination - NGINX Ingress Controller      

      + TLS termination - NGINX Ingress Controller      

      TLS termination

      This example demonstrates how to terminate TLS through the nginx Ingress controller.

      Prerequisites

      You need a TLS cert and a test HTTP service for this example.
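
If you don't already have a certificate, a self-signed one and the corresponding Secret can be created along these lines; the file names, secret name, and CN are placeholders:

$ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 \
    -keyout tls.key -out tls.crt -subj "/CN=nginx.example.com"
$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt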

      Deployment

Create an ingress.yaml file.

      apiVersion: networking.k8s.io/v1beta1
       kind: Ingress
       metadata:
         name: nginx-test
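
A complete manifest of this shape might look like the following; the host, secret name, and backend are placeholders:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-test
spec:
  tls:
  - hosts:
    - nginx.example.com
    secretName: tls-secret      # the Secret created above
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80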
      diff --git a/how-it-works/index.html b/how-it-works/index.html
      index fe86d7fad..439ca1d04 100644
      --- a/how-it-works/index.html
      +++ b/how-it-works/index.html
      @@ -1,4 +1,4 @@
      - How it works - NGINX Ingress Controller      


      How it works

      The objective of this document is to explain how the NGINX Ingress controller works, in particular how the NGINX model is built and why we need one.

      NGINX configuration

The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. It is important to note, though, that we don't reload Nginx on changes that impact only upstream configuration (i.e., Endpoint changes when you deploy your app); we use lua-nginx-module to achieve this. Check below to learn more about how it's done.

      NGINX model

Usually, a Kubernetes controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. For this purpose, we build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and ConfigMaps, to generate a point-in-time configuration file that reflects the state of the cluster.

To get these objects from the cluster, we use Kubernetes Informers, in particular FilteredSharedInformer. These informers allow reacting to changes using callbacks when a new object is added, modified, or removed. Unfortunately, there is no way to know whether a particular change is going to affect the final configuration file. Therefore, on every change we have to rebuild a new model from scratch based on the state of the cluster and compare it to the current model. If the new model equals the current one, we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so, we send the new list of Endpoints to a Lua handler running inside Nginx using an HTTP POST request and, again, avoid generating a new NGINX configuration and triggering a reload. If the difference between the running and new models is about more than just Endpoints, we create a new NGINX configuration based on the new model, replace the current model, and trigger a reload.

      One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions.

      The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.
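
You can inspect the rendered result of that template on a running controller pod; the pod name below is illustrative:

$ kubectl exec -it nginx-ingress-controller-6vwd1 -- cat /etc/nginx/nginx.conf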

      Building the NGINX model

Building a model is an expensive operation; for this reason, the use of the synchronization loop is a must. By using a work queue it is possible not to lose changes and to remove the use of sync.Mutex to force a single execution of the sync loop. Additionally, it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller; this is one of the reasons for the work queue.

      Operations to build the model:

• Order Ingress rules by the CreationTimestamp field, i.e., old rules first.
• If the same path for the same host is defined in more than one Ingress, the oldest rule wins.
• If more than one Ingress contains a TLS section for the same host, the oldest rule wins.
• If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins.
• Create a list of NGINX Servers (per hostname).
• Create a list of NGINX Upstreams.
• If multiple Ingresses define different paths for the same host, the ingress controller merges the definitions (see the sketch after this list).
• Annotations are applied to all the paths in the Ingress.
• Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.
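
For example, these two Ingresses for the same host are merged into a single NGINX server block with two locations, and the ssl-redirect annotation applies only to the paths of the Ingress that declares it; all names here are placeholders:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-a
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /a
        backend:
          serviceName: svc-a
          servicePort: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-b
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /b
        backend:
          serviceName: svc-b
          servicePort: 80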

      When a reload is required

      The next list describes the scenarios when a reload is required:

• A new Ingress resource is created.
• A TLS section is added to an existing Ingress.
• A change in Ingress annotations that impacts more than just upstream configuration (for instance, the load-balance annotation does not require a reload).
• A path is added to or removed from an Ingress.
• An Ingress, Service, or Secret is removed.
• A missing object referenced by the Ingress becomes available, like a Service or Secret.
• A Secret is updated.

      Avoiding reloads

In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point would make no sense. This could change only if NGINX changed the way new configurations are read so that, basically, new configurations did not replace worker processes.

      Avoiding reloads on Endpoints changes

On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then, for every request, Lua code running in the balancer_by_lua context determines which endpoints it should choose an upstream peer from and applies the configured load-balancing algorithm to choose the peer. Nginx then takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this also covers annotation changes that affect only upstream configuration in Nginx.

In relatively big clusters with frequently deployed apps, this feature saves a significant number of Nginx reloads, which could otherwise affect response latency, load-balancing quality (after every reload, Nginx resets the state of load balancing), and so on.

      Avoiding outage from wrong configuration

Because the ingress controller works using the synchronization loop pattern, it applies the configuration for all matching objects. If some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, does not reload, and hence no further Ingress changes will be taken into account.

To prevent this situation, the nginx ingress controller optionally exposes a validating admission webhook server to ensure the validity of incoming Ingress objects. This webhook appends the incoming Ingress objects to the list of Ingresses, generates the configuration, and calls nginx to ensure the configuration has no syntax errors.
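
As a sketch of that failure mode, an Ingress carrying a deliberately broken snippet like the one below would be rejected at admission time instead of poisoning the running configuration; the names are placeholders, and the exact rejection message depends on nginx:

$ echo '
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: broken
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      this_is_not_a_valid_nginx_directive;
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
' | kubectl apply -f -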


      Welcome

      This is the documentation for the NGINX Ingress Controller.

      It is built around the Kubernetes Ingress resource, using a ConfigMap to store the NGINX configuration.

      Learn more about using Ingress on k8s.io.

      Getting Started

      See Deployment for a whirlwind tour that will get you started.