diff --git a/deploy/index.html b/deploy/index.html index 092fc3690..84261ac5c 100644 --- a/deploy/index.html +++ b/deploy/index.html @@ -1481,7 +1481,7 @@ Then execute:

This example creates an ELB with just two listeners, one on port 80 and another on port 443
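Such an ELB is typically produced by a `LoadBalancer` Service exposing those two ports. A minimal sketch (the Service name, namespace, and selector labels are illustrative, not taken from this deploy manifest):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx        # illustrative name
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx       # must match the controller pods' labels
  ports:
  - name: http               # first ELB listener
    port: 80
    targetPort: 80
  - name: https              # second ELB listener
    port: 443
    targetPort: 443
```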

-

Listeners

+

Listeners

If the ingress controller uses RBAC, run:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/patch-service-with-rbac.yaml
 
diff --git a/development/index.html b/development/index.html index db1cfd409..62def534d 100644 --- a/development/index.html +++ b/development/index.html @@ -1200,7 +1200,10 @@ It includes how to build, test, and release ingress controllers.

Quick Start

Initial developer environment build

-

Prequisites: Minikube must be installed; See releases for installation instructions.

+
+

Prerequisites: Minikube must be installed. +See releases for installation instructions.

+

If you are using MacOS and deploying to minikube, the following command will build the local nginx controller container image and deploy the ingress controller onto a minikube cluster with RBAC enabled in the namespace ingress-nginx:

$ make dev-env
 
diff --git a/examples/PREREQUISITES/index.html b/examples/PREREQUISITES/index.html index 98ee56a4c..477bfad16 100644 --- a/examples/PREREQUISITES/index.html +++ b/examples/PREREQUISITES/index.html @@ -1165,7 +1165,7 @@ key/cert pair with an arbitrarily chosen hostname, created as follows

CA Authentication

You can act as your very own CA, or use an existing one. As a learning exercise, we're going to generate our own CA and also generate a client certificate.

-

These instructions are based on CoreOS OpenSSL instructions

+

These instructions are based on the CoreOS OpenSSL instructions. See the live doc.

Generating a CA

First of all, you have to generate a CA. This is the one that will sign your client certificates. In real-world production, you may encounter CAs with intermediate certificates, as in the following:
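The generation steps can be sketched with openssl as follows (the subject names are illustrative, and a real setup would use proper subjects, key sizes, and key handling):

```shell
# Generate the CA private key and a self-signed CA certificate
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 365 -out ca.crt -subj "/CN=example-ca"

# Generate a client key and a certificate signing request (CSR)
openssl genrsa -out client.key 2048
openssl req -new -key client.key -out client.csr -subj "/CN=example-client"

# Sign the client CSR with the CA, producing the client certificate
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 365
```

The resulting `ca.crt` is what goes into the CA Authentication Secret, while `client.crt`/`client.key` are presented by the client.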

@@ -1243,7 +1243,7 @@ the TLS Auth directive:

-

Note: You can also generate the CA Authentication Secret along with the TLS Secret by using:

+

Note: You can also generate the CA Authentication Secret along with the TLS Secret by using:

$ kubectl create secret generic caingress --namespace=default --from-file=ca.crt=<ca.crt> --from-file=tls.crt=<tls.crt> --from-file=tls.key=<tls.key>
 
diff --git a/examples/customization/custom-errors/rc-custom-errors.yaml b/examples/customization/custom-errors/rc-custom-errors.yaml index c400e5fee..88f6ce60a 100644 --- a/examples/customization/custom-errors/rc-custom-errors.yaml +++ b/examples/customization/custom-errors/rc-custom-errors.yaml @@ -16,7 +16,7 @@ spec: spec: terminationGracePeriodSeconds: 60 containers: - - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.13.0 + - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0 name: nginx-ingress-lb imagePullPolicy: Always readinessProbe: diff --git a/examples/docker-registry/README/index.html b/examples/docker-registry/README/index.html index 09fb98991..6412cd484 100644 --- a/examples/docker-registry/README/index.html +++ b/examples/docker-registry/README/index.html @@ -1125,8 +1125,11 @@ -

Important: DO NOT RUN THIS IN PRODUCTION. -This deployment uses emptyDir in the volumeMount which means the contents of the registry will be deleted when the pod dies.

+
+

Important

+

DO NOT RUN THIS IN PRODUCTION

+

This deployment uses emptyDir in the volumeMount which means the contents of the registry will be deleted when the pod dies.

+
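If you do want registry contents to survive pod restarts, one option is to back the volume with a PersistentVolumeClaim instead of emptyDir. A minimal sketch, assuming the cluster has a default StorageClass (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: docker-registry-storage   # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```

The deployment's volume would then reference the claim via `persistentVolumeClaim.claimName` rather than `emptyDir`.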

The next required step is creation of the ingress rules. To do this we have two options: with and without TLS

Without TLS

Download and edit the yaml deployment replacing registry.<your domain> with a valid DNS name pointing to the ingress controller:

@@ -1134,8 +1137,11 @@ This deployment uses emptyDir in the -

Important: running a docker registry without TLS requires we configure our local docker daemon with the insecure registry flag. -Please check deploy a plain http registry

+
+

Important

+

Running a docker registry without TLS requires that we configure our local docker daemon with the insecure registry flag.

+

Please check deploy a plain http registry

+
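As a sketch of that daemon configuration, the registry host (using the same `registry.<your domain>` placeholder as above) is listed under `insecure-registries` in `/etc/docker/daemon.json`, after which the daemon must be restarted:

```json
{
  "insecure-registries": ["registry.<your domain>"]
}
```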

With TLS

Download and edit the yaml deployment replacing registry.<your domain> with a valid DNS name pointing to the ingress controller:

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-with-tls.yaml
diff --git a/examples/external-auth/README/index.html b/examples/external-auth/README/index.html
index 54a3ae666..7a972bea4 100644
--- a/examples/external-auth/README/index.html
+++ b/examples/external-auth/README/index.html
@@ -1121,7 +1121,10 @@
 

Overview

The auth-url and auth-signin annotations allow you to use an external authentication provider to protect your Ingress resources.

-

(Note, this annotation requires nginx-ingress-controller v0.9.0 or greater.)

+
+

Important

+

This annotation requires nginx-ingress-controller v0.9.0 or greater.

+
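A minimal sketch of an Ingress protected with these annotations, assuming an oauth2-proxy-style service answering on the `/oauth2` paths (the host, backend service, and URLs are illustrative):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-auth
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
```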

Key Detail

This functionality is enabled by deploying multiple Ingress objects for a single host. One Ingress object has no special annotations and handles authentication.

@@ -1151,7 +1154,7 @@ into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using g
-
Create a custom Github OAuth application https://github.com/settings/applications/new
+
Create a custom Github OAuth application

Register OAuth2 Application

    diff --git a/examples/static-ip/README/index.html b/examples/static-ip/README/index.html index 820c4635d..9e3c48de3 100644 --- a/examples/static-ip/README/index.html +++ b/examples/static-ip/README/index.html @@ -1187,9 +1187,11 @@ already has it set to "nginx-ingress-lb").

+

Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all Ingresses, because all requests are proxied through the same set of nginx controllers.

+

Promote ephemeral to static IP

To promote the allocated IP to static, you can update the Service manifest

$ kubectl patch svc nginx-ingress-lb -p '{"spec": {"loadBalancerIP": "104.154.109.191"}}'
diff --git a/examples/static-ip/nginx-ingress-controller.yaml b/examples/static-ip/nginx-ingress-controller.yaml
index 5b97148a3..0975d76be 100644
--- a/examples/static-ip/nginx-ingress-controller.yaml
+++ b/examples/static-ip/nginx-ingress-controller.yaml
@@ -21,7 +21,7 @@ spec:
       # hostNetwork: true
       terminationGracePeriodSeconds: 60
       containers:
-      - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.13.0
+      - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
         name: nginx-ingress-controller
         readinessProbe:
           httpGet:
diff --git a/extra.css b/extra.css
index f532283de..32e38b131 100644
--- a/extra.css
+++ b/extra.css
@@ -1,3 +1,3 @@
-td{
+td:nth-child(1){
 	white-space: nowrap;
 }
\ No newline at end of file
diff --git a/search/search_index.json b/search/search_index.json
index ee96fa587..8867fbaf1 100644
--- a/search/search_index.json
+++ b/search/search_index.json
@@ -167,12 +167,12 @@
         },
         {
             "location": "/user-guide/nginx-configuration/annotations/",
-            "text": "Annotations\n\u00b6\n\n\nYou can add these Kubernetes annotations to specific Ingress objects to customize their behavior.\n\n\n\n\nTip\n\n\nAnnotation keys and values can only be strings.\nOther types, such as boolean or numeric values must be quoted,\ni.e. \n\"true\"\n, \n\"false\"\n, \n\"100\"\n.\n\n\n\n\n\n\n\n\n\n\nName\n\n\ntype\n\n\n\n\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/add-base-url\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/app-root\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/affinity\n\n\ncookie\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-realm\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-secret\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-type\n\n\nbasic or digest\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-secret\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-depth\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-client\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-error-page\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-url\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/base-url-scheme\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/configuration-snippet\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/default-backend\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/enable-cors\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-origin\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-methods\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-headers\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-credentials\n\n\n\"true\" or 
\"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-max-age\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/force-ssl-redirect\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/from-to-www-redirect\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/limit-connections\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/limit-rps\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/permanent-redirect\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-body-size\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-connect-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-send-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-read-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-next-upstream\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-next-upstream-tries\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-request-buffering\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-redirect-from\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-redirect-to\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/rewrite-target\n\n\nURI\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/secure-backends\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/secure-verify-ca-secret\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/server-alias\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/server-snippet\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/service-upstream\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/session-cookie-name\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/session-cookie-hash\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/ssl-redirect\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/ssl-passthrough\n\n\n\"true\" or 
\"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-max-fails\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-fail-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-hash-by\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/load-balance\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-vhost\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/whitelist-source-range\n\n\nCIDR\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-buffering\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/ssl-ciphers\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/connection-proxy-header\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/enable-access-log\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-debug\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-extra-rules\n\n\nstring\n\n\n\n\n\n\n\n\nRewrite\n\u00b6\n\n\nIn some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. 
Without a rewrite any request will return 404.\nSet the annotation \nnginx.ingress.kubernetes.io/rewrite-target\n to the path expected by the service.\n\n\nIf the application contains relative links it is possible to add an additional annotation \nnginx.ingress.kubernetes.io/add-base-url\n that will prepend a \nbase\n tag\n in the header of the returned HTML from the backend.\n\n\nIf the scheme of \nbase\n tag\n need to be specific, set the annotation \nnginx.ingress.kubernetes.io/base-url-scheme\n to the scheme such as \nhttp\n and \nhttps\n.\n\n\nIf the Application Root is exposed in a different path and needs to be redirected, set the annotation \nnginx.ingress.kubernetes.io/app-root\n to redirect requests for \n/\n.\n\n\nPlease check the \nrewrite\n example.\n\n\nSession Affinity\n\u00b6\n\n\nThe annotation \nnginx.ingress.kubernetes.io/affinity\n enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server.\nThe only affinity type available for NGINX is \ncookie\n.\n\n\nPlease check the \naffinity\n example.\n\n\nAuthentication\n\u00b6\n\n\nIs possible to add authentication adding additional annotations in the Ingress rule. 
The source of the authentication is a secret that contains usernames and passwords inside the key \nauth\n.\n\n\nThe annotations are:\n\n\nnginx.ingress.kubernetes.io/auth-type: [basic|digest]\n\n\n\n\n\nIndicates the \nHTTP Authentication Type: Basic or Digest Access Authentication\n.\n\n\nnginx.ingress.kubernetes.io/auth-secret: secretName\n\n\n\n\n\nThe name of the Secret that contains the usernames and passwords which are granted access to the \npath\ns defined in the Ingress rules.\nThis annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.\n\n\nnginx.ingress.kubernetes.io/auth-realm: \"realm string\"\n\n\n\n\n\nPlease check the \nauth\n example.\n\n\nCustom NGINX upstream checks\n\u00b6\n\n\nNGINX exposes some flags in the \nupstream configuration\n that enable the configuration of each server in the upstream. The Ingress controller allows custom \nmax_fails\n and \nfail_timeout\n parameters in a global context using \nupstream-max-fails\n and \nupstream-fail-timeout\n in the NGINX ConfigMap or in a particular Ingress rule. \nupstream-max-fails\n defaults to 0. This means NGINX will respect the container's \nreadinessProbe\n if it is defined. If there is no probe and no values for \nupstream-max-fails\n NGINX will continue to send traffic to the container.\n\n\nWith the default configuration NGINX will not health check your backends. Whenever the endpoints controller notices a readiness probe failure, that pod's IP will be removed from the list of endpoints. 
This will trigger the NGINX controller to also remove it from the upstreams.\n\n\nTo use custom values in an Ingress rule define these annotations:\n\n\nnginx.ingress.kubernetes.io/upstream-max-fails\n: number of unsuccessful attempts to communicate with the server that should occur in the duration set by the \nupstream-fail-timeout\n parameter to consider the server unavailable.\n\n\nnginx.ingress.kubernetes.io/upstream-fail-timeout\n: time in seconds during which the specified number of unsuccessful attempts to communicate with the server should occur to consider the server unavailable. This is also the period of time the server will be considered unavailable.\n\n\nIn NGINX, backend server pools are called \"\nupstreams\n\". Each upstream contains the endpoints for a service. An upstream is created for each service that has Ingress rules defined.\n\n\nImportant:\n All Ingress rules using the same service will use the same upstream. Only one of the Ingress rules should define annotations to configure the upstream servers.\n\n\nPlease check the \ncustom upstream check\n example.\n\n\nCustom NGINX upstream hashing\n\u00b6\n\n\nNGINX supports load balancing by client-server mapping based on \nconsistent hashing\n for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The \nketama\n consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes.\n\n\nTo enable consistent hashing for a backend:\n\n\nnginx.ingress.kubernetes.io/upstream-hash-by\n: the nginx variable, text value or any combination thereof to use for consistent hashing. 
For example \nnginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri\"\n to consistently hash upstream requests by the current request URI.\n\n\nCustom NGINX load balancing\n\u00b6\n\n\nThis is similar to https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#load-balance but configures load balancing algorithm per ingress.\nNote that \nnginx.ingress.kubernetes.io/upstream-hash-by\n takes preference over this. If this and \nnginx.ingress.kubernetes.io/upstream-hash-by\n are not set then we fallback to using globally configured load balancing algorithm.\n\n\nCustom NGINX upstream vhost\n\u00b6\n\n\nThis configuration setting allows you to control the value for host in the following statement: \nproxy_set_header Host $host\n, which forms part of the location block.  This is useful if you need to call the upstream server by something other than \n$host\n.\n\n\nClient Certificate Authentication\n\u00b6\n\n\nIt is possible to enable Client Certificate Authentication using additional annotations in Ingress Rule.\n\n\nThe annotations are:\n\n\nnginx.ingress.kubernetes.io/auth-tls-secret: secretName\n\n\n\n\n\nThe name of the Secret that contains the full Certificate Authority chain \nca.crt\n that is enabled to authenticate against this Ingress.\nThis annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-depth\n\n\n\n\n\nThe validation depth between the provided client certificate and the Certification Authority chain.\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-client\n\n\n\n\n\nEnables verification of client certificates.\n\n\nnginx.ingress.kubernetes.io/auth-tls-error-page\n\n\n\n\n\nThe URL/Page that user should be redirected in case of a Certificate Authentication 
Error\n\n\nnginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream\n\n\n\n\n\nIndicates if the received certificates should be passed or not to the upstream server.\nBy default this is disabled.\n\n\nPlease check the \nclient-certs\n example.\n\n\nImportant:\n\n\nTLS with Client Authentication is NOT possible in Cloudflare as is not allowed it and might result in unexpected behavior.\n\n\nCloudflare only allows Authenticated Origin Pulls and is required to use their own certificate:\nhttps://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/\n\n\nOnly Authenticated Origin Pulls are allowed and can be configured by following their tutorial:\nhttps://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls\n\n\nConfiguration snippet\n\u00b6\n\n\nUsing this annotation you can add additional configuration to the NGINX location. For example:\n\n\nnginx.ingress.kubernetes.io/configuration-snippet\n:\n \n|\n\n  \nmore_set_headers \"Request-Id: $req_id\";\n\n\n\n\n\n\nDefault Backend\n\u00b6\n\n\nThe ingress controller requires a default backend. This service handles the response when the service in the Ingress rule does not have endpoints.\nThis is a global configuration for the ingress controller. In some cases could be required to return a custom content or format. In this scenario we can use the annotation \nnginx.ingress.kubernetes.io/default-backend: \n to specify a custom default backend.\n\n\nEnable CORS\n\u00b6\n\n\nTo enable Cross-Origin Resource Sharing (CORS) in an Ingress rule add the annotation \nnginx.ingress.kubernetes.io/enable-cors: \"true\"\n. This will add a section in the server location enabling this functionality.\n\n\nCORS can be controlled with the following annotations:\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-methods\n controls which methods are accepted. 
This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case).\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-methods: \"PUT, GET, POST, OPTIONS\"\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-headers\n controls which headers are accepted. This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -.\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-headers: \"X-Forwarded-For, X-app123-XPTO\"\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-origin\n controls what's the accepted Origin for CORS and defaults to '*'. This is a single field value, with the following format: http(s)://origin-site.com or http(s)://origin-site.com:port\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com:4443\"\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-credentials\n controls if credentials can be passed during CORS operations.\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-credentials: \"true\"\n\n\n\n\nnginx.ingress.kubernetes.io/cors-max-age\n controls how long preflight requests can be cached.\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-max-age: 600\n\n\nFor more information please check https://enable-cors.org/server_nginx.html\n\n\nServer Alias\n\u00b6\n\n\nTo add Server Aliases to an Ingress rule add the annotation \nnginx.ingress.kubernetes.io/server-alias: \"\"\n.\nThis will create a server with the same configuration, but a different server_name as the provided host.\n\n\nNote:\n A server-alias name cannot conflict with the hostname of an existing server. If it does the server-alias\nannotation will be ignored. 
If a server-alias is created and later a new server with the same hostname is created\nthe new server configuration will take place over the alias configuration.\n\n\nFor more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name\n\n\nServer snippet\n\u00b6\n\n\nUsing the annotation \nnginx.ingress.kubernetes.io/server-snippet\n it is possible to add custom configuration in the server configuration block.\n\n\napiVersion\n:\n \nextensions/v1beta1\n\n\nkind\n:\n \nIngress\n\n\nmetadata\n:\n\n  \nannotations\n:\n\n    \nnginx.ingress.kubernetes.io/server-snippet\n:\n \n|\n\n\nset $agentflag 0;\n\n\n\nif ($http_user_agent ~* \"(Mobile)\" ){\n\n  \nset $agentflag 1;\n\n\n}\n\n\n\nif ( $agentflag = 1 ) {\n\n  \nreturn 301 https://m.example.com;\n\n\n}\n\n\n\n\n\n\nImportant:\n This annotation can be used only once per host\n\n\nClient Body Buffer Size\n\u00b6\n\n\nSets buffer size for reading client request body per location. In case the request body is larger than the buffer,\nthe whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages.\nThis is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. 
This annotation is\napplied to each location provided in the ingress rule.\n\n\nNote:\n The annotation value must be given in a valid format otherwise the\nFor example to set the client-body-buffer-size the following can be done:\n\n\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: \"1000\"\n # 1000 bytes\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1k\n # 1 kilobyte\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1K\n # 1 kilobyte\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1m\n # 1 megabyte\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1M\n # 1 megabyte\n\n\n\n\nFor more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size\n\n\nExternal Authentication\n\u00b6\n\n\nTo use an existing service that provides authentication the Ingress rule can be annotated with \nnginx.ingress.kubernetes.io/auth-url\n to indicate the URL where the HTTP request should be sent.\n\n\nnginx.ingress.kubernetes.io/auth-url\n:\n \n\"URL\n \nto\n \nthe\n \nauthentication\n \nservice\"\n\n\n\n\n\n\nAdditionally it is possible to set:\n\n\nnginx.ingress.kubernetes.io/auth-method\n: \n\n to specify the HTTP method to use.\n\n\nnginx.ingress.kubernetes.io/auth-signin\n: \n\n to specify the location of the error page.\n\n\nnginx.ingress.kubernetes.io/auth-response-headers\n: \n\n to specify headers to pass to backend once authorization request completes.\n\n\nnginx.ingress.kubernetes.io/auth-request-redirect\n: \n\n  to specify the X-Auth-Request-Redirect header value.\n\n\nPlease check the \nexternal-auth\n example.\n\n\nRate limiting\n\u00b6\n\n\nThe annotations \nnginx.ingress.kubernetes.io/limit-connections\n, \nnginx.ingress.kubernetes.io/limit-rps\n, and \nnginx.ingress.kubernetes.io/limit-rpm\n define a limit on the connections that can be opened by a single client IP address. 
This can be used to mitigate \nDDoS Attacks\n.\n\n\nnginx.ingress.kubernetes.io/limit-connections\n: number of concurrent connections allowed from a single IP address.\n\n\nnginx.ingress.kubernetes.io/limit-rps\n: number of connections that may be accepted from a given IP each second.\n\n\nnginx.ingress.kubernetes.io/limit-rpm\n: number of connections that may be accepted from a given IP each minute.\n\n\nYou can specify the client IP source ranges to be excluded from rate-limiting through the \nnginx.ingress.kubernetes.io/limit-whitelist\n annotation. The value is a comma separated list of CIDRs.\n\n\nIf you specify multiple annotations in a single Ingress rule, \nlimit-rpm\n, and then \nlimit-rps\n takes precedence.\n\n\nThe annotation \nnginx.ingress.kubernetes.io/limit-rate\n, \nnginx.ingress.kubernetes.io/limit-rate-after\n define a limit the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per a request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.\n\n\nnginx.ingress.kubernetes.io/limit-rate-after\n: sets the initial amount after which the further transmission of a response to a client will be rate limited.\n\n\nnginx.ingress.kubernetes.io/limit-rate\n: rate of request that accepted from a client each second.\n\n\nTo configure this setting globally for all Ingress rules, the \nlimit-rate-after\n and \nlimit-rate\n value may be set in the NGINX ConfigMap. if you set the value in ingress annotation will cover global setting.\n\n\nPermanent Redirect\n\u00b6\n\n\nThis annotation allows to return a permanent redirect instead of sending data to the upstream.  
For example \nnginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com\n would redirect everything to Google.\n\n\nSSL Passthrough\n\u00b6\n\n\nThe annotation \nnginx.ingress.kubernetes.io/ssl-passthrough\n allows to configure TLS termination in the pod and not in NGINX.\n\n\nImportant:\n\n\n\n\nUsing the annotation \nnginx.ingress.kubernetes.io/ssl-passthrough\n invalidates all the other available annotations. This is because SSL Passthrough works in L4 (TCP).\n\n\nThe use of this annotation requires Proxy Protocol to be enabled in the load-balancer. For example enabling Proxy Protocol for AWS ELB is described \nhere\n. If you're using ingress-controller without load balancer then the flag \n--enable-ssl-passthrough\n is required (by default it is disabled).\n\n\n\n\nSecure backends\n\u00b6\n\n\nBy default NGINX uses \nhttp\n to reach the services. Adding the annotation \nnginx.ingress.kubernetes.io/secure-backends: \"true\"\n in the Ingress rule changes the protocol to \nhttps\n.\nIf you want to validate the upstream against a specific certificate, you can create a secret with it and reference the secret with the annotation \nnginx.ingress.kubernetes.io/secure-verify-ca-secret\n.\n\n\nPlease note that if an invalid or non-existent secret is given, the NGINX ingress controller will ignore the \nsecure-backends\n annotation.\n\n\nService Upstream\n\u00b6\n\n\nBy default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration. This annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port. This can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. 
+            "text": "Annotations\n\u00b6\n\n\nYou can add these Kubernetes annotations to specific Ingress objects to customize their behavior.\n\n\n\n\nTip\n\n\nAnnotation keys and values can only be strings.\nOther types, such as boolean or numeric values must be quoted,\ni.e. \n\"true\"\n, \n\"false\"\n, \n\"100\"\n.\n\n\n\n\n\n\n\n\n\n\nName\n\n\ntype\n\n\n\n\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/add-base-url\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/app-root\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/affinity\n\n\ncookie\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-realm\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-secret\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-type\n\n\nbasic or digest\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-secret\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-depth\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-client\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-error-page\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-url\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/base-url-scheme\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/configuration-snippet\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/default-backend\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/enable-cors\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-origin\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-methods\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-headers\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-credentials\n\n\n\"true\" or 
\"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-max-age\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/force-ssl-redirect\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/from-to-www-redirect\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/limit-connections\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/limit-rps\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/permanent-redirect\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-body-size\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-connect-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-send-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-read-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-next-upstream\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-next-upstream-tries\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-request-buffering\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-redirect-from\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-redirect-to\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/rewrite-log\n\n\nURI\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/rewrite-target\n\n\nURI\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/secure-backends\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/secure-verify-ca-secret\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/server-alias\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/server-snippet\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/service-upstream\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/session-cookie-name\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/session-cookie-hash\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/ssl-redirect\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/ssl-passthrough\n\n\n\"true\" or 
\"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-max-fails\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-fail-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-hash-by\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/load-balance\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-vhost\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/whitelist-source-range\n\n\nCIDR\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-buffering\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/ssl-ciphers\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/connection-proxy-header\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/enable-access-log\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-debug\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-extra-rules\n\n\nstring\n\n\n\n\n\n\n\n\nRewrite\n\u00b6\n\n\nIn some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. 
Without a rewrite any request will return 404.\nSet the annotation \nnginx.ingress.kubernetes.io/rewrite-target\n to the path expected by the service.\n\n\nIf the application contains relative links it is possible to add an additional annotation \nnginx.ingress.kubernetes.io/add-base-url\n that will prepend a \nbase\n tag\n in the header of the returned HTML from the backend.\n\n\nIf the scheme of the \nbase\n tag\n needs to be specific, set the annotation \nnginx.ingress.kubernetes.io/base-url-scheme\n to the scheme, such as \nhttp\n or \nhttps\n.\n\n\nIf the Application Root is exposed in a different path and needs to be redirected, set the annotation \nnginx.ingress.kubernetes.io/app-root\n to redirect requests for \n/\n.\n\n\nPlease check the \nrewrite\n example.\n\n\nSession Affinity\n\u00b6\n\n\nThe annotation \nnginx.ingress.kubernetes.io/affinity\n enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server.\nThe only affinity type available for NGINX is \ncookie\n.\n\n\nPlease check the \naffinity\n example.\n\n\nAuthentication\n\u00b6\n\n\nIt is possible to add authentication by adding additional annotations to the Ingress rule. 
The source of the authentication is a secret that contains usernames and passwords inside the key \nauth\n.\n\n\nThe annotations are:\n\n\nnginx.ingress.kubernetes.io/auth-type: [basic|digest]\n\n\n\n\n\nIndicates the \nHTTP Authentication Type: Basic or Digest Access Authentication\n.\n\n\nnginx.ingress.kubernetes.io/auth-secret: secretName\n\n\n\n\n\nThe name of the Secret that contains the usernames and passwords which are granted access to the \npath\ns defined in the Ingress rules.\nThis annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.\n\n\nnginx.ingress.kubernetes.io/auth-realm: \"realm string\"\n\n\n\n\n\nPlease check the \nauth\n example.\n\n\nCustom NGINX upstream checks\n\u00b6\n\n\nNGINX exposes some flags in the \nupstream configuration\n that enable the configuration of each server in the upstream. The Ingress controller allows custom \nmax_fails\n and \nfail_timeout\n parameters in a global context using \nupstream-max-fails\n and \nupstream-fail-timeout\n in the NGINX ConfigMap or in a particular Ingress rule. \nupstream-max-fails\n defaults to 0. This means NGINX will respect the container's \nreadinessProbe\n if it is defined. If there is no probe and no values for \nupstream-max-fails\n NGINX will continue to send traffic to the container.\n\n\n\n\nTip\n\n\nWith the default configuration NGINX will not health check your backends. Whenever the endpoints controller notices a readiness probe failure, that pod's IP will be removed from the list of endpoints. 
This will trigger the NGINX controller to also remove it from the upstreams.\n\n\n\n\nTo use custom values in an Ingress rule define these annotations:\n\n\nnginx.ingress.kubernetes.io/upstream-max-fails\n: number of unsuccessful attempts to communicate with the server that should occur in the duration set by the \nupstream-fail-timeout\n parameter to consider the server unavailable.\n\n\nnginx.ingress.kubernetes.io/upstream-fail-timeout\n: time in seconds during which the specified number of unsuccessful attempts to communicate with the server should occur to consider the server unavailable. This is also the period of time the server will be considered unavailable.\n\n\nIn NGINX, backend server pools are called \"\nupstreams\n\". Each upstream contains the endpoints for a service. An upstream is created for each service that has Ingress rules defined.\n\n\n\n\nImportant\n\n\nAll Ingress rules using the same service will use the same upstream. Only one of the Ingress rules should define annotations to configure the upstream servers.\n\n\n\n\nPlease check the \ncustom upstream check\n example.\n\n\nCustom NGINX upstream hashing\n\u00b6\n\n\nNGINX supports load balancing by client-server mapping based on \nconsistent hashing\n for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The \nketama\n consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes.\n\n\nTo enable consistent hashing for a backend:\n\n\nnginx.ingress.kubernetes.io/upstream-hash-by\n: the nginx variable, text value or any combination thereof to use for consistent hashing. 
For example \nnginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri\"\n to consistently hash upstream requests by the current request URI.\n\n\nCustom NGINX load balancing\n\u00b6\n\n\nThis is similar to the ConfigMap \nload-balance\n option (https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#load-balance), but configures the load balancing algorithm per ingress.\n\n\n\n\nNote that \nnginx.ingress.kubernetes.io/upstream-hash-by\n takes precedence over this. If neither this nor \nnginx.ingress.kubernetes.io/upstream-hash-by\n is set then we fall back to the globally configured load balancing algorithm.\n\n\n\n\nCustom NGINX upstream vhost\n\u00b6\n\n\nThis configuration setting allows you to control the value for host in the following statement: \nproxy_set_header Host $host\n, which forms part of the location block.  This is useful if you need to call the upstream server by something other than \n$host\n.\n\n\nClient Certificate Authentication\n\u00b6\n\n\nIt is possible to enable Client Certificate Authentication using additional annotations in the Ingress rule.\n\n\nThe annotations are:\n\n\nnginx.ingress.kubernetes.io/auth-tls-secret: secretName\n\n\n\n\n\nThe name of the Secret that contains the full Certificate Authority chain \nca.crt\n that is enabled to authenticate against this Ingress.\nThis annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-depth\n\n\n\n\n\nThe validation depth between the provided client certificate and the Certification Authority chain.\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-client\n\n\n\n\n\nEnables verification of client certificates.\n\n\nnginx.ingress.kubernetes.io/auth-tls-error-page\n\n\n\n\n\nThe URL/Page to which the user should be redirected in case of a Certificate Authentication Error\n\n\nnginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream\n\n\n\n\n\nIndicates whether the received certificates should be passed to the upstream server.\nBy default this is disabled.\n\n\nPlease check the \nclient-certs\n example.\n\n\n\n\nImportant\n\n\nTLS with Client Authentication is NOT possible with Cloudflare, as Cloudflare does not allow it, and it might result in unexpected behavior.\n\n\nCloudflare only allows Authenticated Origin Pulls and requires the use of their own certificate: \nhttps://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/\n\n\nOnly Authenticated Origin Pulls are allowed and can be configured by following their tutorial: \nhttps://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls\n\n\n\n\nConfiguration snippet\n\u00b6\n\n\nUsing this annotation you can add additional configuration to the NGINX location. For example:\n\n\nnginx.ingress.kubernetes.io/configuration-snippet\n:\n \n|\n\n  \nmore_set_headers \"Request-Id: $req_id\";\n\n\n\n\n\n\nDefault Backend\n\u00b6\n\n\nThe ingress controller requires a default backend. This service handles the response when the service in the Ingress rule does not have endpoints.\nThis is a global configuration for the ingress controller. In some cases it could be required to return custom content or a custom format. In this scenario we can use the annotation \nnginx.ingress.kubernetes.io/default-backend: \n to specify a custom default backend.\n\n\nEnable CORS\n\u00b6\n\n\nTo enable Cross-Origin Resource Sharing (CORS) in an Ingress rule add the annotation \nnginx.ingress.kubernetes.io/enable-cors: \"true\"\n. This will add a section in the server location enabling this functionality.\n\n\nCORS can be controlled with the following annotations:\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-methods\n controls which methods are accepted. 
This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case).\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-methods: \"PUT, GET, POST, OPTIONS\"\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-headers\n controls which headers are accepted. This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -.\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-headers: \"X-Forwarded-For, X-app123-XPTO\"\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-origin\n controls what's the accepted Origin for CORS and defaults to '*'. This is a single field value, with the following format: http(s)://origin-site.com or http(s)://origin-site.com:port\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com:4443\"\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-credentials\n controls if credentials can be passed during CORS operations.\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-credentials: \"true\"\n\n\n\n\nnginx.ingress.kubernetes.io/cors-max-age\n controls how long preflight requests can be cached.\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-max-age: 600\n\n\nFor more information please see \nhttps://enable-cors.org\n\n\nServer Alias\n\u00b6\n\n\nTo add Server Aliases to an Ingress rule add the annotation \nnginx.ingress.kubernetes.io/server-alias: \"\"\n.\nThis will create a server with the same configuration, but a different server_name as the provided host.\n\n\n\n\nNote\n\n\nA server-alias name cannot conflict with the hostname of an existing server. If it does the server-alias annotation will be ignored. 
If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take precedence over the alias configuration.\n\n\n\n\nFor more information please see \nhttp://nginx.org\n\n\nServer snippet\n\u00b6\n\n\nUsing the annotation \nnginx.ingress.kubernetes.io/server-snippet\n it is possible to add custom configuration in the server configuration block.\n\n\napiVersion\n:\n \nextensions/v1beta1\n\n\nkind\n:\n \nIngress\n\n\nmetadata\n:\n\n  \nannotations\n:\n\n    \nnginx.ingress.kubernetes.io/server-snippet\n:\n \n|\n\n\nset $agentflag 0;\n\n\n\nif ($http_user_agent ~* \"(Mobile)\" ){\n\n  \nset $agentflag 1;\n\n\n}\n\n\n\nif ( $agentflag = 1 ) {\n\n  \nreturn 301 https://m.example.com;\n\n\n}\n\n\n\n\n\n\n\n\nImportant\n\n\nThis annotation can be used only once per host.\n\n\n\n\nClient Body Buffer Size\n\u00b6\n\n\nSets buffer size for reading client request body per location. In case the request body is larger than the buffer,\nthe whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages.\nThis is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. 
This annotation is\napplied to each location provided in the ingress rule.\n\n\nNote:\n The annotation value must be given in a valid format, otherwise it will be ignored.\nFor example, to set the client-body-buffer-size the following can be done:\n\n\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: \"1000\"\n # 1000 bytes\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1k\n # 1 kilobyte\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1K\n # 1 kilobyte\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1m\n # 1 megabyte\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1M\n # 1 megabyte\n\n\n\n\nFor more information please see \nhttp://nginx.org\n\n\nExternal Authentication\n\u00b6\n\n\nTo use an existing service that provides authentication the Ingress rule can be annotated with \nnginx.ingress.kubernetes.io/auth-url\n to indicate the URL where the HTTP request should be sent.\n\n\nnginx.ingress.kubernetes.io/auth-url\n:\n \n\"URL\n \nto\n \nthe\n \nauthentication\n \nservice\"\n\n\n\n\n\n\nAdditionally it is possible to set:\n\n\nnginx.ingress.kubernetes.io/auth-method\n: \n\n to specify the HTTP method to use.\n\n\nnginx.ingress.kubernetes.io/auth-signin\n: \n\n to specify the location of the error page.\n\n\nnginx.ingress.kubernetes.io/auth-response-headers\n: \n\n to specify headers to pass to the backend once the authorization request completes.\n\n\nnginx.ingress.kubernetes.io/auth-request-redirect\n: \n\n  to specify the X-Auth-Request-Redirect header value.\n\n\nPlease check the \nexternal-auth\n example.\n\n\nRate limiting\n\u00b6\n\n\nThe annotations \nnginx.ingress.kubernetes.io/limit-connections\n, \nnginx.ingress.kubernetes.io/limit-rps\n, and \nnginx.ingress.kubernetes.io/limit-rpm\n define a limit on the connections that can be opened by a single client IP address. 
This can be used to mitigate \nDDoS Attacks\n.\n\n\nnginx.ingress.kubernetes.io/limit-connections\n: number of concurrent connections allowed from a single IP address.\n\n\nnginx.ingress.kubernetes.io/limit-rps\n: number of connections that may be accepted from a given IP each second.\n\n\nnginx.ingress.kubernetes.io/limit-rpm\n: number of connections that may be accepted from a given IP each minute.\n\n\nYou can specify the client IP source ranges to be excluded from rate-limiting through the \nnginx.ingress.kubernetes.io/limit-whitelist\n annotation. The value is a comma separated list of CIDRs.\n\n\nIf you specify multiple annotations in a single Ingress rule, \nlimit-rpm\n takes precedence, and then \nlimit-rps\n.\n\n\nThe annotations \nnginx.ingress.kubernetes.io/limit-rate\n and \nnginx.ingress.kubernetes.io/limit-rate-after\n define a limit on the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.\n\n\nnginx.ingress.kubernetes.io/limit-rate-after\n: sets the initial amount after which the further transmission of a response to a client will be rate limited.\n\n\nnginx.ingress.kubernetes.io/limit-rate\n: maximum rate, in bytes per second, at which a response is transmitted to a client.\n\n\nTo configure this setting globally for all Ingress rules, the \nlimit-rate-after\n and \nlimit-rate\n values may be set in the NGINX ConfigMap. A value set in an Ingress annotation overrides the global setting.\n\n\nPermanent Redirect\n\u00b6\n\n\nThis annotation allows returning a permanent redirect instead of sending data to the upstream.  
For example \nnginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com\n would redirect everything to Google.\n\n\nSSL Passthrough\n\u00b6\n\n\nThe annotation \nnginx.ingress.kubernetes.io/ssl-passthrough\n allows to configure TLS termination in the pod and not in NGINX.\n\n\n\n\nImportant\n\n\n\n\n\n\nUsing the annotation \nnginx.ingress.kubernetes.io/ssl-passthrough\n invalidates all the other available annotations. This is because SSL Passthrough works in L4 (TCP).\n\n\n\n\n\n\nThe use of this annotation requires Proxy Protocol to be enabled in the load-balancer. For example enabling Proxy Protocol for AWS ELB is described \nhere\n. If you're using ingress-controller without load balancer then the flag \n--enable-ssl-passthrough\n is required (by default it is disabled).\n\n\n\n\n\n\n\n\nSecure backends\n\u00b6\n\n\nBy default NGINX uses \nhttp\n to reach the services. Adding the annotation \nnginx.ingress.kubernetes.io/secure-backends: \"true\"\n in the Ingress rule changes the protocol to \nhttps\n.\nIf you want to validate the upstream against a specific certificate, you can create a secret with it and reference the secret with the annotation \nnginx.ingress.kubernetes.io/secure-verify-ca-secret\n.\n\n\n\n\nNote that if an invalid or non-existent secret is given, the NGINX ingress controller will ignore the \nsecure-backends\n annotation.\n\n\n\n\nService Upstream\n\u00b6\n\n\nBy default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration. This annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port. This can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. 
See issue \n#257\n.\n\n\nKnown Issues\n\u00b6\n\n\nIf the \nservice-upstream\n annotation is specified the following things should be taken into consideration:\n\n\n\n\nSticky Sessions will not work as only round-robin load balancing is supported.\n\n\nThe \nproxy_next_upstream\n directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.\n\n\n\n\nServer-side HTTPS enforcement through redirect\n\u00b6\n\n\nBy default the controller redirects (301) to \nHTTPS\n if TLS is enabled for that ingress. If you want to disable that behavior globally, you can use \nssl-redirect: \"false\"\n in the NGINX config map.\n\n\nTo configure this feature for specific ingress resources, you can use the \nnginx.ingress.kubernetes.io/ssl-redirect: \"false\"\n annotation in the particular resource.\n\n\nWhen using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to \nHTTPS\n even when there is no TLS cert available. This can be achieved by using the \nnginx.ingress.kubernetes.io/force-ssl-redirect: \"true\"\n annotation in the particular resource.\n\n\nRedirect from to www\n\u00b6\n\n\nIn some scenarios it is required to redirect from \nwww.domain.com\n to \ndomain.com\n or vice versa.\nTo enable this feature use the annotation \nnginx.ingress.kubernetes.io/from-to-www-redirect: \"true\"\n\n\n\n\nImportant\n\n\nIf at some point a new Ingress is created with a host equal to one of the options (like \ndomain.com\n) the annotation will be omitted.\n\n\n\n\nWhitelist source range\n\u00b6\n\n\nYou can specify the allowed client IP source ranges through the \nnginx.ingress.kubernetes.io/whitelist-source-range\n annotation. The value is a comma separated list of \nCIDRs\n, e.g.  
\n10.0.0.0/24,172.10.0.1\n.\n\n\nTo configure this setting globally for all Ingress rules, the \nwhitelist-source-range\n value may be set in the NGINX ConfigMap.\n\n\nNote:\n Adding an annotation to an Ingress rule overrides any global restriction.\n\n\nCookie affinity\n\u00b6\n\n\nIf you use the \ncookie\n type you can also specify the name of the cookie that will be used to route the requests with the annotation \nnginx.ingress.kubernetes.io/session-cookie-name\n. The default is to create a cookie named 'INGRESSCOOKIE'.\n\n\nIn the case of NGINX the annotation \nnginx.ingress.kubernetes.io/session-cookie-hash\n defines which algorithm will be used to 'hash' the used upstream. The default value is \nmd5\n and possible values are \nmd5\n, \nsha1\n and \nindex\n.\nThe \nindex\n option is not hashed; an in-memory index is used instead, which is quicker and has less overhead. \nWarning:\n the matching against the upstream servers list is inconsistent. So, at reload, if the upstream servers have changed, index values are not guaranteed to correspond to the same server as before! \nUSE IT WITH CAUTION\n and only if you need to!\n\n\nIn NGINX this feature is implemented by the third party module \nnginx-sticky-module-ng\n. The workflow used to define which upstream server will be used is explained \nhere\n.\n\n\nCustom timeouts\n\u00b6\n\n\nUsing the configuration ConfigMap it is possible to set the default global timeout for connections to the upstream servers.\nIn some scenarios it is required to have different values. 
To allow this we provide annotations that allow this customization:\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-connect-timeout\n\n\nnginx.ingress.kubernetes.io/proxy-send-timeout\n\n\nnginx.ingress.kubernetes.io/proxy-read-timeout\n\n\nnginx.ingress.kubernetes.io/proxy-next-upstream\n\n\nnginx.ingress.kubernetes.io/proxy-next-upstream-tries\n\n\nnginx.ingress.kubernetes.io/proxy-request-buffering\n\n\n\n\nProxy redirect\n\u00b6\n\n\nWith the annotations \nnginx.ingress.kubernetes.io/proxy-redirect-from\n and \nnginx.ingress.kubernetes.io/proxy-redirect-to\n it is possible to set the text that should be changed in the \nLocation\n and \nRefresh\n header fields of a proxied server response (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect).\nSetting \"off\" or \"default\" in the annotation \nnginx.ingress.kubernetes.io/proxy-redirect-from\n disables \nnginx.ingress.kubernetes.io/proxy-redirect-to\n; in any other case both annotations are used.\nBy default the value is \"off\".\n\n\nCustom max body size\n\u00b6\n\n\nFor NGINX, a 413 error will be returned to the client when the size in a request exceeds the maximum allowed size of the client request body. 
This size can be configured by the parameter \nclient_max_body_size\n.\n\n\nTo configure this setting globally for all Ingress rules, the \nproxy-body-size\n value may be set in the NGINX ConfigMap.\nTo use custom values in an Ingress rule define this annotation:\n\n\nnginx.ingress.kubernetes.io/proxy-body-size\n:\n \n8m\n\n\n\n\n\n\nProxy buffering\n\u00b6\n\n\nEnable or disable proxy buffering \nproxy_buffering\n.\nBy default proxy buffering is disabled in the nginx config.\n\n\nTo configure this setting globally for all Ingress rules, the \nproxy-buffering\n value may be set in the NGINX ConfigMap.\nTo use custom values in an Ingress rule define this annotation:\n\n\nnginx.ingress.kubernetes.io/proxy-buffering\n:\n \n\"on\"\n\n\n\n\n\n\nSSL ciphers\n\u00b6\n\n\nSpecifies the \nenabled ciphers\n.\n\n\nUsing this annotation will set the \nssl_ciphers\n directive at the server level. This configuration is active for all the paths in the host.\n\n\nnginx.ingress.kubernetes.io/ssl-ciphers\n:\n \n\"ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP\"\n\n\n\n\n\n\nConnection proxy header\n\u00b6\n\n\nUsing this annotation will override the default connection header set by nginx. To use custom values in an Ingress rule, define the annotation:\n\n\nnginx.ingress.kubernetes.io/connection-proxy-header\n:\n \n\"keep-alive\"\n\n\n\n\n\n\nEnable Access Log\n\u00b6\n\n\nIn some scenarios it could be required to disable NGINX access logs. To do so, use the annotation:\n\n\nnginx.ingress.kubernetes.io/enable-access-log\n:\n \n\"false\"\n\n\n\n\n\n\nEnable Rewrite Log\n\u00b6\n\n\nIn some scenarios it could be required to enable NGINX rewrite logs. Note that rewrite logs are sent to the error_log file at the notice level. 
To enable this feature use the annotation:\n\n\nnginx.ingress.kubernetes.io/enable-rewrite-log\n:\n \n\"true\"\n\n\n\n\n\n\nLua Resty WAF\n\u00b6\n\n\nUsing \nlua-resty-waf-*\n annotations we can enable and control \nlua-resty-waf\n per location.\nThe following configuration will enable WAF for the paths defined in the corresponding ingress:\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf\n:\n \n\"active\"\n\n\n\n\n\n\nIn order to run it in debugging mode you can set \nnginx.ingress.kubernetes.io/lua-resty-waf-debug\n to \n\"true\"\n in addition to the above configuration.\nThe other possible values for \nnginx.ingress.kubernetes.io/lua-resty-waf\n are \ninactive\n and \nsimulate\n. In \ninactive\n mode WAF won't do anything, whereas\nin \nsimulate\n mode it will log a warning message if there's a matching WAF rule for a given request. This is useful to debug a rule and eliminate possible false positives before fully deploying it.\n\n\nlua-resty-waf\n comes with a predefined set of rules \nhttps://github.com/p0pr0ck5/lua-resty-waf/tree/84b4f40362500dd0cb98b9e71b5875cb1a40f1ad/rules\n that covers ModSecurity CRS.\nYou can use \nnginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets\n to ignore a subset of those rulesets. For example:\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets\n:\n \n\"41000_sqli,\n \n42000_xss\"\n\n\n\n\n\n\nwill ignore the two mentioned rulesets.\n\n\nIt is also possible to configure custom WAF rules per ingress using the \nnginx.ingress.kubernetes.io/lua-resty-waf-extra-rules\n annotation. 
For example, the following snippet will\nconfigure a WAF rule to deny requests with a query string value that contains the word \nfoo\n:\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-extra-rules\n:\n \n'[=[\n \n{\n \n\"access\":\n \n[\n \n{\n \n\"actions\":\n \n{\n \n\"disrupt\"\n \n:\n \n\"DENY\"\n \n},\n \n\"id\":\n \n10001,\n \n\"msg\":\n \n\"my\n \ncustom\n \nrule\",\n \n\"operator\":\n \n\"STR_CONTAINS\",\n \n\"pattern\":\n \n\"foo\",\n \n\"vars\":\n \n[\n \n{\n \n\"parse\":\n \n[\n \n\"values\",\n \n1\n \n],\n \n\"type\":\n \n\"REQUEST_ARGS\"\n \n}\n \n]\n \n}\n \n],\n \n\"body_filter\":\n \n[],\n \n\"header_filter\":[]\n \n}\n \n]=]'\n\n\n\n\n\n\nFor details on how to write WAF rules, please refer to \nhttps://github.com/p0pr0ck5/lua-resty-waf\n.",
             "title": "Annotations"
         },
         {
             "location": "/user-guide/nginx-configuration/annotations/#annotations",
-            "text": "You can add these Kubernetes annotations to specific Ingress objects to customize their behavior.   Tip  Annotation keys and values can only be strings.\nOther types, such as boolean or numeric values must be quoted,\ni.e.  \"true\" ,  \"false\" ,  \"100\" .      Name  type      nginx.ingress.kubernetes.io/add-base-url  \"true\" or \"false\"    nginx.ingress.kubernetes.io/app-root  string    nginx.ingress.kubernetes.io/affinity  cookie    nginx.ingress.kubernetes.io/auth-realm  string    nginx.ingress.kubernetes.io/auth-secret  string    nginx.ingress.kubernetes.io/auth-type  basic or digest    nginx.ingress.kubernetes.io/auth-tls-secret  string    nginx.ingress.kubernetes.io/auth-tls-verify-depth  number    nginx.ingress.kubernetes.io/auth-tls-verify-client  string    nginx.ingress.kubernetes.io/auth-tls-error-page  string    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream  \"true\" or \"false\"    nginx.ingress.kubernetes.io/auth-url  string    nginx.ingress.kubernetes.io/base-url-scheme  string    nginx.ingress.kubernetes.io/client-body-buffer-size  string    nginx.ingress.kubernetes.io/configuration-snippet  string    nginx.ingress.kubernetes.io/default-backend  string    nginx.ingress.kubernetes.io/enable-cors  \"true\" or \"false\"    nginx.ingress.kubernetes.io/cors-allow-origin  string    nginx.ingress.kubernetes.io/cors-allow-methods  string    nginx.ingress.kubernetes.io/cors-allow-headers  string    nginx.ingress.kubernetes.io/cors-allow-credentials  \"true\" or \"false\"    nginx.ingress.kubernetes.io/cors-max-age  number    nginx.ingress.kubernetes.io/force-ssl-redirect  \"true\" or \"false\"    nginx.ingress.kubernetes.io/from-to-www-redirect  \"true\" or \"false\"    nginx.ingress.kubernetes.io/limit-connections  number    nginx.ingress.kubernetes.io/limit-rps  number    nginx.ingress.kubernetes.io/permanent-redirect  string    nginx.ingress.kubernetes.io/proxy-body-size  string    
nginx.ingress.kubernetes.io/proxy-connect-timeout  number    nginx.ingress.kubernetes.io/proxy-send-timeout  number    nginx.ingress.kubernetes.io/proxy-read-timeout  number    nginx.ingress.kubernetes.io/proxy-next-upstream  string    nginx.ingress.kubernetes.io/proxy-next-upstream-tries  number    nginx.ingress.kubernetes.io/proxy-request-buffering  string    nginx.ingress.kubernetes.io/proxy-redirect-from  string    nginx.ingress.kubernetes.io/proxy-redirect-to  string    nginx.ingress.kubernetes.io/rewrite-target  URI    nginx.ingress.kubernetes.io/secure-backends  \"true\" or \"false\"    nginx.ingress.kubernetes.io/secure-verify-ca-secret  string    nginx.ingress.kubernetes.io/server-alias  string    nginx.ingress.kubernetes.io/server-snippet  string    nginx.ingress.kubernetes.io/service-upstream  \"true\" or \"false\"    nginx.ingress.kubernetes.io/session-cookie-name  string    nginx.ingress.kubernetes.io/session-cookie-hash  string    nginx.ingress.kubernetes.io/ssl-redirect  \"true\" or \"false\"    nginx.ingress.kubernetes.io/ssl-passthrough  \"true\" or \"false\"    nginx.ingress.kubernetes.io/upstream-max-fails  number    nginx.ingress.kubernetes.io/upstream-fail-timeout  number    nginx.ingress.kubernetes.io/upstream-hash-by  string    nginx.ingress.kubernetes.io/load-balance  string    nginx.ingress.kubernetes.io/upstream-vhost  string    nginx.ingress.kubernetes.io/whitelist-source-range  CIDR    nginx.ingress.kubernetes.io/proxy-buffering  string    nginx.ingress.kubernetes.io/ssl-ciphers  string    nginx.ingress.kubernetes.io/connection-proxy-header  string    nginx.ingress.kubernetes.io/enable-access-log  \"true\" or \"false\"    nginx.ingress.kubernetes.io/lua-resty-waf  string    nginx.ingress.kubernetes.io/lua-resty-waf-debug  \"true\" or \"false\"    nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets  string    nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules  string",
+            "text": "You can add these Kubernetes annotations to specific Ingress objects to customize their behavior.   Tip  Annotation keys and values can only be strings.\nOther types, such as boolean or numeric values, must be quoted,\ne.g.  \"true\" ,  \"false\" ,  \"100\" .      Name  type      nginx.ingress.kubernetes.io/add-base-url  \"true\" or \"false\"    nginx.ingress.kubernetes.io/app-root  string    nginx.ingress.kubernetes.io/affinity  cookie    nginx.ingress.kubernetes.io/auth-realm  string    nginx.ingress.kubernetes.io/auth-secret  string    nginx.ingress.kubernetes.io/auth-type  basic or digest    nginx.ingress.kubernetes.io/auth-tls-secret  string    nginx.ingress.kubernetes.io/auth-tls-verify-depth  number    nginx.ingress.kubernetes.io/auth-tls-verify-client  string    nginx.ingress.kubernetes.io/auth-tls-error-page  string    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream  \"true\" or \"false\"    nginx.ingress.kubernetes.io/auth-url  string    nginx.ingress.kubernetes.io/base-url-scheme  string    nginx.ingress.kubernetes.io/client-body-buffer-size  string    nginx.ingress.kubernetes.io/configuration-snippet  string    nginx.ingress.kubernetes.io/default-backend  string    nginx.ingress.kubernetes.io/enable-cors  \"true\" or \"false\"    nginx.ingress.kubernetes.io/cors-allow-origin  string    nginx.ingress.kubernetes.io/cors-allow-methods  string    nginx.ingress.kubernetes.io/cors-allow-headers  string    nginx.ingress.kubernetes.io/cors-allow-credentials  \"true\" or \"false\"    nginx.ingress.kubernetes.io/cors-max-age  number    nginx.ingress.kubernetes.io/force-ssl-redirect  \"true\" or \"false\"    nginx.ingress.kubernetes.io/from-to-www-redirect  \"true\" or \"false\"    nginx.ingress.kubernetes.io/limit-connections  number    nginx.ingress.kubernetes.io/limit-rps  number    nginx.ingress.kubernetes.io/permanent-redirect  string    nginx.ingress.kubernetes.io/proxy-body-size  string    
nginx.ingress.kubernetes.io/proxy-connect-timeout  number    nginx.ingress.kubernetes.io/proxy-send-timeout  number    nginx.ingress.kubernetes.io/proxy-read-timeout  number    nginx.ingress.kubernetes.io/proxy-next-upstream  string    nginx.ingress.kubernetes.io/proxy-next-upstream-tries  number    nginx.ingress.kubernetes.io/proxy-request-buffering  string    nginx.ingress.kubernetes.io/proxy-redirect-from  string    nginx.ingress.kubernetes.io/proxy-redirect-to  string    nginx.ingress.kubernetes.io/enable-rewrite-log  \"true\" or \"false\"    nginx.ingress.kubernetes.io/rewrite-target  URI    nginx.ingress.kubernetes.io/secure-backends  \"true\" or \"false\"    nginx.ingress.kubernetes.io/secure-verify-ca-secret  string    nginx.ingress.kubernetes.io/server-alias  string    nginx.ingress.kubernetes.io/server-snippet  string    nginx.ingress.kubernetes.io/service-upstream  \"true\" or \"false\"    nginx.ingress.kubernetes.io/session-cookie-name  string    nginx.ingress.kubernetes.io/session-cookie-hash  string    nginx.ingress.kubernetes.io/ssl-redirect  \"true\" or \"false\"    nginx.ingress.kubernetes.io/ssl-passthrough  \"true\" or \"false\"    nginx.ingress.kubernetes.io/upstream-max-fails  number    nginx.ingress.kubernetes.io/upstream-fail-timeout  number    nginx.ingress.kubernetes.io/upstream-hash-by  string    nginx.ingress.kubernetes.io/load-balance  string    nginx.ingress.kubernetes.io/upstream-vhost  string    nginx.ingress.kubernetes.io/whitelist-source-range  CIDR    nginx.ingress.kubernetes.io/proxy-buffering  string    nginx.ingress.kubernetes.io/ssl-ciphers  string    nginx.ingress.kubernetes.io/connection-proxy-header  string    nginx.ingress.kubernetes.io/enable-access-log  \"true\" or \"false\"    nginx.ingress.kubernetes.io/lua-resty-waf  string    nginx.ingress.kubernetes.io/lua-resty-waf-debug  \"true\" or \"false\"    nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets  string    nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules  string",
             "title": "Annotations"
         },
         {
@@ -192,7 +192,7 @@
         },
         {
             "location": "/user-guide/nginx-configuration/annotations/#custom-nginx-upstream-checks",
-            "text": "NGINX exposes some flags in the  upstream configuration  that enable the configuration of each server in the upstream. The Ingress controller allows custom  max_fails  and  fail_timeout  parameters in a global context using  upstream-max-fails  and  upstream-fail-timeout  in the NGINX ConfigMap or in a particular Ingress rule.  upstream-max-fails  defaults to 0. This means NGINX will respect the container's  readinessProbe  if it is defined. If there is no probe and no values for  upstream-max-fails  NGINX will continue to send traffic to the container.  With the default configuration NGINX will not health check your backends. Whenever the endpoints controller notices a readiness probe failure, that pod's IP will be removed from the list of endpoints. This will trigger the NGINX controller to also remove it from the upstreams.  To use custom values in an Ingress rule define these annotations:  nginx.ingress.kubernetes.io/upstream-max-fails : number of unsuccessful attempts to communicate with the server that should occur in the duration set by the  upstream-fail-timeout  parameter to consider the server unavailable.  nginx.ingress.kubernetes.io/upstream-fail-timeout : time in seconds during which the specified number of unsuccessful attempts to communicate with the server should occur to consider the server unavailable. This is also the period of time the server will be considered unavailable.  In NGINX, backend server pools are called \" upstreams \". Each upstream contains the endpoints for a service. An upstream is created for each service that has Ingress rules defined.  Important:  All Ingress rules using the same service will use the same upstream. Only one of the Ingress rules should define annotations to configure the upstream servers.  Please check the  custom upstream check  example.",
+            "text": "NGINX exposes some flags in the  upstream configuration  that enable the configuration of each server in the upstream. The Ingress controller allows custom  max_fails  and  fail_timeout  parameters in a global context using  upstream-max-fails  and  upstream-fail-timeout  in the NGINX ConfigMap or in a particular Ingress rule.  upstream-max-fails  defaults to 0. This means NGINX will respect the container's  readinessProbe  if it is defined. If there is no probe and no values for  upstream-max-fails  NGINX will continue to send traffic to the container.   Tip  With the default configuration NGINX will not health check your backends. Whenever the endpoints controller notices a readiness probe failure, that pod's IP will be removed from the list of endpoints. This will trigger the NGINX controller to also remove it from the upstreams.   To use custom values in an Ingress rule define these annotations:  nginx.ingress.kubernetes.io/upstream-max-fails : number of unsuccessful attempts to communicate with the server that should occur in the duration set by the  upstream-fail-timeout  parameter to consider the server unavailable.  nginx.ingress.kubernetes.io/upstream-fail-timeout : time in seconds during which the specified number of unsuccessful attempts to communicate with the server should occur to consider the server unavailable. This is also the period of time the server will be considered unavailable.  In NGINX, backend server pools are called \" upstreams \". Each upstream contains the endpoints for a service. An upstream is created for each service that has Ingress rules defined.   Important  All Ingress rules using the same service will use the same upstream. Only one of the Ingress rules should define annotations to configure the upstream servers.   Please check the  custom upstream check  example.",
             "title": "Custom NGINX upstream checks"
         },
         {
@@ -202,7 +202,7 @@
         },
         {
             "location": "/user-guide/nginx-configuration/annotations/#custom-nginx-load-balancing",
-            "text": "This is similar to https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#load-balance but configures load balancing algorithm per ingress.\nNote that  nginx.ingress.kubernetes.io/upstream-hash-by  takes preference over this. If this and  nginx.ingress.kubernetes.io/upstream-hash-by  are not set then we fallback to using globally configured load balancing algorithm.",
+            "text": "This is similar to (https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#load-balance) but configures the load balancing algorithm per ingress.   Note that  nginx.ingress.kubernetes.io/upstream-hash-by  takes preference over this. If this and  nginx.ingress.kubernetes.io/upstream-hash-by  are not set then we fall back to using the globally configured load balancing algorithm.",
             "title": "Custom NGINX load balancing"
         },
         {
@@ -212,7 +212,7 @@
         },
         {
             "location": "/user-guide/nginx-configuration/annotations/#client-certificate-authentication",
-            "text": "It is possible to enable Client Certificate Authentication using additional annotations in Ingress Rule.  The annotations are:  nginx.ingress.kubernetes.io/auth-tls-secret: secretName  The name of the Secret that contains the full Certificate Authority chain  ca.crt  that is enabled to authenticate against this Ingress.\nThis annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.  nginx.ingress.kubernetes.io/auth-tls-verify-depth  The validation depth between the provided client certificate and the Certification Authority chain.  nginx.ingress.kubernetes.io/auth-tls-verify-client  Enables verification of client certificates.  nginx.ingress.kubernetes.io/auth-tls-error-page  The URL/Page that user should be redirected in case of a Certificate Authentication Error  nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream  Indicates if the received certificates should be passed or not to the upstream server.\nBy default this is disabled.  Please check the  client-certs  example.  Important:  TLS with Client Authentication is NOT possible in Cloudflare as is not allowed it and might result in unexpected behavior.  Cloudflare only allows Authenticated Origin Pulls and is required to use their own certificate:\nhttps://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/  Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial:\nhttps://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls",
+            "text": "It is possible to enable Client Certificate Authentication using additional annotations in the Ingress rule.  The annotations are:  nginx.ingress.kubernetes.io/auth-tls-secret: secretName  The name of the Secret that contains the full Certificate Authority chain  ca.crt  that is used to authenticate against this Ingress.\nThis annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.  nginx.ingress.kubernetes.io/auth-tls-verify-depth  The validation depth between the provided client certificate and the Certification Authority chain.  nginx.ingress.kubernetes.io/auth-tls-verify-client  Enables verification of client certificates.  nginx.ingress.kubernetes.io/auth-tls-error-page  The URL/Page that the user should be redirected to in case of a Certificate Authentication Error  nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream  Indicates if the received certificates should be passed or not to the upstream server.\nBy default this is disabled.  Please check the  client-certs  example.   Important  TLS with Client Authentication is NOT possible in Cloudflare, as Cloudflare does not allow it, and might result in unexpected behavior.  Cloudflare only allows Authenticated Origin Pulls and requires the use of their own certificate:  https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/  Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial:  https://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls",
             "title": "Client Certificate Authentication"
         },
         {
@@ -227,22 +227,22 @@
         },
         {
             "location": "/user-guide/nginx-configuration/annotations/#enable-cors",
-            "text": "To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule add the annotation  nginx.ingress.kubernetes.io/enable-cors: \"true\" . This will add a section in the server location enabling this functionality.  CORS can be controlled with the following annotations:   nginx.ingress.kubernetes.io/cors-allow-methods  controls which methods are accepted. This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case).   Example:  nginx.ingress.kubernetes.io/cors-allow-methods: \"PUT, GET, POST, OPTIONS\"   nginx.ingress.kubernetes.io/cors-allow-headers  controls which headers are accepted. This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -.   Example:  nginx.ingress.kubernetes.io/cors-allow-headers: \"X-Forwarded-For, X-app123-XPTO\"   nginx.ingress.kubernetes.io/cors-allow-origin  controls what's the accepted Origin for CORS and defaults to '*'. This is a single field value, with the following format: http(s)://origin-site.com or http(s)://origin-site.com:port   Example:  nginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com:4443\"   nginx.ingress.kubernetes.io/cors-allow-credentials  controls if credentials can be passed during CORS operations.   Example:  nginx.ingress.kubernetes.io/cors-allow-credentials: \"true\"   nginx.ingress.kubernetes.io/cors-max-age  controls how long preflight requests can be cached.   Example:  nginx.ingress.kubernetes.io/cors-max-age: 600  For more information please check https://enable-cors.org/server_nginx.html",
+            "text": "To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule add the annotation  nginx.ingress.kubernetes.io/enable-cors: \"true\" . This will add a section in the server location enabling this functionality.  CORS can be controlled with the following annotations:   nginx.ingress.kubernetes.io/cors-allow-methods  controls which methods are accepted. This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case).   Example:  nginx.ingress.kubernetes.io/cors-allow-methods: \"PUT, GET, POST, OPTIONS\"   nginx.ingress.kubernetes.io/cors-allow-headers  controls which headers are accepted. This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -.   Example:  nginx.ingress.kubernetes.io/cors-allow-headers: \"X-Forwarded-For, X-app123-XPTO\"   nginx.ingress.kubernetes.io/cors-allow-origin  controls what's the accepted Origin for CORS and defaults to '*'. This is a single field value, with the following format: http(s)://origin-site.com or http(s)://origin-site.com:port   Example:  nginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com:4443\"   nginx.ingress.kubernetes.io/cors-allow-credentials  controls if credentials can be passed during CORS operations.   Example:  nginx.ingress.kubernetes.io/cors-allow-credentials: \"true\"   nginx.ingress.kubernetes.io/cors-max-age  controls how long preflight requests can be cached.   Example:  nginx.ingress.kubernetes.io/cors-max-age: 600  For more information please see  https://enable-cors.org",
             "title": "Enable CORS"
         },
         {
             "location": "/user-guide/nginx-configuration/annotations/#server-alias",
-            "text": "To add Server Aliases to an Ingress rule add the annotation  nginx.ingress.kubernetes.io/server-alias: \"\" .\nThis will create a server with the same configuration, but a different server_name as the provided host.  Note:  A server-alias name cannot conflict with the hostname of an existing server. If it does the server-alias\nannotation will be ignored. If a server-alias is created and later a new server with the same hostname is created\nthe new server configuration will take place over the alias configuration.  For more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name",
+            "text": "To add Server Aliases to an Ingress rule add the annotation  nginx.ingress.kubernetes.io/server-alias: \"\" .\nThis will create a server with the same configuration, but a different server_name than the provided host.   Note  A server-alias name cannot conflict with the hostname of an existing server. If it does, the server-alias annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created the new server configuration will take precedence over the alias configuration.   For more information please see  http://nginx.org",
             "title": "Server Alias"
         },
         {
             "location": "/user-guide/nginx-configuration/annotations/#server-snippet",
-            "text": "Using the annotation  nginx.ingress.kubernetes.io/server-snippet  it is possible to add custom configuration in the server configuration block.  apiVersion :   extensions/v1beta1  kind :   Ingress  metadata : \n   annotations : \n     nginx.ingress.kubernetes.io/server-snippet :   |  set $agentflag 0;  if ($http_user_agent ~* \"(Mobile)\" ){ \n   set $agentflag 1;  }  if ( $agentflag = 1 ) { \n   return 301 https://m.example.com;  }   Important:  This annotation can be used only once per host",
+            "text": "Using the annotation  nginx.ingress.kubernetes.io/server-snippet  it is possible to add custom configuration in the server configuration block.  apiVersion :   extensions/v1beta1  kind :   Ingress  metadata : \n   annotations : \n     nginx.ingress.kubernetes.io/server-snippet :   |  set $agentflag 0;  if ($http_user_agent ~* \"(Mobile)\" ){ \n   set $agentflag 1;  }  if ( $agentflag = 1 ) { \n   return 301 https://m.example.com;  }    Important  This annotation can be used only once per host",
             "title": "Server snippet"
         },
         {
             "location": "/user-guide/nginx-configuration/annotations/#client-body-buffer-size",
-            "text": "Sets buffer size for reading client request body per location. In case the request body is larger than the buffer,\nthe whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages.\nThis is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is\napplied to each location provided in the ingress rule.  Note:  The annotation value must be given in a valid format otherwise the\nFor example to set the client-body-buffer-size the following can be done:   nginx.ingress.kubernetes.io/client-body-buffer-size: \"1000\"  # 1000 bytes  nginx.ingress.kubernetes.io/client-body-buffer-size: 1k  # 1 kilobyte  nginx.ingress.kubernetes.io/client-body-buffer-size: 1K  # 1 kilobyte  nginx.ingress.kubernetes.io/client-body-buffer-size: 1m  # 1 megabyte  nginx.ingress.kubernetes.io/client-body-buffer-size: 1M  # 1 megabyte   For more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size",
+            "text": "Sets buffer size for reading client request body per location. In case the request body is larger than the buffer,\nthe whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages.\nThis is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is\napplied to each location provided in the ingress rule.   Note  The annotation value must be given in a valid format.\nFor example to set the client-body-buffer-size the following can be done:   nginx.ingress.kubernetes.io/client-body-buffer-size: \"1000\"  # 1000 bytes  nginx.ingress.kubernetes.io/client-body-buffer-size: 1k  # 1 kilobyte  nginx.ingress.kubernetes.io/client-body-buffer-size: 1K  # 1 kilobyte  nginx.ingress.kubernetes.io/client-body-buffer-size: 1m  # 1 megabyte  nginx.ingress.kubernetes.io/client-body-buffer-size: 1M  # 1 megabyte   For more information please see  http://nginx.org",
             "title": "Client Body Buffer Size"
         },
         {
@@ -262,12 +262,12 @@
         },
         {
             "location": "/user-guide/nginx-configuration/annotations/#ssl-passthrough",
-            "text": "The annotation  nginx.ingress.kubernetes.io/ssl-passthrough  allows to configure TLS termination in the pod and not in NGINX.  Important:   Using the annotation  nginx.ingress.kubernetes.io/ssl-passthrough  invalidates all the other available annotations. This is because SSL Passthrough works in L4 (TCP).  The use of this annotation requires Proxy Protocol to be enabled in the load-balancer. For example enabling Proxy Protocol for AWS ELB is described  here . If you're using ingress-controller without load balancer then the flag  --enable-ssl-passthrough  is required (by default it is disabled).",
+            "text": "The annotation  nginx.ingress.kubernetes.io/ssl-passthrough  makes it possible to configure TLS termination in the pod instead of in NGINX.   Important    Using the annotation  nginx.ingress.kubernetes.io/ssl-passthrough  invalidates all the other available annotations. This is because SSL Passthrough works in L4 (TCP).    The use of this annotation requires Proxy Protocol to be enabled in the load-balancer. For example enabling Proxy Protocol for AWS ELB is described  here . If you're using the ingress controller without a load balancer then the flag  --enable-ssl-passthrough  is required (by default it is disabled).",
             "title": "SSL Passthrough"
         },
         {
             "location": "/user-guide/nginx-configuration/annotations/#secure-backends",
-            "text": "By default NGINX uses  http  to reach the services. Adding the annotation  nginx.ingress.kubernetes.io/secure-backends: \"true\"  in the Ingress rule changes the protocol to  https .\nIf you want to validate the upstream against a specific certificate, you can create a secret with it and reference the secret with the annotation  nginx.ingress.kubernetes.io/secure-verify-ca-secret .  Please note that if an invalid or non-existent secret is given, the NGINX ingress controller will ignore the  secure-backends  annotation.",
+            "text": "By default NGINX uses  http  to reach the services. Adding the annotation  nginx.ingress.kubernetes.io/secure-backends: \"true\"  in the Ingress rule changes the protocol to  https .\nIf you want to validate the upstream against a specific certificate, you can create a secret with it and reference the secret with the annotation  nginx.ingress.kubernetes.io/secure-verify-ca-secret .   Note that if an invalid or non-existent secret is given, the NGINX ingress controller will ignore the  secure-backends  annotation.",
             "title": "Secure backends"
         },
         {
@@ -287,7 +287,7 @@
         },
         {
             "location": "/user-guide/nginx-configuration/annotations/#redirect-from-to-www",
-            "text": "In some scenarios is required to redirect from  www.domain.com  to  domain.com  or viceversa.\nTo enable this feature use the annotation  nginx.ingress.kubernetes.io/from-to-www-redirect: \"true\"  Important: \nIf at some point a new Ingress is created with a host equal to one of the options (like  domain.com ) the annotation will be omitted.",
+            "text": "In some scenarios it is required to redirect from  www.domain.com  to  domain.com  or vice versa.\nTo enable this feature use the annotation  nginx.ingress.kubernetes.io/from-to-www-redirect: \"true\"   Important  If at some point a new Ingress is created with a host equal to one of the options (like  domain.com ) the annotation will be omitted.",
             "title": "Redirect from to www"
         },
         {
@@ -297,7 +297,7 @@
         },
         {
             "location": "/user-guide/nginx-configuration/annotations/#cookie-affinity",
-            "text": "If you use the  cookie  type you can also specify the name of the cookie that will be used to route the requests with the annotation  nginx.ingress.kubernetes.io/session-cookie-name . The default is to create a cookie named 'INGRESSCOOKIE'.  In case of NGINX the annotation  nginx.ingress.kubernetes.io/session-cookie-hash  defines which algorithm will be used to 'hash' the used upstream. Default value is  md5  and possible values are  md5 ,  sha1  and  index .\nThe  index  option is not hashed, an in-memory index is used instead, it's quicker and the overhead is shorter Warning: the matching against upstream servers list is inconsistent. So, at reload, if upstreams servers has changed, index values are not guaranteed to correspond to the same server as before! USE IT WITH CAUTION and only if you need to!  In NGINX this feature is implemented by the third party module  nginx-sticky-module-ng . The workflow used to define which upstream server will be used is explained  here",
+            "text": "If you use the  cookie  type you can also specify the name of the cookie that will be used to route the requests with the annotation  nginx.ingress.kubernetes.io/session-cookie-name . The default is to create a cookie named 'INGRESSCOOKIE'.  In case of NGINX the annotation  nginx.ingress.kubernetes.io/session-cookie-hash  defines which algorithm will be used to 'hash' the used upstream. Default value is  md5  and possible values are  md5 ,  sha1  and  index .\nThe  index  option is not hashed; an in-memory index is used instead, which is quicker and has less overhead.   Warning  The matching against the upstream servers list is inconsistent. So, at reload, if upstream servers have changed, index values are not guaranteed to correspond to the same server as before!  USE IT WITH CAUTION  and only if you need to!  In NGINX this feature is implemented by the third party module  nginx-sticky-module-ng . The workflow used to define which upstream server will be used is explained  here",
             "title": "Cookie affinity"
         },
         {
@@ -335,19 +335,24 @@
            "text": "In some scenarios it could be required to disable NGINX access logs. To do this, use the annotation:  nginx.ingress.kubernetes.io/enable-access-log :   \"false\"",
             "title": "Enable Access Log"
         },
+        {
+            "location": "/user-guide/nginx-configuration/annotations/#enable-rewrite-log",
+            "text": "In some scenarios it could be required to enable NGINX rewrite logs. Note that rewrite logs are sent to the error_log file at the notice level. To enable this feature use the annotation:  nginx.ingress.kubernetes.io/enable-rewrite-log :   \"true\"",
+            "title": "Enable Rewrite Log"
+        },
         {
             "location": "/user-guide/nginx-configuration/annotations/#lua-resty-waf",
-            "text": "Using  lua-resty-waf-*  annotations we can enable and control  lua-resty-waf  per location.\nFollowing configuration will enable WAF for the paths defined in the corresponding ingress:  nginx.ingress.kubernetes.io/lua-resty-waf :   \"active\"   In order to run it in debugging mode you can set  nginx.ingress.kubernetes.io/lua-resty-waf-debug  to  \"true\"  in addition to the above configuration.\nThe other possible values for  nginx.ingress.kubernetes.io/lua-resty-waf  are  inactive  and  simulate . In  inactive  mode WAF won't do anything, whereas\nin  simulate  mode it will log a warning message if there's a matching WAF rule for given request. This is useful to debug a rule and eliminate possible false positives before fully deploying it.  lua-resty-waf  comes with predefined set of rules(https://github.com/p0pr0ck5/lua-resty-waf/tree/84b4f40362500dd0cb98b9e71b5875cb1a40f1ad/rules) that covers ModSecurity CRS.\nYou can use  nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets  to ignore subset of those rulesets. For an example:  nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets :   \"41000_sqli,   42000_xss\"   will ignore the two mentioned rulesets.  It is also possible to configure custom WAF rules per ingress using  nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules  annotation. 
For an example the following snippet will\nconfigure a WAF rule to deny requests with query string value that contains word  foo :  nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules :   '[=[   {   \"access\":   [   {   \"actions\":   {   \"disrupt\"   :   \"DENY\"   },   \"id\":   10001,   \"msg\":   \"my   custom   rule\",   \"operator\":   \"STR_CONTAINS\",   \"pattern\":   \"foo\",   \"vars\":   [   {   \"parse\":   [   \"values\",   1   ],   \"type\":   \"REQUEST_ARGS\"   }   ]   }   ],   \"body_filter\":   [],   \"header_filter\":[]   }   ]=]'   For details on how to write WAF rules, please refer to https://github.com/p0pr0ck5/lua-resty-waf.",
+            "text": "Using  lua-resty-waf-*  annotations we can enable and control  lua-resty-waf  per location.\nThe following configuration will enable WAF for the paths defined in the corresponding ingress:  nginx.ingress.kubernetes.io/lua-resty-waf :   \"active\"   In order to run it in debugging mode you can set  nginx.ingress.kubernetes.io/lua-resty-waf-debug  to  \"true\"  in addition to the above configuration.\nThe other possible values for  nginx.ingress.kubernetes.io/lua-resty-waf  are  inactive  and  simulate . In  inactive  mode WAF won't do anything, whereas\nin  simulate  mode it will log a warning message if there's a matching WAF rule for a given request. This is useful to debug a rule and eliminate possible false positives before fully deploying it.  lua-resty-waf  comes with a predefined set of rules  https://github.com/p0pr0ck5/lua-resty-waf/tree/84b4f40362500dd0cb98b9e71b5875cb1a40f1ad/rules  that covers the ModSecurity CRS.\nYou can use  nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets  to ignore a subset of those rulesets. For example,  nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets :   \"41000_sqli,   42000_xss\"   will ignore the two mentioned rulesets.  It is also possible to configure custom WAF rules per ingress using the  nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules  annotation. 
For example, the following snippet will\nconfigure a WAF rule to deny requests with a query string value that contains the word  foo :  nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules :   '[=[   {   \"access\":   [   {   \"actions\":   {   \"disrupt\"   :   \"DENY\"   },   \"id\":   10001,   \"msg\":   \"my   custom   rule\",   \"operator\":   \"STR_CONTAINS\",   \"pattern\":   \"foo\",   \"vars\":   [   {   \"parse\":   [   \"values\",   1   ],   \"type\":   \"REQUEST_ARGS\"   }   ]   }   ],   \"body_filter\":   [],   \"header_filter\":[]   }   ]=]'   For details on how to write WAF rules, please refer to  https://github.com/p0pr0ck5/lua-resty-waf .",
             "title": "Lua Resty WAF"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/",
            "text": "ConfigMaps\n\u00b6\n\n\nConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.\n\n\nThe ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system\ncomponents for the nginx-controller. Before you can begin using a config-map it must be \ndeployed\n.\n\n\nIn order to overwrite nginx-controller configuration values as seen in \nconfig.go\n,\nyou can add key-value pairs to the data section of the config-map. For example:\n\n\ndata\n:\n\n  \nmap-hash-bucket-size\n:\n \n\"128\"\n\n  \nssl-protocols\n:\n \nSSLv2\n\n\n\n\n\n\nIMPORTANT:\n\n\nThe keys and values in a ConfigMap can only be strings.\nThis means that if we want a value with boolean values we need to quote the values, like \"true\" or \"false\".\nSame for numbers, like \"100\".\n\n\n\"Slice\" types (defined below as \n[]string\n or \n[]int\n) can be provided as a comma-delimited string.\n\n\nConfiguration options\n\u00b6\n\n\nThe following table shows a configuration option's name, type, and the default value:\n\n\n\n\n\n\n\n\nname\n\n\ntype\n\n\ndefault\n\n\n\n\n\n\n\n\n\n\nadd-headers\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nallow-backend-server-header\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nhide-headers\n\n\nstring 
array\n\n\nempty\n\n\n\n\n\n\naccess-log-path\n\n\nstring\n\n\n\"/var/log/nginx/access.log\"\n\n\n\n\n\n\nerror-log-path\n\n\nstring\n\n\n\"/var/log/nginx/error.log\"\n\n\n\n\n\n\nenable-dynamic-tls-records\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nenable-modsecurity\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nenable-owasp-modsecurity-crs\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nclient-header-buffer-size\n\n\nstring\n\n\n\"1k\"\n\n\n\n\n\n\nclient-header-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\nclient-body-buffer-size\n\n\nstring\n\n\n\"8k\"\n\n\n\n\n\n\nclient-body-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\ndisable-access-log\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\ndisable-ipv6\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\ndisable-ipv6-dns\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\nenable-underscores-in-headers\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\nignore-invalid-headers\n\n\nbool\n\n\ntrue\n\n\n\n\n\n\nenable-vts-status\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\nvts-status-zone-size\n\n\nstring\n\n\n\"10m\"\n\n\n\n\n\n\nvts-sum-key\n\n\nstring\n\n\n\"*\"\n\n\n\n\n\n\nvts-default-filter-key\n\n\nstring\n\n\n\"$geoip_country_code country::*\"\n\n\n\n\n\n\nretry-non-idempotent\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nerror-log-level\n\n\nstring\n\n\n\"notice\"\n\n\n\n\n\n\nhttp2-max-field-size\n\n\nstring\n\n\n\"4k\"\n\n\n\n\n\n\nhttp2-max-header-size\n\n\nstring\n\n\n\"16k\"\n\n\n\n\n\n\nhsts\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nhsts-include-subdomains\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nhsts-max-age\n\n\nstring\n\n\n\"15724800\"\n\n\n\n\n\n\nhsts-preload\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nkeep-alive\n\n\nint\n\n\n75\n\n\n\n\n\n\nkeep-alive-requests\n\n\nint\n\n\n100\n\n\n\n\n\n\nlarge-client-header-buffers\n\n\nstring\n\n\n\"4 8k\"\n\n\n\n\n\n\nlog-format-escape-json\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nlog-format-upstream\n\n\nstring\n\n\n%v\n \n-\n \n[\n$the_real_ip\n]\n \n-\n \n$remote_user\n \n[\n$time_local\n]\n \n\"$request\"\n \n$status\n \n$body_bytes_sent\n \n\"$http_referer\"\n \n\"$http_user_agent\"\n 
\n$request_length\n \n$request_time\n \n[\n$proxy_upstream_name\n]\n \n$upstream_addr\n \n$upstream_response_length\n \n$upstream_response_time\n \n$upstream_status\n\n\n\n\n\n\nlog-format-stream\n\n\nstring\n\n\n[$time_local] $protocol $status $bytes_sent $bytes_received $session_time\n\n\n\n\n\n\nmax-worker-connections\n\n\nint\n\n\n16384\n\n\n\n\n\n\nmap-hash-bucket-size\n\n\nint\n\n\n64\n\n\n\n\n\n\nnginx-status-ipv4-whitelist\n\n\n[]string\n\n\n\"127.0.0.1\"\n\n\n\n\n\n\nnginx-status-ipv6-whitelist\n\n\n[]string\n\n\n\"::1\"\n\n\n\n\n\n\nproxy-real-ip-cidr\n\n\n[]string\n\n\n\"0.0.0.0/0\"\n\n\n\n\n\n\nproxy-set-headers\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nserver-name-hash-max-size\n\n\nint\n\n\n1024\n\n\n\n\n\n\nserver-name-hash-bucket-size\n\n\nint\n\n\n\n\n\n\n\n\n\nproxy-headers-hash-max-size\n\n\nint\n\n\n512\n\n\n\n\n\n\nproxy-headers-hash-bucket-size\n\n\nint\n\n\n64\n\n\n\n\n\n\nserver-tokens\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nssl-ciphers\n\n\nstring\n\n\n\"ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\"\n\n\n\n\n\n\nssl-ecdh-curve\n\n\nstring\n\n\n\"auto\"\n\n\n\n\n\n\nssl-dh-param\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nssl-protocols\n\n\nstring\n\n\n\"TLSv1.2\"\n\n\n\n\n\n\nssl-session-cache\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nssl-session-cache-size\n\n\nstring\n\n\n\"10m\"\n\n\n\n\n\n\nssl-session-tickets\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nssl-session-ticket-key\n\n\nstring\n\n\n\n\n\n\n\n\n\nssl-session-timeout\n\n\nstring\n\n\n\"10m\"\n\n\n\n\n\n\nssl-buffer-size\n\n\nstring\n\n\n\"4k\"\n\n\n\n\n\n\nuse-proxy-protocol\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nuse-gzip\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nuse-geoip\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nenable-brotli\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nbrotli-level\n\n\nint\n\n\n4\n
\n\n\n\n\n\nbrotli-types\n\n\nstring\n\n\n\"application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\"\n\n\n\n\n\n\nuse-http2\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\ngzip-types\n\n\nstring\n\n\n\"application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\"\n\n\n\n\n\n\nworker-processes\n\n\nstring\n\n\n\n\n\n\n\n\n\nworker-cpu-affinity\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nworker-shutdown-timeout\n\n\nstring\n\n\n\"10s\"\n\n\n\n\n\n\nload-balance\n\n\nstring\n\n\n\"least_conn\"\n\n\n\n\n\n\nvariables-hash-bucket-size\n\n\nint\n\n\n128\n\n\n\n\n\n\nvariables-hash-max-size\n\n\nint\n\n\n2048\n\n\n\n\n\n\nupstream-keepalive-connections\n\n\nint\n\n\n32\n\n\n\n\n\n\nlimit-conn-zone-variable\n\n\nstring\n\n\n\"$binary_remote_addr\"\n\n\n\n\n\n\nproxy-stream-timeout\n\n\nstring\n\n\n\"600s\"\n\n\n\n\n\n\nproxy-stream-responses\n\n\nint\n\n\n1\n\n\n\n\n\n\nbind-address-ipv4\n\n\n[]string\n\n\n\"\"\n\n\n\n\n\n\nbind-address-ipv6\n\n\n[]string\n\n\n\"\"\n\n\n\n\n\n\nforwarded-for-header\n\n\nstring\n\n\n\"X-Forwarded-For\"\n\n\n\n\n\n\ncompute-full-forwarded-for\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nproxy-add-original-uri-header\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nenable-opentracing\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nzipkin-collector-host\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nzipkin-collector-port\n\n\nint\n\n\n9411\n\n\n\n\n\n\nzipkin-service-name\n\n\nstring\n\n\n\"nginx\"\n\n\n\n\n\n\njaeger-collector-host\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\njaeger-collector-por
t\n\n\nint\n\n\n6831\n\n\n\n\n\n\njaeger-service-name\n\n\nstring\n\n\n\"nginx\"\n\n\n\n\n\n\njaeger-sampler-type\n\n\nstring\n\n\n\"const\"\n\n\n\n\n\n\njaeger-sampler-param\n\n\nstring\n\n\n\"1\"\n\n\n\n\n\n\nhttp-snippet\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nserver-snippet\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nlocation-snippet\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\ncustom-http-errors\n\n\n[]int\n\n\n[]int{}\n\n\n\n\n\n\nproxy-body-size\n\n\nstring\n\n\n\"1m\"\n\n\n\n\n\n\nproxy-connect-timeout\n\n\nint\n\n\n5\n\n\n\n\n\n\nproxy-read-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\nproxy-send-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\nproxy-buffer-size\n\n\nstring\n\n\n\"4k\"\n\n\n\n\n\n\nproxy-cookie-path\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nproxy-cookie-domain\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nproxy-next-upstream\n\n\nstring\n\n\n\"error timeout invalid_header http_502 http_503 http_504\"\n\n\n\n\n\n\nproxy-next-upstream-tries\n\n\nint\n\n\n0\n\n\n\n\n\n\nproxy-redirect-from\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nproxy-request-buffering\n\n\nstring\n\n\n\"on\"\n\n\n\n\n\n\nssl-redirect\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nwhitelist-source-range\n\n\n[]string\n\n\n[]string{}\n\n\n\n\n\n\nskip-access-log-urls\n\n\n[]string\n\n\n[]string{}\n\n\n\n\n\n\nlimit-rate\n\n\nint\n\n\n0\n\n\n\n\n\n\nlimit-rate-after\n\n\nint\n\n\n0\n\n\n\n\n\n\nhttp-redirect-code\n\n\nint\n\n\n308\n\n\n\n\n\n\nproxy-buffering\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nlimit-req-status-code\n\n\nint\n\n\n503\n\n\n\n\n\n\nno-tls-redirect-locations\n\n\nstring\n\n\n\"/.well-known/acme-challenge\"\n\n\n\n\n\n\nno-auth-locations\n\n\nstring\n\n\n\"/.well-known/acme-challenge\"\n\n\n\n\n\n\n\n\nadd-headers\n\u00b6\n\n\nSets custom headers from a named configmap before sending traffic to the client. See \nproxy-set-headers\n. \nexample\n\n\nallow-backend-server-header\n\u00b6\n\n\nEnables the return of the header Server from the backend instead of the generic nginx string. 
By default this is disabled.\n\n\nhide-headers\n\u00b6\n\n\nSets additional header that will not be passed from the upstream server to the client response.\nDefault: empty\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header\n\n\naccess-log-path\n\u00b6\n\n\nAccess log path. Goes to \n/var/log/nginx/access.log\n by default.\n\n\nNote:\n the file \n/var/log/nginx/access.log\n is a symlink to \n/dev/stdout\n\n\nerror-log-path\n\u00b6\n\n\nError log path. Goes to \n/var/log/nginx/error.log\n by default.\n\n\nNote:\n the file \n/var/log/nginx/error.log\n is a symlink to \n/dev/stderr\n\n\nReferences:\n\n- http://nginx.org/en/docs/ngx_core_module.html#error_log\n\n\nenable-dynamic-tls-records\n\u00b6\n\n\nEnables dynamically sized TLS records to improve time-to-first-byte. By default this is enabled. See \nCloudFlare's blog\n for more information.\n\n\nenable-modsecurity\n\u00b6\n\n\nEnables the modsecurity module for NGINX. By default this is disabled.\n\n\nenable-owasp-modsecurity-crs\n\u00b6\n\n\nEnables the OWASP ModSecurity Core Rule Set (CRS). 
By default this is disabled.\n\n\nclient-header-buffer-size\n\u00b6\n\n\nAllows to configure a custom buffer size for reading client request header.\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size\n\n\nclient-header-timeout\n\u00b6\n\n\nDefines a timeout for reading client request header, in seconds.\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout\n\n\nclient-body-buffer-size\n\u00b6\n\n\nSets buffer size for reading client request body.\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size\n\n\nclient-body-timeout\n\u00b6\n\n\nDefines a timeout for reading client request body, in seconds.\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout\n\n\ndisable-access-log\n\u00b6\n\n\nDisables the Access Log from the entire Ingress Controller. This is '\"false\"' by default.\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log\n\n\ndisable-ipv6\n\u00b6\n\n\nDisable listening on IPV6. By default this is disabled.\n\n\ndisable-ipv6-dns\n\u00b6\n\n\nDisable IPV6 for nginx DNS resolver. By default this is disabled.\n\n\nenable-underscores-in-headers\n\u00b6\n\n\nEnables underscores in header names. By default this is disabled.\n\n\nignore-invalid-headers\n\u00b6\n\n\nSet if header fields with invalid names should be ignored.\nBy default this is enabled.\n\n\nenable-vts-status\n\u00b6\n\n\nAllows the replacement of the default status page with a third party module named \nnginx-module-vts\n.\nBy default this is disabled.\n\n\nvts-status-zone-size\n\u00b6\n\n\nVts config on http level sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processes. 
Default value is 10m\n\n\nReferences:\n\n- https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone\n\n\nvts-default-filter-key\n\u00b6\n\n\nVts config on http level enables the keys by user defined variable. The key is a key string to calculate traffic. The name is a group string to calculate traffic. The key and name can contain variables such as $host, $server_name. The name's group belongs to filterZones if specified. The key's group belongs to serverZones if not specified second argument name. Default value is $geoip_country_code country::*\n\n\nReferences:\n\n- https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_filter_by_set_key\n\n\nvts-sum-key\n\u00b6\n\n\nFor metrics keyed (or when using Prometheus, labeled) by server zone, this value is used to indicate metrics for all server zones combined. Default value is *\n\n\nReferences:\n\n- https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_display_sum_key\n\n\nretry-non-idempotent\n\u00b6\n\n\nSince 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value \"true\".\n\n\nerror-log-level\n\u00b6\n\n\nConfigures the logging level of errors. 
Log levels above are listed in the order of increasing severity.\n\n\nReferences:\n\n- http://nginx.org/en/docs/ngx_core_module.html#error_log\n\n\nhttp2-max-field-size\n\u00b6\n\n\nLimits the maximum size of an HPACK-compressed request header field.\n\n\nReferences:\n\n- https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size\n\n\nhttp2-max-header-size\n\u00b6\n\n\nLimits the maximum size of the entire request header list after HPACK decompression.\n\n\nReferences:\n\n- https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size\n\n\nhsts\n\u00b6\n\n\nEnables or disables the header HSTS in servers running SSL.\nHTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be communicated with using HTTPS, instead of using HTTP. It provides protection against protocol downgrade attacks and cookie theft.\n\n\nReferences:\n\n- https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security\n- https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server\n\n\nhsts-include-subdomains\n\u00b6\n\n\nEnables or disables the use of HSTS in all the subdomains of the server-name.\n\n\nhsts-max-age\n\u00b6\n\n\nSets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.\n\n\nhsts-preload\n\u00b6\n\n\nEnables or disables the preload attribute in the HSTS feature (when it is enabled).\n\n\nkeep-alive\n\u00b6\n\n\nSets the time during which a keep-alive client connection will stay open on the server side. 
The zero value disables keep-alive client connections.\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout\n\n\nkeep-alive-requests\n\u00b6\n\n\nSets the maximum number of requests that can be served through one keep-alive connection.\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests\n\n\nlarge-client-header-buffers\n\u00b6\n\n\nSets the maximum number and size of buffers used for reading large client request header. Default: 4 8k.\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers\n\n\nlog-format-escape-json\n\u00b6\n\n\nSets if the escape parameter allows JSON (\"true\") or default characters escaping in variables (\"false\") in the nginx \nlog format\n.\n\n\nlog-format-upstream\n\u00b6\n\n\nSets the nginx \nlog format\n.\nExample for JSON output:\n\n\nlog-format-upstream: '{ \"time\": \"$time_iso8601\", \"remote_addr\": \"$proxy_protocol_addr\",\"x-forward-for\": \"$proxy_add_x_forwarded_for\", \"request_id\": \"$req_id\", \"remote_user\":\"$remote_user\", \"bytes_sent\": $bytes_sent, \"request_time\": $request_time, \"status\":$status, \"vhost\": \"$host\", \"request_proto\": \"$server_protocol\", \"path\": \"$uri\",\"request_query\": \"$args\", \"request_length\": $request_length, \"duration\": $request_time,\"method\": \"$request_method\", \"http_referrer\": \"$http_referer\", \"http_user_agent\":\"$http_user_agent\" }'\n\n\nPlease check \nlog-format\n for the definition of each field.\n\n\nlog-format-stream\n\u00b6\n\n\nSets the nginx \nstream format\n.\n\n\nmax-worker-connections\n\u00b6\n\n\nSets the maximum number of simultaneous connections that can be opened by each \nworker process\n\n\nmap-hash-bucket-size\n\u00b6\n\n\nSets the bucket size for the \nmap variables hash tables\n. 
The details of setting up hash tables are provided in a separate \ndocument\n.\n\n\nproxy-real-ip-cidr\n\u00b6\n\n\nIf use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default IP/network address of your external load balancer.\n\n\nproxy-set-headers\n\u00b6\n\n\nSets custom headers from a named configmap before sending traffic to backends. The value format is namespace/name.  See \nexample\n\n\nserver-name-hash-max-size\n\u00b6\n\n\nSets the maximum size of the \nserver names hash tables\n used in server names, map directive\u2019s values, MIME types, names of request header strings, etc.\n\n\nReferences:\n\n- http://nginx.org/en/docs/hash.html\n\n\nserver-name-hash-bucket-size\n\u00b6\n\n\nSets the size of the bucket for the server names hash tables.\n\n\nReferences:\n\n- http://nginx.org/en/docs/hash.html\n- http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size\n\n\nproxy-headers-hash-max-size\n\u00b6\n\n\nSets the maximum size of the proxy headers hash tables.\n\n\nReferences:\n\n- http://nginx.org/en/docs/hash.html\n- https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size\n\n\nproxy-headers-hash-bucket-size\n\u00b6\n\n\nSets the size of the bucket for the proxy headers hash tables.\n\n\nReferences:\n\n- http://nginx.org/en/docs/hash.html\n- https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size\n\n\nserver-tokens\n\u00b6\n\n\nSend NGINX Server header in responses and display NGINX version in error pages. By default this is enabled.\n\n\nssl-ciphers\n\u00b6\n\n\nSets the \nciphers\n list to enable. 
The ciphers are specified in the format understood by the OpenSSL library.\n\n\nThe default cipher list is:\n \nECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\n.\n\n\nThe ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect \nforward secrecy\n.\n\n\nPlease check the \nMozilla SSL Configuration Generator\n.\n\n\nssl-ecdh-curve\n\u00b6\n\n\nSpecifies a curve for ECDHE ciphers.\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve\n\n\nssl-dh-param\n\u00b6\n\n\nSets the name of the secret that contains Diffie-Hellman key to help with \"Perfect Forward Secrecy\".\n\n\nReferences:\n\n- https://wiki.openssl.org/index.php/Diffie-Hellman_parameters\n- https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam\n- http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam\n\n\nssl-protocols\n\u00b6\n\n\nSets the \nSSL protocols\n to use. The default is: \nTLSv1.2\n.\n\n\nPlease check the result of the configuration using \nhttps://ssllabs.com/ssltest/analyze.html\n or \nhttps://testssl.sh\n.\n\n\nssl-session-cache\n\u00b6\n\n\nEnables or disables the use of shared \nSSL cache\n among worker processes.\n\n\nssl-session-cache-size\n\u00b6\n\n\nSets the size of the \nSSL shared session cache\n between all worker processes.\n\n\nssl-session-tickets\n\u00b6\n\n\nEnables or disables session resumption through \nTLS session tickets\n.\n\n\nssl-session-ticket-key\n\u00b6\n\n\nSets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string.\n\n\nTLS session ticket-key\n, by default, a randomly generated key is used. 
To create a ticket: \nopenssl rand 80 | base64 -w0\n\n\nssl-session-timeout\n\u00b6\n\n\nSets the time during which a client may \nreuse the session\n parameters stored in a cache.\n\n\nssl-buffer-size\n\u00b6\n\n\nSets the size of the \nSSL buffer\n used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).\n\n\nReferences:\n\n- https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/\n\n\nuse-proxy-protocol\n\u00b6\n\n\nEnables or disables the \nPROXY protocol\n to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).\n\n\nuse-gzip\n\u00b6\n\n\nEnables or disables compression of HTTP responses using the \n\"gzip\" module\n.\nThe default mime type list to compress is: \napplication/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\n.\n\n\nuse-geoip\n\u00b6\n\n\nEnables or disables \n\"geoip\" module\n that creates variables with values depending on the client IP address, using the precompiled MaxMind databases.\nThe default value is true.\n\n\nenable-brotli\n\u00b6\n\n\nEnables or disables compression of HTTP responses using the \n\"brotli\" module\n.\nThe default mime type list to compress is: \napplication/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\n. 
This is \ndisabled\n by default.\n\n\nNote:\n Brotli does not work in Safari < 11 https://caniuse.com/#feat=brotli\n\n\nbrotli-level\n\u00b6\n\n\nSets the Brotli Compression Level that will be used. \nDefaults to\n 4.\n\n\nbrotli-types\n\u00b6\n\n\nSets the MIME Types that will be compressed on-the-fly by brotli.\n\nDefaults to\n \napplication/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\n.\n\n\nuse-http2\n\u00b6\n\n\nEnables or disables \nHTTP/2\n support in secure connections.\n\n\ngzip-types\n\u00b6\n\n\nSets the MIME types in addition to \"text/html\" to compress. The special value \"*\" matches any MIME type. Responses with the \"text/html\" type are always compressed if \nuse-gzip\n is enabled.\n\n\nworker-processes\n\u00b6\n\n\nSets the number of \nworker processes\n.\nThe default of \"auto\" means the number of available CPU cores.\n\n\nworker-cpu-affinity\n\u00b6\n\n\nBinds worker processes to the sets of CPUs. \nworker_cpu_affinity\n.\nBy default worker processes are not bound to any specific CPUs. The value can be:\n\n\n\n\n\"\": empty string indicates no affinity is applied.\n\n\ncpumask: e.g. \n0001 0010 0100 1000\n to bind processes to specific cpus.\n\n\nauto: binding worker processes automatically to available CPUs.\n\n\n\n\nworker-shutdown-timeout\n\u00b6\n\n\nSets a timeout for Nginx to \nwait for worker to gracefully shutdown\n. 
The default is \"10s\".\n\n\nload-balance\n\u00b6\n\n\nSets the algorithm to use for load balancing.\nThe value can either be:\n\n\n\n\nround_robin: to use the default round robin loadbalancer\n\n\nleast_conn: to use the least connected method\n\n\nip_hash: to use a hash of the server for routing.\n\n\newma: to use the peak ewma method for routing (only available with \nenable-dynamic-configuration\n flag) \n\n\n\n\nThe default is least_conn.\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/load_balancing.html.\n\n\nvariables-hash-bucket-size\n\u00b6\n\n\nSets the bucket size for the variables hash table.\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size\n\n\nvariables-hash-max-size\n\u00b6\n\n\nSets the maximum size of the variables hash table.\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size\n\n\nupstream-keepalive-connections\n\u00b6\n\n\nActivates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this\nnumber is exceeded, the least recently used connections are closed. Default: 32\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive\n\n\nlimit-conn-zone-variable\n\u00b6\n\n\nSets parameters for a shared memory zone that will keep states for various keys of \nlimit_conn_zone\n. The default of \"$binary_remote_addr\" variable\u2019s size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.\n\n\nproxy-stream-timeout\n\u00b6\n\n\nSets the timeout between two successive read or write operations on client or proxied server connections. 
If no data is transmitted within this time, the connection is closed.\n\n\nReferences:\n\n- http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout\n\n\nproxy-stream-responses\n\u00b6\n\n\nSets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used.\n\n\nReferences:\n\n- http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses\n\n\nbind-address-ipv4\n\u00b6\n\n\nSets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.\n\n\nbind-address-ipv6\n\u00b6\n\n\nSets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.\n\n\nforwarded-for-header\n\u00b6\n\n\nSets the header field for identifying the originating IP address of a client. Default is X-Forwarded-For\n\n\ncompute-full-forwarded-for\n\u00b6\n\n\nAppend the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.\n\n\nproxy-add-original-uri-header\n\u00b6\n\n\nAdds an X-Original-Uri header with the original request URI to the backend request\n\n\nenable-opentracing\n\u00b6\n\n\nEnables the nginx Opentracing extension. By default this is disabled.\n\n\nReferences:\n\n- https://github.com/opentracing-contrib/nginx-opentracing\n\n\nzipkin-collector-host\n\u00b6\n\n\nSpecifies the host to use when uploading traces. It must be a valid URL.\n\n\nzipkin-collector-port\n\u00b6\n\n\nSpecifies the port to use when uploading traces. Default: 9411\n\n\nzipkin-service-name\n\u00b6\n\n\nSpecifies the service name to use for any traces created. 
Default: nginx\n\n\njaeger-collector-host\n\u00b6\n\n\nSpecifies the host to use when uploading traces. It must be a valid URL.\n\n\njaeger-collector-port\n\u00b6\n\n\nSpecifies the port to use when uploading traces. Default: 6831\n\n\njaeger-service-name\n\u00b6\n\n\nSpecifies the service name to use for any traces created. Default: nginx\n\n\njaeger-sampler-type\n\u00b6\n\n\nSpecifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. Default: const.\n\n\njaeger-sampler-param\n\u00b6\n\n\nSpecifies the argument to be passed to the sampler constructor. Must be a number.\nFor const this should be 0 to never sample and 1 to always sample. Default: 1\n\n\nhttp-snippet\n\u00b6\n\n\nAdds custom configuration to the http section of the nginx configuration.\nDefault: \"\"\n\n\nserver-snippet\n\u00b6\n\n\nAdds custom configuration to all the servers in the nginx configuration.\nDefault: \"\"\n\n\nlocation-snippet\n\u00b6\n\n\nAdds custom configuration to all the locations in the nginx configuration.\nDefault: \"\"\n\n\ncustom-http-errors\n\u00b6\n\n\nSets which HTTP codes should be passed for processing with the \nerror_page directive\n\n\nSetting at least one code also enables \nproxy_intercept_errors\n, which is required to process error_page.\n\n\nExample usage: \ncustom-http-errors: 404,415\n\n\nproxy-body-size\n\u00b6\n\n\nSets the maximum allowed size of the client request body.\nSee NGINX \nclient_max_body_size\n.\n\n\nproxy-connect-timeout\n\u00b6\n\n\nSets the timeout for \nestablishing a connection with a proxied server\n. It should be noted that this timeout cannot usually exceed 75 seconds.\n\n\nproxy-read-timeout\n\u00b6\n\n\nSets the timeout in seconds for \nreading a response from the proxied server\n. 
The timeout is set only between two successive read operations, not for the transmission of the whole response.\n\n\nproxy-send-timeout\n\u00b6\n\n\nSets the timeout in seconds for \ntransmitting a request to the proxied server\n. The timeout is set only between two successive write operations, not for the transmission of the whole request.\n\n\nproxy-buffer-size\n\u00b6\n\n\nSets the size of the buffer used for \nreading the first part of the response\n received from the proxied server. This part usually contains a small response header.\n\n\nproxy-cookie-path\n\u00b6\n\n\nSets a text that \nshould be changed in the path attribute\n of the \u201cSet-Cookie\u201d header fields of a proxied server response.\n\n\nproxy-cookie-domain\n\u00b6\n\n\nSets a text that \nshould be changed in the domain attribute\n of the \u201cSet-Cookie\u201d header fields of a proxied server response.\n\n\nproxy-next-upstream\n\u00b6\n\n\nSpecifies in \nwhich cases\n a request should be passed to the next server.\n\n\nproxy-next-upstream-tries\n\u00b6\n\n\nLimits the number of \npossible tries\n for passing a request to the next server.\n\n\nproxy-redirect-from\n\u00b6\n\n\nSets the original text that should be changed in the \"Location\" and \"Refresh\" header fields of a proxied server response. Default: off.\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect\n\n\nproxy-request-buffering\n\u00b6\n\n\nEnables or disables \nbuffering of a client request body\n.\n\n\nssl-redirect\n\u00b6\n\n\nSets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule).\nDefault is \"true\".\n\n\nwhitelist-source-range\n\u00b6\n\n\nSets the default whitelisted IPs for each \nserver\n block. This can be overwritten by an annotation on an Ingress rule.\nSee \nngx_http_access_module\n.\n\n\nskip-access-log-urls\n\u00b6\n\n\nSets a list of URLs that should not appear in the NGINX access log. 
This is useful with urls like \n/health\n or \nhealth-check\n that make reading the logs \"complex\". By default this list is empty.\n\n\nlimit-rate\n\u00b6\n\n\nLimits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per a request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate\n\n\nlimit-rate-after\n\u00b6\n\n\nSets the initial amount after which the further transmission of a response to a client will be rate limited.\n\n\nReferences:\n\n- http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after\n\n\nhttp-redirect-code\n\u00b6\n\n\nSets the HTTP status code to be used in redirects.\nSupported codes are \n301\n,\n302\n,\n307\n and \n308\n.\nDefault code is 308.\n\n\nWhy is the default code 308?\n\n\nRFC 7238\n was created to define the 308 (Permanent Redirect) status code, which is similar to 301 (Moved Permanently) but keeps the payload in the redirect. This is important if we send a redirect in methods like POST.\n\n\nproxy-buffering\n\u00b6\n\n\nEnables or disables \nbuffering of responses from the proxied server\n.\n\n\nlimit-req-status-code\n\u00b6\n\n\nSets the \nstatus code to return in response to rejected requests\n. Default: 503\n\n\nno-tls-redirect-locations\n\u00b6\n\n\nA comma-separated list of locations on which http requests will never get redirected to their https counterpart.\nDefault: \"/.well-known/acme-challenge\"\n\n\nno-auth-locations\n\u00b6\n\n\nA comma-separated list of locations that should not get authenticated.\nDefault: \"/.well-known/acme-challenge\"",
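Several of the options above combine naturally in a single ConfigMap. A minimal sketch, assuming the ConfigMap name and namespace match whatever the controller's --configmap flag points at (both names here are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # must match the value passed to the controller's --configmap flag
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # ConfigMap values can only be strings, so numbers are quoted
  http-redirect-code: "301"
  limit-req-status-code: "429"
  no-auth-locations: "/.well-known/acme-challenge,/healthz"
```

Apply it with kubectl apply; the controller picks up ConfigMap changes without a restart.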
+            "text": "ConfigMaps\n\u00b6\n\n\nConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.\n\n\nThe ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system\ncomponents for the nginx-controller. Before you can begin using a config-map it must be \ndeployed\n.\n\n\nIn order to overwrite nginx-controller configuration values as seen in \nconfig.go\n,\nyou can add key-value pairs to the data section of the config-map. For example:\n\n\ndata\n:\n\n  \nmap-hash-bucket-size\n:\n \n\"128\"\n\n  \nssl-protocols\n:\n \nSSLv2\n\n\n\n\n\n\n\n\nImportant\n\n\nThe keys and values in a ConfigMap can only be strings.\nThis means that if we want a boolean or numeric value we need to quote it, like \"true\", \"false\" or \"100\".\n\n\n\"Slice\" types (defined below as \n[]string\n or \n[]int\n) can be provided as a comma-delimited string.\n\n\n\n\nConfiguration options\n\u00b6\n\n\nThe following table shows a configuration option's name, type, and the default value:\n\n\n\n\n\n\n\n\nname\n\n\ntype\n\n\ndefault\n\n\n\n\n\n\n\n\n\n\nadd-headers\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nallow-backend-server-header\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nhide-headers\n\n\nstring 
array\n\n\nempty\n\n\n\n\n\n\naccess-log-path\n\n\nstring\n\n\n\"/var/log/nginx/access.log\"\n\n\n\n\n\n\nerror-log-path\n\n\nstring\n\n\n\"/var/log/nginx/error.log\"\n\n\n\n\n\n\nenable-dynamic-tls-records\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nenable-modsecurity\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nenable-owasp-modsecurity-crs\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nclient-header-buffer-size\n\n\nstring\n\n\n\"1k\"\n\n\n\n\n\n\nclient-header-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\nclient-body-buffer-size\n\n\nstring\n\n\n\"8k\"\n\n\n\n\n\n\nclient-body-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\ndisable-access-log\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\ndisable-ipv6\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\ndisable-ipv6-dns\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\nenable-underscores-in-headers\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\nignore-invalid-headers\n\n\nbool\n\n\ntrue\n\n\n\n\n\n\nenable-vts-status\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\nvts-status-zone-size\n\n\nstring\n\n\n\"10m\"\n\n\n\n\n\n\nvts-sum-key\n\n\nstring\n\n\n\"*\"\n\n\n\n\n\n\nvts-default-filter-key\n\n\nstring\n\n\n\"$geoip_country_code country::*\"\n\n\n\n\n\n\nretry-non-idempotent\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nerror-log-level\n\n\nstring\n\n\n\"notice\"\n\n\n\n\n\n\nhttp2-max-field-size\n\n\nstring\n\n\n\"4k\"\n\n\n\n\n\n\nhttp2-max-header-size\n\n\nstring\n\n\n\"16k\"\n\n\n\n\n\n\nhsts\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nhsts-include-subdomains\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nhsts-max-age\n\n\nstring\n\n\n\"15724800\"\n\n\n\n\n\n\nhsts-preload\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nkeep-alive\n\n\nint\n\n\n75\n\n\n\n\n\n\nkeep-alive-requests\n\n\nint\n\n\n100\n\n\n\n\n\n\nlarge-client-header-buffers\n\n\nstring\n\n\n\"4 8k\"\n\n\n\n\n\n\nlog-format-escape-json\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nlog-format-upstream\n\n\nstring\n\n\n%v\n \n-\n \n[\n$the_real_ip\n]\n \n-\n \n$remote_user\n \n[\n$time_local\n]\n \n\"$request\"\n \n$status\n \n$body_bytes_sent\n \n\"$http_referer\"\n \n\"$http_user_agent\"\n 
\n$request_length\n \n$request_time\n \n[\n$proxy_upstream_name\n]\n \n$upstream_addr\n \n$upstream_response_length\n \n$upstream_response_time\n \n$upstream_status\n\n\n\n\n\n\nlog-format-stream\n\n\nstring\n\n\n[$time_local] $protocol $status $bytes_sent $bytes_received $session_time\n\n\n\n\n\n\nmax-worker-connections\n\n\nint\n\n\n16384\n\n\n\n\n\n\nmap-hash-bucket-size\n\n\nint\n\n\n64\n\n\n\n\n\n\nnginx-status-ipv4-whitelist\n\n\n[]string\n\n\n\"127.0.0.1\"\n\n\n\n\n\n\nnginx-status-ipv6-whitelist\n\n\n[]string\n\n\n\"::1\"\n\n\n\n\n\n\nproxy-real-ip-cidr\n\n\n[]string\n\n\n\"0.0.0.0/0\"\n\n\n\n\n\n\nproxy-set-headers\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nserver-name-hash-max-size\n\n\nint\n\n\n1024\n\n\n\n\n\n\nserver-name-hash-bucket-size\n\n\nint\n\n\n\n\n\n\n\n\n\nproxy-headers-hash-max-size\n\n\nint\n\n\n512\n\n\n\n\n\n\nproxy-headers-hash-bucket-size\n\n\nint\n\n\n64\n\n\n\n\n\n\nserver-tokens\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nssl-ciphers\n\n\nstring\n\n\n\"ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\"\n\n\n\n\n\n\nssl-ecdh-curve\n\n\nstring\n\n\n\"auto\"\n\n\n\n\n\n\nssl-dh-param\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nssl-protocols\n\n\nstring\n\n\n\"TLSv1.2\"\n\n\n\n\n\n\nssl-session-cache\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nssl-session-cache-size\n\n\nstring\n\n\n\"10m\"\n\n\n\n\n\n\nssl-session-tickets\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nssl-session-ticket-key\n\n\nstring\n\n\n\n\n\n\n\n\n\nssl-session-timeout\n\n\nstring\n\n\n\"10m\"\n\n\n\n\n\n\nssl-buffer-size\n\n\nstring\n\n\n\"4k\"\n\n\n\n\n\n\nuse-proxy-protocol\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nuse-gzip\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nuse-geoip\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nenable-brotli\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nbrotli-level\n\n\nint\n\n\n4\n
\n\n\n\n\n\nbrotli-types\n\n\nstring\n\n\n\"application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\"\n\n\n\n\n\n\nuse-http2\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\ngzip-types\n\n\nstring\n\n\n\"application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\"\n\n\n\n\n\n\nworker-processes\n\n\nstring\n\n\n\n\n\n\n\n\n\nworker-cpu-affinity\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nworker-shutdown-timeout\n\n\nstring\n\n\n\"10s\"\n\n\n\n\n\n\nload-balance\n\n\nstring\n\n\n\"least_conn\"\n\n\n\n\n\n\nvariables-hash-bucket-size\n\n\nint\n\n\n128\n\n\n\n\n\n\nvariables-hash-max-size\n\n\nint\n\n\n2048\n\n\n\n\n\n\nupstream-keepalive-connections\n\n\nint\n\n\n32\n\n\n\n\n\n\nlimit-conn-zone-variable\n\n\nstring\n\n\n\"$binary_remote_addr\"\n\n\n\n\n\n\nproxy-stream-timeout\n\n\nstring\n\n\n\"600s\"\n\n\n\n\n\n\nproxy-stream-responses\n\n\nint\n\n\n1\n\n\n\n\n\n\nbind-address-ipv4\n\n\n[]string\n\n\n\"\"\n\n\n\n\n\n\nbind-address-ipv6\n\n\n[]string\n\n\n\"\"\n\n\n\n\n\n\nforwarded-for-header\n\n\nstring\n\n\n\"X-Forwarded-For\"\n\n\n\n\n\n\ncompute-full-forwarded-for\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nproxy-add-original-uri-header\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nenable-opentracing\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nzipkin-collector-host\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nzipkin-collector-port\n\n\nint\n\n\n9411\n\n\n\n\n\n\nzipkin-service-name\n\n\nstring\n\n\n\"nginx\"\n\n\n\n\n\n\njaeger-collector-host\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\njaeger-collector-por
t\n\n\nint\n\n\n6831\n\n\n\n\n\n\njaeger-service-name\n\n\nstring\n\n\n\"nginx\"\n\n\n\n\n\n\njaeger-sampler-type\n\n\nstring\n\n\n\"const\"\n\n\n\n\n\n\njaeger-sampler-param\n\n\nstring\n\n\n\"1\"\n\n\n\n\n\n\nhttp-snippet\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nserver-snippet\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nlocation-snippet\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\ncustom-http-errors\n\n\n[]int\n\n\n[]int{}\n\n\n\n\n\n\nproxy-body-size\n\n\nstring\n\n\n\"1m\"\n\n\n\n\n\n\nproxy-connect-timeout\n\n\nint\n\n\n5\n\n\n\n\n\n\nproxy-read-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\nproxy-send-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\nproxy-buffer-size\n\n\nstring\n\n\n\"4k\"\n\n\n\n\n\n\nproxy-cookie-path\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nproxy-cookie-domain\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nproxy-next-upstream\n\n\nstring\n\n\n\"error timeout invalid_header http_502 http_503 http_504\"\n\n\n\n\n\n\nproxy-next-upstream-tries\n\n\nint\n\n\n0\n\n\n\n\n\n\nproxy-redirect-from\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nproxy-request-buffering\n\n\nstring\n\n\n\"on\"\n\n\n\n\n\n\nssl-redirect\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nwhitelist-source-range\n\n\n[]string\n\n\n[]string{}\n\n\n\n\n\n\nskip-access-log-urls\n\n\n[]string\n\n\n[]string{}\n\n\n\n\n\n\nlimit-rate\n\n\nint\n\n\n0\n\n\n\n\n\n\nlimit-rate-after\n\n\nint\n\n\n0\n\n\n\n\n\n\nhttp-redirect-code\n\n\nint\n\n\n308\n\n\n\n\n\n\nproxy-buffering\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nlimit-req-status-code\n\n\nint\n\n\n503\n\n\n\n\n\n\nno-tls-redirect-locations\n\n\nstring\n\n\n\"/.well-known/acme-challenge\"\n\n\n\n\n\n\nno-auth-locations\n\n\nstring\n\n\n\"/.well-known/acme-challenge\"\n\n\n\n\n\n\n\n\nadd-headers\n\u00b6\n\n\nSets custom headers from named configmap before sending traffic to the client. See \nproxy-set-headers\n. \nexample\n\n\nallow-backend-server-header\n\u00b6\n\n\nEnables the return of the header Server from the backend instead of the generic nginx string. 
\ndefault:\n is disabled\n\n\nhide-headers\n\u00b6\n\n\nSets additional headers that will not be passed from the upstream server to the client response.\n\ndefault:\n empty\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header\n\n\naccess-log-path\n\u00b6\n\n\nAccess log path. Goes to \n/var/log/nginx/access.log\n by default.\n\n\nNote:\n the file \n/var/log/nginx/access.log\n is a symlink to \n/dev/stdout\n\n\nerror-log-path\n\u00b6\n\n\nError log path. Goes to \n/var/log/nginx/error.log\n by default.\n\n\nNote:\n the file \n/var/log/nginx/error.log\n is a symlink to \n/dev/stderr\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/ngx_core_module.html#error_log\n\n\nenable-dynamic-tls-records\n\u00b6\n\n\nEnables dynamically sized TLS records to improve time-to-first-byte. \ndefault:\n is enabled\n\n\nReferences:\n\n\nhttps://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency\n\n\nenable-modsecurity\n\u00b6\n\n\nEnables the modsecurity module for NGINX. \ndefault:\n is disabled\n\n\nenable-owasp-modsecurity-crs\n\u00b6\n\n\nEnables the OWASP ModSecurity Core Rule Set (CRS). 
\ndefault:\n is disabled\n\n\nclient-header-buffer-size\n\u00b6\n\n\nAllows configuring a custom buffer size for reading the client request header.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size\n\n\nclient-header-timeout\n\u00b6\n\n\nDefines a timeout for reading the client request header, in seconds.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout\n\n\nclient-body-buffer-size\n\u00b6\n\n\nSets the buffer size for reading the client request body.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size\n\n\nclient-body-timeout\n\u00b6\n\n\nDefines a timeout for reading the client request body, in seconds.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout\n\n\ndisable-access-log\n\u00b6\n\n\nDisables the Access Log from the entire Ingress Controller. \ndefault:\n '\"false\"'\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_log_module.html#access_log\n\n\ndisable-ipv6\n\u00b6\n\n\nDisables listening on IPv6. \ndefault:\n is disabled\n\n\ndisable-ipv6-dns\n\u00b6\n\n\nDisables IPv6 for the nginx DNS resolver. \ndefault:\n is disabled\n\n\nenable-underscores-in-headers\n\u00b6\n\n\nEnables underscores in header names. \ndefault:\n is disabled\n\n\nignore-invalid-headers\n\u00b6\n\n\nSets whether header fields with invalid names should be ignored.\n\ndefault:\n is enabled\n\n\nenable-vts-status\n\u00b6\n\n\nAllows the replacement of the default status page with a third party module named \nnginx-module-vts\n.\n\ndefault:\n is disabled\n\n\nvts-status-zone-size\n\u00b6\n\n\nThe vts config on the http level sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processes. 
\ndefault:\n 10m\n\n\nReferences:\n\n\nhttps://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone\n\n\nvts-default-filter-key\n\u00b6\n\n\nThe vts config on the http level enables keys by user-defined variable. The key is a key string to calculate traffic. The name is a group string to calculate traffic. The key and name can contain variables such as $host, $server_name. The name's group belongs to filterZones if specified. The key's group belongs to serverZones if the second argument name is not specified. \ndefault:\n $geoip_country_code country::*\n\n\nReferences:\n\n\nhttps://github.com/vozlt/nginx-module-vts#vhost_traffic_status_filter_by_set_key\n\n\nvts-sum-key\n\u00b6\n\n\nFor metrics keyed (or when using Prometheus, labeled) by server zone, this value is used to indicate metrics for all server zones combined. \ndefault:\n *\n\n\nReferences:\n\n\nhttps://github.com/vozlt/nginx-module-vts#vhost_traffic_status_display_sum_key\n\n\nretry-non-idempotent\n\u00b6\n\n\nSince 1.9.13, NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value \"true\".\n\n\nerror-log-level\n\u00b6\n\n\nConfigures the logging level of errors. 
Log levels above are listed in the order of increasing severity.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/ngx_core_module.html#error_log\n\n\nhttp2-max-field-size\n\u00b6\n\n\nLimits the maximum size of an HPACK-compressed request header field.\n\n\nReferences:\n\n\nhttps://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size\n\n\nhttp2-max-header-size\n\u00b6\n\n\nLimits the maximum size of the entire request header list after HPACK decompression.\n\n\nReferences:\n\n\nhttps://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size\n\n\nhsts\n\u00b6\n\n\nEnables or disables the header HSTS in servers running SSL.\nHTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers to communicate with the site only over HTTPS, instead of HTTP. It provides protection against protocol downgrade attacks and cookie theft.\n\n\nReferences:\n\n\n\n\nhttps://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security\n\n\nhttps://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server\n\n\n\n\nhsts-include-subdomains\n\u00b6\n\n\nEnables or disables the use of HSTS in all the subdomains of the server-name.\n\n\nhsts-max-age\n\u00b6\n\n\nSets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.\n\n\nhsts-preload\n\u00b6\n\n\nEnables or disables the preload attribute in the HSTS feature (when it is enabled)\n\n\nkeep-alive\n\u00b6\n\n\nSets the time during which a keep-alive client connection will stay open on the server side. 
The zero value disables keep-alive client connections.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout\n\n\nkeep-alive-requests\n\u00b6\n\n\nSets the maximum number of requests that can be served through one keep-alive connection.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests\n\n\nlarge-client-header-buffers\n\u00b6\n\n\nSets the maximum number and size of buffers used for reading large client request headers. \ndefault:\n 4 8k\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers\n\n\nlog-format-escape-json\n\u00b6\n\n\nSets whether the escape parameter allows JSON (\"true\") or default character escaping in variables (\"false\") in the nginx \nlog format\n.\n\n\nlog-format-upstream\n\u00b6\n\n\nSets the nginx \nlog format\n.\nExample for JSON output:\n\n\nlog-format-upstream: '{ \"time\": \"$time_iso8601\", \"remote_addr\": \"$proxy_protocol_addr\",\"x-forward-for\": \"$proxy_add_x_forwarded_for\", \"request_id\": \"$req_id\", \"remote_user\":\"$remote_user\", \"bytes_sent\": $bytes_sent, \"request_time\": $request_time, \"status\":$status, \"vhost\": \"$host\", \"request_proto\": \"$server_protocol\", \"path\": \"$uri\",\"request_query\": \"$args\", \"request_length\": $request_length, \"duration\": $request_time,\"method\": \"$request_method\", \"http_referrer\": \"$http_referer\", \"http_user_agent\":\"$http_user_agent\" }'\n\n\nPlease check the \nlog-format\n for the definition of each field.\n\n\nlog-format-stream\n\u00b6\n\n\nSets the nginx \nstream format\n.\n\n\nmax-worker-connections\n\u00b6\n\n\nSets the maximum number of simultaneous connections that can be opened by each \nworker process\n\n\nmap-hash-bucket-size\n\u00b6\n\n\nSets the bucket size for the \nmap variables hash tables\n. 
The details of setting up hash tables are provided in a separate \ndocument\n.\n\n\nproxy-real-ip-cidr\n\u00b6\n\n\nIf use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default IP/network address of your external load balancer.\n\n\nproxy-set-headers\n\u00b6\n\n\nSets custom headers from named configmap before sending traffic to backends. The value format is namespace/name. See \nexample\n\n\nserver-name-hash-max-size\n\u00b6\n\n\nSets the maximum size of the \nserver names hash tables\n used in server names, map directive\u2019s values, MIME types, names of request header strings, etc.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/hash.html\n\n\nserver-name-hash-bucket-size\n\u00b6\n\n\nSets the size of the bucket for the server names hash tables.\n\n\nReferences:\n\n\n\n\nhttp://nginx.org/en/docs/hash.html\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size\n\n\n\n\nproxy-headers-hash-max-size\n\u00b6\n\n\nSets the maximum size of the proxy headers hash tables.\n\n\nReferences:\n\n\n\n\nhttp://nginx.org/en/docs/hash.html\n\n\nhttps://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size\n\n\n\n\nproxy-headers-hash-bucket-size\n\u00b6\n\n\nSets the size of the bucket for the proxy headers hash tables.\n\n\nReferences:\n\n\n\n\nhttp://nginx.org/en/docs/hash.html\n\n\nhttps://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size\n\n\n\n\nserver-tokens\n\u00b6\n\n\nSends the NGINX Server header in responses and displays the NGINX version in error pages. \ndefault:\n is enabled\n\n\nssl-ciphers\n\u00b6\n\n\nSets the \nciphers\n list to enable. 
The ciphers are specified in the format understood by the OpenSSL library.\n\n\nThe default cipher list is:\n \nECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\n.\n\n\nThe ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect \nforward secrecy\n.\n\n\nPlease check the \nMozilla SSL Configuration Generator\n.\n\n\nssl-ecdh-curve\n\u00b6\n\n\nSpecifies a curve for ECDHE ciphers.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve\n\n\nssl-dh-param\n\u00b6\n\n\nSets the name of the secret that contains Diffie-Hellman key to help with \"Perfect Forward Secrecy\".\n\n\nReferences:\n\n\n\n\nhttps://wiki.openssl.org/index.php/Diffie-Hellman_parameters\n\n\nhttps://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam\n\n\nhttp://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam\n\n\n\n\nssl-protocols\n\u00b6\n\n\nSets the \nSSL protocols\n to use. The default is: \nTLSv1.2\n.\n\n\nPlease check the result of the configuration using \nhttps://ssllabs.com/ssltest/analyze.html\n or \nhttps://testssl.sh\n.\n\n\nssl-session-cache\n\u00b6\n\n\nEnables or disables the use of shared \nSSL cache\n among worker processes.\n\n\nssl-session-cache-size\n\u00b6\n\n\nSets the size of the \nSSL shared session cache\n between all worker processes.\n\n\nssl-session-tickets\n\u00b6\n\n\nEnables or disables session resumption through \nTLS session tickets\n.\n\n\nssl-session-ticket-key\n\u00b6\n\n\nSets the secret key used to encrypt and decrypt TLS session tickets. 
The value must be a valid base64 string.\n\n\nTLS session ticket-key\n; by default, a randomly generated key is used. To create a ticket: \nopenssl rand 80 | base64 -w0\n\n\nssl-session-timeout\n\u00b6\n\n\nSets the time during which a client may \nreuse the session\n parameters stored in a cache.\n\n\nssl-buffer-size\n\u00b6\n\n\nSets the size of the \nSSL buffer\n used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).\n\n\nReferences:\n\n\nhttps://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/\n\n\nuse-proxy-protocol\n\u00b6\n\n\nEnables or disables the \nPROXY protocol\n to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).\n\n\nuse-gzip\n\u00b6\n\n\nEnables or disables compression of HTTP responses using the \n\"gzip\" module\n.\nThe default mime type list to compress is: \napplication/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\n.\n\n\nuse-geoip\n\u00b6\n\n\nEnables or disables the \n\"geoip\" module\n that creates variables with values depending on the client IP address, using the precompiled MaxMind databases.\n\ndefault:\n true\n\n\nenable-brotli\n\u00b6\n\n\nEnables or disables compression of HTTP responses using the \n\"brotli\" module\n.\nThe default mime type list to compress is: \napplication/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\n. 
\ndefault:\n is disabled\n\n\n\n\nNote:\n Brotli does not work in Safari < 11. For more information see \nhttps://caniuse.com/#feat=brotli\n\n\n\n\nbrotli-level\n\u00b6\n\n\nSets the Brotli compression level that will be used. \ndefault:\n 4\n\n\nbrotli-types\n\u00b6\n\n\nSets the MIME types that will be compressed on-the-fly by brotli.\n\ndefault:\n \napplication/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\n\n\nuse-http2\n\u00b6\n\n\nEnables or disables \nHTTP/2\n support in secure connections.\n\n\ngzip-types\n\u00b6\n\n\nSets the MIME types in addition to \"text/html\" to compress. The special value \"*\" matches any MIME type. Responses with the \"text/html\" type are always compressed if \nuse-gzip\n is enabled.\n\n\nworker-processes\n\u00b6\n\n\nSets the number of \nworker processes\n.\nThe default of \"auto\" means the number of available CPU cores.\n\n\nworker-cpu-affinity\n\u00b6\n\n\nBinds worker processes to the sets of CPUs. \nworker_cpu_affinity\n.\nBy default worker processes are not bound to any specific CPUs. The value can be:\n\n\n\n\n\"\": an empty string indicates that no affinity is applied.\n\n\ncpumask: e.g. \n0001 0010 0100 1000\n to bind processes to specific CPUs.\n\n\nauto: bind worker processes automatically to available CPUs.\n\n\n\n\nworker-shutdown-timeout\n\u00b6\n\n\nSets a timeout for Nginx to \nwait for worker to gracefully shutdown\n. 
\ndefault:\n \"10s\"\n\n\nload-balance\n\u00b6\n\n\nSets the algorithm to use for load balancing.\nThe value can either be:\n\n\n\n\nround_robin: to use the default round robin load balancer\n\n\nleast_conn: to use the least connected method\n\n\nip_hash: to use a hash of the client address for routing.\n\n\newma: to use the peak ewma method for routing (only available with \nenable-dynamic-configuration\n flag) \n\n\n\n\nThe default is least_conn.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/load_balancing.html\n\n\nvariables-hash-bucket-size\n\u00b6\n\n\nSets the bucket size for the variables hash table.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size\n\n\nvariables-hash-max-size\n\u00b6\n\n\nSets the maximum size of the variables hash table.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size\n\n\nupstream-keepalive-connections\n\u00b6\n\n\nActivates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this\nnumber is exceeded, the least recently used connections are closed. \ndefault:\n 32\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive\n\n\nlimit-conn-zone-variable\n\u00b6\n\n\nSets parameters for a shared memory zone that will keep states for various keys of \nlimit_conn_zone\n. The default is \"$binary_remote_addr\"; the variable\u2019s size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.\n\n\nproxy-stream-timeout\n\u00b6\n\n\nSets the timeout between two successive read or write operations on client or proxied server connections. 
If no data is transmitted within this time, the connection is closed.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout\n\n\nproxy-stream-responses\n\u00b6\n\n\nSets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses\n\n\nbind-address-ipv4\n\u00b6\n\n\nSets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.\n\n\nbind-address-ipv6\n\u00b6\n\n\nSets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.\n\n\nforwarded-for-header\n\u00b6\n\n\nSets the header field for identifying the originating IP address of a client. \ndefault:\n X-Forwarded-For\n\n\ncompute-full-forwarded-for\n\u00b6\n\n\nAppend the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.\n\n\nproxy-add-original-uri-header\n\u00b6\n\n\nAdds an X-Original-Uri header with the original request URI to the backend request\n\n\nenable-opentracing\n\u00b6\n\n\nEnables the nginx Opentracing extension. \ndefault:\n is disabled\n\n\nReferences:\n\n\nhttps://github.com/opentracing-contrib/nginx-opentracing\n\n\nzipkin-collector-host\n\u00b6\n\n\nSpecifies the host to use when uploading traces. It must be a valid URL.\n\n\nzipkin-collector-port\n\u00b6\n\n\nSpecifies the port to use when uploading traces. \ndefault:\n 9411\n\n\nzipkin-service-name\n\u00b6\n\n\nSpecifies the service name to use for any traces created. 
\ndefault:\n nginx\n\n\njaeger-collector-host\n\u00b6\n\n\nSpecifies the host to use when uploading traces. It must be a valid URL.\n\n\njaeger-collector-port\n\u00b6\n\n\nSpecifies the port to use when uploading traces. \ndefault:\n 6831\n\n\njaeger-service-name\n\u00b6\n\n\nSpecifies the service name to use for any traces created. \ndefault:\n nginx\n\n\njaeger-sampler-type\n\u00b6\n\n\nSpecifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. \ndefault:\n const\n\n\njaeger-sampler-param\n\u00b6\n\n\nSpecifies the argument to be passed to the sampler constructor. Must be a number.\nFor const this should be 0 to never sample and 1 to always sample. \ndefault:\n 1\n\n\nhttp-snippet\n\u00b6\n\n\nAdds custom configuration to the http section of the nginx configuration.\n\ndefault:\n \"\"\n\n\nserver-snippet\n\u00b6\n\n\nAdds custom configuration to all the servers in the nginx configuration.\n\ndefault:\n \"\"\n\n\nlocation-snippet\n\u00b6\n\n\nAdds custom configuration to all the locations in the nginx configuration.\n\ndefault:\n \"\"\n\n\ncustom-http-errors\n\u00b6\n\n\nEnables which HTTP codes should be passed for processing with the \nerror_page directive\n\n\nSetting at least one code also enables \nproxy_intercept_errors\n which are required to process error_page.\n\n\nExample usage: \ncustom-http-errors: 404,415\n\n\nproxy-body-size\n\u00b6\n\n\nSets the maximum allowed size of the client request body.\nSee NGINX \nclient_max_body_size\n.\n\n\nproxy-connect-timeout\n\u00b6\n\n\nSets the timeout for \nestablishing a connection with a proxied server\n. It should be noted that this timeout cannot usually exceed 75 seconds.\n\n\nproxy-read-timeout\n\u00b6\n\n\nSets the timeout in seconds for \nreading a response from the proxied server\n. 
The timeout is set only between two successive read operations, not for the transmission of the whole response.\n\n\nproxy-send-timeout\n\u00b6\n\n\nSets the timeout in seconds for \ntransmitting a request to the proxied server\n. The timeout is set only between two successive write operations, not for the transmission of the whole request.\n\n\nproxy-buffer-size\n\u00b6\n\n\nSets the size of the buffer used for \nreading the first part of the response\n received from the proxied server. This part usually contains a small response header.\n\n\nproxy-cookie-path\n\u00b6\n\n\nSets a text that \nshould be changed in the path attribute\n of the \u201cSet-Cookie\u201d header fields of a proxied server response.\n\n\nproxy-cookie-domain\n\u00b6\n\n\nSets a text that \nshould be changed in the domain attribute\n of the \u201cSet-Cookie\u201d header fields of a proxied server response.\n\n\nproxy-next-upstream\n\u00b6\n\n\nSpecifies in \nwhich cases\n a request should be passed to the next server.\n\n\nproxy-next-upstream-tries\n\u00b6\n\n\nLimit the number of \npossible tries\n a request should be passed to the next server.\n\n\nproxy-redirect-from\n\u00b6\n\n\nSets the original text that should be changed in the \"Location\" and \"Refresh\" header fields of a proxied server response. \ndefault:\n off\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect\n\n\nproxy-request-buffering\n\u00b6\n\n\nEnables or disables \nbuffering of a client request body\n.\n\n\nssl-redirect\n\u00b6\n\n\nSets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule).\n\ndefault:\n \"true\"\n\n\nwhitelist-source-range\n\u00b6\n\n\nSets the default whitelisted IPs for each \nserver\n block. This can be overwritten by an annotation on an Ingress rule.\nSee \nngx_http_access_module\n.\n\n\nskip-access-log-urls\n\u00b6\n\n\nSets a list of URLs that should not appear in the NGINX access log. 
This is useful with urls like \n/health\n or \nhealth-check\n that make \"complex\" reading the logs. \ndefault:\n is empty\n\n\nlimit-rate\n\u00b6\n\n\nLimits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per a request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate\n\n\nlimit-rate-after\n\u00b6\n\n\nSets the initial amount after which the further transmission of a response to a client will be rate limited.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after\n\n\nhttp-redirect-code\n\u00b6\n\n\nSets the HTTP status code to be used in redirects.\nSupported codes are \n301\n,\n302\n,\n307\n and \n308\n\n\ndefault:\n 308\n\n\n\n\nWhy the default code is 308?\n\n\nRFC 7238\n was created to define the 308 (Permanent Redirect) status code that is similar to 301 (Moved Permanently) but it keeps the payload in the redirect. This is important if the we send a redirect in methods like POST.\n\n\n\n\nproxy-buffering\n\u00b6\n\n\nEnables or disables \nbuffering of responses from the proxied server\n.\n\n\nlimit-req-status-code\n\u00b6\n\n\nSets the \nstatus code to return in response to rejected requests\n. \ndefault:\n 503\n\n\nno-tls-redirect-locations\n\u00b6\n\n\nA comma-separated list of locations on which http requests will never get redirected to their https counterpart.\n\ndefault:\n \"/.well-known/acme-challenge\"\n\n\nno-auth-locations\n\u00b6\n\n\nA comma-separated list of locations that should not get authenticated.\n\ndefault:\n \"/.well-known/acme-challenge\"",
             "title": "ConfigMaps"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#configmaps",
-            "text": "ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.  The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system\ncomponents for the nginx-controller. Before you can begin using a config-map it must be  deployed .  In order to overwrite nginx-controller configuration values as seen in  config.go ,\nyou can add key-value pairs to the data section of the config-map. For Example:  data : \n   map-hash-bucket-size :   \"128\" \n   ssl-protocols :   SSLv2   IMPORTANT:  The key and values in a ConfigMap can only be strings.\nThis means that we want a value with boolean values we need to quote the values, like \"true\" or \"false\".\nSame for numbers, like \"100\".  \"Slice\" types (defined below as  []string  or  []int  can be provided as a comma-delimited string.",
+            "text": "ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.  The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system\ncomponents for the nginx-controller. Before you can begin using a config-map it must be  deployed .  In order to overwrite nginx-controller configuration values as seen in  config.go ,\nyou can add key-value pairs to the data section of the config-map. For Example:  data : \n   map-hash-bucket-size :   \"128\" \n   ssl-protocols :   SSLv2    Important  The key and values in a ConfigMap can only be strings.\nThis means that we want a value with boolean values we need to quote the values, like \"true\" or \"false\".\nSame for numbers, like \"100\".  \"Slice\" types (defined below as  []string  or  []int  can be provided as a comma-delimited string.",
             "title": "ConfigMaps"
         },
         {
@@ -362,12 +367,12 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#allow-backend-server-header",
-            "text": "Enables the return of the header Server from the backend instead of the generic nginx string. By default this is disabled.",
+            "text": "Enables the return of the header Server from the backend instead of the generic nginx string.  default:  is disabled",
             "title": "allow-backend-server-header"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#hide-headers",
-            "text": "Sets additional header that will not be passed from the upstream server to the client response.\nDefault: empty  References: \n- http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header",
+            "text": "Sets additional header that will not be passed from the upstream server to the client response. default:  empty  References:  http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header",
             "title": "hide-headers"
         },
         {
@@ -377,87 +382,87 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#error-log-path",
-            "text": "Error log path. Goes to  /var/log/nginx/error.log  by default.  Note:  the file  /var/log/nginx/error.log  is a symlink to  /dev/stderr  References: \n- http://nginx.org/en/docs/ngx_core_module.html#error_log",
+            "text": "Error log path. Goes to  /var/log/nginx/error.log  by default.  Note:  the file  /var/log/nginx/error.log  is a symlink to  /dev/stderr  References:  http://nginx.org/en/docs/ngx_core_module.html#error_log",
             "title": "error-log-path"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#enable-dynamic-tls-records",
-            "text": "Enables dynamically sized TLS records to improve time-to-first-byte. By default this is enabled. See  CloudFlare's blog  for more information.",
+            "text": "Enables dynamically sized TLS records to improve time-to-first-byte.  default:  is enabled  References:  https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency",
             "title": "enable-dynamic-tls-records"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#enable-modsecurity",
-            "text": "Enables the modsecurity module for NGINX. By default this is disabled.",
+            "text": "Enables the modsecurity module for NGINX.  default:  is disabled",
             "title": "enable-modsecurity"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#enable-owasp-modsecurity-crs",
-            "text": "Enables the OWASP ModSecurity Core Rule Set (CRS). By default this is disabled.",
+            "text": "Enables the OWASP ModSecurity Core Rule Set (CRS).  default:  is disabled",
             "title": "enable-owasp-modsecurity-crs"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#client-header-buffer-size",
-            "text": "Allows to configure a custom buffer size for reading client request header.  References: \n- http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size",
+            "text": "Allows to configure a custom buffer size for reading client request header.  References:  http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size",
             "title": "client-header-buffer-size"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#client-header-timeout",
-            "text": "Defines a timeout for reading client request header, in seconds.  References: \n- http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout",
+            "text": "Defines a timeout for reading client request header, in seconds.  References:  http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout",
             "title": "client-header-timeout"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#client-body-buffer-size",
-            "text": "Sets buffer size for reading client request body.  References: \n- http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size",
+            "text": "Sets buffer size for reading client request body.  References:  http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size",
             "title": "client-body-buffer-size"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#client-body-timeout",
-            "text": "Defines a timeout for reading client request body, in seconds.  References: \n- http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout",
+            "text": "Defines a timeout for reading client request body, in seconds.  References:  http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout",
             "title": "client-body-timeout"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#disable-access-log",
-            "text": "Disables the Access Log from the entire Ingress Controller. This is '\"false\"' by default.  References: \n- http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log",
+            "text": "Disables the Access Log from the entire Ingress Controller.  default:  '\"false\"'  References:  http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log",
             "title": "disable-access-log"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#disable-ipv6",
-            "text": "Disable listening on IPV6. By default this is disabled.",
+            "text": "Disable listening on IPV6.  default:  is disabled",
             "title": "disable-ipv6"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#disable-ipv6-dns",
-            "text": "Disable IPV6 for nginx DNS resolver. By default this is disabled.",
+            "text": "Disable IPV6 for nginx DNS resolver.  default:  is disabled",
             "title": "disable-ipv6-dns"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#enable-underscores-in-headers",
-            "text": "Enables underscores in header names. By default this is disabled.",
+            "text": "Enables underscores in header names.  default:  is disabled",
             "title": "enable-underscores-in-headers"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#ignore-invalid-headers",
-            "text": "Set if header fields with invalid names should be ignored.\nBy default this is enabled.",
+            "text": "Set if header fields with invalid names should be ignored. default:  is enabled",
             "title": "ignore-invalid-headers"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#enable-vts-status",
-            "text": "Allows the replacement of the default status page with a third party module named  nginx-module-vts .\nBy default this is disabled.",
+            "text": "Allows the replacement of the default status page with a third party module named  nginx-module-vts . default:  is disabled",
             "title": "enable-vts-status"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#vts-status-zone-size",
-            "text": "Vts config on http level sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processes. Default value is 10m  References: \n- https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone",
+            "text": "Vts config on http level sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processes.  default:  10m  References:  https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone",
             "title": "vts-status-zone-size"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#vts-default-filter-key",
-            "text": "Vts config on http level enables the keys by user defined variable. The key is a key string to calculate traffic. The name is a group string to calculate traffic. The key and name can contain variables such as $host, $server_name. The name's group belongs to filterZones if specified. The key's group belongs to serverZones if not specified second argument name. Default value is $geoip_country_code country::*  References: \n- https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_filter_by_set_key",
+            "text": "Vts config on http level enables the keys by user defined variable. The key is a key string to calculate traffic. The name is a group string to calculate traffic. The key and name can contain variables such as $host, $server_name. The name's group belongs to filterZones if specified. The key's group belongs to serverZones if not specified second argument name.  default:  $geoip_country_code country::*  References:  https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_filter_by_set_key",
             "title": "vts-default-filter-key"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#vts-sum-key",
-            "text": "For metrics keyed (or when using Prometheus, labeled) by server zone, this value is used to indicate metrics for all server zones combined. Default value is *  References: \n- https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_display_sum_key",
+            "text": "For metrics keyed (or when using Prometheus, labeled) by server zone, this value is used to indicate metrics for all server zones combined.  default:  *  References:  https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_display_sum_key",
             "title": "vts-sum-key"
         },
         {
@@ -467,22 +472,22 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#error-log-level",
-            "text": "Configures the logging level of errors. Log levels above are listed in the order of increasing severity.  References: \n- http://nginx.org/en/docs/ngx_core_module.html#error_log",
+            "text": "Configures the logging level of errors. Log levels above are listed in the order of increasing severity.  References:  http://nginx.org/en/docs/ngx_core_module.html#error_log",
             "title": "error-log-level"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#http2-max-field-size",
-            "text": "Limits the maximum size of an HPACK-compressed request header field.  References: \n- https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size",
+            "text": "Limits the maximum size of an HPACK-compressed request header field.  References:  https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size",
             "title": "http2-max-field-size"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#http2-max-header-size",
-            "text": "Limits the maximum size of the entire request header list after HPACK decompression.  References: \n- https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size",
+            "text": "Limits the maximum size of the entire request header list after HPACK decompression.  References:  https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size",
             "title": "http2-max-header-size"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#hsts",
-            "text": "Enables or disables the header HSTS in servers running SSL.\nHTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tell browsers that it should only be communicated with using HTTPS, instead of using HTTP. It provides protection against protocol downgrade attacks and cookie theft.  References: \n- https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security\n- https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server",
+            "text": "Enables or disables the header HSTS in servers running SSL.\nHTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tell browsers that it should only be communicated with using HTTPS, instead of using HTTP. It provides protection against protocol downgrade attacks and cookie theft.  References:   https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security  https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server",
             "title": "hsts"
         },
         {
@@ -502,17 +507,17 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#keep-alive",
-            "text": "Sets the time during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.  References: \n- http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout",
+            "text": "Sets the time during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.  References:  http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout",
             "title": "keep-alive"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#keep-alive-requests",
-            "text": "Sets the maximum number of requests that can be served through one keep-alive connection.  References: \n- http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests",
+            "text": "Sets the maximum number of requests that can be served through one keep-alive connection.  References:  http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests",
             "title": "keep-alive-requests"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#large-client-header-buffers",
-            "text": "Sets the maximum number and size of buffers used for reading large client request header. Default: 4 8k.  References: \n- http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers",
+            "text": "Sets the maximum number and size of buffers used for reading large client request header.  default:  4 8k  References:  http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers",
             "title": "large-client-header-buffers"
         },
         {
@@ -522,7 +527,7 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#log-format-upstream",
-            "text": "Sets the nginx  log format .\nExample for json output:  consolelog-format-upstream: '{ \"time\": \"$time_iso8601\", \"remote_addr\": \"$proxy_protocol_addr\",\"x-forward-for\": \"$proxy_add_x_forwarded_for\", \"request_id\": \"$req_id\", \"remote_user\":\"$remote_user\", \"bytes_sent\": $bytes_sent, \"request_time\": $request_time, \"status\":$status, \"vhost\": \"$host\", \"request_proto\": \"$server_protocol\", \"path\": \"$uri\",\"request_query\": \"$args\", \"request_length\": $request_length, \"duration\": $request_time,\"method\": \"$request_method\", \"http_referrer\": \"$http_referer\", \"http_user_agent\":\"$http_user_agent\" }'  Please check  log-format  for definition of each field.",
+            "text": "Sets the nginx  log format .\nExample for json output:  consolelog-format-upstream: '{ \"time\": \"$time_iso8601\", \"remote_addr\": \"$proxy_protocol_addr\",\"x-forward-for\": \"$proxy_add_x_forwarded_for\", \"request_id\": \"$req_id\", \"remote_user\":\"$remote_user\", \"bytes_sent\": $bytes_sent, \"request_time\": $request_time, \"status\":$status, \"vhost\": \"$host\", \"request_proto\": \"$server_protocol\", \"path\": \"$uri\",\"request_query\": \"$args\", \"request_length\": $request_length, \"duration\": $request_time,\"method\": \"$request_method\", \"http_referrer\": \"$http_referer\", \"http_user_agent\":\"$http_user_agent\" }'  Please check the  log-format  for definition of each field.",
             "title": "log-format-upstream"
         },
         {
@@ -552,27 +557,27 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#server-name-hash-max-size",
-            "text": "Sets the maximum size of the  server names hash tables  used in server names,map directive\u2019s values, MIME types, names of request header strings, etc.  References: \n- http://nginx.org/en/docs/hash.html",
+            "text": "Sets the maximum size of the  server names hash tables  used in server names,map directive\u2019s values, MIME types, names of request header strings, etc.  References:  http://nginx.org/en/docs/hash.html",
             "title": "server-name-hash-max-size"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#server-name-hash-bucket-size",
-            "text": "Sets the size of the bucket for the server names hash tables.  References: \n- http://nginx.org/en/docs/hash.html\n- http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size",
+            "text": "Sets the size of the bucket for the server names hash tables.  References:   http://nginx.org/en/docs/hash.html  http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size",
             "title": "server-name-hash-bucket-size"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#proxy-headers-hash-max-size",
-            "text": "Sets the maximum size of the proxy headers hash tables.  References: \n- http://nginx.org/en/docs/hash.html\n- https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size",
+            "text": "Sets the maximum size of the proxy headers hash tables.  References:   http://nginx.org/en/docs/hash.html  https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size",
             "title": "proxy-headers-hash-max-size"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#proxy-headers-hash-bucket-size",
-            "text": "Sets the size of the bucket for the proxy headers hash tables.  References: \n- http://nginx.org/en/docs/hash.html\n- https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size",
+            "text": "Sets the size of the bucket for the proxy headers hash tables.  References:   http://nginx.org/en/docs/hash.html  https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size",
             "title": "proxy-headers-hash-bucket-size"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#server-tokens",
-            "text": "Send NGINX Server header in responses and display NGINX version in error pages. By default this is enabled.",
+            "text": "Send NGINX Server header in responses and display NGINX version in error pages.  default:  is enabled",
             "title": "server-tokens"
         },
         {
@@ -582,12 +587,12 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#ssl-ecdh-curve",
-            "text": "Specifies a curve for ECDHE ciphers.  References: \n- http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve",
+            "text": "Specifies a curve for ECDHE ciphers.  References:  http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve",
             "title": "ssl-ecdh-curve"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#ssl-dh-param",
-            "text": "Sets the name of the secret that contains Diffie-Hellman key to help with \"Perfect Forward Secrecy\".  References: \n- https://wiki.openssl.org/index.php/Diffie-Hellman_parameters\n- https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam\n- http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam",
+            "text": "Sets the name of the secret that contains Diffie-Hellman key to help with \"Perfect Forward Secrecy\".  References:   https://wiki.openssl.org/index.php/Diffie-Hellman_parameters  https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam  http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam",
             "title": "ssl-dh-param"
         },
         {
@@ -622,7 +627,7 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#ssl-buffer-size",
-            "text": "Sets the size of the  SSL buffer  used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).  References: \n- https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/",
+            "text": "Sets the size of the  SSL buffer  used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).  References:  https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/",
             "title": "ssl-buffer-size"
         },
         {
@@ -637,22 +642,22 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#use-geoip",
-            "text": "Enables or disables  \"geoip\" module  that creates variables with values depending on the client IP address, using the precompiled MaxMind databases.\nThe default value is true.",
+            "text": "Enables or disables  \"geoip\" module  that creates variables with values depending on the client IP address, using the precompiled MaxMind databases. default:  true",
             "title": "use-geoip"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#enable-brotli",
-            "text": "Enables or disables compression of HTTP responses using the  \"brotli\" module .\nThe default mime type list to compress is:  application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component . This is  disabled  by default.  Note:  Brotli does not works in Safari < 11 https://caniuse.com/#feat=brotli",
+            "text": "Enables or disables compression of HTTP responses using the  \"brotli\" module .\nThe default mime type list to compress is:  application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component .  default:  is disabled   Note:  Brotli does not works in Safari < 11. For more information see  https://caniuse.com/#feat=brotli",
             "title": "enable-brotli"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#brotli-level",
-            "text": "Sets the Brotli Compression Level that will be used.  Defaults to  4.",
+            "text": "Sets the Brotli Compression Level that will be used.  default:  4",
             "title": "brotli-level"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#brotli-types",
-            "text": "Sets the MIME Types that will be compressed on-the-fly by brotli. Defaults to   application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component .",
+            "text": "Sets the MIME Types that will be compressed on-the-fly by brotli. default:   application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component",
             "title": "brotli-types"
         },
         {
@@ -677,27 +682,27 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#worker-shutdown-timeout",
-            "text": "Sets a timeout for Nginx to  wait for worker to gracefully shutdown . The default is \"10s\".",
+            "text": "Sets a timeout for Nginx to  wait for worker to gracefully shutdown .  default:  \"10s\"",
             "title": "worker-shutdown-timeout"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#load-balance",
-            "text": "Sets the algorithm to use for load balancing.\nThe value can either be:   round_robin: to use the default round robin loadbalancer  least_conn: to use the least connected method  ip_hash: to use a hash of the server for routing.  ewma: to use the peak ewma method for routing (only available with  enable-dynamic-configuration  flag)    The default is least_conn.  References: \n- http://nginx.org/en/docs/http/load_balancing.html.",
+            "text": "Sets the algorithm to use for load balancing.\nThe value can either be:   round_robin: to use the default round robin loadbalancer  least_conn: to use the least connected method  ip_hash: to use a hash of the server for routing.  ewma: to use the peak ewma method for routing (only available with  enable-dynamic-configuration  flag)    The default is least_conn.  References:  http://nginx.org/en/docs/http/load_balancing.html",
             "title": "load-balance"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#variables-hash-bucket-size",
-            "text": "Sets the bucket size for the variables hash table.  References: \n- http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size",
+            "text": "Sets the bucket size for the variables hash table.  References:  http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size",
             "title": "variables-hash-bucket-size"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#variables-hash-max-size",
-            "text": "Sets the maximum size of the variables hash table.  References: \n- http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size",
+            "text": "Sets the maximum size of the variables hash table.  References:  http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size",
             "title": "variables-hash-max-size"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#upstream-keepalive-connections",
-            "text": "Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this\nnumber is exceeded, the least recently used connections are closed. Default: 32  References: \n- http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive",
+            "text": "Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this\nnumber is exceeded, the least recently used connections are closed.  default:  32  References:  http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive",
             "title": "upstream-keepalive-connections"
         },
         {
@@ -707,12 +712,12 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#proxy-stream-timeout",
-            "text": "Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.  References: \n- http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout",
+            "text": "Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.  References:  http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout",
             "title": "proxy-stream-timeout"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#proxy-stream-responses",
-            "text": "Sets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used.  References: \n- http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses",
+            "text": "Sets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used.  References:  http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses",
             "title": "proxy-stream-responses"
         },
         {
@@ -727,7 +732,7 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#forwarded-for-header",
-            "text": "Sets the header field for identifying the originating IP address of a client. Default is X-Forwarded-For",
+            "text": "Sets the header field for identifying the originating IP address of a client.  default:  X-Forwarded-For",
             "title": "forwarded-for-header"
         },
         {
@@ -742,7 +747,7 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#enable-opentracing",
-            "text": "Enables the nginx Opentracing extension. By default this is disabled.  References: \n- https://github.com/opentracing-contrib/nginx-opentracing",
+            "text": "Enables the nginx Opentracing extension.  default:  is disabled  References:  https://github.com/opentracing-contrib/nginx-opentracing",
             "title": "enable-opentracing"
         },
         {
@@ -752,12 +757,12 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#zipkin-collector-port",
-            "text": "Specifies the port to use when uploading traces. Default: 9411",
+            "text": "Specifies the port to use when uploading traces.  default:  9411",
             "title": "zipkin-collector-port"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#zipkin-service-name",
-            "text": "Specifies the service name to use for any traces created. Default: nginx",
+            "text": "Specifies the service name to use for any traces created.  default:  nginx",
             "title": "zipkin-service-name"
         },
         {
@@ -767,37 +772,37 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#jaeger-collector-port",
-            "text": "Specifies the port to use when uploading traces. Default: 6831",
+            "text": "Specifies the port to use when uploading traces.  default:  6831",
             "title": "jaeger-collector-port"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#jaeger-service-name",
-            "text": "Specifies the service name to use for any traces created. Default: nginx",
+            "text": "Specifies the service name to use for any traces created.  default:  nginx",
             "title": "jaeger-service-name"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#jaeger-sampler-type",
-            "text": "Specifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. Default const.",
+            "text": "Specifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote.  default:  const",
             "title": "jaeger-sampler-type"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#jaeger-sampler-param",
-            "text": "Specifies the argument to be passed to the sampler constructor. Must be a number.\nFor const this should be 0 to never sample and 1 to always sample. Default: 1",
+            "text": "Specifies the argument to be passed to the sampler constructor. Must be a number.\nFor const this should be 0 to never sample and 1 to always sample.  default:  1",
             "title": "jaeger-sampler-param"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#http-snippet",
-            "text": "Adds custom configuration to the http section of the nginx configuration.\nDefault: \"\"",
+            "text": "Adds custom configuration to the http section of the nginx configuration. default:  \"\"",
             "title": "http-snippet"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#server-snippet",
-            "text": "Adds custom configuration to all the servers in the nginx configuration.\nDefault: \"\"",
+            "text": "Adds custom configuration to all the servers in the nginx configuration. default:  \"\"",
             "title": "server-snippet"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#location-snippet",
-            "text": "Adds custom configuration to all the locations in the nginx configuration.\nDefault: \"\"",
+            "text": "Adds custom configuration to all the locations in the nginx configuration. default:  \"\"",
             "title": "location-snippet"
         },
         {
@@ -852,7 +857,7 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#proxy-redirect-from",
-            "text": "Sets the original text that should be changed in the \"Location\" and \"Refresh\" header fields of a proxied server response. Default: off.  References: \n- http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect",
+            "text": "Sets the original text that should be changed in the \"Location\" and \"Refresh\" header fields of a proxied server response.  default:  off  References:  http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect",
             "title": "proxy-redirect-from"
         },
         {
@@ -862,7 +867,7 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#ssl-redirect",
-            "text": "Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule).\nDefault is \"true\".",
+            "text": "Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule). default:  \"true\"",
             "title": "ssl-redirect"
         },
         {
@@ -872,22 +877,22 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#skip-access-log-urls",
-            "text": "Sets a list of URLs that should not appear in the NGINX access log. This is useful with urls like  /health  or  health-check  that make \"complex\" reading the logs. By default this list is empty",
+            "text": "Sets a list of URLs that should not appear in the NGINX access log. This is useful with urls like  /health  or  health-check  that make \"complex\" reading the logs.  default:  is empty",
             "title": "skip-access-log-urls"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#limit-rate",
-            "text": "Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per a request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.  References: \n- http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate",
+            "text": "Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per a request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.  References:  http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate",
             "title": "limit-rate"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#limit-rate-after",
-            "text": "Sets the initial amount after which the further transmission of a response to a client will be rate limited.  References: \n- http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after",
+            "text": "Sets the initial amount after which the further transmission of a response to a client will be rate limited.  References:  http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after",
             "title": "limit-rate-after"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#http-redirect-code",
-            "text": "Sets the HTTP status code to be used in redirects.\nSupported codes are  301 , 302 , 307  and  308 \nDefault code is 308.  Why the default code is 308?  RFC 7238  was created to define the 308 (Permanent Redirect) status code that is similar to 301 (Moved Permanently) but it keeps the payload in the redirect. This is important if the we send a redirect in methods like POST.",
+            "text": "Sets the HTTP status code to be used in redirects.\nSupported codes are  301 , 302 , 307  and  308  default:  308   Why the default code is 308?  RFC 7238  was created to define the 308 (Permanent Redirect) status code that is similar to 301 (Moved Permanently) but it keeps the payload in the redirect. This is important if the we send a redirect in methods like POST.",
             "title": "http-redirect-code"
         },
         {
@@ -897,17 +902,17 @@
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#limit-req-status-code",
-            "text": "Sets the  status code to return in response to rejected requests .Default: 503",
+            "text": "Sets the  status code to return in response to rejected requests .  default:  503",
             "title": "limit-req-status-code"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#no-tls-redirect-locations",
-            "text": "A comma-separated list of locations on which http requests will never get redirected to their https counterpart.\nDefault: \"/.well-known/acme-challenge\"",
+            "text": "A comma-separated list of locations on which http requests will never get redirected to their https counterpart. default:  \"/.well-known/acme-challenge\"",
             "title": "no-tls-redirect-locations"
         },
         {
             "location": "/user-guide/nginx-configuration/configmap/#no-auth-locations",
-            "text": "A comma-separated list of locations that should not get authenticated.\nDefault: \"/.well-known/acme-challenge\"",
+            "text": "A comma-separated list of locations that should not get authenticated. default:  \"/.well-known/acme-challenge\"",
             "title": "no-auth-locations"
         },
         {
@@ -942,12 +947,12 @@
         },
         {
             "location": "/user-guide/custom-errors/",
-            "text": "Custom errors\n\u00b6\n\n\nIn case of an error in a request the body of the response is obtained from the \ndefault backend\n.\nEach request to the default backend includes two headers:\n\n\n\n\nX-Code\n indicates the HTTP code to be returned to the client.\n\n\nX-Format\n the value of the \nAccept\n header.\n\n\n\n\nImportant:\n The custom backend must return the correct HTTP status code to be returned. NGINX does not change the response from the custom default backend.\n\n\nUsing these two headers it's possible to use a custom backend service like \nthis one\n that inspects each request and returns a custom error page with the format expected by the client. Please check the example \ncustom-errors\n.\n\n\nNGINX sends additional headers that can be used to build custom response:\n\n\n\n\nX-Original-URI\n\n\nX-Namespace\n\n\nX-Ingress-Name\n\n\nX-Service-Name",
+            "text": "Custom errors\n\u00b6\n\n\nIn case of an error in a request the body of the response is obtained from the \ndefault backend\n.\nEach request to the default backend includes two headers:\n\n\n\n\nX-Code\n indicates the HTTP code to be returned to the client.\n\n\nX-Format\n the value of the \nAccept\n header.\n\n\n\n\n\n\nImportant\n\n\nThe custom backend must return the correct HTTP status code to be returned. NGINX does not change the response from the custom default backend.\n\n\n\n\nUsing these two headers it's possible to use a custom backend service like \nthis one\n that inspects each request and returns a custom error page with the format expected by the client. Please check the example \ncustom-errors\n.\n\n\nNGINX sends additional headers that can be used to build custom response:\n\n\n\n\nX-Original-URI\n\n\nX-Namespace\n\n\nX-Ingress-Name\n\n\nX-Service-Name",
             "title": "Custom errors"
         },
         {
             "location": "/user-guide/custom-errors/#custom-errors",
-            "text": "In case of an error in a request the body of the response is obtained from the  default backend .\nEach request to the default backend includes two headers:   X-Code  indicates the HTTP code to be returned to the client.  X-Format  the value of the  Accept  header.   Important:  The custom backend must return the correct HTTP status code to be returned. NGINX does not change the response from the custom default backend.  Using these two headers it's possible to use a custom backend service like  this one  that inspects each request and returns a custom error page with the format expected by the client. Please check the example  custom-errors .  NGINX sends additional headers that can be used to build custom response:   X-Original-URI  X-Namespace  X-Ingress-Name  X-Service-Name",
+            "text": "In case of an error in a request the body of the response is obtained from the  default backend .\nEach request to the default backend includes two headers:   X-Code  indicates the HTTP code to be returned to the client.  X-Format  the value of the  Accept  header.    Important  The custom backend must return the correct HTTP status code to be returned. NGINX does not change the response from the custom default backend.   Using these two headers it's possible to use a custom backend service like  this one  that inspects each request and returns a custom error page with the format expected by the client. Please check the example  custom-errors .  NGINX sends additional headers that can be used to build custom response:   X-Original-URI  X-Namespace  X-Ingress-Name  X-Service-Name",
             "title": "Custom errors"
         },
         {
@@ -972,7 +977,7 @@
         },
         {
             "location": "/user-guide/miscellaneous/",
-            "text": "Miscellaneous\n\u00b6\n\n\nConventions\n\u00b6\n\n\nAnytime we reference a tls secret, we mean (x509, pem encoded, RSA 2048, etc). You can generate such a certificate with:\n\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout \n${\nKEY_FILE\n}\n -out \n${\nCERT_FILE\n}\n -subj \"/CN=\n${\nHOST\n}\n/O=\n${\nHOST\n}\n\"\n\nand create the secret via \nkubectl create secret tls \n${\nCERT_NAME\n}\n --key \n${\nKEY_FILE\n}\n --cert \n${\nCERT_FILE\n}\n\n\nRequirements\n\u00b6\n\n\nThe default backend is a service which handles all url paths and hosts the nginx controller doesn't understand (i.e., all the requests that are not mapped with an Ingress).\nBasically a default backend exposes two URLs:\n\n\n\n\n/healthz\n that returns 200\n\n\n/\n that returns 404\n\n\n\n\nThe sub-directory \n/images/404-server\n provides a service which satisfies the requirements for a default backend.  The sub-directory \n/images/custom-error-pages\n provides an additional service for the purpose of customizing the error pages served via the default backend.\n\n\nSource IP address\n\u00b6\n\n\nBy default NGINX uses the content of the header \nX-Forwarded-For\n as the source of truth to get information about the client IP address. This works without issues in L7 \nif we configure the setting \nproxy-real-ip-cidr\n with the correct information of the IP/network address of trusted external load balancer.\n\n\nIf the ingress controller is running in AWS we need to use the VPC IPv4 CIDR.\n\n\nAnother option is to enable proxy protocol using \nuse-proxy-protocol: \"true\"\n.\n\n\nIn this mode NGINX does not use the content of the header to get the source IP address of the connection.\n\n\nProxy Protocol\n\u00b6\n\n\nIf you are using a L4 proxy to forward the traffic to the NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. 
To prevent this you could use the \nProxy Protocol\n for forwarding traffic, this will send the connection details before forwarding the actual TCP connection itself.\n\n\nAmongst others \nELBs in AWS\n and \nHAProxy\n support Proxy Protocol.\n\n\nWebsockets\n\u00b6\n\n\nSupport for websockets is provided by NGINX out of the box. No special configuration required.\n\n\nThe only requirement to avoid the close of connections is the increase of the values of \nproxy-read-timeout\n and \nproxy-send-timeout\n.\n\n\nThe default value of this settings is \n60 seconds\n.\n\n\nA more adequate value to support websockets is a value higher than one hour (\n3600\n).\n\n\nImportant:\n If the NGINX ingress controller is exposed with a service \ntype=LoadBalancer\n make sure the protocol between the loadbalancer and NGINX is TCP.\n\n\nOptimizing TLS Time To First Byte (TTTFB)\n\u00b6\n\n\nNGINX provides the configuration option \nssl_buffer_size\n to allow the optimization of the TLS record size.\n\n\nThis improves the \nTLS Time To First Byte\n (TTTFB).\nThe default value in the Ingress controller is \n4k\n (NGINX default is \n16k\n).\n\n\nRetries in non-idempotent methods\n\u00b6\n\n\nSince 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error.\nThe previous behavior can be restored using \nretry-non-idempotent=true\n in the configuration ConfigMap.\n\n\nLimitations\n\u00b6\n\n\n\n\nIngress rules for TLS require the definition of the field \nhost\n\n\n\n\nWhy endpoints and not services\n\u00b6\n\n\nThe NGINX ingress controller does not use \nServices\n to route traffic to the pods. Instead it uses the Endpoints API in order to bypass \nkube-proxy\n to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.",
+            "text": "Miscellaneous\n\u00b6\n\n\nConventions\n\u00b6\n\n\nAnytime we reference a tls secret, we mean (x509, pem encoded, RSA 2048, etc). You can generate such a certificate with:\n\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout \n${\nKEY_FILE\n}\n -out \n${\nCERT_FILE\n}\n -subj \"/CN=\n${\nHOST\n}\n/O=\n${\nHOST\n}\n\"\n\nand create the secret via \nkubectl create secret tls \n${\nCERT_NAME\n}\n --key \n${\nKEY_FILE\n}\n --cert \n${\nCERT_FILE\n}\n\n\nRequirements\n\u00b6\n\n\nThe default backend is a service which handles all url paths and hosts the nginx controller doesn't understand (i.e., all the requests that are not mapped with an Ingress).\nBasically a default backend exposes two URLs:\n\n\n\n\n/healthz\n that returns 200\n\n\n/\n that returns 404\n\n\n\n\nThe sub-directory \n/images/404-server\n provides a service which satisfies the requirements for a default backend.  The sub-directory \n/images/custom-error-pages\n provides an additional service for the purpose of customizing the error pages served via the default backend.\n\n\nSource IP address\n\u00b6\n\n\nBy default NGINX uses the content of the header \nX-Forwarded-For\n as the source of truth to get information about the client IP address. This works without issues in L7 \nif we configure the setting \nproxy-real-ip-cidr\n with the correct information of the IP/network address of trusted external load balancer.\n\n\nIf the ingress controller is running in AWS we need to use the VPC IPv4 CIDR.\n\n\nAnother option is to enable proxy protocol using \nuse-proxy-protocol: \"true\"\n.\n\n\nIn this mode NGINX does not use the content of the header to get the source IP address of the connection.\n\n\nProxy Protocol\n\u00b6\n\n\nIf you are using a L4 proxy to forward the traffic to the NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. 
To prevent this you could use the \nProxy Protocol\n for forwarding traffic, this will send the connection details before forwarding the actual TCP connection itself.\n\n\nAmongst others \nELBs in AWS\n and \nHAProxy\n support Proxy Protocol.\n\n\nWebsockets\n\u00b6\n\n\nSupport for websockets is provided by NGINX out of the box. No special configuration required.\n\n\nThe only requirement to avoid the close of connections is the increase of the values of \nproxy-read-timeout\n and \nproxy-send-timeout\n.\n\n\nThe default value of these settings is \n60 seconds\n.\n\n\nA more adequate value to support websockets is a value higher than one hour (\n3600\n).\n\n\n\n\nImportant\n\n\nIf the NGINX ingress controller is exposed with a service \ntype=LoadBalancer\n make sure the protocol between the loadbalancer and NGINX is TCP.\n\n\n\n\nOptimizing TLS Time To First Byte (TTTFB)\n\u00b6\n\n\nNGINX provides the configuration option \nssl_buffer_size\n to allow the optimization of the TLS record size.\n\n\nThis improves the \nTLS Time To First Byte\n (TTTFB).\nThe default value in the Ingress controller is \n4k\n (NGINX default is \n16k\n).\n\n\nRetries in non-idempotent methods\n\u00b6\n\n\nSince 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error.\nThe previous behavior can be restored using \nretry-non-idempotent=true\n in the configuration ConfigMap.\n\n\nLimitations\n\u00b6\n\n\n\n\nIngress rules for TLS require the definition of the field \nhost\n\n\n\n\nWhy endpoints and not services\n\u00b6\n\n\nThe NGINX ingress controller does not use \nServices\n to route traffic to the pods. Instead it uses the Endpoints API in order to bypass \nkube-proxy\n to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.",
             "title": "Miscellaneous"
         },
         {
@@ -1002,7 +1007,7 @@
         },
         {
             "location": "/user-guide/miscellaneous/#websockets",
-            "text": "Support for websockets is provided by NGINX out of the box. No special configuration required.  The only requirement to avoid the close of connections is the increase of the values of  proxy-read-timeout  and  proxy-send-timeout .  The default value of this settings is  60 seconds .  A more adequate value to support websockets is a value higher than one hour ( 3600 ).  Important:  If the NGINX ingress controller is exposed with a service  type=LoadBalancer  make sure the protocol between the loadbalancer and NGINX is TCP.",
+            "text": "Support for websockets is provided by NGINX out of the box. No special configuration required.  The only requirement to avoid the close of connections is the increase of the values of  proxy-read-timeout  and  proxy-send-timeout .  The default value of this settings is  60 seconds .  A more adequate value to support websockets is a value higher than one hour ( 3600 ).   Important  If the NGINX ingress controller is exposed with a service  type=LoadBalancer  make sure the protocol between the loadbalancer and NGINX is TCP.",
             "title": "Websockets"
         },
         {
@@ -1027,7 +1032,7 @@
         },
         {
             "location": "/user-guide/multiple-ingress/",
-            "text": "Multiple ingress controllers\n\u00b6\n\n\nRunning multiple ingress controllers\n\u00b6\n\n\nIf you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress, you need to specify the annotation \nkubernetes.io/ingress.class: \"nginx\"\n in all ingresses that you would like this controller to claim.  This mechanism also provides users the ability to run \nmultiple\n NGINX ingress controllers (e.g. one which serves public traffic, one which serves \"internal\" traffic).  When utilizing this functionality the option \n--ingress-class\n should be changed to a value unique for the cluster within the definition of the replication controller. Here is a partial example:\n\n\nspec\n:\n\n  \ntemplate\n:\n\n     \nspec\n:\n\n       \ncontainers\n:\n\n         \n-\n \nname\n:\n \nnginx\n-\ningress\n-\ninternal\n-\ncontroller\n\n           \nargs\n:\n\n             \n-\n \n/\nnginx\n-\ningress\n-\ncontroller\n\n             \n-\n \n'--default-backend-service=ingress/nginx-ingress-default-backend'\n\n             \n-\n \n'--election-id=ingress-controller-leader-internal'\n\n             \n-\n \n'--ingress-class=nginx-internal'\n\n             \n-\n \n'--configmap=ingress/nginx-ingress-internal-controller'\n\n\n\n\n\n\nAnnotation ingress.class\n\u00b6\n\n\nIf you have multiple Ingress controllers in a single cluster, you can pick one by specifying the \ningress.class\n \nannotation, eg creating an Ingress with an annotation like\n\n\nmetadata\n:\n\n  \nname\n:\n \nfoo\n\n  \nannotations\n:\n\n    \nkubernetes.io/ingress.class\n:\n \n\"gce\"\n\n\n\n\n\n\nwill target the GCE controller, forcing the nginx controller to ignore it, while an annotation like\n\n\nmetadata\n:\n\n  \nname\n:\n \nfoo\n\n  \nannotations\n:\n\n    \nkubernetes.io/ingress.class\n:\n \n\"nginx\"\n\n\n\n\n\n\nwill target the nginx controller, forcing the GCE controller to ignore it.\n\n\nNote\n: Deploying multiple ingress controller and not 
specifying the annotation will result in both controllers fighting to satisfy the Ingress.\n\n\nDisabling NGINX ingress controller\n\u00b6\n\n\nSetting the annotation \nkubernetes.io/ingress.class\n to any other value  which does not match a valid ingress class will force the NGINX Ingress controller to ignore your Ingress.  If you are only running a single NGINX ingress controller, this can be achieved by setting this to any value except \"nginx\" or an empty string.\n\n\nDo this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.",
+            "text": "Multiple ingress controllers\n\u00b6\n\n\nRunning multiple ingress controllers\n\u00b6\n\n\nIf you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress, you need to specify the annotation \nkubernetes.io/ingress.class: \"nginx\"\n in all ingresses that you would like this controller to claim.\n\n\nThis mechanism also provides users the ability to run \nmultiple\n NGINX ingress controllers (e.g. one which serves public traffic, one which serves \"internal\" traffic).  When utilizing this functionality the option \n--ingress-class\n should be changed to a value unique for the cluster within the definition of the replication controller. Here is a partial example:\n\n\nspec\n:\n\n  \ntemplate\n:\n\n     \nspec\n:\n\n       \ncontainers\n:\n\n         \n-\n \nname\n:\n \nnginx\n-\ningress\n-\ninternal\n-\ncontroller\n\n           \nargs\n:\n\n             \n-\n \n/\nnginx\n-\ningress\n-\ncontroller\n\n             \n-\n \n'--default-backend-service=ingress/nginx-ingress-default-backend'\n\n             \n-\n \n'--election-id=ingress-controller-leader-internal'\n\n             \n-\n \n'--ingress-class=nginx-internal'\n\n             \n-\n \n'--configmap=ingress/nginx-ingress-internal-controller'\n\n\n\n\n\n\nAnnotation ingress.class\n\u00b6\n\n\nIf you have multiple Ingress controllers in a single cluster, you can pick one by specifying the \ningress.class\n \nannotation, eg creating an Ingress with an annotation like\n\n\nmetadata\n:\n\n  \nname\n:\n \nfoo\n\n  \nannotations\n:\n\n    \nkubernetes.io/ingress.class\n:\n \n\"gce\"\n\n\n\n\n\n\nwill target the GCE controller, forcing the nginx controller to ignore it, while an annotation like\n\n\nmetadata\n:\n\n  \nname\n:\n \nfoo\n\n  \nannotations\n:\n\n    \nkubernetes.io/ingress.class\n:\n \n\"nginx\"\n\n\n\n\n\n\nwill target the nginx controller, forcing the GCE controller to ignore it.\n\n\nNote\n: Deploying multiple ingress controller and not 
specifying the annotation will result in both controllers fighting to satisfy the Ingress.\n\n\nDisabling NGINX ingress controller\n\u00b6\n\n\nSetting the annotation \nkubernetes.io/ingress.class\n to any other value  which does not match a valid ingress class will force the NGINX Ingress controller to ignore your Ingress.  If you are only running a single NGINX ingress controller, this can be achieved by setting this to any value except \"nginx\" or an empty string.\n\n\nDo this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.",
             "title": "Multiple ingress controllers"
         },
         {
@@ -1107,12 +1112,12 @@
         },
         {
             "location": "/user-guide/third-party-addons/modsecurity/",
-            "text": "ModSecurity Web Application Firewall\n\u00b6\n\n\nModSecurity is an open source, cross platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - https://www.modsecurity.org\n\n\nThe \nModSecurity-nginx\n connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).\n\n\nThe default ModSecurity configuration file is located in \n/etc/nginx/modsecurity/modsecurity.conf\n. This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration.\nTo enable the ModSecurity feature we need to specify \nenable-modsecurity: \"true\"\n in the configuration configmap.\n\n\nNOTE:\n the default configuration use detection only, because that minimises the chances of post-installation disruption.\nThe file \n/var/log/modsec_audit.log\n contains the log of ModSecurity.\n\n\nThe OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts.\nThe directory \n/etc/nginx/owasp-modsecurity-crs\n contains the https://github.com/SpiderLabs/owasp-modsecurity-crs repository.\nUsing \nenable-owasp-modsecurity-crs: \"true\"\n we enable the use of the rules.",
+            "text": "ModSecurity Web Application Firewall\n\u00b6\n\n\nModSecurity is an open source, cross platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - \nhttps://www.modsecurity.org\n\n\nThe \nModSecurity-nginx\n connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).\n\n\nThe default ModSecurity configuration file is located in \n/etc/nginx/modsecurity/modsecurity.conf\n. This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration.\nTo enable the ModSecurity feature we need to specify \nenable-modsecurity: \"true\"\n in the configuration configmap.\n\n\n\n\nNote:\n the default configuration use detection only, because that minimises the chances of post-installation disruption.\nThe file \n/var/log/modsec_audit.log\n contains the log of ModSecurity.\n\n\n\n\nThe OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts.\nThe directory \n/etc/nginx/owasp-modsecurity-crs\n contains the \nhttps://github.com/SpiderLabs/owasp-modsecurity-crs repository\n.\nUsing \nenable-owasp-modsecurity-crs: \"true\"\n we enable the use of the rules.",
             "title": "ModSecurity Web Application Firewall"
         },
         {
             "location": "/user-guide/third-party-addons/modsecurity/#modsecurity-web-application-firewall",
-            "text": "ModSecurity is an open source, cross platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - https://www.modsecurity.org  The  ModSecurity-nginx  connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).  The default ModSecurity configuration file is located in  /etc/nginx/modsecurity/modsecurity.conf . This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration.\nTo enable the ModSecurity feature we need to specify  enable-modsecurity: \"true\"  in the configuration configmap.  NOTE:  the default configuration use detection only, because that minimises the chances of post-installation disruption.\nThe file  /var/log/modsec_audit.log  contains the log of ModSecurity.  The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts.\nThe directory  /etc/nginx/owasp-modsecurity-crs  contains the https://github.com/SpiderLabs/owasp-modsecurity-crs repository.\nUsing  enable-owasp-modsecurity-crs: \"true\"  we enable the use of the rules.",
+            "text": "ModSecurity is an open source, cross platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis -  https://www.modsecurity.org  The  ModSecurity-nginx  connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).  The default ModSecurity configuration file is located in  /etc/nginx/modsecurity/modsecurity.conf . This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration.\nTo enable the ModSecurity feature we need to specify  enable-modsecurity: \"true\"  in the configuration configmap.   Note:  the default configuration use detection only, because that minimises the chances of post-installation disruption.\nThe file  /var/log/modsec_audit.log  contains the log of ModSecurity.   The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts.\nThe directory  /etc/nginx/owasp-modsecurity-crs  contains the  https://github.com/SpiderLabs/owasp-modsecurity-crs repository .\nUsing  enable-owasp-modsecurity-crs: \"true\"  we enable the use of the rules.",
             "title": "ModSecurity Web Application Firewall"
         },
         {
@@ -1127,7 +1132,7 @@
         },
         {
             "location": "/examples/PREREQUISITES/",
-            "text": "Prerequisites\n\u00b6\n\n\nMany of the examples in this directory have common prerequisites.\n\n\nTLS certificates\n\u00b6\n\n\nUnless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA\nkey/cert pair with an arbitrarily chosen hostname, created as follows\n\n\n$\n openssl req -x509 -nodes -days \n365\n -newkey rsa:2048 -keyout tls.key -out tls.crt -subj \n\"/CN=nginxsvc/O=nginxsvc\"\n\n\nGenerating a 2048 bit RSA private key\n\n\n................+++\n\n\n................+++\n\n\nwriting new private key to 'tls.key'\n\n\n-----\n\n\n\n$\n kubectl create secret tls tls-secret --key tls.key --cert tls.crt\n\nsecret \"tls-secret\" created\n\n\n\n\n\n\nCA Authentication\n\u00b6\n\n\nYou can act as your very own CA, or use an existing one. As an exercise / learning, we're going to generate our\nown CA, and also generate a client certificate.\n\n\nThese instructions are based on CoreOS OpenSSL \ninstructions\n\n\nGenerating a CA\n\u00b6\n\n\nFirst of all, you've to generate a CA. This is going to be the one who will sign your client certificates.\nIn real production world, you may face CAs with intermediate certificates, as the following:\n\n\n$\n openssl s_client -connect www.google.com:443\n\n[...]\n\n\n---\n\n\nCertificate chain\n\n\n 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com\n\n\n   i:/C=US/O=Google Inc/CN=Google Internet Authority G2\n\n\n 1 s:/C=US/O=Google Inc/CN=Google Internet Authority G2\n\n\n   i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA\n\n\n 2 s:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA\n\n\n   i:/C=US/O=Equifax/OU=Equifax Secure Certificate Authority\n\n\n\n\n\n\nTo generate our CA Certificate, we've to run the following commands:\n\n\n$\n openssl genrsa -out ca.key \n2048\n\n\n$\n openssl req -x509 -new -nodes -key ca.key -days \n10000\n -out ca.crt -subj \n\"/CN=example-ca\"\n\n\n\n\n\n\nThis will generate two files: A private key (ca.key) and a public key (ca.crt). 
This CA is valid for 10000 days.\nThe ca.crt can be used later in the step of creation of CA authentication secret.\n\n\nGenerating the client certificate\n\u00b6\n\n\nThe following steps generate a client certificate signed by the CA generated above. This client can be\nused to authenticate in a tls-auth configured ingress.\n\n\nFirst, we need to generate an 'openssl.cnf' file that will be used while signing the keys:\n\n\n[req]\n\n\nreq_extensions = v3_req\n\n\ndistinguished_name = req_distinguished_name\n\n\n[req_distinguished_name]\n\n\n[ v3_req ]\n\n\nbasicConstraints = CA:FALSE\n\n\nkeyUsage = nonRepudiation, digitalSignature, keyEncipherment\n\n\n\n\n\n\nThen, a user generates his very own private key (that he needs to keep secret)\nand a CSR (Certificate Signing Request) that will be sent to the CA to sign and generate a certificate.\n\n\n$\n openssl genrsa -out client1.key \n2048\n\n\n$\n openssl req -new -key client1.key -out client1.csr -subj \n\"/CN=client1\"\n -config openssl.cnf\n\n\n\n\n\nAs the CA receives the generated 'client1.csr' file, it signs it and generates a client.crt certificate:\n\n\n$\n openssl x509 -req -in client1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client1.crt -days \n365\n -extensions v3_req -extfile openssl.cnf\n\n\n\n\n\nThen, you'll have 3 files: the client.key (user's private key), client.crt (user's public key) and client.csr (disposable CSR).\n\n\nCreating the CA Authentication secret\n\u00b6\n\n\nIf you're using the CA Authentication feature, you need to generate a secret containing \nall the authorized CAs. You must download them from your CA site in PEM format (like the following):\n\n\n-----BEGIN CERTIFICATE-----\n[....]\n-----END CERTIFICATE-----\n\n\n\n\n\nYou can have as many certificates as you want. 
If they're in the binary DER format, \nyou can convert them as the following:\n\n\n$\n openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem\n\n\n\n\n\nThen, you've to concatenate them all in only one file, named 'ca.crt' as the following:\n\n\n$\n cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt\n\n\n\n\n\nThe final step is to create a secret with the content of this file. This secret is going to be used in \nthe TLS Auth directive:\n\n\n$\n kubectl create secret generic caingress --namespace\n=\ndefault --from-file\n=\nca.crt\n=\n\n\n\n\n\n\nNote: You can also generate the CA Authentication Secret along with the TLS Secret by using:\n\n\n$\n kubectl create secret generic caingress --namespace\n=\ndefault --from-file\n=\nca.crt\n=\n --from-file\n=\ntls.crt\n=\n --from-file\n=\ntls.key\n=\n\n\n\n\n\n\nTest HTTP Service\n\u00b6\n\n\nAll examples that require a test HTTP Service use the standard http-svc pod,\nwhich you can deploy as follows\n\n\n$\n kubectl create -f http-svc.yaml\n\nservice \"http-svc\" created\n\n\nreplicationcontroller \"http-svc\" created\n\n\n\n$\n kubectl get po\n\nNAME             READY     STATUS    RESTARTS   AGE\n\n\nhttp-svc-p1t3t   1/1       Running   0          1d\n\n\n\n$\n kubectl get svc\n\nNAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE\n\n\nhttp-svc         10.0.122.116        80:30301/TCP       1d\n\n\n\n\n\n\nYou can test that the HTTP Service works by exposing it temporarily\n\n\n$\n kubectl patch svc http-svc -p \n'{\"spec\":{\"type\": \"LoadBalancer\"}}'\n\n\n\"http-svc\" patched\n\n\n\n$\n kubectl get svc http-svc\n\nNAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE\n\n\nhttp-svc         10.0.122.116        80:30301/TCP       1d\n\n\n\n$\n kubectl describe svc http-svc\n\nName:                   http-svc\n\n\nNamespace:              default\n\n\nLabels:                 app=http-svc\n\n\nSelector:               app=http-svc\n\n\nType:              
     LoadBalancer\n\n\nIP:                     10.0.122.116\n\n\nLoadBalancer Ingress:   108.59.87.136\n\n\nPort:                   http    80/TCP\n\n\nNodePort:               http    30301/TCP\n\n\nEndpoints:              10.180.1.6:8080\n\n\nSession Affinity:       None\n\n\nEvents:\n\n\n  FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason          Message\n\n\n  --------- --------    -----   ----            -------------   --------    ------          -------\n\n\n  1m        1m      1   {service-controller }           Normal      Type            ClusterIP -> LoadBalancer\n\n\n  1m        1m      1   {service-controller }           Normal      CreatingLoadBalancer    Creating load balancer\n\n\n  16s       16s     1   {service-controller }           Normal      CreatedLoadBalancer Created load balancer\n\n\n\n$\n curl \n108\n.59.87.126\n\nCLIENT VALUES:\n\n\nclient_address=10.240.0.3\n\n\ncommand=GET\n\n\nreal path=/\n\n\nquery=nil\n\n\nrequest_version=1.1\n\n\nrequest_uri=http://108.59.87.136:8080/\n\n\n\nSERVER VALUES:\n\n\nserver_version=nginx: 1.9.11 - lua: 10001\n\n\n\nHEADERS RECEIVED:\n\n\naccept=*/*\n\n\nhost=108.59.87.136\n\n\nuser-agent=curl/7.46.0\n\n\nBODY:\n\n\n-no body in request-\n\n\n\n$\n kubectl patch svc http-svc -p \n'{\"spec\":{\"type\": \"NodePort\"}}'\n\n\n\"http-svc\" patched",
+            "text": "Prerequisites\n\u00b6\n\n\nMany of the examples in this directory have common prerequisites.\n\n\nTLS certificates\n\u00b6\n\n\nUnless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA\nkey/cert pair with an arbitrarily chosen hostname, created as follows\n\n\n$\n openssl req -x509 -nodes -days \n365\n -newkey rsa:2048 -keyout tls.key -out tls.crt -subj \n\"/CN=nginxsvc/O=nginxsvc\"\n\n\nGenerating a 2048 bit RSA private key\n\n\n................+++\n\n\n................+++\n\n\nwriting new private key to 'tls.key'\n\n\n-----\n\n\n\n$\n kubectl create secret tls tls-secret --key tls.key --cert tls.crt\n\nsecret \"tls-secret\" created\n\n\n\n\n\n\nCA Authentication\n\u00b6\n\n\nYou can act as your very own CA, or use an existing one. As an exercise / learning, we're going to generate our\nown CA, and also generate a client certificate.\n\n\nThese instructions are based on CoreOS OpenSSL. \nSee live doc.\n\n\nGenerating a CA\n\u00b6\n\n\nFirst of all, you've to generate a CA. This is going to be the one who will sign your client certificates.\nIn real production world, you may face CAs with intermediate certificates, as the following:\n\n\n$\n openssl s_client -connect www.google.com:443\n\n[...]\n\n\n---\n\n\nCertificate chain\n\n\n 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com\n\n\n   i:/C=US/O=Google Inc/CN=Google Internet Authority G2\n\n\n 1 s:/C=US/O=Google Inc/CN=Google Internet Authority G2\n\n\n   i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA\n\n\n 2 s:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA\n\n\n   i:/C=US/O=Equifax/OU=Equifax Secure Certificate Authority\n\n\n\n\n\n\nTo generate our CA Certificate, we've to run the following commands:\n\n\n$\n openssl genrsa -out ca.key \n2048\n\n\n$\n openssl req -x509 -new -nodes -key ca.key -days \n10000\n -out ca.crt -subj \n\"/CN=example-ca\"\n\n\n\n\n\n\nThis will generate two files: A private key (ca.key) and a public key (ca.crt). 
This CA is valid for 10000 days.\nThe ca.crt can be used later in the step of creation of CA authentication secret.\n\n\nGenerating the client certificate\n\u00b6\n\n\nThe following steps generate a client certificate signed by the CA generated above. This client can be\nused to authenticate in a tls-auth configured ingress.\n\n\nFirst, we need to generate an 'openssl.cnf' file that will be used while signing the keys:\n\n\n[req]\n\n\nreq_extensions = v3_req\n\n\ndistinguished_name = req_distinguished_name\n\n\n[req_distinguished_name]\n\n\n[ v3_req ]\n\n\nbasicConstraints = CA:FALSE\n\n\nkeyUsage = nonRepudiation, digitalSignature, keyEncipherment\n\n\n\n\n\n\nThen, a user generates his very own private key (that he needs to keep secret)\nand a CSR (Certificate Signing Request) that will be sent to the CA to sign and generate a certificate.\n\n\n$\n openssl genrsa -out client1.key \n2048\n\n\n$\n openssl req -new -key client1.key -out client1.csr -subj \n\"/CN=client1\"\n -config openssl.cnf\n\n\n\n\n\nAs the CA receives the generated 'client1.csr' file, it signs it and generates a client.crt certificate:\n\n\n$\n openssl x509 -req -in client1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client1.crt -days \n365\n -extensions v3_req -extfile openssl.cnf\n\n\n\n\n\nThen, you'll have 3 files: the client.key (user's private key), client.crt (user's public key) and client.csr (disposable CSR).\n\n\nCreating the CA Authentication secret\n\u00b6\n\n\nIf you're using the CA Authentication feature, you need to generate a secret containing \nall the authorized CAs. You must download them from your CA site in PEM format (like the following):\n\n\n-----BEGIN CERTIFICATE-----\n[....]\n-----END CERTIFICATE-----\n\n\n\n\n\nYou can have as many certificates as you want. 
If they're in the binary DER format, \nyou can convert them as the following:\n\n\n$\n openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem\n\n\n\n\n\nThen, you've to concatenate them all in only one file, named 'ca.crt' as the following:\n\n\n$\n cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt\n\n\n\n\n\nThe final step is to create a secret with the content of this file. This secret is going to be used in \nthe TLS Auth directive:\n\n\n$\n kubectl create secret generic caingress --namespace\n=\ndefault --from-file\n=\nca.crt\n=\n\n\n\n\n\n\nNote:\n You can also generate the CA Authentication Secret along with the TLS Secret by using:\n\n\n$\n kubectl create secret generic caingress --namespace\n=\ndefault --from-file\n=\nca.crt\n=\n --from-file\n=\ntls.crt\n=\n --from-file\n=\ntls.key\n=\n\n\n\n\n\n\nTest HTTP Service\n\u00b6\n\n\nAll examples that require a test HTTP Service use the standard http-svc pod,\nwhich you can deploy as follows\n\n\n$\n kubectl create -f http-svc.yaml\n\nservice \"http-svc\" created\n\n\nreplicationcontroller \"http-svc\" created\n\n\n\n$\n kubectl get po\n\nNAME             READY     STATUS    RESTARTS   AGE\n\n\nhttp-svc-p1t3t   1/1       Running   0          1d\n\n\n\n$\n kubectl get svc\n\nNAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE\n\n\nhttp-svc         10.0.122.116        80:30301/TCP       1d\n\n\n\n\n\n\nYou can test that the HTTP Service works by exposing it temporarily\n\n\n$\n kubectl patch svc http-svc -p \n'{\"spec\":{\"type\": \"LoadBalancer\"}}'\n\n\n\"http-svc\" patched\n\n\n\n$\n kubectl get svc http-svc\n\nNAME             CLUSTER-IP     EXTERNAL-IP   PORT(S)            AGE\n\n\nhttp-svc         10.0.122.116        80:30301/TCP       1d\n\n\n\n$\n kubectl describe svc http-svc\n\nName:                   http-svc\n\n\nNamespace:              default\n\n\nLabels:                 app=http-svc\n\n\nSelector:               app=http-svc\n\n\nType:            
       LoadBalancer\n\n\nIP:                     10.0.122.116\n\n\nLoadBalancer Ingress:   108.59.87.136\n\n\nPort:                   http    80/TCP\n\n\nNodePort:               http    30301/TCP\n\n\nEndpoints:              10.180.1.6:8080\n\n\nSession Affinity:       None\n\n\nEvents:\n\n\n  FirstSeen LastSeen    Count   From            SubObjectPath   Type        Reason          Message\n\n\n  --------- --------    -----   ----            -------------   --------    ------          -------\n\n\n  1m        1m      1   {service-controller }           Normal      Type            ClusterIP -> LoadBalancer\n\n\n  1m        1m      1   {service-controller }           Normal      CreatingLoadBalancer    Creating load balancer\n\n\n  16s       16s     1   {service-controller }           Normal      CreatedLoadBalancer Created load balancer\n\n\n\n$\n curl \n108\n.59.87.126\n\nCLIENT VALUES:\n\n\nclient_address=10.240.0.3\n\n\ncommand=GET\n\n\nreal path=/\n\n\nquery=nil\n\n\nrequest_version=1.1\n\n\nrequest_uri=http://108.59.87.136:8080/\n\n\n\nSERVER VALUES:\n\n\nserver_version=nginx: 1.9.11 - lua: 10001\n\n\n\nHEADERS RECEIVED:\n\n\naccept=*/*\n\n\nhost=108.59.87.136\n\n\nuser-agent=curl/7.46.0\n\n\nBODY:\n\n\n-no body in request-\n\n\n\n$\n kubectl patch svc http-svc -p \n'{\"spec\":{\"type\": \"NodePort\"}}'\n\n\n\"http-svc\" patched",
             "title": "Prerequisites"
         },
         {
@@ -1142,7 +1147,7 @@
         },
         {
             "location": "/examples/PREREQUISITES/#ca-authentication",
-            "text": "You can act as your very own CA, or use an existing one. As an exercise / learning, we're going to generate our\nown CA, and also generate a client certificate.  These instructions are based on CoreOS OpenSSL  instructions",
+            "text": "You can act as your very own CA, or use an existing one. As an exercise / learning, we're going to generate our\nown CA, and also generate a client certificate.  These instructions are based on CoreOS OpenSSL.  See live doc.",
             "title": "CA Authentication"
         },
         {
@@ -1157,7 +1162,7 @@
         },
         {
             "location": "/examples/PREREQUISITES/#creating-the-ca-authentication-secret",
-            "text": "If you're using the CA Authentication feature, you need to generate a secret containing \nall the authorized CAs. You must download them from your CA site in PEM format (like the following):  -----BEGIN CERTIFICATE-----\n[....]\n-----END CERTIFICATE-----  You can have as many certificates as you want. If they're in the binary DER format, \nyou can convert them as the following:  $  openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem  Then, you've to concatenate them all in only one file, named 'ca.crt' as the following:  $  cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt  The final step is to create a secret with the content of this file. This secret is going to be used in \nthe TLS Auth directive:  $  kubectl create secret generic caingress --namespace = default --from-file = ca.crt =   Note: You can also generate the CA Authentication Secret along with the TLS Secret by using:  $  kubectl create secret generic caingress --namespace = default --from-file = ca.crt =  --from-file = tls.crt =  --from-file = tls.key = ",
+            "text": "If you're using the CA Authentication feature, you need to generate a secret containing \nall the authorized CAs. You must download them from your CA site in PEM format (like the following):  -----BEGIN CERTIFICATE-----\n[....]\n-----END CERTIFICATE-----  You can have as many certificates as you want. If they're in the binary DER format, \nyou can convert them as the following:  $  openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem  Then, you've to concatenate them all in only one file, named 'ca.crt' as the following:  $  cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt  The final step is to create a secret with the content of this file. This secret is going to be used in \nthe TLS Auth directive:  $  kubectl create secret generic caingress --namespace = default --from-file = ca.crt =   Note:  You can also generate the CA Authentication Secret along with the TLS Secret by using:  $  kubectl create secret generic caingress --namespace = default --from-file = ca.crt =  --from-file = tls.crt =  --from-file = tls.key = ",
             "title": "Creating the CA Authentication secret"
         },
         {
@@ -1422,7 +1427,7 @@
         },
         {
             "location": "/examples/docker-registry/README/",
-            "text": "Docker registry\n\u00b6\n\n\nThis example demonstrates how to deploy a \ndocker registry\n in the cluster and configure Ingress enable access from Internet\n\n\nDeployment\n\u00b6\n\n\nFirst we deploy the docker registry in the cluster:\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/deployment.yaml\n\n\n\n\n\n\nImportant:\n DO NOT RUN THIS IN PRODUCTION.\nThis deployment uses \nemptyDir\n in the \nvolumeMount\n which means the contents of the registry will be deleted when the pod dies.\n\n\nThe next required step is creation of the ingress rules. To do this we have two options: with and without TLS\n\n\nWithout TLS\n\u00b6\n\n\nDownload and edit the yaml deployment replacing \nregistry.\n with a valid DNS name pointing to the ingress controller:\n\n\nwget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-without-tls.yaml\n\n\n\n\n\n\nImportant:\n running a docker registry without TLS requires we configure our local docker daemon with the insecure registry flag.\nPlease check \ndeploy a plain http registry\n\n\nWith TLS\n\u00b6\n\n\nDownload and edit the yaml deployment replacing \nregistry.\n with a valid DNS name pointing to the ingress controller:\n\n\nwget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-with-tls.yaml\n\n\n\n\n\n\nDeploy \nkube lego\n use \nLet's Encrypt\n certificates or edit the ingress rule to use a secret with an existing SSL certificate.\n\n\nTesting\n\u00b6\n\n\nTo test the registry is working correctly we download a known image from \ndocker hub\n, create a tag pointing to the new registry and upload the image:\n\n\ndocker pull ubuntu:16.04\n\n\ndocker tag ubuntu:16.04 `registry./ubuntu:16.04`\n\n\ndocker push `registry./ubuntu:16.04`\n\n\n\n\n\n\nPlease replace \nregistry.\n with your domain.",
+            "text": "Docker registry\n\u00b6\n\n\nThis example demonstrates how to deploy a \ndocker registry\n in the cluster and configure Ingress enable access from Internet\n\n\nDeployment\n\u00b6\n\n\nFirst we deploy the docker registry in the cluster:\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/deployment.yaml\n\n\n\n\n\n\n\n\nImportant\n\n\nDO NOT RUN THIS IN PRODUCTION\n\n\nThis deployment uses \nemptyDir\n in the \nvolumeMount\n which means the contents of the registry will be deleted when the pod dies.\n\n\n\n\nThe next required step is creation of the ingress rules. To do this we have two options: with and without TLS\n\n\nWithout TLS\n\u00b6\n\n\nDownload and edit the yaml deployment replacing \nregistry.\n with a valid DNS name pointing to the ingress controller:\n\n\nwget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-without-tls.yaml\n\n\n\n\n\n\n\n\nImportant\n\n\nRunning a docker registry without TLS requires we configure our local docker daemon with the insecure registry flag.\n\n\nPlease check \ndeploy a plain http registry\n\n\n\n\nWith TLS\n\u00b6\n\n\nDownload and edit the yaml deployment replacing \nregistry.\n with a valid DNS name pointing to the ingress controller:\n\n\nwget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-with-tls.yaml\n\n\n\n\n\n\nDeploy \nkube lego\n use \nLet's Encrypt\n certificates or edit the ingress rule to use a secret with an existing SSL certificate.\n\n\nTesting\n\u00b6\n\n\nTo test the registry is working correctly we download a known image from \ndocker hub\n, create a tag pointing to the new registry and upload the image:\n\n\ndocker pull ubuntu:16.04\n\n\ndocker tag ubuntu:16.04 `registry./ubuntu:16.04`\n\n\ndocker push `registry./ubuntu:16.04`\n\n\n\n\n\n\nPlease replace \nregistry.\n with your domain.",
             "title": "Docker registry"
         },
         {
@@ -1432,12 +1437,12 @@
         },
         {
             "location": "/examples/docker-registry/README/#deployment",
-            "text": "First we deploy the docker registry in the cluster:  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/deployment.yaml   Important:  DO NOT RUN THIS IN PRODUCTION.\nThis deployment uses  emptyDir  in the  volumeMount  which means the contents of the registry will be deleted when the pod dies.  The next required step is creation of the ingress rules. To do this we have two options: with and without TLS",
+            "text": "First we deploy the docker registry in the cluster:  kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/deployment.yaml    Important  DO NOT RUN THIS IN PRODUCTION  This deployment uses  emptyDir  in the  volumeMount  which means the contents of the registry will be deleted when the pod dies.   The next required step is creation of the ingress rules. To do this we have two options: with and without TLS",
             "title": "Deployment"
         },
         {
             "location": "/examples/docker-registry/README/#without-tls",
-            "text": "Download and edit the yaml deployment replacing  registry.  with a valid DNS name pointing to the ingress controller:  wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-without-tls.yaml   Important:  running a docker registry without TLS requires we configure our local docker daemon with the insecure registry flag.\nPlease check  deploy a plain http registry",
+            "text": "Download and edit the yaml deployment replacing  registry.  with a valid DNS name pointing to the ingress controller:  wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-without-tls.yaml    Important  Running a docker registry without TLS requires we configure our local docker daemon with the insecure registry flag.  Please check  deploy a plain http registry",
             "title": "Without TLS"
         },
         {
@@ -1452,7 +1457,7 @@
         },
         {
             "location": "/examples/external-auth/README/",
-            "text": "External Authentication\n\u00b6\n\n\nOverview\n\u00b6\n\n\nThe \nauth-url\n and \nauth-signin\n annotations allow you to use an external\nauthentication provider to protect your Ingress resources.\n\n\n(Note, this annotation requires \nnginx-ingress-controller v0.9.0\n or greater.)\n\n\nKey Detail\n\u00b6\n\n\nThis functionality is enabled by deploying multiple Ingress objects for a single host.\nOne Ingress object has no special annotations and handles authentication.\n\n\nOther Ingress objects can then be annotated in such a way that require the user to\nauthenticate against the first Ingress's endpoint, and can redirect \n401\ns to the\nsame endpoint.\n\n\nSample:\n\n\n...\n\n\nmetadata\n:\n\n  \nname\n:\n \napplication\n\n  \nannotations\n:\n\n    \n\"nginx.ingress.kubernetes.io/auth-url\"\n:\n \n\"https://$host/oauth2/auth\"\n\n    \n\"nginx.ingress.kubernetes.io/auth-signin\"\n:\n \n\"https://$host/oauth2/sign_in\"\n\n\n...\n\n\n\n\n\n\nExample: OAuth2 Proxy + Kubernetes-Dashboard\n\u00b6\n\n\nThis example will show you how to deploy \noauth2_proxy\n\ninto a Kubernetes cluster and use it to protect the Kubernetes Dashboard using github as oAuth2 provider\n\n\nPrepare\n\u00b6\n\n\n\n\nInstall the kubernetes dashboard\n\n\n\n\nkubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.5.0.yaml\n\n\n\n\n\n\n\n\nCreate a custom Github OAuth application https://github.com/settings/applications/new\n\n\n\n\n\n\n\n\nHomepage URL is the FQDN in the Ingress rule, like \nhttps://foo.bar.com\n\n\nAuthorization callback URL is the same as the base FQDN plus \n/oauth2\n, like \nhttps://foo.bar.com/oauth2\n\n\n\n\n\n\n\n\n\n\nConfigure oauth2_proxy values in the file oauth2-proxy.yaml with the values:\n\n\n\n\n\n\nOAUTH2_PROXY_CLIENT_ID with the github \n\n\n\n\n\nOAUTH2_PROXY_CLIENT_SECRET with the github \n\n\n\n\n\nOAUTH2_PROXY_COOKIE_SECRET with value of \npython\n \n-\nc\n \n'import os,base64; print 
base64.b64encode(os.urandom(16))'\n      \n\n\n\n\n\n\nCustomize the contents of the file dashboard-ingress.yaml:\n\n\n\n\n\n\nReplace \n__INGRESS_HOST__\n with a valid FQDN and \n__INGRESS_SECRET__\n with a Secret with a valid SSL certificate.\n\n\n\n\nDeploy the oauth2 proxy and the ingress rules running:\n\n\n\n\n$\n kubectl create -f oauth2-proxy.yaml,dashboard-ingress.yaml\n\n\n\n\n\nTest the oauth integration accessing the configured URL, like \nhttps://foo.bar.com",
+            "text": "External Authentication\n\u00b6\n\n\nOverview\n\u00b6\n\n\nThe \nauth-url\n and \nauth-signin\n annotations allow you to use an external\nauthentication provider to protect your Ingress resources.\n\n\n\n\nImportant\n\n\nThis annotation requires \nnginx-ingress-controller v0.9.0\n or greater.\n\n\n\n\nKey Detail\n\u00b6\n\n\nThis functionality is enabled by deploying multiple Ingress objects for a single host.\nOne Ingress object has no special annotations and handles authentication.\n\n\nOther Ingress objects can then be annotated in such a way that require the user to\nauthenticate against the first Ingress's endpoint, and can redirect \n401\ns to the\nsame endpoint.\n\n\nSample:\n\n\n...\n\n\nmetadata\n:\n\n  \nname\n:\n \napplication\n\n  \nannotations\n:\n\n    \n\"nginx.ingress.kubernetes.io/auth-url\"\n:\n \n\"https://$host/oauth2/auth\"\n\n    \n\"nginx.ingress.kubernetes.io/auth-signin\"\n:\n \n\"https://$host/oauth2/sign_in\"\n\n\n...\n\n\n\n\n\n\nExample: OAuth2 Proxy + Kubernetes-Dashboard\n\u00b6\n\n\nThis example will show you how to deploy \noauth2_proxy\n\ninto a Kubernetes cluster and use it to protect the Kubernetes Dashboard using github as oAuth2 provider\n\n\nPrepare\n\u00b6\n\n\n\n\nInstall the kubernetes dashboard\n\n\n\n\nkubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.5.0.yaml\n\n\n\n\n\n\n\n\nCreate a \ncustom Github OAuth application\n\n\n\n\n\n\n\n\nHomepage URL is the FQDN in the Ingress rule, like \nhttps://foo.bar.com\n\n\nAuthorization callback URL is the same as the base FQDN plus \n/oauth2\n, like \nhttps://foo.bar.com/oauth2\n\n\n\n\n\n\n\n\n\n\nConfigure oauth2_proxy values in the file oauth2-proxy.yaml with the values:\n\n\n\n\n\n\nOAUTH2_PROXY_CLIENT_ID with the github \n\n\n\n\n\nOAUTH2_PROXY_CLIENT_SECRET with the github \n\n\n\n\n\nOAUTH2_PROXY_COOKIE_SECRET with value of \npython\n \n-\nc\n \n'import os,base64; print 
base64.b64encode(os.urandom(16))'\n      \n\n\n\n\n\n\nCustomize the contents of the file dashboard-ingress.yaml:\n\n\n\n\n\n\nReplace \n__INGRESS_HOST__\n with a valid FQDN and \n__INGRESS_SECRET__\n with a Secret with a valid SSL certificate.\n\n\n\n\nDeploy the oauth2 proxy and the ingress rules running:\n\n\n\n\n$\n kubectl create -f oauth2-proxy.yaml,dashboard-ingress.yaml\n\n\n\n\n\nTest the oauth integration accessing the configured URL, like \nhttps://foo.bar.com",
             "title": "External Authentication"
         },
         {
@@ -1462,7 +1467,7 @@
         },
         {
             "location": "/examples/external-auth/README/#overview",
-            "text": "The  auth-url  and  auth-signin  annotations allow you to use an external\nauthentication provider to protect your Ingress resources.  (Note, this annotation requires  nginx-ingress-controller v0.9.0  or greater.)",
+            "text": "The  auth-url  and  auth-signin  annotations allow you to use an external\nauthentication provider to protect your Ingress resources.   Important  This annotation requires  nginx-ingress-controller v0.9.0  or greater.",
             "title": "Overview"
         },
         {
@@ -1477,7 +1482,7 @@
         },
         {
             "location": "/examples/external-auth/README/#prepare",
-            "text": "Install the kubernetes dashboard   kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.5.0.yaml    Create a custom Github OAuth application https://github.com/settings/applications/new     Homepage URL is the FQDN in the Ingress rule, like  https://foo.bar.com  Authorization callback URL is the same as the base FQDN plus  /oauth2 , like  https://foo.bar.com/oauth2      Configure oauth2_proxy values in the file oauth2-proxy.yaml with the values:    OAUTH2_PROXY_CLIENT_ID with the github     OAUTH2_PROXY_CLIENT_SECRET with the github     OAUTH2_PROXY_COOKIE_SECRET with value of  python   - c   'import os,base64; print base64.b64encode(os.urandom(16))'           Customize the contents of the file dashboard-ingress.yaml:    Replace  __INGRESS_HOST__  with a valid FQDN and  __INGRESS_SECRET__  with a Secret with a valid SSL certificate.   Deploy the oauth2 proxy and the ingress rules running:   $  kubectl create -f oauth2-proxy.yaml,dashboard-ingress.yaml  Test the oauth integration accessing the configured URL, like  https://foo.bar.com",
+            "text": "Install the kubernetes dashboard   kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.5.0.yaml    Create a  custom Github OAuth application     Homepage URL is the FQDN in the Ingress rule, like  https://foo.bar.com  Authorization callback URL is the same as the base FQDN plus  /oauth2 , like  https://foo.bar.com/oauth2      Configure oauth2_proxy values in the file oauth2-proxy.yaml with the values:    OAUTH2_PROXY_CLIENT_ID with the github     OAUTH2_PROXY_CLIENT_SECRET with the github     OAUTH2_PROXY_COOKIE_SECRET with value of  python   - c   'import os,base64; print base64.b64encode(os.urandom(16))'           Customize the contents of the file dashboard-ingress.yaml:    Replace  __INGRESS_HOST__  with a valid FQDN and  __INGRESS_SECRET__  with a Secret with a valid SSL certificate.   Deploy the oauth2 proxy and the ingress rules running:   $  kubectl create -f oauth2-proxy.yaml,dashboard-ingress.yaml  Test the oauth integration accessing the configured URL, like  https://foo.bar.com",
             "title": "Prepare"
         },
         {
@@ -1527,7 +1532,7 @@
         },
         {
             "location": "/examples/static-ip/README/",
-            "text": "Static IPs\n\u00b6\n\n\nThis example demonstrates how to assign a static-ip to an Ingress on through the Nginx controller.\n\n\nPrerequisites\n\u00b6\n\n\nYou need a \nTLS cert\n and a \ntest HTTP service\n for this example.\nYou will also need to make sure your Ingress targets exactly one Ingress\ncontroller by specifying the \ningress.class annotation\n,\nand that you have an ingress controller \nrunning\n in your cluster.\n\n\nAcquiring an IP\n\u00b6\n\n\nSince instances of the nginx controller actually run on nodes in your cluster,\nby default nginx Ingresses will only get static IPs if your cloudprovider\nsupports static IP assignments to nodes. On GKE/GCE for example, even though\nnodes get static IPs, the IPs are not retained across upgrade.\n\n\nTo acquire a static IP for the nginx ingress controller, simply put it\nbehind a Service of \nType=LoadBalancer\n.\n\n\nFirst, create a loadbalancer Service and wait for it to acquire an IP\n\n\n$\n kubectl create -f static-ip-svc.yaml\n\nservice \"nginx-ingress-lb\" created\n\n\n\n$\n kubectl get svc nginx-ingress-lb\n\nNAME               CLUSTER-IP     EXTERNAL-IP       PORT(S)                      AGE\n\n\nnginx-ingress-lb   10.0.138.113   104.154.109.191   80:31457/TCP,443:32240/TCP   15m\n\n\n\n\n\n\nthen, update the ingress controller so it adopts the static IP of the Service\nby passing the \n--publish-service\n flag (the example yaml used in the next step\nalready has it set to \"nginx-ingress-lb\").\n\n\n$\n kubectl create -f nginx-ingress-controller.yaml\n\ndeployment \"nginx-ingress-controller\" created\n\n\n\n\n\n\nAssigning the IP to an Ingress\n\u00b6\n\n\nFrom here on every Ingress created with the \ningress.class\n annotation set to\n\nnginx\n will get the IP allocated in the previous step\n\n\n$\n kubectl create -f nginx-ingress.yaml\n\ningress \"nginx-ingress\" created\n\n\n\n$\n kubectl get ing nginx-ingress\n\nNAME            HOSTS     ADDRESS           PORTS     
AGE\n\n\nnginx-ingress   *         104.154.109.191   80, 443   13m\n\n\n\n$\n curl \n104\n.154.109.191 -kL\n\nCLIENT VALUES:\n\n\nclient_address=10.180.1.25\n\n\ncommand=GET\n\n\nreal path=/\n\n\nquery=nil\n\n\nrequest_version=1.1\n\n\nrequest_uri=http://104.154.109.191:8080/\n\n\n...\n\n\n\n\n\n\nRetaining the IP\n\u00b6\n\n\nYou can test retention by deleting the Ingress\n\n\n$\n kubectl delete ing nginx-ingress\n\ningress \"nginx-ingress\" deleted\n\n\n\n$\n kubectl create -f nginx-ingress.yaml\n\ningress \"nginx-ingress\" created\n\n\n\n$\n kubectl get ing nginx-ingress\n\nNAME            HOSTS     ADDRESS           PORTS     AGE\n\n\nnginx-ingress   *         104.154.109.191   80, 443   13m\n\n\n\n\n\n\nNote that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all\nIngresses, because all requests are proxied through the same set of nginx\ncontrollers.\n\n\nPromote ephemeral to static IP\n\u00b6\n\n\nTo promote the allocated IP to static, you can update the Service manifest\n\n\n$\n kubectl patch svc nginx-ingress-lb -p \n'{\"spec\": {\"loadBalancerIP\": \"104.154.109.191\"}}'\n\n\n\"nginx-ingress-lb\" patched\n\n\n\n\n\n\nand promote the IP to static (promotion works differently for cloudproviders,\nprovided example is for GKE/GCE)\n`\n\n\n$\n gcloud compute addresses create nginx-ingress-lb --addresses \n104\n.154.109.191 --region us-central1\n\nCreated [https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/nginx-ingress-lb].\n\n\n---\n\n\naddress: 104.154.109.191\n\n\ncreationTimestamp: '2017-01-31T16:34:50.089-08:00'\n\n\ndescription: ''\n\n\nid: '5208037144487826373'\n\n\nkind: compute#address\n\n\nname: nginx-ingress-lb\n\n\nregion: us-central1\n\n\nselfLink: https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/nginx-ingress-lb\n\n\nstatus: IN_USE\n\n\nusers:\n\n\n- us-central1/forwardingRules/a09f6913ae80e11e6a8c542010af0000\n\n\n\n\n\n\nNow even if the Service 
is deleted, the IP will persist, so you can recreate the\nService with \nspec.loadBalancerIP\n set to \n104.154.109.191\n.",
+            "text": "Static IPs\n\u00b6\n\n\nThis example demonstrates how to assign a static IP to an Ingress through the Nginx controller.\n\n\nPrerequisites\n\u00b6\n\n\nYou need a \nTLS cert\n and a \ntest HTTP service\n for this example.\nYou will also need to make sure your Ingress targets exactly one Ingress\ncontroller by specifying the \ningress.class annotation\n,\nand that you have an ingress controller \nrunning\n in your cluster.\n\n\nAcquiring an IP\n\u00b6\n\n\nSince instances of the nginx controller actually run on nodes in your cluster,\nby default nginx Ingresses will only get static IPs if your cloudprovider\nsupports static IP assignments to nodes. On GKE/GCE for example, even though\nnodes get static IPs, the IPs are not retained across upgrade.\n\n\nTo acquire a static IP for the nginx ingress controller, simply put it\nbehind a Service of \nType=LoadBalancer\n.\n\n\nFirst, create a loadbalancer Service and wait for it to acquire an IP\n\n\n$\n kubectl create -f static-ip-svc.yaml\n\nservice \"nginx-ingress-lb\" created\n\n\n\n$\n kubectl get svc nginx-ingress-lb\n\nNAME               CLUSTER-IP     EXTERNAL-IP       PORT(S)                      AGE\n\n\nnginx-ingress-lb   10.0.138.113   104.154.109.191   80:31457/TCP,443:32240/TCP   15m\n\n\n\n\n\n\nthen, update the ingress controller so it adopts the static IP of the Service\nby passing the \n--publish-service\n flag (the example yaml used in the next step\nalready has it set to \"nginx-ingress-lb\").\n\n\n$\n kubectl create -f nginx-ingress-controller.yaml\n\ndeployment \"nginx-ingress-controller\" created\n\n\n\n\n\n\nAssigning the IP to an Ingress\n\u00b6\n\n\nFrom here on every Ingress created with the \ningress.class\n annotation set to\n\nnginx\n will get the IP allocated in the previous step\n\n\n$\n kubectl create -f nginx-ingress.yaml\n\ningress \"nginx-ingress\" created\n\n\n\n$\n kubectl get ing nginx-ingress\n\nNAME            HOSTS     ADDRESS           PORTS     
AGE\n\n\nnginx-ingress   *         104.154.109.191   80, 443   13m\n\n\n\n$\n curl \n104\n.154.109.191 -kL\n\nCLIENT VALUES:\n\n\nclient_address=10.180.1.25\n\n\ncommand=GET\n\n\nreal path=/\n\n\nquery=nil\n\n\nrequest_version=1.1\n\n\nrequest_uri=http://104.154.109.191:8080/\n\n\n...\n\n\n\n\n\n\nRetaining the IP\n\u00b6\n\n\nYou can test retention by deleting the Ingress\n\n\n$\n kubectl delete ing nginx-ingress\n\ningress \"nginx-ingress\" deleted\n\n\n\n$\n kubectl create -f nginx-ingress.yaml\n\ningress \"nginx-ingress\" created\n\n\n\n$\n kubectl get ing nginx-ingress\n\nNAME            HOSTS     ADDRESS           PORTS     AGE\n\n\nnginx-ingress   *         104.154.109.191   80, 443   13m\n\n\n\n\n\n\n\n\nNote that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all\nIngresses, because all requests are proxied through the same set of nginx\ncontrollers.\n\n\n\n\nPromote ephemeral to static IP\n\u00b6\n\n\nTo promote the allocated IP to static, you can update the Service manifest\n\n\n$\n kubectl patch svc nginx-ingress-lb -p \n'{\"spec\": {\"loadBalancerIP\": \"104.154.109.191\"}}'\n\n\n\"nginx-ingress-lb\" patched\n\n\n\n\n\n\nand promote the IP to static (promotion works differently for cloudproviders,\nprovided example is for GKE/GCE)\n\n\n$\n gcloud compute addresses create nginx-ingress-lb --addresses \n104\n.154.109.191 --region us-central1\n\nCreated [https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/nginx-ingress-lb].\n\n\n---\n\n\naddress: 104.154.109.191\n\n\ncreationTimestamp: '2017-01-31T16:34:50.089-08:00'\n\n\ndescription: ''\n\n\nid: '5208037144487826373'\n\n\nkind: compute#address\n\n\nname: nginx-ingress-lb\n\n\nregion: us-central1\n\n\nselfLink: https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/nginx-ingress-lb\n\n\nstatus: IN_USE\n\n\nusers:\n\n\n- us-central1/forwardingRules/a09f6913ae80e11e6a8c542010af0000\n\n\n\n\n\n\nNow even if the 
Service is deleted, the IP will persist, so you can recreate the\nService with \nspec.loadBalancerIP\n set to \n104.154.109.191\n.",
             "title": "Static IPs"
         },
         {
@@ -1552,7 +1557,7 @@
         },
         {
             "location": "/examples/static-ip/README/#retaining-the-ip",
-            "text": "You can test retention by deleting the Ingress  $  kubectl delete ing nginx-ingress ingress \"nginx-ingress\" deleted  $  kubectl create -f nginx-ingress.yaml ingress \"nginx-ingress\" created  $  kubectl get ing nginx-ingress NAME            HOSTS     ADDRESS           PORTS     AGE  nginx-ingress   *         104.154.109.191   80, 443   13m   Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all\nIngresses, because all requests are proxied through the same set of nginx\ncontrollers.",
+            "text": "You can test retention by deleting the Ingress  $  kubectl delete ing nginx-ingress ingress \"nginx-ingress\" deleted  $  kubectl create -f nginx-ingress.yaml ingress \"nginx-ingress\" created  $  kubectl get ing nginx-ingress NAME            HOSTS     ADDRESS           PORTS     AGE  nginx-ingress   *         104.154.109.191   80, 443   13m    Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all\nIngresses, because all requests are proxied through the same set of nginx\ncontrollers.",
             "title": "Retaining the IP"
         },
         {
@@ -1587,7 +1592,7 @@
         },
         {
             "location": "/development/",
-            "text": "Developing for NGINX Ingress controller\n\u00b6\n\n\nThis document explains how to get started with developing for NGINX Ingress controller.\nIt includes how to build, test, and release ingress controllers.\n\n\nQuick Start\n\u00b6\n\n\nInitial developer environment build\n\u00b6\n\n\nPrequisites\n: Minikube must be installed; See \nreleases\n for installation instructions. \n\n\nIf you are using \nMacOS\n and deploying to \nminikube\n, the following command will build the local nginx controller container image and deploy the ingress controller onto a minikube cluster with RBAC enabled in the namespace \ningress-nginx\n:\n\n\n$ make dev-env\n\n\n\n\n\nUpdating the deployment\n\u00b6\n\n\nThe nginx controller container image can be rebuilt using:\n\n\n$ \nARCH\n=\namd64 \nTAG\n=\ndev \nREGISTRY\n=\n$USER\n/ingress-controller make build container\n\n\n\n\n\nThe image will only be used by pods created after the rebuild. To delete old pods which will cause new ones to spin up:\n\n\n$ kubectl get pods -n ingress-nginx\n$ kubectl delete pod -n ingress-nginx nginx-ingress-controller-\n\n\n\n\n\nDependencies\n\u00b6\n\n\nThe build uses dependencies in the \nvendor\n directory, which\nmust be installed before building a binary/image. 
Occasionally, you\nmight need to update the dependencies.\n\n\nThis guide requires you to install the \ndep\n dependency tool.\n\n\nCheck the version of \ndep\n you are using and make sure it is up to date.\n\n\n$\n dep version\n\ndep:\n\n\n version     : devel\n\n\n build date  : \n\n\n git hash    : \n\n\n go version  : go1.9\n\n\n go compiler : gc\n\n\n platform    : linux/amd64\n\n\n\n\n\n\nIf you have an older version of \ndep\n, you can update it as follows:\n\n\n$\n go get -u github.com/golang/dep\n\n\n\n\n\nThis will automatically save the dependencies to the \nvendor/\n directory.\n\n\n$\n \ncd\n \n$GOPATH\n/src/k8s.io/ingress-nginx\n\n$\n dep ensure\n\n$\n dep ensure -update\n\n$\n dep prune\n\n\n\n\n\nBuilding\n\u00b6\n\n\nAll ingress controllers are built through a Makefile. Depending on your\nrequirements you can build a raw server binary, a local container image,\nor push an image to a remote repository.\n\n\nIn order to use your local Docker, you may need to set the following environment variables:\n\n\n#\n \n\"gcloud docker\"\n \n(\ndefault\n)\n or \n\"docker\"\n\n\n$\n \nexport\n \nDOCKER\n=\n\n\n\n#\n \n\"quay.io/kubernetes-ingress-controller\"\n \n(\ndefault\n)\n, \n\"index.docker.io\"\n, or your own registry\n\n$\n \nexport\n \nREGISTRY\n=\n\n\n\n\n\n\nTo find the registry simply run: \ndocker system info | grep Registry\n\n\nNginx Controller\n\u00b6\n\n\nBuild a raw server binary\n\n\n$\n make build\n\n\n\n\n\nTODO\n: add more specific instructions needed for raw server binary.\n\n\nBuild a local container image\n\n\n$\n \nTAG\n=\n \nREGISTRY\n=\n$USER\n/ingress-controller make docker-build\n\n\n\n\n\nPush the container image to a remote repository\n\n\n$\n \nTAG\n=\n \nREGISTRY\n=\n$USER\n/ingress-controller make docker-push\n\n\n\n\n\nDeploying\n\u00b6\n\n\nThere are several ways to deploy the ingress controller onto a cluster.\nPlease check the \ndeployment guide\n\n\nTesting\n\u00b6\n\n\nTo run unit-tests, just run\n\n\n$\n \ncd\n 
\n$GOPATH\n/src/k8s.io/ingress-nginx\n\n$\n make \ntest\n\n\n\n\n\n\nIf you have access to a Kubernetes cluster, you can also run e2e tests using ginkgo.\n\n\n$\n \ncd\n \n$GOPATH\n/src/k8s.io/ingress-nginx\n\n$\n make e2e-test\n\n\n\n\n\nTo run unit-tests for lua code locally, run:\n\n\n$\n \ncd\n \n$GOPATH\n/src/k8s.io/ingress-nginx\n\n$\n ./rootfs/etc/nginx/lua/test/up.sh\n\n$\n make lua-test\n\n\n\n\n\nLua tests are located in \n$GOPATH/src/k8s.io/ingress-nginx/rootfs/etc/nginx/lua/test\n. When creating a new test file it must follow the naming convention \n_test.lua\n or it will be ignored. \n\n\nReleasing\n\u00b6\n\n\nAll Makefiles will produce a release binary, as shown above. To publish this\nto a wider Kubernetes user base, push the image to a container registry, like\n\ngcr.io\n. All release images are hosted under \ngcr.io/google_containers\n and\ntagged according to a \nsemver\n scheme.\n\n\nAn example release might look like:\n\n\n$ make release\n\n\n\n\n\nPlease follow these guidelines to cut a release:\n\n\n\n\nUpdate the \nrelease\n\npage with a short description of the major changes that correspond to a given\nimage tag.\n\n\nCut a release branch, if appropriate. Release branches follow the format of\n\ncontroller-release-version\n. Typically, pre-releases are cut from HEAD.\nAll major feature work is done in HEAD. Specific bug fixes are\ncherry-picked into a release branch.\n\n\nIf you're not confident about the stability of the code,\n\ntag\n it as alpha or beta.\nTypically, a release branch should have stable code.",
+            "text": "Developing for NGINX Ingress controller\n\u00b6\n\n\nThis document explains how to get started with developing for NGINX Ingress controller.\nIt includes how to build, test, and release ingress controllers.\n\n\nQuick Start\n\u00b6\n\n\nInitial developer environment build\n\u00b6\n\n\n\n\nPrequisites\n: Minikube must be installed.\nSee \nreleases\n for installation instructions. \n\n\n\n\nIf you are using \nMacOS\n and deploying to \nminikube\n, the following command will build the local nginx controller container image and deploy the ingress controller onto a minikube cluster with RBAC enabled in the namespace \ningress-nginx\n:\n\n\n$ make dev-env\n\n\n\n\n\nUpdating the deployment\n\u00b6\n\n\nThe nginx controller container image can be rebuilt using:\n\n\n$ \nARCH\n=\namd64 \nTAG\n=\ndev \nREGISTRY\n=\n$USER\n/ingress-controller make build container\n\n\n\n\n\nThe image will only be used by pods created after the rebuild. To delete old pods which will cause new ones to spin up:\n\n\n$ kubectl get pods -n ingress-nginx\n$ kubectl delete pod -n ingress-nginx nginx-ingress-controller-\n\n\n\n\n\nDependencies\n\u00b6\n\n\nThe build uses dependencies in the \nvendor\n directory, which\nmust be installed before building a binary/image. 
Occasionally, you\nmight need to update the dependencies.\n\n\nThis guide requires you to install the \ndep\n dependency tool.\n\n\nCheck the version of \ndep\n you are using and make sure it is up to date.\n\n\n$\n dep version\n\ndep:\n\n\n version     : devel\n\n\n build date  : \n\n\n git hash    : \n\n\n go version  : go1.9\n\n\n go compiler : gc\n\n\n platform    : linux/amd64\n\n\n\n\n\n\nIf you have an older version of \ndep\n, you can update it as follows:\n\n\n$\n go get -u github.com/golang/dep\n\n\n\n\n\nThis will automatically save the dependencies to the \nvendor/\n directory.\n\n\n$\n \ncd\n \n$GOPATH\n/src/k8s.io/ingress-nginx\n\n$\n dep ensure\n\n$\n dep ensure -update\n\n$\n dep prune\n\n\n\n\n\nBuilding\n\u00b6\n\n\nAll ingress controllers are built through a Makefile. Depending on your\nrequirements you can build a raw server binary, a local container image,\nor push an image to a remote repository.\n\n\nIn order to use your local Docker, you may need to set the following environment variables:\n\n\n#\n \n\"gcloud docker\"\n \n(\ndefault\n)\n or \n\"docker\"\n\n\n$\n \nexport\n \nDOCKER\n=\n\n\n\n#\n \n\"quay.io/kubernetes-ingress-controller\"\n \n(\ndefault\n)\n, \n\"index.docker.io\"\n, or your own registry\n\n$\n \nexport\n \nREGISTRY\n=\n\n\n\n\n\n\nTo find the registry simply run: \ndocker system info | grep Registry\n\n\nNginx Controller\n\u00b6\n\n\nBuild a raw server binary\n\n\n$\n make build\n\n\n\n\n\nTODO\n: add more specific instructions needed for raw server binary.\n\n\nBuild a local container image\n\n\n$\n \nTAG\n=\n \nREGISTRY\n=\n$USER\n/ingress-controller make docker-build\n\n\n\n\n\nPush the container image to a remote repository\n\n\n$\n \nTAG\n=\n \nREGISTRY\n=\n$USER\n/ingress-controller make docker-push\n\n\n\n\n\nDeploying\n\u00b6\n\n\nThere are several ways to deploy the ingress controller onto a cluster.\nPlease check the \ndeployment guide\n\n\nTesting\n\u00b6\n\n\nTo run unit-tests, just run\n\n\n$\n \ncd\n 
\n$GOPATH\n/src/k8s.io/ingress-nginx\n\n$\n make \ntest\n\n\n\n\n\n\nIf you have access to a Kubernetes cluster, you can also run e2e tests using ginkgo.\n\n\n$\n \ncd\n \n$GOPATH\n/src/k8s.io/ingress-nginx\n\n$\n make e2e-test\n\n\n\n\n\nTo run unit-tests for lua code locally, run:\n\n\n$\n \ncd\n \n$GOPATH\n/src/k8s.io/ingress-nginx\n\n$\n ./rootfs/etc/nginx/lua/test/up.sh\n\n$\n make lua-test\n\n\n\n\n\nLua tests are located in \n$GOPATH/src/k8s.io/ingress-nginx/rootfs/etc/nginx/lua/test\n. When creating a new test file it must follow the naming convention \n_test.lua\n or it will be ignored. \n\n\nReleasing\n\u00b6\n\n\nAll Makefiles will produce a release binary, as shown above. To publish this\nto a wider Kubernetes user base, push the image to a container registry, like\n\ngcr.io\n. All release images are hosted under \ngcr.io/google_containers\n and\ntagged according to a \nsemver\n scheme.\n\n\nAn example release might look like:\n\n\n$ make release\n\n\n\n\n\nPlease follow these guidelines to cut a release:\n\n\n\n\nUpdate the \nrelease\n\npage with a short description of the major changes that correspond to a given\nimage tag.\n\n\nCut a release branch, if appropriate. Release branches follow the format of\n\ncontroller-release-version\n. Typically, pre-releases are cut from HEAD.\nAll major feature work is done in HEAD. Specific bug fixes are\ncherry-picked into a release branch.\n\n\nIf you're not confident about the stability of the code,\n\ntag\n it as alpha or beta.\nTypically, a release branch should have stable code.",
             "title": "Developing for NGINX Ingress controller"
         },
         {
@@ -1602,7 +1607,7 @@
         },
         {
             "location": "/development/#initial-developer-environment-build",
-            "text": "Prequisites : Minikube must be installed; See  releases  for installation instructions.   If you are using  MacOS  and deploying to  minikube , the following command will build the local nginx controller container image and deploy the ingress controller onto a minikube cluster with RBAC enabled in the namespace  ingress-nginx :  $ make dev-env",
+            "text": "Prequisites : Minikube must be installed.\nSee  releases  for installation instructions.    If you are using  MacOS  and deploying to  minikube , the following command will build the local nginx controller container image and deploy the ingress controller onto a minikube cluster with RBAC enabled in the namespace  ingress-nginx :  $ make dev-env",
             "title": "Initial developer environment build"
         },
         {
@@ -1652,7 +1657,7 @@
         },
         {
             "location": "/troubleshooting/",
-            "text": "Debug & Troubleshooting\n\u00b6\n\n\nDebug\n\u00b6\n\n\nUsing the flag \n--v=XX\n it is possible to increase the level of logging.\nIn particular:\n\n\n\n\n--v=2\n shows details using \ndiff\n about the changes in the configuration in nginx\n\n\n\n\nI0316 12:24:37.581267       1 utils.go:148] NGINX configuration diff a//etc/nginx/nginx.conf b//etc/nginx/nginx.conf\n\n\nI0316 12:24:37.581356       1 utils.go:149] --- /tmp/922554809  2016-03-16 12:24:37.000000000 +0000\n\n\n+++ /tmp/079811012  2016-03-16 12:24:37.000000000 +0000\n\n\n@@ -235,7 +235,6 @@\n\n\n\n     upstream default-http-svcx {\n\n\n         least_conn;\n\n\n-        server 10.2.112.124:5000;\n\n\n         server 10.2.208.50:5000;\n\n\n\n     }\n\n\nI0316 12:24:37.610073       1 command.go:69] change in configuration detected. Reloading...\n\n\n\n\n\n\n\n\n--v=3\n shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format\n\n\n--v=5\n configures NGINX in \ndebug mode\n\n\n\n\nTroubleshooting\n\u00b6\n\n\nAuthentication to the Kubernetes API Server\n\u00b6\n\n\nA number of components are involved in the authentication process and the first step is to narrow\ndown the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file.\nBoth authentications must work:\n\n\n+-------------+   service          +------------+\n|             |   authentication   |            |\n+  apiserver  +<-------------------+  ingress   |\n|             |                    | controller |\n+-------------+                    +------------+\n\n\n\n\n\nService authentication\n\n\nThe Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in two different ways:\n\n\n\n\n\n\nService Account:\n This is recommended, because nothing has to be configured. 
The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details.\n\n\n\n\n\n\nKubeconfig file:\n In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the \n--kubeconfig\n flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the \n--kubeconfig\n does not requires the flag \n--apiserver-host\n.\nThe format of the file is identical to \n~/.kube/config\n which is used by kubectl to connect to the API server. See 'kubeconfig' section for details.\n\n\n\n\n\n\nUsing the flag \n--apiserver-host\n:\n Using this flag \n--apiserver-host=http://localhost:8080\n it is possible to specify an unsecured API server or reach a remote kubernetes cluster using \nkubectl proxy\n.\nPlease do not use this approach in production.\n\n\n\n\n\n\nIn the diagram below you can see the full authentication flow with all options, starting with the browser\non the lower left hand side.\n\n\nKubernetes                                                  Workstation\n+---------------------------------------------------+     +------------------+\n|                                                   |     |                  |\n|  +-----------+   apiserver        +------------+  |     |  +------------+  |\n|  |           |   proxy            |            |  |     |  |            |  |\n|  | apiserver |                    |  ingress   |  |     |  |  ingress   |  |\n|  |           |                    | controller |  |     |  | controller |  |\n|  |           |                    |            |  |     |  |            |  |\n|  |           |                    |            |  |     |  |            |  |\n|  |           |  service account/  |            |  |     |  |            |  |\n|  |           |  kubeconfig        |            |  |     |  |            |  |\n|  |           
+<-------------------+            |  |     |  |            |  |\n|  |           |                    |            |  |     |  |            |  |\n|  +------+----+      kubeconfig    +------+-----+  |     |  +------+-----+  |\n|         |<--------------------------------------------------------|        |\n|                                                   |     |                  |\n+---------------------------------------------------+     +------------------+\n\n\n\n\n\nService Account\n\u00b6\n\n\nIf using a service account to connect to the API server, Dashboard expects the file\n\n/var/run/secrets/kubernetes.io/serviceaccount/token\n to be present. It provides a secret\ntoken that is required to authenticate with the API server.\n\n\nVerify with the following commands:\n\n\n# start a container that contains curl\n\n$ kubectl run \ntest\n --image\n=\ntutum/curl -- sleep \n10000\n\n\n\n# check that container is running\n\n$ kubectl get pods\nNAME                   READY     STATUS    RESTARTS   AGE\ntest-701078429-s5kca   \n1\n/1       Running   \n0\n          16s\n\n\n# check if secret exists\n\n$ kubectl \nexec\n test-701078429-s5kca ls /var/run/secrets/kubernetes.io/serviceaccount/\nca.crt\nnamespace\ntoken\n\n\n# get service IP of master\n\n$ kubectl get services\nNAME         CLUSTER-IP   EXTERNAL-IP   PORT\n(\nS\n)\n   AGE\nkubernetes   \n10\n.0.0.1             \n443\n/TCP   1d\n\n\n# check base connectivity from cluster inside\n\n$ kubectl \nexec\n test-701078429-s5kca -- curl -k https://10.0.0.1\nUnauthorized\n\n\n# connect using tokens\n\n$ \nTOKEN_VALUE\n=\n$(\nkubectl \nexec\n test-701078429-s5kca -- cat /var/run/secrets/kubernetes.io/serviceaccount/token\n)\n\n$ \necho\n \n$TOKEN_VALUE\n\neyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3Mi....9A\n$ kubectl \nexec\n test-701078429-s5kca -- curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H  \n\"Authorization: Bearer \n$TOKEN_VALUE\n\"\n https://10.0.0.1\n\n{\n\n  \n\"paths\"\n: \n[\n\n    
\n\"/api\"\n,\n    \n\"/api/v1\"\n,\n    \n\"/apis\"\n,\n    \n\"/apis/apps\"\n,\n    \n\"/apis/apps/v1alpha1\"\n,\n    \n\"/apis/authentication.k8s.io\"\n,\n    \n\"/apis/authentication.k8s.io/v1beta1\"\n,\n    \n\"/apis/authorization.k8s.io\"\n,\n    \n\"/apis/authorization.k8s.io/v1beta1\"\n,\n    \n\"/apis/autoscaling\"\n,\n    \n\"/apis/autoscaling/v1\"\n,\n    \n\"/apis/batch\"\n,\n    \n\"/apis/batch/v1\"\n,\n    \n\"/apis/batch/v2alpha1\"\n,\n    \n\"/apis/certificates.k8s.io\"\n,\n    \n\"/apis/certificates.k8s.io/v1alpha1\"\n,\n    \n\"/apis/extensions\"\n,\n    \n\"/apis/extensions/v1beta1\"\n,\n    \n\"/apis/policy\"\n,\n    \n\"/apis/policy/v1alpha1\"\n,\n    \n\"/apis/rbac.authorization.k8s.io\"\n,\n    \n\"/apis/rbac.authorization.k8s.io/v1alpha1\"\n,\n    \n\"/apis/storage.k8s.io\"\n,\n    \n\"/apis/storage.k8s.io/v1beta1\"\n,\n    \n\"/healthz\"\n,\n    \n\"/healthz/ping\"\n,\n    \n\"/logs\"\n,\n    \n\"/metrics\"\n,\n    \n\"/swaggerapi/\"\n,\n    \n\"/ui/\"\n,\n    \n\"/version\"\n\n  \n]\n\n\n}\n\n\n\n\n\n\nIf it is not working, there are two possible reasons:\n\n\n\n\n\n\nThe contents of the tokens are invalid. Find the secret name with \nkubectl get secrets | grep service-account\n and\ndelete it with \nkubectl delete secret \n. It will automatically be recreated.\n\n\n\n\n\n\nYou have a non-standard Kubernetes installation and the file containing the token may not be present.\nThe API server will mount a volume containing this file, but only if the API server is configured to use\nthe ServiceAccount admission controller.\nIf you experience this error, verify that your API server is using the ServiceAccount admission controller.\nIf you are configuring the API server by hand, you can set this with the \n--admission-control\n parameter.\nPlease note that you should use other admission controllers as well. 
Before configuring this option, you should\nread about admission controllers.\n\n\n\n\n\n\nMore information:\n\n\n\n\nUser Guide: Service Accounts\n\n\nCluster Administrator Guide: Managing Service Accounts\n\n\n\n\nKubeconfig\n\u00b6\n\n\nIf you want to use a kubeconfig file for authentication, follow the deploy procedure and \nadd the flag \n--kubeconfig=/etc/kubernetes/kubeconfig.yaml\n to the deployment",
+            "text": "Debug & Troubleshooting\n\u00b6\n\n\nDebug\n\u00b6\n\n\nUsing the flag \n--v=XX\n it is possible to increase the level of logging.\nIn particular:\n\n\n\n\n--v=2\n shows details using \ndiff\n about the changes in the configuration in nginx\n\n\n\n\nI0316 12:24:37.581267       1 utils.go:148] NGINX configuration diff a//etc/nginx/nginx.conf b//etc/nginx/nginx.conf\n\n\nI0316 12:24:37.581356       1 utils.go:149] --- /tmp/922554809  2016-03-16 12:24:37.000000000 +0000\n\n\n+++ /tmp/079811012  2016-03-16 12:24:37.000000000 +0000\n\n\n@@ -235,7 +235,6 @@\n\n\n\n     upstream default-http-svcx {\n\n\n         least_conn;\n\n\n-        server 10.2.112.124:5000;\n\n\n         server 10.2.208.50:5000;\n\n\n\n     }\n\n\nI0316 12:24:37.610073       1 command.go:69] change in configuration detected. Reloading...\n\n\n\n\n\n\n\n\n--v=3\n shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format\n\n\n--v=5\n configures NGINX in \ndebug mode\n\n\n\n\nTroubleshooting\n\u00b6\n\n\nAuthentication to the Kubernetes API Server\n\u00b6\n\n\nA number of components are involved in the authentication process and the first step is to narrow\ndown the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file.\nBoth authentications must work:\n\n\n+-------------+   service          +------------+\n|             |   authentication   |            |\n+  apiserver  +<-------------------+  ingress   |\n|             |                    | controller |\n+-------------+                    +------------+\n\n\n\n\n\nService authentication\n\n\nThe Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in two different ways:\n\n\n\n\n\n\nService Account:\n This is recommended, because nothing has to be configured. 
The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details.\n\n\n\n\n\n\nKubeconfig file:\n In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the \n--kubeconfig\n flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the \n--kubeconfig\n does not require the flag \n--apiserver-host\n.\nThe format of the file is identical to \n~/.kube/config\n which is used by kubectl to connect to the API server. See 'kubeconfig' section for details.\n\n\n\n\n\n\nUsing the flag \n--apiserver-host\n:\n Using this flag \n--apiserver-host=http://localhost:8080\n it is possible to specify an unsecured API server or reach a remote kubernetes cluster using \nkubectl proxy\n.\nPlease do not use this approach in production.\n\n\n\n\n\n\nIn the diagram below you can see the full authentication flow with all options, starting with the browser\non the lower left hand side.\n\n\nKubernetes                                                  Workstation\n+---------------------------------------------------+     +------------------+\n|                                                   |     |                  |\n|  +-----------+   apiserver        +------------+  |     |  +------------+  |\n|  |           |   proxy            |            |  |     |  |            |  |\n|  | apiserver |                    |  ingress   |  |     |  |  ingress   |  |\n|  |           |                    | controller |  |     |  | controller |  |\n|  |           |                    |            |  |     |  |            |  |\n|  |           |                    |            |  |     |  |            |  |\n|  |           |  service account/  |            |  |     |  |            |  |\n|  |           |  kubeconfig        |            |  |     |  |            |  |\n|  |           
+<-------------------+            |  |     |  |            |  |\n|  |           |                    |            |  |     |  |            |  |\n|  +------+----+      kubeconfig    +------+-----+  |     |  +------+-----+  |\n|         |<--------------------------------------------------------|        |\n|                                                   |     |                  |\n+---------------------------------------------------+     +------------------+\n\n\n\n\n\nService Account\n\u00b6\n\n\nIf using a service account to connect to the API server, Dashboard expects the file\n\n/var/run/secrets/kubernetes.io/serviceaccount/token\n to be present. It provides a secret\ntoken that is required to authenticate with the API server.\n\n\nVerify with the following commands:\n\n\n# start a container that contains curl\n\n$ kubectl run \ntest\n --image\n=\ntutum/curl -- sleep \n10000\n\n\n\n# check that container is running\n\n$ kubectl get pods\nNAME                   READY     STATUS    RESTARTS   AGE\ntest-701078429-s5kca   \n1\n/1       Running   \n0\n          16s\n\n\n# check if secret exists\n\n$ kubectl \nexec\n test-701078429-s5kca ls /var/run/secrets/kubernetes.io/serviceaccount/\nca.crt\nnamespace\ntoken\n\n\n# get service IP of master\n\n$ kubectl get services\nNAME         CLUSTER-IP   EXTERNAL-IP   PORT\n(\nS\n)\n   AGE\nkubernetes   \n10\n.0.0.1             \n443\n/TCP   1d\n\n\n# check base connectivity from cluster inside\n\n$ kubectl \nexec\n test-701078429-s5kca -- curl -k https://10.0.0.1\nUnauthorized\n\n\n# connect using tokens\n\n$ \nTOKEN_VALUE\n=\n$(\nkubectl \nexec\n test-701078429-s5kca -- cat /var/run/secrets/kubernetes.io/serviceaccount/token\n)\n\n$ \necho\n \n$TOKEN_VALUE\n\neyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3Mi....9A\n$ kubectl \nexec\n test-701078429-s5kca -- curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H  \n\"Authorization: Bearer \n$TOKEN_VALUE\n\"\n https://10.0.0.1\n\n{\n\n  \n\"paths\"\n: \n[\n\n    
\n\"/api\"\n,\n    \n\"/api/v1\"\n,\n    \n\"/apis\"\n,\n    \n\"/apis/apps\"\n,\n    \n\"/apis/apps/v1alpha1\"\n,\n    \n\"/apis/authentication.k8s.io\"\n,\n    \n\"/apis/authentication.k8s.io/v1beta1\"\n,\n    \n\"/apis/authorization.k8s.io\"\n,\n    \n\"/apis/authorization.k8s.io/v1beta1\"\n,\n    \n\"/apis/autoscaling\"\n,\n    \n\"/apis/autoscaling/v1\"\n,\n    \n\"/apis/batch\"\n,\n    \n\"/apis/batch/v1\"\n,\n    \n\"/apis/batch/v2alpha1\"\n,\n    \n\"/apis/certificates.k8s.io\"\n,\n    \n\"/apis/certificates.k8s.io/v1alpha1\"\n,\n    \n\"/apis/extensions\"\n,\n    \n\"/apis/extensions/v1beta1\"\n,\n    \n\"/apis/policy\"\n,\n    \n\"/apis/policy/v1alpha1\"\n,\n    \n\"/apis/rbac.authorization.k8s.io\"\n,\n    \n\"/apis/rbac.authorization.k8s.io/v1alpha1\"\n,\n    \n\"/apis/storage.k8s.io\"\n,\n    \n\"/apis/storage.k8s.io/v1beta1\"\n,\n    \n\"/healthz\"\n,\n    \n\"/healthz/ping\"\n,\n    \n\"/logs\"\n,\n    \n\"/metrics\"\n,\n    \n\"/swaggerapi/\"\n,\n    \n\"/ui/\"\n,\n    \n\"/version\"\n\n  \n]\n\n\n}\n\n\n\n\n\n\nIf it is not working, there are two possible reasons:\n\n\n\n\n\n\nThe contents of the tokens are invalid. Find the secret name with \nkubectl get secrets | grep service-account\n and\ndelete it with \nkubectl delete secret \n. It will automatically be recreated.\n\n\n\n\n\n\nYou have a non-standard Kubernetes installation and the file containing the token may not be present.\nThe API server will mount a volume containing this file, but only if the API server is configured to use\nthe ServiceAccount admission controller.\nIf you experience this error, verify that your API server is using the ServiceAccount admission controller.\nIf you are configuring the API server by hand, you can set this with the \n--admission-control\n parameter.\n\n\n\n\nNote that you should use other admission controllers as well. 
Before configuring this option, you should read about admission controllers.\n\n\n\n\n\n\n\n\nMore information:\n\n\n\n\nUser Guide: Service Accounts\n\n\nCluster Administrator Guide: Managing Service Accounts\n\n\n\n\nKubeconfig\n\u00b6\n\n\nIf you want to use a kubeconfig file for authentication, follow the deploy procedure and \nadd the flag \n--kubeconfig=/etc/kubernetes/kubeconfig.yaml\n to the deployment",
             "title": "Debug & Troubleshooting"
         },
         {
@@ -1677,7 +1682,7 @@
         },
         {
             "location": "/troubleshooting/#service-account",
-            "text": "If using a service account to connect to the API server, Dashboard expects the file /var/run/secrets/kubernetes.io/serviceaccount/token  to be present. It provides a secret\ntoken that is required to authenticate with the API server.  Verify with the following commands:  # start a container that contains curl \n$ kubectl run  test  --image = tutum/curl -- sleep  10000  # check that container is running \n$ kubectl get pods\nNAME                   READY     STATUS    RESTARTS   AGE\ntest-701078429-s5kca    1 /1       Running    0           16s # check if secret exists \n$ kubectl  exec  test-701078429-s5kca ls /var/run/secrets/kubernetes.io/serviceaccount/\nca.crt\nnamespace\ntoken # get service IP of master \n$ kubectl get services\nNAME         CLUSTER-IP   EXTERNAL-IP   PORT ( S )    AGE\nkubernetes    10 .0.0.1              443 /TCP   1d # check base connectivity from cluster inside \n$ kubectl  exec  test-701078429-s5kca -- curl -k https://10.0.0.1\nUnauthorized # connect using tokens \n$  TOKEN_VALUE = $( kubectl  exec  test-701078429-s5kca -- cat /var/run/secrets/kubernetes.io/serviceaccount/token ) \n$  echo   $TOKEN_VALUE \neyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3Mi....9A\n$ kubectl  exec  test-701078429-s5kca -- curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H   \"Authorization: Bearer  $TOKEN_VALUE \"  https://10.0.0.1 { \n   \"paths\" :  [ \n     \"/api\" ,\n     \"/api/v1\" ,\n     \"/apis\" ,\n     \"/apis/apps\" ,\n     \"/apis/apps/v1alpha1\" ,\n     \"/apis/authentication.k8s.io\" ,\n     \"/apis/authentication.k8s.io/v1beta1\" ,\n     \"/apis/authorization.k8s.io\" ,\n     \"/apis/authorization.k8s.io/v1beta1\" ,\n     \"/apis/autoscaling\" ,\n     \"/apis/autoscaling/v1\" ,\n     \"/apis/batch\" ,\n     \"/apis/batch/v1\" ,\n     \"/apis/batch/v2alpha1\" ,\n     \"/apis/certificates.k8s.io\" ,\n     \"/apis/certificates.k8s.io/v1alpha1\" ,\n     \"/apis/extensions\" ,\n     \"/apis/extensions/v1beta1\" 
,\n     \"/apis/policy\" ,\n     \"/apis/policy/v1alpha1\" ,\n     \"/apis/rbac.authorization.k8s.io\" ,\n     \"/apis/rbac.authorization.k8s.io/v1alpha1\" ,\n     \"/apis/storage.k8s.io\" ,\n     \"/apis/storage.k8s.io/v1beta1\" ,\n     \"/healthz\" ,\n     \"/healthz/ping\" ,\n     \"/logs\" ,\n     \"/metrics\" ,\n     \"/swaggerapi/\" ,\n     \"/ui/\" ,\n     \"/version\" \n   ]  }   If it is not working, there are two possible reasons:    The contents of the tokens are invalid. Find the secret name with  kubectl get secrets | grep service-account  and\ndelete it with  kubectl delete secret  . It will automatically be recreated.    You have a non-standard Kubernetes installation and the file containing the token may not be present.\nThe API server will mount a volume containing this file, but only if the API server is configured to use\nthe ServiceAccount admission controller.\nIf you experience this error, verify that your API server is using the ServiceAccount admission controller.\nIf you are configuring the API server by hand, you can set this with the  --admission-control  parameter.\nPlease note that you should use other admission controllers as well. Before configuring this option, you should\nread about admission controllers.    More information:   User Guide: Service Accounts  Cluster Administrator Guide: Managing Service Accounts",
+            "text": "If using a service account to connect to the API server, Dashboard expects the file /var/run/secrets/kubernetes.io/serviceaccount/token  to be present. It provides a secret\ntoken that is required to authenticate with the API server.  Verify with the following commands:  # start a container that contains curl \n$ kubectl run  test  --image = tutum/curl -- sleep  10000  # check that container is running \n$ kubectl get pods\nNAME                   READY     STATUS    RESTARTS   AGE\ntest-701078429-s5kca    1 /1       Running    0           16s # check if secret exists \n$ kubectl  exec  test-701078429-s5kca ls /var/run/secrets/kubernetes.io/serviceaccount/\nca.crt\nnamespace\ntoken # get service IP of master \n$ kubectl get services\nNAME         CLUSTER-IP   EXTERNAL-IP   PORT ( S )    AGE\nkubernetes    10 .0.0.1              443 /TCP   1d # check base connectivity from cluster inside \n$ kubectl  exec  test-701078429-s5kca -- curl -k https://10.0.0.1\nUnauthorized # connect using tokens \n$  TOKEN_VALUE = $( kubectl  exec  test-701078429-s5kca -- cat /var/run/secrets/kubernetes.io/serviceaccount/token ) \n$  echo   $TOKEN_VALUE \neyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3Mi....9A\n$ kubectl  exec  test-701078429-s5kca -- curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H   \"Authorization: Bearer  $TOKEN_VALUE \"  https://10.0.0.1 { \n   \"paths\" :  [ \n     \"/api\" ,\n     \"/api/v1\" ,\n     \"/apis\" ,\n     \"/apis/apps\" ,\n     \"/apis/apps/v1alpha1\" ,\n     \"/apis/authentication.k8s.io\" ,\n     \"/apis/authentication.k8s.io/v1beta1\" ,\n     \"/apis/authorization.k8s.io\" ,\n     \"/apis/authorization.k8s.io/v1beta1\" ,\n     \"/apis/autoscaling\" ,\n     \"/apis/autoscaling/v1\" ,\n     \"/apis/batch\" ,\n     \"/apis/batch/v1\" ,\n     \"/apis/batch/v2alpha1\" ,\n     \"/apis/certificates.k8s.io\" ,\n     \"/apis/certificates.k8s.io/v1alpha1\" ,\n     \"/apis/extensions\" ,\n     \"/apis/extensions/v1beta1\" 
,\n     \"/apis/policy\" ,\n     \"/apis/policy/v1alpha1\" ,\n     \"/apis/rbac.authorization.k8s.io\" ,\n     \"/apis/rbac.authorization.k8s.io/v1alpha1\" ,\n     \"/apis/storage.k8s.io\" ,\n     \"/apis/storage.k8s.io/v1beta1\" ,\n     \"/healthz\" ,\n     \"/healthz/ping\" ,\n     \"/logs\" ,\n     \"/metrics\" ,\n     \"/swaggerapi/\" ,\n     \"/ui/\" ,\n     \"/version\" \n   ]  }   If it is not working, there are two possible reasons:    The contents of the tokens are invalid. Find the secret name with  kubectl get secrets | grep service-account  and\ndelete it with  kubectl delete secret  . It will automatically be recreated.    You have a non-standard Kubernetes installation and the file containing the token may not be present.\nThe API server will mount a volume containing this file, but only if the API server is configured to use\nthe ServiceAccount admission controller.\nIf you experience this error, verify that your API server is using the ServiceAccount admission controller.\nIf you are configuring the API server by hand, you can set this with the  --admission-control  parameter.   Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers.     More information:   User Guide: Service Accounts  Cluster Administrator Guide: Managing Service Accounts",
             "title": "Service Account"
         },
         {
diff --git a/sitemap.xml b/sitemap.xml
index 48fba364e..ecd6f1a51 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -4,7 +4,7 @@
     
     
      /
-     2018-04-27
+     2018-04-29
      daily
     
     
@@ -13,13 +13,13 @@
         
     
      /deploy/
-     2018-04-27
+     2018-04-29
      daily
     
         
     
      /deploy/rbac/
-     2018-04-27
+     2018-04-29
      daily
     
         
@@ -35,49 +35,49 @@
         
     
      /user-guide/cli-arguments/
-     2018-04-27
+     2018-04-29
      daily
     
         
     
      /user-guide/custom-errors/
-     2018-04-27
+     2018-04-29
      daily
     
         
     
      /user-guide/exposing-tcp-udp-services/
-     2018-04-27
+     2018-04-29
      daily
     
         
     
      /user-guide/external-articles/
-     2018-04-27
+     2018-04-29
      daily
     
         
     
      /user-guide/miscellaneous/
-     2018-04-27
+     2018-04-29
      daily
     
         
     
      /user-guide/multiple-ingress/
-     2018-04-27
+     2018-04-29
      daily
     
         
     
      /user-guide/nginx-status-page/
-     2018-04-27
+     2018-04-29
      daily
     
         
     
      /user-guide/tls/
-     2018-04-27
+     2018-04-29
      daily
     
         
@@ -93,19 +93,19 @@
         
     
      /examples/PREREQUISITES/
-     2018-04-27
+     2018-04-29
      daily
     
         
     
      /examples/README/
-     2018-04-27
+     2018-04-29
      daily
     
         
     
      /examples/affinity/cookie/README/
-     2018-04-27
+     2018-04-29
      daily
     
         
@@ -123,37 +123,37 @@
         
     
      /examples/docker-registry/README/
-     2018-04-27
+     2018-04-29
      daily
     
         
     
      /examples/external-auth/README/
-     2018-04-27
+     2018-04-29
      daily
     
         
     
      /examples/multi-tls/README/
-     2018-04-27
+     2018-04-29
      daily
     
         
     
      /examples/rewrite/README/
-     2018-04-27
+     2018-04-29
      daily
     
         
     
      /examples/static-ip/README/
-     2018-04-27
+     2018-04-29
      daily
     
         
     
      /examples/tls-termination/README/
-     2018-04-27
+     2018-04-29
      daily
     
         
@@ -162,7 +162,7 @@
     
     
      /development/
-     2018-04-27
+     2018-04-29
      daily
     
     
@@ -170,7 +170,7 @@
     
     
      /ingress-controller-catalog/
-     2018-04-27
+     2018-04-29
      daily
     
     
@@ -178,7 +178,7 @@
     
     
      /troubleshooting/
-     2018-04-27
+     2018-04-29
      daily
     
     
diff --git a/troubleshooting/index.html b/troubleshooting/index.html
index 682f35009..23ff4b4c0 100644
--- a/troubleshooting/index.html
+++ b/troubleshooting/index.html
@@ -1293,9 +1293,10 @@ delete it with kubectl delete secret <name>--admission-control parameter.
-Please note that you should use other admission controllers as well. Before configuring this option, you should
-read about admission controllers.

+If you are configuring the API server by hand, you can set this with the --admission-control parameter.

+
+

Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers.

+

More information:

diff --git a/user-guide/custom-errors/index.html b/user-guide/custom-errors/index.html index eab406aab..9def4071a 100644 --- a/user-guide/custom-errors/index.html +++ b/user-guide/custom-errors/index.html @@ -1025,7 +1025,10 @@ Each request to the default backend includes two headers:

  • X-Code indicates the HTTP code to be returned to the client.
  • X-Format the value of the Accept header.
  • -

    Important: The custom backend must return the correct HTTP status code to be returned. NGINX does not change the response from the custom default backend.

    +
    +

    Important

    +

    The custom backend must itself return the HTTP status code that should be sent to the client. NGINX does not change the response from the custom default backend.

    +

    Using these two headers it's possible to use a custom backend service like this one that inspects each request and returns a custom error page with the format expected by the client. Please check the example custom-errors.
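The contract described above can be sketched as a tiny default backend: it echoes back whatever status NGINX announces in X-Code and the content type from X-Format. This is a minimal illustration, not the project's custom-errors example; the handler name and body are made up.

```python
import http.server
import threading
import urllib.error
import urllib.request

class ErrorBackend(http.server.BaseHTTPRequestHandler):
    """Hypothetical default backend honoring the X-Code/X-Format headers."""

    def do_GET(self):
        # NGINX tells the backend which status to emit; the backend must
        # set it itself, since NGINX does not rewrite the response.
        code = int(self.headers.get("X-Code", "404"))
        fmt = self.headers.get("X-Format", "text/plain")
        body = f"error {code}".encode()
        self.send_response(code)
        self.send_header("Content-Type", fmt)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), ErrorBackend)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate NGINX forwarding an error to the default backend.
request = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/",
    headers={"X-Code": "503", "X-Format": "text/html"},
)
try:
    urllib.request.urlopen(request)
    status = 200
except urllib.error.HTTPError as error:
    status = error.code  # the backend's own status is passed through
print(status)
server.shutdown()
```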

    NGINX sends additional headers that can be used to build custom response:

    Example: nginx.ingress.kubernetes.io/cors-max-age: 600

    -

    For more information please check https://enable-cors.org/server_nginx.html

    +

    For more information please see https://enable-cors.org

    Server Alias

    To add Server Aliases to an Ingress rule add the annotation nginx.ingress.kubernetes.io/server-alias: "<alias>". This will create a server with the same configuration, but with the provided alias as its server_name instead of the host.

    -

    Note: A server-alias name cannot conflict with the hostname of an existing server. If it does the server-alias -annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created -the new server configuration will take place over the alias configuration.

    -

    For more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name

    +
    +

    Note

    +

    A server-alias name cannot conflict with the hostname of an existing server. If it does, the server-alias annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created, the new server configuration will take precedence over the alias configuration.

    +
    +

    For more information please see http://nginx.org
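As a sketch, an Ingress using this annotation could look like the following; the host, alias, and service names are made up for illustration:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/server-alias: "www.example.com"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: example-service
          servicePort: 80
```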

    Server snippet

    Using the annotation nginx.ingress.kubernetes.io/server-snippet it is possible to add custom configuration in the server configuration block.

    apiVersion: extensions/v1beta1
    @@ -1928,13 +1955,16 @@ the new server configuration will take place over the alias configuration.

    -

    Important: This annotation can be used only once per host

    +
    +

    Important

    +

    This annotation can be used only once per host

    +

    Client Body Buffer Size

    Sets buffer size for reading client request body per location. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is applied to each location provided in the ingress rule.

    -

    Note: The annotation value must be given in a valid format, otherwise it will be ignored.

    For example, to set the client-body-buffer-size the following can be done:

    • nginx.ingress.kubernetes.io/client-body-buffer-size: "1000" # 1000 bytes
    • @@ -1943,7 +1973,7 @@ For example to set the client-body-buffer-size the following can be done:

    • nginx.ingress.kubernetes.io/client-body-buffer-size: 1m # 1 megabyte
    • nginx.ingress.kubernetes.io/client-body-buffer-size: 1M # 1 megabyte
    -

    For more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size

    +

    For more information please see http://nginx.org
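The size formats listed above can be sketched with a small parser; the helper below is hypothetical (not controller code) and covers only the plain-byte, kilobyte, and megabyte forms shown in the bullets:

```python
import re

# nginx size suffixes used by client-body-buffer-size values:
# no suffix = bytes, k/K = kibibytes, m/M = mebibytes.
_UNITS = {"": 1, "k": 1024, "m": 1024 ** 2}

def parse_nginx_size(value: str) -> int:
    """Convert a size string such as "1000", "1k" or "1M" to bytes."""
    match = re.fullmatch(r"(\d+)([kKmM]?)", value.strip())
    if match is None:
        # An invalid format would cause the annotation to be ignored.
        raise ValueError(f"invalid nginx size: {value!r}")
    number, unit = match.groups()
    return int(number) * _UNITS[unit.lower()]

print(parse_nginx_size("1000"))  # 1000 bytes
print(parse_nginx_size("1k"))    # 1024 bytes
print(parse_nginx_size("1M"))    # 1048576 bytes
```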

    External Authentication

    To use an existing service that provides authentication the Ingress rule can be annotated with nginx.ingress.kubernetes.io/auth-url to indicate the URL where the HTTP request should be sent.

    nginx.ingress.kubernetes.io/auth-url: "URL to the authentication service"
    @@ -1971,15 +2001,23 @@ For example to set the client-body-buffer-size the following can be done:

    This annotation allows returning a permanent redirect instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com would redirect everything to Google.

    SSL Passthrough

    The annotation nginx.ingress.kubernetes.io/ssl-passthrough allows configuring TLS termination in the pod instead of in NGINX.

    -

    Important:

    +
    +

    Important

      -
    • Using the annotation nginx.ingress.kubernetes.io/ssl-passthrough invalidates all the other available annotations. This is because SSL Passthrough works in L4 (TCP).
    • -
    • The use of this annotation requires Proxy Protocol to be enabled in the load-balancer. For example enabling Proxy Protocol for AWS ELB is described here. If you're using ingress-controller without load balancer then the flag --enable-ssl-passthrough is required (by default it is disabled).
    • +
    • +

      Using the annotation nginx.ingress.kubernetes.io/ssl-passthrough invalidates all the other available annotations. This is because SSL Passthrough works in L4 (TCP).

      +
    • +
    • +

      The use of this annotation requires Proxy Protocol to be enabled in the load-balancer. For example enabling Proxy Protocol for AWS ELB is described here. If you're using ingress-controller without load balancer then the flag --enable-ssl-passthrough is required (by default it is disabled).

      +
    +

    Secure backends

    By default NGINX uses http to reach the services. Adding the annotation nginx.ingress.kubernetes.io/secure-backends: "true" in the Ingress rule changes the protocol to https. If you want to validate the upstream against a specific certificate, you can create a secret with it and reference the secret with the annotation nginx.ingress.kubernetes.io/secure-verify-ca-secret.

    -

    Please note that if an invalid or non-existent secret is given, the NGINX ingress controller will ignore the secure-backends annotation.

    +
    +

    Note that if an invalid or non-existent secret is given, the NGINX ingress controller will ignore the secure-backends annotation.

    +

    Service Upstream

    By default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration. This annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port. This can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. See issue #257.

    Known Issues

    @@ -1995,16 +2033,18 @@ If you want to validate the upstream against a specific certificate, you can cre

    Redirect from/to www

    In some scenarios it is required to redirect from www.domain.com to domain.com or vice versa. To enable this feature use the annotation nginx.ingress.kubernetes.io/from-to-www-redirect: "true"

    -

    Important: -If at some point a new Ingress is created with a host equal to one of the options (like domain.com) the annotation will be omitted.

    +
    +

    Important

    +

    If at some point a new Ingress is created with a host equal to one of the options (like domain.com) the annotation will be omitted.

    +

    Whitelist source range

    You can specify the allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma-separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.

    To configure this setting globally for all Ingress rules, the whitelist-source-range value may be set in the NGINX ConfigMap.

    -

    Note: Adding an annotation to an Ingress rule overrides any global restriction.

    +

    Note: Adding an annotation to an Ingress rule overrides any global restriction.
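The semantics of the annotation value can be sketched with Python's ipaddress module; the helper name here is hypothetical and only illustrates how a comma-separated CIDR list is matched against a client address:

```python
import ipaddress

def is_whitelisted(client_ip: str, source_range: str) -> bool:
    """Return True if client_ip falls inside any CIDR in the annotation value."""
    networks = [ipaddress.ip_network(cidr.strip())
                for cidr in source_range.split(",")]
    address = ipaddress.ip_address(client_ip)
    return any(address in network for network in networks)

# The example value from the text: a /24 range plus a single host.
annotation = "10.0.0.0/24,172.10.0.1"
print(is_whitelisted("10.0.0.42", annotation))    # True
print(is_whitelisted("192.168.1.5", annotation))  # False
```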

    If you use the cookie type you can also specify the name of the cookie that will be used to route the requests with the annotation nginx.ingress.kubernetes.io/session-cookie-name. The default is to create a cookie named 'INGRESSCOOKIE'.

    In case of NGINX the annotation nginx.ingress.kubernetes.io/session-cookie-hash defines which algorithm will be used to 'hash' the used upstream. Default value is md5 and possible values are md5, sha1 and index. -The index option is not hashed, an in-memory index is used instead, it's quicker and the overhead is shorter Warning: the matching against upstream servers list is inconsistent. So, at reload, if upstreams servers has changed, index values are not guaranteed to correspond to the same server as before! USE IT WITH CAUTION and only if you need to!

    +The index option is not hashed; an in-memory index is used instead, which is quicker and has less overhead. Warning: the matching against the upstream servers list is inconsistent. So, at reload, if the upstream servers have changed, index values are not guaranteed to correspond to the same server as before! Use it with caution and only if you need to!

    In NGINX this feature is implemented by the third party module nginx-sticky-module-ng. The workflow used to define which upstream server will be used is explained here
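The difference between md5 and index can be sketched as follows. This is an illustration of the failure mode described in the warning, not the module's actual implementation; the addresses and cookie values are made up:

```python
import hashlib

# "md5" identifies an upstream by a hash derived from the server itself,
# while "index" stores only a position in the current upstream list.
upstreams_before = ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"]
upstreams_after = ["10.0.0.2:80", "10.0.0.3:80"]  # first pod went away

def md5_id(server):
    return hashlib.md5(server.encode()).hexdigest()

# A cookie carrying the md5 of "10.0.0.2:80" still matches the same server
# after a reload, because the identifier travels with the server address.
sticky_cookie = md5_id("10.0.0.2:80")
matches = [s for s in upstreams_after if md5_id(s) == sticky_cookie]
print(matches)  # ['10.0.0.2:80']

# A cookie carrying index 1 pointed at "10.0.0.2:80" before the reload but
# silently points at "10.0.0.3:80" afterwards -- hence the warning above.
index_cookie = 1
print(upstreams_before[index_cookie])  # 10.0.0.2:80
print(upstreams_after[index_cookie])   # 10.0.0.3:80
```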

    Custom timeouts

    Using the configuration configmap it is possible to set the default global timeout for connections to the upstream servers. @@ -2058,6 +2098,12 @@ To use custom values in an Ingress rule define these annotation:

    +

    Enable Rewrite Log

    +

    In some scenarios it could be required to enable NGINX rewrite logs. Note that rewrite logs are sent to the error_log file at the notice level. To enable this feature use the annotation:

    +
    nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
    +
    + +

    Lua Resty WAF

    Using lua-resty-waf-* annotations we can enable and control lua-resty-waf per location. The following configuration will enable the WAF for the paths defined in the corresponding ingress:

    @@ -2068,7 +2114,7 @@ Following configuration will enable WAF for the paths defined in the correspondi

    In order to run it in debugging mode you can set nginx.ingress.kubernetes.io/lua-resty-waf-debug to "true" in addition to the above configuration. The other possible values for nginx.ingress.kubernetes.io/lua-resty-waf are inactive and simulate. In inactive mode the WAF won't do anything, whereas in simulate mode it will log a warning message if there's a matching WAF rule for a given request. This is useful to debug a rule and eliminate possible false positives before fully deploying it.

    -

    lua-resty-waf comes with predefined set of rules(https://github.com/p0pr0ck5/lua-resty-waf/tree/84b4f40362500dd0cb98b9e71b5875cb1a40f1ad/rules) that covers ModSecurity CRS. +

    lua-resty-waf comes with a predefined set of rules (https://github.com/p0pr0ck5/lua-resty-waf/tree/84b4f40362500dd0cb98b9e71b5875cb1a40f1ad/rules) that covers the ModSecurity CRS. You can use nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets to ignore a subset of those rulesets. For example:

    nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets: "41000_sqli, 42000_xss"
     
For details on how to write WAF rules, please refer to https://github.com/p0pr0ck5/lua-resty-waf.

ConfigMaps

To override NGINX configuration values you can add key-value pairs to the data section of the config-map. For example:
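A minimal sketch of such a config-map — the name, namespace, and the chosen option value are illustrative assumptions:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration        # assumed name
  namespace: ingress-nginx         # assumed namespace
data:
  error-log-level: "notice"        # any option from the table below is set the same way
```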

Important

The key and values in a ConfigMap can only be strings. This means that if we want a value with boolean semantics we need to quote the values, like "true" or "false". The same goes for numbers, like "100".

"Slice" types (defined below as []string or []int) can be provided as a comma-delimited string.
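To make the quoting and slice rules concrete, a hypothetical data section might look like this (the option names are configmap keys from the table below; the values are examples):

```yaml
data:
  enable-vts-status: "true"                  # boolean, must be quoted
  keep-alive: "75"                           # number, must be quoted
  skip-access-log-urls: "/health,/metrics"   # []string as a comma-delimited string
```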


    Configuration options

    The following table shows a configuration option's name, type, and the default value:


    add-headers

    Sets custom headers from named configmap before sending traffic to the client. See proxy-set-headers. example

    allow-backend-server-header


    Enables the return of the header Server from the backend instead of the generic nginx string. default: is disabled

    hide-headers

Sets additional header that will not be passed from the upstream server to the client response. default: empty

References:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header

    access-log-path

    Access log path. Goes to /var/log/nginx/access.log by default.

    Note: the file /var/log/nginx/access.log is a symlink to /dev/stdout

error-log-path

    Error log path. Goes to /var/log/nginx/error.log by default.

    Note: the file /var/log/nginx/error.log is a symlink to /dev/stderr

References:
http://nginx.org/en/docs/ngx_core_module.html#error_log

    enable-dynamic-tls-records

Enables dynamically sized TLS records to improve time-to-first-byte. default: is enabled

References:
https://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency

    enable-modsecurity


    Enables the modsecurity module for NGINX. default: is disabled

    enable-owasp-modsecurity-crs


    Enables the OWASP ModSecurity Core Rule Set (CRS). default: is disabled

    client-header-buffer-size

Allows configuring a custom buffer size for reading the client request header.

References:
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size

    client-header-timeout

    Defines a timeout for reading client request header, in seconds.

References:
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout

    client-body-buffer-size

    Sets buffer size for reading client request body.

References:
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size

    client-body-timeout

    Defines a timeout for reading client request body, in seconds.

References:
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout

    disable-access-log

Disables the Access Log from the entire Ingress Controller. default: "false"

References:
http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log

    disable-ipv6


    Disable listening on IPV6. default: is disabled

    disable-ipv6-dns


    Disable IPV6 for nginx DNS resolver. default: is disabled

    enable-underscores-in-headers


    Enables underscores in header names. default: is disabled

    ignore-invalid-headers

Set if header fields with invalid names should be ignored. default: is enabled

    enable-vts-status

Allows the replacement of the default status page with a third party module named nginx-module-vts. default: is disabled

    vts-status-zone-size


    Vts config on http level sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processes. default: 10m

References:
https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone

    vts-default-filter-key


    Vts config on http level enables the keys by user defined variable. The key is a key string to calculate traffic. The name is a group string to calculate traffic. The key and name can contain variables such as $host, $server_name. The name's group belongs to filterZones if specified. The key's group belongs to serverZones if not specified second argument name. default: $geoip_country_code country::*

References:
https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_filter_by_set_key

    vts-sum-key


    For metrics keyed (or when using Prometheus, labeled) by server zone, this value is used to indicate metrics for all server zones combined. default: *

References:
https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_display_sum_key

    retry-non-idempotent

    Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value "true".

    error-log-level

    Configures the logging level of errors. Log levels above are listed in the order of increasing severity.

References:
http://nginx.org/en/docs/ngx_core_module.html#error_log

    http2-max-field-size

    Limits the maximum size of an HPACK-compressed request header field.

References:
https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size

    http2-max-header-size

    Limits the maximum size of the entire request header list after HPACK decompression.

References:
https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size

    hsts

Enables or disables the header HSTS in servers running SSL. HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be communicated with using HTTPS, instead of using HTTP. It provides protection against protocol downgrade attacks and cookie theft.

References:
https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server

    hsts-include-subdomains

    Enables or disables the use of HSTS in all the subdomains of the server-name.

    hsts-max-age

Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.

    keep-alive

    Sets the time during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.

References:
http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout

    keep-alive-requests

    Sets the maximum number of requests that can be served through one keep-alive connection.

References:
http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests

    large-client-header-buffers


    Sets the maximum number and size of buffers used for reading large client request header. default: 4 8k

References:
http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers

    log-format-escape-json

Sets if the escape parameter allows JSON ("true") or default character escaping ("false") in the variables used in the nginx log format.

    log-format-upstream

    Sets the nginx log format. Example for json output:

log-format-upstream: '{ "time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr", "x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status": $status, "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri", "request_query": "$args", "request_length": $request_length, "duration": $request_time, "method": "$request_method", "http_referrer": "$http_referer", "http_user_agent": "$http_user_agent" }'

Please check the log-format for the definition of each field.

    log-format-stream

    Sets the nginx stream format.

    max-worker-connections


    server-name-hash-max-size

Sets the maximum size of the server names hash tables used in server names, map directive’s values, MIME types, names of request header strings, etc.

References:
http://nginx.org/en/docs/hash.html

    server-name-hash-bucket-size

    Sets the size of the bucket for the server names hash tables.

References:
http://nginx.org/en/docs/hash.html
http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size

    proxy-headers-hash-max-size

    Sets the maximum size of the proxy headers hash tables.

References:
http://nginx.org/en/docs/hash.html
https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size

    proxy-headers-hash-bucket-size

    Sets the size of the bucket for the proxy headers hash tables.

References:
http://nginx.org/en/docs/hash.html
https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size

    server-tokens


    Send NGINX Server header in responses and display NGINX version in error pages. default: is enabled

    ssl-ciphers

    Sets the ciphers list to enable. The ciphers are specified in the format understood by the OpenSSL library.

The default cipher list is:

    ssl-ecdh-curve

    Specifies a curve for ECDHE ciphers.

References:
http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve

    ssl-dh-param

    Sets the name of the secret that contains Diffie-Hellman key to help with "Perfect Forward Secrecy".

References:
https://wiki.openssl.org/index.php/Diffie-Hellman_parameters
https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam
http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam

    ssl-protocols

    Sets the SSL protocols to use. The default is: TLSv1.2.

    Please check the result of the configuration using https://ssllabs.com/ssltest/analyze.html or https://testssl.sh.
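A hypothetical configmap fragment combining the TLS options above (the curve and protocol values are examples, not recommendations):

```yaml
data:
  ssl-protocols: "TLSv1.2"
  ssl-ecdh-curve: "secp384r1"   # example curve
```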


    ssl-buffer-size

    Sets the size of the SSL buffer used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).

References:
https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/

    use-proxy-protocol

    Enables or disables the PROXY protocol to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).

    use-gzip

Enables or disables compression of HTTP responses using the "gzip" module.

    The default mime type list to compress is: application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component.

    use-geoip

Enables or disables the "geoip" module that creates variables with values depending on the client IP address, using the precompiled MaxMind databases. default: true

    enable-brotli

Enables or disables compression of HTTP responses using the "brotli" module. The default mime type list to compress is: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component. default: is disabled

Note: Brotli does not work in Safari < 11. For more information see https://caniuse.com/#feat=brotli

    brotli-level


    Sets the Brotli Compression Level that will be used. default: 4

    brotli-types

Sets the MIME Types that will be compressed on-the-fly by brotli. default: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component

    use-http2

    Enables or disables HTTP/2 support in secure connections.

    gzip-types

worker-cpu-affinity

By default worker processes are not bound to any specific CPUs. The value can be:
  • auto: binding worker processes automatically to available CPUs.
worker-shutdown-timeout

Sets a timeout for Nginx to wait for workers to gracefully shut down. default: "10s"

    load-balance

    Sets the algorithm to use for load balancing. The value can either be:


    The default is least_conn.

References:
http://nginx.org/en/docs/http/load_balancing.html

    variables-hash-bucket-size

    Sets the bucket size for the variables hash table.

References:
http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size

    variables-hash-max-size

    Sets the maximum size of the variables hash table.

References:
http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size

    upstream-keepalive-connections

Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. default: 32

References:
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive

    limit-conn-zone-variable

    Sets parameters for a shared memory zone that will keep states for various keys of limit_conn_zone. The default of "$binary_remote_addr" variable’s size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.

    proxy-stream-timeout

    Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.

References:
http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout

    proxy-stream-responses

    Sets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used.

References:
http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses

    bind-address-ipv4

    Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.

    bind-address-ipv6

    Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.

    forwarded-for-header


    Sets the header field for identifying the originating IP address of a client. default: X-Forwarded-For

    compute-full-forwarded-for

    Append the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.
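As an illustrative combination of the two options above (the header name and values are examples):

```yaml
data:
  forwarded-for-header: "X-Forwarded-For"
  compute-full-forwarded-for: "true"
```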

    proxy-add-original-uri-header

    Adds an X-Original-Uri header with the original request URI to the backend request

    enable-opentracing


    Enables the nginx Opentracing extension. default: is disabled

References:
https://github.com/opentracing-contrib/nginx-opentracing

    zipkin-collector-host

    Specifies the host to use when uploading traces. It must be a valid URL.

    zipkin-collector-port


    Specifies the port to use when uploading traces. default: 9411

    zipkin-service-name


    Specifies the service name to use for any traces created. default: nginx

    jaeger-collector-host

    Specifies the host to use when uploading traces. It must be a valid URL.

    jaeger-collector-port


    Specifies the port to use when uploading traces. default: 6831

    jaeger-service-name


    Specifies the service name to use for any traces created. default: nginx

    jaeger-sampler-type


    Specifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. default: const

    jaeger-sampler-param

Specifies the argument to be passed to the sampler constructor. Must be a number. For const this should be 0 to never sample and 1 to always sample. default: 1
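Putting the tracing options together, a hypothetical configmap for a Zipkin setup might look like this (the collector host is an assumed in-cluster service name):

```yaml
data:
  enable-opentracing: "true"
  zipkin-collector-host: "zipkin.default.svc.cluster.local"  # assumed service
  zipkin-collector-port: "9411"
  zipkin-service-name: "nginx-ingress"
```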

    http-snippet

Adds custom configuration to the http section of the nginx configuration. default: ""

    server-snippet

Adds custom configuration to all the servers in the nginx configuration. default: ""

    location-snippet

Adds custom configuration to all the locations in the nginx configuration. default: ""
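For example, a hypothetical http-snippet adding an extra nginx map block (the snippet content is illustrative):

```yaml
data:
  http-snippet: |
    map $http_upgrade $connection_upgrade {
      default upgrade;
      ''      close;
    }
```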

    custom-http-errors

    Enables which HTTP codes should be passed for processing with the error_page directive

Setting at least one code also enables proxy_intercept_errors, which is required to process error_page.
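A minimal sketch of enabling custom error codes in the configmap (the codes are chosen arbitrarily):

```yaml
data:
  custom-http-errors: "404,503"
```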

proxy-next-upstream-tries

    Limit the number of possible tries a request should be passed to the next server.

    proxy-redirect-from


    Sets the original text that should be changed in the "Location" and "Refresh" header fields of a proxied server response. default: off

References:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect

    proxy-request-buffering

    Enables or disables buffering of a client request body.

    ssl-redirect

Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule). default: "true"

    whitelist-source-range

    Sets the default whitelisted IPs for each server block. This can be overwritten by an annotation on an Ingress rule. See ngx_http_access_module.

    skip-access-log-urls

Sets a list of URLs that should not appear in the NGINX access log. This is useful with urls like /health or health-check that make reading the logs "complex". default: is empty

    limit-rate

    Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per a request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.

References:
http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate

    limit-rate-after

    Sets the initial amount after which the further transmission of a response to a client will be rate limited.

References:
http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after

    http-redirect-code

Sets the HTTP status code to be used in redirects. Supported codes are 301, 302, 307 and 308. default: 308

Why is the default code 308?

RFC 7238 was created to define the 308 (Permanent Redirect) status code, which is similar to 301 (Moved Permanently) but keeps the payload in the redirect. This is important if we send a redirect in methods like POST.

    proxy-buffering

    Enables or disables buffering of responses from the proxied server.

    limit-req-status-code


    Sets the status code to return in response to rejected requests. default: 503

    no-tls-redirect-locations

A comma-separated list of locations on which http requests will never get redirected to their https counterpart. default: "/.well-known/acme-challenge"

    no-auth-locations

A comma-separated list of locations that should not get authenticated. default: "/.well-known/acme-challenge"


    ModSecurity Web Application Firewall


    ModSecurity is an open source, cross platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - https://www.modsecurity.org

    The ModSecurity-nginx connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).

    The default ModSecurity configuration file is located in /etc/nginx/modsecurity/modsecurity.conf. This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration. To enable the ModSecurity feature we need to specify enable-modsecurity: "true" in the configuration configmap.

Note: the default configuration uses detection only, because that minimises the chances of post-installation disruption. The file /var/log/modsec_audit.log contains the ModSecurity log.

The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts. The directory /etc/nginx/owasp-modsecurity-crs contains the https://github.com/SpiderLabs/owasp-modsecurity-crs repository. Using enable-owasp-modsecurity-crs: "true" we enable the use of the rules.
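Enabling both features together in the configuration configmap data section would look like:

```yaml
data:
  enable-modsecurity: "true"
  enable-owasp-modsecurity-crs: "true"
```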


    In the zipkin interface we can see the details:


    zipkin screenshot