From 8300bdfbb4fa5c3cca6195937a089a122ff09ad2 Mon Sep 17 00:00:00 2001
From: Travis Bot
Date: Sat, 28 Jul 2018 15:21:37 +0000
Subject: [PATCH] Deploy GitHub Pages

---
 deploy/index.html                              |  3 +-
 .../oauth-external-auth/README/index.html      |  2 +-
 .../dashboard-ingress.yaml                     |  2 +-
 search/search_index.json                       | 18 ++--
 sitemap.xml                                    | 86 +++++++++----------
 user-guide/cli-arguments/index.html            |  2 +-
 .../nginx-configuration/configmap/index.html   | 10 +--
 7 files changed, 61 insertions(+), 62 deletions(-)

diff --git a/deploy/index.html b/deploy/index.html
index 6da87a211..ec06ceae9 100644
--- a/deploy/index.html
+++ b/deploy/index.html
@@ -1340,8 +1340,7 @@

Provider Specific Steps

There are cloud provider specific yaml files.

Docker for Mac

-Kubernetes is available for Docker for Mac's Edge channel. Switch to the Edge
-channel and enable Kubernetes.

+Kubernetes is available in Docker for Mac (from version 18.06.0-ce)

Create a service

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
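
A quick way to confirm the Service was created is to list what is now running in the ingress-nginx namespace. This is a minimal check, not part of the page above; it assumes the namespace and the app=ingress-nginx label used by the mandatory deployment, and the exact output depends on the installed version:

# Service created by the cloud-generic manifest above
kubectl get svc -n ingress-nginx

# Ingress controller pod(s) should report Running
kubectl get pods -n ingress-nginx -l app=ingress-nginx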
 
diff --git a/examples/auth/oauth-external-auth/README/index.html b/examples/auth/oauth-external-auth/README/index.html
index 3340ddffc..25583fc9c 100644
--- a/examples/auth/oauth-external-auth/README/index.html
+++ b/examples/auth/oauth-external-auth/README/index.html
@@ -1156,7 +1156,7 @@ same endpoint.

name: application annotations: nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth" - nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$request_uri" + nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri" ... diff --git a/examples/auth/oauth-external-auth/dashboard-ingress.yaml b/examples/auth/oauth-external-auth/dashboard-ingress.yaml index 743f6b49e..17a222939 100644 --- a/examples/auth/oauth-external-auth/dashboard-ingress.yaml +++ b/examples/auth/oauth-external-auth/dashboard-ingress.yaml @@ -3,7 +3,7 @@ kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth" - nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$request_uri" + nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri" name: external-auth-oauth2 namespace: kube-system spec: diff --git a/search/search_index.json b/search/search_index.json index 1981d6c6c..788bfd6ef 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -17,7 +17,7 @@ }, { "location": "/deploy/", - "text": "Installation Guide\n\u00b6\n\n\nContents\n\u00b6\n\n\n\n\nGeneric Deployment\n\n\nMandatory command\n\n\nProvider Specific Steps\n\n\nDocker for Mac\n\n\nminikube\n\n\nAWS\n\n\nGCE - GKE\n\n\nAzure\n\n\nBaremetal\n\n\n\n\n\n\nVerify installation\n\n\nDetect installed version\n\n\nUsing Helm\n\n\n\n\nGeneric Deployment\n\u00b6\n\n\nThe following resources are required for a generic deployment.\n\n\nMandatory command\n\u00b6\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml\n\n\n\n\n\n\nProvider Specific Steps\n\u00b6\n\n\nThere are cloud provider specific yaml files.\n\n\nDocker for Mac\n\u00b6\n\n\nKubernetes is available for Docker for Mac's Edge channel. 
Switch to the \nEdge\nchannel\n and \nenable Kubernetes\n.\n\n\nCreate a service\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml\n\n\n\n\n\n\nminikube\n\u00b6\n\n\nFor standard usage:\n\n\nminikube addons enable ingress\n\n\n\n\n\n\nFor development:\n\n\n\n\nDisable the ingress addon:\n\n\n\n\n$\n minikube addons disable ingress\n\n\n\n\n\n\n\nExecute \nmake dev-env\n\n\nConfirm the \nnginx-ingress-controller\n deployment exists:\n\n\n\n\n$\n kubectl get pods -n ingress-nginx \n\nNAME READY STATUS RESTARTS AGE\n\n\ndefault-http-backend-66b447d9cf-rrlf9 1/1 Running 0 12s\n\n\nnginx-ingress-controller-fdcdcd6dd-vvpgs 1/1 Running 0 11s\n\n\n\n\n\n\nAWS\n\u00b6\n\n\nIn AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of \nType=LoadBalancer\n.\nSince Kubernetes v1.9.0 it is possible to use a classic load balancer (ELB) or network load balancer (NLB)\nPlease check the \nelastic load balancing AWS details page\n\n\nElastic Load Balancer - ELB\n\u00b6\n\n\nThis setup requires to choose in which layer (L4 or L7) we want to configure the ELB:\n\n\n\n\nLayer 4\n: use TCP as the listener protocol for ports 80 and 443.\n\n\nLayer 7\n: use HTTP as the listener protocol for port 80 and terminate TLS in the ELB\n\n\n\n\nFor L4:\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l4.yaml\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l4.yaml\n\n\n\n\n\n\nFor L7:\n\n\nChange line of the file \nprovider/aws/service-l7.yaml\n replacing the dummy id with a valid one \n\"arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX\"\n\nThen execute:\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l7.yaml\n\n\n\n\n\n\nThis example creates an ELB with just two listeners, one in port 80 and another in port 443\n\n\n\n\nNetwork Load Balancer (NLB)\n\u00b6\n\n\nThis type of load balancer is supported since v1.10.0 as an ALPHA feature.\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-nlb.yaml\n\n\n\n\n\n\nGCE - GKE\n\u00b6\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml\n\n\n\n\n\n\nImportant Note:\n proxy protocol is not supported in GCE/GKE\n\n\nAzure\n\u00b6\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml\n\n\n\n\n\n\nBaremetal\n\u00b6\n\n\nUsing \nNodePort\n:\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml\n\n\n\n\n\n\nVerify installation\n\u00b6\n\n\nTo check if the ingress controller pods have started, run the following command:\n\n\nkubectl get pods --all-namespaces -l app=ingress-nginx --watch\n\n\n\n\n\n\nOnce the operator pods are running, you can cancel the above command by typing \nCtrl+C\n.\nNow, you are ready to create your first ingress.\n\n\nDetect installed version\n\u00b6\n\n\nTo detect which version of the ingress controller is running, exec into the pod and run \nnginx-ingress-controller version\n 
command.\n\n\nPOD_NAMESPACE=ingress-nginx\n\n\nPOD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app=ingress-nginx -o jsonpath={.items[0].metadata.name})\n\n\nkubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version\n\n\n\n\n\n\nUsing Helm\n\u00b6\n\n\nNGINX Ingress controller can be installed via \nHelm\n using the chart \nstable/nginx-ingress\n from the official charts repository. \nTo install the chart with the release name \nmy-nginx\n:\n\n\nhelm install stable/nginx-ingress --name my-nginx\n\n\n\n\n\n\nIf the kubernetes cluster has RBAC enabled, then run:\n\n\nhelm install stable/nginx-ingress --name my-nginx --set rbac.create=true\n\n\n\n\n\n\nDetect installed version:\n\n\nPOD_NAME=$(kubectl get pods -l app=nginx-ingress -o jsonpath={.items[0].metadata.name})\n\n\nkubectl exec -it $POD_NAME -- /nginx-ingress-controller --version", + "text": "Installation Guide\n\u00b6\n\n\nContents\n\u00b6\n\n\n\n\nGeneric Deployment\n\n\nMandatory command\n\n\nProvider Specific Steps\n\n\nDocker for Mac\n\n\nminikube\n\n\nAWS\n\n\nGCE - GKE\n\n\nAzure\n\n\nBaremetal\n\n\n\n\n\n\nVerify installation\n\n\nDetect installed version\n\n\nUsing Helm\n\n\n\n\nGeneric Deployment\n\u00b6\n\n\nThe following resources are required for a generic deployment.\n\n\nMandatory command\n\u00b6\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml\n\n\n\n\n\n\nProvider Specific Steps\n\u00b6\n\n\nThere are cloud provider specific yaml files.\n\n\nDocker for Mac\n\u00b6\n\n\nKubernetes is available in Docker for Mac (from \nversion 18.06.0-ce\n)\n\n\nCreate a service\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml\n\n\n\n\n\n\nminikube\n\u00b6\n\n\nFor standard usage:\n\n\nminikube addons enable ingress\n\n\n\n\n\n\nFor development:\n\n\n\n\nDisable the ingress addon:\n\n\n\n\n$\n minikube addons disable ingress\n\n\n\n\n\n\n\nExecute \nmake dev-env\n\n\nConfirm the \nnginx-ingress-controller\n deployment exists:\n\n\n\n\n$\n kubectl get pods -n ingress-nginx \n\nNAME READY STATUS RESTARTS AGE\n\n\ndefault-http-backend-66b447d9cf-rrlf9 1/1 Running 0 12s\n\n\nnginx-ingress-controller-fdcdcd6dd-vvpgs 1/1 Running 0 11s\n\n\n\n\n\n\nAWS\n\u00b6\n\n\nIn AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of \nType=LoadBalancer\n.\nSince Kubernetes v1.9.0 it is possible to use a classic load balancer (ELB) or network load balancer (NLB)\nPlease check the \nelastic load balancing AWS details page\n\n\nElastic Load Balancer - ELB\n\u00b6\n\n\nThis setup requires to choose in which layer (L4 or L7) we want to configure the ELB:\n\n\n\n\nLayer 4\n: use TCP as the listener protocol for ports 80 and 443.\n\n\nLayer 7\n: use HTTP as the listener protocol for port 80 and terminate TLS in the ELB\n\n\n\n\nFor L4:\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l4.yaml\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l4.yaml\n\n\n\n\n\n\nFor L7:\n\n\nChange line of the file \nprovider/aws/service-l7.yaml\n replacing the dummy id with a valid one \n\"arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX\"\n\nThen execute:\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml\n\n\nkubectl apply -f 
https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l7.yaml\n\n\n\n\n\n\nThis example creates an ELB with just two listeners, one in port 80 and another in port 443\n\n\n\n\nNetwork Load Balancer (NLB)\n\u00b6\n\n\nThis type of load balancer is supported since v1.10.0 as an ALPHA feature.\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-nlb.yaml\n\n\n\n\n\n\nGCE - GKE\n\u00b6\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml\n\n\n\n\n\n\nImportant Note:\n proxy protocol is not supported in GCE/GKE\n\n\nAzure\n\u00b6\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml\n\n\n\n\n\n\nBaremetal\n\u00b6\n\n\nUsing \nNodePort\n:\n\n\nkubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml\n\n\n\n\n\n\nVerify installation\n\u00b6\n\n\nTo check if the ingress controller pods have started, run the following command:\n\n\nkubectl get pods --all-namespaces -l app=ingress-nginx --watch\n\n\n\n\n\n\nOnce the operator pods are running, you can cancel the above command by typing \nCtrl+C\n.\nNow, you are ready to create your first ingress.\n\n\nDetect installed version\n\u00b6\n\n\nTo detect which version of the ingress controller is running, exec into the pod and run \nnginx-ingress-controller version\n command.\n\n\nPOD_NAMESPACE=ingress-nginx\n\n\nPOD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app=ingress-nginx -o jsonpath={.items[0].metadata.name})\n\n\nkubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version\n\n\n\n\n\n\nUsing Helm\n\u00b6\n\n\nNGINX Ingress controller can be installed via \nHelm\n using the chart \nstable/nginx-ingress\n from the official charts repository. \nTo install the chart with the release name \nmy-nginx\n:\n\n\nhelm install stable/nginx-ingress --name my-nginx\n\n\n\n\n\n\nIf the kubernetes cluster has RBAC enabled, then run:\n\n\nhelm install stable/nginx-ingress --name my-nginx --set rbac.create=true\n\n\n\n\n\n\nDetect installed version:\n\n\nPOD_NAME=$(kubectl get pods -l app=nginx-ingress -o jsonpath={.items[0].metadata.name})\n\n\nkubectl exec -it $POD_NAME -- /nginx-ingress-controller --version", "title": "Installation Guide" }, { @@ -47,7 +47,7 @@ }, { "location": "/deploy/#docker-for-mac", - "text": "Kubernetes is available for Docker for Mac's Edge channel. Switch to the Edge\nchannel and enable Kubernetes . Create a service kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml", + "text": "Kubernetes is available in Docker for Mac (from version 18.06.0-ce ) Create a service kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml", "title": "Docker for Mac" }, { @@ -372,7 +372,7 @@ }, { "location": "/user-guide/nginx-configuration/configmap/", - "text": "ConfigMaps\n\u00b6\n\n\nConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.\n\n\nThe ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system\ncomponents for the nginx-controller. 
Before you can begin using a config-map it must be \ndeployed\n.\n\n\nIn order to overwrite nginx-controller configuration values as seen in \nconfig.go\n,\nyou can add key-value pairs to the data section of the config-map. For Example:\n\n\ndata\n:\n\n \nmap-hash-bucket-size\n:\n \n\"128\"\n\n \nssl-protocols\n:\n \nSSLv2\n\n\n\n\n\n\n\n\nImportant\n\n\nThe key and values in a ConfigMap can only be strings.\nThis means that we want a value with boolean values we need to quote the values, like \"true\" or \"false\".\nSame for numbers, like \"100\".\n\n\n\"Slice\" types (defined below as \n[]string\n or \n[]int\n can be provided as a comma-delimited string.\n\n\n\n\nConfiguration options\n\u00b6\n\n\nThe following table shows a configuration option's name, type, and the default value:\n\n\n\n\n\n\n\n\nname\n\n\ntype\n\n\ndefault\n\n\n\n\n\n\n\n\n\n\nadd-headers\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nallow-backend-server-header\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nhide-headers\n\n\nstring array\n\n\nempty\n\n\n\n\n\n\naccess-log-path\n\n\nstring\n\n\n\"/var/log/nginx/access.log\"\n\n\n\n\n\n\nerror-log-path\n\n\nstring\n\n\n\"/var/log/nginx/error.log\"\n\n\n\n\n\n\nenable-dynamic-tls-records\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nenable-modsecurity\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nenable-owasp-modsecurity-crs\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nclient-header-buffer-size\n\n\nstring\n\n\n\"1k\"\n\n\n\n\n\n\nclient-header-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\nclient-body-buffer-size\n\n\nstring\n\n\n\"8k\"\n\n\n\n\n\n\nclient-body-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\ndisable-access-log\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\ndisable-ipv6\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\ndisable-ipv6-dns\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\nenable-underscores-in-headers\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\nignore-invalid-headers\n\n\nbool\n\n\ntrue\n\n\n\n\n\n\nretry-non-idempotent\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nerror-log-level\n\n\nstring\n\n\n\"notice\"\n\n\n\n\n\n\nhttp2-max-field-size\n\n\nstring\n\n\n\"4k\"\n\n\n\n\n\n\nhttp2-max-header-size\n\n\nstring\n\n\n\"16k\"\n\n\n\n\n\n\nhsts\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nhsts-include-subdomains\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nhsts-max-age\n\n\nstring\n\n\n\"15724800\"\n\n\n\n\n\n\nhsts-preload\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nkeep-alive\n\n\nint\n\n\n75\n\n\n\n\n\n\nkeep-alive-requests\n\n\nint\n\n\n100\n\n\n\n\n\n\nlarge-client-header-buffers\n\n\nstring\n\n\n\"4 8k\"\n\n\n\n\n\n\nlog-format-escape-json\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nlog-format-upstream\n\n\nstring\n\n\n%v\n \n-\n \n[\n$the_real_ip\n]\n \n-\n \n$remote_user\n \n[\n$time_local\n]\n \n\"$request\"\n \n$status\n \n$body_bytes_sent\n \n\"$http_referer\"\n \n\"$http_user_agent\"\n \n$request_length\n \n$request_time\n \n[\n$proxy_upstream_name\n]\n \n$upstream_addr\n \n$upstream_response_length\n \n$upstream_response_time\n \n$upstream_status\n\n\n\n\n\n\nlog-format-stream\n\n\nstring\n\n\n[$time_local] $protocol $status $bytes_sent $bytes_received 
$session_time\n\n\n\n\n\n\nmax-worker-connections\n\n\nint\n\n\n16384\n\n\n\n\n\n\nmap-hash-bucket-size\n\n\nint\n\n\n64\n\n\n\n\n\n\nnginx-status-ipv4-whitelist\n\n\n[]string\n\n\n\"127.0.0.1\"\n\n\n\n\n\n\nnginx-status-ipv6-whitelist\n\n\n[]string\n\n\n\"::1\"\n\n\n\n\n\n\nproxy-real-ip-cidr\n\n\n[]string\n\n\n\"0.0.0.0/0\"\n\n\n\n\n\n\nproxy-set-headers\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nserver-name-hash-max-size\n\n\nint\n\n\n1024\n\n\n\n\n\n\nserver-name-hash-bucket-size\n\n\nint\n\n\n\n\n\n\n\n\n\nproxy-headers-hash-max-size\n\n\nint\n\n\n512\n\n\n\n\n\n\nproxy-headers-hash-bucket-size\n\n\nint\n\n\n64\n\n\n\n\n\n\nserver-tokens\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nssl-ciphers\n\n\nstring\n\n\n\"ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\"\n\n\n\n\n\n\nssl-ecdh-curve\n\n\nstring\n\n\n\"auto\"\n\n\n\n\n\n\nssl-dh-param\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nssl-protocols\n\n\nstring\n\n\n\"TLSv1.2\"\n\n\n\n\n\n\nssl-session-cache\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nssl-session-cache-size\n\n\nstring\n\n\n\"10m\"\n\n\n\n\n\n\nssl-session-tickets\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nssl-session-ticket-key\n\n\nstring\n\n\n\n\n\n\n\n\n\nssl-session-timeout\n\n\nstring\n\n\n\"10m\"\n\n\n\n\n\n\nssl-buffer-size\n\n\nstring\n\n\n\"4k\"\n\n\n\n\n\n\nuse-proxy-protocol\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nproxy-protocol-header-timeout\n\n\nstring\n\n\n\"5s\"\n\n\n\n\n\n\nuse-gzip\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nuse-geoip\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nenable-brotli\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nbrotli-level\n\n\nint\n\n\n4\n\n\n\n\n\n\nbrotli-types\n\n\nstring\n\n\n\"application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\"\n\n\n\n\n\n\nuse-http2\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\ngzip-level\n\n\nint\n\n\n5\n\n\n\n\n\n\ngzip-types\n\n\nstring\n\n\n\"application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain 
text/x-component\"\n\n\n\n\n\n\nworker-processes\n\n\nstring\n\n\n\n\n\n\n\n\n\nworker-cpu-affinity\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nworker-shutdown-timeout\n\n\nstring\n\n\n\"10s\"\n\n\n\n\n\n\nload-balance\n\n\nstring\n\n\n\"least_conn\"\n\n\n\n\n\n\nvariables-hash-bucket-size\n\n\nint\n\n\n128\n\n\n\n\n\n\nvariables-hash-max-size\n\n\nint\n\n\n2048\n\n\n\n\n\n\nupstream-keepalive-connections\n\n\nint\n\n\n32\n\n\n\n\n\n\nlimit-conn-zone-variable\n\n\nstring\n\n\n\"$binary_remote_addr\"\n\n\n\n\n\n\nproxy-stream-timeout\n\n\nstring\n\n\n\"600s\"\n\n\n\n\n\n\nproxy-stream-responses\n\n\nint\n\n\n1\n\n\n\n\n\n\nbind-address\n\n\n[]string\n\n\n\"\"\n\n\n\n\n\n\nforwarded-for-header\n\n\nstring\n\n\n\"X-Forwarded-For\"\n\n\n\n\n\n\ncompute-full-forwarded-for\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nproxy-add-original-uri-header\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\ngenerate-request-id\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nenable-opentracing\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nzipkin-collector-host\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nzipkin-collector-port\n\n\nint\n\n\n9411\n\n\n\n\n\n\nzipkin-service-name\n\n\nstring\n\n\n\"nginx\"\n\n\n\n\n\n\nzipkin-sample-rate\n\n\nfloat\n\n\n1.0\n\n\n\n\n\n\njaeger-collector-host\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\njaeger-collector-port\n\n\nint\n\n\n6831\n\n\n\n\n\n\njaeger-service-name\n\n\nstring\n\n\n\"nginx\"\n\n\n\n\n\n\njaeger-sampler-type\n\n\nstring\n\n\n\"const\"\n\n\n\n\n\n\njaeger-sampler-param\n\n\nstring\n\n\n\"1\"\n\n\n\n\n\n\nhttp-snippet\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nserver-snippet\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nlocation-snippet\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\ncustom-http-errors\n\n\n[]int]\n\n\n[]int{}\n\n\n\n\n\n\nproxy-body-size\n\n\nstring\n\n\n\"1m\"\n\n\n\n\n\n\nproxy-connect-timeout\n\n\nint\n\n\n5\n\n\n\n\n\n\nproxy-read-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\nproxy-send-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\nproxy-buffer-size\n\n\nstring\n\n\n\"4k\"\n\n\n\n\n\n\nproxy-cookie-path\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nproxy-cookie-domain\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nproxy-next-upstream\n\n\nstring\n\n\n\"error timeout\"\n\n\n\n\n\n\nproxy-next-upstream-tries\n\n\nint\n\n\n3\n\n\n\n\n\n\nproxy-redirect-from\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nproxy-request-buffering\n\n\nstring\n\n\n\"on\"\n\n\n\n\n\n\nssl-redirect\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nwhitelist-source-range\n\n\n[]string\n\n\n[]string{}\n\n\n\n\n\n\nskip-access-log-urls\n\n\n[]string\n\n\n[]string{}\n\n\n\n\n\n\nlimit-rate\n\n\nint\n\n\n0\n\n\n\n\n\n\nlimit-rate-after\n\n\nint\n\n\n0\n\n\n\n\n\n\nhttp-redirect-code\n\n\nint\n\n\n308\n\n\n\n\n\n\nproxy-buffering\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nlimit-req-status-code\n\n\nint\n\n\n503\n\n\n\n\n\n\nno-tls-redirect-locations\n\n\nstring\n\n\n\"/.well-known/acme-challenge\"\n\n\n\n\n\n\nno-auth-locations\n\n\nstring\n\n\n\"/.well-known/acme-challenge\"\n\n\n\n\n\n\n\n\nadd-headers\n\u00b6\n\n\nSets custom headers from named configmap before sending traffic to the client. See \nproxy-set-headers\n. \nexample\n\n\nallow-backend-server-header\n\u00b6\n\n\nEnables the return of the header Server from the backend instead of the generic nginx string. \ndefault:\n is disabled\n\n\nhide-headers\n\u00b6\n\n\nSets additional header that will not be passed from the upstream server to the client response.\n\ndefault:\n empty\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header\n\n\naccess-log-path\n\u00b6\n\n\nAccess log path. 
Goes to \n/var/log/nginx/access.log\n by default.\n\n\nNote:\n the file \n/var/log/nginx/access.log\n is a symlink to \n/dev/stdout\n\n\nerror-log-path\n\u00b6\n\n\nError log path. Goes to \n/var/log/nginx/error.log\n by default.\n\n\nNote:\n the file \n/var/log/nginx/error.log\n is a symlink to \n/dev/stderr\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/ngx_core_module.html#error_log\n\n\nenable-dynamic-tls-records\n\u00b6\n\n\nEnables dynamically sized TLS records to improve time-to-first-byte. \ndefault:\n is enabled\n\n\nReferences:\n\n\nhttps://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency\n\n\nenable-modsecurity\n\u00b6\n\n\nEnables the modsecurity module for NGINX. \ndefault:\n is disabled\n\n\nenable-owasp-modsecurity-crs\n\u00b6\n\n\nEnables the OWASP ModSecurity Core Rule Set (CRS). \ndefault:\n is disabled\n\n\nclient-header-buffer-size\n\u00b6\n\n\nAllows to configure a custom buffer size for reading client request header.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size\n\n\nclient-header-timeout\n\u00b6\n\n\nDefines a timeout for reading client request header, in seconds.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout\n\n\nclient-body-buffer-size\n\u00b6\n\n\nSets buffer size for reading client request body.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size\n\n\nclient-body-timeout\n\u00b6\n\n\nDefines a timeout for reading client request body, in seconds.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout\n\n\ndisable-access-log\n\u00b6\n\n\nDisables the Access Log from the entire Ingress Controller. \ndefault:\n '\"false\"'\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_log_module.html#access_log\n\n\ndisable-ipv6\n\u00b6\n\n\nDisable listening on IPV6. \ndefault:\n is disabled\n\n\ndisable-ipv6-dns\n\u00b6\n\n\nDisable IPV6 for nginx DNS resolver. \ndefault:\n is disabled\n\n\nenable-underscores-in-headers\n\u00b6\n\n\nEnables underscores in header names. \ndefault:\n is disabled\n\n\nignore-invalid-headers\n\u00b6\n\n\nSet if header fields with invalid names should be ignored.\n\ndefault:\n is enabled\n\n\nretry-non-idempotent\n\u00b6\n\n\nSince 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value \"true\".\n\n\nerror-log-level\n\u00b6\n\n\nConfigures the logging level of errors. Log levels above are listed in the order of increasing severity.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/ngx_core_module.html#error_log\n\n\nhttp2-max-field-size\n\u00b6\n\n\nLimits the maximum size of an HPACK-compressed request header field.\n\n\nReferences:\n\n\nhttps://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size\n\n\nhttp2-max-header-size\n\u00b6\n\n\nLimits the maximum size of the entire request header list after HPACK decompression.\n\n\nReferences:\n\n\nhttps://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size\n\n\nhsts\n\u00b6\n\n\nEnables or disables the header HSTS in servers running SSL.\nHTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tell browsers that it should only be communicated with using HTTPS, instead of using HTTP. 
It provides protection against protocol downgrade attacks and cookie theft.\n\n\nReferences:\n\n\n\n\nhttps://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security\n\n\nhttps://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server\n\n\n\n\nhsts-include-subdomains\n\u00b6\n\n\nEnables or disables the use of HSTS in all the subdomains of the server-name.\n\n\nhsts-max-age\n\u00b6\n\n\nSets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.\n\n\nhsts-preload\n\u00b6\n\n\nEnables or disables the preload attribute in the HSTS feature (when it is enabled) dd\n\n\nkeep-alive\n\u00b6\n\n\nSets the time during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout\n\n\nkeep-alive-requests\n\u00b6\n\n\nSets the maximum number of requests that can be served through one keep-alive connection.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests\n\n\nlarge-client-header-buffers\n\u00b6\n\n\nSets the maximum number and size of buffers used for reading large client request header. \ndefault:\n 4 8k\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers\n\n\nlog-format-escape-json\n\u00b6\n\n\nSets if the escape parameter allows JSON (\"true\") or default characters escaping in variables (\"false\") Sets the nginx \nlog format\n.\n\n\nlog-format-upstream\n\u00b6\n\n\nSets the nginx \nlog format\n.\nExample for json output:\n\n\nconsolelog-format-upstream: '{ \"time\": \"$time_iso8601\", \"remote_addr\": \"$proxy_protocol_addr\",\"x-forward-for\": \"$proxy_add_x_forwarded_for\", \"request_id\": \"$req_id\", \"remote_user\":\"$remote_user\", \"bytes_sent\": $bytes_sent, \"request_time\": $request_time, \"status\":$status, \"vhost\": \"$host\", \"request_proto\": \"$server_protocol\", \"path\": \"$uri\",\"request_query\": \"$args\", \"request_length\": $request_length, \"duration\": $request_time,\"method\": \"$request_method\", \"http_referrer\": \"$http_referer\", \"http_user_agent\":\"$http_user_agent\" }'\n\n\nPlease check the \nlog-format\n for definition of each field.\n\n\nlog-format-stream\n\u00b6\n\n\nSets the nginx \nstream format\n.\n\n\nmax-worker-connections\n\u00b6\n\n\nSets the maximum number of simultaneous connections that can be opened by each \nworker process\n\n\nmap-hash-bucket-size\n\u00b6\n\n\nSets the bucket size for the \nmap variables hash tables\n. The details of setting up hash tables are provided in a separate \ndocument\n.\n\n\nproxy-real-ip-cidr\n\u00b6\n\n\nIf use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default the IP/network address of your external load balancer.\n\n\nproxy-set-headers\n\u00b6\n\n\nSets custom headers from named configmap before sending traffic to backends. The value format is namespace/name. 
See \nexample\n\n\nserver-name-hash-max-size\n\u00b6\n\n\nSets the maximum size of the \nserver names hash tables\n used in server names,map directive\u2019s values, MIME types, names of request header strings, etc.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/hash.html\n\n\nserver-name-hash-bucket-size\n\u00b6\n\n\nSets the size of the bucket for the server names hash tables.\n\n\nReferences:\n\n\n\n\nhttp://nginx.org/en/docs/hash.html\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size\n\n\n\n\nproxy-headers-hash-max-size\n\u00b6\n\n\nSets the maximum size of the proxy headers hash tables.\n\n\nReferences:\n\n\n\n\nhttp://nginx.org/en/docs/hash.html\n\n\nhttps://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size\n\n\n\n\nproxy-headers-hash-bucket-size\n\u00b6\n\n\nSets the size of the bucket for the proxy headers hash tables.\n\n\nReferences:\n\n\n\n\nhttp://nginx.org/en/docs/hash.html\n\n\nhttps://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size\n\n\n\n\nserver-tokens\n\u00b6\n\n\nSend NGINX Server header in responses and display NGINX version in error pages. \ndefault:\n is enabled\n\n\nssl-ciphers\n\u00b6\n\n\nSets the \nciphers\n list to enable. The ciphers are specified in the format understood by the OpenSSL library.\n\n\nThe default cipher list is:\n \nECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\n.\n\n\nThe ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect \nforward secrecy\n.\n\n\nPlease check the \nMozilla SSL Configuration Generator\n.\n\n\nssl-ecdh-curve\n\u00b6\n\n\nSpecifies a curve for ECDHE ciphers.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve\n\n\nssl-dh-param\n\u00b6\n\n\nSets the name of the secret that contains Diffie-Hellman key to help with \"Perfect Forward Secrecy\".\n\n\nReferences:\n\n\n\n\nhttps://wiki.openssl.org/index.php/Diffie-Hellman_parameters\n\n\nhttps://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam\n\n\nhttp://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam\n\n\n\n\nssl-protocols\n\u00b6\n\n\nSets the \nSSL protocols\n to use. The default is: \nTLSv1.2\n.\n\n\nPlease check the result of the configuration using \nhttps://ssllabs.com/ssltest/analyze.html\n or \nhttps://testssl.sh\n.\n\n\nssl-session-cache\n\u00b6\n\n\nEnables or disables the use of shared \nSSL cache\n among worker processes.\n\n\nssl-session-cache-size\n\u00b6\n\n\nSets the size of the \nSSL shared session cache\n between all worker processes.\n\n\nssl-session-tickets\n\u00b6\n\n\nEnables or disables session resumption through \nTLS session tickets\n.\n\n\nssl-session-ticket-key\n\u00b6\n\n\nSets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string.\nTo create a ticket: \nopenssl rand 80 | openssl enc -A -base64\n\n\nTLS session ticket-key\n, by default, a randomly generated key is used. 
\n\n\nssl-session-timeout\n\u00b6\n\n\nSets the time during which a client may \nreuse the session\n parameters stored in a cache.\n\n\nssl-buffer-size\n\u00b6\n\n\nSets the size of the \nSSL buffer\n used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).\n\n\nReferences:\n\n\nhttps://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/\n\n\nuse-proxy-protocol\n\u00b6\n\n\nEnables or disables the \nPROXY protocol\n to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).\n\n\nproxy-protocol-header-timeout\n\u00b6\n\n\nSets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinetly on a dropped connection.\n\ndefault:\n 5s\n\n\nuse-gzip\n\u00b6\n\n\nEnables or disables compression of HTTP responses using the \n\"gzip\" module\n.\nThe default mime type list to compress is: \napplication/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\n.\n\n\nuse-geoip\n\u00b6\n\n\nEnables or disables \n\"geoip\" module\n that creates variables with values depending on the client IP address, using the precompiled MaxMind databases.\n\ndefault:\n true\n\n\nenable-brotli\n\u00b6\n\n\nEnables or disables compression of HTTP responses using the \n\"brotli\" module\n.\nThe default mime type list to compress is: \napplication/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\n. \ndefault:\n is disabled\n\n\n\n\nNote:\n Brotli does not works in Safari < 11. For more information see \nhttps://caniuse.com/#feat=brotli\n\n\n\n\nbrotli-level\n\u00b6\n\n\nSets the Brotli Compression Level that will be used. \ndefault:\n 4\n\n\nbrotli-types\n\u00b6\n\n\nSets the MIME Types that will be compressed on-the-fly by brotli.\n\ndefault:\n \napplication/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\n\n\nuse-http2\n\u00b6\n\n\nEnables or disables \nHTTP/2\n support in secure connections.\n\n\ngzip-level\n\u00b6\n\n\nSets the gzip Compression Level that will be used. \ndefault:\n 5\n\n\ngzip-types\n\u00b6\n\n\nSets the MIME types in addition to \"text/html\" to compress. The special value \"*\" matches any MIME type. Responses with the \"text/html\" type are always compressed if \nuse-gzip\n is enabled.\n\n\nworker-processes\n\u00b6\n\n\nSets the number of \nworker processes\n.\nThe default of \"auto\" means number of available CPU cores.\n\n\nworker-cpu-affinity\n\u00b6\n\n\nBinds worker processes to the sets of CPUs. \nworker_cpu_affinity\n.\nBy default worker processes are not bound to any specific CPUs. The value can be:\n\n\n\n\n\"\": empty string indicate no affinity is applied.\n\n\ncpumask: e.g. 
\n0001 0010 0100 1000\n to bind processes to specific cpus.\n\n\nauto: binding worker processes automatically to available CPUs.\n\n\n\n\nworker-shutdown-timeout\n\u00b6\n\n\nSets a timeout for Nginx to \nwait for worker to gracefully shutdown\n. \ndefault:\n \"10s\"\n\n\nload-balance\n\u00b6\n\n\nSets the algorithm to use for load balancing.\nThe value can either be:\n\n\n\n\nround_robin: to use the default round robin loadbalancer\n\n\nleast_conn: to use the least connected method\n\n\nip_hash: to use a hash of the server for routing.\n\n\newma: to use the peak ewma method for routing (only available with \nenable-dynamic-configuration\n flag) \n\n\n\n\nThe default is least_conn.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/load_balancing.html\n\n\nvariables-hash-bucket-size\n\u00b6\n\n\nSets the bucket size for the variables hash table.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size\n\n\nvariables-hash-max-size\n\u00b6\n\n\nSets the maximum size of the variables hash table.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size\n\n\nupstream-keepalive-connections\n\u00b6\n\n\nActivates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this\nnumber is exceeded, the least recently used connections are closed. \ndefault:\n 32\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive\n\n\nlimit-conn-zone-variable\n\u00b6\n\n\nSets parameters for a shared memory zone that will keep states for various keys of \nlimit_conn_zone\n. The default of \"$binary_remote_addr\" variable\u2019s size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.\n\n\nproxy-stream-timeout\n\u00b6\n\n\nSets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout\n\n\nproxy-stream-responses\n\u00b6\n\n\nSets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses\n\n\nbind-address\n\u00b6\n\n\nSets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.\n\n\nforwarded-for-header\n\u00b6\n\n\nSets the header field for identifying the originating IP address of a client. \ndefault:\n X-Forwarded-For\n\n\ncompute-full-forwarded-for\n\u00b6\n\n\nAppend the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.\n\n\nproxy-add-original-uri-header\n\u00b6\n\n\nAdds an X-Original-Uri header with the original request URI to the backend request\n\n\ngenerate-request-id\n\u00b6\n\n\nEnsures that X-Request-ID is defaulted to a random value, if no X-Request-ID is present in the request\n\n\nenable-opentracing\n\u00b6\n\n\nEnables the nginx Opentracing extension. 
\ndefault:\n is disabled\n\n\nReferences:\n\n\nhttps://github.com/opentracing-contrib/nginx-opentracing\n\n\nzipkin-collector-host\n\u00b6\n\n\nSpecifies the host to use when uploading traces. It must be a valid URL.\n\n\nzipkin-collector-port\n\u00b6\n\n\nSpecifies the port to use when uploading traces. \ndefault:\n 9411\n\n\nzipkin-service-name\n\u00b6\n\n\nSpecifies the service name to use for any traces created. \ndefault:\n nginx\n\n\nzipkin-sample-rate\n\u00b6\n\n\nSpecifies sample rate for any traces created. \ndefault:\n 1.0\n\n\njaeger-collector-host\n\u00b6\n\n\nSpecifies the host to use when uploading traces. It must be a valid URL.\n\n\njaeger-collector-port\n\u00b6\n\n\nSpecifies the port to use when uploading traces. \ndefault:\n 6831\n\n\njaeger-service-name\n\u00b6\n\n\nSpecifies the service name to use for any traces created. \ndefault:\n nginx\n\n\njaeger-sampler-type\n\u00b6\n\n\nSpecifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. \ndefault:\n const\n\n\njaeger-sampler-param\n\u00b6\n\n\nSpecifies the argument to be passed to the sampler constructor. Must be a number.\nFor const this should be 0 to never sample and 1 to always sample. \ndefault:\n 1\n\n\nhttp-snippet\n\u00b6\n\n\nAdds custom configuration to the http section of the nginx configuration.\n\ndefault:\n \"\"\n\n\nserver-snippet\n\u00b6\n\n\nAdds custom configuration to all the servers in the nginx configuration.\n\ndefault:\n \"\"\n\n\nlocation-snippet\n\u00b6\n\n\nAdds custom configuration to all the locations in the nginx configuration.\n\ndefault:\n \"\"\n\n\ncustom-http-errors\n\u00b6\n\n\nEnables which HTTP codes should be passed for processing with the \nerror_page directive\n\n\nSetting at least one code also enables \nproxy_intercept_errors\n which are required to process error_page.\n\n\nExample usage: \ncustom-http-errors: 404,415\n\n\nproxy-body-size\n\u00b6\n\n\nSets the maximum allowed size of the client request body.\nSee NGINX \nclient_max_body_size\n.\n\n\nproxy-connect-timeout\n\u00b6\n\n\nSets the timeout for \nestablishing a connection with a proxied server\n. It should be noted that this timeout cannot usually exceed 75 seconds.\n\n\nproxy-read-timeout\n\u00b6\n\n\nSets the timeout in seconds for \nreading a response from the proxied server\n. The timeout is set only between two successive read operations, not for the transmission of the whole response.\n\n\nproxy-send-timeout\n\u00b6\n\n\nSets the timeout in seconds for \ntransmitting a request to the proxied server\n. The timeout is set only between two successive write operations, not for the transmission of the whole request.\n\n\nproxy-buffer-size\n\u00b6\n\n\nSets the size of the buffer used for \nreading the first part of the response\n received from the proxied server. 
This part usually contains a small response header.\n\n\nproxy-cookie-path\n\u00b6\n\n\nSets a text that \nshould be changed in the path attribute\n of the \u201cSet-Cookie\u201d header fields of a proxied server response.\n\n\nproxy-cookie-domain\n\u00b6\n\n\nSets a text that \nshould be changed in the domain attribute\n of the \u201cSet-Cookie\u201d header fields of a proxied server response.\n\n\nproxy-next-upstream\n\u00b6\n\n\nSpecifies in \nwhich cases\n a request should be passed to the next server.\n\n\nproxy-next-upstream-tries\n\u00b6\n\n\nLimit the number of \npossible tries\n a request should be passed to the next server.\n\n\nproxy-redirect-from\n\u00b6\n\n\nSets the original text that should be changed in the \"Location\" and \"Refresh\" header fields of a proxied server response. \ndefault:\n off\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect\n\n\nproxy-request-buffering\n\u00b6\n\n\nEnables or disables \nbuffering of a client request body\n.\n\n\nssl-redirect\n\u00b6\n\n\nSets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule).\n\ndefault:\n \"true\"\n\n\nwhitelist-source-range\n\u00b6\n\n\nSets the default whitelisted IPs for each \nserver\n block. This can be overwritten by an annotation on an Ingress rule.\nSee \nngx_http_access_module\n.\n\n\nskip-access-log-urls\n\u00b6\n\n\nSets a list of URLs that should not appear in the NGINX access log. This is useful with urls like \n/health\n or \nhealth-check\n that make \"complex\" reading the logs. \ndefault:\n is empty\n\n\nlimit-rate\n\u00b6\n\n\nLimits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per a request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate\n\n\nlimit-rate-after\n\u00b6\n\n\nSets the initial amount after which the further transmission of a response to a client will be rate limited.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after\n\n\nhttp-redirect-code\n\u00b6\n\n\nSets the HTTP status code to be used in redirects.\nSupported codes are \n301\n,\n302\n,\n307\n and \n308\n\n\ndefault:\n 308\n\n\n\n\nWhy the default code is 308?\n\n\nRFC 7238\n was created to define the 308 (Permanent Redirect) status code that is similar to 301 (Moved Permanently) but it keeps the payload in the redirect. This is important if the we send a redirect in methods like POST.\n\n\n\n\nproxy-buffering\n\u00b6\n\n\nEnables or disables \nbuffering of responses from the proxied server\n.\n\n\nlimit-req-status-code\n\u00b6\n\n\nSets the \nstatus code to return in response to rejected requests\n. \ndefault:\n 503\n\n\nno-tls-redirect-locations\n\u00b6\n\n\nA comma-separated list of locations on which http requests will never get redirected to their https counterpart.\n\ndefault:\n \"/.well-known/acme-challenge\"\n\n\nno-auth-locations\n\u00b6\n\n\nA comma-separated list of locations that should not get authenticated.\n\ndefault:\n \"/.well-known/acme-challenge\"", + "text": "ConfigMaps\n\u00b6\n\n\nConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.\n\n\nThe ConfigMap API resource stores configuration data as key-value pairs. 
The data provides the configurations for system\ncomponents for the nginx-controller. Before you can begin using a config-map it must be \ndeployed\n.\n\n\nIn order to overwrite nginx-controller configuration values as seen in \nconfig.go\n,\nyou can add key-value pairs to the data section of the config-map. For Example:\n\n\ndata\n:\n\n \nmap-hash-bucket-size\n:\n \n\"128\"\n\n \nssl-protocols\n:\n \nSSLv2\n\n\n\n\n\n\n\n\nImportant\n\n\nThe key and values in a ConfigMap can only be strings.\nThis means that we want a value with boolean values we need to quote the values, like \"true\" or \"false\".\nSame for numbers, like \"100\".\n\n\n\"Slice\" types (defined below as \n[]string\n or \n[]int\n can be provided as a comma-delimited string.\n\n\n\n\nConfiguration options\n\u00b6\n\n\nThe following table shows a configuration option's name, type, and the default value:\n\n\n\n\n\n\n\n\nname\n\n\ntype\n\n\ndefault\n\n\n\n\n\n\n\n\n\n\nadd-headers\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nallow-backend-server-header\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nhide-headers\n\n\nstring array\n\n\nempty\n\n\n\n\n\n\naccess-log-path\n\n\nstring\n\n\n\"/var/log/nginx/access.log\"\n\n\n\n\n\n\nerror-log-path\n\n\nstring\n\n\n\"/var/log/nginx/error.log\"\n\n\n\n\n\n\nenable-dynamic-tls-records\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nenable-modsecurity\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nenable-owasp-modsecurity-crs\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nclient-header-buffer-size\n\n\nstring\n\n\n\"1k\"\n\n\n\n\n\n\nclient-header-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\nclient-body-buffer-size\n\n\nstring\n\n\n\"8k\"\n\n\n\n\n\n\nclient-body-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\ndisable-access-log\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\ndisable-ipv6\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\ndisable-ipv6-dns\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\nenable-underscores-in-headers\n\n\nbool\n\n\nfalse\n\n\n\n\n\n\nignore-invalid-headers\n\n\nbool\n\n\ntrue\n\n\n\n\n\n\nretry-non-idempotent\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nerror-log-level\n\n\nstring\n\n\n\"notice\"\n\n\n\n\n\n\nhttp2-max-field-size\n\n\nstring\n\n\n\"4k\"\n\n\n\n\n\n\nhttp2-max-header-size\n\n\nstring\n\n\n\"16k\"\n\n\n\n\n\n\nhsts\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nhsts-include-subdomains\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nhsts-max-age\n\n\nstring\n\n\n\"15724800\"\n\n\n\n\n\n\nhsts-preload\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nkeep-alive\n\n\nint\n\n\n75\n\n\n\n\n\n\nkeep-alive-requests\n\n\nint\n\n\n100\n\n\n\n\n\n\nlarge-client-header-buffers\n\n\nstring\n\n\n\"4 8k\"\n\n\n\n\n\n\nlog-format-escape-json\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nlog-format-upstream\n\n\nstring\n\n\n%v\n \n-\n \n[\n$the_real_ip\n]\n \n-\n \n$remote_user\n \n[\n$time_local\n]\n \n\"$request\"\n \n$status\n \n$body_bytes_sent\n \n\"$http_referer\"\n \n\"$http_user_agent\"\n \n$request_length\n \n$request_time\n \n[\n$proxy_upstream_name\n]\n \n$upstream_addr\n \n$upstream_response_length\n \n$upstream_response_time\n \n$upstream_status\n\n\n\n\n\n\nlog-format-stream\n\n\nstring\n\n\n[$time_local] $protocol $status $bytes_sent $bytes_received 
$session_time\n\n\n\n\n\n\nmax-worker-connections\n\n\nint\n\n\n16384\n\n\n\n\n\n\nmap-hash-bucket-size\n\n\nint\n\n\n64\n\n\n\n\n\n\nnginx-status-ipv4-whitelist\n\n\n[]string\n\n\n\"127.0.0.1\"\n\n\n\n\n\n\nnginx-status-ipv6-whitelist\n\n\n[]string\n\n\n\"::1\"\n\n\n\n\n\n\nproxy-real-ip-cidr\n\n\n[]string\n\n\n\"0.0.0.0/0\"\n\n\n\n\n\n\nproxy-set-headers\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nserver-name-hash-max-size\n\n\nint\n\n\n1024\n\n\n\n\n\n\nserver-name-hash-bucket-size\n\n\nint\n\n\n\n\n\n\n\n\n\nproxy-headers-hash-max-size\n\n\nint\n\n\n512\n\n\n\n\n\n\nproxy-headers-hash-bucket-size\n\n\nint\n\n\n64\n\n\n\n\n\n\nserver-tokens\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nssl-ciphers\n\n\nstring\n\n\n\"ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\"\n\n\n\n\n\n\nssl-ecdh-curve\n\n\nstring\n\n\n\"auto\"\n\n\n\n\n\n\nssl-dh-param\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nssl-protocols\n\n\nstring\n\n\n\"TLSv1.2\"\n\n\n\n\n\n\nssl-session-cache\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nssl-session-cache-size\n\n\nstring\n\n\n\"10m\"\n\n\n\n\n\n\nssl-session-tickets\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nssl-session-ticket-key\n\n\nstring\n\n\n\n\n\n\n\n\n\nssl-session-timeout\n\n\nstring\n\n\n\"10m\"\n\n\n\n\n\n\nssl-buffer-size\n\n\nstring\n\n\n\"4k\"\n\n\n\n\n\n\nuse-proxy-protocol\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nproxy-protocol-header-timeout\n\n\nstring\n\n\n\"5s\"\n\n\n\n\n\n\nuse-gzip\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nuse-geoip\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nenable-brotli\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nbrotli-level\n\n\nint\n\n\n4\n\n\n\n\n\n\nbrotli-types\n\n\nstring\n\n\n\"application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\"\n\n\n\n\n\n\nuse-http2\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\ngzip-level\n\n\nint\n\n\n5\n\n\n\n\n\n\ngzip-types\n\n\nstring\n\n\n\"application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain 
text/x-component\"\n\n\n\n\n\n\nworker-processes\n\n\nstring\n\n\n\n\n\n\n\n\n\nworker-cpu-affinity\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nworker-shutdown-timeout\n\n\nstring\n\n\n\"10s\"\n\n\n\n\n\n\nload-balance\n\n\nstring\n\n\n\"round_robin\"\n\n\n\n\n\n\nvariables-hash-bucket-size\n\n\nint\n\n\n128\n\n\n\n\n\n\nvariables-hash-max-size\n\n\nint\n\n\n2048\n\n\n\n\n\n\nupstream-keepalive-connections\n\n\nint\n\n\n32\n\n\n\n\n\n\nlimit-conn-zone-variable\n\n\nstring\n\n\n\"$binary_remote_addr\"\n\n\n\n\n\n\nproxy-stream-timeout\n\n\nstring\n\n\n\"600s\"\n\n\n\n\n\n\nproxy-stream-responses\n\n\nint\n\n\n1\n\n\n\n\n\n\nbind-address\n\n\n[]string\n\n\n\"\"\n\n\n\n\n\n\nforwarded-for-header\n\n\nstring\n\n\n\"X-Forwarded-For\"\n\n\n\n\n\n\ncompute-full-forwarded-for\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nproxy-add-original-uri-header\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\ngenerate-request-id\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nenable-opentracing\n\n\nbool\n\n\n\"false\"\n\n\n\n\n\n\nzipkin-collector-host\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nzipkin-collector-port\n\n\nint\n\n\n9411\n\n\n\n\n\n\nzipkin-service-name\n\n\nstring\n\n\n\"nginx\"\n\n\n\n\n\n\nzipkin-sample-rate\n\n\nfloat\n\n\n1.0\n\n\n\n\n\n\njaeger-collector-host\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\njaeger-collector-port\n\n\nint\n\n\n6831\n\n\n\n\n\n\njaeger-service-name\n\n\nstring\n\n\n\"nginx\"\n\n\n\n\n\n\njaeger-sampler-type\n\n\nstring\n\n\n\"const\"\n\n\n\n\n\n\njaeger-sampler-param\n\n\nstring\n\n\n\"1\"\n\n\n\n\n\n\nhttp-snippet\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nserver-snippet\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\nlocation-snippet\n\n\nstring\n\n\n\"\"\n\n\n\n\n\n\ncustom-http-errors\n\n\n[]int]\n\n\n[]int{}\n\n\n\n\n\n\nproxy-body-size\n\n\nstring\n\n\n\"1m\"\n\n\n\n\n\n\nproxy-connect-timeout\n\n\nint\n\n\n5\n\n\n\n\n\n\nproxy-read-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\nproxy-send-timeout\n\n\nint\n\n\n60\n\n\n\n\n\n\nproxy-buffer-size\n\n\nstring\n\n\n\"4k\"\n\n\n\n\n\n\nproxy-cookie-path\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nproxy-cookie-domain\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nproxy-next-upstream\n\n\nstring\n\n\n\"error timeout\"\n\n\n\n\n\n\nproxy-next-upstream-tries\n\n\nint\n\n\n3\n\n\n\n\n\n\nproxy-redirect-from\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nproxy-request-buffering\n\n\nstring\n\n\n\"on\"\n\n\n\n\n\n\nssl-redirect\n\n\nbool\n\n\n\"true\"\n\n\n\n\n\n\nwhitelist-source-range\n\n\n[]string\n\n\n[]string{}\n\n\n\n\n\n\nskip-access-log-urls\n\n\n[]string\n\n\n[]string{}\n\n\n\n\n\n\nlimit-rate\n\n\nint\n\n\n0\n\n\n\n\n\n\nlimit-rate-after\n\n\nint\n\n\n0\n\n\n\n\n\n\nhttp-redirect-code\n\n\nint\n\n\n308\n\n\n\n\n\n\nproxy-buffering\n\n\nstring\n\n\n\"off\"\n\n\n\n\n\n\nlimit-req-status-code\n\n\nint\n\n\n503\n\n\n\n\n\n\nno-tls-redirect-locations\n\n\nstring\n\n\n\"/.well-known/acme-challenge\"\n\n\n\n\n\n\nno-auth-locations\n\n\nstring\n\n\n\"/.well-known/acme-challenge\"\n\n\n\n\n\n\n\n\nadd-headers\n\u00b6\n\n\nSets custom headers from named configmap before sending traffic to the client. See \nproxy-set-headers\n. \nexample\n\n\nallow-backend-server-header\n\u00b6\n\n\nEnables the return of the header Server from the backend instead of the generic nginx string. \ndefault:\n is disabled\n\n\nhide-headers\n\u00b6\n\n\nSets additional header that will not be passed from the upstream server to the client response.\n\ndefault:\n empty\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header\n\n\naccess-log-path\n\u00b6\n\n\nAccess log path. 
Goes to \n/var/log/nginx/access.log\n by default.\n\n\nNote:\n the file \n/var/log/nginx/access.log\n is a symlink to \n/dev/stdout\n\n\nerror-log-path\n\u00b6\n\n\nError log path. Goes to \n/var/log/nginx/error.log\n by default.\n\n\nNote:\n the file \n/var/log/nginx/error.log\n is a symlink to \n/dev/stderr\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/ngx_core_module.html#error_log\n\n\nenable-dynamic-tls-records\n\u00b6\n\n\nEnables dynamically sized TLS records to improve time-to-first-byte. \ndefault:\n is enabled\n\n\nReferences:\n\n\nhttps://blog.cloudflare.com/optimizing-tls-over-tcp-to-reduce-latency\n\n\nenable-modsecurity\n\u00b6\n\n\nEnables the modsecurity module for NGINX. \ndefault:\n is disabled\n\n\nenable-owasp-modsecurity-crs\n\u00b6\n\n\nEnables the OWASP ModSecurity Core Rule Set (CRS). \ndefault:\n is disabled\n\n\nclient-header-buffer-size\n\u00b6\n\n\nAllows to configure a custom buffer size for reading client request header.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size\n\n\nclient-header-timeout\n\u00b6\n\n\nDefines a timeout for reading client request header, in seconds.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout\n\n\nclient-body-buffer-size\n\u00b6\n\n\nSets buffer size for reading client request body.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size\n\n\nclient-body-timeout\n\u00b6\n\n\nDefines a timeout for reading client request body, in seconds.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout\n\n\ndisable-access-log\n\u00b6\n\n\nDisables the Access Log from the entire Ingress Controller. \ndefault:\n '\"false\"'\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_log_module.html#access_log\n\n\ndisable-ipv6\n\u00b6\n\n\nDisable listening on IPV6. \ndefault:\n is disabled\n\n\ndisable-ipv6-dns\n\u00b6\n\n\nDisable IPV6 for nginx DNS resolver. \ndefault:\n is disabled\n\n\nenable-underscores-in-headers\n\u00b6\n\n\nEnables underscores in header names. \ndefault:\n is disabled\n\n\nignore-invalid-headers\n\u00b6\n\n\nSet if header fields with invalid names should be ignored.\n\ndefault:\n is enabled\n\n\nretry-non-idempotent\n\u00b6\n\n\nSince 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value \"true\".\n\n\nerror-log-level\n\u00b6\n\n\nConfigures the logging level of errors. Log levels above are listed in the order of increasing severity.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/ngx_core_module.html#error_log\n\n\nhttp2-max-field-size\n\u00b6\n\n\nLimits the maximum size of an HPACK-compressed request header field.\n\n\nReferences:\n\n\nhttps://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size\n\n\nhttp2-max-header-size\n\u00b6\n\n\nLimits the maximum size of the entire request header list after HPACK decompression.\n\n\nReferences:\n\n\nhttps://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size\n\n\nhsts\n\u00b6\n\n\nEnables or disables the header HSTS in servers running SSL.\nHTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tell browsers that it should only be communicated with using HTTPS, instead of using HTTP. 
It provides protection against protocol downgrade attacks and cookie theft.\n\n\nReferences:\n\n\n\n\nhttps://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security\n\n\nhttps://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server\n\n\n\n\nhsts-include-subdomains\n\u00b6\n\n\nEnables or disables the use of HSTS in all the subdomains of the server-name.\n\n\nhsts-max-age\n\u00b6\n\n\nSets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.\n\n\nhsts-preload\n\u00b6\n\n\nEnables or disables the preload attribute in the HSTS feature (when it is enabled) dd\n\n\nkeep-alive\n\u00b6\n\n\nSets the time during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout\n\n\nkeep-alive-requests\n\u00b6\n\n\nSets the maximum number of requests that can be served through one keep-alive connection.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests\n\n\nlarge-client-header-buffers\n\u00b6\n\n\nSets the maximum number and size of buffers used for reading large client request header. \ndefault:\n 4 8k\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers\n\n\nlog-format-escape-json\n\u00b6\n\n\nSets if the escape parameter allows JSON (\"true\") or default characters escaping in variables (\"false\") Sets the nginx \nlog format\n.\n\n\nlog-format-upstream\n\u00b6\n\n\nSets the nginx \nlog format\n.\nExample for json output:\n\n\nconsolelog-format-upstream: '{ \"time\": \"$time_iso8601\", \"remote_addr\": \"$proxy_protocol_addr\",\"x-forward-for\": \"$proxy_add_x_forwarded_for\", \"request_id\": \"$req_id\", \"remote_user\":\"$remote_user\", \"bytes_sent\": $bytes_sent, \"request_time\": $request_time, \"status\":$status, \"vhost\": \"$host\", \"request_proto\": \"$server_protocol\", \"path\": \"$uri\",\"request_query\": \"$args\", \"request_length\": $request_length, \"duration\": $request_time,\"method\": \"$request_method\", \"http_referrer\": \"$http_referer\", \"http_user_agent\":\"$http_user_agent\" }'\n\n\nPlease check the \nlog-format\n for definition of each field.\n\n\nlog-format-stream\n\u00b6\n\n\nSets the nginx \nstream format\n.\n\n\nmax-worker-connections\n\u00b6\n\n\nSets the maximum number of simultaneous connections that can be opened by each \nworker process\n\n\nmap-hash-bucket-size\n\u00b6\n\n\nSets the bucket size for the \nmap variables hash tables\n. The details of setting up hash tables are provided in a separate \ndocument\n.\n\n\nproxy-real-ip-cidr\n\u00b6\n\n\nIf use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default the IP/network address of your external load balancer.\n\n\nproxy-set-headers\n\u00b6\n\n\nSets custom headers from named configmap before sending traffic to backends. The value format is namespace/name. 
See \nexample\n\n\nserver-name-hash-max-size\n\u00b6\n\n\nSets the maximum size of the \nserver names hash tables\n used in server names,map directive\u2019s values, MIME types, names of request header strings, etc.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/hash.html\n\n\nserver-name-hash-bucket-size\n\u00b6\n\n\nSets the size of the bucket for the server names hash tables.\n\n\nReferences:\n\n\n\n\nhttp://nginx.org/en/docs/hash.html\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size\n\n\n\n\nproxy-headers-hash-max-size\n\u00b6\n\n\nSets the maximum size of the proxy headers hash tables.\n\n\nReferences:\n\n\n\n\nhttp://nginx.org/en/docs/hash.html\n\n\nhttps://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size\n\n\n\n\nproxy-headers-hash-bucket-size\n\u00b6\n\n\nSets the size of the bucket for the proxy headers hash tables.\n\n\nReferences:\n\n\n\n\nhttp://nginx.org/en/docs/hash.html\n\n\nhttps://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size\n\n\n\n\nserver-tokens\n\u00b6\n\n\nSend NGINX Server header in responses and display NGINX version in error pages. \ndefault:\n is enabled\n\n\nssl-ciphers\n\u00b6\n\n\nSets the \nciphers\n list to enable. The ciphers are specified in the format understood by the OpenSSL library.\n\n\nThe default cipher list is:\n \nECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\n.\n\n\nThe ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect \nforward secrecy\n.\n\n\nPlease check the \nMozilla SSL Configuration Generator\n.\n\n\nssl-ecdh-curve\n\u00b6\n\n\nSpecifies a curve for ECDHE ciphers.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve\n\n\nssl-dh-param\n\u00b6\n\n\nSets the name of the secret that contains Diffie-Hellman key to help with \"Perfect Forward Secrecy\".\n\n\nReferences:\n\n\n\n\nhttps://wiki.openssl.org/index.php/Diffie-Hellman_parameters\n\n\nhttps://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam\n\n\nhttp://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam\n\n\n\n\nssl-protocols\n\u00b6\n\n\nSets the \nSSL protocols\n to use. The default is: \nTLSv1.2\n.\n\n\nPlease check the result of the configuration using \nhttps://ssllabs.com/ssltest/analyze.html\n or \nhttps://testssl.sh\n.\n\n\nssl-session-cache\n\u00b6\n\n\nEnables or disables the use of shared \nSSL cache\n among worker processes.\n\n\nssl-session-cache-size\n\u00b6\n\n\nSets the size of the \nSSL shared session cache\n between all worker processes.\n\n\nssl-session-tickets\n\u00b6\n\n\nEnables or disables session resumption through \nTLS session tickets\n.\n\n\nssl-session-ticket-key\n\u00b6\n\n\nSets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string.\nTo create a ticket: \nopenssl rand 80 | openssl enc -A -base64\n\n\nTLS session ticket-key\n, by default, a randomly generated key is used. 
\n\n\nssl-session-timeout\n\u00b6\n\n\nSets the time during which a client may \nreuse the session\n parameters stored in a cache.\n\n\nssl-buffer-size\n\u00b6\n\n\nSets the size of the \nSSL buffer\n used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).\n\n\nReferences:\n\n\nhttps://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/\n\n\nuse-proxy-protocol\n\u00b6\n\n\nEnables or disables the \nPROXY protocol\n to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).\n\n\nproxy-protocol-header-timeout\n\u00b6\n\n\nSets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinetly on a dropped connection.\n\ndefault:\n 5s\n\n\nuse-gzip\n\u00b6\n\n\nEnables or disables compression of HTTP responses using the \n\"gzip\" module\n.\nThe default mime type list to compress is: \napplication/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\n.\n\n\nuse-geoip\n\u00b6\n\n\nEnables or disables \n\"geoip\" module\n that creates variables with values depending on the client IP address, using the precompiled MaxMind databases.\n\ndefault:\n true\n\n\nenable-brotli\n\u00b6\n\n\nEnables or disables compression of HTTP responses using the \n\"brotli\" module\n.\nThe default mime type list to compress is: \napplication/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\n. \ndefault:\n is disabled\n\n\n\n\nNote:\n Brotli does not works in Safari < 11. For more information see \nhttps://caniuse.com/#feat=brotli\n\n\n\n\nbrotli-level\n\u00b6\n\n\nSets the Brotli Compression Level that will be used. \ndefault:\n 4\n\n\nbrotli-types\n\u00b6\n\n\nSets the MIME Types that will be compressed on-the-fly by brotli.\n\ndefault:\n \napplication/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\n\n\nuse-http2\n\u00b6\n\n\nEnables or disables \nHTTP/2\n support in secure connections.\n\n\ngzip-level\n\u00b6\n\n\nSets the gzip Compression Level that will be used. \ndefault:\n 5\n\n\ngzip-types\n\u00b6\n\n\nSets the MIME types in addition to \"text/html\" to compress. The special value \"*\" matches any MIME type. Responses with the \"text/html\" type are always compressed if \nuse-gzip\n is enabled.\n\n\nworker-processes\n\u00b6\n\n\nSets the number of \nworker processes\n.\nThe default of \"auto\" means number of available CPU cores.\n\n\nworker-cpu-affinity\n\u00b6\n\n\nBinds worker processes to the sets of CPUs. \nworker_cpu_affinity\n.\nBy default worker processes are not bound to any specific CPUs. The value can be:\n\n\n\n\n\"\": empty string indicate no affinity is applied.\n\n\ncpumask: e.g. 
\n0001 0010 0100 1000\n to bind processes to specific cpus.\n\n\nauto: binding worker processes automatically to available CPUs.\n\n\n\n\nworker-shutdown-timeout\n\u00b6\n\n\nSets a timeout for Nginx to \nwait for worker to gracefully shutdown\n. \ndefault:\n \"10s\"\n\n\nload-balance\n\u00b6\n\n\nSets the algorithm to use for load balancing.\nThe value can either be:\n\n\n\n\nround_robin: to use the default round robin loadbalancer\n\n\nleast_conn: to use the least connected method (\nnote\n that this is available only in non-dynamic mode: \n--enable-dynamic-configuration=false\n)\n\n\nip_hash: to use a hash of the server for routing (\nnote\n that this is available only in non-dynamic mode: \n--enable-dynamic-configuration=false\n, but alternatively you can consider using \nnginx.ingress.kubernetes.io/upstream-hash-by\n)\n\n\newma: to use the Peak EWMA method for routing (\nimplementation\n)\n\n\n\n\nThe default is \nround_robin\n.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/load_balancing.html\n\n\nvariables-hash-bucket-size\n\u00b6\n\n\nSets the bucket size for the variables hash table.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size\n\n\nvariables-hash-max-size\n\u00b6\n\n\nSets the maximum size of the variables hash table.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size\n\n\nupstream-keepalive-connections\n\u00b6\n\n\nActivates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this\nnumber is exceeded, the least recently used connections are closed. \ndefault:\n 32\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive\n\n\nlimit-conn-zone-variable\n\u00b6\n\n\nSets parameters for a shared memory zone that will keep states for various keys of \nlimit_conn_zone\n. The default of \"$binary_remote_addr\" variable\u2019s size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.\n\n\nproxy-stream-timeout\n\u00b6\n\n\nSets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout\n\n\nproxy-stream-responses\n\u00b6\n\n\nSets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses\n\n\nbind-address\n\u00b6\n\n\nSets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.\n\n\nforwarded-for-header\n\u00b6\n\n\nSets the header field for identifying the originating IP address of a client. \ndefault:\n X-Forwarded-For\n\n\ncompute-full-forwarded-for\n\u00b6\n\n\nAppend the remote address to the X-Forwarded-For header instead of replacing it. 
When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.\n\n\nproxy-add-original-uri-header\n\u00b6\n\n\nAdds an X-Original-Uri header with the original request URI to the backend request\n\n\ngenerate-request-id\n\u00b6\n\n\nEnsures that X-Request-ID is defaulted to a random value, if no X-Request-ID is present in the request\n\n\nenable-opentracing\n\u00b6\n\n\nEnables the nginx Opentracing extension. \ndefault:\n is disabled\n\n\nReferences:\n\n\nhttps://github.com/opentracing-contrib/nginx-opentracing\n\n\nzipkin-collector-host\n\u00b6\n\n\nSpecifies the host to use when uploading traces. It must be a valid URL.\n\n\nzipkin-collector-port\n\u00b6\n\n\nSpecifies the port to use when uploading traces. \ndefault:\n 9411\n\n\nzipkin-service-name\n\u00b6\n\n\nSpecifies the service name to use for any traces created. \ndefault:\n nginx\n\n\nzipkin-sample-rate\n\u00b6\n\n\nSpecifies sample rate for any traces created. \ndefault:\n 1.0\n\n\njaeger-collector-host\n\u00b6\n\n\nSpecifies the host to use when uploading traces. It must be a valid URL.\n\n\njaeger-collector-port\n\u00b6\n\n\nSpecifies the port to use when uploading traces. \ndefault:\n 6831\n\n\njaeger-service-name\n\u00b6\n\n\nSpecifies the service name to use for any traces created. \ndefault:\n nginx\n\n\njaeger-sampler-type\n\u00b6\n\n\nSpecifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. \ndefault:\n const\n\n\njaeger-sampler-param\n\u00b6\n\n\nSpecifies the argument to be passed to the sampler constructor. Must be a number.\nFor const this should be 0 to never sample and 1 to always sample. \ndefault:\n 1\n\n\nhttp-snippet\n\u00b6\n\n\nAdds custom configuration to the http section of the nginx configuration.\n\ndefault:\n \"\"\n\n\nserver-snippet\n\u00b6\n\n\nAdds custom configuration to all the servers in the nginx configuration.\n\ndefault:\n \"\"\n\n\nlocation-snippet\n\u00b6\n\n\nAdds custom configuration to all the locations in the nginx configuration.\n\ndefault:\n \"\"\n\n\ncustom-http-errors\n\u00b6\n\n\nEnables which HTTP codes should be passed for processing with the \nerror_page directive\n\n\nSetting at least one code also enables \nproxy_intercept_errors\n which are required to process error_page.\n\n\nExample usage: \ncustom-http-errors: 404,415\n\n\nproxy-body-size\n\u00b6\n\n\nSets the maximum allowed size of the client request body.\nSee NGINX \nclient_max_body_size\n.\n\n\nproxy-connect-timeout\n\u00b6\n\n\nSets the timeout for \nestablishing a connection with a proxied server\n. It should be noted that this timeout cannot usually exceed 75 seconds.\n\n\nproxy-read-timeout\n\u00b6\n\n\nSets the timeout in seconds for \nreading a response from the proxied server\n. The timeout is set only between two successive read operations, not for the transmission of the whole response.\n\n\nproxy-send-timeout\n\u00b6\n\n\nSets the timeout in seconds for \ntransmitting a request to the proxied server\n. The timeout is set only between two successive write operations, not for the transmission of the whole request.\n\n\nproxy-buffer-size\n\u00b6\n\n\nSets the size of the buffer used for \nreading the first part of the response\n received from the proxied server. 
This part usually contains a small response header.\n\n\nproxy-cookie-path\n\u00b6\n\n\nSets a text that \nshould be changed in the path attribute\n of the \u201cSet-Cookie\u201d header fields of a proxied server response.\n\n\nproxy-cookie-domain\n\u00b6\n\n\nSets a text that \nshould be changed in the domain attribute\n of the \u201cSet-Cookie\u201d header fields of a proxied server response.\n\n\nproxy-next-upstream\n\u00b6\n\n\nSpecifies in \nwhich cases\n a request should be passed to the next server.\n\n\nproxy-next-upstream-tries\n\u00b6\n\n\nLimit the number of \npossible tries\n a request should be passed to the next server.\n\n\nproxy-redirect-from\n\u00b6\n\n\nSets the original text that should be changed in the \"Location\" and \"Refresh\" header fields of a proxied server response. \ndefault:\n off\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect\n\n\nproxy-request-buffering\n\u00b6\n\n\nEnables or disables \nbuffering of a client request body\n.\n\n\nssl-redirect\n\u00b6\n\n\nSets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule).\n\ndefault:\n \"true\"\n\n\nwhitelist-source-range\n\u00b6\n\n\nSets the default whitelisted IPs for each \nserver\n block. This can be overwritten by an annotation on an Ingress rule.\nSee \nngx_http_access_module\n.\n\n\nskip-access-log-urls\n\u00b6\n\n\nSets a list of URLs that should not appear in the NGINX access log. This is useful with urls like \n/health\n or \nhealth-check\n that make \"complex\" reading the logs. \ndefault:\n is empty\n\n\nlimit-rate\n\u00b6\n\n\nLimits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per a request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate\n\n\nlimit-rate-after\n\u00b6\n\n\nSets the initial amount after which the further transmission of a response to a client will be rate limited.\n\n\nReferences:\n\n\nhttp://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after\n\n\nhttp-redirect-code\n\u00b6\n\n\nSets the HTTP status code to be used in redirects.\nSupported codes are \n301\n,\n302\n,\n307\n and \n308\n\n\ndefault:\n 308\n\n\n\n\nWhy the default code is 308?\n\n\nRFC 7238\n was created to define the 308 (Permanent Redirect) status code that is similar to 301 (Moved Permanently) but it keeps the payload in the redirect. This is important if the we send a redirect in methods like POST.\n\n\n\n\nproxy-buffering\n\u00b6\n\n\nEnables or disables \nbuffering of responses from the proxied server\n.\n\n\nlimit-req-status-code\n\u00b6\n\n\nSets the \nstatus code to return in response to rejected requests\n. 
\ndefault:\n 503\n\n\nno-tls-redirect-locations\n\u00b6\n\n\nA comma-separated list of locations on which http requests will never get redirected to their https counterpart.\n\ndefault:\n \"/.well-known/acme-challenge\"\n\n\nno-auth-locations\n\u00b6\n\n\nA comma-separated list of locations that should not get authenticated.\n\ndefault:\n \"/.well-known/acme-challenge\"", "title": "ConfigMaps" }, { @@ -382,7 +382,7 @@ }, { "location": "/user-guide/nginx-configuration/configmap/#configuration-options", - "text": "The following table shows a configuration option's name, type, and the default value: name type default add-headers string \"\" allow-backend-server-header bool \"false\" hide-headers string array empty access-log-path string \"/var/log/nginx/access.log\" error-log-path string \"/var/log/nginx/error.log\" enable-dynamic-tls-records bool \"true\" enable-modsecurity bool \"false\" enable-owasp-modsecurity-crs bool \"false\" client-header-buffer-size string \"1k\" client-header-timeout int 60 client-body-buffer-size string \"8k\" client-body-timeout int 60 disable-access-log bool false disable-ipv6 bool false disable-ipv6-dns bool false enable-underscores-in-headers bool false ignore-invalid-headers bool true retry-non-idempotent bool \"false\" error-log-level string \"notice\" http2-max-field-size string \"4k\" http2-max-header-size string \"16k\" hsts bool \"true\" hsts-include-subdomains bool \"true\" hsts-max-age string \"15724800\" hsts-preload bool \"false\" keep-alive int 75 keep-alive-requests int 100 large-client-header-buffers string \"4 8k\" log-format-escape-json bool \"false\" log-format-upstream string %v - [ $the_real_ip ] - $remote_user [ $time_local ] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $request_length $request_time [ $proxy_upstream_name ] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status log-format-stream string [$time_local] $protocol $status $bytes_sent $bytes_received $session_time max-worker-connections int 16384 map-hash-bucket-size int 64 nginx-status-ipv4-whitelist []string \"127.0.0.1\" nginx-status-ipv6-whitelist []string \"::1\" proxy-real-ip-cidr []string \"0.0.0.0/0\" proxy-set-headers string \"\" server-name-hash-max-size int 1024 server-name-hash-bucket-size int proxy-headers-hash-max-size int 512 proxy-headers-hash-bucket-size int 64 server-tokens bool \"true\" ssl-ciphers string \"ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\" ssl-ecdh-curve string \"auto\" ssl-dh-param string \"\" ssl-protocols string \"TLSv1.2\" ssl-session-cache bool \"true\" ssl-session-cache-size string \"10m\" ssl-session-tickets bool \"true\" ssl-session-ticket-key string ssl-session-timeout string \"10m\" ssl-buffer-size string \"4k\" use-proxy-protocol bool \"false\" proxy-protocol-header-timeout string \"5s\" use-gzip bool \"true\" use-geoip bool \"true\" enable-brotli bool \"false\" brotli-level int 4 brotli-types string \"application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\" use-http2 bool \"true\" 
gzip-level int 5 gzip-types string \"application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\" worker-processes string worker-cpu-affinity string \"\" worker-shutdown-timeout string \"10s\" load-balance string \"least_conn\" variables-hash-bucket-size int 128 variables-hash-max-size int 2048 upstream-keepalive-connections int 32 limit-conn-zone-variable string \"$binary_remote_addr\" proxy-stream-timeout string \"600s\" proxy-stream-responses int 1 bind-address []string \"\" forwarded-for-header string \"X-Forwarded-For\" compute-full-forwarded-for bool \"false\" proxy-add-original-uri-header bool \"true\" generate-request-id bool \"true\" enable-opentracing bool \"false\" zipkin-collector-host string \"\" zipkin-collector-port int 9411 zipkin-service-name string \"nginx\" zipkin-sample-rate float 1.0 jaeger-collector-host string \"\" jaeger-collector-port int 6831 jaeger-service-name string \"nginx\" jaeger-sampler-type string \"const\" jaeger-sampler-param string \"1\" http-snippet string \"\" server-snippet string \"\" location-snippet string \"\" custom-http-errors []int] []int{} proxy-body-size string \"1m\" proxy-connect-timeout int 5 proxy-read-timeout int 60 proxy-send-timeout int 60 proxy-buffer-size string \"4k\" proxy-cookie-path string \"off\" proxy-cookie-domain string \"off\" proxy-next-upstream string \"error timeout\" proxy-next-upstream-tries int 3 proxy-redirect-from string \"off\" proxy-request-buffering string \"on\" ssl-redirect bool \"true\" whitelist-source-range []string []string{} skip-access-log-urls []string []string{} limit-rate int 0 limit-rate-after int 0 http-redirect-code int 308 proxy-buffering string \"off\" limit-req-status-code int 503 no-tls-redirect-locations string \"/.well-known/acme-challenge\" no-auth-locations string \"/.well-known/acme-challenge\"", + "text": "The following table shows a configuration option's name, type, and the default value: name type default add-headers string \"\" allow-backend-server-header bool \"false\" hide-headers string array empty access-log-path string \"/var/log/nginx/access.log\" error-log-path string \"/var/log/nginx/error.log\" enable-dynamic-tls-records bool \"true\" enable-modsecurity bool \"false\" enable-owasp-modsecurity-crs bool \"false\" client-header-buffer-size string \"1k\" client-header-timeout int 60 client-body-buffer-size string \"8k\" client-body-timeout int 60 disable-access-log bool false disable-ipv6 bool false disable-ipv6-dns bool false enable-underscores-in-headers bool false ignore-invalid-headers bool true retry-non-idempotent bool \"false\" error-log-level string \"notice\" http2-max-field-size string \"4k\" http2-max-header-size string \"16k\" hsts bool \"true\" hsts-include-subdomains bool \"true\" hsts-max-age string \"15724800\" hsts-preload bool \"false\" keep-alive int 75 keep-alive-requests int 100 large-client-header-buffers string \"4 8k\" log-format-escape-json bool \"false\" log-format-upstream string %v - [ $the_real_ip ] - $remote_user [ $time_local ] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $request_length $request_time [ $proxy_upstream_name ] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status log-format-stream string [$time_local] $protocol $status 
$bytes_sent $bytes_received $session_time max-worker-connections int 16384 map-hash-bucket-size int 64 nginx-status-ipv4-whitelist []string \"127.0.0.1\" nginx-status-ipv6-whitelist []string \"::1\" proxy-real-ip-cidr []string \"0.0.0.0/0\" proxy-set-headers string \"\" server-name-hash-max-size int 1024 server-name-hash-bucket-size int proxy-headers-hash-max-size int 512 proxy-headers-hash-bucket-size int 64 server-tokens bool \"true\" ssl-ciphers string \"ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256\" ssl-ecdh-curve string \"auto\" ssl-dh-param string \"\" ssl-protocols string \"TLSv1.2\" ssl-session-cache bool \"true\" ssl-session-cache-size string \"10m\" ssl-session-tickets bool \"true\" ssl-session-ticket-key string ssl-session-timeout string \"10m\" ssl-buffer-size string \"4k\" use-proxy-protocol bool \"false\" proxy-protocol-header-timeout string \"5s\" use-gzip bool \"true\" use-geoip bool \"true\" enable-brotli bool \"false\" brotli-level int 4 brotli-types string \"application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\" use-http2 bool \"true\" gzip-level int 5 gzip-types string \"application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component\" worker-processes string worker-cpu-affinity string \"\" worker-shutdown-timeout string \"10s\" load-balance string \"round_robin\" variables-hash-bucket-size int 128 variables-hash-max-size int 2048 upstream-keepalive-connections int 32 limit-conn-zone-variable string \"$binary_remote_addr\" proxy-stream-timeout string \"600s\" proxy-stream-responses int 1 bind-address []string \"\" forwarded-for-header string \"X-Forwarded-For\" compute-full-forwarded-for bool \"false\" proxy-add-original-uri-header bool \"true\" generate-request-id bool \"true\" enable-opentracing bool \"false\" zipkin-collector-host string \"\" zipkin-collector-port int 9411 zipkin-service-name string \"nginx\" zipkin-sample-rate float 1.0 jaeger-collector-host string \"\" jaeger-collector-port int 6831 jaeger-service-name string \"nginx\" jaeger-sampler-type string \"const\" jaeger-sampler-param string \"1\" http-snippet string \"\" server-snippet string \"\" location-snippet string \"\" custom-http-errors []int] []int{} proxy-body-size string \"1m\" proxy-connect-timeout int 5 proxy-read-timeout int 60 proxy-send-timeout int 60 proxy-buffer-size string \"4k\" proxy-cookie-path string \"off\" proxy-cookie-domain string \"off\" proxy-next-upstream string \"error timeout\" proxy-next-upstream-tries int 3 proxy-redirect-from string \"off\" proxy-request-buffering string \"on\" ssl-redirect bool \"true\" whitelist-source-range []string []string{} skip-access-log-urls []string []string{} limit-rate int 0 limit-rate-after int 0 http-redirect-code int 308 proxy-buffering string \"off\" limit-req-status-code int 503 
no-tls-redirect-locations string \"/.well-known/acme-challenge\" no-auth-locations string \"/.well-known/acme-challenge\"", "title": "Configuration options" }, { @@ -702,7 +702,7 @@ }, { "location": "/user-guide/nginx-configuration/configmap/#load-balance", - "text": "Sets the algorithm to use for load balancing.\nThe value can either be: round_robin: to use the default round robin loadbalancer least_conn: to use the least connected method ip_hash: to use a hash of the server for routing. ewma: to use the peak ewma method for routing (only available with enable-dynamic-configuration flag) The default is least_conn. References: http://nginx.org/en/docs/http/load_balancing.html", + "text": "Sets the algorithm to use for load balancing.\nThe value can either be: round_robin: to use the default round robin loadbalancer least_conn: to use the least connected method ( note that this is available only in non-dynamic mode: --enable-dynamic-configuration=false ) ip_hash: to use a hash of the server for routing ( note that this is available only in non-dynamic mode: --enable-dynamic-configuration=false , but alternatively you can consider using nginx.ingress.kubernetes.io/upstream-hash-by ) ewma: to use the Peak EWMA method for routing ( implementation ) The default is round_robin . References: http://nginx.org/en/docs/http/load_balancing.html", "title": "load-balance" }, { @@ -957,12 +957,12 @@ }, { "location": "/user-guide/cli-arguments/", - "text": "Command line arguments\n\u00b6\n\n\nThe following command line arguments are accepted by the Ingress controller executable.\n\n\nThey are set in the container spec of the \nnginx-ingress-controller\n Deployment manifest (see \ndeploy/with-rbac.yaml\n or \ndeploy/without-rbac.yaml\n).\n\n\n\n\n\n\n\n\nArgument\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\n--alsologtostderr\n\n\nlog to standard error as well as files\n\n\n\n\n\n\n--annotations-prefix string\n\n\nPrefix of the Ingress annotations specific to the NGINX controller. (default \"nginx.ingress.kubernetes.io\")\n\n\n\n\n\n\n--apiserver-host string\n\n\nAddress of the Kubernetes API server. Takes the form \"protocol://address:port\". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted.\n\n\n\n\n\n\n--configmap string\n\n\nName of the ConfigMap containing custom global configurations for the controller.\n\n\n\n\n\n\n--default-backend-service string\n\n\nService used to serve HTTP requests not matching any known server name (catch-all). Takes the form \"namespace/name\". The controller configures NGINX to forward requests to the first port of this Service.\n\n\n\n\n\n\n--default-server-port int\n\n\nPort to use for exposing the default server (catch-all). (default 8181)\n\n\n\n\n\n\n--default-ssl-certificate string\n\n\nSecret containing a SSL certificate to be used by the default HTTPS server (catch-all). Takes the form \"namespace/name\".\n\n\n\n\n\n\n--election-id string\n\n\nElection id to use for Ingress status updates. (default \"ingress-controller-leader\")\n\n\n\n\n\n\n--enable-dynamic-configuration\n\n\nDynamically refresh backends on topology changes instead of reloading NGINX. Feature backed by OpenResty Lua libraries.\n\n\n\n\n\n\n--enable-ssl-chain-completion\n\n\nAutocomplete SSL certificate chains with missing intermediate CA certificates. A valid certificate chain is required to enable OCSP stapling. Certificates uploaded to Kubernetes must have the \"Authority Information Access\" X.509 v3 extension for this to succeed. 
(default true)\n\n\n\n\n\n\n--enable-ssl-passthrough\n\n\nEnable SSL Passthrough.\n\n\n\n\n\n\n--force-namespace-isolation\n\n\nForce namespace isolation. Prevents Ingress objects from referencing Secrets and ConfigMaps located in a different namespace than their own. May be used together with watch-namespace.\n\n\n\n\n\n\n--health-check-path string\n\n\nURL path of the health check endpoint. Configured inside the NGINX status server. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default \"/healthz\")\n\n\n\n\n\n\n--healthz-port int\n\n\nPort to use for the healthz endpoint. (default 10254)\n\n\n\n\n\n\n--http-port int\n\n\nPort to use for servicing HTTP traffic. (default 80)\n\n\n\n\n\n\n--https-port int\n\n\nPort to use for servicing HTTPS traffic. (default 443)\n\n\n\n\n\n\n--ingress-class string\n\n\nName of the ingress class this controller satisfies. The class of an Ingress object is set using the annotation \"kubernetes.io/ingress.class\". All ingress classes are satisfied if this parameter is left empty.\n\n\n\n\n\n\n--kubeconfig string\n\n\nPath to a kubeconfig file containing authorization and API server information.\n\n\n\n\n\n\n--log_backtrace_at traceLocation\n\n\nwhen logging hits line file:N, emit a stack trace (default :0)\n\n\n\n\n\n\n--log_dir string\n\n\nIf non-empty, write log files in this directory\n\n\n\n\n\n\n--logtostderr\n\n\nlog to standard error instead of files (default true)\n\n\n\n\n\n\n--profiling\n\n\nEnable profiling via web interface host:port/debug/pprof/ (default true)\n\n\n\n\n\n\n--publish-service string\n\n\nService fronting the Ingress controller. Takes the form \"namespace/name\". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies.\n\n\n\n\n\n\n--publish-status-address string\n\n\nCustomized address to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter.\n\n\n\n\n\n\n--report-node-internal-ip-address\n\n\nSet the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter.\n\n\n\n\n\n\n--sort-backends\n\n\nSort servers inside NGINX upstreams.\n\n\n\n\n\n\n--ssl-passthrough-proxy-port int\n\n\nPort to use internally for SSL Passthrough. (default 442)\n\n\n\n\n\n\n--status-port int\n\n\nPort to use for exposing NGINX status pages. (default 18080)\n\n\n\n\n\n\n--stderrthreshold severity\n\n\nlogs at or above this threshold go to stderr (default 2)\n\n\n\n\n\n\n--sync-period duration\n\n\nPeriod at which the controller forces the repopulation of its local object stores. (default 10m0s)\n\n\n\n\n\n\n--sync-rate-limit float32\n\n\nDefine the sync frequency upper limit (default 0.3)\n\n\n\n\n\n\n--tcp-services-configmap string\n\n\nName of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic.\n\n\n\n\n\n\n--udp-services-configmap string\n\n\nName of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. 
The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port name or number.\n\n\n\n\n\n\n--update-status\n\n\nUpdate the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true)\n\n\n\n\n\n\n--update-status-on-shutdown\n\n\nUpdate the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. (default true)\n\n\n\n\n\n\n--v Level\n\n\nlog level for V logs\n\n\n\n\n\n\n--version\n\n\nShow release information about the NGINX Ingress controller and exit.\n\n\n\n\n\n\n--vmodule moduleSpec\n\n\ncomma-separated list of pattern=N settings for file-filtered logging\n\n\n\n\n\n\n--watch-namespace string\n\n\nNamespace the controller watches for updates to Kubernetes objects. This includes Ingresses, Services and all configuration resources. All namespaces are watched if this parameter is left empty.", + "text": "Command line arguments\n\u00b6\n\n\nThe following command line arguments are accepted by the Ingress controller executable.\n\n\nThey are set in the container spec of the \nnginx-ingress-controller\n Deployment manifest (see \ndeploy/with-rbac.yaml\n or \ndeploy/without-rbac.yaml\n).\n\n\n\n\n\n\n\n\nArgument\n\n\nDescription\n\n\n\n\n\n\n\n\n\n\n--alsologtostderr\n\n\nlog to standard error as well as files\n\n\n\n\n\n\n--annotations-prefix string\n\n\nPrefix of the Ingress annotations specific to the NGINX controller. (default \"nginx.ingress.kubernetes.io\")\n\n\n\n\n\n\n--apiserver-host string\n\n\nAddress of the Kubernetes API server. Takes the form \"protocol://address:port\". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted.\n\n\n\n\n\n\n--configmap string\n\n\nName of the ConfigMap containing custom global configurations for the controller.\n\n\n\n\n\n\n--default-backend-service string\n\n\nService used to serve HTTP requests not matching any known server name (catch-all). Takes the form \"namespace/name\". The controller configures NGINX to forward requests to the first port of this Service.\n\n\n\n\n\n\n--default-server-port int\n\n\nPort to use for exposing the default server (catch-all). (default 8181)\n\n\n\n\n\n\n--default-ssl-certificate string\n\n\nSecret containing a SSL certificate to be used by the default HTTPS server (catch-all). Takes the form \"namespace/name\".\n\n\n\n\n\n\n--election-id string\n\n\nElection id to use for Ingress status updates. (default \"ingress-controller-leader\")\n\n\n\n\n\n\n--enable-dynamic-configuration\n\n\nDynamically refresh backends on topology changes instead of reloading NGINX. Feature backed by OpenResty Lua libraries. (enabled by default)\n\n\n\n\n\n\n--enable-ssl-chain-completion\n\n\nAutocomplete SSL certificate chains with missing intermediate CA certificates. A valid certificate chain is required to enable OCSP stapling. Certificates uploaded to Kubernetes must have the \"Authority Information Access\" X.509 v3 extension for this to succeed. (default true)\n\n\n\n\n\n\n--enable-ssl-passthrough\n\n\nEnable SSL Passthrough.\n\n\n\n\n\n\n--force-namespace-isolation\n\n\nForce namespace isolation. Prevents Ingress objects from referencing Secrets and ConfigMaps located in a different namespace than their own. May be used together with watch-namespace.\n\n\n\n\n\n\n--health-check-path string\n\n\nURL path of the health check endpoint. Configured inside the NGINX status server. 
All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default \"/healthz\")\n\n\n\n\n\n\n--healthz-port int\n\n\nPort to use for the healthz endpoint. (default 10254)\n\n\n\n\n\n\n--http-port int\n\n\nPort to use for servicing HTTP traffic. (default 80)\n\n\n\n\n\n\n--https-port int\n\n\nPort to use for servicing HTTPS traffic. (default 443)\n\n\n\n\n\n\n--ingress-class string\n\n\nName of the ingress class this controller satisfies. The class of an Ingress object is set using the annotation \"kubernetes.io/ingress.class\". All ingress classes are satisfied if this parameter is left empty.\n\n\n\n\n\n\n--kubeconfig string\n\n\nPath to a kubeconfig file containing authorization and API server information.\n\n\n\n\n\n\n--log_backtrace_at traceLocation\n\n\nwhen logging hits line file:N, emit a stack trace (default :0)\n\n\n\n\n\n\n--log_dir string\n\n\nIf non-empty, write log files in this directory\n\n\n\n\n\n\n--logtostderr\n\n\nlog to standard error instead of files (default true)\n\n\n\n\n\n\n--profiling\n\n\nEnable profiling via web interface host:port/debug/pprof/ (default true)\n\n\n\n\n\n\n--publish-service string\n\n\nService fronting the Ingress controller. Takes the form \"namespace/name\". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies.\n\n\n\n\n\n\n--publish-status-address string\n\n\nCustomized address to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter.\n\n\n\n\n\n\n--report-node-internal-ip-address\n\n\nSet the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter.\n\n\n\n\n\n\n--sort-backends\n\n\nSort servers inside NGINX upstreams.\n\n\n\n\n\n\n--ssl-passthrough-proxy-port int\n\n\nPort to use internally for SSL Passthrough. (default 442)\n\n\n\n\n\n\n--status-port int\n\n\nPort to use for exposing NGINX status pages. (default 18080)\n\n\n\n\n\n\n--stderrthreshold severity\n\n\nlogs at or above this threshold go to stderr (default 2)\n\n\n\n\n\n\n--sync-period duration\n\n\nPeriod at which the controller forces the repopulation of its local object stores. (default 10m0s)\n\n\n\n\n\n\n--sync-rate-limit float32\n\n\nDefine the sync frequency upper limit (default 0.3)\n\n\n\n\n\n\n--tcp-services-configmap string\n\n\nName of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic.\n\n\n\n\n\n\n--udp-services-configmap string\n\n\nName of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port name or number.\n\n\n\n\n\n\n--update-status\n\n\nUpdate the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true)\n\n\n\n\n\n\n--update-status-on-shutdown\n\n\nUpdate the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. 
(default true)\n\n\n\n\n\n\n--v Level\n\n\nlog level for V logs\n\n\n\n\n\n\n--version\n\n\nShow release information about the NGINX Ingress controller and exit.\n\n\n\n\n\n\n--vmodule moduleSpec\n\n\ncomma-separated list of pattern=N settings for file-filtered logging\n\n\n\n\n\n\n--watch-namespace string\n\n\nNamespace the controller watches for updates to Kubernetes objects. This includes Ingresses, Services and all configuration resources. All namespaces are watched if this parameter is left empty.", "title": "Command line arguments" }, { "location": "/user-guide/cli-arguments/#command-line-arguments", - "text": "The following command line arguments are accepted by the Ingress controller executable. They are set in the container spec of the nginx-ingress-controller Deployment manifest (see deploy/with-rbac.yaml or deploy/without-rbac.yaml ). Argument Description --alsologtostderr log to standard error as well as files --annotations-prefix string Prefix of the Ingress annotations specific to the NGINX controller. (default \"nginx.ingress.kubernetes.io\") --apiserver-host string Address of the Kubernetes API server. Takes the form \"protocol://address:port\". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted. --configmap string Name of the ConfigMap containing custom global configurations for the controller. --default-backend-service string Service used to serve HTTP requests not matching any known server name (catch-all). Takes the form \"namespace/name\". The controller configures NGINX to forward requests to the first port of this Service. --default-server-port int Port to use for exposing the default server (catch-all). (default 8181) --default-ssl-certificate string Secret containing a SSL certificate to be used by the default HTTPS server (catch-all). Takes the form \"namespace/name\". --election-id string Election id to use for Ingress status updates. (default \"ingress-controller-leader\") --enable-dynamic-configuration Dynamically refresh backends on topology changes instead of reloading NGINX. Feature backed by OpenResty Lua libraries. --enable-ssl-chain-completion Autocomplete SSL certificate chains with missing intermediate CA certificates. A valid certificate chain is required to enable OCSP stapling. Certificates uploaded to Kubernetes must have the \"Authority Information Access\" X.509 v3 extension for this to succeed. (default true) --enable-ssl-passthrough Enable SSL Passthrough. --force-namespace-isolation Force namespace isolation. Prevents Ingress objects from referencing Secrets and ConfigMaps located in a different namespace than their own. May be used together with watch-namespace. --health-check-path string URL path of the health check endpoint. Configured inside the NGINX status server. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default \"/healthz\") --healthz-port int Port to use for the healthz endpoint. (default 10254) --http-port int Port to use for servicing HTTP traffic. (default 80) --https-port int Port to use for servicing HTTPS traffic. (default 443) --ingress-class string Name of the ingress class this controller satisfies. The class of an Ingress object is set using the annotation \"kubernetes.io/ingress.class\". All ingress classes are satisfied if this parameter is left empty. --kubeconfig string Path to a kubeconfig file containing authorization and API server information. 
--log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files (default true) --profiling Enable profiling via web interface host:port/debug/pprof/ (default true) --publish-service string Service fronting the Ingress controller. Takes the form \"namespace/name\". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies. --publish-status-address string Customized address to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter. --report-node-internal-ip-address Set the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter. --sort-backends Sort servers inside NGINX upstreams. --ssl-passthrough-proxy-port int Port to use internally for SSL Passthrough. (default 442) --status-port int Port to use for exposing NGINX status pages. (default 18080) --stderrthreshold severity logs at or above this threshold go to stderr (default 2) --sync-period duration Period at which the controller forces the repopulation of its local object stores. (default 10m0s) --sync-rate-limit float32 Define the sync frequency upper limit (default 0.3) --tcp-services-configmap string Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic. --udp-services-configmap string Name of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port name or number. --update-status Update the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true) --update-status-on-shutdown Update the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. (default true) --v Level log level for V logs --version Show release information about the NGINX Ingress controller and exit. --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging --watch-namespace string Namespace the controller watches for updates to Kubernetes objects. This includes Ingresses, Services and all configuration resources. All namespaces are watched if this parameter is left empty.", + "text": "The following command line arguments are accepted by the Ingress controller executable. They are set in the container spec of the nginx-ingress-controller Deployment manifest (see deploy/with-rbac.yaml or deploy/without-rbac.yaml ). Argument Description --alsologtostderr log to standard error as well as files --annotations-prefix string Prefix of the Ingress annotations specific to the NGINX controller. (default \"nginx.ingress.kubernetes.io\") --apiserver-host string Address of the Kubernetes API server. Takes the form \"protocol://address:port\". If not specified, it is assumed the program runs inside a Kubernetes cluster and local discovery is attempted. 
--configmap string Name of the ConfigMap containing custom global configurations for the controller. --default-backend-service string Service used to serve HTTP requests not matching any known server name (catch-all). Takes the form \"namespace/name\". The controller configures NGINX to forward requests to the first port of this Service. --default-server-port int Port to use for exposing the default server (catch-all). (default 8181) --default-ssl-certificate string Secret containing a SSL certificate to be used by the default HTTPS server (catch-all). Takes the form \"namespace/name\". --election-id string Election id to use for Ingress status updates. (default \"ingress-controller-leader\") --enable-dynamic-configuration Dynamically refresh backends on topology changes instead of reloading NGINX. Feature backed by OpenResty Lua libraries. (enabled by default) --enable-ssl-chain-completion Autocomplete SSL certificate chains with missing intermediate CA certificates. A valid certificate chain is required to enable OCSP stapling. Certificates uploaded to Kubernetes must have the \"Authority Information Access\" X.509 v3 extension for this to succeed. (default true) --enable-ssl-passthrough Enable SSL Passthrough. --force-namespace-isolation Force namespace isolation. Prevents Ingress objects from referencing Secrets and ConfigMaps located in a different namespace than their own. May be used together with watch-namespace. --health-check-path string URL path of the health check endpoint. Configured inside the NGINX status server. All requests received on the port defined by the healthz-port parameter are forwarded internally to this path. (default \"/healthz\") --healthz-port int Port to use for the healthz endpoint. (default 10254) --http-port int Port to use for servicing HTTP traffic. (default 80) --https-port int Port to use for servicing HTTPS traffic. (default 443) --ingress-class string Name of the ingress class this controller satisfies. The class of an Ingress object is set using the annotation \"kubernetes.io/ingress.class\". All ingress classes are satisfied if this parameter is left empty. --kubeconfig string Path to a kubeconfig file containing authorization and API server information. --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --logtostderr log to standard error instead of files (default true) --profiling Enable profiling via web interface host:port/debug/pprof/ (default true) --publish-service string Service fronting the Ingress controller. Takes the form \"namespace/name\". When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies. --publish-status-address string Customized address to set as the load-balancer status of Ingress objects this controller satisfies. Requires the update-status parameter. --report-node-internal-ip-address Set the load-balancer status of Ingress objects to internal Node addresses instead of external. Requires the update-status parameter. --sort-backends Sort servers inside NGINX upstreams. --ssl-passthrough-proxy-port int Port to use internally for SSL Passthrough. (default 442) --status-port int Port to use for exposing NGINX status pages. 
(default 18080) --stderrthreshold severity logs at or above this threshold go to stderr (default 2) --sync-period duration Period at which the controller forces the repopulation of its local object stores. (default 10m0s) --sync-rate-limit float32 Define the sync frequency upper limit (default 0.3) --tcp-services-configmap string Name of the ConfigMap containing the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port number or name. TCP ports 80 and 443 are reserved by the controller for servicing HTTP traffic. --udp-services-configmap string Name of the ConfigMap containing the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is a reference to a Service in the form \"namespace/name:port\", where \"port\" can either be a port name or number. --update-status Update the load-balancer status of Ingress objects this controller satisfies. Requires setting the publish-service parameter to a valid Service reference. (default true) --update-status-on-shutdown Update the load-balancer status of Ingress objects when the controller shuts down. Requires the update-status parameter. (default true) --v Level log level for V logs --version Show release information about the NGINX Ingress controller and exit. --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging --watch-namespace string Namespace the controller watches for updates to Kubernetes objects. This includes Ingresses, Services and all configuration resources. All namespaces are watched if this parameter is left empty.", "title": "Command line arguments" }, { @@ -1282,7 +1282,7 @@ }, { "location": "/examples/auth/oauth-external-auth/README/", - "text": "External Authentication\n\u00b6\n\n\nOverview\n\u00b6\n\n\nThe \nauth-url\n and \nauth-signin\n annotations allow you to use an external\nauthentication provider to protect your Ingress resources.\n\n\n\n\nImportant\n\n\nthis annotation requires \nnginx-ingress-controller v0.9.0\n or greater.)\n\n\n\n\nKey Detail\n\u00b6\n\n\nThis functionality is enabled by deploying multiple Ingress objects for a single host.\nOne Ingress object has no special annotations and handles authentication.\n\n\nOther Ingress objects can then be annotated in such a way that require the user to\nauthenticate against the first Ingress's endpoint, and can redirect \n401\ns to the\nsame endpoint.\n\n\nSample:\n\n\n...\n\n\nmetadata\n:\n\n \nname\n:\n \napplication\n\n \nannotations\n:\n\n \nnginx.ingress.kubernetes.io/auth-url\n:\n \n\"https://$host/oauth2/auth\"\n\n \nnginx.ingress.kubernetes.io/auth-signin\n:\n \n\"https://$host/oauth2/start?rd=$request_uri\"\n\n\n...\n\n\n\n\n\n\nExample: OAuth2 Proxy + Kubernetes-Dashboard\n\u00b6\n\n\nThis example will show you how to deploy \noauth2_proxy\n\ninto a Kubernetes cluster and use it to protect the Kubernetes Dashboard using github as oAuth2 provider\n\n\nPrepare\n\u00b6\n\n\n\n\nInstall the kubernetes dashboard\n\n\n\n\nkubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.5.0.yaml\n\n\n\n\n\n\n\n\nCreate a \ncustom Github OAuth application\n\n\n\n\n\n\n\n\nHomepage URL is the FQDN in the Ingress rule, like \nhttps://foo.bar.com\n\n\nAuthorization callback URL is the same as the base FQDN plus \n/oauth2\n, like 
\nhttps://foo.bar.com/oauth2\n\n\n\n\n\n\n\n\n\n\nConfigure oauth2_proxy values in the file oauth2-proxy.yaml with the values:\n\n\n\n\n\n\nOAUTH2_PROXY_CLIENT_ID with the github \n\n\n\n\n\nOAUTH2_PROXY_CLIENT_SECRET with the github \n\n\n\n\n\nOAUTH2_PROXY_COOKIE_SECRET with value of \npython\n \n-\nc\n \n'import os,base64; print base64.b64encode(os.urandom(16))'\n \n\n\n\n\n\n\nCustomize the contents of the file dashboard-ingress.yaml:\n\n\n\n\n\n\nReplace \n__INGRESS_HOST__\n with a valid FQDN and \n__INGRESS_SECRET__\n with a Secret with a valid SSL certificate.\n\n\n\n\nDeploy the oauth2 proxy and the ingress rules running:\n\n\n\n\n$\n kubectl create -f oauth2-proxy.yaml,dashboard-ingress.yaml\n\n\n\n\n\nTest the oauth integration accessing the configured URL, like \nhttps://foo.bar.com", + "text": "External Authentication\n\u00b6\n\n\nOverview\n\u00b6\n\n\nThe \nauth-url\n and \nauth-signin\n annotations allow you to use an external\nauthentication provider to protect your Ingress resources.\n\n\n\n\nImportant\n\n\nthis annotation requires \nnginx-ingress-controller v0.9.0\n or greater.)\n\n\n\n\nKey Detail\n\u00b6\n\n\nThis functionality is enabled by deploying multiple Ingress objects for a single host.\nOne Ingress object has no special annotations and handles authentication.\n\n\nOther Ingress objects can then be annotated in such a way that require the user to\nauthenticate against the first Ingress's endpoint, and can redirect \n401\ns to the\nsame endpoint.\n\n\nSample:\n\n\n...\n\n\nmetadata\n:\n\n \nname\n:\n \napplication\n\n \nannotations\n:\n\n \nnginx.ingress.kubernetes.io/auth-url\n:\n \n\"https://$host/oauth2/auth\"\n\n \nnginx.ingress.kubernetes.io/auth-signin\n:\n \n\"https://$host/oauth2/start?rd=$escaped_request_uri\"\n\n\n...\n\n\n\n\n\n\nExample: OAuth2 Proxy + Kubernetes-Dashboard\n\u00b6\n\n\nThis example will show you how to deploy \noauth2_proxy\n\ninto a Kubernetes cluster and use it to protect the Kubernetes Dashboard using github as oAuth2 provider\n\n\nPrepare\n\u00b6\n\n\n\n\nInstall the kubernetes dashboard\n\n\n\n\nkubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.5.0.yaml\n\n\n\n\n\n\n\n\nCreate a \ncustom Github OAuth application\n\n\n\n\n\n\n\n\nHomepage URL is the FQDN in the Ingress rule, like \nhttps://foo.bar.com\n\n\nAuthorization callback URL is the same as the base FQDN plus \n/oauth2\n, like \nhttps://foo.bar.com/oauth2\n\n\n\n\n\n\n\n\n\n\nConfigure oauth2_proxy values in the file oauth2-proxy.yaml with the values:\n\n\n\n\n\n\nOAUTH2_PROXY_CLIENT_ID with the github \n\n\n\n\n\nOAUTH2_PROXY_CLIENT_SECRET with the github \n\n\n\n\n\nOAUTH2_PROXY_COOKIE_SECRET with value of \npython\n \n-\nc\n \n'import os,base64; print base64.b64encode(os.urandom(16))'\n \n\n\n\n\n\n\nCustomize the contents of the file dashboard-ingress.yaml:\n\n\n\n\n\n\nReplace \n__INGRESS_HOST__\n with a valid FQDN and \n__INGRESS_SECRET__\n with a Secret with a valid SSL certificate.\n\n\n\n\nDeploy the oauth2 proxy and the ingress rules running:\n\n\n\n\n$\n kubectl create -f oauth2-proxy.yaml,dashboard-ingress.yaml\n\n\n\n\n\nTest the oauth integration accessing the configured URL, like \nhttps://foo.bar.com", "title": "External Authentication" }, { @@ -1297,7 +1297,7 @@ }, { "location": "/examples/auth/oauth-external-auth/README/#key-detail", - "text": "This functionality is enabled by deploying multiple Ingress objects for a single host.\nOne Ingress object has no special annotations and handles 
authentication. Other Ingress objects can then be annotated in such a way that require the user to\nauthenticate against the first Ingress's endpoint, and can redirect 401 s to the\nsame endpoint. Sample: ... metadata : \n name : application \n annotations : \n nginx.ingress.kubernetes.io/auth-url : \"https://$host/oauth2/auth\" \n nginx.ingress.kubernetes.io/auth-signin : \"https://$host/oauth2/start?rd=$request_uri\" ...", + "text": "This functionality is enabled by deploying multiple Ingress objects for a single host.\nOne Ingress object has no special annotations and handles authentication. Other Ingress objects can then be annotated in such a way that require the user to\nauthenticate against the first Ingress's endpoint, and can redirect 401 s to the\nsame endpoint. Sample: ... metadata : \n name : application \n annotations : \n nginx.ingress.kubernetes.io/auth-url : \"https://$host/oauth2/auth\" \n nginx.ingress.kubernetes.io/auth-signin : \"https://$host/oauth2/start?rd=$escaped_request_uri\" ...", "title": "Key Detail" }, { diff --git a/sitemap.xml b/sitemap.xml index 71300a5b8..f77c51dd4 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -2,217 +2,217 @@ / - 2018-07-19 + 2018-07-28 daily /deploy/ - 2018-07-19 + 2018-07-28 daily /deploy/rbac/ - 2018-07-19 + 2018-07-28 daily /deploy/upgrade/ - 2018-07-19 + 2018-07-28 daily /user-guide/nginx-configuration/ - 2018-07-19 + 2018-07-28 daily /user-guide/nginx-configuration/annotations/ - 2018-07-19 + 2018-07-28 daily /user-guide/nginx-configuration/configmap/ - 2018-07-19 + 2018-07-28 daily /user-guide/nginx-configuration/custom-template/ - 2018-07-19 + 2018-07-28 daily /user-guide/nginx-configuration/log-format/ - 2018-07-19 + 2018-07-28 daily /user-guide/cli-arguments/ - 2018-07-19 + 2018-07-28 daily /user-guide/custom-errors/ - 2018-07-19 + 2018-07-28 daily /user-guide/default-backend/ - 2018-07-19 + 2018-07-28 daily /user-guide/exposing-tcp-udp-services/ - 2018-07-19 + 2018-07-28 daily /user-guide/external-articles/ - 2018-07-19 + 2018-07-28 daily /user-guide/miscellaneous/ - 2018-07-19 + 2018-07-28 daily /user-guide/multiple-ingress/ - 2018-07-19 + 2018-07-28 daily /user-guide/tls/ - 2018-07-19 + 2018-07-28 daily /user-guide/third-party-addons/modsecurity/ - 2018-07-19 + 2018-07-28 daily /user-guide/third-party-addons/opentracing/ - 2018-07-19 + 2018-07-28 daily /examples/ - 2018-07-19 + 2018-07-28 daily /examples/PREREQUISITES/ - 2018-07-19 + 2018-07-28 daily /examples/affinity/cookie/README/ - 2018-07-19 + 2018-07-28 daily /examples/auth/basic/README/ - 2018-07-19 + 2018-07-28 daily /examples/auth/client-certs/README/ - 2018-07-19 + 2018-07-28 daily /examples/auth/external-auth/README/ - 2018-07-19 + 2018-07-28 daily /examples/auth/oauth-external-auth/README/ - 2018-07-19 + 2018-07-28 daily /examples/customization/configuration-snippets/README/ - 2018-07-19 + 2018-07-28 daily /examples/customization/custom-configuration/README/ - 2018-07-19 + 2018-07-28 daily /examples/customization/custom-errors/README/ - 2018-07-19 + 2018-07-28 daily /examples/customization/custom-headers/README/ - 2018-07-19 + 2018-07-28 daily /examples/customization/custom-upstream-check/README/ - 2018-07-19 + 2018-07-28 daily /examples/customization/external-auth-headers/README/ - 2018-07-19 + 2018-07-28 daily /examples/customization/ssl-dh-param/README/ - 2018-07-19 + 2018-07-28 daily /examples/customization/sysctl/README/ - 2018-07-19 + 2018-07-28 daily /examples/docker-registry/README/ - 2018-07-19 + 2018-07-28 daily /examples/grpc/README/ - 2018-07-19 + 
2018-07-28 daily /examples/multi-tls/README/ - 2018-07-19 + 2018-07-28 daily /examples/rewrite/README/ - 2018-07-19 + 2018-07-28 daily /examples/static-ip/README/ - 2018-07-19 + 2018-07-28 daily /examples/tls-termination/README/ - 2018-07-19 + 2018-07-28 daily /development/ - 2018-07-19 + 2018-07-28 daily /how-it-works/ - 2018-07-19 + 2018-07-28 daily /troubleshooting/ - 2018-07-19 + 2018-07-28 daily \ No newline at end of file diff --git a/user-guide/cli-arguments/index.html b/user-guide/cli-arguments/index.html index a80c85649..efa953165 100644 --- a/user-guide/cli-arguments/index.html +++ b/user-guide/cli-arguments/index.html @@ -1080,7 +1080,7 @@ --enable-dynamic-configuration -Dynamically refresh backends on topology changes instead of reloading NGINX. Feature backed by OpenResty Lua libraries. +Dynamically refresh backends on topology changes instead of reloading NGINX. Feature backed by OpenResty Lua libraries. (enabled by default) --enable-ssl-chain-completion diff --git a/user-guide/nginx-configuration/configmap/index.html b/user-guide/nginx-configuration/configmap/index.html index 9eef28c7f..502b0be10 100644 --- a/user-guide/nginx-configuration/configmap/index.html +++ b/user-guide/nginx-configuration/configmap/index.html @@ -2980,7 +2980,7 @@ Same for numbers, like "100".
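For orientation, the flags indexed above (and the "(enabled by default)" clarification in the cli-arguments hunk) are normally wired in as container args on the controller Deployment. The following is a minimal sketch only, assuming the conventional resource names from the generic deployment; the namespace, object names and image tag are placeholders, not values taken from this patch.

    # Minimal sketch: namespace, resource names and image tag are placeholders.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-ingress-controller
      namespace: ingress-nginx
    spec:
      selector:
        matchLabels:
          app: ingress-nginx
      template:
        metadata:
          labels:
            app: ingress-nginx
        spec:
          serviceAccountName: nginx-ingress-serviceaccount
          containers:
            - name: nginx-ingress-controller
              image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1
              env:
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
              args:
                - /nginx-ingress-controller
                - --configmap=$(POD_NAMESPACE)/nginx-configuration
                - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
                - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
                - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
                - --publish-service=$(POD_NAMESPACE)/ingress-nginx
                # Enabled by default in this release; set to false only if least_conn or
                # ip_hash load balancing is required (see the load-balance notes below).
                - --enable-dynamic-configuration=true

Any of the other flags listed above (for example --enable-ssl-passthrough or --ingress-class) would simply be appended to the same args list.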

 load-balance
 string
-"least_conn"
+"round_robin"

 variables-hash-bucket-size

@@ -3435,11 +3435,11 @@ By default worker processes are not bound to any specific CPUs. The value can be
 The value can either be:

   • round_robin: to use the default round robin loadbalancer
-  • least_conn: to use the least connected method
-  • ip_hash: to use a hash of the server for routing.
-  • ewma: to use the peak ewma method for routing (only available with enable-dynamic-configuration flag)
+  • least_conn: to use the least connected method (note that this is available only in non-dynamic mode: --enable-dynamic-configuration=false)
+  • ip_hash: to use a hash of the server for routing (note that this is available only in non-dynamic mode: --enable-dynamic-configuration=false, but alternatively you can consider using nginx.ingress.kubernetes.io/upstream-hash-by)
+  • ewma: to use the Peak EWMA method for routing (implementation)

-The default is least_conn.
+The default is round_robin.

References: http://nginx.org/en/docs/http/load_balancing.html
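To make the above concrete: load-balance is a key in the controller's global ConfigMap, i.e. the object the --configmap flag points at. A minimal sketch follows, assuming the conventional nginx-configuration ConfigMap in the ingress-nginx namespace; adjust both names to whatever --configmap actually references in your deployment.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: nginx-configuration     # must match the value passed to --configmap
      namespace: ingress-nginx
    data:
      # "round_robin" is the default; "least_conn" and "ip_hash" additionally require
      # running the controller with --enable-dynamic-configuration=false.
      load-balance: "ewma"

The controller watches this ConfigMap, so a plain kubectl apply -f should be enough for the new algorithm to take effect; no controller redeploy is needed.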

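Since the ip_hash note above points to nginx.ingress.kubernetes.io/upstream-hash-by as the alternative under dynamic configuration, here is a per-Ingress sketch; the host, Ingress name and backend Service are placeholders.

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: uri-hash-example          # placeholder name
      annotations:
        # Consistent hashing on the request URI: requests for the same path
        # are routed to the same upstream endpoint.
        nginx.ingress.kubernetes.io/upstream-hash-by: "$request_uri"
    spec:
      rules:
        - host: foo.bar.com
          http:
            paths:
              - path: /
                backend:
                  serviceName: echo-service   # placeholder backend Service
                  servicePort: 80

Unlike load-balance, which applies to every upstream the controller generates, this annotation only affects the Ingress it is set on.
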
variables-hash-bucket-size