The following resources are required for a generic deployment.
+curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml \ + | kubectl apply -f - + +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml \ + | kubectl apply -f - + +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml \ + | kubectl apply -f - + +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml \ + | kubectl apply -f - + +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml \ + | kubectl apply -f - +
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/without-rbac.yaml \ + | kubectl apply -f - +
Please check the RBAC document.
+curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml \ + | kubectl apply -f - + +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml \ + | kubectl apply -f - +
There are cloud-provider-specific YAML files.
Kubernetes is available in Docker for Mac's Edge channel. Switch to the Edge channel and enable Kubernetes.
+Patch the nginx ingress controller deployment to add the flag --publish-service
kubectl patch deployment -n ingress-nginx nginx-ingress-controller --type='json' \ + --patch="$(curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/publish-service-patch.yaml)" +
Create a service
+curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/docker-for-mac/service.yaml \ + | kubectl apply -f - +
For standard usage:
+minikube addons enable ingress
+
For development:
+$ minikube addons disable ingress
+
nginx-ingress-controller
deployment without RBAC roles or with RBAC rolesnginx-ingress-controller
deployment to use your custom image. Local images can be seen by performing docker images.
$ kubectl edit deployment nginx-ingress-controller -n ingress-nginx
+
edit the following section:
+image: <IMAGE-NAME>:<TAG> +imagePullPolicy: IfNotPresent +name: nginx-ingress-controller +
nginx-ingress-controller
deployment exists:$ kubectl get pods -n ingress-nginx +NAME READY STATUS RESTARTS AGE +default-http-backend-66b447d9cf-rrlf9 1/1 Running 0 12s +nginx-ingress-controller-fdcdcd6dd-vvpgs 1/1 Running 0 11s +
In AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer
.
Since Kubernetes v1.9.0 it is possible to use a classic load balancer (ELB) or a network load balancer (NLB).
+Please check the elastic load balancing AWS details page
This setup requires choosing the layer (L4 or L7) at which to configure the ELB:
+Patch the nginx ingress controller deployment to add the flag --publish-service
kubectl patch deployment -n ingress-nginx nginx-ingress-controller --type='json' \ + --patch="$(curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/publish-service-patch.yaml)" +
For L4:
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l4.yaml +kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l4.yaml +
For L7:
Change the line in the file provider/aws/service-l7.yaml that contains the dummy certificate id "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX", replacing it with a valid ACM certificate ARN.
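For example, the file can be downloaded and the dummy value swapped in place like this (the replacement value below is a placeholder for your own ACM certificate ARN; this is just one convenient way to make the edit):

mkdir -p provider/aws
curl -o provider/aws/service-l7.yaml https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml
# substitute the dummy certificate id with your ACM certificate ARN
sed -i 's|arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX|<your ACM certificate ARN>|' provider/aws/service-l7.yaml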
+Then execute:
kubectl apply -f provider/aws/service-l7.yaml +kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l7.yaml +
This example creates an ELB with just two listeners, one on port 80 and another on port 443.
+If the ingress controller uses RBAC run:
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/patch-service-with-rbac.yaml
+
If not run:
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/patch-service-without-rbac.yaml
+
This type of load balancer is supported since v1.10.0 as an ALPHA feature.
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-nlb.yaml
+
If the ingress controller uses RBAC run:
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/patch-service-with-rbac.yaml
+
If not run:
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/patch-service-without-rbac.yaml
+
Patch the nginx ingress controller deployment to add the flag --publish-service
kubectl patch deployment -n ingress-nginx nginx-ingress-controller --type='json' \ + --patch="$(curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/publish-service-patch.yaml)" +
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/gce-gke/service.yaml \ + | kubectl apply -f - +
If the ingress controller uses RBAC run:
+curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/patch-service-with-rbac.yaml | kubectl apply -f -
+
If not run:
+curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/patch-service-without-rbac.yaml | kubectl apply -f -
+
Important Note: proxy protocol is not supported in GCE/GKE
+Patch the nginx ingress controller deployment to add the flag --publish-service
kubectl patch deployment -n ingress-nginx nginx-ingress-controller --type='json' \ + --patch="$(curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/publish-service-patch.yaml)" +
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/azure/service.yaml \ + | kubectl apply -f - +
If the ingress controller uses RBAC run:
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/patch-service-with-rbac.yaml
+
If not run:
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/patch-service-without-rbac.yaml
+
Important Note: proxy protocol is not supported in GCE/GKE
+Using NodePort:
+curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml \ + | kubectl apply -f - +
The NGINX Ingress controller can be installed via Helm using the chart stable/nginx-ingress from the official charts repository.
+To install the chart with the release name my-nginx
:
helm install stable/nginx-ingress --name my-nginx
+
If the kubernetes cluster has RBAC enabled, then run:
+helm install stable/nginx-ingress --name my-nginx --set rbac.create=true
+
To check if the ingress controller pods have started, run the following command:
+kubectl get pods --all-namespaces -l app=ingress-nginx --watch
+
Once the ingress controller pods are running, you can cancel the above command by typing Ctrl+C
.
+Now, you are ready to create your first ingress.
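For a quick first test, a minimal Ingress that routes a hostname to the http-svc test service could look like the following (the hostname and backend service here are placeholders; point it at a service that actually exists in your cluster):

echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: first-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80
" | kubectl create -f -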
To detect which version of the ingress controller is running, exec into the pod and run the nginx-ingress-controller --version command.
POD_NAMESPACE=ingress-nginx +POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app=ingress-nginx -o jsonpath={.items[0].metadata.name}) +kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version +
A ConfigMap can be used to configure system components for the nginx-controller. In order to begin using a ConfigMap, make sure it has been created and is being used in the deployment.
It is created as shown in the Mandatory Commands section above.
+curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml \ + | kubectl apply -f - +
and is set up to be used in the deployment without-rbac or with-rbac with the following line:
+- --configmap=$(POD_NAMESPACE)/nginx-configuration +
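To confirm which ConfigMap a running controller is actually pointed at, you can inspect the container arguments of the deployment (deployment name and namespace as used earlier in this guide; adjust them if yours differ):

kubectl get deployment nginx-ingress-controller -n ingress-nginx \
  -o jsonpath='{.spec.template.spec.containers[0].args}'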
For information on using the config-map, see its user-guide.
This example applies to nginx-ingress-controllers being deployed in an environment with RBAC enabled.
+Role Based Access Control is comprised of four layers:
+ClusterRole
- permissions assigned to a role that apply to an entire clusterClusterRoleBinding
- binding a ClusterRole to a specific accountRole
- permissions assigned to a role that apply to a specific namespaceRoleBinding
- binding a Role to a specific accountIn order for RBAC to be applied to an nginx-ingress-controller, that controller
+should be assigned to a ServiceAccount
. That ServiceAccount
should be
+bound to the Role
s and ClusterRole
s defined for the nginx-ingress-controller.
One ServiceAccount is created in this example, nginx-ingress-serviceaccount
.
There are two sets of permissions defined in this example. Cluster-wide
+permissions defined by the ClusterRole
named nginx-ingress-clusterrole
, and
+namespace specific permissions defined by the Role
named nginx-ingress-role
.
These permissions are granted in order for the nginx-ingress-controller to be
+able to function as an ingress across the cluster. These permissions are
+granted to the ClusterRole named nginx-ingress-clusterrole
configmaps
, endpoints
, nodes
, pods
, secrets
: list, watchnodes
: getservices
, ingresses
: get, list, watchevents
: create, patchingresses/status
: updateThese permissions are granted specific to the nginx-ingress namespace. These
+permissions are granted to the Role named nginx-ingress-role
configmaps
, pods
, secrets
: getendpoints
: getFurthermore to support leader-election, the nginx-ingress-controller needs to
+have access to a configmap
using the resourceName ingress-controller-leader-nginx
++Note that resourceNames can NOT be used to limit requests using the “create” +verb because authorizers only have access to information that can be obtained +from the request URL, method, and headers (resource names in a “create” request +are part of the request body).
+
configmaps
: get, update (for resourceName ingress-controller-leader-nginx
)configmaps
: createThis resourceName is the concatenation of the election-id
and the
+ingress-class
as defined by the ingress-controller, which defaults to:
election-id
: ingress-controller-leader
ingress-class
: nginx
resourceName
: <election-id>-<ingress-class>
Please adapt accordingly if you overwrite either parameter when launching the +nginx-ingress-controller.
+The ServiceAccount nginx-ingress-serviceaccount
is bound to the Role
+nginx-ingress-role
and the ClusterRole nginx-ingress-clusterrole
.
The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the nginx-ingress namespace.
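A quick way to verify this wiring after applying the example manifests is to check that the ServiceAccount, Role, ClusterRole and the deployment's serviceAccountName all exist and line up (resource names as used in this example; adjust the namespace if yours differs):

kubectl get serviceaccount nginx-ingress-serviceaccount -n ingress-nginx
kubectl get role nginx-ingress-role -n ingress-nginx
kubectl get clusterrole nginx-ingress-clusterrole
kubectl get deployment nginx-ingress-controller -n ingress-nginx \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'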
This document explains how to get started with developing for the NGINX Ingress controller. It includes how to build, test, and release ingress controllers.
Prerequisites: Minikube must be installed; see the releases page for installation instructions.
+If you are using MacOS and deploying to minikube, the following command will build the local nginx controller container image and deploy the ingress controller onto a minikube cluster with RBAC enabled in the namespace ingress-nginx
:
$ make dev-env +
The nginx controller container image can be rebuilt using:
+$ ARCH=amd64 TAG=dev REGISTRY=$USER/ingress-controller make build container +
The image will only be used by pods created after the rebuild. To delete old pods which will cause new ones to spin up:
+$ kubectl get pods -n ingress-nginx +$ kubectl delete pod -n ingress-nginx nginx-ingress-controller-<unique-pod-id> +
The build uses dependencies in the vendor
directory, which
+must be installed before building a binary/image. Occasionally, you
+might need to update the dependencies.
This guide requires you to install the dep dependency tool.
+Check the version of dep
you are using and make sure it is up to date.
$ dep version +dep: + version : devel + build date : + git hash : + go version : go1.9 + go compiler : gc + platform : linux/amd64 +
If you have an older version of dep
, you can update it as follows:
$ go get -u github.com/golang/dep
+
This will automatically save the dependencies to the vendor/
directory.
$ cd $GOPATH/src/k8s.io/ingress-nginx +$ dep ensure +$ dep ensure -update +$ dep prune +
All ingress controllers are built through a Makefile. Depending on your requirements you can build a raw server binary, a local container image, or push an image to a remote repository.
+In order to use your local Docker, you may need to set the following environment variables:
+# "gcloud docker" (default) or "docker" +$ export DOCKER=<docker> + +# "quay.io/kubernetes-ingress-controller" (default), "index.docker.io", or your own registry +$ export REGISTRY=<your-docker-registry> +
To find the registry simply run: docker system info | grep Registry
Build a raw server binary
+$ make build
+
TODO: add more specific instructions needed for raw server binary.
+Build a local container image
+$ TAG=<tag> REGISTRY=$USER/ingress-controller make docker-build +
Push the container image to a remote repository
+$ TAG=<tag> REGISTRY=$USER/ingress-controller make docker-push +
There are several ways to deploy the ingress controller onto a cluster. Please check the deployment guide.
+To run unit-tests, just run
+$ cd $GOPATH/src/k8s.io/ingress-nginx +$ make test +
If you have access to a Kubernetes cluster, you can also run e2e tests using ginkgo.
+$ cd $GOPATH/src/k8s.io/ingress-nginx +$ make e2e-test +
To run unit-tests for lua code locally, run:
+$ cd $GOPATH/src/k8s.io/ingress-nginx +$ ./rootfs/etc/nginx/lua/test/up.sh +$ make lua-test +
Lua tests are located in $GOPATH/src/k8s.io/ingress-nginx/rootfs/etc/nginx/lua/test
. When creating a new test file it must follow the naming convention <mytest>_test.lua
or it will be ignored.
All Makefiles will produce a release binary, as shown above. To publish this
+to a wider Kubernetes user base, push the image to a container registry, like
+gcr.io. All release images are hosted under gcr.io/google_containers
and
+tagged according to a semver scheme.
An example release might look like:
+$ make release +
Please follow these guidelines to cut a release:
+controller-release-version
. Typically, pre-releases are cut from HEAD.
+All major feature work is done in HEAD. Specific bug fixes are
+cherry-picked into a release branch.Many of the examples in this directory have common prerequisites.
+Unless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA +key/cert pair with an arbitrarily chosen hostname, created as follows
+$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc" +Generating a 2048 bit RSA private key +................+++ +................+++ +writing new private key to 'tls.key' +----- + +$ kubectl create secret tls tls-secret --key tls.key --cert tls.crt +secret "tls-secret" created +
You can act as your very own CA, or use an existing one. As an exercise / learning, we're going to generate our +own CA, and also generate a client certificate.
+These instructions are based on CoreOS OpenSSL instructions
+First of all, you've to generate a CA. This is going to be the one who will sign your client certificates. +In real production world, you may face CAs with intermediate certificates, as the following:
+$ openssl s_client -connect www.google.com:443 +[...] +--- +Certificate chain + 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com + i:/C=US/O=Google Inc/CN=Google Internet Authority G2 + 1 s:/C=US/O=Google Inc/CN=Google Internet Authority G2 + i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA + 2 s:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA + i:/C=US/O=Equifax/OU=Equifax Secure Certificate Authority +
To generate our CA Certificate, we've to run the following commands:
+$ openssl genrsa -out ca.key 2048 +$ openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.crt -subj "/CN=example-ca" +
This will generate two files: A private key (ca.key) and a public key (ca.crt). This CA is valid for 10000 days. +The ca.crt can be used later in the step of creation of CA authentication secret.
+The following steps generate a client certificate signed by the CA generated above. This client can be +used to authenticate in a tls-auth configured ingress.
+First, we need to generate an 'openssl.cnf' file that will be used while signing the keys:
+[req] +req_extensions = v3_req +distinguished_name = req_distinguished_name +[req_distinguished_name] +[ v3_req ] +basicConstraints = CA:FALSE +keyUsage = nonRepudiation, digitalSignature, keyEncipherment +
Then, a user generates his very own private key (that he needs to keep secret) +and a CSR (Certificate Signing Request) that will be sent to the CA to sign and generate a certificate.
+$ openssl genrsa -out client1.key 2048 +$ openssl req -new -key client1.key -out client1.csr -subj "/CN=client1" -config openssl.cnf +
As the CA receives the generated 'client1.csr' file, it signs it and generates a client.crt certificate:
+$ openssl x509 -req -in client1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client1.crt -days 365 -extensions v3_req -extfile openssl.cnf +
Then, you'll have 3 files: the client.key (user's private key), client.crt (user's public key) and client.csr (disposable CSR).
+If you're using the CA Authentication feature, you need to generate a secret containing +all the authorized CAs. You must download them from your CA site in PEM format (like the following):
+-----BEGIN CERTIFICATE----- +[....] +-----END CERTIFICATE----- +
You can have as many certificates as you want. If they're in the binary DER format, +you can convert them as the following:
+$ openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem
+
Then you have to concatenate them all into a single file named 'ca.crt', as follows:
+$ cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt
+
The final step is to create a secret with the content of this file. This secret is going to be used in +the TLS Auth directive:
+$ kubectl create secret generic caingress --namespace=default --from-file=ca.crt=<ca.crt> +
Note: You can also generate the CA Authentication Secret along with the TLS Secret by using:
+$ kubectl create secret generic caingress --namespace=default --from-file=ca.crt=<ca.crt> --from-file=tls.crt=<tls.crt> --from-file=tls.key=<tls.key> +
All examples that require a test HTTP Service use the standard http-svc pod, +which you can deploy as follows
+$ kubectl create -f http-svc.yaml +service "http-svc" created +replicationcontroller "http-svc" created + +$ kubectl get po +NAME READY STATUS RESTARTS AGE +http-svc-p1t3t 1/1 Running 0 1d + +$ kubectl get svc +NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE +http-svc 10.0.122.116 <pending> 80:30301/TCP 1d +
You can test that the HTTP Service works by exposing it temporarily
+$ kubectl patch svc http-svc -p '{"spec":{"type": "LoadBalancer"}}' +"http-svc" patched + +$ kubectl get svc http-svc +NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE +http-svc 10.0.122.116 <pending> 80:30301/TCP 1d + +$ kubectl describe svc http-svc +Name: http-svc +Namespace: default +Labels: app=http-svc +Selector: app=http-svc +Type: LoadBalancer +IP: 10.0.122.116 +LoadBalancer Ingress: 108.59.87.136 +Port: http 80/TCP +NodePort: http 30301/TCP +Endpoints: 10.180.1.6:8080 +Session Affinity: None +Events: + FirstSeen LastSeen Count From SubObjectPath Type Reason Message + --------- -------- ----- ---- ------------- -------- ------ ------- + 1m 1m 1 {service-controller } Normal Type ClusterIP -> LoadBalancer + 1m 1m 1 {service-controller } Normal CreatingLoadBalancer Creating load balancer + 16s 16s 1 {service-controller } Normal CreatedLoadBalancer Created load balancer + +$ curl 108.59.87.126 +CLIENT VALUES: +client_address=10.240.0.3 +command=GET +real path=/ +query=nil +request_version=1.1 +request_uri=http://108.59.87.136:8080/ + +SERVER VALUES: +server_version=nginx: 1.9.11 - lua: 10001 + +HEADERS RECEIVED: +accept=*/* +host=108.59.87.136 +user-agent=curl/7.46.0 +BODY: +-no body in request- + +$ kubectl patch svc http-svc -p '{"spec":{"type": "NodePort"}}' +"http-svc" patched +
This directory contains a catalog of examples on how to run, configure and +scale Ingress. Please review the prerequisites before +trying them.
+Name | +Description | +Complexity Level | +
---|---|---|
Static-ip | +a single ingress gets a single static ip | +Intermediate | +
Name | +Description | +Complexity Level | +
---|---|---|
Session stickyness | +route requests consistently to the same endpoint | +Advanced | +
Name | +Description | +Complexity Level | +
---|---|---|
Basic auth | +password protect your website | +nginx | +
Client certificate authentication | +secure your website with client certificate authentication | +nginx | +
External auth plugin | +defer to an external auth service | +Intermediate | +
Name | +Description | +Complexity Level | +
---|---|---|
configuration-snippets | +customize nginx location configuration using annotations | +Advanced | +
custom-headers | +set custom headers before send traffic to backends | +Advanced | +
This example demonstrates how to achieve session affinity using cookies
+Session stickiness is achieved through 3 annotations on the Ingress, as shown in the example.
+Name | +Description | +Values | +
---|---|---|
nginx.ingress.kubernetes.io/affinity | +Sets the affinity type | +string (in NGINX only cookie is possible |
+
nginx.ingress.kubernetes.io/session-cookie-name | +Name of the cookie that will be used | +string (default to INGRESSCOOKIE) | +
nginx.ingress.kubernetes.io/session-cookie-hash | +Type of hash that will be used in cookie value | +sha1/md5/index | +
You can create the ingress to test this
+kubectl create -f ingress.yaml
+
You can confirm that the Ingress works.
+$ kubectl describe ing nginx-test +Name: nginx-test +Namespace: default +Address: +Default backend: default-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080) +Rules: + Host Path Backends + ---- ---- -------- + stickyingress.example.com + / nginx-service:80 (<none>) +Annotations: + affinity: cookie + session-cookie-hash: sha1 + session-cookie-name: INGRESSCOOKIE +Events: + FirstSeen LastSeen Count From SubObjectPath Type Reason Message + --------- -------- ----- ---- ------------- -------- ------ ------- + 7s 7s 1 {nginx-ingress-controller } Normal CREATE default/nginx-test + + +$ curl -I http://stickyingress.example.com +HTTP/1.1 200 OK +Server: nginx/1.11.9 +Date: Fri, 10 Feb 2017 14:11:12 GMT +Content-Type: text/html +Content-Length: 612 +Connection: keep-alive +Set-Cookie: INGRESSCOOKIE=a9907b79b248140b56bb13723f72b67697baac3d; Path=/; HttpOnly +Last-Modified: Tue, 24 Jan 2017 14:02:19 GMT +ETag: "58875e6b-264" +Accept-Ranges: bytes +
In the example above, you can see a line containing 'Set-Cookie: INGRESSCOOKIE', which sets the configured stickiness cookie. This cookie is created by NGINX and contains a hash of the upstream used for that request. If the user changes this cookie, NGINX creates a new one and redirects the user to another upstream.
If the backend pool grows, NGINX will keep sending requests to the same server that handled the first request, even if it is overloaded.
+When the backend server is removed, the requests are then re-routed to another upstream server and NGINX creates a new cookie, as the previous hash became invalid.
When you have more than one Ingress object pointing to the same Service, but only one of them contains the affinity configuration, the first Ingress created will be used. This means you can face a situation where you have configured session affinity on one Ingress and it is not reflected in the NGINX configuration, because another Ingress object pointing to the same Service does not configure it.
+ + + + + + + + + +This example shows how to add authentication in a Ingress rule using a secret that contains a file generated with htpasswd
.
$ htpasswd -c auth foo +New password: <bar> +New password: +Re-type new password: +Adding password for user foo +
$ kubectl create secret generic basic-auth --from-file=auth +secret "basic-auth" created +
$ kubectl get secret basic-auth -o yaml +apiVersion: v1 +data: + auth: Zm9vOiRhcHIxJE9GRzNYeWJwJGNrTDBGSERBa29YWUlsSDkuY3lzVDAK +kind: Secret +metadata: + name: basic-auth + namespace: default +type: Opaque +
echo " +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + name: ingress-with-auth + annotations: + # type of authentication + nginx.ingress.kubernetes.io/auth-type: basic + # name of the secret that contains the user/password definitions + nginx.ingress.kubernetes.io/auth-secret: basic-auth + # message to display with an appropriate context why the authentication is required + nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - foo" +spec: + rules: + - host: foo.bar.com + http: + paths: + - path: / + backend: + serviceName: http-svc + servicePort: 80 +" | kubectl create -f - +
$ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' +* Trying 10.2.29.4... +* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0) +> GET / HTTP/1.1 +> Host: foo.bar.com +> User-Agent: curl/7.43.0 +> Accept: */* +> +< HTTP/1.1 401 Unauthorized +< Server: nginx/1.10.0 +< Date: Wed, 11 May 2016 05:27:23 GMT +< Content-Type: text/html +< Content-Length: 195 +< Connection: keep-alive +< WWW-Authenticate: Basic realm="Authentication Required - foo" +< +<html> +<head><title>401 Authorization Required</title></head> +<body bgcolor="white"> +<center><h1>401 Authorization Required</h1></center> +<hr><center>nginx/1.10.0</center> +</body> +</html> +* Connection #0 to host 10.2.29.4 left intact +
$ curl -v http://10.2.29.4/ -H 'Host: foo.bar.com' -u 'foo:bar' +* Trying 10.2.29.4... +* Connected to 10.2.29.4 (10.2.29.4) port 80 (#0) +* Server auth using Basic with user 'foo' +> GET / HTTP/1.1 +> Host: foo.bar.com +> Authorization: Basic Zm9vOmJhcg== +> User-Agent: curl/7.43.0 +> Accept: */* +> +< HTTP/1.1 200 OK +< Server: nginx/1.10.0 +< Date: Wed, 11 May 2016 06:05:26 GMT +< Content-Type: text/plain +< Transfer-Encoding: chunked +< Connection: keep-alive +< Vary: Accept-Encoding +< +CLIENT VALUES: +client_address=10.2.29.4 +command=GET +real path=/ +query=nil +request_version=1.1 +request_uri=http://foo.bar.com:8080/ + +SERVER VALUES: +server_version=nginx: 1.9.11 - lua: 10001 + +HEADERS RECEIVED: +accept=*/* +authorization=Basic Zm9vOmJhcg== +connection=close +host=foo.bar.com +user-agent=curl/7.43.0 +x-forwarded-for=10.2.29.1 +x-forwarded-host=foo.bar.com +x-forwarded-port=80 +x-forwarded-proto=http +x-real-ip=10.2.29.1 +BODY: +* Connection #0 to host 10.2.29.4 left intact +-no body in request- +
It is possible to enable Client Certificate Authentication using additional annotations in the Ingress.
+Create a file named ca.crt
containing the trusted certificate authority chain (all ca certificates in PEM format) to verify client certificates.
Create a secret from this file:
+kubectl create secret generic auth-tls-chain --from-file=ca.crt --namespace=default
Add the annotations as provided in the ingress.yaml example to your ingress object.
+Use an external service (Basic Auth) located in https://httpbin.org
$ kubectl create -f ingress.yaml +ingress "external-auth" created + +$ kubectl get ing external-auth +NAME HOSTS ADDRESS PORTS AGE +external-auth external-auth-01.sample.com 172.17.4.99 80 13s + +$ kubectl get ing external-auth -o yaml +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + annotations: + nginx.ingress.kubernetes.io/auth-url: https://httpbin.org/basic-auth/user/passwd + creationTimestamp: 2016-10-03T13:50:35Z + generation: 1 + name: external-auth + namespace: default + resourceVersion: "2068378" + selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/external-auth + uid: 5c388f1d-8970-11e6-9004-080027d2dc94 +spec: + rules: + - host: external-auth-01.sample.com + http: + paths: + - backend: + serviceName: http-svc + servicePort: 80 + path: / +status: + loadBalancer: + ingress: + - ip: 172.17.4.99 +$ +
Test 1: no username/password (expect code 401)
+$ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' +* Rebuilt URL to: http://172.17.4.99/ +* Trying 172.17.4.99... +* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0) +> GET / HTTP/1.1 +> Host: external-auth-01.sample.com +> User-Agent: curl/7.50.1 +> Accept: */* +> +< HTTP/1.1 401 Unauthorized +< Server: nginx/1.11.3 +< Date: Mon, 03 Oct 2016 14:52:08 GMT +< Content-Type: text/html +< Content-Length: 195 +< Connection: keep-alive +< WWW-Authenticate: Basic realm="Fake Realm" +< +<html> +<head><title>401 Authorization Required</title></head> +<body bgcolor="white"> +<center><h1>401 Authorization Required</h1></center> +<hr><center>nginx/1.11.3</center> +</body> +</html> +* Connection #0 to host 172.17.4.99 left intact +
Test 2: valid username/password (expect code 200)
+$ curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:passwd' +* Rebuilt URL to: http://172.17.4.99/ +* Trying 172.17.4.99... +* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0) +* Server auth using Basic with user 'user' +> GET / HTTP/1.1 +> Host: external-auth-01.sample.com +> Authorization: Basic dXNlcjpwYXNzd2Q= +> User-Agent: curl/7.50.1 +> Accept: */* +> +< HTTP/1.1 200 OK +< Server: nginx/1.11.3 +< Date: Mon, 03 Oct 2016 14:52:50 GMT +< Content-Type: text/plain +< Transfer-Encoding: chunked +< Connection: keep-alive +< +CLIENT VALUES: +client_address=10.2.60.2 +command=GET +real path=/ +query=nil +request_version=1.1 +request_uri=http://external-auth-01.sample.com:8080/ + +SERVER VALUES: +server_version=nginx: 1.9.11 - lua: 10001 + +HEADERS RECEIVED: +accept=*/* +authorization=Basic dXNlcjpwYXNzd2Q= +connection=close +host=external-auth-01.sample.com +user-agent=curl/7.50.1 +x-forwarded-for=10.2.60.1 +x-forwarded-host=external-auth-01.sample.com +x-forwarded-port=80 +x-forwarded-proto=http +x-real-ip=10.2.60.1 +BODY: +* Connection #0 to host 172.17.4.99 left intact +-no body in request- +
Test 3: invalid username/password (expect code 401)
+curl -k http://172.17.4.99 -v -H 'Host: external-auth-01.sample.com' -u 'user:user' +* Rebuilt URL to: http://172.17.4.99/ +* Trying 172.17.4.99... +* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0) +* Server auth using Basic with user 'user' +> GET / HTTP/1.1 +> Host: external-auth-01.sample.com +> Authorization: Basic dXNlcjp1c2Vy +> User-Agent: curl/7.50.1 +> Accept: */* +> +< HTTP/1.1 401 Unauthorized +< Server: nginx/1.11.3 +< Date: Mon, 03 Oct 2016 14:53:04 GMT +< Content-Type: text/html +< Content-Length: 195 +< Connection: keep-alive +* Authentication problem. Ignoring this. +< WWW-Authenticate: Basic realm="Fake Realm" +< +<html> +<head><title>401 Authorization Required</title></head> +<body bgcolor="white"> +<center><h1>401 Authorization Required</h1></center> +<hr><center>nginx/1.11.3</center> +</body> +</html> +* Connection #0 to host 172.17.4.99 left intact +
The Ingress in this example adds a custom header to Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at this example.
+$ kubectl apply -f ingress.yaml
+
Check if the contents of the annotation are present in the nginx.conf file using:
+kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf
Using a ConfigMap is possible to customize the NGINX configuration
+For example, if we want to change the timeouts we need to create a ConfigMap:
+$ cat configmap.yaml +apiVersion: v1 +data: + proxy-connect-timeout: "10" + proxy-read-timeout: "120" + proxy-send-timeout: "120" +kind: ConfigMap +metadata: + name: nginx-load-balancer-conf +
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/custom-configuration/configmap.yaml \ + | kubectl apply -f - +
If the Configmap it is updated, NGINX will be reloaded with the new configuration.
+ + + + + + + + + +This example shows how is possible to use a custom backend to render custom error pages. The code of this example is located here custom-error-pages
+The idea is to use the headers X-Code
and X-Format
that NGINX pass to the backend in case of an error to find out the best existent representation of the response to be returned. i.e. if the request contains an Accept
header of type json
the error should be in that format and not in html
(the default in NGINX).
First create the custom backend to use in the Ingress controller
+$ kubectl create -f custom-default-backend.yaml +service "nginx-errors" created +replicationcontroller "nginx-errors" created +
$ kubectl get svc +NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE +echoheaders 10.3.0.7 nodes 80/TCP 23d +kubernetes 10.3.0.1 <none> 443/TCP 34d +nginx-errors 10.3.0.102 <none> 80/TCP 11s +
$ kubectl get rc +CONTROLLER REPLICAS AGE +echoheaders 1 19d +nginx-errors 1 19s +
Next create the Ingress controller executing
+$ kubectl create -f rc-custom-errors.yaml +
Now to check if this is working we use curl:
+$ curl -v http://172.17.4.99/ +* Trying 172.17.4.99... +* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0) +> GET / HTTP/1.1 +> Host: 172.17.4.99 +> User-Agent: curl/7.43.0 +> Accept: */* +> +< HTTP/1.1 404 Not Found +< Server: nginx/1.10.0 +< Date: Wed, 04 May 2016 02:53:45 GMT +< Content-Type: text/html +< Transfer-Encoding: chunked +< Connection: keep-alive +< Vary: Accept-Encoding +< +<span>The page you're looking for could not be found.</span> + +* Connection #0 to host 172.17.4.99 left intact +
Specifying json as expected format:
+$ curl -v http://172.17.4.99/ -H 'Accept: application/json' +* Trying 172.17.4.99... +* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0) +> GET / HTTP/1.1 +> Host: 172.17.4.99 +> User-Agent: curl/7.43.0 +> Accept: application/json +> +< HTTP/1.1 404 Not Found +< Server: nginx/1.10.0 +< Date: Wed, 04 May 2016 02:54:00 GMT +< Content-Type: text/html +< Transfer-Encoding: chunked +< Connection: keep-alive +< Vary: Accept-Encoding +< +{ "message": "The page you're looking for could not be found" } + +* Connection #0 to host 172.17.4.99 left intact +
This example aims to demonstrate the deployment of an nginx ingress controller and +use a ConfigMap to configure a custom list of headers to be passed to the upstream +server
+curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/custom-headers/configmap.yaml \ + | kubectl apply -f - + +curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/customization/custom-headers/custom-headers.yaml \ + | kubectl apply -f - +
Check that the contents of the ConfigMap are present in the nginx.conf file using:
+kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf
This example shows how is possible to create a custom configuration for a particular upstream associated with an Ingress rule.
+echo " +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + name: http-svc + annotations: + nginx.ingress.kubernetes.io/upstream-fail-timeout: "30" +spec: + rules: + - host: foo.bar.com + http: + paths: + - path: / + backend: + serviceName: http-svc + servicePort: 80 +" | kubectl create -f - +
Check the annotation is present in the Ingress rule:
+kubectl get ingress http-svc -o yaml +
Check the NGINX configuration is updated using kubectl or the status page:
+$ kubectl exec nginx-ingress-controller-v1ppm cat /etc/nginx/nginx.conf
+
.... + upstream default-http-svc-x-80 { + least_conn; + server 10.2.92.2:8080 max_fails=5 fail_timeout=30; + + } +.... +
This example aims to demonstrate the deployment of an nginx ingress controller and use a ConfigMap to enable nginx vts module to export metrics in prometheus format.
+Vts-metrics export NGINX metrics. To deploy all the files simply run kubectl apply -f nginx
. A deployment and service will be
+created which already has a prometheus.io/scrape: 'true'
annotation and if you added
+the recommended Prometheus service-endpoint scraping configuration,
+Prometheus will scrape it automatically and you start using the generated metrics right away.
apiVersion: v1 +data: + enable-vts-status: "true" +kind: ConfigMap +metadata: + name: nginx-configuration + namespace: ingress-nginx + labels: + app: ingress-nginx +
$ kubectl apply -f nginx-vts-metrics-conf.yaml
+
Check whether the ingress controller successfully generated the NGINX vts status:
+$ kubectl exec nginx-ingress-controller-873061567-4n3k2 -n ingress-nginx cat /etc/nginx/nginx.conf|grep vhost_traffic_status_display + vhost_traffic_status_display; + vhost_traffic_status_display_format html; +
The vts dashboard provides real time metrics.
+Because the vts port it's not yet exposed, you should forward the controller port to see it.
+$ kubectl port-forward $(kubectl get pods --selector=k8s-app=nginx-ingress-controller -n ingress-nginx --output=jsonpath={.items..metadata.name}) -n ingress-nginx 18080 +
Now open the url http://localhost:18080/nginx_status in your browser.
+NGINX Ingress controller already has a parser to convert vts metrics to Prometheus format. It exports prometheus metrics to the address :10254/metrics
.
$ kubectl exec -ti -n ingress-nginx $(kubectl get pods --selector=k8s-app=nginx-ingress-controller -n kube-system --output=jsonpath={.items..metadata.name}) curl localhost:10254/metrics +ingress_controller_ssl_expire_time_seconds{host="foo.bar.com"} -6.21355968e+10 +# HELP ingress_controller_success Cumulative number of Ingress controller reload operations +# TYPE ingress_controller_success counter +ingress_controller_success{count="reloads"} 3 +# HELP nginx_bytes_total Nginx bytes count +# TYPE nginx_bytes_total counter +nginx_bytes_total{direction="in",ingress_class="nginx",namespace="",server_zone="*"} 3708 +nginx_bytes_total{direction="in",ingress_class="nginx",namespace="",server_zone="_"} 3708 +nginx_bytes_total{direction="out",ingress_class="nginx",namespace="",server_zone="*"} 5256 +nginx_bytes_total{direction="out",ingress_class="nginx",namespace="",server_zone="_"} 5256 +
The default vts vhost key is $geoip_country_code country::*
that expose metrics grouped by server and country code. The example below show how to have metrics grouped by server and server path.
apiVersion: v1 + kind: ConfigMap + data: + enable-vts-status: "true" + vts-default-filter-key: "$server_name" +... +
apiVersion: extensions/v1beta1 + kind: Ingress + metadata: + annotations: + nginx.ingress.kubernetes.io/vts-filter-key: $uri $server_name + name: ingress +
This example demonstrates propagation of selected authentication service response headers +to backend service.
+Sample configuration includes:
+User
containing string internal
are considered authenticatedUserID
and UserRole
You can deploy the controller as +follows:
+$ kubectl create -f deploy/ +deployment "demo-auth-service" created +service "demo-auth-service" created +ingress "demo-auth-service" created +deployment "demo-echo-service" created +service "demo-echo-service" created +ingress "public-demo-echo-service" created +ingress "secure-demo-echo-service" created + +$ kubectl get po +NAME READY STATUS RESTARTS AGE +NAME READY STATUS RESTARTS AGE +demo-auth-service-2769076528-7g9mh 1/1 Running 0 30s +demo-echo-service-3636052215-3vw8c 1/1 Running 0 29s + +kubectl get ing +NAME HOSTS ADDRESS PORTS AGE +public-demo-echo-service public-demo-echo-service.kube.local 80 1m +secure-demo-echo-service secure-demo-echo-service.kube.local 80 1m +
Test 1: public service with no auth header
+$ curl -H 'Host: public-demo-echo-service.kube.local' -v 192.168.99.100 +* Rebuilt URL to: 192.168.99.100/ +* Trying 192.168.99.100... +* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0) +> GET / HTTP/1.1 +> Host: public-demo-echo-service.kube.local +> User-Agent: curl/7.43.0 +> Accept: */* +> +< HTTP/1.1 200 OK +< Server: nginx/1.11.10 +< Date: Mon, 13 Mar 2017 20:19:21 GMT +< Content-Type: text/plain; charset=utf-8 +< Content-Length: 20 +< Connection: keep-alive +< +* Connection #0 to host 192.168.99.100 left intact +UserID: , UserRole: +
Test 2: secure service with no auth header
+$ curl -H 'Host: secure-demo-echo-service.kube.local' -v 192.168.99.100 +* Rebuilt URL to: 192.168.99.100/ +* Trying 192.168.99.100... +* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0) +> GET / HTTP/1.1 +> Host: secure-demo-echo-service.kube.local +> User-Agent: curl/7.43.0 +> Accept: */* +> +< HTTP/1.1 403 Forbidden +< Server: nginx/1.11.10 +< Date: Mon, 13 Mar 2017 20:18:48 GMT +< Content-Type: text/html +< Content-Length: 170 +< Connection: keep-alive +< +<html> +<head><title>403 Forbidden</title></head> +<body bgcolor="white"> +<center><h1>403 Forbidden</h1></center> +<hr><center>nginx/1.11.10</center> +</body> +</html> +* Connection #0 to host 192.168.99.100 left intact +
Test 3: public service with valid auth header
+$ curl -H 'Host: public-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100 +* Rebuilt URL to: 192.168.99.100/ +* Trying 192.168.99.100... +* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0) +> GET / HTTP/1.1 +> Host: public-demo-echo-service.kube.local +> User-Agent: curl/7.43.0 +> Accept: */* +> User:internal +> +< HTTP/1.1 200 OK +< Server: nginx/1.11.10 +< Date: Mon, 13 Mar 2017 20:19:59 GMT +< Content-Type: text/plain; charset=utf-8 +< Content-Length: 44 +< Connection: keep-alive +< +* Connection #0 to host 192.168.99.100 left intact +UserID: 1443635317331776148, UserRole: admin +
Test 4: public service with valid auth header
+$ curl -H 'Host: secure-demo-echo-service.kube.local' -H 'User:internal' -v 192.168.99.100 +* Rebuilt URL to: 192.168.99.100/ +* Trying 192.168.99.100... +* Connected to 192.168.99.100 (192.168.99.100) port 80 (#0) +> GET / HTTP/1.1 +> Host: secure-demo-echo-service.kube.local +> User-Agent: curl/7.43.0 +> Accept: */* +> User:internal +> +< HTTP/1.1 200 OK +< Server: nginx/1.11.10 +< Date: Mon, 13 Mar 2017 20:17:23 GMT +< Content-Type: text/plain; charset=utf-8 +< Content-Length: 43 +< Connection: keep-alive +< +* Connection #0 to host 192.168.99.100 left intact +UserID: 605394647632969758, UserRole: admin +
This example aims to demonstrate the deployment of an nginx ingress controller and +use a ConfigMap to configure custom Diffie-Hellman parameters file to help with +"Perfect Forward Secrecy".
+$ cat configmap.yaml +apiVersion: v1 +data: + ssl-dh-param: "ingress-nginx/lb-dhparam" +kind: ConfigMap +metadata: + name: nginx-configuration + namespace: ingress-nginx + labels: + app: ingress-nginx +
$ kubectl create -f configmap.yaml
+
$> openssl dhparam 1024 2> /dev/null | base64 +LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ... +
$ cat ssl-dh-param.yaml +apiVersion: v1 +data: + dhparam.pem: "LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ..." +kind: ConfigMap +metadata: + name: nginx-configuration + namespace: ingress-nginx + labels: + app: ingress-nginx +
$ kubectl create -f ssl-dh-param.yaml
+
Check the contents of the configmap is present in the nginx.conf file using:
+kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf
This example demonstrates how to deploy a docker registry in the cluster and configure Ingress enable access from Internet
+First we deploy the docker registry in the cluster:
+kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/deployment.yaml
+
Important: DO NOT RUN THIS IN PRODUCTION.
+This deployment uses emptyDir
in the volumeMount
which means the contents of the registry will be deleted when the pod dies.
The next required step is creation of the ingress rules. To do this we have two options: with and without TLS
+Download and edit the yaml deployment replacing registry.<your domain>
with a valid DNS name pointing to the ingress controller:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-without-tls.yaml
+
Important: running a docker registry without TLS requires we configure our local docker daemon with the insecure registry flag. +Please check deploy a plain http registry
+Download and edit the yaml deployment replacing registry.<your domain>
with a valid DNS name pointing to the ingress controller:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-with-tls.yaml
+
Deploy kube lego use Let's Encrypt certificates or edit the ingress rule to use a secret with an existing SSL certificate.
+To test the registry is working correctly we download a known image from docker hub, create a tag pointing to the new registry and upload the image:
+docker pull ubuntu:16.04 +docker tag ubuntu:16.04 `registry.<your domain>/ubuntu:16.04` +docker push `registry.<your domain>/ubuntu:16.04` +
Please replace registry.<your domain>
with your domain.
The auth-url
and auth-signin
annotations allow you to use an external
+authentication provider to protect your Ingress resources.
(Note, this annotation requires nginx-ingress-controller v0.9.0
or greater.)
This functionality is enabled by deploying multiple Ingress objects for a single host. +One Ingress object has no special annotations and handles authentication.
+Other Ingress objects can then be annotated in such a way that require the user to
+authenticate against the first Ingress's endpoint, and can redirect 401
s to the
+same endpoint.
Sample:
+... +metadata: + name: application + annotations: + "nginx.ingress.kubernetes.io/auth-url": "https://$host/oauth2/auth" + "nginx.ingress.kubernetes.io/auth-signin": "https://$host/oauth2/sign_in" +... +
This example will show you how to deploy oauth2_proxy
+into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using github as oAuth2 provider
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.5.0.yaml
+
https://foo.bar.com
/oauth2
, like https://foo.bar.com/oauth2
Configure oauth2_proxy values in the file oauth2-proxy.yaml with the values:
+OAUTH2_PROXY_CLIENT_ID with the github <Client ID>
<Client Secret>
OAUTH2_PROXY_COOKIE_SECRET with value of python -c 'import os,base64; print base64.b64encode(os.urandom(16))'
Customize the contents of the file dashboard-ingress.yaml:
+Replace __INGRESS_HOST__
with a valid FQDN and __INGRESS_SECRET__
with a Secret with a valid SSL certificate.
$ kubectl create -f oauth2-proxy.yaml,dashboard-ingress.yaml
+
Test the oauth integration accessing the configured URL, like https://foo.bar.com
This example uses 2 different certificates to terminate SSL for 2 hostnames.
+This should generate a segment like:
+$ kubectl exec -it nginx-ingress-controller-6vwd1 -- cat /etc/nginx/nginx.conf | grep "foo.bar.com" -B 7 -A 35 + server { + listen 80; + listen 443 ssl http2; + ssl_certificate /etc/nginx-ssl/default-foobar.pem; + ssl_certificate_key /etc/nginx-ssl/default-foobar.pem; + + + server_name foo.bar.com; + + + if ($scheme = http) { + return 301 https://$host$request_uri; + } + + + + location / { + proxy_set_header Host $host; + + # Pass Real IP + proxy_set_header X-Real-IP $remote_addr; + + # Allow websocket connections + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection $connection_upgrade; + + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + proxy_set_header X-Forwarded-Host $host; + proxy_set_header X-Forwarded-Proto $pass_access_scheme; + + proxy_connect_timeout 5s; + proxy_send_timeout 60s; + proxy_read_timeout 60s; + + proxy_redirect off; + proxy_buffering off; + + proxy_http_version 1.1; + + proxy_pass http://default-http-svc-80; + } +
And you should be able to reach your nginx service or http-svc service using a hostname switch:
+$ kubectl get ing +NAME RULE BACKEND ADDRESS AGE +foo-tls - 104.154.30.67 13m + foo.bar.com + / http-svc:80 + bar.baz.com + / nginx:80 + +$ curl https://104.154.30.67 -H 'Host:foo.bar.com' -k +CLIENT VALUES: +client_address=10.245.0.6 +command=GET +real path=/ +query=nil +request_version=1.1 +request_uri=http://foo.bar.com:8080/ + +SERVER VALUES: +server_version=nginx: 1.9.11 - lua: 10001 + +HEADERS RECEIVED: +accept=*/* +connection=close +host=foo.bar.com +user-agent=curl/7.35.0 +x-forwarded-for=10.245.0.1 +x-forwarded-host=foo.bar.com +x-forwarded-proto=https + +$ curl https://104.154.30.67 -H 'Host:bar.baz.com' -k +<!DOCTYPE html> +<html> +<head> +<title>Welcome to nginx on Debian!</title> + +$ curl 104.154.30.67 +default backend - 404 +
This example demonstrates how to use the Rewrite annotations
+You will need to make sure your Ingress targets exactly one Ingress +controller by specifying the ingress.class annotation, +and that you have an ingress controller running in your cluster.
+Rewriting can be controlled using the following annotations:
+Name | +Description | +Values | +
---|---|---|
nginx.ingress.kubernetes.io/rewrite-target | +Target URI where the traffic must be redirected | +string | +
nginx.ingress.kubernetes.io/add-base-url | +indicates if is required to add a base tag in the head of the responses from the upstream servers | +bool | +
nginx.ingress.kubernetes.io/base-url-scheme | +Override for the scheme passed to the base tag | +string | +
nginx.ingress.kubernetes.io/ssl-redirect | +Indicates if the location section is accessible SSL only (defaults to True when Ingress contains a Certificate) | +bool | +
nginx.ingress.kubernetes.io/force-ssl-redirect | +Forces the redirection to HTTPS even if the Ingress is not TLS Enabled | +bool | +
nginx.ingress.kubernetes.io/app-root | +Defines the Application Root that the Controller must redirect if it's in '/' context | +string | +
Create an Ingress rule with a rewrite annotation:
+$ echo " +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + annotations: + nginx.ingress.kubernetes.io/rewrite-target: / + name: rewrite + namespace: default +spec: + rules: + - host: rewrite.bar.com + http: + paths: + - backend: + serviceName: http-svc + servicePort: 80 + path: /something +" | kubectl create -f - +
Check the rewrite is working
+$ curl -v http://172.17.4.99/something -H 'Host: rewrite.bar.com' +* Trying 172.17.4.99... +* Connected to 172.17.4.99 (172.17.4.99) port 80 (#0) +> GET /something HTTP/1.1 +> Host: rewrite.bar.com +> User-Agent: curl/7.43.0 +> Accept: */* +> +< HTTP/1.1 200 OK +< Server: nginx/1.11.0 +< Date: Tue, 31 May 2016 16:07:31 GMT +< Content-Type: text/plain +< Transfer-Encoding: chunked +< Connection: keep-alive +< +CLIENT VALUES: +client_address=10.2.56.9 +command=GET +real path=/ +query=nil +request_version=1.1 +request_uri=http://rewrite.bar.com:8080/ + +SERVER VALUES: +server_version=nginx: 1.9.11 - lua: 10001 + +HEADERS RECEIVED: +accept=*/* +connection=close +host=rewrite.bar.com +user-agent=curl/7.43.0 +x-forwarded-for=10.2.56.1 +x-forwarded-host=rewrite.bar.com +x-forwarded-port=80 +x-forwarded-proto=http +x-real-ip=10.2.56.1 +BODY: +* Connection #0 to host 172.17.4.99 left intact +-no body in request- +
Create an Ingress rule with a app-root annotation:
+$ echo " +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + annotations: + nginx.ingress.kubernetes.io/app-root: /app1 + name: approot + namespace: default +spec: + rules: + - host: approot.bar.com + http: + paths: + - backend: + serviceName: http-svc + servicePort: 80 + path: / +" | kubectl create -f - +
Check the rewrite is working
+$ curl -I -k http://approot.bar.com/ +HTTP/1.1 302 Moved Temporarily +Server: nginx/1.11.10 +Date: Mon, 13 Mar 2017 14:57:15 GMT +Content-Type: text/html +Content-Length: 162 +Location: http://stickyingress.example.com/app1 +Connection: keep-alive +
This example demonstrates how to assign a static-ip to an Ingress on through the Nginx controller.
+You need a TLS cert and a test HTTP service for this example. +You will also need to make sure your Ingress targets exactly one Ingress +controller by specifying the ingress.class annotation, +and that you have an ingress controller running in your cluster.
+Since instances of the nginx controller actually run on nodes in your cluster, +by default nginx Ingresses will only get static IPs if your cloudprovider +supports static IP assignments to nodes. On GKE/GCE for example, even though +nodes get static IPs, the IPs are not retained across upgrade.
+To acquire a static IP for the nginx ingress controller, simply put it
+behind a Service of Type=LoadBalancer
.
First, create a loadbalancer Service and wait for it to acquire an IP
+$ kubectl create -f static-ip-svc.yaml +service "nginx-ingress-lb" created + +$ kubectl get svc nginx-ingress-lb +NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE +nginx-ingress-lb 10.0.138.113 104.154.109.191 80:31457/TCP,443:32240/TCP 15m +
then, update the ingress controller so it adopts the static IP of the Service
+by passing the --publish-service
flag (the example yaml used in the next step
+already has it set to "nginx-ingress-lb").
$ kubectl create -f nginx-ingress-controller.yaml +deployment "nginx-ingress-controller" created +
From here on every Ingress created with the ingress.class
annotation set to
+nginx
will get the IP allocated in the previous step
$ kubectl create -f nginx-ingress.yaml +ingress "nginx-ingress" created + +$ kubectl get ing nginx-ingress +NAME HOSTS ADDRESS PORTS AGE +nginx-ingress * 104.154.109.191 80, 443 13m + +$ curl 104.154.109.191 -kL +CLIENT VALUES: +client_address=10.180.1.25 +command=GET +real path=/ +query=nil +request_version=1.1 +request_uri=http://104.154.109.191:8080/ +... +
You can test retention by deleting the Ingress
+$ kubectl delete ing nginx-ingress +ingress "nginx-ingress" deleted + +$ kubectl create -f nginx-ingress.yaml +ingress "nginx-ingress" created + +$ kubectl get ing nginx-ingress +NAME HOSTS ADDRESS PORTS AGE +nginx-ingress * 104.154.109.191 80, 443 13m +
Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all +Ingresses, because all requests are proxied through the same set of nginx +controllers.
+To promote the allocated IP to static, you can update the Service manifest
+$ kubectl patch svc nginx-ingress-lb -p '{"spec": {"loadBalancerIP": "104.154.109.191"}}' +"nginx-ingress-lb" patched +
and promote the IP to static (promotion works differently for cloudproviders, +provided example is for GKE/GCE) +`
+$ gcloud compute addresses create nginx-ingress-lb --addresses 104.154.109.191 --region us-central1 +Created [https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/nginx-ingress-lb]. +--- +address: 104.154.109.191 +creationTimestamp: '2017-01-31T16:34:50.089-08:00' +description: '' +id: '5208037144487826373' +kind: compute#address +name: nginx-ingress-lb +region: us-central1 +selfLink: https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/nginx-ingress-lb +status: IN_USE +users: +- us-central1/forwardingRules/a09f6913ae80e11e6a8c542010af0000 +
Now even if the Service is deleted, the IP will persist, so you can recreate the
+Service with spec.loadBalancerIP
set to 104.154.109.191
.
This example demonstrates how to terminate TLS through the nginx Ingress controller.
+You need a TLS cert and a test HTTP service for this example.
+The following command instructs the controller to terminate traffic using the provided +TLS cert, and forward un-encrypted HTTP traffic to the test HTTP service.
+kubectl apply -f ingress.yaml
+
You can confirm that the Ingress works.
+$ kubectl describe ing nginx-test +Name: nginx-test +Namespace: default +Address: 104.198.183.6 +Default backend: default-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080) +TLS: + tls-secret terminates +Rules: + Host Path Backends + ---- ---- -------- + * + http-svc:80 (<none>) +Annotations: +Events: + FirstSeen LastSeen Count From SubObjectPath Type Reason Message + --------- -------- ----- ---- ------------- -------- ------ ------- + 7s 7s 1 {nginx-ingress-controller } Normal CREATE default/nginx-test + 7s 7s 1 {nginx-ingress-controller } Normal UPDATE default/nginx-test + 7s 7s 1 {nginx-ingress-controller } Normal CREATE ip: 104.198.183.6 + 7s 7s 1 {nginx-ingress-controller } Warning MAPPING Ingress rule 'default/nginx-test' contains no path definition. Assuming / + +$ curl 104.198.183.6 -L +curl: (60) SSL certificate problem: self signed certificate +More details here: http://curl.haxx.se/docs/sslcerts.html + +$ curl 104.198.183.6 -Lk +CLIENT VALUES: +client_address=10.240.0.4 +command=GET +real path=/ +query=nil +request_version=1.1 +request_uri=http://35.186.221.137:8080/ + +SERVER VALUES: +server_version=nginx: 1.9.11 - lua: 10001 + +HEADERS RECEIVED: +accept=*/* +connection=Keep-Alive +host=35.186.221.137 +user-agent=curl/7.46.0 +via=1.1 google +x-cloud-trace-context=f708ea7e369d4514fc90d51d7e27e91d/13322322294276298106 +x-forwarded-for=104.132.0.80, 35.186.221.137 +x-forwarded-proto=https +BODY: +
This is the documentation for the NGINX Ingress Controller.
+It is built around the Kubernetes Ingress resource, using a ConfigMap to store the NGINX configuration.
+Learn more about using Ingress on k8s.io.
+See Deployment for a whirlwind tour that will get you started.
+ + + + + + + + + +This is a non-comprehensive list of existing ingress controllers.
+Using the flag --v=XX
it is possible to increase the level of logging.
+In particular:
--v=2
shows details using diff
about the changes in the configuration in nginxI0316 12:24:37.581267 1 utils.go:148] NGINX configuration diff a//etc/nginx/nginx.conf b//etc/nginx/nginx.conf +I0316 12:24:37.581356 1 utils.go:149] --- /tmp/922554809 2016-03-16 12:24:37.000000000 +0000 ++++ /tmp/079811012 2016-03-16 12:24:37.000000000 +0000 +@@ -235,7 +235,6 @@ + + upstream default-http-svcx { + least_conn; +- server 10.2.112.124:5000; + server 10.2.208.50:5000; + + } +I0316 12:24:37.610073 1 command.go:69] change in configuration detected. Reloading... +
--v=3
shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format
--v=5
configures NGINX in debug mode
A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file.
Both authentications must work:
++-------------+ service +------------+ +| | authentication | | ++ apiserver +<-------------------+ ingress | +| | | controller | ++-------------+ +------------+ +
Service authentication
+The Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in two different ways:
+Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details.
+Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig
flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the --kubeconfig
does not require the flag --apiserver-host
.
+The format of the file is identical to ~/.kube/config
which is used by kubectl to connect to the API server. See 'kubeconfig' section for details.
Using the flag --apiserver-host
: Using this flag --apiserver-host=http://localhost:8080
it is possible to specify an unsecured API server or reach a remote kubernetes cluster using kubectl proxy.
+Please do not use this approach in production.
In the diagram below you can see the full authentication flow with all options, starting with the browser +on the lower left hand side.
+Kubernetes Workstation ++---------------------------------------------------+ +------------------+ +| | | | +| +-----------+ apiserver +------------+ | | +------------+ | +| | | proxy | | | | | | | +| | apiserver | | ingress | | | | ingress | | +| | | | controller | | | | controller | | +| | | | | | | | | | +| | | | | | | | | | +| | | service account/ | | | | | | | +| | | kubeconfig | | | | | | | +| | +<-------------------+ | | | | | | +| | | | | | | | | | +| +------+----+ kubeconfig +------+-----+ | | +------+-----+ | +| |<--------------------------------------------------------| | +| | | | ++---------------------------------------------------+ +------------------+ +
If using a service account to connect to the API server, the ingress controller expects the file
+/var/run/secrets/kubernetes.io/serviceaccount/token
to be present. It provides a secret
+token that is required to authenticate with the API server.
Verify with the following commands:
+# start a container that contains curl +$ kubectl run test --image=tutum/curl -- sleep 10000 + +# check that container is running +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +test-701078429-s5kca 1/1 Running 0 16s + +# check if secret exists +$ kubectl exec test-701078429-s5kca ls /var/run/secrets/kubernetes.io/serviceaccount/ +ca.crt +namespace +token + +# get service IP of master +$ kubectl get services +NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE +kubernetes 10.0.0.1 <none> 443/TCP 1d + +# check base connectivity from cluster inside +$ kubectl exec test-701078429-s5kca -- curl -k https://10.0.0.1 +Unauthorized + +# connect using tokens +$ TOKEN_VALUE=$(kubectl exec test-701078429-s5kca -- cat /var/run/secrets/kubernetes.io/serviceaccount/token) +$ echo $TOKEN_VALUE +eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3Mi....9A +$ kubectl exec test-701078429-s5kca -- curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $TOKEN_VALUE" https://10.0.0.1 +{ + "paths": [ + "/api", + "/api/v1", + "/apis", + "/apis/apps", + "/apis/apps/v1alpha1", + "/apis/authentication.k8s.io", + "/apis/authentication.k8s.io/v1beta1", + "/apis/authorization.k8s.io", + "/apis/authorization.k8s.io/v1beta1", + "/apis/autoscaling", + "/apis/autoscaling/v1", + "/apis/batch", + "/apis/batch/v1", + "/apis/batch/v2alpha1", + "/apis/certificates.k8s.io", + "/apis/certificates.k8s.io/v1alpha1", + "/apis/extensions", + "/apis/extensions/v1beta1", + "/apis/policy", + "/apis/policy/v1alpha1", + "/apis/rbac.authorization.k8s.io", + "/apis/rbac.authorization.k8s.io/v1alpha1", + "/apis/storage.k8s.io", + "/apis/storage.k8s.io/v1beta1", + "/healthz", + "/healthz/ping", + "/logs", + "/metrics", + "/swaggerapi/", + "/ui/", + "/version" + ] +} +
If it is not working, there are two possible reasons:
+The contents of the tokens are invalid. Find the secret name with kubectl get secrets | grep service-account
and
+delete it with kubectl delete secret <name>
. It will automatically be recreated.
You have a non-standard Kubernetes installation and the file containing the token may not be present.
+The API server will mount a volume containing this file, but only if the API server is configured to use
+the ServiceAccount admission controller.
+If you experience this error, verify that your API server is using the ServiceAccount admission controller.
+If you are configuring the API server by hand, you can set this with the --admission-control
parameter.
+Please note that you should use other admission controllers as well. Before configuring this option, you should
+read about admission controllers.
More information:
+ +If you want to use a kubeconfig file for authentication, follow the deploy procedure and
+add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml
to the deployment
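For reference, a minimal sketch of the relevant part of the Deployment (the volume name and hostPath are assumptions; the mount path must match the flag value):
```yaml
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-controller
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --kubeconfig=/etc/kubernetes/kubeconfig.yaml
        volumeMounts:
        - name: kubeconfig              # assumed volume name
          mountPath: /etc/kubernetes/
          readOnly: true
      volumes:
      - name: kubeconfig
        hostPath:
          path: /etc/kubernetes/        # assumed location on the node
```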
The following command line arguments are accepted by the main controller executable.
+They are set in the container spec of the nginx-ingress-controller
Deployment object (see deploy/with-rbac.yaml
or deploy/without-rbac.yaml
).
Argument | +Description | +
---|---|
--alsologtostderr |
+log to standard error as well as files | +
--annotations-prefix string |
+Prefix of the ingress annotations. (default "nginx.ingress.kubernetes.io") | +
--apiserver-host string |
+The address of the Kubernetes Apiserver to connect to in the format of protocol://address:port, e.g., http://localhost:8080. If not specified, the assumption is that the binary runs inside a Kubernetes cluster and local discovery is attempted. | +
--configmap string |
+Name of the ConfigMap that contains the custom configuration to use | +
--default-backend-service string |
+Service used to serve a 404 page for the default backend. Takes the form namespace/name. The controller uses the first node port of this Service for the default backend. | +
--default-server-port int |
+Default port to use for exposing the default server (catch all) (default 8181) | +
--default-ssl-certificate string |
+Name of the secret that contains a SSL certificate to be used as default for a HTTPS catch-all server. Takes the form |
+
--election-id string |
+Election id to use for status update. (default "ingress-controller-leader") | +
--enable-dynamic-configuration |
+When enabled controller will try to avoid Nginx reloads as much as possible by using Lua. Disabled by default. | +
--enable-ssl-chain-completion |
+Defines if the nginx ingress controller should check the secrets for missing intermediate CA certificates. If the certificate chain has issues, it is not possible to enable OCSP. Default is true. (default true) |
--enable-ssl-passthrough |
+Enable SSL passthrough feature. Default is disabled | +
--force-namespace-isolation |
+Force namespace isolation. This flag is required to avoid the reference of secrets or configmaps located in a different namespace than the one specified in the flag --watch-namespace. |
--health-check-path string |
+Defines the URL to be used as health check inside in the default server in NGINX. (default "/healthz") | +
--healthz-port int |
+port for healthz endpoint. (default 10254) | +
--http-port int |
+Indicates the port to use for HTTP traffic (default 80) | +
--https-port int |
+Indicates the port to use for HTTPS traffic (default 443) | +
--ingress-class string |
+Name of the ingress class to route through this controller. | +
--kubeconfig string |
+Path to kubeconfig file with authorization and master location information. | +
--log_backtrace_at traceLocation |
+when logging hits line file:N, emit a stack trace (default :0) | +
--log_dir string |
+If non-empty, write log files in this directory | +
--logtostderr |
+log to standard error instead of files (default true) | +
--profiling |
+Enable profiling via web interface host:port/debug/pprof/ (default true) | +
--publish-service string |
+Service fronting the ingress controllers. Takes the form namespace/name. The controller will set the endpoint records on the ingress objects to reflect those on the service. | +
--publish-status-address string |
+User customized address to be set in the status of ingress resources. The controller will set the endpoint records on the ingress using this address. | +
--report-node-internal-ip-address |
+Defines if the nodes IP address to be returned in the ingress status should be the internal instead of the external IP address | +
--sort-backends |
+Defines if backends and its endpoints should be sorted | +
--ssl-passtrough-proxy-port int |
+Default port to use internally for SSL when SSL Passthrough is enabled (default 442) |
--status-port int |
+Indicates the TCP port to use for exposing the nginx status page (default 18080) | +
--stderrthreshold severity |
+logs at or above this threshold go to stderr (default 2) | +
--sync-period duration |
+Relist and confirm cloud resources this often. Default is 10 minutes (default 10m0s) | +
--sync-rate-limit float32 |
+Define the sync frequency upper limit (default 0.3) | +
--tcp-services-configmap string |
+Name of the ConfigMap that contains the definition of the TCP services to expose. The key in the map indicates the external port to be used. The value is the name of the service with the format namespace/serviceName and the port of the service can be a number or the name of the port. The ports 80 and 443 are not allowed as external ports. These ports are reserved for the backend |
--udp-services-configmap string |
+Name of the ConfigMap that contains the definition of the UDP services to expose. The key in the map indicates the external port to be used. The value is the name of the service with the format namespace/serviceName and the port of the service can be a number or the name of the port. |
--update-status |
+Indicates if the ingress controller should update the Ingress status IP/hostname. Default is true (default true) | +
--update-status-on-shutdown |
+Indicates if the ingress controller should update the Ingress status IP/hostname when the controller is being stopped. Default is true (default true) | +
-v , --v Level |
+log level for V logs | +
--version |
+Shows release information about the NGINX Ingress controller | +
--vmodule moduleSpec |
+comma-separated list of pattern=N settings for file-filtered logging | +
--watch-namespace string |
+Namespace to watch for Ingress. Default is to watch all namespaces | +
In case of an error in a request the body of the response is obtained from the default backend
.
+Each request to the default backend includes two headers:
X-Code
indicates the HTTP code to be returned to the client.X-Format
the value of the Accept
header.Important: the custom backend must return the correct HTTP status code to be returned. NGINX do not changes the response from the custom default backend.
+Using this two headers is possible to use a custom backend service like this one that inspect each request and returns a custom error page with the format expected by the client. Please check the example custom-errors
+NGINX sends additional headers that can be used to build custom response:
+Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap
and --udp-services-configmap
to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format:
+<namespace/service name>:<service port>:[PROXY]:[PROXY]
It is also possible to use a number or the name of the port. The two last fields are optional.
+Adding PROXY
in either or both of the two last fields enables Proxy Protocol decoding (listen) and/or encoding (proxy_pass) in a TCP service (https://www.nginx.com/resources/admin-guide/proxy-protocol/).
The next example shows how to expose the service example-go
running in the namespace default
in the port 8080
using the port 9000
apiVersion: v1 +kind: ConfigMap +metadata: + name: tcp-configmap-example +data: + 9000: "default/example-go:8080" +
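Building on the same example, a hypothetical entry that additionally enables Proxy Protocol decoding and encoding for that service could look like this (same illustrative ports and service as above):
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-configmap-example
data:
  "9000": "default/example-go:8080:PROXY:PROXY"
```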
Since 1.9.13 NGINX provides UDP Load Balancing.
+The next example shows how to expose the service kube-dns
running in the namespace kube-system
in the port 53
using the port 53
apiVersion: v1 +kind: ConfigMap +metadata: + name: udp-configmap-example +data: + 53: "kube-system/kube-dns:53"
Anytime we reference a tls secret, we mean an x509, PEM-encoded certificate and its key (RSA 2048, etc.). You can generate such a certificate with:
+openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj "/CN=${HOST}/O=${HOST}"
+and create the secret via kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE}
The default backend is a service which handles all url paths and hosts the nginx controller doesn't understand (i.e., all the requests that are not mapped with an Ingress). +Basically a default backend exposes two URLs:
+/healthz
that returns 200/
that returns 404The sub-directory /images/404-server
provides a service which satisfies the requirements for a default backend. The sub-directory /images/custom-error-pages
provides an additional service for the purpose of customizing the error pages served via the default backend.
By default NGINX uses the content of the header X-Forwarded-For
as the source of truth to get information about the client IP address. This works without issues in L7 if we configure the setting proxy-real-ip-cidr
with the correct information of the IP/network address of trusted external load balancer.
If the ingress controller is running in AWS we need to use the VPC IPv4 CIDR.
+Another option is to enable proxy protocol using use-proxy-protocol: "true"
.
In this mode NGINX does not use the content of the header to get the source IP address of the connection.
+If you are using an L4 proxy to forward the traffic to the NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you could use the Proxy Protocol for forwarding traffic; this will send the connection details before forwarding the actual TCP connection itself.
+Amongst others ELBs in AWS and HAProxy support Proxy Protocol.
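A sketch of the corresponding ConfigMap keys (the CIDR below is only an illustrative value for a trusted external load balancer):
```yaml
data:
  # Trust the external load balancer's address range when reading X-Forwarded-For (L7)
  proxy-real-ip-cidr: "10.0.0.0/16"
  # Alternatively, take the client address from the Proxy Protocol instead of the header
  use-proxy-protocol: "true"
```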
+Support for websockets is provided by NGINX out of the box. No special configuration required.
+The only requirement to avoid the close of connections is the increase of the values of proxy-read-timeout
and proxy-send-timeout
.
The default value of these settings is 60 seconds
.
A more adequate value to support websockets is a value higher than one hour (3600
).
Important: If the NGINX ingress controller is exposed with a service type=LoadBalancer
make sure the protocol between the loadbalancer and NGINX is TCP.
NGINX provides the configuration option ssl_buffer_size to allow the optimization of the TLS record size.
+This improves the TLS Time To First Byte (TTTFB).
+The default value in the Ingress controller is 4k
(NGINX default is 16k
).
Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error.
+The previous behavior can be restored using retry-non-idempotent=true
in the configuration ConfigMap.
The NGINX ingress controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.
+ + + + + + + + + +If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress, you need to specify the annotation kubernetes.io/ingress.class: "nginx"
in all ingresses that you would like this controller to claim. This mechanism also provides users the ability to run multiple NGINX ingress controllers (e.g. one which serves public traffic, one which serves "internal" traffic). When utilizing this functionality the option --ingress-class
should be changed to a value unique for the cluster within the definition of the replication controller. Here is a partial example:
spec: + template: + spec: + containers: + - name: nginx-ingress-internal-controller + args: + - /nginx-ingress-controller + - '--default-backend-service=ingress/nginx-ingress-default-backend' + - '--election-id=ingress-controller-leader-internal' + - '--ingress-class=nginx-internal' + - '--configmap=ingress/nginx-ingress-internal-controller' +
If you have multiple Ingress controllers in a single cluster, you can pick one by specifying the ingress.class
+annotation, eg creating an Ingress with an annotation like
metadata: + name: foo + annotations: + kubernetes.io/ingress.class: "gce" +
will target the GCE controller, forcing the nginx controller to ignore it, while an annotation like
+metadata: + name: foo + annotations: + kubernetes.io/ingress.class: "nginx" +
will target the nginx controller, forcing the GCE controller to ignore it.
+Note: Deploying multiple ingress controllers and not specifying the annotation will result in both controllers fighting to satisfy the Ingress.
+Setting the annotation kubernetes.io/ingress.class
to any other value which does not match a valid ingress class will force the NGINX Ingress controller to ignore your Ingress. If you are only running a single NGINX ingress controller, this can be achieved by setting this to any value except "nginx" or an empty string.
Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.
+ + + + + + + + + +You can add these Kubernetes annotations to specific Ingress objects to customize their behavior.
+Tip
+Annotation keys and values can only be strings.
+Other types, such as boolean or numeric values must be quoted,
+i.e. "true"
, "false"
, "100"
.
In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404.
+Set the annotation nginx.ingress.kubernetes.io/rewrite-target
to the path expected by the service.
If the application contains relative links it is possible to add an additional annotation nginx.ingress.kubernetes.io/add-base-url
that will prepend a base
tag in the header of the returned HTML from the backend.
If the scheme of base
tag need to be specific, set the annotation nginx.ingress.kubernetes.io/base-url-scheme
to the scheme such as http
and https
.
If the Application Root is exposed in a different path and needs to be redirected, set the annotation nginx.ingress.kubernetes.io/app-root
to redirect requests for /
.
Please check the rewrite example.
+The annotation nginx.ingress.kubernetes.io/affinity
enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server.
+The only affinity type available for NGINX is cookie
.
Please check the affinity example.
+Is possible to add authentication adding additional annotations in the Ingress rule. The source of the authentication is a secret that contains usernames and passwords inside the key auth
.
The annotations are:
+nginx.ingress.kubernetes.io/auth-type: [basic|digest] +
Indicates the HTTP Authentication Type: Basic or Digest Access Authentication.
+nginx.ingress.kubernetes.io/auth-secret: secretName +
The name of the Secret that contains the usernames and passwords which are granted access to the path
s defined in the Ingress rules.
+This annotation also accepts the alternative form "namespace/secretName", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.
nginx.ingress.kubernetes.io/auth-realm: "realm string" +
Please check the auth example.
+NGINX exposes some flags in the upstream configuration that enable the configuration of each server in the upstream. The Ingress controller allows custom max_fails
and fail_timeout
parameters in a global context using upstream-max-fails
and upstream-fail-timeout
in the NGINX ConfigMap or in a particular Ingress rule. upstream-max-fails
defaults to 0. This means NGINX will respect the container's readinessProbe
if it is defined. If there is no probe and no values for upstream-max-fails
NGINX will continue to send traffic to the container.
With the default configuration NGINX will not health check your backends. Whenever the endpoints controller notices a readiness probe failure, that pod's IP will be removed from the list of endpoints. This will trigger the NGINX controller to also remove it from the upstreams.
+To use custom values in an Ingress rule define these annotations:
+nginx.ingress.kubernetes.io/upstream-max-fails
: number of unsuccessful attempts to communicate with the server that should occur in the duration set by the upstream-fail-timeout
parameter to consider the server unavailable.
nginx.ingress.kubernetes.io/upstream-fail-timeout
: time in seconds during which the specified number of unsuccessful attempts to communicate with the server should occur to consider the server unavailable. This is also the period of time the server will be considered unavailable.
In NGINX, backend server pools are called "upstreams". Each upstream contains the endpoints for a service. An upstream is created for each service that has Ingress rules defined.
+Important: All Ingress rules using the same service will use the same upstream. Only one of the Ingress rules should define annotations to configure the upstream servers.
+Please check the custom upstream check example.
+NGINX supports load balancing by client-server mapping based on consistent hashing for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The ketama consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes.
+To enable consistent hashing for a backend:
+nginx.ingress.kubernetes.io/upstream-hash-by
: the nginx variable, text value or any combination thereof to use for consistent hashing. For example nginx.ingress.kubernetes.io/upstream-hash-by: "$request_uri"
to consistently hash upstream requests by the current request URI.
This is similar to https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/configmap.md#load-balance but configures load balancing algorithm per ingress.
+Note that nginx.ingress.kubernetes.io/upstream-hash-by
takes preference over this. If this and nginx.ingress.kubernetes.io/upstream-hash-by
are not set then we fallback to using globally configured load balancing algorithm.
This configuration setting allows you to control the value for host in the following statement: proxy_set_header Host $host
, which forms part of the location block. This is useful if you need to call the upstream server by something other than $host
.
It is possible to enable Client Certificate Authentication using additional annotations in Ingress Rule.
+The annotations are:
+nginx.ingress.kubernetes.io/auth-tls-secret: secretName +
The name of the Secret that contains the full Certificate Authority chain ca.crt
that is enabled to authenticate against this Ingress.
+This annotation also accepts the alternative form "namespace/secretName", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.
nginx.ingress.kubernetes.io/auth-tls-verify-depth +
The validation depth between the provided client certificate and the Certification Authority chain.
+nginx.ingress.kubernetes.io/auth-tls-verify-client +
Enables verification of client certificates.
+nginx.ingress.kubernetes.io/auth-tls-error-page +
The URL/Page that user should be redirected in case of a Certificate Authentication Error
+nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream +
Indicates if the received certificates should be passed or not to the upstream server. +By default this is disabled.
+Please check the client-certs example.
+Important:
+TLS with Client Authentication is NOT possible in Cloudflare as is not allowed it and might result in unexpected behavior.
+Cloudflare only allows Authenticated Origin Pulls and is required to use their own certificate: +https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/
+Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial: +https://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls
+Using this annotation you can add additional configuration to the NGINX location. For example:
+nginx.ingress.kubernetes.io/configuration-snippet: | + more_set_headers "Request-Id: $req_id"; +
The ingress controller requires a default backend. This service handles the response when the service in the Ingress rule does not have endpoints.
+This is a global configuration for the ingress controller. In some cases could be required to return a custom content or format. In this scenario we can use the annotation nginx.ingress.kubernetes.io/default-backend: <svc name>
to specify a custom default backend.
To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule add the annotation nginx.ingress.kubernetes.io/enable-cors: "true"
. This will add a section in the server location enabling this functionality.
CORS can be controlled with the following annotations:
+nginx.ingress.kubernetes.io/cors-allow-methods
controls which methods are accepted. This is a multi-valued field, separated by ',' and accepts only letters (upper and lower case).Example: nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS"
nginx.ingress.kubernetes.io/cors-allow-headers
controls which headers are accepted. This is a multi-valued field, separated by ',' and accepts letters, numbers, _ and -.Example: nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For, X-app123-XPTO"
nginx.ingress.kubernetes.io/cors-allow-origin
controls what's the accepted Origin for CORS and defaults to '*'. This is a single field value, with the following format: http(s)://origin-site.com or http(s)://origin-site.com:portExample: nginx.ingress.kubernetes.io/cors-allow-origin: "https://origin-site.com:4443"
nginx.ingress.kubernetes.io/cors-allow-credentials
controls if credentials can be passed during CORS operations.Example: nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/cors-max-age
controls how long preflight requests can be cached.Example: nginx.ingress.kubernetes.io/cors-max-age: 600
For more information please check https://enable-cors.org/server_nginx.html
+To add Server Aliases to an Ingress rule add the annotation nginx.ingress.kubernetes.io/server-alias: "<alias>"
.
+This will create a server with the same configuration, but a different server_name as the provided host.
Note: A server-alias name cannot conflict with the hostname of an existing server. If it does the server-alias +annotation will be ignored. If a server-alias is created and later a new server with the same hostname is created +the new server configuration will take place over the alias configuration.
+For more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#server_name
+Using the annotation nginx.ingress.kubernetes.io/server-snippet
it is possible to add custom configuration in the server configuration block.
apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + annotations: + nginx.ingress.kubernetes.io/server-snippet: | +set $agentflag 0; + +if ($http_user_agent ~* "(Mobile)" ){ + set $agentflag 1; +} + +if ( $agentflag = 1 ) { + return 301 https://m.example.com; +} +
Important: This annotation can be used only once per host
+Sets buffer size for reading client request body per location. In case the request body is larger than the buffer, +the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. +This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is +applied to each location provided in the ingress rule.
+Note: The annotation value must be given in a valid format otherwise the +For example to set the client-body-buffer-size the following can be done:
+nginx.ingress.kubernetes.io/client-body-buffer-size: "1000"
# 1000 bytesnginx.ingress.kubernetes.io/client-body-buffer-size: 1k
# 1 kilobytenginx.ingress.kubernetes.io/client-body-buffer-size: 1K
# 1 kilobytenginx.ingress.kubernetes.io/client-body-buffer-size: 1m
# 1 megabytenginx.ingress.kubernetes.io/client-body-buffer-size: 1M
# 1 megabyteFor more information please see http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size
+To use an existing service that provides authentication the Ingress rule can be annotated with nginx.ingress.kubernetes.io/auth-url
to indicate the URL where the HTTP request should be sent.
nginx.ingress.kubernetes.io/auth-url: "URL to the authentication service" +
Additionally it is possible to set:
+nginx.ingress.kubernetes.io/auth-method
: <Method>
to specify the HTTP method to use.
nginx.ingress.kubernetes.io/auth-signin
: <SignIn_URL>
to specify the location of the error page.
nginx.ingress.kubernetes.io/auth-response-headers
: <Response_Header_1, ..., Response_Header_n>
to specify headers to pass to backend once authorization request completes.
nginx.ingress.kubernetes.io/auth-request-redirect
: <Request_Redirect_URL>
to specify the X-Auth-Request-Redirect header value.
Please check the external-auth example.
+The annotations nginx.ingress.kubernetes.io/limit-connections
, nginx.ingress.kubernetes.io/limit-rps
, and nginx.ingress.kubernetes.io/limit-rpm
define a limit on the connections that can be opened by a single client IP address. This can be used to mitigate DDoS Attacks.
nginx.ingress.kubernetes.io/limit-connections
: number of concurrent connections allowed from a single IP address.
nginx.ingress.kubernetes.io/limit-rps
: number of connections that may be accepted from a given IP each second.
nginx.ingress.kubernetes.io/limit-rpm
: number of connections that may be accepted from a given IP each minute.
You can specify the client IP source ranges to be excluded from rate-limiting through the nginx.ingress.kubernetes.io/limit-whitelist
annotation. The value is a comma separated list of CIDRs.
If you specify multiple annotations in a single Ingress rule, limit-rpm
, and then limit-rps
takes precedence.
The annotation nginx.ingress.kubernetes.io/limit-rate
, nginx.ingress.kubernetes.io/limit-rate-after
define a limit the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per a request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.
nginx.ingress.kubernetes.io/limit-rate-after
: sets the initial amount after which the further transmission of a response to a client will be rate limited.
nginx.ingress.kubernetes.io/limit-rate
: rate of request that accepted from a client each second.
To configure this setting globally for all Ingress rules, the limit-rate-after
and limit-rate
value may be set in the NGINX ConfigMap. if you set the value in ingress annotation will cover global setting.
This annotation allows to return a permanent redirect instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com
would redirect everything to Google.
The annotation nginx.ingress.kubernetes.io/ssl-passthrough
allows to configure TLS termination in the pod and not in NGINX.
Important:
+nginx.ingress.kubernetes.io/ssl-passthrough
invalidates all the other available annotations. This is because SSL Passthrough works in L4 (TCP).--enable-ssl-passthrough
is required (by default it is disabled).By default NGINX uses http
to reach the services. Adding the annotation nginx.ingress.kubernetes.io/secure-backends: "true"
in the Ingress rule changes the protocol to https
.
+If you want to validate the upstream against a specific certificate, you can create a secret with it and reference the secret with the annotation nginx.ingress.kubernetes.io/secure-verify-ca-secret
.
Please note that if an invalid or non-existent secret is given, the NGINX ingress controller will ignore the secure-backends
annotation.
By default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration. This annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port. This can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. See issue #257.
+If the service-upstream
annotation is specified the following things should be taken into consideration:
proxy_next_upstream
directive will not have any effect meaning on error the request will not be dispatched to another upstream.By default the controller redirects (301) to HTTPS
if TLS is enabled for that ingress. If you want to disable that behavior globally, you can use ssl-redirect: "false"
in the NGINX config map.
To configure this feature for specific ingress resources, you can use the nginx.ingress.kubernetes.io/ssl-redirect: "false"
annotation in the particular resource.
When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS
even when there is not TLS cert available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
annotation in the particular resource.
In some scenarios is required to redirect from www.domain.com
to domain.com
or viceversa.
+To enable this feature use the annotation nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
Important:
+If at some point a new Ingress is created with a host equal to one of the options (like domain.com
) the annotation will be omitted.
You can specify the allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range
annotation. The value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1
.
To configure this setting globally for all Ingress rules, the whitelist-source-range
value may be set in the NGINX ConfigMap.
Note: Adding an annotation to an Ingress rule overrides any global restriction.
+If you use the cookie
type you can also specify the name of the cookie that will be used to route the requests with the annotation nginx.ingress.kubernetes.io/session-cookie-name
. The default is to create a cookie named 'INGRESSCOOKIE'.
In case of NGINX the annotation nginx.ingress.kubernetes.io/session-cookie-hash
defines which algorithm will be used to 'hash' the used upstream. Default value is md5
and possible values are md5
, sha1
and index
.
+The index
option is not hashed, an in-memory index is used instead, it's quicker and the overhead is shorter Warning: the matching against upstream servers list is inconsistent. So, at reload, if upstreams servers has changed, index values are not guaranteed to correspond to the same server as before! USE IT WITH CAUTION and only if you need to!
In NGINX this feature is implemented by the third party module nginx-sticky-module-ng. The workflow used to define which upstream server will be used is explained here
+Using the configuration configmap it is possible to set the default global timeout for connections to the upstream servers. +In some scenarios is required to have different values. To allow this we provide annotations that allows this customization:
+nginx.ingress.kubernetes.io/proxy-connect-timeout
nginx.ingress.kubernetes.io/proxy-send-timeout
nginx.ingress.kubernetes.io/proxy-read-timeout
nginx.ingress.kubernetes.io/proxy-next-upstream
nginx.ingress.kubernetes.io/proxy-next-upstream-tries
nginx.ingress.kubernetes.io/proxy-request-buffering
With the annotations nginx.ingress.kubernetes.io/proxy-redirect-from
and nginx.ingress.kubernetes.io/proxy-redirect-to
it is possible to set the text that should be changed in the Location
and Refresh
header fields of a proxied server response (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect)
+Setting "off" or "default" in the annotation nginx.ingress.kubernetes.io/proxy-redirect-from
disables nginx.ingress.kubernetes.io/proxy-redirect-to
+Both annotations will be used in any other case
+By default the value is "off".
For NGINX, 413 error will be returned to the client when the size in a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter client_max_body_size
.
To configure this setting globally for all Ingress rules, the proxy-body-size
value may be set in the NGINX ConfigMap.
+To use custom values in an Ingress rule define these annotation:
nginx.ingress.kubernetes.io/proxy-body-size: 8m +
Enable or disable proxy buffering proxy_buffering
.
+By default proxy buffering is disabled in the nginx config.
To configure this setting globally for all Ingress rules, the proxy-buffering
value may be set in the NGINX ConfigMap.
+To use custom values in an Ingress rule define these annotation:
nginx.ingress.kubernetes.io/proxy-buffering: "on" +
Specifies the enabled ciphers.
+Using this annotation will set the ssl_ciphers
directive at the server level. This configuration is active for all the paths in the host.
nginx.ingress.kubernetes.io/ssl-ciphers: "ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP" +
Using this annotation will override the default connection header set by nginx. To use custom values in an Ingress rule, define the annotation:
+nginx.ingress.kubernetes.io/connection-proxy-header: "keep-alive" +
In some scenarios could be required to disable NGINX access logs. To enable this feature use the annotation:
+nginx.ingress.kubernetes.io/enable-access-log: "false" +
Using lua-resty-waf-*
annotations we can enable and control lua-resty-waf per location.
+Following configuration will enable WAF for the paths defined in the corresponding ingress:
nginx.ingress.kubernetes.io/lua-resty-waf: "active" +
In order to run it in debugging mode you can set nginx.ingress.kubernetes.io/lua-resty-waf-debug
to "true"
in addition to the above configuration.
+The other possible values for nginx.ingress.kubernetes.io/lua-resty-waf
are inactive
and simulate
. In inactive
mode WAF won't do anything, whereas
+in simulate
mode it will log a warning message if there's a matching WAF rule for given request. This is useful to debug a rule and eliminate possible false positives before fully deploying it.
lua-resty-waf
comes with a predefined set of rules (https://github.com/p0pr0ck5/lua-resty-waf/tree/84b4f40362500dd0cb98b9e71b5875cb1a40f1ad/rules) that covers the ModSecurity CRS.
+You can use nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets
to ignore subset of those rulesets. For an example:
nginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets: "41000_sqli, 42000_xss" +
will ignore the two mentioned rulesets.
+It is also possible to configure custom WAF rules per ingress using nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules
annotation. For an example the following snippet will
+configure a WAF rule to deny requests with query string value that contains word foo
:
nginx.ingress.kubernetes.io/lua-resty-waf-extra-rules: '[=[ { "access": [ { "actions": { "disrupt" : "DENY" }, "id": 10001, "msg": "my custom rule", "operator": "STR_CONTAINS", "pattern": "foo", "vars": [ { "parse": [ "values", 1 ], "type": "REQUEST_ARGS" } ] } ], "body_filter": [], "header_filter":[] } ]=]' +
For details on how to write WAF rules, please refer to https://github.com/p0pr0ck5/lua-resty-waf.
+ + + + + + + + + +ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.
+The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system +components for the nginx-controller. Before you can begin using a config-map it must be deployed.
+In order to overwrite nginx-controller configuration values as seen in config.go, +you can add key-value pairs to the data section of the config-map. For Example:
+data: + map-hash-bucket-size: "128" + ssl-protocols: SSLv2 +
IMPORTANT:
+The key and values in a ConfigMap can only be strings. +This means that we want a value with boolean values we need to quote the values, like "true" or "false". +Same for numbers, like "100".
+"Slice" types (defined below as []string
or []int
can be provided as a comma-delimited string.
The following table shows a configuration option's name, type, and the default value:
+name | +type | +default | +
---|---|---|
add-headers | +string | +"" | +
allow-backend-server-header | +bool | +"false" | +
hide-headers | +string array | +empty | +
access-log-path | +string | +"/var/log/nginx/access.log" | +
error-log-path | +string | +"/var/log/nginx/error.log" | +
enable-dynamic-tls-records | +bool | +"true" | +
enable-modsecurity | +bool | +"false" | +
enable-owasp-modsecurity-crs | +bool | +"false" | +
client-header-buffer-size | +string | +"1k" | +
client-header-timeout | +int | +60 | +
client-body-buffer-size | +string | +"8k" | +
client-body-timeout | +int | +60 | +
disable-access-log | +bool | +false | +
disable-ipv6 | +bool | +false | +
disable-ipv6-dns | +bool | +false | +
enable-underscores-in-headers | +bool | +false | +
ignore-invalid-headers | +bool | +true | +
enable-vts-status | +bool | +false | +
vts-status-zone-size | +string | +"10m" | +
vts-sum-key | +string | +"*" | +
vts-default-filter-key | +string | +"$geoip_country_code country::*" | +
retry-non-idempotent | +bool | +"false" | +
error-log-level | +string | +"notice" | +
http2-max-field-size | +string | +"4k" | +
http2-max-header-size | +string | +"16k" | +
hsts | +bool | +"true" | +
hsts-include-subdomains | +bool | +"true" | +
hsts-max-age | +string | +"15724800" | +
hsts-preload | +bool | +"false" | +
keep-alive | +int | +75 | +
keep-alive-requests | +int | +100 | +
large-client-header-buffers | +string | +"4 8k" | +
log-format-escape-json | +bool | +"false" | +
log-format-upstream | +string | +%v - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status |
+
log-format-stream | +string | +[$time_local] $protocol $status $bytes_sent $bytes_received $session_time |
+
max-worker-connections | +int | +16384 | +
map-hash-bucket-size | +int | +64 | +
nginx-status-ipv4-whitelist | +[]string | +"127.0.0.1" | +
nginx-status-ipv6-whitelist | +[]string | +"::1" | +
proxy-real-ip-cidr | +[]string | +"0.0.0.0/0" | +
proxy-set-headers | +string | +"" | +
server-name-hash-max-size | +int | +1024 | +
server-name-hash-bucket-size | +int | +<size of the processor’s cache line> |
+
proxy-headers-hash-max-size | +int | +512 | +
proxy-headers-hash-bucket-size | +int | +64 | +
server-tokens | +bool | +"true" | +
ssl-ciphers | +string | +"ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256" | +
ssl-ecdh-curve | +string | +"auto" | +
ssl-dh-param | +string | +"" | +
ssl-protocols | +string | +"TLSv1.2" | +
ssl-session-cache | +bool | +"true" | +
ssl-session-cache-size | +string | +"10m" | +
ssl-session-tickets | +bool | +"true" | +
ssl-session-ticket-key | +string | +<Randomly Generated> |
+
ssl-session-timeout | +string | +"10m" | +
ssl-buffer-size | +string | +"4k" | +
use-proxy-protocol | +bool | +"false" | +
use-gzip | +bool | +"true" | +
use-geoip | +bool | +"true" | +
enable-brotli | +bool | +"true" | +
brotli-level | +int | +4 | +
brotli-types | +string | +"application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component" | +
use-http2 | +bool | +"true" | +
gzip-types | +string | +"application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component" | +
worker-processes | +string | +<Number of CPUs> |
+
worker-cpu-affinity | +string | +"" | +
worker-shutdown-timeout | +string | +"10s" | +
load-balance | +string | +"least_conn" | +
variables-hash-bucket-size | +int | +128 | +
variables-hash-max-size | +int | +2048 | +
upstream-keepalive-connections | +int | +32 | +
limit-conn-zone-variable | +string | +"$binary_remote_addr" | +
proxy-stream-timeout | +string | +"600s" | +
proxy-stream-responses | +int | +1 | +
bind-address-ipv4 | +[]string | +"" | +
bind-address-ipv6 | +[]string | +"" | +
forwarded-for-header | +string | +"X-Forwarded-For" | +
compute-full-forwarded-for | +bool | +"false" | +
proxy-add-original-uri-header | +bool | +"true" | +
enable-opentracing | +bool | +"false" | +
zipkin-collector-host | +string | +"" | +
zipkin-collector-port | +int | +9411 | +
zipkin-service-name | +string | +"nginx" | +
jaeger-collector-host | +string | +"" | +
jaeger-collector-port | +int | +6831 | +
jaeger-service-name | +string | +"nginx" | +
jaeger-sampler-type | +string | +"const" | +
jaeger-sampler-param | +string | +"1" | +
http-snippet | +string | +"" | +
server-snippet | +string | +"" | +
location-snippet | +string | +"" | +
custom-http-errors | +[]int | +[]int{} |
proxy-body-size | +string | +"1m" | +
proxy-connect-timeout | +int | +5 | +
proxy-read-timeout | +int | +60 | +
proxy-send-timeout | +int | +60 | +
proxy-buffer-size | +string | +"4k" | +
proxy-cookie-path | +string | +"off" | +
proxy-cookie-domain | +string | +"off" | +
proxy-next-upstream | +string | +"error timeout invalid_header http_502 http_503 http_504" | +
proxy-next-upstream-tries | +int | +0 | +
proxy-redirect-from | +string | +"off" | +
proxy-request-buffering | +string | +"on" | +
ssl-redirect | +bool | +"true" | +
whitelist-source-range | +[]string | +[]string{} | +
skip-access-log-urls | +[]string | +[]string{} | +
limit-rate | +int | +0 | +
limit-rate-after | +int | +0 | +
http-redirect-code | +int | +308 | +
proxy-buffering | +string | +"off" | +
limit-req-status-code | +int | +503 | +
no-tls-redirect-locations | +string | +"/.well-known/acme-challenge" | +
no-auth-locations | +string | +"/.well-known/acme-challenge" | +
Sets custom headers from named configmap before sending traffic to the client. See proxy-set-headers. example
+Enables the return of the header Server from the backend instead of the generic nginx string. By default this is disabled.
+Sets additional header that will not be passed from the upstream server to the client response. +Default: empty
+References: +- http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header
+Access log path. Goes to /var/log/nginx/access.log
by default.
Note: the file /var/log/nginx/access.log
is a symlink to /dev/stdout
Error log path. Goes to /var/log/nginx/error.log
by default.
Note: the file /var/log/nginx/error.log
is a symlink to /dev/stderr
References: +- http://nginx.org/en/docs/ngx_core_module.html#error_log
+Enables dynamically sized TLS records to improve time-to-first-byte. By default this is enabled. See CloudFlare's blog for more information.
+Enables the modsecurity module for NGINX. By default this is disabled.
+Enables the OWASP ModSecurity Core Rule Set (CRS). By default this is disabled.
+Allows to configure a custom buffer size for reading client request header.
+References: +- http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size
+Defines a timeout for reading client request header, in seconds.
+References: +- http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout
+Sets buffer size for reading client request body.
+References: +- http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size
+Defines a timeout for reading client request body, in seconds.
+References: +- http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout
+Disables the Access Log from the entire Ingress Controller. This is '"false"' by default.
+References: +- http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log
+Disable listening on IPV6. By default this is disabled.
+Disable IPV6 for nginx DNS resolver. By default this is disabled.
+Enables underscores in header names. By default this is disabled.
+Set if header fields with invalid names should be ignored. +By default this is enabled.
+Allows the replacement of the default status page with a third party module named nginx-module-vts. +By default this is disabled.
+Vts config on http level sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processes. Default value is 10m
+References: +- https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
+Vts config on http level enables the keys by user defined variable. The key is a key string to calculate traffic. The name is a group string to calculate traffic. The key and name can contain variables such as $host, $server_name. The name's group belongs to filterZones if specified. The key's group belongs to serverZones if not specified second argument name. Default value is $geoip_country_code country::*
+References: +- https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_filter_by_set_key
+For metrics keyed (or when using Prometheus, labeled) by server zone, this value is used to indicate metrics for all server zones combined. Default value is *
+References: +- https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_display_sum_key
+Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value "true".
+Configures the logging level of errors. Log levels above are listed in the order of increasing severity.
+References: +- http://nginx.org/en/docs/ngx_core_module.html#error_log
+Limits the maximum size of an HPACK-compressed request header field.
+References: +- https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size
+Limits the maximum size of the entire request header list after HPACK decompression.
+References: +- https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size
Enables or disables the header HSTS in servers running SSL. +HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be communicated with using HTTPS, instead of using HTTP. It provides protection against protocol downgrade attacks and cookie theft.
+References: +- https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security +- https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server
+Enables or disables the use of HSTS in all the subdomains of the server-name.
+Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.
+Enables or disables the preload attribute in the HSTS feature (when it is enabled) dd
+Sets the time during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.
+References: +- http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout
+Sets the maximum number of requests that can be served through one keep-alive connection.
+References: +- http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests
+Sets the maximum number and size of buffers used for reading large client request header. Default: 4 8k.
+References: +- http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers
+Sets if the escape parameter allows JSON ("true") or default characters escaping in variables ("false") Sets the nginx log format.
+Sets the nginx log format. +Example for json output:
+consolelog-format-upstream: '{ "time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr","x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$req_id", "remote_user":"$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status":$status, "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri","request_query": "$args", "request_length": $request_length, "duration": $request_time,"method": "$request_method", "http_referrer": "$http_referer", "http_user_agent":"$http_user_agent" }'
Please check log-format for definition of each field.
+Sets the nginx stream format.
+Sets the maximum number of simultaneous connections that can be opened by each worker process
+Sets the bucket size for the map variables hash tables. The details of setting up hash tables are provided in a separate document.
+If use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default the IP/network address of your external load balancer.
+Sets custom headers from named configmap before sending traffic to backends. The value format is namespace/name. See example
+Sets the maximum size of the server names hash tables used in server names,map directive’s values, MIME types, names of request header strings, etc.
+References: +- http://nginx.org/en/docs/hash.html
+Sets the size of the bucket for the server names hash tables.
+References: +- http://nginx.org/en/docs/hash.html +- http://nginx.org/en/docs/http/ngx_http_core_module.html#server_names_hash_bucket_size
+Sets the maximum size of the proxy headers hash tables.
+References: +- http://nginx.org/en/docs/hash.html +- https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_max_size
+Sets the size of the bucket for the proxy headers hash tables.
+References: +- http://nginx.org/en/docs/hash.html +- https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_headers_hash_bucket_size
+Send NGINX Server header in responses and display NGINX version in error pages. By default this is enabled.
+Sets the ciphers list to enable. The ciphers are specified in the format understood by the OpenSSL library.
+The default cipher list is:
+ ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
.
The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.
+Please check the Mozilla SSL Configuration Generator.
+Specifies a curve for ECDHE ciphers.
+References: +- http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve
+Sets the name of the secret that contains Diffie-Hellman key to help with "Perfect Forward Secrecy".
+References: +- https://wiki.openssl.org/index.php/Manual:Dhparam(1) +- https://wiki.mozilla.org/Security/Server_Side_TLS#DHE_handshake_and_dhparam +- http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_dhparam
+Sets the SSL protocols to use. The default is TLSv1.2.
Please check the result of the configuration using https://ssllabs.com/ssltest/analyze.html or https://testssl.sh.
Enables or disables the use of shared SSL cache among worker processes.
+Sets the size of the SSL shared session cache between all worker processes.
+Enables or disables session resumption through TLS session tickets.
+Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string.
+By default a randomly generated key is used. To create a ticket key: openssl rand 80 | base64 -w0
Sets the time during which a client may reuse the session parameters stored in a cache.
+Sets the size of the SSL buffer used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).
+References: +- https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/
+Enables or disables the PROXY protocol to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).
+Enables or disables compression of HTTP responses using the "gzip" module.
+The default mime type list to compress is: application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component
.
Enables or disables "geoip" module that creates variables with values depending on the client IP address, using the precompiled MaxMind databases. +The default value is true.
+Enables or disables compression of HTTP responses using the "brotli" module.
+The default mime type list to compress is: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component
. This is disabled by default.
Note: Brotli does not work in Safari < 11. See https://caniuse.com/#feat=brotli
+Sets the Brotli Compression Level that will be used. Defaults to 4.
+Sets the MIME Types that will be compressed on-the-fly by brotli.
+Defaults to application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component
.
Enables or disables HTTP/2 support in secure connections.
+Sets the MIME types in addition to "text/html" to compress. The special value "*" matches any MIME type. Responses with the "text/html" type are always compressed if use-gzip is enabled.
Sets the number of worker processes. +The default of "auto" means number of available CPU cores.
+Binds worker processes to the sets of CPUs (worker_cpu_affinity). By default worker processes are not bound to any specific CPUs. The value can be an empty string (no binding) or a list of CPU masks, for example 0001 0010 0100 1000, to bind processes to specific CPUs.
Sets a timeout for NGINX to wait for worker processes to gracefully shut down. The default is "10s".
+Sets the algorithm to use for load balancing. The value can be any of the load-balancing methods supported by the controller; some methods are only available when the enable-dynamic-configuration flag is set. The default is least_conn.
+References: +- http://nginx.org/en/docs/http/load_balancing.html.
+Sets the bucket size for the variables hash table.
+References: +- http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size
+Sets the maximum size of the variables hash table.
+References: +- http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size
+Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this +number is exceeded, the least recently used connections are closed. Default: 32
+References: +- http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
+Sets parameters for a shared memory zone that will keep states for various keys of limit_conn_zone. The default key is "$binary_remote_addr", whose size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses.
+Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.
+References: +- http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout
+Sets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used.
+References: +- http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses
+Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.
+Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.
+Sets the header field for identifying the originating IP address of a client. Default is X-Forwarded-For
+Append the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.
+Adds an X-Original-Uri header with the original request URI to the backend request
+Enables the nginx Opentracing extension. By default this is disabled.
+References: +- https://github.com/opentracing-contrib/nginx-opentracing
+Specifies the host to use when uploading traces. It must be a valid URL.
+Specifies the port to use when uploading traces. Default: 9411
+Specifies the service name to use for any traces created. Default: nginx
+Specifies the host to use when uploading traces. It must be a valid URL.
+Specifies the port to use when uploading traces. Default: 6831
+Specifies the service name to use for any traces created. Default: nginx
+Specifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. Default const.
+Specifies the argument to be passed to the sampler constructor. Must be a number. +For const this should be 0 to never sample and 1 to always sample. Default: 1
+Adds custom configuration to the http section of the nginx configuration. +Default: ""
+Adds custom configuration to all the servers in the nginx configuration. +Default: ""
+Adds custom configuration to all the locations in the nginx configuration. +Default: ""
+Sets which HTTP codes should be passed for processing with the error_page directive.
+Setting at least one code also enables proxy_intercept_errors, which is required to process error_page.
+Example usage: custom-http-errors: 404,415
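+In the controller ConfigMap the same example is expressed as a string value (only the data section is shown):
+data:
+  custom-http-errors: "404,415"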
Sets the maximum allowed size of the client request body. +See NGINX client_max_body_size.
+Sets the timeout for establishing a connection with a proxied server. It should be noted that this timeout cannot usually exceed 75 seconds.
+Sets the timeout in seconds for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response.
+Sets the timeout in seconds for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request.
+Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header.
+Sets a text that should be changed in the path attribute of the “Set-Cookie” header fields of a proxied server response.
+Sets a text that should be changed in the domain attribute of the “Set-Cookie” header fields of a proxied server response.
+Specifies in which cases a request should be passed to the next server.
+Limits the number of possible tries for passing a request to the next server.
+Sets the original text that should be changed in the "Location" and "Refresh" header fields of a proxied server response. Default: off.
+References: +- http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect
+Enables or disables buffering of a client request body.
+Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule). +Default is "true".
+Sets the default whitelisted IPs for each server block. This can be overwritten by an annotation on an Ingress rule.
+See ngx_http_access_module.
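+A minimal sketch in the controller ConfigMap, assuming the usual whitelist-source-range key (the key name is an assumption here) and a placeholder network:
+data:
+  whitelist-source-range: "10.0.0.0/24"   # placeholder: replace with the networks allowed by default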
Sets a list of URLs that should not appear in the NGINX access log. This is useful for URLs like /health or health-check that would otherwise clutter the logs. By default this list is empty.
Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per a request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.
+References: +- http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate
+Sets the initial amount after which the further transmission of a response to a client will be rate limited.
+References: +- http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after
+Sets the HTTP status code to be used in redirects.
+Supported codes are 301, 302, 307 and 308. The default code is 308.
+Why is the default code 308?
+RFC 7238 was created to define the 308 (Permanent Redirect) status code, which is similar to 301 (Moved Permanently) but preserves the request method and body in the redirect. This is important when we send a redirect for methods like POST.
+Enables or disables buffering of responses from the proxied server.
+Sets the status code to return in response to rejected requests. Default: 503
+A comma-separated list of locations on which http requests will never get redirected to their https counterpart. +Default: "/.well-known/acme-challenge"
+A comma-separated list of locations that should not get authenticated. +Default: "/.well-known/acme-challenge"
+The NGINX template is located in the file /etc/nginx/template/nginx.tmpl.
+Using a Volume it is possible to use a custom template. This includes using a ConfigMap as the source of the template:
+volumeMounts:
+  - mountPath: /etc/nginx/template
+    name: nginx-template-volume
+    readOnly: true
+volumes:
+  - name: nginx-template-volume
+    configMap:
+      name: nginx-template
+      items:
+        - key: nginx.tmpl
+          path: nginx.tmpl
Please note the template is tied to the Go code. Do not change names in the variable $cfg
.
For more information about the template syntax please check the Go template package. +In addition to the built-in functions provided by the Go package the following functions are also available:
+TODO:
+There are three ways to customize NGINX:
+1. ConfigMap: use a ConfigMap to set global configuration in NGINX.
+2. Annotations: use annotations when a specific configuration is needed for a particular Ingress rule.
+3. Custom template: use a custom template when more specific settings are required, for example adjusting listen options such as rcvbuf, or when it is not possible to change the configuration through the ConfigMap.
+The default configuration uses a custom logging format to add additional information about upstreams, response time and status:
+log_format upstreaminfo '{{ if $cfg.useProxyProtocol }}$proxy_protocol_addr{{ else }}$remote_addr{{ end }} - '
+    '[$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" '
+    '$request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status';
Sources:
+ +Description:
+$proxy_protocol_addr
: if PROXY protocol is enabled$remote_addr
: if PROXY protocol is disabled (default)$the_real_ip
: the source IP address of the client$remote_user
: user name supplied with the Basic authentication$time_local
: local time in the Common Log Format$request
: full original request line$status
: response status$body_bytes_sent
: number of bytes sent to a client, not counting the response header$http_referer
: value of the Referer header$http_user_agent
: value of User-Agent header$request_length
: request length (including request line, header, and request body)$request_time
: time elapsed since the first bytes were read from the client$proxy_upstream_name
: name of the upstream. The format is upstream-<namespace>-<service name>-<service port>
$upstream_addr
: keeps the IP address and port, or the path to the UNIX-domain socket of the upstream server. If several servers were contacted during request processing, their addresses are separated by commas$upstream_response_length
: keeps the length of the response obtained from the upstream server$upstream_response_time
: keeps time spent on receiving the response from the upstream server; the time is kept in seconds with millisecond resolution$upstream_status
: keeps status code of the response obtained from the upstream serverThe ngx_http_stub_status_module module provides access to basic status information.
+This is the default module active in the url /nginx_status
in the status port (default is 18080).
This controller provides an alternative to this module using the nginx-module-vts module.
+To use this module just set in the configuration configmap enable-vts-status: "true"
.
To extract the information in JSON format the module provides a custom URL: /nginx_status/format/json
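+A minimal sketch of the corresponding ConfigMap entry (only the data section is shown):
+data:
+  enable-vts-status: "true"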
ModSecurity is an open source, cross platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - https://www.modsecurity.org
+The ModSecurity-nginx connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).
+The default ModSecurity configuration file is located in /etc/nginx/modsecurity/modsecurity.conf
. This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration.
+To enable the ModSecurity feature we need to specify enable-modsecurity: "true"
in the configuration configmap.
NOTE: the default configuration uses detection only, because that minimises the chances of post-installation disruption.
+The file /var/log/modsec_audit.log
contains the log of ModSecurity.
The OWASP ModSecurity Core Rule Set (CRS) is a set of generic attack detection rules for use with ModSecurity or compatible web application firewalls. The CRS aims to protect web applications from a wide range of attacks, including the OWASP Top Ten, with a minimum of false alerts.
+The directory /etc/nginx/owasp-modsecurity-crs
contains the https://github.com/SpiderLabs/owasp-modsecurity-crs repository.
+Using enable-owasp-modsecurity-crs: "true"
we enable the use of the rules.
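+As a sketch, both ModSecurity and the CRS rules could be enabled together in the controller ConfigMap, assuming the same ConfigMap name and namespace used in the OpenTracing example below:
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: nginx-configuration
+  namespace: ingress-nginx
+data:
+  enable-modsecurity: "true"
+  enable-owasp-modsecurity-crs: "true"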
Using the third party module opentracing-contrib/nginx-opentracing the NGINX ingress controller can configure NGINX to enable OpenTracing instrumentation. +By default this feature is disabled.
+To enable the instrumentation we just need to enable it in the configuration ConfigMap and set the host to which the traces should be sent.
+The rnburn/zipkin-date-server GitHub repository contains an example of a dockerized date service. To install the example and the zipkin collector run:
+kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/zipkin.yaml
+kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/deployment.yaml
Also we need to configure the NGINX controller configmap with the required values:
+$ echo '
+apiVersion: v1
+kind: ConfigMap
+data:
+  enable-opentracing: "true"
+  zipkin-collector-host: zipkin.default.svc.cluster.local
+metadata:
+  name: nginx-configuration
+  namespace: ingress-nginx
+  labels:
+    app: ingress-nginx
+' | kubectl replace -f -
Using curl we can generate some traces:
+$ curl -v http://$(minikube ip)
+$ curl -v http://$(minikube ip)
In the zipkin interface we can see the details:
+NGINX provides the option to configure a server as a catch-all with server_name for requests that do not match any of the configured server names. This configuration works without issues for HTTP traffic.
+In case of HTTPS, NGINX requires a certificate.
+For this reason the Ingress controller provides the flag --default-ssl-certificate
. The secret behind this flag contains the default certificate to be used in the mentioned scenario. If this flag is not provided NGINX will use a self signed certificate.
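+As a sketch, the flag is passed as an additional container argument in the controller Deployment; the secret default/foo-tls matches the example further below, and your existing controller arguments should be kept:
+args:
+  # ... existing controller arguments ...
+  - --default-ssl-certificate=default/foo-tls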
Running without the flag --default-ssl-certificate
:
$ curl -v https://10.2.78.7:443 -k +* Rebuilt URL to: https://10.2.78.7:443/ +* Trying 10.2.78.4... +* Connected to 10.2.78.7 (10.2.78.7) port 443 (#0) +* ALPN, offering http/1.1 +* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH +* successfully set certificate verify locations: +* CAfile: /etc/ssl/certs/ca-certificates.crt + CApath: /etc/ssl/certs +* TLSv1.2 (OUT), TLS header, Certificate Status (22): +* TLSv1.2 (OUT), TLS handshake, Client hello (1): +* TLSv1.2 (IN), TLS handshake, Server hello (2): +* TLSv1.2 (IN), TLS handshake, Certificate (11): +* TLSv1.2 (IN), TLS handshake, Server key exchange (12): +* TLSv1.2 (IN), TLS handshake, Server finished (14): +* TLSv1.2 (OUT), TLS handshake, Client key exchange (16): +* TLSv1.2 (OUT), TLS change cipher, Client hello (1): +* TLSv1.2 (OUT), TLS handshake, Finished (20): +* TLSv1.2 (IN), TLS change cipher, Client hello (1): +* TLSv1.2 (IN), TLS handshake, Finished (20): +* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256 +* ALPN, server accepted to use http/1.1 +* Server certificate: +* subject: CN=foo.bar.com +* start date: Apr 13 00:50:56 2016 GMT +* expire date: Apr 13 00:50:56 2017 GMT +* issuer: CN=foo.bar.com +* SSL certificate verify result: self signed certificate (18), continuing anyway. +> GET / HTTP/1.1 +> Host: 10.2.78.7 +> User-Agent: curl/7.47.1 +> Accept: */* +> +< HTTP/1.1 404 Not Found +< Server: nginx/1.11.1 +< Date: Thu, 21 Jul 2016 15:38:46 GMT +< Content-Type: text/html +< Transfer-Encoding: chunked +< Connection: keep-alive +< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload +< +<span>The page you're looking for could not be found.</span> + +* Connection #0 to host 10.2.78.7 left intact +
Specifying --default-ssl-certificate=default/foo-tls
:
core@localhost ~ $ curl -v https://10.2.78.7:443 -k +* Rebuilt URL to: https://10.2.78.7:443/ +* Trying 10.2.78.7... +* Connected to 10.2.78.7 (10.2.78.7) port 443 (#0) +* ALPN, offering http/1.1 +* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH +* successfully set certificate verify locations: +* CAfile: /etc/ssl/certs/ca-certificates.crt + CApath: /etc/ssl/certs +* TLSv1.2 (OUT), TLS header, Certificate Status (22): +* TLSv1.2 (OUT), TLS handshake, Client hello (1): +* TLSv1.2 (IN), TLS handshake, Server hello (2): +* TLSv1.2 (IN), TLS handshake, Certificate (11): +* TLSv1.2 (IN), TLS handshake, Server key exchange (12): +* TLSv1.2 (IN), TLS handshake, Server finished (14): +* TLSv1.2 (OUT), TLS handshake, Client key exchange (16): +* TLSv1.2 (OUT), TLS change cipher, Client hello (1): +* TLSv1.2 (OUT), TLS handshake, Finished (20): +* TLSv1.2 (IN), TLS change cipher, Client hello (1): +* TLSv1.2 (IN), TLS handshake, Finished (20): +* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256 +* ALPN, server accepted to use http/1.1 +* Server certificate: +* subject: CN=foo.bar.com +* start date: Apr 13 00:50:56 2016 GMT +* expire date: Apr 13 00:50:56 2017 GMT +* issuer: CN=foo.bar.com +* SSL certificate verify result: self signed certificate (18), continuing anyway. +> GET / HTTP/1.1 +> Host: 10.2.78.7 +> User-Agent: curl/7.47.1 +> Accept: */* +> +< HTTP/1.1 404 Not Found +< Server: nginx/1.11.1 +< Date: Mon, 18 Jul 2016 21:02:59 GMT +< Content-Type: text/html +< Transfer-Encoding: chunked +< Connection: keep-alive +< Strict-Transport-Security: max-age=15724800; includeSubDomains; preload +< +<span>The page you're looking for could not be found.</span> + +* Connection #0 to host 10.2.78.7 left intact +
The flag --enable-ssl-passthrough
enables the SSL passthrough feature.
+By default this feature is disabled
HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.
+By default the controller redirects (301) to HTTPS if there is a TLS Ingress rule.
+To disable this behavior use hsts: "false"
in the configuration ConfigMap.
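+A minimal sketch, assuming the ConfigMap name and namespace used elsewhere in this document:
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: nginx-configuration
+  namespace: ingress-nginx
+data:
+  hsts: "false"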
By default the controller redirects (301) to HTTPS
if TLS is enabled for that ingress. If you want to disable that behavior globally, you can use ssl-redirect: "false"
in the NGINX config map.
To configure this feature for specific ingress resources, you can use the nginx.ingress.kubernetes.io/ssl-redirect: "false"
annotation in the particular resource.
When using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS
even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
annotation in the particular resource.
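+Both annotations are set the same way on the Ingress resource; a minimal sketch (the Ingress name is only an example):
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: example-ingress
+  annotations:
+    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"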
Kube-Lego automatically requests missing or expired certificates from Let's Encrypt by monitoring ingress resources and their referenced secrets. To enable this for an ingress resource you have to add an annotation:
+kubectl annotate ing ingress-demo kubernetes.io/tls-acme="true"
+
To set up Kube-Lego you can take a look at this full example. The first version to fully support Kube-Lego is nginx Ingress controller 0.8.
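+A minimal sketch of an Ingress prepared for Kube-Lego, using the annotation above (host, secret and service names are placeholders):
+apiVersion: extensions/v1beta1
+kind: Ingress
+metadata:
+  name: ingress-demo
+  annotations:
+    kubernetes.io/tls-acme: "true"
+spec:
+  tls:
+    - hosts:
+        - demo.example.com          # placeholder host
+      secretName: ingress-demo-tls  # placeholder secret managed by Kube-Lego
+  rules:
+    - host: demo.example.com
+      http:
+        paths:
+          - path: /
+            backend:
+              serviceName: demo-service   # placeholder service
+              servicePort: 80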
+To provide the most secure baseline configuration possible, nginx-ingress defaults to using TLS 1.2 and a secure set of TLS ciphers.
+The default configuration, though secure, does not support some older browsers and operating systems. For instance, 20% of Android phones in use today are not compatible with nginx-ingress's default configuration. To change this default behavior, use a ConfigMap.
+A sample ConfigMap to allow these older clients to connect could look something like the following:
+kind: ConfigMap
+apiVersion: v1
+metadata:
+  name: nginx-config
+data:
+  ssl-ciphers: "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA"
+  ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2"