diff --git a/deploy/baremetal/index.html b/deploy/baremetal/index.html index 53a8a3ba9..13cb98e02 100644 --- a/deploy/baremetal/index.html +++ b/deploy/baremetal/index.html @@ -1249,7 +1249,7 @@ -

Bare-metal considerations

+

Bare-metal considerations

In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly @@ -1258,7 +1258,7 @@ different setup to offer the same kind of access to external consumers.

Bare-metal environment

The rest of this document describes a few recommended approaches to deploying the NGINX Ingress controller inside a Kubernetes cluster running on bare-metal.

-

A pure software solution: MetalLB

+

A pure software solution: MetalLB

MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.

This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX @@ -1328,7 +1328,7 @@ the ports configured in the LoadBalancer Service:

traffic policy. Traffic policies are described in more detail in Traffic policies as well as in the next section.
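
As an illustration, a minimal MetalLB Layer 2 address pool could look like the following, assuming the ConfigMap-based configuration of older MetalLB releases; the address range is hypothetical and must be one the cluster operator controls:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.10-203.0.113.15   # hypothetical range owned by the cluster operator

Once such a pool exists, a Service of type LoadBalancer fronting the NGINX Ingress controller is assigned an address from it.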

-

Over a NodePort Service

+

Over a NodePort Service

Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide.
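
For orientation, such a NodePort Service looks roughly like the sketch below; the published service-nodeport.yaml remains the authoritative manifest and the values here are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP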

@@ -1469,7 +1469,7 @@ NodePort:

-

Via the host network

+

Via the host network

In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the NGINX Ingress controller can bind ports 80 and 443 directly to Kubernetes nodes' network @@ -1566,7 +1566,7 @@ address of all nodes running the NGINX Ingress controller.
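
A minimal sketch of the relevant Pod settings, assuming a DaemonSet so that every node runs one controller Pod; names and the image tag are illustrative:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      hostNetwork: true
      # keep resolving cluster-internal names while attached to the host network
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1  # example tag
        args:
        - /nginx-ingress-controller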

Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments.

-

Using a self-provisioned edge

+

Using a self-provisioned edge

Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. HAproxy) and is usually managed outside of the Kubernetes landscape by operations teams.

@@ -1577,7 +1577,7 @@ This is particularly suitable for private Kubernetes clusters where none of the nodes and/or masters has a public IP address. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below:

User edge

-

External IPs

+

External IPs

Source IP address

This method does not allow preserving the source IP of HTTP requests in any manner; it is therefore not
diff --git a/deploy/index.html b/deploy/index.html
index 0b76b6699..1c56bd1e6 100644
--- a/deploy/index.html
+++ b/deploy/index.html
@@ -1425,8 +1425,8 @@ -

Installation Guide

-

Contents

+

Installation Guide

+

Contents

-

AWS

+

AWS

In AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer. Since Kubernetes v1.9.0 it is possible to use a classic load balancer (ELB) or network load balancer (NLB). Please check the elastic load balancing AWS details page

-
Elastic Load Balancer - ELB
+
Elastic Load Balancer - ELB

This setup requires choosing in which layer (L4 or L7) we want to configure the ELB:

  • Layer 4: use TCP as the listener protocol for ports 80 and 443.
  • @@ -1525,26 +1525,26 @@ Please check the

    This example creates an ELB with just two listeners, one on port 80 and another on port 443.

    Listeners

    -
    ELB Idle Timeouts
    +
    ELB Idle Timeouts

    In some scenarios users will need to modify the value of the ELB idle timeout. Users need to ensure the idle timeout is less than the keepalive_timeout that is configured for NGINX. By default NGINX keepalive_timeout is set to 75s.

    The default ELB idle timeout will work for most scenarios, unless the NGINX keepalive_timeout has been modified, in which case service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout will need to be modified to ensure it is less than the keepalive_timeout the user has configured.

    Please Note: An idle timeout of 3600s is recommended when using WebSockets.

    More information regarding idle timeouts for your Load Balancer can be found in the official AWS documentation.
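
As a sketch, the annotation is set on the LoadBalancer Service fronting the controller; the 60s value below is illustrative and simply needs to stay below the NGINX keepalive_timeout:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # keep this below the NGINX keepalive_timeout (75s by default)
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443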

    -
    Network Load Balancer (NLB)
    +
    Network Load Balancer (NLB)

    This type of load balancer is supported since v1.10.0 as an ALPHA feature.

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-nlb.yaml
     
    -

    GCE-GKE

    +

    GCE-GKE

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
     

    Important Note: proxy protocol is not supported in GCE/GKE

    -

    Azure

    +

    Azure

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml
     
    -

    Bare-metal

    +

    Bare-metal

    Using NodePort:

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
     
    @@ -1553,14 +1553,14 @@ Please check the

    Tip

    For extended notes regarding deployments on bare-metal, see Bare-metal considerations.

    -

    Verify installation

    +

    Verify installation

    To check if the ingress controller pods have started, run the following command:

    kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch
     

    Once the operator pods are running, you can cancel the above command by typing Ctrl+C. Now, you are ready to create your first ingress.

    -

    Detect installed version

    +

    Detect installed version

    To detect which version of the ingress controller is running, exec into the pod and run the /nginx-ingress-controller --version command.

    POD_NAMESPACE=ingress-nginx
     POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
    @@ -1568,7 +1568,7 @@ Now, you are ready to create your first ingress.

    kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
    -

    Using Helm

    +

    Using Helm

    NGINX Ingress controller can be installed via Helm using the chart stable/nginx-ingress from the official charts repository. To install the chart with the release name my-nginx:

    helm install stable/nginx-ingress --name my-nginx
    diff --git a/deploy/rbac/index.html b/deploy/rbac/index.html
    index bf68d57de..631e125f7 100644
    --- a/deploy/rbac/index.html
    +++ b/deploy/rbac/index.html
    @@ -1275,8 +1275,8 @@
                       
                     
                     
    -                

    Role Based Access Control (RBAC)

    -

    Overview

    +

    Role Based Access Control (RBAC)

    +

    Overview

    This example applies to nginx-ingress-controllers being deployed in an environment with RBAC enabled.

Role Based Access Control is composed of four layers:

      @@ -1288,13 +1288,13 @@

      In order for RBAC to be applied to an nginx-ingress-controller, that controller should be assigned to a ServiceAccount. That ServiceAccount should be bound to the Roles and ClusterRoles defined for the nginx-ingress-controller.

      -

      Service Accounts created in this example

      +

      Service Accounts created in this example

      One ServiceAccount is created in this example, nginx-ingress-serviceaccount.

      -

      Permissions Granted in this example

      +

      Permissions Granted in this example

There are two sets of permissions defined in this example: cluster-wide permissions defined by the ClusterRole named nginx-ingress-clusterrole, and namespace-specific permissions defined by the Role named nginx-ingress-role.

      -

      Cluster Permissions

      +

      Cluster Permissions

These permissions are granted in order for the nginx-ingress-controller to be able to function as an ingress across the cluster. These permissions are granted to the ClusterRole named nginx-ingress-clusterrole (a sketch follows the list below).

      @@ -1305,7 +1305,7 @@ granted to the ClusterRole named nginx-ingress-clusterr
    1. events: create, patch
    2. ingresses/status: update
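
An abridged sketch of how such rules appear in a ClusterRole manifest; the resource list shown here is illustrative rather than exhaustive:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
- apiGroups: [""]
  resources: ["configmaps", "endpoints", "nodes", "pods", "secrets"]
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions", "networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch"]
- apiGroups: ["extensions", "networking.k8s.io"]
  resources: ["ingresses/status"]
  verbs: ["update"]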
-

Namespace Permissions

+

Namespace Permissions

These permissions are specific to the nginx-ingress namespace and are granted to the Role named nginx-ingress-role.

    @@ -1333,7 +1333,7 @@ are part of the request body).

Please adapt accordingly if you overwrite either parameter when launching the nginx-ingress-controller.

-

Bindings

+

Bindings

The ServiceAccount nginx-ingress-serviceaccount is bound to the Role nginx-ingress-role and the ClusterRole nginx-ingress-clusterrole.
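
A sketch of what the cluster-wide binding looks like (the binding name is illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-binding   # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: nginx-ingress

An analogous RoleBinding ties the same ServiceAccount to the namespace-scoped Role.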

The serviceAccountName associated with the containers in the deployment must
diff --git a/deploy/upgrade/index.html b/deploy/upgrade/index.html
index 5f34430ba..1039baaf2 100644
--- a/deploy/upgrade/index.html
+++ b/deploy/upgrade/index.html
@@ -1207,13 +1207,13 @@ -

Upgrading

+

Upgrading

Important

No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx.

-

Without Helm

+

Without Helm

To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment.

I.e. if your deployment resource looks like (partial example):

@@ -1240,7 +1240,7 @@ The easiest way to do this is e.g. (do note you may need to change the name para

For interactive editing, use kubectl edit deployment nginx-ingress-controller.
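
Purely as an illustration of the kind of edit involved, a partial Deployment sketch in which only the image tag changes (registry path and tags are examples):

spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-controller
        # upgrading boils down to bumping this tag, e.g. 0.25.1 -> 0.26.1
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1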

-

With Helm

+

With Helm

If you installed ingress-nginx using the Helm command in the deployment docs so its name is ngx-ingress, you should be able to upgrade using

helm upgrade --reuse-values ngx-ingress stable/nginx-ingress
diff --git a/deploy/validating-webhook/index.html b/deploy/validating-webhook/index.html
index 32927df9c..4890b8fb2 100644
--- a/deploy/validating-webhook/index.html
+++ b/deploy/validating-webhook/index.html
@@ -1341,14 +1341,14 @@
                   
                 
                 
-                

Validating webhook (admission controller)

-

Overview

+

Validating webhook (admission controller)

+

Overview

Nginx ingress controller offers the option to validate ingresses before they enter the cluster, ensuring the controller will generate a valid configuration.

When ValidatingAdmissionWebhook is enabled, this controller is called by the Kubernetes API server each time a new ingress is about to enter the cluster, and it rejects objects for which the generated nginx configuration fails validation.

This feature requires some further configuration of the cluster, hence it is optional. This section explains how to enable it for your cluster.

-

Configure the webhook

-

Generate the webhook certificate

-

Self signed certificate

+

Configure the webhook

+

Generate the webhook certificate

+

Self signed certificate

The validating webhook must be served using TLS, so you need to generate a certificate. Note that the kube API server checks the hostname of the certificate: the common name of your certificate will need to match the service name.

Example

@@ -1357,7 +1357,7 @@
-
Using Kubernetes CA
+
Using Kubernetes CA

Kubernetes also provides primitives to sign a certificate request. Here is an example of how to use it

Example

@@ -1426,7 +1426,7 @@ kubectl create secret generic ingress-nginx.svc \
-

Using helm

+

Using helm

To generate the certificate using helm, you can use the following snippet

Example

@@ -1436,7 +1436,7 @@ kubectl create secret generic ingress-nginx.svc \
-

Ingress controller flags

+

Ingress controller flags

To enable the feature in the ingress controller, you need to provide 3 flags to the command line.
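
In the Deployment manifest these flags typically end up in the controller container's args; the sketch below uses the flag names from the command line arguments reference, while the port and file paths are illustrative:

# excerpt of the controller container spec
args:
- /nginx-ingress-controller
- --validating-webhook=:8080
- --validating-webhook-certificate=/usr/local/certificates/validating-webhook.pem
- --validating-webhook-key=/usr/local/certificates/validating-webhook-key.pem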

@@ -1464,10 +1464,10 @@ kubectl create secret generic ingress-nginx.svc \
-

kube API server flags

+

kube API server flags

The validating webhook feature requires specific setup on the kube API server side. Depending on your kubernetes version, the flag may or may not be enabled by default. To check that your kube API server runs with the required flags, please refer to the kubernetes documentation.

-

Additional kubernetes objects

+

Additional kubernetes objects

Once both the ingress controller and the kube API server are configured to serve the webhook, you can configure the webhook with the following objects:

apiVersion: v1
 kind: Service
diff --git a/development/index.html b/development/index.html
index 6a19af214..e933e2b3a 100644
--- a/development/index.html
+++ b/development/index.html
@@ -1357,11 +1357,11 @@
                   
                 
                 
-                

Developing for NGINX Ingress Controller

+

Developing for NGINX Ingress Controller

This document explains how to get started with developing for NGINX Ingress controller. It includes how to build, test, and release ingress controllers.

-

Quick Start

-

Getting the code

+

Quick Start

+

Getting the code

The code must be checked out as a subdirectory of k8s.io, and not github.com.

mkdir -p $GOPATH/src/k8s.io
 cd $GOPATH/src/k8s.io
@@ -1370,16 +1370,16 @@ git clone https://github.com/$YOUR_GITHUB_USERNAME/ingress-nginx.git
 cd ingress-nginx
 
-

Initial developer environment build

+

Initial developer environment build

Prerequisites: Minikube must be installed. -See releases for installation instructions.

+See releases for installation instructions.

If you are using MacOS and deploying to minikube, the following command will build the local nginx controller container image and deploy the ingress controller onto a minikube cluster with RBAC enabled in the namespace ingress-nginx:

$ make dev-env
 
-

Updating the deployment

+

Updating the deployment

The nginx controller container image can be rebuilt using:

$ ARCH=amd64 TAG=dev REGISTRY=$USER/ingress-controller make build container
 

@@ -1387,7 +1387,7 @@ See releases for i
$ kubectl get pods -n ingress-nginx
 $ kubectl delete pod -n ingress-nginx nginx-ingress-controller-<unique-pod-id>
 

-

Dependencies

+

Dependencies

The build uses dependencies in the vendor directory, which must be installed before building a binary/image. Occasionally, you might need to update the dependencies.

@@ -1396,8 +1396,8 @@ might need to update the dependencies.

$ dep version
 dep:
  version     : devel
- build date  : 
- git hash    : 
+ build date  :
+ git hash    :
  go version  : go1.9
  go compiler : gc
  platform    : linux/amd64
@@ -1414,7 +1414,7 @@ might need to update the dependencies.

$ dep prune
-

Building

+

Building

All ingress controllers are built through a Makefile. Depending on your requirements you can build a raw server binary, a local container image, or push an image to a remote repository.

@@ -1427,7 +1427,7 @@ or push an image to a remote repository.

To find the registry simply run: docker system info | grep Registry

-

Building the e2e test image

+

Building the e2e test image

The e2e test image can also be built through the Makefile.

$ make e2e-test-image
 
@@ -1436,7 +1436,7 @@ or push an image to a remote repository.

$ docker save nginx-ingress-controller:e2e |  (eval $(minikube docker-env) && docker load)
 
-

Nginx Controller

+

Nginx Controller

Build a raw server binary

$ make build
 

@@ -1449,10 +1449,10 @@ or push an image to a remote repository.

$ TAG=<tag> REGISTRY=$USER/ingress-controller make push
 
-

Deploying

+

Deploying

There are several ways to deploy the ingress controller onto a cluster. -Please check the deployment guide

-

Testing

+Please check the deployment guide

+

Testing

To run unit-tests, just run

$ cd $GOPATH/src/k8s.io/ingress-nginx
 $ make test
@@ -1463,15 +1463,15 @@ Please check the deployment guide

$ make e2e-test
-

NOTE: if your e2e pod keeps hanging in an ImagePullBackoff, make sure you've made your e2e nginx-ingress-controller image available to minikube as explained in Building the e2e test image

+

NOTE: if your e2e pod keeps hanging in an ImagePullBackoff, make sure you've made your e2e nginx-ingress-controller image available to minikube as explained in the Building the e2e test image section

To run unit-tests for lua code locally, run:

$ cd $GOPATH/src/k8s.io/ingress-nginx
 $ ./rootfs/etc/nginx/lua/test/up.sh
 $ make lua-test
 
-

Lua tests are located in $GOPATH/src/k8s.io/ingress-nginx/rootfs/etc/nginx/lua/test. When creating a new test file it must follow the naming convention <mytest>_test.lua or it will be ignored.

-

Releasing

+

Lua tests are located in $GOPATH/src/k8s.io/ingress-nginx/rootfs/etc/nginx/lua/test. When creating a new test file it must follow the naming convention <mytest>_test.lua or it will be ignored.

+

Releasing

All Makefiles will produce a release binary, as shown above. To publish this to a wider Kubernetes user base, push the image to a container registry, like gcr.io. All release images are hosted under gcr.io/google_containers and
diff --git a/enhancements/20190724-only-dynamic-ssl/index.html b/enhancements/20190724-only-dynamic-ssl/index.html
index 58bd827e8..bb9a6b794 100644
--- a/enhancements/20190724-only-dynamic-ssl/index.html
+++ b/enhancements/20190724-only-dynamic-ssl/index.html
@@ -1219,8 +1219,8 @@ -

Remove static SSL configuration mode

-

Table of Contents

+

Remove static SSL configuration mode

+

Table of Contents

-

Summary

+

Summary

Since release 0.19.0 it is possible to configure SSL certificates without the need for NGINX reloads (thanks to lua), and after release 0.24.0 the default enabled mode is dynamic.

-

Motivation

+

Motivation

The static configuration implies reloads, something that affects the majority of the users.

-

Goals

+

Goals

  • Deprecation of the flag --enable-dynamic-certificates.
  • Cleanup of the codebase.
-

Non-Goals

+

Non-Goals

  • Features related to certificate authentication are not changed in any way.
-

Proposal

+

Proposal

  • Remove static SSL configuration
-

Implementation Details/Notes/Constraints

+

Implementation Details/Notes/Constraints

  • Deprecate the flag
  • Move the directives ssl_certificate and ssl_certificate_key from each server block to the http section. These settings are required to avoid NGINX errors in the logs.
  • Remove any action of the flag --enable-dynamic-certificates
-

Drawbacks

-

Alternatives

+

Drawbacks

+

Alternatives

Keep both implementations

diff --git a/enhancements/20190815-zone-aware-routing/index.html b/enhancements/20190815-zone-aware-routing/index.html index 0d5047345..fa854c9b5 100644 --- a/enhancements/20190815-zone-aware-routing/index.html +++ b/enhancements/20190815-zone-aware-routing/index.html @@ -1206,8 +1206,8 @@ -

Availability zone aware routing

-

Table of Contents

+

Availability zone aware routing

+

Table of Contents

-

Summary

+

Summary

Teach ingress-nginx about the availability zones where endpoints are running. This way the ingress-nginx pod will do its best to proxy to a zone-local endpoint.

-

Motivation

+

Motivation

When users run their services across multiple availability zones they usually pay for egress traffic between zones. Providers such as GCP and Amazon EC2 charge money for that. When picking an endpoint to route a request to, ingress-nginx does not consider whether the endpoint is in a different zone or the same one. That means it is at least equally likely that it will pick an endpoint from another zone and proxy the request to it. In this situation the response from the endpoint to the ingress-nginx pod is considered as @@ -1231,18 +1231,18 @@ inter-zone traffic and costs money.

According to https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/ Amazon also charges the same amount of money as GCP for cross-zone egress traffic.

This can be a lot of money depending on one's traffic. By teaching ingress-nginx about zones we can eliminate or at least decrease this cost.

Arguably intra-zone network latency should also be better than cross-zone.

-

Goals

+

Goals

  • Given a regional cluster running ingress-nginx, ingress-nginx should make a best effort to pick a zone-local endpoint when proxying
  • This should not impact canary feature
  • ingress-nginx should be able to operate successfully if there are no zonal endpoints
-

Non-Goals

+

Non-Goals

  • This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress-nginx pod(s) in that zone
  • This feature will be relying on https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone, it is not this KEP's goal to support other cases
-

Proposal

+

Proposal

The idea here is to have the controller part of ingress-nginx (1) detect what zone its current pod is running in and (2) detect the zone for every endpoint it knows about. After that it will post that data as part of the endpoints to Lua land. Then the Lua balancer, when picking an endpoint, will try to pick a zone-local endpoint first, and if there is no zone-local endpoint it will fall back to the current behaviour.

@@ -1269,12 +1269,12 @@ needs to serve the request, we will first try to use zonal balancer for that bac then we will use general balancer. In case of zonal outages we assume that readiness probe will fail and controller will see no endpoints for the backend and therefore we will use general balancer.

We can enable the feature using a configmap setting. Doing it this way makes it easier to roll back in case of a problem.

-

Implementation History

+

Implementation History

  • initial version of KEP is shipped
  • proposal and implementation details are done
-

Drawbacks [optional]

+

Drawbacks [optional]

More load on the Kubernetes API server.

diff --git a/enhancements/YYYYMMDD-kep-template/index.html b/enhancements/YYYYMMDD-kep-template/index.html index 06c24d95e..5bf5fa769 100644 --- a/enhancements/YYYYMMDD-kep-template/index.html +++ b/enhancements/YYYYMMDD-kep-template/index.html @@ -1293,7 +1293,7 @@ -

Title

+

Title

This is the title of the KEP. Keep it simple and descriptive. A good title can help communicate what the KEP is and should be considered as part of any review.

@@ -1320,7 +1320,7 @@ A good title can help communicate what the KEP is and should be considered as pa

The Metadata section above is intended to support the creation of tooling around the KEP process. This will be a YAML section that is fenced as a code block. See the KEP process for details on each of these items.

-

Table of Contents

+

Table of Contents

A table of contents is helpful for quickly jumping to sections of a KEP and for highlighting any additional information provided beyond the standard KEP template.

Ensure the TOC is wrapped with <!-- toc --><!-- /toc --> tags, and then generate with hack/update-toc.sh.

@@ -1348,42 +1348,42 @@ See the KEP process for details on each of these items.

  • Alternatives [optional]
  • -

    Summary

    +

    Summary

    The Summary section is incredibly important for producing high quality user-focused documentation such as release notes or a development roadmap. It should be possible to collect this information before implementation begins in order to avoid requiring implementors to split their attention between writing release notes and implementing the feature itself.

    A good summary is probably at least a paragraph in length.

    -

    Motivation

    +

    Motivation

    This section is for explicitly listing the motivation, goals and non-goals of this KEP. Describe why the change is important and the benefits to users. The motivation section can optionally provide links to experience reports to demonstrate the interest in a KEP within the wider Kubernetes community.

    -

    Goals

    +

    Goals

    List the specific goals of the KEP. How will we know that this has succeeded?

    -

    Non-Goals

    +

    Non-Goals

    What is out of scope for this KEP? Listing non-goals helps to focus discussion and make progress.

    -

    Proposal

    +

    Proposal

    This is where we get down to the nitty gritty of what the proposal actually is.

    -

    User Stories [optional]

    +

    User Stories [optional]

    Detail the things that people will be able to do if this KEP is implemented. Include as much detail as possible so that people can understand the "how" of the system. The goal here is to make this feel real for users without getting bogged down.

    -

    Story 1

    -

    Story 2

    -

    Implementation Details/Notes/Constraints [optional]

    +

    Story 1

    +

    Story 2

    +

    Implementation Details/Notes/Constraints [optional]

    What are the caveats to the implementation? What are some important details that didn't come across above? Go into as much detail as necessary here. This might be a good place to talk about core concepts and how they relate.

    -

    Risks and Mitigations

    +

    Risks and Mitigations

    What are the risks of this proposal and how do we mitigate them? Think broadly. For example, consider both security and how this will impact the larger kubernetes ecosystem.

    How will security be reviewed and by whom? How will UX be reviewed and by whom?

    Consider including folks that also work outside the project.

    -

    Design Details

    -

    Test Plan

    +

    Design Details

    +

    Test Plan

    Note: Section not required until targeted at a release.

    Consider the following in developing a test plan for this enhancement:

      @@ -1394,14 +1394,14 @@ How will UX be reviewed and by whom?

      Anything that would count as tricky in the implementation and anything particularly challenging to test should be called out.

      All code is expected to have adequate tests (eventually with coverage expectations). Please adhere to the Kubernetes testing guidelines when drafting this test plan.

      -

      Removing a deprecated flag

      +

      Removing a deprecated flag

      • Announce deprecation and support policy of the existing flag
      • Two versions passed since introducing the functionality which deprecates the flag (to address version skew)
      • Address feedback on usage/changed behavior, provided on GitHub issues
      • Deprecate the flag
      -

      Implementation History

      +

      Implementation History

      Major milestones in the life cycle of a KEP should be tracked in Implementation History. Major milestones might include

        @@ -1412,9 +1412,9 @@ Major milestones might include

      • the version of Kubernetes where the KEP graduated to general availability
      • when the KEP was retired or superseded
      -

      Drawbacks [optional]

      +

      Drawbacks [optional]

      Why should this KEP not be implemented?

      -

      Alternatives [optional]

      +

      Alternatives [optional]

      Similar to the Drawbacks section the Alternatives section is used to highlight and record other possible approaches to delivering the value proposed by a KEP.

      diff --git a/enhancements/index.html b/enhancements/index.html index 60acb3614..18ef6a089 100644 --- a/enhancements/index.html +++ b/enhancements/index.html @@ -1171,15 +1171,15 @@ -

      Kubernetes Enhancement Proposals (KEPs)

      +

      Kubernetes Enhancement Proposals (KEPs)

      A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project. For this reason, the ingress-nginx project is adopting it.

      -

      Quick start for the KEP process

      +

      Quick start for the KEP process

      Follow the process outlined in the KEP template

      -

      Do I have to use the KEP process?

      +

      Do I have to use the KEP process?

      No... but we hope that you will. Over time having a rich set of KEPs in one place will make it easier for people to track what is going on in the community and find a structured historic record.

      KEPs are only required when the changes are wide ranging and impact most of the project.

      -

      Why would I want to use the KEP process?

      +

      Why would I want to use the KEP process?

      Our aim with KEPs is to clearly communicate new efforts to the Kubernetes contributor community. As such, we want to build a well curated set of clear proposals in a common format with useful metadata.

      Benefits to KEP users (in the limit):

      diff --git a/examples/PREREQUISITES/index.html b/examples/PREREQUISITES/index.html index 5a8ae0b71..c99af7038 100644 --- a/examples/PREREQUISITES/index.html +++ b/examples/PREREQUISITES/index.html @@ -1221,9 +1221,9 @@ -

      Prerequisites

      +

      Prerequisites

      Many of the examples in this directory have common prerequisites.

      -

      TLS certificates

      +

      TLS certificates

      Unless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA key/cert pair with an arbitrarily chosen hostname, created as follows

      $ openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=nginxsvc/O=nginxsvc"
      @@ -1238,7 +1238,7 @@ key/cert pair with an arbitrarily chosen hostname, created as follows

      Note: If using CA Authentication, described below, you will need to sign the server certificate with the CA.

      -

      Client Certificate Authentication

      +

      Client Certificate Authentication

      CA Authentication, also known as Mutual Authentication, allows both the server and client to verify each other's identity via a common CA.

      We have a CA Certificate which we obtain usually from a Certificate Authority and use that to sign @@ -1260,7 +1260,7 @@ pass the client certificate.

      Once this is complete you can continue to follow the instructions here

      -

      Test HTTP Service

      +

      Test HTTP Service

      All examples that require a test HTTP Service use the standard http-svc pod, which you can deploy as follows

      $ kubectl create -f http-svc.yaml
      diff --git a/examples/affinity/cookie/index.html b/examples/affinity/cookie/index.html
      index a4595df96..62424c873 100644
      --- a/examples/affinity/cookie/index.html
      +++ b/examples/affinity/cookie/index.html
      @@ -1207,9 +1207,9 @@
                         
                       
                       
      -                

      Sticky sessions

      +

      Sticky sessions

      This example demonstrates how to achieve session affinity using cookies.

      -

      Deployment

      +

      Deployment

      Session affinity can be configured using the following annotations:

      @@ -1237,8 +1237,8 @@ - - + + @@ -1261,17 +1261,17 @@
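
A sketch of an Ingress using these annotations, consistent with the nginx-test and stickyingress.example.com objects shown below; the cookie name and max-age values are illustrative:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
  - host: stickyingress.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-service
          servicePort: 80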
      kubectl create -f ingress.yaml
       
      -

      Validation

      +

      Validation

      You can confirm that the Ingress works:

      $ kubectl describe ing nginx-test
       Name:           nginx-test
       Namespace:      default
      -Address:        
      +Address:
       Default backend:    default-http-backend:80 (10.180.0.4:8080,10.240.0.2:8080)
       Rules:
         Host                          Path    Backends
         ----                          ----    --------
      -  stickyingress.example.com     
      +  stickyingress.example.com
                                       /        nginx-service:80 (<none>)
       Annotations:
         affinity: cookie
      @@ -1302,7 +1302,7 @@ This cookie is created by NGINX, it contains a randomly generated key correspond
       If the user changes this cookie, NGINX creates a new one and redirects the user to another upstream.

      If the backend pool grows, NGINX will keep sending the requests through the same server as the first request, even if it's overloaded.

      When the backend server is removed, the requests are re-routed to another upstream server. This does not require the cookie to be updated because the key's consistent hash will change.

      -

      When you have a Service pointing to more than one Ingress, with only one containing affinity configuration, the first created Ingress will be used. +

      When you have a Service pointing to more than one Ingress, with only one containing affinity configuration, the first created Ingress will be used. This means that you can face the situation that you've configured session affinity on one Ingress and it doesn't work because the Service is pointing to another Ingress that doesn't configure this.

      diff --git a/examples/auth/basic/index.html b/examples/auth/basic/index.html index e1c7d7c6e..eeed44270 100644 --- a/examples/auth/basic/index.html +++ b/examples/auth/basic/index.html @@ -1150,7 +1150,7 @@ -

      Basic Authentication

      +

      Basic Authentication

      This example shows how to add authentication in an Ingress rule using a secret that contains a file generated with htpasswd. It's important that the generated file is named auth (actually, that the secret has a key data.auth), otherwise the ingress-controller returns a 503.

      $ htpasswd -c auth foo
      diff --git a/examples/auth/client-certs/index.html b/examples/auth/client-certs/index.html
      index c16d579ed..227b663b3 100644
      --- a/examples/auth/client-certs/index.html
      +++ b/examples/auth/client-certs/index.html
      @@ -1209,7 +1209,7 @@
                         
                       
                       
      -                

      Client Certificate Authentication

      +

      Client Certificate Authentication

      It is possible to enable Client-Certificate Authentication by adding additional annotations to your Ingress Resource. Before getting started you must have the following certificates set up:

        @@ -1228,7 +1228,7 @@ Before getting started you must have the following Certificates Setup:

        Note: Make sure that the Key Size is greater than 1024 and Hashing Algorithm(Digest) is something better than md5 for each certificate generated. Otherwise you will receive an error.

        -

        Creating Certificate Secrets

        +

        Creating Certificate Secrets

        There are many different ways of configuring your secrets to enable Client-Certificate Authentication to work properly.

          @@ -1255,7 +1255,7 @@ kubectl create secret generic tls-secret --from-file=tls.

        Note: The CA Certificate must contain the trusted certificate authority chain to verify client certificates.

        -

        Setup Instructions

        +

        Setup Instructions

        1. Add the annotations as provided in the ingress.yaml example to your own ingress resources as required.
        2. Test by performing a curl against the Ingress Path without the Client Cert and expect a Status Code 400.
        3. diff --git a/examples/auth/external-auth/index.html b/examples/auth/external-auth/index.html index 956786c17..305126649 100644 --- a/examples/auth/external-auth/index.html +++ b/examples/auth/external-auth/index.html @@ -1195,8 +1195,8 @@ -

          External Basic Authentication

          -

          Example 1:

          +

          External Basic Authentication

          +

          Example 1:

          Use an external service (Basic Auth) located in https://httpbin.org
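
A sketch of what such an ingress.yaml can contain; the auth-url path and the host are illustrative:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: external-auth
  annotations:
    # delegate authentication of every request to the external Basic Auth endpoint
    nginx.ingress.kubernetes.io/auth-url: https://httpbin.org/basic-auth/user/passwd
spec:
  rules:
  - host: external-auth-01.sample.com   # hypothetical host
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80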

          $ kubectl create -f ingress.yaml
           ingress "external-auth" created
          diff --git a/examples/auth/oauth-external-auth/index.html b/examples/auth/oauth-external-auth/index.html
          index 61cda48bb..bea4267d8 100644
          --- a/examples/auth/oauth-external-auth/index.html
          +++ b/examples/auth/oauth-external-auth/index.html
          @@ -1249,15 +1249,15 @@
                             
                           
                           
          -                

          External OAUTH Authentication

          -

          Overview

          +

          External OAUTH Authentication

          +

          Overview

          The auth-url and auth-signin annotations allow you to use an external authentication provider to protect your Ingress resources.

          Important

          This annotation requires nginx-ingress-controller v0.9.0 or greater.

          -

          Key Detail

          +

          Key Detail

          This functionality is enabled by deploying multiple Ingress objects for a single host. One Ingress object has no special annotations and handles authentication.

          Other Ingress objects can then be annotated in such a way that requires the user to @@ -1273,10 +1273,10 @@ same endpoint.
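
A sketch of the kind of annotations this involves on the protected Ingress objects; the oauth2_proxy paths shown are illustrative:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"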

          ...
          -

          Example: OAuth2 Proxy + Kubernetes-Dashboard

          +

          Example: OAuth2 Proxy + Kubernetes-Dashboard

          This example will show you how to deploy oauth2_proxy into a Kubernetes cluster and use it to protect the Kubernetes Dashboard using GitHub as the OAuth2 provider.

          -

          Prepare

          +

          Prepare

          1. Install the kubernetes dashboard
          diff --git a/examples/customization/configuration-snippets/index.html b/examples/customization/configuration-snippets/index.html index 926e71931..93261d5bd 100644 --- a/examples/customization/configuration-snippets/index.html +++ b/examples/customization/configuration-snippets/index.html @@ -1209,13 +1209,13 @@ -

          Configuration Snippets

          -

          Ingress

          +

          Configuration Snippets

          +

          Ingress

          The Ingress in this example adds a custom header to Nginx configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at this example.

          $ kubectl apply -f ingress.yaml
           
          -

          Test

          +

          Test

          Check if the contents of the annotation are present in the nginx.conf file using: kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf

          diff --git a/examples/customization/custom-configuration/index.html b/examples/customization/custom-configuration/index.html index 915814ccf..101b098a7 100644 --- a/examples/customization/custom-configuration/index.html +++ b/examples/customization/custom-configuration/index.html @@ -1150,7 +1150,7 @@ -

          Custom Configuration

          +

          Custom Configuration

          Using a ConfigMap it is possible to customize the NGINX configuration.

          For example, if we want to change the timeouts we need to create a ConfigMap:

          $ cat configmap.yaml
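
Purely as an illustration of the shape of such a file, a ConfigMap tuning proxy timeouts might look like this; the name, namespace and values are hypothetical, while the keys are standard configuration options:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-conf   # hypothetical name
  namespace: ingress-nginx
data:
  proxy-connect-timeout: "10"
  proxy-read-timeout: "120"
  proxy-send-timeout: "120"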
          diff --git a/examples/customization/custom-errors/index.html b/examples/customization/custom-errors/index.html
          index f7f488f5a..8b95dceed 100644
          --- a/examples/customization/custom-errors/index.html
          +++ b/examples/customization/custom-errors/index.html
          @@ -1223,9 +1223,9 @@
                             
                           
                           
          -                

          Custom Errors

          +

          Custom Errors

          This example demonstrates how to use a custom backend to render custom error pages.

          -

          Customized default backend

          +

          Customized default backend

          First, create the custom default-backend. It will be used by the Ingress controller later on.

          $ kubectl create -f custom-default-backend.yaml
           service "nginx-errors" created
          @@ -1241,7 +1241,7 @@ NAME                   TYPE        CLUSTER-IP  EXTERNAL-IP   PORT10.0.0.12   <none>        80/TCP    10s
           
          -

          Ingress controller configuration

          +

          Ingress controller configuration

          If you do not already have an instance of the NGINX Ingress controller running, deploy it according to the deployment guide, then follow these steps:

            @@ -1265,7 +1265,7 @@ ingress-nginx ClusterIP 10.0.0.13 <none>

            The ingress-nginx Service is of type ClusterIP in this example. This may vary depending on your environment. Make sure you can use the Service to reach NGINX before proceeding with the rest of this example.

          -

          Testing error pages

          +

          Testing error pages

          Let us send a couple of HTTP requests using cURL and validate everything is working as expected.

          A request to the default backend returns a 404 error with a custom message:

          $ curl -D- http://10.0.0.13/
          diff --git a/examples/customization/custom-headers/index.html b/examples/customization/custom-headers/index.html
          index 5b0cd47e7..0f1e35542 100644
          --- a/examples/customization/custom-headers/index.html
          +++ b/examples/customization/custom-headers/index.html
          @@ -1195,7 +1195,7 @@
                             
                           
                           
          -                

          Custom Headers

          +

          Custom Headers

          This example demonstrates configuration of the nginx ingress controller via a ConfigMap to pass a custom list of headers to the upstream server.

          @@ -1208,7 +1208,7 @@ server.

          The nginx ingress controller will read the ingress-nginx/nginx-configuration ConfigMap, find the proxy-set-headers key, read HTTP headers from the ingress-nginx/custom-headers ConfigMap, and include those HTTP headers in all requests flowing from nginx to the backends.
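
A sketch of the two ConfigMaps involved; header names and values are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  X-Different-Name: "true"      # any header/value pairs to add
  X-Request-Start: t=${msec}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # points the controller at the ConfigMap holding the headers above
  proxy-set-headers: "ingress-nginx/custom-headers"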

          -

          Test

          +

          Test

          Check the contents of the ConfigMaps are present in the nginx.conf file using: kubectl exec nginx-ingress-controller-873061567-4n3k2 -n ingress-nginx cat /etc/nginx/nginx.conf

          diff --git a/examples/customization/external-auth-headers/index.html b/examples/customization/external-auth-headers/index.html index 3bca3929f..563e25515 100644 --- a/examples/customization/external-auth-headers/index.html +++ b/examples/customization/external-auth-headers/index.html @@ -1150,7 +1150,7 @@ -

          External authentication, authentication service response headers propagation

          +

          External authentication, authentication service response headers propagation

          This example demonstrates propagation of selected authentication service response headers to backend service.

          Sample configuration includes:

          diff --git a/examples/customization/ssl-dh-param/index.html b/examples/customization/ssl-dh-param/index.html index 187de7194..66515cbe6 100644 --- a/examples/customization/ssl-dh-param/index.html +++ b/examples/customization/ssl-dh-param/index.html @@ -1223,11 +1223,11 @@ -

          Custom DH parameters for perfect forward secrecy

          +

          Custom DH parameters for perfect forward secrecy

          This example aims to demonstrate the deployment of an nginx ingress controller and the use of a ConfigMap to configure a custom Diffie-Hellman parameters file to help with "Perfect Forward Secrecy".

          -

          Custom configuration

          +

          Custom configuration

          $ cat configmap.yaml
           apiVersion: v1
           data:
          @@ -1244,7 +1244,7 @@ use a ConfigMap to configure custom Diffie-Hellman parameters file to help with
           
          $ kubectl create -f configmap.yaml
           
          -

          Custom DH parameters secret

          +

          Custom DH parameters secret

          $> openssl dhparam 1024 2> /dev/null | base64
           LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ...
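
A sketch of the Secret such an ssl-dh-param.yaml could carry, reusing the base64 output from the command above; the Secret name and data key are illustrative:

apiVersion: v1
kind: Secret
metadata:
  name: lb-dhparam              # illustrative name
  namespace: ingress-nginx
data:
  dhparam.pem: "LS0tLS1CRUdJTiBESCBQQVJBTUVURVJ..."   # base64 output from the openssl command above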
           
          @@ -1265,7 +1265,7 @@ use a ConfigMap to configure custom Diffie-Hellman parameters file to help with
          $ kubectl create -f ssl-dh-param.yaml
           
          -

          Test

          +

          Test

          Check that the contents of the configmap are present in the nginx.conf file using: kubectl exec nginx-ingress-controller-873061567-4n3k2 -n kube-system cat /etc/nginx/nginx.conf

          diff --git a/examples/customization/sysctl/index.html b/examples/customization/sysctl/index.html index a8c8b0760..58c8e08b4 100644 --- a/examples/customization/sysctl/index.html +++ b/examples/customization/sysctl/index.html @@ -1150,7 +1150,7 @@ -

          Sysctl tuning

          +

          Sysctl tuning

          This example aims to demonstrate the use of an Init Container to adjust sysctl default values using kubectl patch

          kubectl patch deployment -n ingress-nginx nginx-ingress-controller --patch="$(cat patch.json)"
          diff --git a/examples/docker-registry/index.html b/examples/docker-registry/index.html
          index 718c1e254..aba072e88 100644
          --- a/examples/docker-registry/index.html
          +++ b/examples/docker-registry/index.html
          @@ -1247,9 +1247,9 @@
                             
                           
                           
          -                

          Docker registry

          +

          Docker registry

          This example demonstrates how to deploy a docker registry in the cluster and configure Ingress to enable access from the Internet.

          -

          Deployment

          +

          Deployment

          First we deploy the docker registry in the cluster:

          kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/deployment.yaml
           
          @@ -1260,7 +1260,7 @@

          This deployment uses emptyDir in the volumeMount which means the contents of the registry will be deleted when the pod dies.

          The next required step is creation of the ingress rules. To do this we have two options: with and without TLS

          -

          Without TLS

          +

          Without TLS

          Download and edit the yaml deployment replacing registry.<your domain> with a valid DNS name pointing to the ingress controller:

          wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-without-tls.yaml
           
          @@ -1270,13 +1270,13 @@

          Running a docker registry without TLS requires we configure our local docker daemon with the insecure registry flag.

          Please check deploy a plain http registry

          -

          With TLS

          +

          With TLS

          Download and edit the yaml deployment replacing registry.<your domain> with a valid DNS name pointing to the ingress controller:

          wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/docs/examples/docker-registry/ingress-with-tls.yaml
           

          Deploy kube-lego to use Let's Encrypt certificates, or edit the ingress rule to use a secret with an existing SSL certificate.

          -

          Testing

          +

          Testing

          To test the registry is working correctly we download a known image from docker hub, create a tag pointing to the new registry and upload the image:

          docker pull ubuntu:16.04
           docker tag ubuntu:16.04 `registry.<your domain>/ubuntu:16.04`
          diff --git a/examples/grpc/index.html b/examples/grpc/index.html
          index d112c058e..a64ff5e98 100644
          --- a/examples/grpc/index.html
          +++ b/examples/grpc/index.html
          @@ -1289,10 +1289,10 @@
                             
                           
                           
          -                

          gRPC

          +

          gRPC

          This example demonstrates how to route traffic to a gRPC service through the nginx controller.

          -

          Prerequisites

          +

          Prerequisites

          1. You have a kubernetes cluster running.
          2. You have a domain name such as example.com that is configured to route @@ -1309,7 +1309,7 @@ nginx controller.

            fortune-teller application provided here as an example.
          -

          Step 1: kubernetes Deployment

          +

          Step 1: kubernetes Deployment

          $ kubectl create -f app.yaml
           
          @@ -1332,13 +1332,13 @@ inside the cluster and arrive "insecure").

          For your own application you may or may not want to do this. If you prefer to forward encrypted traffic to your POD and terminate TLS at the gRPC server itself, add the ingress annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPCS".

          -

          Step 2: the kubernetes Service

          +

          Step 2: the kubernetes Service

          $ kubectl create -f svc.yaml
           

          Here we have a typical service. Nothing special, just routing traffic to the backend application on port 50051.

          -

          Step 3: the kubernetes Ingress

          +

          Step 3: the kubernetes Ingress

          $ kubectl create -f ingress.yaml
           
          @@ -1353,7 +1353,7 @@ backend application on port 50051.

          https://fortune-teller.stack.build:443 and routes unencrypted messages to our kubernetes service.
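
A sketch of such an Ingress; the annotation names are the documented ones, while the object and Secret names are hypothetical:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fortune-ingress                     # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # tell NGINX to speak plaintext gRPC (h2c) to the backend pods
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  tls:
  - hosts:
    - fortune-teller.stack.build
    secretName: fortune-teller-tls          # hypothetical Secret holding the TLS certificate
  rules:
  - host: fortune-teller.stack.build
    http:
      paths:
      - backend:
          serviceName: fortune-teller-service   # hypothetical Service name
          servicePort: 50051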
        -

        Step 4: test the connection

        +

        Step 4: test the connection

        Once we've applied our configuration to kubernetes, it's time to test that we can actually talk to the backend. To do this, we'll use the grpcurl utility:

        @@ -1363,7 +1363,7 @@ can actually talk to the backend. To do this, we'll use the }
      -

      Debugging Hints

      +

      Debugging Hints

      1. Obviously, watch the logs on your app.
      2. Watch the logs for the nginx-ingress-controller (increasing verbosity as @@ -1379,7 +1379,7 @@ https://proto.stack.build, a protocol buffer / gRPC build service that you can use to help make it easier for your users to consume your API.

        See also the specific GRPC settings of NGINX: https://nginx.org/en/docs/http/ngx_http_grpc_module.html

        -

        Notes on using response/request streams

        +

        Notes on using response/request streams

        1. If your server does only response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the grpc_read_timeout to accommodate this.
        2. If your service does only request streaming and you expect a stream to be open longer than 60 seconds, you have to change the
diff --git a/examples/index.html b/examples/index.html
index aa58b2e92..beece6329 100644
--- a/examples/index.html
+++ b/examples/index.html
@@ -1148,7 +1148,7 @@ -

          Ingress examples

          +

          Ingress examples

          This directory contains a catalog of examples on how to run, configure and scale Ingress.
          Please review the prerequisites before trying them.

      nginx.ingress.kubernetes.io/session-cookie-path: Path that will be set on the cookie (required if your Ingress paths use regular expressions). Type: string (defaults to the currently matched path).
      nginx.ingress.kubernetes.io/session-cookie-max-age
      diff --git a/examples/multi-tls/index.html b/examples/multi-tls/index.html index f9beef581..0a95b6873 100644 --- a/examples/multi-tls/index.html +++ b/examples/multi-tls/index.html @@ -1148,7 +1148,7 @@ -

      Multi TLS certificate termination

      +

      Multi TLS certificate termination

      This example uses 2 different certificates to terminate SSL for 2 hostnames.

      1. Deploy the controller by creating the rc in the parent dir
      2. diff --git a/examples/psp/index.html b/examples/psp/index.html index 5f9e95dbd..0ccd469c9 100644 --- a/examples/psp/index.html +++ b/examples/psp/index.html @@ -1148,7 +1148,7 @@ -

        Pod Security Policy (PSP)

        +

        Pod Security Policy (PSP)

        In most clusters today, by default, all resources (e.g. Deployments and ReplicaSets) have permissions to create pods. Kubernetes however provides a more fine-grained authorization policy called
diff --git a/examples/rewrite/index.html b/examples/rewrite/index.html
index 0c1ff72b2..5d7cb3476 100644
--- a/examples/rewrite/index.html
+++ b/examples/rewrite/index.html
@@ -1261,13 +1261,13 @@ -

        Rewrite

        +

        Rewrite

        This example demonstrates how to use the Rewrite annotations

        -

        Prerequisites

        +

        Prerequisites

        You will need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, -and that you have an ingress controller running in your cluster.

        -

        Deployment

        +and that you have an ingress controller running in your cluster.

        +

        Deployment

        Rewriting can be controlled using the following annotations:

      @@ -1305,15 +1305,15 @@ and that you have an ingress controller running in yo
      -

      Examples

      -

      Rewrite Target

      +

      Examples

      +

      Rewrite Target

      Attention

      Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.

      Note

      -

      Captured groups are saved in numbered placeholders, chronologically, in the form $1, $2 ... $n. These placeholders can be used as parameters in the rewrite-target annotation.

      +

      Captured groups are saved in numbered placeholders, chronologically, in the form $1, $2 ... $n. These placeholders can be used as parameters in the rewrite-target annotation.

      Create an Ingress rule with a rewrite annotation:

      $ echo '
      @@ -1336,12 +1336,12 @@ and that you have an ingress controller running in yo
       ' | kubectl create -f -
       
      -

      In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $2, which is then used as a parameter in the rewrite-target annotation.

      +

      In this ingress definition, any characters captured by (.*) will be assigned to the placeholder $2, which is then used as a parameter in the rewrite-target annotation.

      For example, the ingress definition above will result in the following rewrites:
        • rewrite.bar.com/something rewrites to rewrite.bar.com/
        • rewrite.bar.com/something/ rewrites to rewrite.bar.com/
        • rewrite.bar.com/something/new rewrites to rewrite.bar.com/new
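
Putting that together, a sketch of an Ingress consistent with the rewrites listed above; the backend service name and port are illustrative:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rewrite
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - path: /something(/|$)(.*)
        backend:
          serviceName: http-svc            # illustrative backend
          servicePort: 80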

      -

      App Root

      +

      App Root

      Create an Ingress rule with a app-root annotation:

      $ echo "
       apiVersion: extensions/v1beta1
      diff --git a/examples/static-ip/index.html b/examples/static-ip/index.html
      index 6943bd11a..361ce80cd 100644
      --- a/examples/static-ip/index.html
      +++ b/examples/static-ip/index.html
      @@ -1249,14 +1249,14 @@
                         
                       
                       
      -                

      Static IPs

      +

      Static IPs

      This example demonstrates how to assign a static IP to an Ingress through the Nginx controller.

      -

      Prerequisites

      +

      Prerequisites

      You need a TLS cert and a test HTTP service for this example. You will also need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, -and that you have an ingress controller running in your cluster.

      -

      Acquiring an IP

      +and that you have an ingress controller running in your cluster.

      +

      Acquiring an IP

      Since instances of the nginx controller actually run on nodes in your cluster, by default nginx Ingresses will only get static IPs if your cloudprovider supports static IP assignments to nodes. On GKE/GCE for example, even though @@ -1279,7 +1279,7 @@ already has it set to "nginx-ingress-lb").

      deployment "nginx-ingress-controller" created
Assigning the IP to an Ingress

From here on, every Ingress created with the ingress.class annotation set to nginx will get the IP allocated in the previous step.

      $ kubectl create -f nginx-ingress.yaml
      @@ -1300,7 +1300,7 @@ already has it set to "nginx-ingress-lb").

      ...
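A hedged sketch of what nginx-ingress.yaml could contain; the essential part is the ingress.class annotation, while the TLS secret and backend service names are illustrative:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    # ensures only the nginx ingress controller picks up this Ingress
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - secretName: tls-secret
  backend:
    serviceName: http-svc
    servicePort: 80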
Retaining the IP

      You can test retention by deleting the Ingress

      $ kubectl delete ing nginx-ingress
       ingress "nginx-ingress" deleted
      @@ -1318,7 +1318,7 @@ already has it set to "nginx-ingress-lb").

      Ingresses, because all requests are proxied through the same set of nginx controllers.

Promote ephemeral to static IP

      To promote the allocated IP to static, you can update the Service manifest

      $ kubectl patch svc nginx-ingress-lb -p '{"spec": {"loadBalancerIP": "104.154.109.191"}}'
       "nginx-ingress-lb" patched
diff --git a/examples/tls-termination/index.html b/examples/tls-termination/index.html
      index d61a8dd39..d4c34ce71 100644
      --- a/examples/tls-termination/index.html
      +++ b/examples/tls-termination/index.html
      @@ -1221,11 +1221,11 @@
                         
                       
                       
TLS termination

      This example demonstrates how to terminate TLS through the nginx Ingress controller.

Prerequisites

      You need a TLS cert and a test HTTP service for this example.

Deployment

Create an ingress.yaml file:

      apiVersion: extensions/v1beta1
       kind: Ingress
      @@ -1254,7 +1254,7 @@ TLS cert, and forward un-encrypted HTTP traffic to the test HTTP service.

      kubectl apply -f ingress.yaml
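For reference, a minimal sketch of the kind of TLS-terminating Ingress this example describes; the name nginx-test matches the validation output below, while the host, secret, and service names are illustrative:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-test
spec:
  tls:
  - hosts:
    - foo.bar.com
    # secret holding the TLS certificate and key from the prerequisites
    secretName: tls-secret
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80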
       
Validation

      You can confirm that the Ingress works.

      $ kubectl describe ing nginx-test
       Name:           nginx-test
diff --git a/how-it-works/index.html b/how-it-works/index.html
      index abe6c485c..a19c7855e 100644
      --- a/how-it-works/index.html
      +++ b/how-it-works/index.html
      @@ -1289,16 +1289,16 @@
                         
                       
                       
How it works

      The objective of this document is to explain how the NGINX Ingress controller works, in particular how the NGINX model is built and why we need one.

NGINX configuration

The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. It is important to note, though, that we don't reload NGINX on changes that impact only an upstream configuration (i.e., Endpoints change when you deploy your app); we use lua-nginx-module to achieve this. Check below to learn more about how it's done.

NGINX model

Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. For this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps, to generate a point-in-time configuration file that reflects the state of the cluster.

To get these objects from the cluster, we use Kubernetes Informers, in particular, FilteredSharedInformer. These informers allow reacting to changes using callbacks when a new object is added, modified or removed. Unfortunately, there is no way to know whether a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of the cluster and compare it to the current model. If the new model equals the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so, we then send the new list of Endpoints to a Lua handler running inside Nginx using an HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between the running and new models is about more than just Endpoints, we create a new NGINX configuration based on the new model, replace the current model and trigger a reload.

      One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions.

      The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.

Building the NGINX model

Building a model is an expensive operation; for this reason, the use of the synchronization loop is a must. By using a work queue it is possible not to lose changes and to remove the use of sync.Mutex to force a single execution of the sync loop; additionally, it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller; this is one of the reasons for the work queue.

      Operations to build the model:

        @@ -1320,7 +1320,7 @@
      • Annotations are applied to all the paths in the Ingress.
      • Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.
When a reload is required

      The next list describes the scenarios when a reload is required:

      • New Ingress Resource Created.
@@ -1331,12 +1331,12 @@
      • Some missing referenced object from the Ingress is available, like a Service or Secret.
      • A Secret is updated.
Avoiding reloads

In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, so that new changes do not require replacing worker processes.

Avoiding reloads on Endpoints changes

On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request, the Lua code running in the balancer_by_lua context detects which endpoints it should choose an upstream peer from and applies the configured load balancing algorithm to choose the peer. Then Nginx takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this includes annotation changes that affect only the upstream configuration in Nginx as well.

In relatively big clusters with frequently deployed apps, this feature saves a significant number of Nginx reloads, which can otherwise affect response latency, load-balancing quality (after every reload Nginx resets the state of load balancing), and so on.

Avoiding outage from wrong configuration

Because the ingress controller works using the synchronization loop pattern, it applies the configuration for all matching objects. In case some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, does not reload, and hence no further ingresses will be taken into account.

To prevent this situation from happening, the nginx ingress controller optionally exposes a validating admission webhook server to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress objects to the list of ingresses, generates the configuration, and calls nginx to ensure the configuration has no syntax errors.
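When the webhook is enabled, it is registered in the cluster as a ValidatingWebhookConfiguration; a quick, hedged way to check whether your deployment uses it (the object name depends on how the controller was installed):

$ kubectl get validatingwebhookconfigurations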

      diff --git a/index.html b/index.html index 540e0dcd8..f800d9138 100644 --- a/index.html +++ b/index.html @@ -1193,12 +1193,12 @@ -

      Welcome

      +

      Welcome

      This is the documentation for the NGINX Ingress Controller.

      It is built around the Kubernetes Ingress resource, using a ConfigMap to store the NGINX configuration.

      Learn more about using Ingress on k8s.io.

Getting Started

See Deployment for a whirlwind tour that will get you started.

      diff --git a/kubectl-plugin/index.html b/kubectl-plugin/index.html index 50660cbb5..85c81899f 100644 --- a/kubectl-plugin/index.html +++ b/kubectl-plugin/index.html @@ -1381,8 +1381,8 @@ Do not move it without providing redirects. ----------------------------------------------- --> -

      The ingress-nginx kubectl plugin

      -

      Installation

      +

      The ingress-nginx kubectl plugin

      +

      Installation

      Install krew, then run

      kubectl krew install ingress-nginx
       
      @@ -1442,15 +1442,15 @@ Do not move it without providing redirects.

Replace 0.24.0 with the most recently released version.

Common Flags

      • Every subcommand supports the basic kubectl configuration flags like --namespace, --context, --client-key and so on.
• Subcommands that act on a particular ingress-nginx pod (backends, certs, conf, exec, general, logs, ssh) support the --deployment <deployment> and --pod <pod> flags to select either a pod from a deployment with the given name, or a pod with the given name. The --deployment flag defaults to nginx-ingress-controller. See the examples after this list.
      • Subcommands that inspect resources (ingresses, lint) support the --all-namespaces flag, which causes them to inspect resources in every namespace.
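For example, assuming the controller runs in the ingress-nginx namespace and uses the default deployment name:

# read logs from a pod belonging to a specific deployment
$ kubectl ingress-nginx logs -n ingress-nginx --deployment nginx-ingress-controller

# lint ingress resources in every namespace
$ kubectl ingress-nginx lint --all-namespaces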
Subcommands

      Note that backends, general, certs, and conf require ingress-nginx version 0.23.0 or higher.

backends

      Run kubectl ingress-nginx backends to get a JSON array of the backends that an ingress-nginx controller currently knows about:

      $ kubectl ingress-nginx backends -n ingress-nginx
       [
      @@ -1480,11 +1480,6 @@ Do not move it without providing redirects.
             }
           },
           "port": 0,
      -    "secureCACert": {
      -      "secret": "",
      -      "caFilename": "",
      -      "caSha": ""
      -    },
           "sslPassthrough": false,
           "endpoints": [
             {
      @@ -1521,7 +1516,7 @@ Do not move it without providing redirects.
       

      Add the --list option to show only the backend names. Add the --backend <backend> option to show only the backend with the given name.

certs

      Use kubectl ingress-nginx certs --host <hostname> to dump the SSL cert/key information for a given host. Requires that --enable-dynamic-certificates is true (this is the default as of version 0.24.0).

      WARNING: This command will dump sensitive private key information. Don't blindly share the output, and certainly don't log it anywhere.

      $ kubectl ingress-nginx certs -n ingress-nginx --host testaddr.local
      @@ -1537,7 +1532,7 @@ Do not move it without providing redirects.
       -----END RSA PRIVATE KEY-----
       
conf

      Use kubectl ingress-nginx conf to dump the generated nginx.conf file. Add the --host <hostname> option to view only the server block for that host:

      kubectl ingress-nginx conf -n ingress-nginx --host testaddr.local
       
      @@ -1563,7 +1558,7 @@ Do not move it without providing redirects.
       ...
       
exec

      kubectl ingress-nginx exec is exactly the same as kubectl exec, with the same command flags. It will automatically choose an ingress-nginx pod to run the command in.

      $ kubectl ingress-nginx exec -i -n ingress-nginx -- ls /etc/nginx
       fastcgi_params
      @@ -1578,7 +1573,7 @@ Do not move it without providing redirects.
       template
       
general

      kubectl ingress-nginx general dumps miscellaneous controller state as a JSON object. Currently it just shows the number of controller pods known to a particular controller pod.

      $ kubectl ingress-nginx general -n ingress-nginx
       {
      @@ -1586,7 +1581,7 @@ Do not move it without providing redirects.
       }
       
info

      Shows the internal and external IP/CNAMES for an ingress-nginx service.

      $ kubectl ingress-nginx info -n ingress-nginx
       Service cluster IP address: 10.187.253.31
      @@ -1594,7 +1589,7 @@ Do not move it without providing redirects.
       

      Use the --service <service> flag if your ingress-nginx LoadBalancer service is not named ingress-nginx.

ingresses

      kubectl ingress-nginx ingresses, alternately kubectl ingress-nginx ing, shows a more detailed view of the ingress definitions in a namespace. Compare:

      $ kubectl get ingresses --all-namespaces
       NAMESPACE   NAME               HOSTS                            ADDRESS     PORTS   AGE
      @@ -1611,7 +1606,7 @@ Do not move it without providing redirects.
       default     test-ingress-2     *                                localhost   NO    echo-service    8080           2
       
lint

      kubectl ingress-nginx lint can check a namespace or entire cluster for potential configuration issues. This command is especially useful when upgrading between ingress-nginx versions.

      $ kubectl ingress-nginx lint --all-namespaces --verbose
       Checking ingresses...
      @@ -1649,7 +1644,7 @@ Do not move it without providing redirects.
             https://github.com/kubernetes/ingress-nginx/issues/3808
       
logs

      kubectl ingress-nginx logs is almost the same as kubectl logs, with fewer flags. It will automatically choose an ingress-nginx pod to read logs from.

      $ kubectl ingress-nginx logs -n ingress-nginx
       -------------------------------------------------------------------------------
      @@ -1669,7 +1664,7 @@ Do not move it without providing redirects.
       ...
       
ssh

      kubectl ingress-nginx ssh is exactly the same as kubectl ingress-nginx exec -it -- /bin/bash. Use it when you want to quickly be dropped into a shell inside a running ingress-nginx container.

      $ kubectl ingress-nginx ssh -n ingress-nginx
       www-data@nginx-ingress-controller-7cbf77c976-wx5pn:/etc/nginx$
diff --git a/search/search_index.json b/search/search_index.json
      index eb2f964b4..f1e9bc8bf 100644
      --- a/search/search_index.json
      +++ b/search/search_index.json
      @@ -1 +1 @@
      -{"config":{"lang":["en"],"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Welcome \u00b6 This is the documentation for the NGINX Ingress Controller. It is built around the Kubernetes Ingress resource , using a ConfigMap to store the NGINX configuration. Learn more about using Ingress on k8s.io . Getting Started \u00b6 See Deployment for a whirlwind tour that will get you started.","title":"Welcome"},{"location":"#welcome","text":"This is the documentation for the NGINX Ingress Controller. It is built around the Kubernetes Ingress resource , using a ConfigMap to store the NGINX configuration. Learn more about using Ingress on k8s.io .","title":"Welcome"},{"location":"#getting-started","text":"See Deployment for a whirlwind tour that will get you started.","title":"Getting Started"},{"location":"development/","text":"Developing for NGINX Ingress Controller \u00b6 This document explains how to get started with developing for NGINX Ingress controller. It includes how to build, test, and release ingress controllers. Quick Start \u00b6 Getting the code \u00b6 The code must be checked out as a subdirectory of k8s.io, and not github.com. mkdir -p $GOPATH/src/k8s.io cd $GOPATH/src/k8s.io # Replace \"$YOUR_GITHUB_USERNAME\" below with your github username git clone https://github.com/$YOUR_GITHUB_USERNAME/ingress-nginx.git cd ingress-nginx Initial developer environment build \u00b6 Prequisites : Minikube must be installed. See releases for installation instructions. If you are using MacOS and deploying to minikube , the following command will build the local nginx controller container image and deploy the ingress controller onto a minikube cluster with RBAC enabled in the namespace ingress-nginx : $ make dev-env Updating the deployment \u00b6 The nginx controller container image can be rebuilt using: $ ARCH = amd64 TAG = dev REGISTRY = $USER /ingress-controller make build container The image will only be used by pods created after the rebuild. To delete old pods which will cause new ones to spin up: $ kubectl get pods -n ingress-nginx $ kubectl delete pod -n ingress-nginx nginx-ingress-controller- Dependencies \u00b6 The build uses dependencies in the vendor directory, which must be installed before building a binary/image. Occasionally, you might need to update the dependencies. This guide requires you to install the dep dependency tool. Check the version of dep you are using and make sure it is up to date. $ dep version dep: version : devel build date : git hash : go version : go1.9 go compiler : gc platform : linux/amd64 If you have an older version of dep , you can update it as follows: $ go get -u github.com/golang/dep This will automatically save the dependencies to the vendor/ directory. $ cd $GOPATH /src/k8s.io/ingress-nginx $ dep ensure $ dep ensure -update $ dep prune Building \u00b6 All ingress controllers are built through a Makefile. Depending on your requirements you can build a raw server binary, a local container image, or push an image to a remote repository. In order to use your local Docker, you may need to set the following environment variables: # \"gcloud docker\" ( default ) or \"docker\" $ export DOCKER =  # \"quay.io/kubernetes-ingress-controller\" ( default ) , \"index.docker.io\" , or your own registry $ export REGISTRY =  To find the registry simply run: docker system info | grep Registry Building the e2e test image \u00b6 The e2e test image can also be built through the Makefile. 
$ make e2e-test-image You can then make this image available on your minikube host by exporting the image and loading it with the minikube docker context: $ docker save nginx-ingress-controller:e2e | ( eval $( minikube docker-env ) && docker load ) Nginx Controller \u00b6 Build a raw server binary $ make build TODO : add more specific instructions needed for raw server binary. Build a local container image $ TAG =  REGISTRY = $USER /ingress-controller make container Push the container image to a remote repository $ TAG =  REGISTRY = $USER /ingress-controller make push Deploying \u00b6 There are several ways to deploy the ingress controller onto a cluster. Please check the deployment guide Testing \u00b6 To run unit-tests, just run $ cd $GOPATH /src/k8s.io/ingress-nginx $ make test If you have access to a Kubernetes cluster, you can also run e2e tests using ginkgo. $ cd $GOPATH /src/k8s.io/ingress-nginx $ make e2e-test NOTE: if your e2e pod keeps hanging in an ImagePullBackoff, make sure you've made your e2e nginx-ingress-controller image available to minikube as explained in Building the e2e test image To run unit-tests for lua code locally, run: $ cd $GOPATH /src/k8s.io/ingress-nginx $ ./rootfs/etc/nginx/lua/test/up.sh $ make lua-test Lua tests are located in $GOPATH/src/k8s.io/ingress-nginx/rootfs/etc/nginx/lua/test . When creating a new test file it must follow the naming convention _test.lua or it will be ignored. Releasing \u00b6 All Makefiles will produce a release binary, as shown above. To publish this to a wider Kubernetes user base, push the image to a container registry, like gcr.io . All release images are hosted under gcr.io/google_containers and tagged according to a semver scheme. An example release might look like: $ make release Please follow these guidelines to cut a release: Update the release page with a short description of the major changes that correspond to a given image tag. Cut a release branch, if appropriate. Release branches follow the format of controller-release-version . Typically, pre-releases are cut from HEAD. All major feature work is done in HEAD. Specific bug fixes are cherry-picked into a release branch. If you're not confident about the stability of the code, tag it as alpha or beta. Typically, a release branch should have stable code.","title":"Development"},{"location":"development/#developing-for-nginx-ingress-controller","text":"This document explains how to get started with developing for NGINX Ingress controller. It includes how to build, test, and release ingress controllers.","title":"Developing for NGINX Ingress Controller"},{"location":"development/#quick-start","text":"","title":"Quick Start"},{"location":"development/#getting-the-code","text":"The code must be checked out as a subdirectory of k8s.io, and not github.com. mkdir -p $GOPATH/src/k8s.io cd $GOPATH/src/k8s.io # Replace \"$YOUR_GITHUB_USERNAME\" below with your github username git clone https://github.com/$YOUR_GITHUB_USERNAME/ingress-nginx.git cd ingress-nginx","title":"Getting the code"},{"location":"development/#initial-developer-environment-build","text":"Prequisites : Minikube must be installed. See releases for installation instructions. 
If you are using MacOS and deploying to minikube , the following command will build the local nginx controller container image and deploy the ingress controller onto a minikube cluster with RBAC enabled in the namespace ingress-nginx : $ make dev-env","title":"Initial developer environment build"},{"location":"development/#updating-the-deployment","text":"The nginx controller container image can be rebuilt using: $ ARCH = amd64 TAG = dev REGISTRY = $USER /ingress-controller make build container The image will only be used by pods created after the rebuild. To delete old pods which will cause new ones to spin up: $ kubectl get pods -n ingress-nginx $ kubectl delete pod -n ingress-nginx nginx-ingress-controller-","title":"Updating the deployment"},{"location":"development/#dependencies","text":"The build uses dependencies in the vendor directory, which must be installed before building a binary/image. Occasionally, you might need to update the dependencies. This guide requires you to install the dep dependency tool. Check the version of dep you are using and make sure it is up to date. $ dep version dep: version : devel build date : git hash : go version : go1.9 go compiler : gc platform : linux/amd64 If you have an older version of dep , you can update it as follows: $ go get -u github.com/golang/dep This will automatically save the dependencies to the vendor/ directory. $ cd $GOPATH /src/k8s.io/ingress-nginx $ dep ensure $ dep ensure -update $ dep prune","title":"Dependencies"},{"location":"development/#building","text":"All ingress controllers are built through a Makefile. Depending on your requirements you can build a raw server binary, a local container image, or push an image to a remote repository. In order to use your local Docker, you may need to set the following environment variables: # \"gcloud docker\" ( default ) or \"docker\" $ export DOCKER =  # \"quay.io/kubernetes-ingress-controller\" ( default ) , \"index.docker.io\" , or your own registry $ export REGISTRY =  To find the registry simply run: docker system info | grep Registry","title":"Building"},{"location":"development/#building-the-e2e-test-image","text":"The e2e test image can also be built through the Makefile. $ make e2e-test-image You can then make this image available on your minikube host by exporting the image and loading it with the minikube docker context: $ docker save nginx-ingress-controller:e2e | ( eval $( minikube docker-env ) && docker load )","title":"Building the e2e test image"},{"location":"development/#nginx-controller","text":"Build a raw server binary $ make build TODO : add more specific instructions needed for raw server binary. Build a local container image $ TAG =  REGISTRY = $USER /ingress-controller make container Push the container image to a remote repository $ TAG =  REGISTRY = $USER /ingress-controller make push","title":"Nginx Controller"},{"location":"development/#deploying","text":"There are several ways to deploy the ingress controller onto a cluster. Please check the deployment guide","title":"Deploying"},{"location":"development/#testing","text":"To run unit-tests, just run $ cd $GOPATH /src/k8s.io/ingress-nginx $ make test If you have access to a Kubernetes cluster, you can also run e2e tests using ginkgo. 
$ cd $GOPATH /src/k8s.io/ingress-nginx $ make e2e-test NOTE: if your e2e pod keeps hanging in an ImagePullBackoff, make sure you've made your e2e nginx-ingress-controller image available to minikube as explained in Building the e2e test image To run unit-tests for lua code locally, run: $ cd $GOPATH /src/k8s.io/ingress-nginx $ ./rootfs/etc/nginx/lua/test/up.sh $ make lua-test Lua tests are located in $GOPATH/src/k8s.io/ingress-nginx/rootfs/etc/nginx/lua/test . When creating a new test file it must follow the naming convention _test.lua or it will be ignored.","title":"Testing"},{"location":"development/#releasing","text":"All Makefiles will produce a release binary, as shown above. To publish this to a wider Kubernetes user base, push the image to a container registry, like gcr.io . All release images are hosted under gcr.io/google_containers and tagged according to a semver scheme. An example release might look like: $ make release Please follow these guidelines to cut a release: Update the release page with a short description of the major changes that correspond to a given image tag. Cut a release branch, if appropriate. Release branches follow the format of controller-release-version . Typically, pre-releases are cut from HEAD. All major feature work is done in HEAD. Specific bug fixes are cherry-picked into a release branch. If you're not confident about the stability of the code, tag it as alpha or beta. Typically, a release branch should have stable code.","title":"Releasing"},{"location":"how-it-works/","text":"How it works \u00b6 The objective of this document is to explain how the NGINX Ingress controller works, in particular how the NGINX model is built and why we need one. NGINX configuration \u00b6 The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app) . We use lua-nginx-module to achieve this. Check below to learn more about how it's done. NGINX model \u00b6 Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. To this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps to generate a point in time configuration file that reflects the state of the cluster. To get this object from the cluster, we use Kubernetes Informers , in particular, FilteredSharedInformer . This informers allows reacting to changes in using callbacks to individual changes when a new object is added, modified or removed. Unfortunately, there is no way to know if a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of cluster and compare it to the current model. If the new model equals to the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so we then send the new list of Endpoints to a Lua handler running inside Nginx using HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. 
If the difference between running and new model is about more than just Endpoints we create a new NGINX configuration based on the new model, replace the current model and trigger a reload. One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions. The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template. Building the NGINX model \u00b6 Building a model is an expensive operation, for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to not lose changes and remove the use of sync.Mutex to force a single execution of the sync loop and additionally it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller and one of the reasons for the work queue . Operations to build the model: Order Ingress rules by CreationTimestamp field, i.e., old rules first. If the same path for the same host is defined in more than one Ingress, the oldest rule wins. If more than one Ingress contains a TLS section for the same host, the oldest rule wins. If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins. Create a list of NGINX Servers (per hostname) Create a list of NGINX Upstreams If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. Annotations are applied to all the paths in the Ingress. Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses. When a reload is required \u00b6 The next list describes the scenarios when a reload is required: New Ingress Resource Created. TLS section is added to existing Ingress. Change in Ingress annotations that impacts more than just upstream configuration. For instance load-balance annotation does not require a reload. A path is added/removed from an Ingress. An Ingress, Service, Secret is removed. Some missing referenced object from the Ingress is available, like a Service or Secret. A Secret is updated. Avoiding reloads \u00b6 In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, basically, new changes do not replace worker processes. Avoiding reloads on Endpoints changes \u00b6 On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request Lua code running in balancer_by_lua context detects what endpoints it should choose upstream peer from and applies the configured load balancing algorithm to choose the peer. Then Nginx takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this includes annotation changes that affects only upstream configuration in Nginx as well. 
In a relatively big clusters with frequently deploying apps this feature saves significant number of Nginx reloads which can otherwise affect response latency, load balancing quality (after every reload Nginx resets the state of load balancing) and so on. Avoiding outage from wrong configuration \u00b6 Because the ingress controller works using the synchronization loop pattern , it is applying the configuration for all matching objects. In case some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, does not reload and hence no more ingresses will be taken into account. To prevent this situation to happen, the nginx ingress controller optionally exposes a validating admission webhook server to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress objects to the list of ingresses, generates the configuration and calls nginx to ensure the configuration has no syntax errors.","title":"How it works"},{"location":"how-it-works/#how-it-works","text":"The objective of this document is to explain how the NGINX Ingress controller works, in particular how the NGINX model is built and why we need one.","title":"How it works"},{"location":"how-it-works/#nginx-configuration","text":"The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app) . We use lua-nginx-module to achieve this. Check below to learn more about how it's done.","title":"NGINX configuration"},{"location":"how-it-works/#nginx-model","text":"Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. To this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps to generate a point in time configuration file that reflects the state of the cluster. To get this object from the cluster, we use Kubernetes Informers , in particular, FilteredSharedInformer . This informers allows reacting to changes in using callbacks to individual changes when a new object is added, modified or removed. Unfortunately, there is no way to know if a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of cluster and compare it to the current model. If the new model equals to the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so we then send the new list of Endpoints to a Lua handler running inside Nginx using HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between running and new model is about more than just Endpoints we create a new NGINX configuration based on the new model, replace the current model and trigger a reload. One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions. 
The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.","title":"NGINX model"},{"location":"how-it-works/#building-the-nginx-model","text":"Building a model is an expensive operation, for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to not lose changes and remove the use of sync.Mutex to force a single execution of the sync loop and additionally it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller and one of the reasons for the work queue . Operations to build the model: Order Ingress rules by CreationTimestamp field, i.e., old rules first. If the same path for the same host is defined in more than one Ingress, the oldest rule wins. If more than one Ingress contains a TLS section for the same host, the oldest rule wins. If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins. Create a list of NGINX Servers (per hostname) Create a list of NGINX Upstreams If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. Annotations are applied to all the paths in the Ingress. Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.","title":"Building the NGINX model"},{"location":"how-it-works/#when-a-reload-is-required","text":"The next list describes the scenarios when a reload is required: New Ingress Resource Created. TLS section is added to existing Ingress. Change in Ingress annotations that impacts more than just upstream configuration. For instance load-balance annotation does not require a reload. A path is added/removed from an Ingress. An Ingress, Service, Secret is removed. Some missing referenced object from the Ingress is available, like a Service or Secret. A Secret is updated.","title":"When a reload is required"},{"location":"how-it-works/#avoiding-reloads","text":"In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, basically, new changes do not replace worker processes.","title":"Avoiding reloads"},{"location":"how-it-works/#avoiding-reloads-on-endpoints-changes","text":"On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request Lua code running in balancer_by_lua context detects what endpoints it should choose upstream peer from and applies the configured load balancing algorithm to choose the peer. Then Nginx takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this includes annotation changes that affects only upstream configuration in Nginx as well. 
In a relatively big clusters with frequently deploying apps this feature saves significant number of Nginx reloads which can otherwise affect response latency, load balancing quality (after every reload Nginx resets the state of load balancing) and so on.","title":"Avoiding reloads on Endpoints changes"},{"location":"how-it-works/#avoiding-outage-from-wrong-configuration","text":"Because the ingress controller works using the synchronization loop pattern , it is applying the configuration for all matching objects. In case some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, does not reload and hence no more ingresses will be taken into account. To prevent this situation to happen, the nginx ingress controller optionally exposes a validating admission webhook server to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress objects to the list of ingresses, generates the configuration and calls nginx to ensure the configuration has no syntax errors.","title":"Avoiding outage from wrong configuration"},{"location":"kubectl-plugin/","text":"The ingress-nginx kubectl plugin \u00b6 Installation \u00b6 Install krew , then run kubectl krew install ingress-nginx to install the plugin. Then run kubectl ingress-nginx --help to make sure the plugin is properly installed and to get a list of commands: kubectl ingress-nginx --help A kubectl plugin for inspecting your ingress-nginx deployments Usage: ingress-nginx [command] Available Commands: backends Inspect the dynamic backend information of an ingress-nginx instance certs Output the certificate data stored in an ingress-nginx pod conf Inspect the generated nginx.conf exec Execute a command inside an ingress-nginx pod general Inspect the other dynamic ingress-nginx information help Help about any command info Show information about the ingress-nginx service ingresses Provide a short summary of all of the ingress definitions lint Inspect kubernetes resources for possible issues logs Get the kubernetes logs for an ingress-nginx pod ssh ssh into a running ingress-nginx pod Flags: --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. --cache-dir string Default HTTP cache directory (default \"/Users/alexkursell/.kube/http-cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use -h, --help help for ingress-nginx --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. 
(default \"0\") -s, --server string The address and port of the Kubernetes API server --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use Use \"ingress-nginx [command] --help\" for more information about a command. If a new ingress-nginx version has just been released, the plugin may not yet have been updated inside the repository. In that case, you can install the latest version of the plugin by running: ( set -x; cd \"$(mktemp -d)\" && curl -fsSLO \"https://github.com/kubernetes/ingress-nginx/releases/download/nginx-0.24.0/{ingress-nginx.yaml,kubectl-ingress_nginx-$(uname | tr '[:upper:]' '[:lower:]')-amd64.tar.gz}\" && kubectl krew install \\ --manifest=ingress-nginx.yaml --archive=kubectl-ingress_nginx-$(uname | tr '[:upper:]' '[:lower:]')-amd64.tar.gz ) Replacing 0.24.0 with the recently released version. Common Flags \u00b6 Every subcommand supports the basic kubectl configuration flags like --namespace , --context , --client-key and so on. Subcommands that act on a particular ingress-nginx pod ( backends , certs , conf , exec , general , logs , ssh ), support the --deployment  and --pod  flags to select either a pod from a deployment with the given name, or a pod with the given name. The --deployment flag defaults to nginx-ingress-controller . Subcommands that inspect resources ( ingresses , lint ) support the --all-namespaces flag, which causes them to inspect resources in every namespace. Subcommands \u00b6 Note that backends , general , certs , and conf require ingress-nginx version 0.23.0 or higher. backends \u00b6 Run kubectl ingress-nginx backends to get a JSON array of the backends that an ingress-nginx controller currently knows about: $ kubectl ingress-nginx backends -n ingress-nginx [ { \"name\": \"default-apple-service-5678\", \"service\": { \"metadata\": { \"creationTimestamp\": null }, \"spec\": { \"ports\": [ { \"protocol\": \"TCP\", \"port\": 5678, \"targetPort\": 5678 } ], \"selector\": { \"app\": \"apple\" }, \"clusterIP\": \"10.97.230.121\", \"type\": \"ClusterIP\", \"sessionAffinity\": \"None\" }, \"status\": { \"loadBalancer\": {} } }, \"port\": 0, \"secureCACert\": { \"secret\": \"\", \"caFilename\": \"\", \"caSha\": \"\" }, \"sslPassthrough\": false, \"endpoints\": [ { \"address\": \"10.1.3.86\", \"port\": \"5678\" } ], \"sessionAffinityConfig\": { \"name\": \"\", \"cookieSessionAffinity\": { \"name\": \"\" } }, \"upstreamHashByConfig\": { \"upstream-hash-by-subset-size\": 3 }, \"noServer\": false, \"trafficShapingPolicy\": { \"weight\": 0, \"header\": \"\", \"headerValue\": \"\", \"cookie\": \"\" } }, { \"name\": \"default-echo-service-8080\", ... }, { \"name\": \"upstream-default-backend\", ... } ] Add the --list option to show only the backend names. Add the --backend  option to show only the backend with the given name. certs \u00b6 Use kubectl ingress-nginx certs --host  to dump the SSL cert/key information for a given host. Requires that --enable-dynamic-certificates is true (this is the default as of version 0.24.0 ). WARNING: This command will dump sensitive private key information. Don't blindly share the output, and certainly don't log it anywhere. $ kubectl ingress-nginx certs -n ingress-nginx --host testaddr.local -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY-----  -----END RSA PRIVATE KEY----- conf \u00b6 Use kubectl ingress-nginx conf to dump the generated nginx.conf file. 
Add the --host  option to view only the server block for that host: kubectl ingress-nginx conf -n ingress-nginx --host testaddr.local server { server_name testaddr.local ; listen 80; set $proxy_upstream_name \"-\"; set $pass_access_scheme $scheme; set $pass_server_port $server_port; set $best_http_host $http_host; set $pass_port $pass_server_port; location / { set $namespace \"\"; set $ingress_name \"\"; set $service_name \"\"; set $service_port \"0\"; set $location_path \"/\"; ... exec \u00b6 kubectl ingress-nginx exec is exactly the same as kubectl exec , with the same command flags. It will automatically choose an ingress-nginx pod to run the command in. $ kubectl ingress-nginx exec -i -n ingress-nginx -- ls /etc/nginx fastcgi_params geoip lua mime.types modsecurity modules nginx.conf opentracing.json owasp-modsecurity-crs template general \u00b6 kubectl ingress-nginx general dumps miscellaneous controller state as a JSON object. Currently it just shows the number of controller pods known to a particular controller pod. $ kubectl ingress-nginx general -n ingress-nginx { \"controllerPodsCount\": 1 } info \u00b6 Shows the internal and external IP/CNAMES for an ingress-nginx service. $ kubectl ingress-nginx info -n ingress-nginx Service cluster IP address: 10.187.253.31 LoadBalancer IP|CNAME: 35.123.123.123 Use the --service  flag if your ingress-nginx LoadBalancer service is not named ingress-nginx . ingresses \u00b6 kubectl ingress-nginx ingresses , alternately kubectl ingress-nginx ing , shows a more detailed view of the ingress definitions in a namespace. Compare: $ kubectl get ingresses --all-namespaces NAMESPACE NAME HOSTS ADDRESS PORTS AGE default example-ingress1 testaddr.local,testaddr2.local localhost 80 5d default test-ingress-2 * localhost 80 5d vs $ kubectl ingress-nginx ingresses --all-namespaces NAMESPACE INGRESS NAME HOST+PATH ADDRESSES TLS SERVICE SERVICE PORT ENDPOINTS default example-ingress1 testaddr.local/etameta localhost NO pear-service 5678 5 default example-ingress1 testaddr2.local/otherpath localhost NO apple-service 5678 1 default example-ingress1 testaddr2.local/otherotherpath localhost NO pear-service 5678 5 default test-ingress-2 * localhost NO echo-service 8080 2 lint \u00b6 kubectl ingress-nginx lint can check a namespace or entire cluster for potential configuration issues. This command is especially useful when upgrading between ingress-nginx versions. $ kubectl ingress-nginx lint --all-namespaces --verbose Checking ingresses... \u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 \u2717 othernamespace/ingress-definition-blah - The rewrite-target annotation value does not reference a capture group Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3174 Checking deployments... \u2717 namespace2/nginx-ingress-controller - Uses removed config flag --sort-backends Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3655 - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808 to show the lints added only for a particular ingress-nginx release, use the --from-version and --to-version flags: $ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0 .24.0 --to-version 0 .24.0 Checking ingresses... \u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. 
Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 Checking deployments... \u2717 namespace2/nginx-ingress-controller - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808 logs \u00b6 kubectl ingress-nginx logs is almost the same as kubectl logs , with fewer flags. It will automatically choose an ingress-nginx pod to read logs from. $ kubectl ingress-nginx logs -n ingress-nginx ------------------------------------------------------------------------------- NGINX Ingress controller Release: dev Build: git-48dc3a867 Repository: git@github.com:kubernetes/ingress-nginx.git ------------------------------------------------------------------------------- W0405 16:53:46.061589 7 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false) nginx version: nginx/1.15.9 W0405 16:53:46.070093 7 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I0405 16:53:46.070499 7 main.go:205] Creating API client for https://10.96.0.1:443 I0405 16:53:46.077784 7 main.go:249] Running in Kubernetes cluster version v1.10 (v1.10.11) - git (clean) commit 637c7e288581ee40ab4ca210618a89a555b6e7e9 - platform linux/amd64 I0405 16:53:46.183359 7 nginx.go:265] Starting NGINX Ingress controller I0405 16:53:46.193913 7 event.go:209] Event(v1.ObjectReference{Kind:\"ConfigMap\", Namespace:\"ingress-nginx\", Name:\"udp-services\", UID:\"82258915-563e-11e9-9c52-025000000001\", APIVersion:\"v1\", ResourceVersion:\"494\", FieldPath:\"\"}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services ... ssh \u00b6 kubectl ingress-nginx ssh is exactly the same as kubectl ingress-nginx exec -it -- /bin/bash . Use it when you want to quickly be dropped into a shell inside a running ingress-nginx container. $ kubectl ingress-nginx ssh -n ingress-nginx www-data@nginx-ingress-controller-7cbf77c976-wx5pn:/etc/nginx$","title":"kubectl plugin"},{"location":"kubectl-plugin/#the-ingress-nginx-kubectl-plugin","text":"","title":"The ingress-nginx kubectl plugin"},{"location":"kubectl-plugin/#installation","text":"Install krew , then run kubectl krew install ingress-nginx to install the plugin. Then run kubectl ingress-nginx --help to make sure the plugin is properly installed and to get a list of commands: kubectl ingress-nginx --help A kubectl plugin for inspecting your ingress-nginx deployments Usage: ingress-nginx [command] Available Commands: backends Inspect the dynamic backend information of an ingress-nginx instance certs Output the certificate data stored in an ingress-nginx pod conf Inspect the generated nginx.conf exec Execute a command inside an ingress-nginx pod general Inspect the other dynamic ingress-nginx information help Help about any command info Show information about the ingress-nginx service ingresses Provide a short summary of all of the ingress definitions lint Inspect kubernetes resources for possible issues logs Get the kubernetes logs for an ingress-nginx pod ssh ssh into a running ingress-nginx pod Flags: --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. 
--cache-dir string Default HTTP cache directory (default \"/Users/alexkursell/.kube/http-cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use -h, --help help for ingress-nginx --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use Use \"ingress-nginx [command] --help\" for more information about a command. If a new ingress-nginx version has just been released, the plugin may not yet have been updated inside the repository. In that case, you can install the latest version of the plugin by running: ( set -x; cd \"$(mktemp -d)\" && curl -fsSLO \"https://github.com/kubernetes/ingress-nginx/releases/download/nginx-0.24.0/{ingress-nginx.yaml,kubectl-ingress_nginx-$(uname | tr '[:upper:]' '[:lower:]')-amd64.tar.gz}\" && kubectl krew install \\ --manifest=ingress-nginx.yaml --archive=kubectl-ingress_nginx-$(uname | tr '[:upper:]' '[:lower:]')-amd64.tar.gz ) Replacing 0.24.0 with the recently released version.","title":"Installation"},{"location":"kubectl-plugin/#common-flags","text":"Every subcommand supports the basic kubectl configuration flags like --namespace , --context , --client-key and so on. Subcommands that act on a particular ingress-nginx pod ( backends , certs , conf , exec , general , logs , ssh ), support the --deployment  and --pod  flags to select either a pod from a deployment with the given name, or a pod with the given name. The --deployment flag defaults to nginx-ingress-controller . 
Subcommands that inspect resources ( ingresses , lint ) support the --all-namespaces flag, which causes them to inspect resources in every namespace.","title":"Common Flags"},{"location":"kubectl-plugin/#subcommands","text":"Note that backends , general , certs , and conf require ingress-nginx version 0.23.0 or higher.","title":"Subcommands"},{"location":"kubectl-plugin/#backends","text":"Run kubectl ingress-nginx backends to get a JSON array of the backends that an ingress-nginx controller currently knows about: $ kubectl ingress-nginx backends -n ingress-nginx [ { \"name\": \"default-apple-service-5678\", \"service\": { \"metadata\": { \"creationTimestamp\": null }, \"spec\": { \"ports\": [ { \"protocol\": \"TCP\", \"port\": 5678, \"targetPort\": 5678 } ], \"selector\": { \"app\": \"apple\" }, \"clusterIP\": \"10.97.230.121\", \"type\": \"ClusterIP\", \"sessionAffinity\": \"None\" }, \"status\": { \"loadBalancer\": {} } }, \"port\": 0, \"secureCACert\": { \"secret\": \"\", \"caFilename\": \"\", \"caSha\": \"\" }, \"sslPassthrough\": false, \"endpoints\": [ { \"address\": \"10.1.3.86\", \"port\": \"5678\" } ], \"sessionAffinityConfig\": { \"name\": \"\", \"cookieSessionAffinity\": { \"name\": \"\" } }, \"upstreamHashByConfig\": { \"upstream-hash-by-subset-size\": 3 }, \"noServer\": false, \"trafficShapingPolicy\": { \"weight\": 0, \"header\": \"\", \"headerValue\": \"\", \"cookie\": \"\" } }, { \"name\": \"default-echo-service-8080\", ... }, { \"name\": \"upstream-default-backend\", ... } ] Add the --list option to show only the backend names. Add the --backend  option to show only the backend with the given name.","title":"backends"},{"location":"kubectl-plugin/#certs","text":"Use kubectl ingress-nginx certs --host  to dump the SSL cert/key information for a given host. Requires that --enable-dynamic-certificates is true (this is the default as of version 0.24.0 ). WARNING: This command will dump sensitive private key information. Don't blindly share the output, and certainly don't log it anywhere. $ kubectl ingress-nginx certs -n ingress-nginx --host testaddr.local -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY-----  -----END RSA PRIVATE KEY-----","title":"certs"},{"location":"kubectl-plugin/#conf","text":"Use kubectl ingress-nginx conf to dump the generated nginx.conf file. Add the --host  option to view only the server block for that host: kubectl ingress-nginx conf -n ingress-nginx --host testaddr.local server { server_name testaddr.local ; listen 80; set $proxy_upstream_name \"-\"; set $pass_access_scheme $scheme; set $pass_server_port $server_port; set $best_http_host $http_host; set $pass_port $pass_server_port; location / { set $namespace \"\"; set $ingress_name \"\"; set $service_name \"\"; set $service_port \"0\"; set $location_path \"/\"; ...","title":"conf"},{"location":"kubectl-plugin/#exec","text":"kubectl ingress-nginx exec is exactly the same as kubectl exec , with the same command flags. It will automatically choose an ingress-nginx pod to run the command in. $ kubectl ingress-nginx exec -i -n ingress-nginx -- ls /etc/nginx fastcgi_params geoip lua mime.types modsecurity modules nginx.conf opentracing.json owasp-modsecurity-crs template","title":"exec"},{"location":"kubectl-plugin/#general","text":"kubectl ingress-nginx general dumps miscellaneous controller state as a JSON object. Currently it just shows the number of controller pods known to a particular controller pod. 
$ kubectl ingress-nginx general -n ingress-nginx { \"controllerPodsCount\": 1 }","title":"general"},{"location":"kubectl-plugin/#info","text":"Shows the internal and external IP/CNAMES for an ingress-nginx service. $ kubectl ingress-nginx info -n ingress-nginx Service cluster IP address: 10.187.253.31 LoadBalancer IP|CNAME: 35.123.123.123 Use the --service  flag if your ingress-nginx LoadBalancer service is not named ingress-nginx .","title":"info"},{"location":"kubectl-plugin/#ingresses","text":"kubectl ingress-nginx ingresses , alternately kubectl ingress-nginx ing , shows a more detailed view of the ingress definitions in a namespace. Compare: $ kubectl get ingresses --all-namespaces NAMESPACE NAME HOSTS ADDRESS PORTS AGE default example-ingress1 testaddr.local,testaddr2.local localhost 80 5d default test-ingress-2 * localhost 80 5d vs $ kubectl ingress-nginx ingresses --all-namespaces NAMESPACE INGRESS NAME HOST+PATH ADDRESSES TLS SERVICE SERVICE PORT ENDPOINTS default example-ingress1 testaddr.local/etameta localhost NO pear-service 5678 5 default example-ingress1 testaddr2.local/otherpath localhost NO apple-service 5678 1 default example-ingress1 testaddr2.local/otherotherpath localhost NO pear-service 5678 5 default test-ingress-2 * localhost NO echo-service 8080 2","title":"ingresses"},{"location":"kubectl-plugin/#lint","text":"kubectl ingress-nginx lint can check a namespace or entire cluster for potential configuration issues. This command is especially useful when upgrading between ingress-nginx versions. $ kubectl ingress-nginx lint --all-namespaces --verbose Checking ingresses... \u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 \u2717 othernamespace/ingress-definition-blah - The rewrite-target annotation value does not reference a capture group Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3174 Checking deployments... \u2717 namespace2/nginx-ingress-controller - Uses removed config flag --sort-backends Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3655 - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808 to show the lints added only for a particular ingress-nginx release, use the --from-version and --to-version flags: $ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0 .24.0 --to-version 0 .24.0 Checking ingresses... \u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 Checking deployments... \u2717 namespace2/nginx-ingress-controller - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808","title":"lint"},{"location":"kubectl-plugin/#logs","text":"kubectl ingress-nginx logs is almost the same as kubectl logs , with fewer flags. It will automatically choose an ingress-nginx pod to read logs from. 
$ kubectl ingress-nginx logs -n ingress-nginx ------------------------------------------------------------------------------- NGINX Ingress controller Release: dev Build: git-48dc3a867 Repository: git@github.com:kubernetes/ingress-nginx.git ------------------------------------------------------------------------------- W0405 16:53:46.061589 7 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false) nginx version: nginx/1.15.9 W0405 16:53:46.070093 7 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I0405 16:53:46.070499 7 main.go:205] Creating API client for https://10.96.0.1:443 I0405 16:53:46.077784 7 main.go:249] Running in Kubernetes cluster version v1.10 (v1.10.11) - git (clean) commit 637c7e288581ee40ab4ca210618a89a555b6e7e9 - platform linux/amd64 I0405 16:53:46.183359 7 nginx.go:265] Starting NGINX Ingress controller I0405 16:53:46.193913 7 event.go:209] Event(v1.ObjectReference{Kind:\"ConfigMap\", Namespace:\"ingress-nginx\", Name:\"udp-services\", UID:\"82258915-563e-11e9-9c52-025000000001\", APIVersion:\"v1\", ResourceVersion:\"494\", FieldPath:\"\"}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services ...","title":"logs"},{"location":"kubectl-plugin/#ssh","text":"kubectl ingress-nginx ssh is exactly the same as kubectl ingress-nginx exec -it -- /bin/bash . Use it when you want to quickly be dropped into a shell inside a running ingress-nginx container. $ kubectl ingress-nginx ssh -n ingress-nginx www-data@nginx-ingress-controller-7cbf77c976-wx5pn:/etc/nginx$","title":"ssh"},{"location":"troubleshooting/","text":"Troubleshooting \u00b6 Ingress-Controller Logs and Events \u00b6 There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information. Check the Ingress Resource Events $ kubectl get ing -n  NAME HOSTS ADDRESS PORTS AGE cafe-ingress cafe.com 10.0.2.15 80 25s $ kubectl describe ing  -n  Name: cafe-ingress Namespace: default Address: 10.0.2.15 Default backend: default-http-backend:80 (172.17.0.5:8080) Rules: Host Path Backends ---- ---- -------- cafe.com /tea tea-svc:80 () /coffee coffee-svc:80 () Annotations: kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"/apis/networking/v1beta1/namespaces/default/ingresses/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}} Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 1m nginx-ingress-controller Ingress default/cafe-ingress Normal UPDATE 58s nginx-ingress-controller Ingress default/cafe-ingress Check the Ingress Controller Logs $ kubectl get pods -n  NAME READY STATUS RESTARTS AGE nginx-ingress-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl logs -n  nginx-ingress-controller-67956bf89d-fv58j ------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.14.0 Build: git-734361d Repository: https://github.com/kubernetes/ingress-nginx ------------------------------------------------------------------------------- .... 
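To narrow these two checks down further, Events can be filtered by the kind of the involved object and the controller logs can be followed live (a sketch: the namespaces and the pod name are taken from the examples above and will differ in your cluster):
$ kubectl get events -n default --field-selector involvedObject.kind=Ingress
$ kubectl logs -n ingress-nginx nginx-ingress-controller-67956bf89d-fv58j --tail=50 -f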
Check the Nginx Configuration $ kubectl get pods -n  NAME READY STATUS RESTARTS AGE nginx-ingress-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl exec -it -n  nginx-ingress-controller-67956bf89d-fv58j cat /etc/nginx/nginx.conf daemon off; worker_processes 2; pid /run/nginx.pid; worker_rlimit_nofile 523264; worker_shutdown_timeout 240s; events { multi_accept on; worker_connections 16384; use epoll; } http { .... Check if used Services Exist $ kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default coffee-svc ClusterIP 10.106.154.35  80/TCP 18m default kubernetes ClusterIP 10.96.0.1  443/TCP 30m default tea-svc ClusterIP 10.104.172.12  80/TCP 18m kube-system default-http-backend NodePort 10.108.189.236  80:30001/TCP 30m kube-system kube-dns ClusterIP 10.96.0.10  53/UDP,53/TCP 30m kube-system kubernetes-dashboard NodePort 10.103.128.17  80:30000/TCP 30m Debug Logging \u00b6 Using the flag --v=XX it is possible to increase the level of logging. This is performed by editing the deployment. $ kubectl get deploy -n  NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE default-http-backend 1 1 1 1 35m nginx-ingress-controller 1 1 1 1 35m $ kubectl edit deploy -n  nginx-ingress-controller # Add --v = X to \"- args\" , where X is an integer --v=2 shows details using diff about the changes in the configuration in nginx --v=3 shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format --v=5 configures NGINX in debug mode Authentication to the Kubernetes API Server \u00b6 A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file. Both authentications must work: +-------------+ service +------------+ | | authentication | | + apiserver +<-------------------+ ingress | | | | controller | +-------------+ +------------+ Service authentication The Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in two different ways: Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details. Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the --kubeconfig does not requires the flag --apiserver-host . The format of the file is identical to ~/.kube/config which is used by kubectl to connect to the API server. See 'kubeconfig' section for details. Using the flag --apiserver-host : Using this flag --apiserver-host=http://localhost:8080 it is possible to specify an unsecured API server or reach a remote kubernetes cluster using kubectl proxy . Please do not use this approach in production. In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left hand side. 
Kubernetes Workstation +---------------------------------------------------+ +------------------+ | | | | | +-----------+ apiserver +------------+ | | +------------+ | | | | proxy | | | | | | | | | apiserver | | ingress | | | | ingress | | | | | | controller | | | | controller | | | | | | | | | | | | | | | | | | | | | | | | | service account/ | | | | | | | | | | kubeconfig | | | | | | | | | +<-------------------+ | | | | | | | | | | | | | | | | | +------+----+ kubeconfig +------+-----+ | | +------+-----+ | | |<--------------------------------------------------------| | | | | | +---------------------------------------------------+ +------------------+ Service Account \u00b6 If using a service account to connect to the API server, Dashboard expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server. Verify with the following commands: # start a container that contains curl $ kubectl run test --image = tutum/curl -- sleep 10000 # check that container is running $ kubectl get pods NAME READY STATUS RESTARTS AGE test-701078429-s5kca 1/1 Running 0 16s # check if secret exists $ kubectl exec test-701078429-s5kca ls /var/run/secrets/kubernetes.io/serviceaccount/ ca.crt namespace token # get service IP of master $ kubectl get services NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes 10.0.0.1  443/TCP 1d # check base connectivity from cluster inside $ kubectl exec test-701078429-s5kca -- curl -k https://10.0.0.1 Unauthorized # connect using tokens $ TOKEN_VALUE = $( kubectl exec test-701078429-s5kca -- cat /var/run/secrets/kubernetes.io/serviceaccount/token ) $ echo $TOKEN_VALUE eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3Mi....9A $ kubectl exec test-701078429-s5kca -- curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H \"Authorization: Bearer $TOKEN_VALUE \" https://10.0.0.1 { \"paths\": [ \"/api\", \"/api/v1\", \"/apis\", \"/apis/apps\", \"/apis/apps/v1alpha1\", \"/apis/authentication.k8s.io\", \"/apis/authentication.k8s.io/v1beta1\", \"/apis/authorization.k8s.io\", \"/apis/authorization.k8s.io/v1beta1\", \"/apis/autoscaling\", \"/apis/autoscaling/v1\", \"/apis/batch\", \"/apis/batch/v1\", \"/apis/batch/v2alpha1\", \"/apis/certificates.k8s.io\", \"/apis/certificates.k8s.io/v1alpha1\", \"/apis/networking\", \"/apis/networking/v1beta1\", \"/apis/policy\", \"/apis/policy/v1alpha1\", \"/apis/rbac.authorization.k8s.io\", \"/apis/rbac.authorization.k8s.io/v1alpha1\", \"/apis/storage.k8s.io\", \"/apis/storage.k8s.io/v1beta1\", \"/healthz\", \"/healthz/ping\", \"/logs\", \"/metrics\", \"/swaggerapi/\", \"/ui/\", \"/version\" ] } If it is not working, there are two possible reasons: The contents of the tokens are invalid. Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret  . It will automatically be recreated. You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter. Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers. 
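A related check, not part of the procedure above (a sketch: the ServiceAccount name below is the one created by the mandatory manifest and may differ in your deployment), is to confirm that RBAC allows the controller's ServiceAccount to read the resources it watches:
$ kubectl auth can-i list ingresses --as=system:serviceaccount:ingress-nginx:nginx-ingress-serviceaccount
$ kubectl auth can-i get configmaps -n ingress-nginx --as=system:serviceaccount:ingress-nginx:nginx-ingress-serviceaccount
Both commands should print yes; a no points at missing Role/ClusterRole bindings rather than at an invalid token.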
More information: User Guide: Service Accounts Cluster Administrator Guide: Managing Service Accounts Kube-Config \u00b6 If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment. Using GDB with Nginx \u00b6 Gdb can be used to with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations. Note: The below is based on the nginx documentation . SSH into the worker $ ssh user@workerIP Obtain the Docker Container Running nginx $ docker ps | grep nginx-ingress-controller CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d9e1d243156a quay.io/kubernetes-ingress-controller/nginx-ingress-controller \"/usr/bin/dumb-init \u2026\" 19 minutes ago Up 19 minutes k8s_nginx-ingress-controller_nginx-ingress-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0 Exec into the container $ docker exec -it --user = 0 --privileged d9e1d243156a bash Make sure nginx is running in --with-debug $ nginx -V 2 > & 1 | grep -- '--with-debug' Get list of processes running on container $ ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 20:23 ? 00:00:00 /usr/bin/dumb-init /nginx-ingres root 5 1 0 20:23 ? 00:00:05 /nginx-ingress-controller --defa root 21 5 0 20:23 ? 00:00:00 nginx: master process /usr/sbin/ nobody 106 21 0 20:23 ? 00:00:00 nginx: worker process nobody 107 21 0 20:23 ? 00:00:00 nginx: worker process root 172 0 0 20:43 pts/0 00:00:00 bash Attach gdb to the nginx master process $ gdb -p 21 .... Attaching to process 21 Reading symbols from /usr/sbin/nginx...done. .... (gdb) Copy and paste the following: set $cd = ngx_cycle->config_dump set $nelts = $cd.nelts set $elts = (ngx_conf_dump_t*)($cd.elts) while ($nelts-- > 0) set $name = $elts[$nelts]->name.data printf \"Dumping %s to nginx_conf.txt\\n\", $name append memory nginx_conf.txt \\ $ elts [ $nelts ] ->buffer.start $elts [ $nelts ] ->buffer.end end Quit GDB by pressing CTRL+D Open nginx_conf.txt cat nginx_conf.txt","title":"Troubleshooting"},{"location":"troubleshooting/#troubleshooting","text":"","title":"Troubleshooting"},{"location":"troubleshooting/#ingress-controller-logs-and-events","text":"There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information. 
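Before running the checks below it can help to locate the controller pod and its namespace first (a sketch: the label matches the one used by the official manifests and may differ in customized deployments):
$ kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx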
Check the Ingress Resource Events $ kubectl get ing -n  NAME HOSTS ADDRESS PORTS AGE cafe-ingress cafe.com 10.0.2.15 80 25s $ kubectl describe ing  -n  Name: cafe-ingress Namespace: default Address: 10.0.2.15 Default backend: default-http-backend:80 (172.17.0.5:8080) Rules: Host Path Backends ---- ---- -------- cafe.com /tea tea-svc:80 () /coffee coffee-svc:80 () Annotations: kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"/apis/networking/v1beta1/namespaces/default/ingresses/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}} Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 1m nginx-ingress-controller Ingress default/cafe-ingress Normal UPDATE 58s nginx-ingress-controller Ingress default/cafe-ingress Check the Ingress Controller Logs $ kubectl get pods -n  NAME READY STATUS RESTARTS AGE nginx-ingress-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl logs -n  nginx-ingress-controller-67956bf89d-fv58j ------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.14.0 Build: git-734361d Repository: https://github.com/kubernetes/ingress-nginx ------------------------------------------------------------------------------- .... Check the Nginx Configuration $ kubectl get pods -n  NAME READY STATUS RESTARTS AGE nginx-ingress-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl exec -it -n  nginx-ingress-controller-67956bf89d-fv58j cat /etc/nginx/nginx.conf daemon off; worker_processes 2; pid /run/nginx.pid; worker_rlimit_nofile 523264; worker_shutdown_timeout 240s; events { multi_accept on; worker_connections 16384; use epoll; } http { .... Check if used Services Exist $ kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default coffee-svc ClusterIP 10.106.154.35  80/TCP 18m default kubernetes ClusterIP 10.96.0.1  443/TCP 30m default tea-svc ClusterIP 10.104.172.12  80/TCP 18m kube-system default-http-backend NodePort 10.108.189.236  80:30001/TCP 30m kube-system kube-dns ClusterIP 10.96.0.10  53/UDP,53/TCP 30m kube-system kubernetes-dashboard NodePort 10.103.128.17  80:30000/TCP 30m","title":"Ingress-Controller Logs and Events"},{"location":"troubleshooting/#debug-logging","text":"Using the flag --v=XX it is possible to increase the level of logging. This is performed by editing the deployment. 
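The flag can also be appended without an interactive edit by using a JSON patch (a sketch: it assumes the default Deployment name and that the controller is the first container in the Pod template):
$ kubectl -n ingress-nginx patch deployment nginx-ingress-controller --type=json \
    -p='[{\"op\": \"add\", \"path\": \"/spec/template/spec/containers/0/args/-\", \"value\": \"--v=2\"}]'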
$ kubectl get deploy -n  NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE default-http-backend 1 1 1 1 35m nginx-ingress-controller 1 1 1 1 35m $ kubectl edit deploy -n  nginx-ingress-controller # Add --v = X to \"- args\" , where X is an integer --v=2 shows details using diff about the changes in the configuration in nginx --v=3 shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format --v=5 configures NGINX in debug mode","title":"Debug Logging"},{"location":"troubleshooting/#authentication-to-the-kubernetes-api-server","text":"A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file. Both authentications must work: +-------------+ service +------------+ | | authentication | | + apiserver +<-------------------+ ingress | | | | controller | +-------------+ +------------+ Service authentication The Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in two different ways: Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details. Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the --kubeconfig does not requires the flag --apiserver-host . The format of the file is identical to ~/.kube/config which is used by kubectl to connect to the API server. See 'kubeconfig' section for details. Using the flag --apiserver-host : Using this flag --apiserver-host=http://localhost:8080 it is possible to specify an unsecured API server or reach a remote kubernetes cluster using kubectl proxy . Please do not use this approach in production. In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left hand side. Kubernetes Workstation +---------------------------------------------------+ +------------------+ | | | | | +-----------+ apiserver +------------+ | | +------------+ | | | | proxy | | | | | | | | | apiserver | | ingress | | | | ingress | | | | | | controller | | | | controller | | | | | | | | | | | | | | | | | | | | | | | | | service account/ | | | | | | | | | | kubeconfig | | | | | | | | | +<-------------------+ | | | | | | | | | | | | | | | | | +------+----+ kubeconfig +------+-----+ | | +------+-----+ | | |<--------------------------------------------------------| | | | | | +---------------------------------------------------+ +------------------+","title":"Authentication to the Kubernetes API Server"},{"location":"troubleshooting/#service-account","text":"If using a service account to connect to the API server, Dashboard expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server. 
Verify with the following commands: # start a container that contains curl $ kubectl run test --image = tutum/curl -- sleep 10000 # check that container is running $ kubectl get pods NAME READY STATUS RESTARTS AGE test-701078429-s5kca 1/1 Running 0 16s # check if secret exists $ kubectl exec test-701078429-s5kca ls /var/run/secrets/kubernetes.io/serviceaccount/ ca.crt namespace token # get service IP of master $ kubectl get services NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes 10.0.0.1  443/TCP 1d # check base connectivity from cluster inside $ kubectl exec test-701078429-s5kca -- curl -k https://10.0.0.1 Unauthorized # connect using tokens $ TOKEN_VALUE = $( kubectl exec test-701078429-s5kca -- cat /var/run/secrets/kubernetes.io/serviceaccount/token ) $ echo $TOKEN_VALUE eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3Mi....9A $ kubectl exec test-701078429-s5kca -- curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H \"Authorization: Bearer $TOKEN_VALUE \" https://10.0.0.1 { \"paths\": [ \"/api\", \"/api/v1\", \"/apis\", \"/apis/apps\", \"/apis/apps/v1alpha1\", \"/apis/authentication.k8s.io\", \"/apis/authentication.k8s.io/v1beta1\", \"/apis/authorization.k8s.io\", \"/apis/authorization.k8s.io/v1beta1\", \"/apis/autoscaling\", \"/apis/autoscaling/v1\", \"/apis/batch\", \"/apis/batch/v1\", \"/apis/batch/v2alpha1\", \"/apis/certificates.k8s.io\", \"/apis/certificates.k8s.io/v1alpha1\", \"/apis/networking\", \"/apis/networking/v1beta1\", \"/apis/policy\", \"/apis/policy/v1alpha1\", \"/apis/rbac.authorization.k8s.io\", \"/apis/rbac.authorization.k8s.io/v1alpha1\", \"/apis/storage.k8s.io\", \"/apis/storage.k8s.io/v1beta1\", \"/healthz\", \"/healthz/ping\", \"/logs\", \"/metrics\", \"/swaggerapi/\", \"/ui/\", \"/version\" ] } If it is not working, there are two possible reasons: The contents of the tokens are invalid. Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret  . It will automatically be recreated. You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter. Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers. More information: User Guide: Service Accounts Cluster Administrator Guide: Managing Service Accounts","title":"Service Account"},{"location":"troubleshooting/#kube-config","text":"If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment.","title":"Kube-Config"},{"location":"troubleshooting/#using-gdb-with-nginx","text":"Gdb can be used to with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations. Note: The below is based on the nginx documentation . 
SSH into the worker $ ssh user@workerIP Obtain the Docker Container Running nginx $ docker ps | grep nginx-ingress-controller CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d9e1d243156a quay.io/kubernetes-ingress-controller/nginx-ingress-controller \"/usr/bin/dumb-init \u2026\" 19 minutes ago Up 19 minutes k8s_nginx-ingress-controller_nginx-ingress-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0 Exec into the container $ docker exec -it --user = 0 --privileged d9e1d243156a bash Make sure nginx is running in --with-debug $ nginx -V 2 > & 1 | grep -- '--with-debug' Get list of processes running on container $ ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 20:23 ? 00:00:00 /usr/bin/dumb-init /nginx-ingres root 5 1 0 20:23 ? 00:00:05 /nginx-ingress-controller --defa root 21 5 0 20:23 ? 00:00:00 nginx: master process /usr/sbin/ nobody 106 21 0 20:23 ? 00:00:00 nginx: worker process nobody 107 21 0 20:23 ? 00:00:00 nginx: worker process root 172 0 0 20:43 pts/0 00:00:00 bash Attach gdb to the nginx master process $ gdb -p 21 .... Attaching to process 21 Reading symbols from /usr/sbin/nginx...done. .... (gdb) Copy and paste the following: set $cd = ngx_cycle->config_dump set $nelts = $cd.nelts set $elts = (ngx_conf_dump_t*)($cd.elts) while ($nelts-- > 0) set $name = $elts[$nelts]->name.data printf \"Dumping %s to nginx_conf.txt\\n\", $name append memory nginx_conf.txt \\ $ elts [ $nelts ] ->buffer.start $elts [ $nelts ] ->buffer.end end Quit GDB by pressing CTRL+D Open nginx_conf.txt cat nginx_conf.txt","title":"Using GDB with Nginx"},{"location":"deploy/","text":"Installation Guide \u00b6 Contents \u00b6 Prerequisite Generic Deployment Command Provider Specific Steps Docker for Mac minikube AWS GCE - GKE Azure Bare-metal Verify installation Detect installed version Using Helm Prerequisite Generic Deployment Command \u00b6 Attention The default configuration watches Ingress object from all the namespaces . To change this behavior use the flag --watch-namespace to limit the scope to a particular namespace. Warning If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. Attention If you're using GKE you need to initialize your user as a cluster-admin with the following command: kubectl create clusterrolebinding cluster-admin-binding \\ --clusterrole cluster-admin \\ --user $(gcloud config get-value account) The following Mandatory Command is required for all deployments. kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml Tip If you are using a Kubernetes version previous to 1.14, you need to change kubernetes.io/os to beta.kubernetes.io/os at line 217 of mandatory.yaml , see Labels details . Provider Specific Steps \u00b6 There are cloud provider specific yaml files. 
Docker for Mac \u00b6 Kubernetes is available in Docker for Mac (from version 18.06.0-ce ) Create a service kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml minikube \u00b6 For standard usage: minikube addons enable ingress For development: Disable the ingress addon: minikube addons disable ingress Execute make dev-env Confirm the nginx-ingress-controller deployment exists: $ kubectl get pods -n ingress-nginx NAME READY STATUS RESTARTS AGE default-http-backend-66b447d9cf-rrlf9 1/1 Running 0 12s nginx-ingress-controller-fdcdcd6dd-vvpgs 1/1 Running 0 11s AWS \u00b6 In AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer . Since Kubernetes v1.9.0 it is possible to use a classic load balancer (ELB) or network load balancer (NLB) Please check the elastic load balancing AWS details page Elastic Load Balancer - ELB \u00b6 This setup requires to choose in which layer (L4 or L7) we want to configure the ELB: Layer 4 : use TCP as the listener protocol for ports 80 and 443. Layer 7 : use HTTP as the listener protocol for port 80 and terminate TLS in the ELB For L4: Check that no change is necessary with regards to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the ELB Idle Timeouts section for additional information. If a change is required, users will need to update the value of service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout in provider/aws/service-l4.yaml Then execute: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-l4.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/patch-configmap-l4.yaml For L7: Change line of the file provider/aws/service-l7.yaml replacing the dummy id with a valid one \"arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX\" Check that no change is necessary with regards to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the ELB Idle Timeouts section for additional information. If a change is required, users will need to update the value of service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout in provider/aws/service-l7.yaml Then execute: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-l7.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/patch-configmap-l7.yaml This example creates an ELB with just two listeners, one in port 80 and another in port 443 ELB Idle Timeouts \u00b6 In some scenarios users will need to modify the value of the ELB idle timeout. Users need to ensure the idle timeout is less than the keepalive_timeout that is configured for NGINX. By default NGINX keepalive_timeout is set to 75s . The default ELB idle timeout will work for most scenarios, unless the NGINX keepalive_timeout has been modified, in which case service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout will need to be modified to ensure it is less than the keepalive_timeout the user has configured. Please Note: An idle timeout of 3600s is recommended when using WebSockets. More information with regards to idle timeouts for your Load Balancer can be found in the official AWS documentation . 
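For reference, the annotation discussed above lives in the metadata of the Service defined in provider/aws/service-l4.yaml or provider/aws/service-l7.yaml ; a minimal sketch, where the value 60 is only an example and must stay below the NGINX keepalive_timeout :
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'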
Network Load Balancer (NLB) \u00b6 This type of load balancer is supported since v1.10.0 as an ALPHA feature. kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-nlb.yaml GCE-GKE \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml Important Note: proxy protocol is not supported in GCE/GKE Azure \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml Bare-metal \u00b6 Using NodePort : kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml Tip For extended notes regarding deployments on bare-metal, see Bare-metal considerations . Verify installation \u00b6 To check if the ingress controller pods have started, run the following command: kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch Once the operator pods are running, you can cancel the above command by typing Ctrl+C . Now, you are ready to create your first ingress. Detect installed version \u00b6 To detect which version of the ingress controller is running, exec into the pod and run nginx-ingress-controller version command. POD_NAMESPACE=ingress-nginx POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}') kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version Using Helm \u00b6 NGINX Ingress controller can be installed via Helm using the chart stable/nginx-ingress from the official charts repository. To install the chart with the release name my-nginx : helm install stable/nginx-ingress --name my-nginx If the kubernetes cluster has RBAC enabled, then run: helm install stable/nginx-ingress --name my-nginx --set rbac.create=true Detect installed version: POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}') kubectl exec -it $POD_NAME -- /nginx-ingress-controller --version","title":"Installation Guide"},{"location":"deploy/#installation-guide","text":"","title":"Installation Guide"},{"location":"deploy/#contents","text":"Prerequisite Generic Deployment Command Provider Specific Steps Docker for Mac minikube AWS GCE - GKE Azure Bare-metal Verify installation Detect installed version Using Helm","title":"Contents"},{"location":"deploy/#prerequisite-generic-deployment-command","text":"Attention The default configuration watches Ingress object from all the namespaces . To change this behavior use the flag --watch-namespace to limit the scope to a particular namespace. Warning If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. Attention If you're using GKE you need to initialize your user as a cluster-admin with the following command: kubectl create clusterrolebinding cluster-admin-binding \\ --clusterrole cluster-admin \\ --user $(gcloud config get-value account) The following Mandatory Command is required for all deployments. 
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml Tip If you are using a Kubernetes version previous to 1.14, you need to change kubernetes.io/os to beta.kubernetes.io/os at line 217 of mandatory.yaml , see Labels details .","title":"Prerequisite Generic Deployment Command"},{"location":"deploy/#provider-specific-steps","text":"There are cloud provider specific yaml files.","title":"Provider Specific Steps"},{"location":"deploy/#docker-for-mac","text":"Kubernetes is available in Docker for Mac (from version 18.06.0-ce ) Create a service kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml","title":"Docker for Mac"},{"location":"deploy/#minikube","text":"For standard usage: minikube addons enable ingress For development: Disable the ingress addon: minikube addons disable ingress Execute make dev-env Confirm the nginx-ingress-controller deployment exists: $ kubectl get pods -n ingress-nginx NAME READY STATUS RESTARTS AGE default-http-backend-66b447d9cf-rrlf9 1/1 Running 0 12s nginx-ingress-controller-fdcdcd6dd-vvpgs 1/1 Running 0 11s","title":"minikube"},{"location":"deploy/#aws","text":"In AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer . Since Kubernetes v1.9.0 it is possible to use a classic load balancer (ELB) or network load balancer (NLB) Please check the elastic load balancing AWS details page","title":"AWS"},{"location":"deploy/#elastic-load-balancer-elb","text":"This setup requires to choose in which layer (L4 or L7) we want to configure the ELB: Layer 4 : use TCP as the listener protocol for ports 80 and 443. Layer 7 : use HTTP as the listener protocol for port 80 and terminate TLS in the ELB For L4: Check that no change is necessary with regards to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the ELB Idle Timeouts section for additional information. If a change is required, users will need to update the value of service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout in provider/aws/service-l4.yaml Then execute: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-l4.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/patch-configmap-l4.yaml For L7: Change line of the file provider/aws/service-l7.yaml replacing the dummy id with a valid one \"arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX\" Check that no change is necessary with regards to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the ELB Idle Timeouts section for additional information. 
If a change is required, users will need to update the value of service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout in provider/aws/service-l7.yaml Then execute: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-l7.yaml kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/patch-configmap-l7.yaml This example creates an ELB with just two listeners, one in port 80 and another in port 443","title":"Elastic Load Balancer - ELB"},{"location":"deploy/#elb-idle-timeouts","text":"In some scenarios users will need to modify the value of the ELB idle timeout. Users need to ensure the idle timeout is less than the keepalive_timeout that is configured for NGINX. By default NGINX keepalive_timeout is set to 75s . The default ELB idle timeout will work for most scenarios, unless the NGINX keepalive_timeout has been modified, in which case service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout will need to be modified to ensure it is less than the keepalive_timeout the user has configured. Please Note: An idle timeout of 3600s is recommended when using WebSockets. More information with regards to idle timeouts for your Load Balancer can be found in the official AWS documentation .","title":"ELB Idle Timeouts"},{"location":"deploy/#network-load-balancer-nlb","text":"This type of load balancer is supported since v1.10.0 as an ALPHA feature. kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/aws/service-nlb.yaml","title":"Network Load Balancer (NLB)"},{"location":"deploy/#gce-gke","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml Important Note: proxy protocol is not supported in GCE/GKE","title":"GCE-GKE"},{"location":"deploy/#azure","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml","title":"Azure"},{"location":"deploy/#bare-metal","text":"Using NodePort : kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml Tip For extended notes regarding deployments on bare-metal, see Bare-metal considerations .","title":"Bare-metal"},{"location":"deploy/#verify-installation","text":"To check if the ingress controller pods have started, run the following command: kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch Once the operator pods are running, you can cancel the above command by typing Ctrl+C . Now, you are ready to create your first ingress.","title":"Verify installation"},{"location":"deploy/#detect-installed-version","text":"To detect which version of the ingress controller is running, exec into the pod and run nginx-ingress-controller version command. POD_NAMESPACE=ingress-nginx POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}') kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version","title":"Detect installed version"},{"location":"deploy/#using-helm","text":"NGINX Ingress controller can be installed via Helm using the chart stable/nginx-ingress from the official charts repository. 
To install the chart with the release name my-nginx : helm install stable/nginx-ingress --name my-nginx If the kubernetes cluster has RBAC enabled, then run: helm install stable/nginx-ingress --name my-nginx --set rbac.create=true Detect installed version: POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}') kubectl exec -it $POD_NAME -- /nginx-ingress-controller --version","title":"Using Helm"},{"location":"deploy/baremetal/","text":"Bare-metal considerations \u00b6 In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers. The rest of this document describes a few recommended approaches to deploying the NGINX Ingress controller inside a Kubernetes cluster running on bare-metal. A pure software solution: MetalLB \u00b6 MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster. This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes . In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details. Note The description of other supported configuration modes is off-scope for this document. Warning MetalLB is currently in beta . Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly. MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions. MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. In the simplest possible scenario, the pool is composed of the IP addresses of Kubernetes nodes, but IP addresses can also be handed out by a DHCP server. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 After creating the following ConfigMap, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly. 
apiVersion : v1 kind : ConfigMap metadata : namespace : metallb-system name : config data : config : | address-pools: - name: default protocol: layer2 addresses: - 203.0.113.2-203.0.113.3 $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) default-http-backend ClusterIP 10.0.64.249  80/TCP ingress-nginx LoadBalancer 10.0.220.217 203.0.113.3 80:30100/TCP,443:30101/TCP As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service: $ curl -D- http://203.0.113.3 -H 'Host: myapp.example.com' HTTP/1.1 200 OK Server: nginx/1.15.2 Tip In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more details in Traffic policies as well as in the next section. Over a NodePort Service \u00b6 Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide . Info A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services . In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests. Example Given the NodePort 30100 allocated to the ingress-nginx Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 a client would reach an Ingress with host : myapp . example . com at http://myapp.example.com:30100 , where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address. Impact on the host system While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require. This practice is therefore discouraged . See the other approaches proposed in this page for alternatives. This approach has a few other limitations one ought to be aware of: Source IP address Services of type NodePort perform source address translation by default. This means the source IP of a HTTP request is always the IP address of the Kubernetes node that received the request from the perspective of NGINX. 
The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local ( example ). Warning This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the NGINX Ingress controller should be scheduled or not scheduled. Example In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 with a nginx-ingress-controller Deployment composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3 nginx-ingress-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2 Requests sent to host-2 and host-3 would be forwarded to NGINX and original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node. Ingress status Because NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller does not update the status of Ingress objects it manages . $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Despite the fact there is no load balancer providing a public IP address to the NGINX Ingress controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service. Warning There is more to setting externalIPs than just enabling the NGINX Ingress controller to update the status of Ingress objects. Please read about this option in the Services page of official Kubernetes documentation as well as the section about External IPs in this document for more information. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 one could edit the ingress-nginx Service and add the following field to the object spec spec : externalIPs : - 203.0.113.1 - 203.0.113.2 - 203.0.113.3 which would in turn be reflected on Ingress objects as follows: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80 Redirects As NGINX is not aware of the port translation operated by the NodePort Service , backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort. Example Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain , are generated without NodePort: $ curl -D- http://myapp.example.com:30100 ` HTTP/1.1 308 Permanent Redirect Server: nginx/1.15.2 Location: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect Via the host network \u00b6 In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. 
The benefit of this approach is that the NGINX Ingress controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services. Note This approach does not leverage any Service object to expose the NGINX Ingress controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it . This can be achieved by enabling the hostNetwork option in the Pods' spec. template : spec : hostNetwork : true Security considerations Enabling this option exposes every system daemon to the NGINX Ingress controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully. Example Consider this nginx-ingress-controller Deployment composed of 2 replicas, NGINX Pods inherit from the IP address of their host instead of an internal Pod IP. $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 nginx-ingress-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 One major limitation of this deployment approach is that only a single NGINX Ingress controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to such situation fail with the following event: $ kubectl -n ingress-nginx describe pod  ... Events: Type Reason From Message ---- ------ ---- ------- Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports. One way to ensure only schedulable Pods are created is to deploy the NGINX Ingress controller as a DaemonSet instead of a traditional Deployment. Info A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods . For more information, see DaemonSet . Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion. Like with NodePorts, this approach has a few quirks it is important to be aware of. DNS resolution Pods configured with hostNetwork : true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS ), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet . Consider using this setting if NGINX is expected to resolve internal names for any reason. Ingress status Because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank. $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the NGINX Ingress controller. 
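Putting the pieces of this section together, the relevant part of a DaemonSet Pod template could look as follows (a sketch only: every other flag and field of the official manifest is omitted and the container name is assumed):
template:
  spec:
    hostNetwork: true
    # keeps in-cluster name resolution working on the host network
    dnsPolicy: ClusterFirstWithHostNet
    containers:
    - name: nginx-ingress-controller
      args:
      - /nginx-ingress-controller
      - --report-node-internal-ip-address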
Example Given a nginx-ingress-controller DaemonSet composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 nginx-ingress-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 the controller sets the status of all Ingress objects it manages to the following value: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.2,203.0.113.3 80 Note Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments . Using a self-provisioned edge \u00b6 Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. HAproxy ) and is usually managed outside of the Kubernetes landscape by operations teams. Such deployment builds upon the NodePort Service described above in Over a NodePort Service , with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address. On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below: External IPs \u00b6 Source IP address This method does not allow preserving the source IP of HTTP requests in any manner, it is therefore not recommended to use it despite its apparent simplicity. The externalIPs Service option was previously mentioned in the NodePort section. As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node . Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 and the following ingress-nginx NodePort Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port: spec : externalIPs : - 203.0.113.2 - 203.0.113.3 $ curl -D- http://myapp.example.com:30100 HTTP/1.1 200 OK Server: nginx/1.15.2 $ curl -D- http://myapp.example.com HTTP/1.1 200 OK Server: nginx/1.15.2 We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.","title":"Bare-metal considerations"},{"location":"deploy/baremetal/#bare-metal-considerations","text":"In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to any application running inside the cluster. 
Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers. The rest of this document describes a few recommended approaches to deploying the NGINX Ingress controller inside a Kubernetes cluster running on bare-metal.","title":"Bare-metal considerations"},{"location":"deploy/baremetal/#a-pure-software-solution-metallb","text":"MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster. This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes . In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details. Note The description of other supported configuration modes is off-scope for this document. Warning MetalLB is currently in beta . Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly. MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions. MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. In the simplest possible scenario, the pool is composed of the IP addresses of Kubernetes nodes, but IP addresses can also be handed out by a DHCP server. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 After creating the following ConfigMap, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly. apiVersion : v1 kind : ConfigMap metadata : namespace : metallb-system name : config data : config : | address-pools: - name: default protocol: layer2 addresses: - 203.0.113.2-203.0.113.3 $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) default-http-backend ClusterIP 10.0.64.249  80/TCP ingress-nginx LoadBalancer 10.0.220.217 203.0.113.3 80:30100/TCP,443:30101/TCP As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service: $ curl -D- http://203.0.113.3 -H 'Host: myapp.example.com' HTTP/1.1 200 OK Server: nginx/1.15.2 Tip In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more details in Traffic policies as well as in the next section.","title":"A pure software solution: MetalLB"},{"location":"deploy/baremetal/#over-a-nodeport-service","text":"Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide . 
Info A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services . In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests. Example Given the NodePort 30100 allocated to the ingress-nginx Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 a client would reach an Ingress with host : myapp . example . com at http://myapp.example.com:30100 , where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address. Impact on the host system While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require. This practice is therefore discouraged . See the other approaches proposed in this page for alternatives. This approach has a few other limitations one ought to be aware of: Source IP address Services of type NodePort perform source address translation by default. This means the source IP of a HTTP request is always the IP address of the Kubernetes node that received the request from the perspective of NGINX. The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local ( example ). Warning This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the NGINX Ingress controller should be scheduled or not scheduled. Example In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 with a nginx-ingress-controller Deployment composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3 nginx-ingress-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2 Requests sent to host-2 and host-3 would be forwarded to NGINX and original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node. 
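As a sketch, the change boils down to a single field on the ingress-nginx Service; the selector and ports below are placeholders for whatever your NodePort Service already defines.

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  # preserve the original client IP; nodes without a controller Pod will drop traffic
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx   # must match your controller Pods
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443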
Ingress status Because NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller does not update the status of Ingress objects it manages . $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Despite the fact there is no load balancer providing a public IP address to the NGINX Ingress controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service. Warning There is more to setting externalIPs than just enabling the NGINX Ingress controller to update the status of Ingress objects. Please read about this option in the Services page of official Kubernetes documentation as well as the section about External IPs in this document for more information. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 one could edit the ingress-nginx Service and add the following field to the object spec spec : externalIPs : - 203.0.113.1 - 203.0.113.2 - 203.0.113.3 which would in turn be reflected on Ingress objects as follows: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80 Redirects As NGINX is not aware of the port translation operated by the NodePort Service , backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort. Example Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain , are generated without NodePort: $ curl -D- http://myapp.example.com:30100 ` HTTP/1.1 308 Permanent Redirect Server: nginx/1.15.2 Location: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect","title":"Over a NodePort Service"},{"location":"deploy/baremetal/#via-the-host-network","text":"In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the NGINX Ingress controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services. Note This approach does not leverage any Service object to expose the NGINX Ingress controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it . This can be achieved by enabling the hostNetwork option in the Pods' spec. template : spec : hostNetwork : true Security considerations Enabling this option exposes every system daemon to the NGINX Ingress controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully. Example Consider this nginx-ingress-controller Deployment composed of 2 replicas, NGINX Pods inherit from the IP address of their host instead of an internal Pod IP. 
$ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 nginx-ingress-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 One major limitation of this deployment approach is that only a single NGINX Ingress controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to such situation fail with the following event: $ kubectl -n ingress-nginx describe pod  ... Events: Type Reason From Message ---- ------ ---- ------- Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports. One way to ensure only schedulable Pods are created is to deploy the NGINX Ingress controller as a DaemonSet instead of a traditional Deployment. Info A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods . For more information, see DaemonSet . Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion. Like with NodePorts, this approach has a few quirks it is important to be aware of. DNS resolution Pods configured with hostNetwork : true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS ), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet . Consider using this setting if NGINX is expected to resolve internal names for any reason. Ingress status Because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank. $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the NGINX Ingress controller. Example Given a nginx-ingress-controller DaemonSet composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 nginx-ingress-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 the controller sets the status of all Ingress objects it manages to the following value: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.2,203.0.113.3 80 Note Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments .","title":"Via the host network"},{"location":"deploy/baremetal/#using-a-self-provisioned-edge","text":"Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. HAproxy ) and is usually managed outside of the Kubernetes landscape by operations teams. 
Such deployment builds upon the NodePort Service described above in Over a NodePort Service , with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address. On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below:","title":"Using a self-provisioned edge"},{"location":"deploy/baremetal/#external-ips","text":"Source IP address This method does not allow preserving the source IP of HTTP requests in any manner, it is therefore not recommended to use it despite its apparent simplicity. The externalIPs Service option was previously mentioned in the NodePort section. As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node . Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 and the following ingress-nginx NodePort Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port: spec : externalIPs : - 203.0.113.2 - 203.0.113.3 $ curl -D- http://myapp.example.com:30100 HTTP/1.1 200 OK Server: nginx/1.15.2 $ curl -D- http://myapp.example.com HTTP/1.1 200 OK Server: nginx/1.15.2 We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.","title":"External IPs"},{"location":"deploy/rbac/","text":"Role Based Access Control (RBAC) \u00b6 Overview \u00b6 This example applies to nginx-ingress-controllers being deployed in an environment with RBAC enabled. Role Based Access Control is comprised of four layers: ClusterRole - permissions assigned to a role that apply to an entire cluster ClusterRoleBinding - binding a ClusterRole to a specific account Role - permissions assigned to a role that apply to a specific namespace RoleBinding - binding a Role to a specific account In order for RBAC to be applied to an nginx-ingress-controller, that controller should be assigned to a ServiceAccount . That ServiceAccount should be bound to the Role s and ClusterRole s defined for the nginx-ingress-controller. Service Accounts created in this example \u00b6 One ServiceAccount is created in this example, nginx-ingress-serviceaccount . Permissions Granted in this example \u00b6 There are two sets of permissions defined in this example. Cluster-wide permissions defined by the ClusterRole named nginx-ingress-clusterrole , and namespace specific permissions defined by the Role named nginx-ingress-role . Cluster Permissions \u00b6 These permissions are granted in order for the nginx-ingress-controller to be able to function as an ingress across the cluster. 
These permissions are granted to the ClusterRole named nginx-ingress-clusterrole configmaps , endpoints , nodes , pods , secrets : list, watch nodes : get services , ingresses : get, list, watch events : create, patch ingresses/status : update Namespace Permissions \u00b6 These permissions are granted specific to the nginx-ingress namespace. These permissions are granted to the Role named nginx-ingress-role configmaps , pods , secrets : get endpoints : get Furthermore to support leader-election, the nginx-ingress-controller needs to have access to a configmap using the resourceName ingress-controller-leader-nginx Note that resourceNames can NOT be used to limit requests using the \u201ccreate\u201d verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a \u201ccreate\u201d request are part of the request body). configmaps : get, update (for resourceName ingress-controller-leader-nginx ) configmaps : create This resourceName is the concatenation of the election-id and the ingress-class as defined by the ingress-controller, which defaults to: election-id : ingress-controller-leader ingress-class : nginx resourceName : - Please adapt accordingly if you overwrite either parameter when launching the nginx-ingress-controller. Bindings \u00b6 The ServiceAccount nginx-ingress-serviceaccount is bound to the Role nginx-ingress-role and the ClusterRole nginx-ingress-clusterrole . The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the nginx-ingress namespace.","title":"Role Based Access Control (RBAC)"},{"location":"deploy/rbac/#role-based-access-control-rbac","text":"","title":"Role Based Access Control (RBAC)"},{"location":"deploy/rbac/#overview","text":"This example applies to nginx-ingress-controllers being deployed in an environment with RBAC enabled. Role Based Access Control is comprised of four layers: ClusterRole - permissions assigned to a role that apply to an entire cluster ClusterRoleBinding - binding a ClusterRole to a specific account Role - permissions assigned to a role that apply to a specific namespace RoleBinding - binding a Role to a specific account In order for RBAC to be applied to an nginx-ingress-controller, that controller should be assigned to a ServiceAccount . That ServiceAccount should be bound to the Role s and ClusterRole s defined for the nginx-ingress-controller.","title":"Overview"},{"location":"deploy/rbac/#service-accounts-created-in-this-example","text":"One ServiceAccount is created in this example, nginx-ingress-serviceaccount .","title":"Service Accounts created in this example"},{"location":"deploy/rbac/#permissions-granted-in-this-example","text":"There are two sets of permissions defined in this example. Cluster-wide permissions defined by the ClusterRole named nginx-ingress-clusterrole , and namespace specific permissions defined by the Role named nginx-ingress-role .","title":"Permissions Granted in this example"},{"location":"deploy/rbac/#cluster-permissions","text":"These permissions are granted in order for the nginx-ingress-controller to be able to function as an ingress across the cluster. 
These permissions are granted to the ClusterRole named nginx-ingress-clusterrole configmaps , endpoints , nodes , pods , secrets : list, watch nodes : get services , ingresses : get, list, watch events : create, patch ingresses/status : update","title":"Cluster Permissions"},{"location":"deploy/rbac/#namespace-permissions","text":"These permissions are granted specific to the nginx-ingress namespace. These permissions are granted to the Role named nginx-ingress-role configmaps , pods , secrets : get endpoints : get Furthermore to support leader-election, the nginx-ingress-controller needs to have access to a configmap using the resourceName ingress-controller-leader-nginx Note that resourceNames can NOT be used to limit requests using the \u201ccreate\u201d verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a \u201ccreate\u201d request are part of the request body). configmaps : get, update (for resourceName ingress-controller-leader-nginx ) configmaps : create This resourceName is the concatenation of the election-id and the ingress-class as defined by the ingress-controller, which defaults to: election-id : ingress-controller-leader ingress-class : nginx resourceName : - Please adapt accordingly if you overwrite either parameter when launching the nginx-ingress-controller.","title":"Namespace Permissions"},{"location":"deploy/rbac/#bindings","text":"The ServiceAccount nginx-ingress-serviceaccount is bound to the Role nginx-ingress-role and the ClusterRole nginx-ingress-clusterrole . The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the nginx-ingress namespace.","title":"Bindings"},{"location":"deploy/upgrade/","text":"Upgrading \u00b6 Important No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx . Without Helm \u00b6 To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment. I.e. if your deployment resource looks like (partial example): kind : Deployment metadata : name : nginx-ingress-controller namespace : ingress-nginx spec : replicas : 1 selector : ... template : metadata : ... spec : containers : - name : nginx-ingress-controller image : quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 args : ... simply change the 0.9.0 tag to the version you wish to upgrade to. The easiest way to do this is e.g. (do note you may need to change the name parameter according to your installation): kubectl set image deployment/nginx-ingress-controller \\ nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.18.0 For interactive editing, use kubectl edit deployment nginx-ingress-controller . 
With Helm \u00b6 If you installed ingress-nginx using the Helm command in the deployment docs so its name is ngx-ingress , you should be able to upgrade using helm upgrade --reuse-values ngx-ingress stable/nginx-ingress","title":"Upgrade"},{"location":"deploy/upgrade/#upgrading","text":"Important No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx .","title":"Upgrading"},{"location":"deploy/upgrade/#without-helm","text":"To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment. I.e. if your deployment resource looks like (partial example): kind : Deployment metadata : name : nginx-ingress-controller namespace : ingress-nginx spec : replicas : 1 selector : ... template : metadata : ... spec : containers : - name : nginx-ingress-controller image : quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0 args : ... simply change the 0.9.0 tag to the version you wish to upgrade to. The easiest way to do this is e.g. (do note you may need to change the name parameter according to your installation): kubectl set image deployment/nginx-ingress-controller \\ nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.18.0 For interactive editing, use kubectl edit deployment nginx-ingress-controller .","title":"Without Helm"},{"location":"deploy/upgrade/#with-helm","text":"If you installed ingress-nginx using the Helm command in the deployment docs so its name is ngx-ingress , you should be able to upgrade using helm upgrade --reuse-values ngx-ingress stable/nginx-ingress","title":"With Helm"},{"location":"deploy/validating-webhook/","text":"Validating webhook (admission controller) \u00b6 Overview \u00b6 Nginx ingress controller offers the option to validate ingresses before they enter the cluster, ensuring controller will generate a valid configuration. This controller is called, when ValidatingAdmissionWebhook is enabled, by the Kubernetes API server each time a new ingress is to enter the cluster, and rejects objects for which the generated nginx configuration fails to be validated. This feature requires some further configuration of the cluster, hence it is an optional feature, this section explains how to enable it for your cluster. Configure the webhook \u00b6 Generate the webhook certificate \u00b6 Self signed certificate \u00b6 Validating webhook must be served using TLS, you need to generate a certificate. Note that kube API server is checking the hostname of the certificate, the common name of your certificate will need to match the service name. Example To run the validating webhook with a service named ingress-validation-webhook in the namespace ingress-nginx , run openssl req -x509 -newkey rsa:2048 -keyout certificate.pem -out key.pem -days 365 -nodes -subj \"/CN=ingress-validation-webhook.ingress-nginx.svc\" Using Kubernetes CA \u00b6 Kubernetes also provides primitives to sign a certificate request. 
Here is an example on how to use it Example #!/bin/bash SERVICE_NAME = ingress-nginx NAMESPACE = ingress-nginx TEMP_DIRECTORY = $( mktemp -d ) echo \"creating certs in directory ${ TEMP_DIRECTORY } \" cat <> ${TEMP_DIRECTORY}/csr.conf [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment extendedKeyUsage = serverAuth subjectAltName = @alt_names [alt_names] DNS.1 = ${SERVICE_NAME} DNS.2 = ${SERVICE_NAME}.${NAMESPACE} DNS.3 = ${SERVICE_NAME}.${NAMESPACE}.svc EOF openssl genrsa -out ${ TEMP_DIRECTORY } /server-key.pem 2048 openssl req -new -key ${ TEMP_DIRECTORY } /server-key.pem \\ -subj \"/CN= ${ SERVICE_NAME } . ${ NAMESPACE } .svc\" \\ -out ${ TEMP_DIRECTORY } /server.csr \\ -config ${ TEMP_DIRECTORY } /csr.conf cat < & 2 exit 1 fi echo ${ SERVER_CERT } | openssl base64 -d -A -out ${ TEMP_DIRECTORY } /server-cert.pem kubectl create secret generic ingress-nginx.svc \\ --from-file = key.pem = ${ TEMP_DIRECTORY } /server-key.pem \\ --from-file = cert.pem = ${ TEMP_DIRECTORY } /server-cert.pem \\ -n ${ NAMESPACE } Using helm \u00b6 To generate the certificate using helm, you can use the following snippet Example {{ - $ cn := printf \"%s.%s.svc\" ( include \"nginx-ingress.validatingWebhook.fullname\" . ) .Release.Namespace }} {{ - $ ca := genCA ( printf \"%s-ca\" ( include \"nginx-ingress.validatingWebhook.fullname\" . )) .Values.validatingWebhook.certificateValidity - }} {{ - $ cert := genSignedCert $ cn nil nil .Values.validatingWebhook.certificateValidity $ ca - }} Ingress controller flags \u00b6 To enable the feature in the ingress controller, you need to provide 3 flags to the command line. flag description example usage --validating-webhook The address to start an admission controller on :8080 --validating-webhook-certificate The certificate the webhook is using for its TLS handling /usr/local/certificates/validating-webhook.pem --validating-webhook-key The key the webhook is using for its TLS handling /usr/local/certificates/validating-webhook-key.pem kube API server flags \u00b6 Validating webhook feature requires specific setup on the kube API server side. Depending on your kubernetes version, the flag can, or not, be enabled by default. To check that your kube API server runs with the required flags, please refer to the kubernetes documentation. 
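As a sketch of how the controller flags above fit together, the following Deployment excerpt assumes the certificate was stored in the ingress-nginx.svc Secret created in the previous section and is mounted under /usr/local/certificates/; adjust the names and paths to your own setup.

# excerpt of the controller Pod spec
spec:
  containers:
    - name: nginx-ingress-controller
      args:
        - /nginx-ingress-controller
        - --validating-webhook=:8080
        - --validating-webhook-certificate=/usr/local/certificates/cert.pem
        - --validating-webhook-key=/usr/local/certificates/key.pem
      volumeMounts:
        - name: webhook-cert
          mountPath: /usr/local/certificates/
          readOnly: true
  volumes:
    - name: webhook-cert
      secret:
        secretName: ingress-nginx.svc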
Additional kubernetes objects \u00b6 Once both the ingress controller and the kube API server are configured to serve the webhook, add the you can configure the webhook with the following objects: apiVersion : v1 kind : Service metadata : name : ingress-validation-webhook namespace : ingress-nginx spec : ports : - name : admission port : 443 protocol : TCP targetPort : 8080 selector : app : nginx-ingress component : controller --- apiVersion : admissionregistration.k8s.io/v1beta1 kind : ValidatingWebhookConfiguration metadata : name : check-ingress webhooks : - name : validate.nginx.ingress.kubernetes.io rules : - apiGroups : - extensions apiVersions : - v1beta1 operations : - CREATE - UPDATE resources : - ingresses failurePolicy : Fail clientConfig : service : namespace : ingress-nginx name : ingress-validation-webhook path : /extensions/v1beta1/ingress caBundle : ","title":"Validating Webhook (admission controller)"},{"location":"deploy/validating-webhook/#validating-webhook-admission-controller","text":"","title":"Validating webhook (admission controller)"},{"location":"deploy/validating-webhook/#overview","text":"Nginx ingress controller offers the option to validate ingresses before they enter the cluster, ensuring controller will generate a valid configuration. This controller is called, when ValidatingAdmissionWebhook is enabled, by the Kubernetes API server each time a new ingress is to enter the cluster, and rejects objects for which the generated nginx configuration fails to be validated. This feature requires some further configuration of the cluster, hence it is an optional feature, this section explains how to enable it for your cluster.","title":"Overview"},{"location":"deploy/validating-webhook/#configure-the-webhook","text":"","title":"Configure the webhook"},{"location":"deploy/validating-webhook/#generate-the-webhook-certificate","text":"","title":"Generate the webhook certificate"},{"location":"deploy/validating-webhook/#self-signed-certificate","text":"Validating webhook must be served using TLS, you need to generate a certificate. Note that kube API server is checking the hostname of the certificate, the common name of your certificate will need to match the service name. Example To run the validating webhook with a service named ingress-validation-webhook in the namespace ingress-nginx , run openssl req -x509 -newkey rsa:2048 -keyout certificate.pem -out key.pem -days 365 -nodes -subj \"/CN=ingress-validation-webhook.ingress-nginx.svc\"","title":"Self signed certificate"},{"location":"deploy/validating-webhook/#using-kubernetes-ca","text":"Kubernetes also provides primitives to sign a certificate request. Here is an example on how to use it Example #!/bin/bash SERVICE_NAME = ingress-nginx NAMESPACE = ingress-nginx TEMP_DIRECTORY = $( mktemp -d ) echo \"creating certs in directory ${ TEMP_DIRECTORY } \" cat <> ${TEMP_DIRECTORY}/csr.conf [req] req_extensions = v3_req distinguished_name = req_distinguished_name [req_distinguished_name] [ v3_req ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment extendedKeyUsage = serverAuth subjectAltName = @alt_names [alt_names] DNS.1 = ${SERVICE_NAME} DNS.2 = ${SERVICE_NAME}.${NAMESPACE} DNS.3 = ${SERVICE_NAME}.${NAMESPACE}.svc EOF openssl genrsa -out ${ TEMP_DIRECTORY } /server-key.pem 2048 openssl req -new -key ${ TEMP_DIRECTORY } /server-key.pem \\ -subj \"/CN= ${ SERVICE_NAME } . 
${ NAMESPACE } .svc\" \\ -out ${ TEMP_DIRECTORY } /server.csr \\ -config ${ TEMP_DIRECTORY } /csr.conf cat < & 2 exit 1 fi echo ${ SERVER_CERT } | openssl base64 -d -A -out ${ TEMP_DIRECTORY } /server-cert.pem kubectl create secret generic ingress-nginx.svc \\ --from-file = key.pem = ${ TEMP_DIRECTORY } /server-key.pem \\ --from-file = cert.pem = ${ TEMP_DIRECTORY } /server-cert.pem \\ -n ${ NAMESPACE }","title":"Using Kubernetes CA"},{"location":"deploy/validating-webhook/#using-helm","text":"To generate the certificate using helm, you can use the following snippet Example {{ - $ cn := printf \"%s.%s.svc\" ( include \"nginx-ingress.validatingWebhook.fullname\" . ) .Release.Namespace }} {{ - $ ca := genCA ( printf \"%s-ca\" ( include \"nginx-ingress.validatingWebhook.fullname\" . )) .Values.validatingWebhook.certificateValidity - }} {{ - $ cert := genSignedCert $ cn nil nil .Values.validatingWebhook.certificateValidity $ ca - }}","title":"Using helm"},{"location":"deploy/validating-webhook/#ingress-controller-flags","text":"To enable the feature in the ingress controller, you need to provide 3 flags to the command line. flag description example usage --validating-webhook The address to start an admission controller on :8080 --validating-webhook-certificate The certificate the webhook is using for its TLS handling /usr/local/certificates/validating-webhook.pem --validating-webhook-key The key the webhook is using for its TLS handling /usr/local/certificates/validating-webhook-key.pem","title":"Ingress controller flags"},{"location":"deploy/validating-webhook/#kube-api-server-flags","text":"Validating webhook feature requires specific setup on the kube API server side. Depending on your kubernetes version, the flag can, or not, be enabled by default. To check that your kube API server runs with the required flags, please refer to the kubernetes documentation.","title":"kube API server flags"},{"location":"deploy/validating-webhook/#additional-kubernetes-objects","text":"Once both the ingress controller and the kube API server are configured to serve the webhook, add the you can configure the webhook with the following objects: apiVersion : v1 kind : Service metadata : name : ingress-validation-webhook namespace : ingress-nginx spec : ports : - name : admission port : 443 protocol : TCP targetPort : 8080 selector : app : nginx-ingress component : controller --- apiVersion : admissionregistration.k8s.io/v1beta1 kind : ValidatingWebhookConfiguration metadata : name : check-ingress webhooks : - name : validate.nginx.ingress.kubernetes.io rules : - apiGroups : - extensions apiVersions : - v1beta1 operations : - CREATE - UPDATE resources : - ingresses failurePolicy : Fail clientConfig : service : namespace : ingress-nginx name : ingress-validation-webhook path : /extensions/v1beta1/ingress caBundle : ","title":"Additional kubernetes objects"},{"location":"enhancements/","text":"Kubernetes Enhancement Proposals (KEPs) \u00b6 A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project. For this reason, the ingress-nginx project is adopting it. Quick start for the KEP process \u00b6 Follow the process outlined in the KEP template Do I have to use the KEP process? \u00b6 No... but we hope that you will. Over time having a rich set of KEPs in one place will make it easier for people to track what is going on in the community and find a structured historic record. 
KEPs are only required when the changes are wide ranging and impact most of the project. Why would I want to use the KEP process? \u00b6 Our aim with KEPs is to clearly communicate new efforts to the Kubernetes contributor community. As such, we want to build a well curated set of clear proposals in a common format with useful metadata. Benefits to KEP users (in the limit): Exposure on a kubernetes blessed web site that is findable via web search engines. Cross indexing of KEPs so that users can find connections and the current status of any KEP. A clear process with approvers and reviewers for making decisions. This will lead to more structured decisions that stick as there is a discoverable record around the decisions. We are inspired by IETF RFCs, Python PEPs, and Rust RFCs.","title":"Kubernetes Enhancement Proposals (KEPs)"},{"location":"enhancements/#kubernetes-enhancement-proposals-keps","text":"A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project. For this reason, the ingress-nginx project is adopting it.","title":"Kubernetes Enhancement Proposals (KEPs)"},{"location":"enhancements/#quick-start-for-the-kep-process","text":"Follow the process outlined in the KEP template","title":"Quick start for the KEP process"},{"location":"enhancements/#do-i-have-to-use-the-kep-process","text":"No... but we hope that you will. Over time having a rich set of KEPs in one place will make it easier for people to track what is going on in the community and find a structured historic record. KEPs are only required when the changes are wide ranging and impact most of the project.","title":"Do I have to use the KEP process?"},{"location":"enhancements/#why-would-i-want-to-use-the-kep-process","text":"Our aim with KEPs is to clearly communicate new efforts to the Kubernetes contributor community. As such, we want to build a well curated set of clear proposals in a common format with useful metadata. Benefits to KEP users (in the limit): Exposure on a kubernetes blessed web site that is findable via web search engines. Cross indexing of KEPs so that users can find connections and the current status of any KEP. A clear process with approvers and reviewers for making decisions. This will lead to more structured decisions that stick as there is a discoverable record around the decisions. We are inspired by IETF RFCs, Python PEPs, and Rust RFCs.","title":"Why would I want to use the KEP process?"},{"location":"enhancements/20190724-only-dynamic-ssl/","text":"Remove static SSL configuration mode \u00b6 Table of Contents \u00b6 Summary Motivation Goals Non-Goals Proposal Implementation Details/Notes/Constraints Drawbacks Alternatives Summary \u00b6 Since release 0.19.0 is possible to configure SSL certificates without the need of NGINX reloads (thanks to lua) and after release 0.24.0 the default enabled mode is dynamic. Motivation \u00b6 The static configuration implies reloads, something that affects the majority of the users. Goals \u00b6 Deprecation of the flag --enable-dynamic-certificates . Cleanup of the codebase. Non-Goals \u00b6 Features related to certificate authentication are not changed in any way. Proposal \u00b6 Remove static SSL configuration Implementation Details/Notes/Constraints \u00b6 Deprecate the flag Move the directives ssl_certificate and ssl_certificate_key from each server block to the http section. These settings are required to avoid NGINX errors in the logs. 
Remove any action of the flag --enable-dynamic-certificates Drawbacks \u00b6 Alternatives \u00b6 Keep both implementations","title":"Remove static SSL configuration mode"},{"location":"enhancements/20190724-only-dynamic-ssl/#remove-static-ssl-configuration-mode","text":"","title":"Remove static SSL configuration mode"},{"location":"enhancements/20190724-only-dynamic-ssl/#table-of-contents","text":"Summary Motivation Goals Non-Goals Proposal Implementation Details/Notes/Constraints Drawbacks Alternatives","title":"Table of Contents"},{"location":"enhancements/20190724-only-dynamic-ssl/#summary","text":"Since release 0.19.0 is possible to configure SSL certificates without the need of NGINX reloads (thanks to lua) and after release 0.24.0 the default enabled mode is dynamic.","title":"Summary"},{"location":"enhancements/20190724-only-dynamic-ssl/#motivation","text":"The static configuration implies reloads, something that affects the majority of the users.","title":"Motivation"},{"location":"enhancements/20190724-only-dynamic-ssl/#goals","text":"Deprecation of the flag --enable-dynamic-certificates . Cleanup of the codebase.","title":"Goals"},{"location":"enhancements/20190724-only-dynamic-ssl/#non-goals","text":"Features related to certificate authentication are not changed in any way.","title":"Non-Goals"},{"location":"enhancements/20190724-only-dynamic-ssl/#proposal","text":"Remove static SSL configuration","title":"Proposal"},{"location":"enhancements/20190724-only-dynamic-ssl/#implementation-detailsnotesconstraints","text":"Deprecate the flag Move the directives ssl_certificate and ssl_certificate_key from each server block to the http section. These settings are required to avoid NGINX errors in the logs. Remove any action of the flag --enable-dynamic-certificates","title":"Implementation Details/Notes/Constraints"},{"location":"enhancements/20190724-only-dynamic-ssl/#drawbacks","text":"","title":"Drawbacks"},{"location":"enhancements/20190724-only-dynamic-ssl/#alternatives","text":"Keep both implementations","title":"Alternatives"},{"location":"enhancements/20190815-zone-aware-routing/","text":"Availability zone aware routing \u00b6 Table of Contents \u00b6 Summary Motivation Goals Non-Goals Proposal Implementation History Drawbacks [optional] Summary \u00b6 Teach ingress-nginx about availability zones where endpoints are running in. This way ingress-nginx pod will do its best to proxy to zone-local endpoint. Motivation \u00b6 When users run their services across multiple availability zones they usually pay for egress traffic between zones. Providers such as GCP, Amazon EC charges money for that. ingress-nginx when picking an endpoint to route request to does not consider whether the endpoint is in different zone or the same one. That means it's at least equally likely that it will pick an endpoint from another zone and proxy the request to it. In this situation response from the endpoint to ingress-nginx pod is considered as inter zone traffic and costs money. At the time of this writing GCP charges $0.01 per GB of inter zone egress traffic according to https://cloud.google.com/compute/network-pricing. According to https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/ Amazon also charges the same amount of money sa GCP for cross zone, egress traffic. This can be a lot of money depending on once's traffic. By teaching ingress-nginx about zones we can eliminate or at least decrease this cost. 
Arguably inter-zone network latency should also be better than cross zone. Goals \u00b6 Given a regional cluster running ingress-nginx, ingress-nginx should do best effort to pick zone-local endpoint when proxying This should not impact canary feature ingress-nginx should be able to operate successfully if there's no zonal endpoints Non-Goals \u00b6 This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress-nginx pod(s) in that zone This feature will be relying on https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone, it is not this KEP's goal to support other cases Proposal \u00b6 The idea here is to have controller part of ingress-nginx to (1) detect what zone its current pod is running in and (2) detect the zone for every endpoints it knows about. After that it will post that data as part of endpoints to Lua land. Then Lua balancer when picking an endpoint will try to pick zone-local endpoint first and if there is no zone-local endpoint then it will fallback to current behaviour. This feature at least in the beginning should be optional since it is going to make it harder to reason about the load balancing and not everyone might want that. How does controller know what zone it runs in? We can have the pod spec do pass node name using downward API as an environment variable. Then on start controller can get node details from the API based on node name. Once the node details is obtained we can extract the zone from failure-domain.beta.kubernetes.io/zone annotation. Then we can pass that value to Lua land through Nginx configuration when loading lua_ingress.lua module in init_by_lua phase. How do we extract zones for endpoints? We can have the controller watch create and update events on nodes in the entire cluster and based on that keep the map of nodes to zones in the memory. And when we generate endpoints list, we can access node name using .subsets.addresses [ i ]. nodeName and based on that fetch zone from the map in memory and store it as a field on the endpoint. This solution assumes failure-domain.beta.kubernetes.io/zone annotation does not change until the end of node's life. Otherwise we have to watch update events as well on the nodes and that'll add even more overhead. Alternatively, we can get the list of nodes only when there's no node in the memory for given node name. This is probably a better solution because then we would avoid watching for API changes on node resources. We can eagrly fetch all the nodes and build node name to zone mapping on start. And from thereon sync it during endpoints building in the main event loop iff there's no entry exist for the node of an endpoint. This means an extra API call in case cluster has expanded. How do we make sure we do our best to choose zone-local endpoint? This will be done on Lua side. For every backend we will initialize two balancer instances: (1) with all endpoints (2) with all endpoints corresponding to current zone for the backend. Then given the request once we choose what backend needs to serve the request, we will first try to use zonal balancer for that backend. If zonal balancer does not exist (i.e there's no zonal endpoint) then we will use general balancer. In case of zonal outages we assume that readiness probe will fail and controller will see no endpoints for the backend and therefore we will use general balancer. We can enable the feature using a configmap setting. 
Doing it this way makes it easier to rollback in case of a problem. Implementation History \u00b6 initial version of KEP is shipped proposal and implementation details is done Drawbacks [optional] \u00b6 More load on the Kubernetes API server.","title":"Availability zone aware routing"},{"location":"enhancements/20190815-zone-aware-routing/#availability-zone-aware-routing","text":"","title":"Availability zone aware routing"},{"location":"enhancements/20190815-zone-aware-routing/#table-of-contents","text":"Summary Motivation Goals Non-Goals Proposal Implementation History Drawbacks [optional]","title":"Table of Contents"},{"location":"enhancements/20190815-zone-aware-routing/#summary","text":"Teach ingress-nginx about availability zones where endpoints are running in. This way ingress-nginx pod will do its best to proxy to zone-local endpoint.","title":"Summary"},{"location":"enhancements/20190815-zone-aware-routing/#motivation","text":"When users run their services across multiple availability zones they usually pay for egress traffic between zones. Providers such as GCP, Amazon EC charges money for that. ingress-nginx when picking an endpoint to route request to does not consider whether the endpoint is in different zone or the same one. That means it's at least equally likely that it will pick an endpoint from another zone and proxy the request to it. In this situation response from the endpoint to ingress-nginx pod is considered as inter zone traffic and costs money. At the time of this writing GCP charges $0.01 per GB of inter zone egress traffic according to https://cloud.google.com/compute/network-pricing. According to https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/ Amazon also charges the same amount of money sa GCP for cross zone, egress traffic. This can be a lot of money depending on once's traffic. By teaching ingress-nginx about zones we can eliminate or at least decrease this cost. Arguably inter-zone network latency should also be better than cross zone.","title":"Motivation"},{"location":"enhancements/20190815-zone-aware-routing/#goals","text":"Given a regional cluster running ingress-nginx, ingress-nginx should do best effort to pick zone-local endpoint when proxying This should not impact canary feature ingress-nginx should be able to operate successfully if there's no zonal endpoints","title":"Goals"},{"location":"enhancements/20190815-zone-aware-routing/#non-goals","text":"This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress-nginx pod(s) in that zone This feature will be relying on https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone, it is not this KEP's goal to support other cases","title":"Non-Goals"},{"location":"enhancements/20190815-zone-aware-routing/#proposal","text":"The idea here is to have controller part of ingress-nginx to (1) detect what zone its current pod is running in and (2) detect the zone for every endpoints it knows about. After that it will post that data as part of endpoints to Lua land. Then Lua balancer when picking an endpoint will try to pick zone-local endpoint first and if there is no zone-local endpoint then it will fallback to current behaviour. This feature at least in the beginning should be optional since it is going to make it harder to reason about the load balancing and not everyone might want that. How does controller know what zone it runs in? 
We can have the pod spec do pass node name using downward API as an environment variable. Then on start controller can get node details from the API based on node name. Once the node details is obtained we can extract the zone from failure-domain.beta.kubernetes.io/zone annotation. Then we can pass that value to Lua land through Nginx configuration when loading lua_ingress.lua module in init_by_lua phase. How do we extract zones for endpoints? We can have the controller watch create and update events on nodes in the entire cluster and based on that keep the map of nodes to zones in the memory. And when we generate endpoints list, we can access node name using .subsets.addresses [ i ]. nodeName and based on that fetch zone from the map in memory and store it as a field on the endpoint. This solution assumes failure-domain.beta.kubernetes.io/zone annotation does not change until the end of node's life. Otherwise we have to watch update events as well on the nodes and that'll add even more overhead. Alternatively, we can get the list of nodes only when there's no node in the memory for given node name. This is probably a better solution because then we would avoid watching for API changes on node resources. We can eagrly fetch all the nodes and build node name to zone mapping on start. And from thereon sync it during endpoints building in the main event loop iff there's no entry exist for the node of an endpoint. This means an extra API call in case cluster has expanded. How do we make sure we do our best to choose zone-local endpoint? This will be done on Lua side. For every backend we will initialize two balancer instances: (1) with all endpoints (2) with all endpoints corresponding to current zone for the backend. Then given the request once we choose what backend needs to serve the request, we will first try to use zonal balancer for that backend. If zonal balancer does not exist (i.e there's no zonal endpoint) then we will use general balancer. In case of zonal outages we assume that readiness probe will fail and controller will see no endpoints for the backend and therefore we will use general balancer. We can enable the feature using a configmap setting. Doing it this way makes it easier to rollback in case of a problem.","title":"Proposal"},{"location":"enhancements/20190815-zone-aware-routing/#implementation-history","text":"initial version of KEP is shipped proposal and implementation details is done","title":"Implementation History"},{"location":"enhancements/20190815-zone-aware-routing/#drawbacks-optional","text":"More load on the Kubernetes API server.","title":"Drawbacks [optional]"},{"location":"enhancements/YYYYMMDD-kep-template/","text":"Title \u00b6 This is the title of the KEP. Keep it simple and descriptive. A good title can help communicate what the KEP is and should be considered as part of any review. The title should be lowercased and spaces/punctuation should be replaced with - . To get started with this template: Make a copy of this template. Create a copy of this template and name it YYYYMMDD-my-title.md , where YYYYMMDD is the date the KEP was first drafted. Fill out the \"overview\" sections. This includes the Summary and Motivation sections. These should be easy if you've preflighted the idea of the KEP in an issue. Create a PR. Assign it to folks that are sponsoring this process. Create an issue When filing an enhancement tracking issue, please ensure to complete all fields in the template. Merge early. 
Avoid getting hung up on specific details and instead aim to get the goal of the KEP merged quickly. The best way to do this is to just start with the \"Overview\" sections and fill out details incrementally in follow on PRs. View anything marked as a provisional as a working document and subject to change. Aim for single topic PRs to keep discussions focused. If you disagree with what is already in a document, open a new PR with suggested changes. The canonical place for the latest set of instructions (and the likely source of this file) is here . The Metadata section above is intended to support the creation of tooling around the KEP process. This will be a YAML section that is fenced as a code block. See the KEP process for details on each of these items. Table of Contents \u00b6 A table of contents is helpful for quickly jumping to sections of a KEP and for highlighting any additional information provided beyond the standard KEP template. Ensure the TOC is wrapped with 
       
-Troubleshooting
-Ingress-Controller Logs and Events
+Troubleshooting
+Ingress-Controller Logs and Events

      There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information.

      Check the Ingress Resource Events

      @@ -1359,7 +1359,7 @@ methods to obtain more information.

      kube-system kubernetes-dashboard NodePort 10.103.128.17 <none> 80:30000/TCP 30m
-Debug Logging
+Debug Logging

Using the flag --v=XX it is possible to increase the level of logging. This is performed by editing the deployment; a sketch of the resulting container arguments follows the list of levels below.

      $ kubectl get deploy -n <namespace-of-ingress-controller>
      @@ -1376,7 +1376,7 @@ the deployment.

    • --v=3 shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format
    • --v=5 configures NGINX in debug mode
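As a sketch, the flag is appended to the controller container's arguments; the existing arguments shown here are only an example of what may already be present in your Deployment.

spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-controller
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration   # example of an existing argument
            - --v=3   # use --v=5 to additionally put NGINX in debug mode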
-Authentication to the Kubernetes API Server
+Authentication to the Kubernetes API Server

    A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file.

    @@ -1424,7 +1424,7 @@ on the lower left hand side.

+---------------------------------------------------+ +------------------+
-Service Account
+Service Account

    If using a service account to connect to the API server, Dashboard expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server.

    @@ -1516,10 +1516,10 @@ token that is required to authenticate with the API server.

  • User Guide: Service Accounts
  • Cluster Administrator Guide: Managing Service Accounts
-Kube-Config
+Kube-Config

    If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment.
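One possible wiring, assuming the kubeconfig file lives on the node under /etc/kubernetes/ (a sketch, not part of the official manifests):

spec:
  template:
    spec:
      containers:
        - name: nginx-ingress-controller
          args:
            - /nginx-ingress-controller
            - --kubeconfig=/etc/kubernetes/kubeconfig.yaml
          volumeMounts:
            - name: kubeconfig
              mountPath: /etc/kubernetes/
              readOnly: true
      volumes:
        - name: kubeconfig
          hostPath:
            path: /etc/kubernetes/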

-Using GDB with Nginx
+Using GDB with Nginx

GDB can be used with NGINX to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations.

    Note: The below is based on the nginx documentation.

diff --git a/user-guide/basic-usage/index.html b/user-guide/basic-usage/index.html
index 2efb408d5..9ddfdc526 100644
--- a/user-guide/basic-usage/index.html
+++ b/user-guide/basic-usage/index.html
@@ -1150,7 +1150,7 @@
-Basic usage - host based routing
+Basic usage - host based routing

ingress-nginx can be used for many use cases, inside various cloud providers, and supports a lot of configurations. In this section you can find a common usage scenario where a single load balancer powered by ingress-nginx will route traffic to 2 different HTTP backend services based on the host name.

    First of all follow the instructions to install ingress-nginx. Then imagine that you need to expose 2 HTTP services already installed: myServiceA, myServiceB. Let's say that you want to expose the first at myServiceA.foo.org and the second at myServiceB.foo.org. One possible solution is to create two ingress resources:

    apiVersion: extensions/v1beta1
    diff --git a/user-guide/cli-arguments/index.html b/user-guide/cli-arguments/index.html
    index 83bc1d78a..2faefafa6 100644
    --- a/user-guide/cli-arguments/index.html
    +++ b/user-guide/cli-arguments/index.html
    @@ -1148,7 +1148,7 @@
                       
                     
                     
Command line arguments

    The following command line arguments are accepted by the Ingress controller executable.

    They are set in the container spec of the nginx-ingress-controller Deployment manifest

diff --git a/user-guide/custom-errors/index.html b/user-guide/custom-errors/index.html index 198aeee1a..aa68e36f0 100644 --- a/user-guide/custom-errors/index.html +++ b/user-guide/custom-errors/index.html @@ -1148,7 +1148,7 @@

Custom errors

    When the custom-http-errors option is enabled, the Ingress controller configures NGINX so that it passes several HTTP headers down to its default-backend in case of error:

    @@ -1202,7 +1202,7 @@ could decide to return the error payload as a JSON document instead of HTML.

    NGINX does not change the response from the custom default backend.

    An example of such custom backend is available inside the source repository at images/custom-error-pages.

See also the Custom errors example.

diff --git a/user-guide/default-backend/index.html b/user-guide/default-backend/index.html index 62fe72d6a..ba5ccc15a 100644 --- a/user-guide/default-backend/index.html +++ b/user-guide/default-backend/index.html @@ -1148,7 +1148,7 @@

Default backend

    The default backend is a service which handles all URL paths and hosts the nginx controller doesn't understand (i.e., all the requests that are not mapped with an Ingress).

    Basically a default backend exposes two URLs:

diff --git a/user-guide/exposing-tcp-udp-services/index.html b/user-guide/exposing-tcp-udp-services/index.html index d69e663b3..8b94cf093 100644 --- a/user-guide/exposing-tcp-udp-services/index.html +++ b/user-guide/exposing-tcp-udp-services/index.html @@ -1148,7 +1148,7 @@

Exposing TCP and UDP services

    Ingress does not support TCP or UDP services. For this reason this Ingress controller uses the flags --tcp-services-configmap and --udp-services-configmap to point to an existing config map where the key is the external port to use and the value indicates the service to expose using the format: <namespace/service name>:<service port>:[PROXY]:[PROXY]

It is also possible to use a number or the name of the port. The last two fields are optional.
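As a hedged sketch (the namespace, ConfigMap name, and target service are assumptions, not taken from this page), a ConfigMap referenced via --tcp-services-configmap that exposes a service on external port 9000 could look like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  9000: "default/example-go:8080"

The key is the external port and the value follows the <namespace/service name>:<service port> format described above.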

diff --git a/user-guide/external-articles/index.html b/user-guide/external-articles/index.html index 82f3092dc..fa640cb33 100644 --- a/user-guide/external-articles/index.html +++ b/user-guide/external-articles/index.html @@ -1148,7 +1148,7 @@

External Articles

    • Pain(less) NGINX Ingress
    • Accessing Kubernetes Pods from Outside of the Cluster
diff --git a/user-guide/fcgi-services/index.html b/user-guide/fcgi-services/index.html index 402f500c7..5731b000e 100644 --- a/user-guide/fcgi-services/index.html +++ b/user-guide/fcgi-services/index.html @@ -1261,13 +1261,13 @@

Exposing FastCGI Servers

FastCGI is a binary protocol for interfacing interactive programs with a web server. [...] (Its) aim is to reduce the overhead related to interfacing between web server and CGI programs, allowing a server to handle more web page requests per unit of time.

      — Wikipedia

      The ingress-nginx ingress controller can be used to directly expose FastCGI servers. Enabling FastCGI in your Ingress only requires setting the backend-protocol annotation to FCGI, and with a couple more annotations you can customize the way ingress-nginx handles the communication with your FastCGI server.

Example Objects to Expose a FastCGI Pod

      The Pod example object below exposes port 9000, which is the conventional FastCGI port.

      apiVersion: v1
       kind: Pod
      @@ -1330,19 +1330,19 @@
                 servicePort: fastcgi
       
The FastCGI Ingress Annotations

The nginx.ingress.kubernetes.io/backend-protocol Annotation

      To enable FastCGI, the backend-protocol annotation needs to be set to FCGI, which overrides the default HTTP value.

      nginx.ingress.kubernetes.io/backend-protocol: "FCGI"

      This enables the FastCGI mode for the whole Ingress object.

The nginx.ingress.kubernetes.io/fastcgi-index Annotation

      To specify an index file, the fastcgi-index annotation value can optionally be set. In the example below, the value is set to index.php. This annotation corresponds to the NGINX fastcgi_index directive.

      nginx.ingress.kubernetes.io/fastcgi-index: "index.php"

The nginx.ingress.kubernetes.io/fastcgi-params-configmap Annotation

      To specify NGINX fastcgi_param directives, the fastcgi-params-configmap annotation is used, which in turn must lead to a ConfigMap object containing the NGINX fastcgi_param directives as key/values.

nginx.ingress.kubernetes.io/fastcgi-params-configmap: "example-configmap"
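For illustration only (the ConfigMap name matches the annotation value above; the parameter and its value are assumptions), the referenced ConfigMap simply carries fastcgi_param key/values:

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-configmap
data:
  SCRIPT_FILENAME: /example/index.php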

diff --git a/user-guide/ingress-path-matching/index.html b/user-guide/ingress-path-matching/index.html index 4c72fd7bc..851d3fe8a 100644 --- a/user-guide/ingress-path-matching/index.html +++ b/user-guide/ingress-path-matching/index.html @@ -1273,8 +1273,8 @@

Ingress Path Matching

Regular Expression Support

      Important

      Regular expressions and wild cards are not supported in the spec.rules.host field. Full hostnames must be used.

      @@ -1305,10 +1305,10 @@ This can be enabled by setting the nginx.ingress.kubern }
Path Priority

      In NGINX, regular expressions follow a first match policy. In order to enable more accurate path matching, ingress-nginx first orders the paths by descending length before writing them to the NGINX template as location blocks.

      Please read the warning before using regular expressions in your ingress definitions.

Example

      Let the following two ingress definitions be created:

      apiVersion: extensions/v1beta1
       kind: Ingress
      @@ -1370,10 +1370,10 @@ location ~* "^/foo/bar" {
       
      • If the use-regex OR rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.
Warning

      The following example describes a case that may inflict unwanted path matching behaviour.

This case is expected and a result of NGINX's first match policy for paths that use the regular expression location modifier. For more information about how a path is chosen, please read the following article: "Understanding Nginx Server and Location Block Selection Algorithms".

Example

      Let the following ingress be defined:

      apiVersion: extensions/v1beta1
       kind: Ingress
      diff --git a/user-guide/miscellaneous/index.html b/user-guide/miscellaneous/index.html
      index ce3c5fd39..8833a9321 100644
      --- a/user-guide/miscellaneous/index.html
      +++ b/user-guide/miscellaneous/index.html
      @@ -1277,16 +1277,16 @@
                         
                       
                       
Miscellaneous

Source IP address

By default NGINX uses the content of the header X-Forwarded-For as the source of truth to get information about the client IP address. This works without issues in L7 if we configure the setting proxy-real-ip-cidr with the correct information of the IP/network address of the trusted external load balancer.

      If the ingress controller is running in AWS we need to use the VPC IPv4 CIDR.

      Another option is to enable proxy protocol using use-proxy-protocol: "true".

      In this mode NGINX does not use the content of the header to get the source IP address of the connection.
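A minimal sketch, assuming the controller's ConfigMap is named nginx-configuration in the ingress-nginx namespace and that 10.0.0.0/16 stands in for the load balancer's address range:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
  proxy-real-ip-cidr: "10.0.0.0/16"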

Proxy Protocol

If you are using a L4 proxy to forward the traffic to the NGINX pods and terminate HTTP/HTTPS there, you will lose the remote endpoint's IP address. To prevent this you could use the Proxy Protocol for forwarding traffic; this will send the connection details before forwarding the actual TCP connection itself.

      Amongst others ELBs in AWS and HAProxy support Proxy Protocol.

Websockets

      Support for websockets is provided by NGINX out of the box. No special configuration required.

The only requirement to avoid the closing of connections is to increase the values of proxy-read-timeout and proxy-send-timeout.

The default value of these settings is 60 seconds.

      @@ -1295,18 +1295,18 @@

      Important

      If the NGINX ingress controller is exposed with a service type=LoadBalancer make sure the protocol between the loadbalancer and NGINX is TCP.

Optimizing TLS Time To First Byte (TTTFB)

      NGINX provides the configuration option ssl_buffer_size to allow the optimization of the TLS record size.

      This improves the TLS Time To First Byte (TTTFB). The default value in the Ingress controller is 4k (NGINX default is 16k).

Retries in non-idempotent methods

      Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error. The previous behavior can be restored using retry-non-idempotent=true in the configuration ConfigMap.

Limitations

      • Ingress rules for TLS require the definition of the field host
Why endpoints and not services

      The NGINX ingress controller does not use Services to route traffic to the pods. Instead it uses the Endpoints API in order to bypass kube-proxy to allow NGINX features like session affinity and custom load balancing algorithms. It also removes some overhead, such as conntrack entries for iptables DNAT.

diff --git a/user-guide/monitoring/index.html b/user-guide/monitoring/index.html index 8447ad9ad..313ff12ce 100644 --- a/user-guide/monitoring/index.html +++ b/user-guide/monitoring/index.html @@ -1247,29 +1247,23 @@

Prometheus and Grafana installation

      This tutorial will show you how to install Prometheus and Grafana for scraping the metrics of the NGINX Ingress controller.

      Important

      This example uses emptyDir volumes for Prometheus and Grafana. This means once the pod gets terminated you will lose all the data.

Before You Begin

      The NGINX Ingress controller should already be deployed according to the deployment instructions here.

      Note that the kustomize bases used in this tutorial are stored in the deploy folder of the GitHub repository kubernetes/ingress-nginx.

Deploy and configure Prometheus Server

      The Prometheus server must be configured so that it can discover endpoints of services. If a Prometheus server is already running in the cluster and if it is configured in a way that it can find the ingress controller pods, no extra configuration is needed.

      If there is no existing Prometheus server running, the rest of this tutorial will guide you through the steps needed to deploy a properly configured Prometheus server.

      Running the following command deploys prometheus in Kubernetes:

      kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/prometheus/
      -serviceaccount/prometheus-server created
      -role.rbac.authorization.k8s.io/prometheus-server created
      -rolebinding.rbac.authorization.k8s.io/prometheus-server created
      -configmap/prometheus-configuration-bc6bcg7b65 created
      -service/prometheus-server created
      -deployment.apps/prometheus-server created
       
Prometheus Dashboard

      Open Prometheus dashboard in a web browser:

      kubectl get svc -n ingress-nginx
       NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE
      @@ -1290,7 +1284,7 @@
       

      Open your browser and visit the following URL: http://{node IP address}:{prometheus-svc-nodeport} to load the Prometheus Dashboard.

      According to the above example, this URL will be http://10.192.0.3:32630

      Dashboard

Grafana

      kubectl apply --kustomize github.com/kubernetes/ingress-nginx/deploy/grafana/
       
diff --git a/user-guide/multiple-ingress/index.html b/user-guide/multiple-ingress/index.html index 3c4223d3d..be83177c9 100644 --- a/user-guide/multiple-ingress/index.html +++ b/user-guide/multiple-ingress/index.html @@ -1193,7 +1193,7 @@

Multiple Ingress controllers

      If you're running multiple ingress controllers, or running on a cloud provider that natively handles ingress such as GKE, you need to specify the annotation kubernetes.io/ingress.class: "nginx" in all ingresses that you would like the ingress-nginx controller to claim.

      For instance,

      @@ -1214,7 +1214,7 @@ you need to specify the annotation kubernetes.io/ingres

      To reiterate, setting the annotation to any value which does not match a valid ingress class will force the NGINX Ingress controller to ignore your Ingress. If you are only running a single NGINX ingress controller, this can be achieved by setting the annotation to any value except "nginx" or an empty string.

      Do this if you wish to use one of the other Ingress controllers at the same time as the NGINX controller.
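A minimal sketch of an Ingress claimed by the NGINX controller via this annotation (host and backend names are placeholders):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: example.foo.org
    http:
      paths:
      - backend:
          serviceName: example-service
          servicePort: 80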

Multiple ingress-nginx controllers

      This mechanism also provides users the ability to run multiple NGINX ingress controllers (e.g. one which serves public traffic, one which serves "internal" traffic). To do this, the option --ingress-class must be changed to a value unique for the cluster within the definition of the replication controller. Here is a partial example:
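The original example is not reproduced here; as a sketch under assumptions (the class name nginx-internal is invented for illustration), the relevant part of the controller's container spec would set a unique class:

    args:
      - /nginx-ingress-controller
      - --ingress-class=nginx-internal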

diff --git a/user-guide/nginx-configuration/annotations/index.html b/user-guide/nginx-configuration/annotations/index.html index fb1a08001..13222e31a 100644 --- a/user-guide/nginx-configuration/annotations/index.html +++ b/user-guide/nginx-configuration/annotations/index.html @@ -1945,7 +1945,7 @@

Annotations

      You can add these Kubernetes annotations to specific Ingress objects to customize their behavior.

      Tip

      @@ -2221,10 +2221,6 @@ table below.

- nginx.ingress.kubernetes.io/secure-verify-ca-secret    string

@@ -2382,7 +2378,7 @@ table below.

nginx.ingress.kubernetes.io/server-alias    string
Canary

    In some cases, you may want to "canary" a new set of changes by sending a small number of requests to a different service than the production service. The canary annotation enables the Ingress spec to act as an alternative service for requests to route to depending on the rules applied. The following annotations to configure canary can be enabled after nginx.ingress.kubernetes.io/canary: "true" is set:

    • @@ -2403,7 +2399,7 @@ table below.

      Note that when you mark an ingress as canary, then all the other non-canary annotations will be ignored (inherited from the corresponding main ingress) except nginx.ingress.kubernetes.io/load-balance and nginx.ingress.kubernetes.io/upstream-hash-by.

      Known Limitations

      Currently a maximum of one canary ingress can be applied per Ingress rule.

Rewrite

      In some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. Without a rewrite any request will return 404. Set the annotation nginx.ingress.kubernetes.io/rewrite-target to the path expected by the service.

      If the Application Root is exposed in a different path and needs to be redirected, set the annotation nginx.ingress.kubernetes.io/app-root to redirect requests for /.

      @@ -2411,7 +2407,7 @@ Set the annotation nginx.ingress.kubernetes.io/rewrite-

      Example

      Please check the rewrite example.

Session Affinity

    The annotation nginx.ingress.kubernetes.io/affinity enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server. The only affinity type available for NGINX is cookie.

The annotation nginx.ingress.kubernetes.io/affinity-mode defines the stickiness of a session. Setting this to balanced (default) will redistribute some sessions if a deployment gets scaled up, therefore rebalancing the load on the servers. Setting this to persistent will not rebalance sessions to new servers, therefore providing maximum stickiness.
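For example (values assumed), both annotations can be combined on an Ingress:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/affinity-mode: "persistent"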

@@ -2423,10 +2419,10 @@ The only affinity type available for NGINX is cookie

Example

Please check the affinity example.

Cookie affinity

    If you use the cookie affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation nginx.ingress.kubernetes.io/session-cookie-name. The default is to create a cookie named 'INGRESSCOOKIE'.

    The NGINX annotation nginx.ingress.kubernetes.io/session-cookie-path defines the path that will be set on the cookie. This is optional unless the annotation nginx.ingress.kubernetes.io/use-regex is set to true; Session cookie paths do not support regex.

Authentication

It is possible to add authentication by adding additional annotations to the Ingress rule. The source of the authentication is a secret that contains usernames and passwords.

    The annotations are:

    nginx.ingress.kubernetes.io/auth-type: [basic|digest]
    @@ -2452,21 +2448,21 @@ This annotation also accepts the alternative form "namespace/secretName", in whi
     

    Example

    Please check the auth example.
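A compact sketch of the usual flow (the secret name and realm text are assumptions; auth-secret and auth-realm are the companion annotations used in the auth example): create an htpasswd file, store it in a Secret, and reference it from the Ingress:

$ htpasswd -c auth myuser
$ kubectl create secret generic basic-auth --from-file=auth

metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"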

Custom NGINX upstream hashing

    NGINX supports load balancing by client-server mapping based on consistent hashing for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. The ketama consistent hashing method will be used which ensures only a few keys would be remapped to different servers on upstream group changes.

There is a special mode of upstream hashing called subset. In this mode, upstream servers are grouped into subsets, and stickiness works by mapping keys to a subset instead of individual upstream servers. A specific server is then chosen uniformly at random from the selected sticky subset. It provides a balance between stickiness and load distribution.

    To enable consistent hashing for a backend:

    nginx.ingress.kubernetes.io/upstream-hash-by: the nginx variable, text value or any combination thereof to use for consistent hashing. For example nginx.ingress.kubernetes.io/upstream-hash-by: "$request_uri" to consistently hash upstream requests by the current request URI.

    "subset" hashing can be enabled setting nginx.ingress.kubernetes.io/upstream-hash-by-subset: "true". This maps requests to subset of nodes instead of a single one. upstream-hash-by-subset-size determines the size of each subset (default 3).

    Please check the chashsubset example.
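Putting the pieces above together (values assumed), an Ingress could request subset hashing like this:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$request_uri"
    nginx.ingress.kubernetes.io/upstream-hash-by-subset: "true"
    nginx.ingress.kubernetes.io/upstream-hash-by-subset-size: "3"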

Custom NGINX load balancing

    This is similar to load-balance in ConfigMap, but configures load balancing algorithm per ingress.

Note that nginx.ingress.kubernetes.io/upstream-hash-by takes preference over this. If this and nginx.ingress.kubernetes.io/upstream-hash-by are not set then we fall back to using the globally configured load balancing algorithm.

Custom NGINX upstream vhost

    This configuration setting allows you to control the value for host in the following statement: proxy_set_header Host $host, which forms part of the location block. This is useful if you need to call the upstream server by something other than $host.

Client Certificate Authentication

    It is possible to enable Client Certificate Authentication using additional annotations in Ingress Rule.

    The annotations are:

      @@ -2492,7 +2488,7 @@ This annotation also accepts the alternative form "namespace/secretName", in whi

      Cloudflare only allows Authenticated Origin Pulls and is required to use their own certificate: https://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/

      Only Authenticated Origin Pulls are allowed and can be configured by following their tutorial: https://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls

Backend Certificate Authentication

      It is possible to authenticate to a proxied HTTPS backend with certificate using additional annotations in Ingress Rule.

      • nginx.ingress.kubernetes.io/proxy-ssl-secret: secretName: @@ -2507,23 +2503,23 @@ This annotation also accepts the alternative form "namespace/secretName", in whi
      • nginx.ingress.kubernetes.io/proxy-ssl-protocols: Enables the specified protocols for requests to a proxied HTTPS server.
Configuration snippet

      Using this annotation you can add additional configuration to the NGINX location. For example:

      nginx.ingress.kubernetes.io/configuration-snippet: |
         more_set_headers "Request-Id: $req_id";
       
Custom HTTP Errors

      Like the custom-http-errors value in the ConfigMap, this annotation will set NGINX proxy-intercept-errors, but only for the NGINX location associated with this ingress. If a default backend annotation is specified on the ingress, the errors will be routed to that annotation's default backend service (instead of the global default backend). Different ingresses can specify different sets of error codes. Even if multiple ingress objects share the same hostname, this annotation can be used to intercept different error codes for each ingress (for example, different error codes to be intercepted for different paths on the same hostname, if each path is on a different ingress). If custom-http-errors is also specified globally, the error values specified in this annotation will override the global value for the given ingress' hostname and path.

      Example usage:

      nginx.ingress.kubernetes.io/custom-http-errors: "404,415"
       

Default Backend

      This annotation is of the form nginx.ingress.kubernetes.io/default-backend: <svc name> to specify a custom default backend. This <svc name> is a reference to a service inside of the same namespace in which you are applying this annotation. This annotation overrides the global default backend.

This service will handle the response when the service in the Ingress rule does not have active endpoints. It will also handle the error responses if both this annotation and the custom-http-errors annotation are set.
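For example (the service name is a placeholder for a Service in the same namespace; the error codes are assumptions):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/default-backend: my-default-backend
    nginx.ingress.kubernetes.io/custom-http-errors: "404,503"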

Enable CORS

      To enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation nginx.ingress.kubernetes.io/enable-cors: "true". This will add a section in the server location enabling this functionality.

      @@ -2573,7 +2569,7 @@ location enabling this functionality.

      Note

      For more information please see https://enable-cors.org

HTTP2 Push Preload.

      Enables automatic conversion of preload links specified in the “Link” response header fields into push requests.

      Example

      @@ -2581,7 +2577,7 @@ location enabling this functionality.

    • nginx.ingress.kubernetes.io/http2-push-preload: "true"
Server Alias

    Allows the definition of one or more aliases in the server definition of the NGINX configuration using the annotation nginx.ingress.kubernetes.io/server-alias: "<alias 1>,<alias 2>". This will create a server with the same configuration, but adding new values to the server_name directive.

    @@ -2591,7 +2587,7 @@ If a server-alias is created and later a new server with the same hostname is cr place over the alias configuration.

    For more information please see the server_name documentation.

Server snippet

    Using the annotation nginx.ingress.kubernetes.io/server-snippet it is possible to add custom configuration in the server configuration block.

    apiVersion: extensions/v1beta1
     kind: Ingress
    @@ -2613,7 +2609,7 @@ place over the alias configuration.

    Attention

    This annotation can be used only once per host.

Client Body Buffer Size

    Sets buffer size for reading client request body per location. In case the request body is larger than the buffer, the whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages. This is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. This annotation is @@ -2633,7 +2629,7 @@ applied to each location provided in the ingress rule.

    For more information please see http://nginx.org

External Authentication

    To use an existing service that provides authentication the Ingress rule can be annotated with nginx.ingress.kubernetes.io/auth-url to indicate the URL where the HTTP request should be sent.

    nginx.ingress.kubernetes.io/auth-url: "URL to the authentication service"
     
    @@ -2669,12 +2665,12 @@ applied to each location provided in the ingress rule.

    Example

    Please check the external-auth example.

Global External Authentication

By default the controller redirects all requests to an existing service that provides authentication if global-auth-url is set in the NGINX ConfigMap. If you want to disable this behavior for that ingress, you can use enable-global-auth: "false" in the NGINX ConfigMap. nginx.ingress.kubernetes.io/enable-global-auth: indicates if GlobalExternalAuth configuration should be applied or not to this Ingress rule. The default value is set to "true".

Note: For more information please see global-auth-url.

Rate limiting

    These annotations define limits on connections and transmission rates. These can be used to mitigate DDoS Attacks.

    • nginx.ingress.kubernetes.io/limit-connections: number of concurrent connections allowed from a single IP address. A 503 error is returned when exceeding this limit.
    • @@ -2687,13 +2683,13 @@ applied to each location provided in the ingress rule.

      If you specify multiple annotations in a single Ingress rule, limits are applied in the order limit-connections, limit-rpm, limit-rps.

      To configure settings globally for all Ingress rules, the limit-rate-after and limit-rate values may be set in the NGINX ConfigMap. The value set in an Ingress annotation will override the global setting.

      The client IP address will be set based on the use of PROXY protocol or from the X-Forwarded-For header value when use-forwarded-headers is enabled.

Permanent Redirect

This annotation allows you to return a permanent redirect instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com would redirect everything to Google.

Permanent Redirect Code

      This annotation allows you to modify the status code used for permanent redirects. For example nginx.ingress.kubernetes.io/permanent-redirect-code: '308' would return your permanent-redirect with a 308.

Temporal Redirect

      This annotation allows you to return a temporal redirect (Return Code 302) instead of sending data to the upstream. For example nginx.ingress.kubernetes.io/temporal-redirect: https://www.google.com would redirect everything to Google with a Return Code of 302 (Moved Temporarily)

SSL Passthrough

      The annotation nginx.ingress.kubernetes.io/ssl-passthrough instructs the controller to send TLS connections directly to the backend instead of letting NGINX decrypt the communication. See also TLS/HTTPS in the User guide.

      @@ -2707,17 +2703,17 @@ the User guide.

      Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on the layer 7 (HTTP), using SSL Passthrough invalidates all the other annotations set on an Ingress object.

Service Upstream

      By default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration.

      The nginx.ingress.kubernetes.io/service-upstream annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port.

      This can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. See issue #257.
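To opt in for a particular Ingress, a minimal sketch:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true"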

Known Issues

      If the service-upstream annotation is specified the following things should be taken into consideration:

      • Sticky Sessions will not work as only round-robin load balancing is supported.
• The proxy_next_upstream directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.
Server-side HTTPS enforcement through redirect

      By default the controller redirects (308) to HTTPS if TLS is enabled for that ingress. If you want to disable this behavior globally, you can use ssl-redirect: "false" in the NGINX ConfigMap.

      To configure this feature for specific ingress resources, you can use the nginx.ingress.kubernetes.io/ssl-redirect: "false" @@ -2725,7 +2721,7 @@ annotation in the particular resource.

      When using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: "true" annotation in the particular resource.

Redirect from/to www

In some scenarios it is required to redirect from www.domain.com to domain.com or vice versa. To enable this feature use the annotation nginx.ingress.kubernetes.io/from-to-www-redirect: "true"

      @@ -2736,7 +2732,7 @@ To enable this feature use the annotation nginx.ingress

      Attention

For HTTPS to HTTPS redirects it is mandatory that the SSL certificate defined in the Secret, located in the TLS section of the Ingress, contains both FQDNs in the common name of the certificate.

Whitelist source range

      You can specify allowed client IP source ranges through the nginx.ingress.kubernetes.io/whitelist-source-range annotation. The value is a comma separated list of CIDRs, e.g. 10.0.0.0/24,172.10.0.1.

      To configure this setting globally for all Ingress rules, the whitelist-source-range value may be set in the NGINX ConfigMap.
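For example, reusing the CIDR list quoted above:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/24,172.10.0.1"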

@@ -2744,7 +2740,7 @@ The value is a comma separated list of

Note

Adding an annotation to an Ingress rule overrides any global restriction.

Custom timeouts

Using the configuration ConfigMap it is possible to set the default global timeout for connections to the upstream servers. In some scenarios it is required to have different values. To allow this we provide annotations that allow this customization (a sketch follows the list below):

        @@ -2756,26 +2752,26 @@ In some scenarios is required to have different values. To allow this we provide
      • nginx.ingress.kubernetes.io/proxy-next-upstream-tries
      • nginx.ingress.kubernetes.io/proxy-request-buffering
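A sketch combining two of the annotations listed above with some of the per-location timeout annotations from the same family (all values are assumptions, timeouts in seconds):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
    nginx.ingress.kubernetes.io/proxy-next-upstream-tries: "3"
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"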
Proxy redirect

      With the annotations nginx.ingress.kubernetes.io/proxy-redirect-from and nginx.ingress.kubernetes.io/proxy-redirect-to it is possible to set the text that should be changed in the Location and Refresh header fields of a proxied server response

      Setting "off" or "default" in the annotation nginx.ingress.kubernetes.io/proxy-redirect-from disables nginx.ingress.kubernetes.io/proxy-redirect-to, otherwise, both annotations must be used in unison. Note that each annotation must be a string without spaces.

      By default the value of each annotation is "off".
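For example (hostnames are placeholders), to rewrite Location headers issued by the backend:

metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-redirect-from: "http://backend.internal.example/"
    nginx.ingress.kubernetes.io/proxy-redirect-to: "https://www.example.com/"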

Custom max body size

For NGINX, a 413 error will be returned to the client when the size in a request exceeds the maximum allowed size of the client request body. This size can be configured by the parameter client_max_body_size.

To configure this setting globally for all Ingress rules, the proxy-body-size value may be set in the NGINX ConfigMap. To use custom values in an Ingress rule define this annotation:

      nginx.ingress.kubernetes.io/proxy-body-size: 8m
       
Proxy cookie domain

      Sets a text that should be changed in the domain attribute of the "Set-Cookie" header fields of a proxied server response.

      To configure this setting globally for all Ingress rules, the proxy-cookie-domain value may be set in the NGINX ConfigMap.

Proxy cookie path

      Sets a text that should be changed in the path attribute of the "Set-Cookie" header fields of a proxied server response.

      To configure this setting globally for all Ingress rules, the proxy-cookie-path value may be set in the NGINX ConfigMap.

Proxy buffering

      Enable or disable proxy buffering proxy_buffering. By default proxy buffering is disabled in the NGINX config.

      To configure this setting globally for all Ingress rules, the proxy-buffering value may be set in the NGINX ConfigMap. @@ -2783,60 +2779,60 @@ To use custom values in an Ingress rule define these annotation:

      nginx.ingress.kubernetes.io/proxy-buffering: "on"
       
Proxy buffers Number

      Sets the number of the buffers in proxy_buffers used for reading the first part of the response received from the proxied server. By default proxy buffers number is set as 4

      To configure this setting globally, set proxy-buffers-number in NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:

      nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
       

Proxy buffer size

      Sets the size of the buffer proxy_buffer_size used for reading the first part of the response received from the proxied server. By default proxy buffer size is set as "4k"

      To configure this setting globally, set proxy-buffer-size in NGINX ConfigMap. To use custom values in an Ingress rule, define this annotation:

      nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
       

Proxy max temp file size

      When buffering of responses from the proxied server is enabled, and the whole response does not fit into the buffers set by the proxy_buffer_size and proxy_buffers directives, a part of the response can be saved to a temporary file. This directive sets the maximum size of the temporary file setting the proxy_max_temp_file_size. The size of data written to the temporary file at a time is set by the proxy_temp_file_write_size directive.

      The zero value disables buffering of responses to temporary files.

      To use custom values in an Ingress rule, define this annotation:

      nginx.ingress.kubernetes.io/proxy-max-temp-file-size: "1024m"
       

Proxy HTTP version

      Using this annotation sets the proxy_http_version that the Nginx reverse proxy will use to communicate with the backend. By default this is set to "1.1".

      nginx.ingress.kubernetes.io/proxy-http-version: "1.0"
       
SSL ciphers

      Specifies the enabled ciphers.

      Using this annotation will set the ssl_ciphers directive at the server level. This configuration is active for all the paths in the host.

      nginx.ingress.kubernetes.io/ssl-ciphers: "ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP"
       
Connection proxy header

      Using this annotation will override the default connection header set by NGINX. To use custom values in an Ingress rule, define the annotation:

      nginx.ingress.kubernetes.io/connection-proxy-header: "keep-alive"
       
Enable Access Log

      Access logs are enabled by default, but in some scenarios access logs might be required to be disabled for a given ingress. To do this, use the annotation:

      nginx.ingress.kubernetes.io/enable-access-log: "false"
       
Enable Rewrite Log

      Rewrite logs are not enabled by default. In some scenarios it could be required to enable NGINX rewrite logs. Note that rewrite logs are sent to the error_log file at the notice level. To enable this feature use the annotation:

      nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
       
X-Forwarded-Prefix Header

      To add the non-standard X-Forwarded-Prefix header to the upstream request with a string value, the following annotation can be used:

      nginx.ingress.kubernetes.io/x-forwarded-prefix: "/path"
       
Lua Resty WAF

      Using lua-resty-waf-* annotations we can enable and control the lua-resty-waf Web Application Firewall per location.

      Following configuration will enable the WAF for the paths defined in the corresponding ingress:

      @@ -2873,7 +2869,7 @@ Reference for this https://github.com/p0pr0ck5/lua-resty-waf.

ModSecurity

      ModSecurity is an OpenSource Web Application firewall. It can be enabled for a particular set of ingress locations. The ModSecurity module must first be enabled by enabling ModSecurity in the ConfigMap. Note this will enable ModSecurity for all paths, and each path @@ -2902,7 +2898,7 @@ statement: Include /etc/nginx/owasp-modsecurity-crs/nginx-modsecurity.conf Include /etc/nginx/modsecurity/modsecurity.conf

InfluxDB

      Using influxdb-* annotations we can monitor requests passing through a Location by sending them to an InfluxDB backend exposing the UDP socket using the nginx-influxdb-module.

      nginx.ingress.kubernetes.io/enable-influxdb: "true"
      @@ -2921,7 +2917,7 @@ Prometheus, etc.. (recommended)
       

    It's important to remember that there's no DNS resolver at this stage so you will have to configure an ip address to nginx.ingress.kubernetes.io/influxdb-host. If you deploy Influx or Telegraf as sidecar (another container in the same pod) this becomes straightforward since you can directly use 127.0.0.1.

Backend Protocol

Using backend-protocol annotations it is possible to indicate how NGINX should communicate with the backend service. (Replaces secure-backends in older versions) Valid Values: HTTP, HTTPS, GRPC, GRPCS and AJP

    By default NGINX uses HTTP.

    @@ -2929,7 +2925,7 @@ Valid Values: HTTP, HTTPS, GRPC, GRPCS and AJP

    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
     
Use Regex

    Attention

    @@ -2944,12 +2940,12 @@ Valid Values: HTTP, HTTPS, GRPC, GRPCS and AJP

    When this annotation is set to true, the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.

    Additionally, if the rewrite-target annotation is used on any Ingress for a given host, then the case insensitive regular expression location modifier will be enforced on ALL paths for a given host regardless of what Ingress they are defined on.

    Please read about ingress path matching before using this modifier.

Satisfy

    By default, a request would need to satisfy all authentication requirements in order to be allowed. By using this annotation, requests that satisfy either any or all authentication requirements are allowed, based on the configuration value.

    nginx.ingress.kubernetes.io/satisfy: "any"
     
Mirror

Enables a request to be mirrored to a mirror backend. Responses by mirror backends are ignored. This feature is useful to see how requests will behave in "test" backends.

    You can mirror a request to the /mirror path on your ingress, by applying the below:

    nginx.ingress.kubernetes.io/mirror-uri: "/mirror"
    diff --git a/user-guide/nginx-configuration/configmap/index.html b/user-guide/nginx-configuration/configmap/index.html
    index 9593d4581..585dae072 100644
    --- a/user-guide/nginx-configuration/configmap/index.html
    +++ b/user-guide/nginx-configuration/configmap/index.html
    @@ -766,6 +766,13 @@
         http2-max-requests
       
       
+ http2-max-concurrent-streams

@@ -2343,6 +2350,13 @@ http2-max-requests

+ http2-max-concurrent-streams
@@ -3197,7 +3211,7 @@

ConfigMaps

    ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.

    The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configurations for system components for the nginx-controller.

    @@ -3215,7 +3229,7 @@ This means that we want a value with boolean values we need to quote the values, Same for numbers, like "100".

    "Slice" types (defined below as []string or []int can be provided as a comma-delimited string.

Configuration options

    The following table shows a configuration option's name, type, and the default value:

    @@ -3347,6 +3361,11 @@ Same for numbers, like "100".

+ http2-max-concurrent-streams    int    1000

hsts    bool    "true"

@@ -3948,83 +3967,87 @@ Same for numbers, like "100".
add-headers

    Sets custom headers from named configmap before sending traffic to the client. See proxy-set-headers. example

allow-backend-server-header

    Enables the return of the header Server from the backend instead of the generic nginx string. default: is disabled

hide-headers

    Sets additional header that will not be passed from the upstream server to the client response. default: empty

    References: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header

access-log-params

    Additional params for access_log. For example, buffer=16k, gzip, flush=1m

    References: http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log

access-log-path

    Access log path. Goes to /var/log/nginx/access.log by default.

    Note: the file /var/log/nginx/access.log is a symlink to /dev/stdout

enable-access-log-for-default-backend

    Enables logging access to default backend. default: is disabled.

error-log-path

    Error log path. Goes to /var/log/nginx/error.log by default.

    Note: the file /var/log/nginx/error.log is a symlink to /dev/stderr

    References: http://nginx.org/en/docs/ngx_core_module.html#error_log

enable-modsecurity

    Enables the modsecurity module for NGINX. default: is disabled

enable-owasp-modsecurity-crs

    Enables the OWASP ModSecurity Core Rule Set (CRS). default: is disabled

modsecurity-snippet

Adds custom rules to the modsecurity section of the nginx configuration

client-header-buffer-size

Allows configuring a custom buffer size for reading the client request header.

    References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size

client-header-timeout

    Defines a timeout for reading client request header, in seconds.

    References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout

client-body-buffer-size

    Sets buffer size for reading client request body.

    References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size

client-body-timeout

    Defines a timeout for reading client request body, in seconds.

    References: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout

disable-access-log

    Disables the Access Log from the entire Ingress Controller. default: '"false"'

    References: http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log

disable-ipv6

    Disable listening on IPV6. default: false; IPv6 listening is enabled

disable-ipv6-dns

    Disable IPV6 for nginx DNS resolver. default: false; IPv6 resolving enabled.

enable-underscores-in-headers

    Enables underscores in header names. default: is disabled

ignore-invalid-headers

    Set if header fields with invalid names should be ignored. default: is enabled

retry-non-idempotent

    Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value "true".

error-log-level

    Configures the logging level of errors. Log levels above are listed in the order of increasing severity.

    References: http://nginx.org/en/docs/ngx_core_module.html#error_log

http2-max-field-size

    Limits the maximum size of an HPACK-compressed request header field.

    References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size

http2-max-header-size

    Limits the maximum size of the entire request header list after HPACK decompression.

    References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size

http2-max-requests

    Sets the maximum number of requests (including push requests) that can be served through one HTTP/2 connection, after which the next client request will lead to connection closing and the need of establishing a new connection.

    References: http://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests

http2-max-concurrent-streams

Sets the maximum number of concurrent HTTP/2 streams in a connection.

References: http://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_concurrent_streams

hsts

Enables or disables the header HSTS in servers running SSL. HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be communicated with using HTTPS, instead of using HTTP. It provides protection against protocol downgrade attacks and cookie theft.

    References:

    @@ -4032,27 +4055,27 @@ HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature
  • https://developer.mozilla.org/en-US/docs/Web/Security/HTTP_strict_transport_security
  • https://blog.qualys.com/securitylabs/2016/03/28/the-importance-of-a-proper-http-strict-transport-security-implementation-on-your-web-server
hsts-include-subdomains

    Enables or disables the use of HSTS in all the subdomains of the server-name.

hsts-max-age

    Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.

hsts-preload

Enables or disables the preload attribute in the HSTS feature (when it is enabled).

keep-alive

    Sets the time during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.

    References: http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout

keep-alive-requests

    Sets the maximum number of requests that can be served through one keep-alive connection.

    References: http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_requests

large-client-header-buffers

    Sets the maximum number and size of buffers used for reading large client request header. default: 4 8k

    References: http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers

log-format-escape-json

Sets if the escape parameter allows JSON ("true") or default character escaping in variables ("false") when setting the nginx log format.

log-format-upstream

    Sets the nginx log format. Example for json output:

    log-format-upstream: '{"time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr", "x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$req_id",
    @@ -4062,14 +4085,14 @@ Example for json output:

    Please check the log-format for definition of each field.

log-format-stream

    Sets the nginx stream format.

enable-multi-accept

    If disabled, a worker process will accept one new connection at a time. Otherwise, a worker process will accept all new connections at a time. default: true

    References: http://nginx.org/en/docs/ngx_core_module.html#multi_accept

max-worker-connections

    Sets the maximum number of simultaneous connections that can be opened by each worker process. 0 will use the value of max-worker-open-files. default: 16384

    @@ -4077,57 +4100,57 @@ Example for json output:

    Tip

    Using 0 in scenarios of high load improves performance at the cost of increasing RAM utilization (even on idle).

max-worker-open-files

    Sets the maximum number of files that can be opened by each worker process. The default of 0 means "max open files (system's limit) / worker-processes - 1024". default: 0

map-hash-bucket-size

    Sets the bucket size for the map variables hash tables. The details of setting up hash tables are provided in a separate document.

proxy-real-ip-cidr

If use-proxy-protocol is enabled, proxy-real-ip-cidr defines the default IP/network address of your external load balancer.

proxy-set-headers

    Sets custom headers from named configmap before sending traffic to backends. The value format is namespace/name. See example

server-name-hash-max-size

Sets the maximum size of the server names hash tables used in server names, map directive's values, MIME types, names of request header strings, etc.

    References: http://nginx.org/en/docs/hash.html

server-name-hash-bucket-size

    Sets the size of the bucket for the server names hash tables.

    References:

proxy-headers-hash-max-size

    Sets the maximum size of the proxy headers hash tables.

    References:

reuse-port

    Instructs NGINX to create an individual listening socket for each worker process (using the SO_REUSEPORT socket option), allowing a kernel to distribute incoming connections between worker processes default: true

proxy-headers-hash-bucket-size

    Sets the size of the bucket for the proxy headers hash tables.

    References:

server-tokens

    Send NGINX Server header in responses and display NGINX version in error pages. default: is enabled

ssl-ciphers

    Sets the ciphers list to enable. The ciphers are specified in the format understood by the OpenSSL library.

    The default cipher list is: ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256.

    The ordering of a ciphersuite is very important because it decides which algorithms are going to be selected in priority. The recommendation above prioritizes algorithms that provide perfect forward secrecy.

    Please check the Mozilla SSL Configuration Generator.

ssl-ecdh-curve

    Specifies a curve for ECDHE ciphers.

    References: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ecdh_curve

ssl-dh-param

    Sets the name of the secret that contains Diffie-Hellman key to help with "Perfect Forward Secrecy".

    References:

    -

    ssl-protocols

    +

    ssl-protocols

    Sets the SSL protocols to use. The default is: TLSv1.2.

    Please check the result of the configuration using https://ssllabs.com/ssltest/analyze.html or https://testssl.sh.

    -

    ssl-early-data

    +

    ssl-early-data

    Enables or disables TLS 1.3 early data

    This requires ssl-protocols to have TLSv1.3 enabled.

 See the ssl_early_data directive. The default is: false. 

    -

    ssl-session-cache

    +

    ssl-session-cache

    Enables or disables the use of shared SSL cache among worker processes.

    -

    ssl-session-cache-size

    +

    ssl-session-cache-size

    Sets the size of the SSL shared session cache between all worker processes.

    -

    ssl-session-tickets

    +

    ssl-session-tickets

    Enables or disables session resumption through TLS session tickets.

    -

    ssl-session-ticket-key

    +

    ssl-session-ticket-key

    Sets the secret key used to encrypt and decrypt TLS session tickets. The value must be a valid base64 string. To create a ticket: openssl rand 80 | openssl enc -A -base64

 If no TLS session ticket-key is set, a randomly generated key is used by default. 
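 A sketch of generating a key and setting it (the value is the base64 output of the command): $ openssl rand 80 | openssl enc -A -base64 data: ssl-session-ticket-key: "<base64 output from the command above>" 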

    -

    ssl-session-timeout

    +

    ssl-session-timeout

    Sets the time during which a client may reuse the session parameters stored in a cache.

    -

    ssl-buffer-size

    +

    ssl-buffer-size

    Sets the size of the SSL buffer used for sending data. The default of 4k helps NGINX to improve TLS Time To First Byte (TTTFB).

    References: https://www.igvita.com/2013/12/16/optimizing-nginx-tls-time-to-first-byte/

    -

    use-proxy-protocol

    +

    use-proxy-protocol

    Enables or disables the PROXY protocol to receive client connection (real IP address) information passed through proxy servers and load balancers such as HAProxy and Amazon Elastic Load Balancer (ELB).

    -

    proxy-protocol-header-timeout

    +

    proxy-protocol-header-timeout

    Sets the timeout value for receiving the proxy-protocol headers. The default of 5 seconds prevents the TLS passthrough handler from waiting indefinitely on a dropped connection. default: 5s

    -

    use-gzip

    +

    use-gzip

    Enables or disables compression of HTTP responses using the "gzip" module. MIME types to compress are controlled by gzip-types. default: true

    -

    use-geoip

    +

    use-geoip

    Enables or disables "geoip" module that creates variables with values depending on the client IP address, using the precompiled MaxMind databases. default: true

    Note: MaxMind legacy databases are discontinued and will not receive updates after 2019-01-02, cf. discontinuation notice. Consider use-geoip2 below.

    -

    use-geoip2

    +

    use-geoip2

    Enables the geoip2 module for NGINX. default: false

    -

    enable-brotli

    +

    enable-brotli

    Enables or disables compression of HTTP responses using the "brotli" module. The default mime type list to compress is: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component. default: is disabled

 Note: Brotli does not work in Safari < 11. For more information see https://caniuse.com/#feat=brotli 

    -

    brotli-level

    +

    brotli-level

    Sets the Brotli Compression Level that will be used. default: 4

    -

    brotli-types

    +

    brotli-types

    Sets the MIME Types that will be compressed on-the-fly by brotli. default: application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component

    -

    use-http2

    +

    use-http2

    Enables or disables HTTP/2 support in secure connections.

    -

    gzip-level

    +

    gzip-level

    Sets the gzip Compression Level that will be used. default: 5

    -

    gzip-types

    +

    gzip-types

 Sets the MIME types in addition to "text/html" to compress. The special value "*" matches any MIME type. Responses with the "text/html" type are always compressed if use-gzip is enabled. default: application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component. 
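 A sketch combining the gzip options above in the configuration ConfigMap (the MIME type list is illustrative): data: use-gzip: "true" gzip-level: "5" gzip-types: "application/json application/javascript text/css text/plain" 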

    -

    worker-processes

    +

    worker-processes

 Sets the number of worker processes. The default of "auto" means the number of available CPU cores. 

    -

    worker-cpu-affinity

    +

    worker-cpu-affinity

 Binds worker processes to sets of CPUs (worker_cpu_affinity). By default, worker processes are not bound to any specific CPUs. The value can be: 

      @@ -4203,9 +4226,9 @@ By default worker processes are not bound to any specific CPUs. The value can be
    • cpumask: e.g. 0001 0010 0100 1000 to bind processes to specific cpus.
    • auto: binding worker processes automatically to available CPUs.
    -

    worker-shutdown-timeout

    +

    worker-shutdown-timeout

 Sets a timeout for NGINX to wait for a worker to gracefully shut down. default: "240s" 

    -

    load-balance

    +

    load-balance

    Sets the algorithm to use for load balancing. The value can either be:

      @@ -4219,148 +4242,148 @@ The value can either be:

    References: http://nginx.org/en/docs/http/load_balancing.html
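 For example, to set the algorithm explicitly in the configuration ConfigMap (round_robin is the default): data: load-balance: "round_robin" 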

    -

    variables-hash-bucket-size

    +

    variables-hash-bucket-size

    Sets the bucket size for the variables hash table.

    References: http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_bucket_size

    -

    variables-hash-max-size

    +

    variables-hash-max-size

    Sets the maximum size of the variables hash table.

    References: http://nginx.org/en/docs/http/ngx_http_map_module.html#variables_hash_max_size

    -

    upstream-keepalive-connections

    +

    upstream-keepalive-connections

    Activates the cache for connections to upstream servers. The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. default: 32

    References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive

    -

    upstream-keepalive-timeout

    +

    upstream-keepalive-timeout

    Sets a timeout during which an idle keepalive connection to an upstream server will stay open. default: 60

    References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout

    -

    upstream-keepalive-requests

    +

    upstream-keepalive-requests

    Sets the maximum number of requests that can be served through one keepalive connection. After the maximum number of requests is made, the connection is closed. default: 100

    References: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_requests

    -

    limit-conn-zone-variable

    +

    limit-conn-zone-variable

 Sets parameters for a shared memory zone that will keep states for various keys of limit_conn_zone. The default key is "$binary_remote_addr", whose size is always 4 bytes for IPv4 addresses or 16 bytes for IPv6 addresses. 

    -

    proxy-stream-timeout

    +

    proxy-stream-timeout

    Sets the timeout between two successive read or write operations on client or proxied server connections. If no data is transmitted within this time, the connection is closed.

    References: http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_timeout

    -

    proxy-stream-responses

    +

    proxy-stream-responses

    Sets the number of datagrams expected from the proxied server in response to the client request if the UDP protocol is used.

    References: http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_responses

    -

    bind-address

    +

    bind-address

    Sets the addresses on which the server will accept requests instead of *. It should be noted that these addresses must exist in the runtime environment or the controller will crash loop.

    -

    use-forwarded-headers

    +

    use-forwarded-headers

    If true, NGINX passes the incoming X-Forwarded-* headers to upstreams. Use this option when NGINX is behind another L7 proxy / load balancer that is setting these headers.

    If false, NGINX ignores incoming X-Forwarded-* headers, filling them with the request information it sees. Use this option if NGINX is exposed directly to the internet, or it's behind a L3/packet-based load balancer that doesn't alter the source IP in the packets.
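 A sketch for the first case, where NGINX sits behind a trusted L7 proxy (the CIDR is illustrative): data: use-forwarded-headers: "true" proxy-real-ip-cidr: "192.168.0.0/16" 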

    -

    forwarded-for-header

    +

    forwarded-for-header

    Sets the header field for identifying the originating IP address of a client. default: X-Forwarded-For

    -

    compute-full-forwarded-for

    +

    compute-full-forwarded-for

    Append the remote address to the X-Forwarded-For header instead of replacing it. When this option is enabled, the upstream application is responsible for extracting the client IP based on its own list of trusted proxies.

    -

    proxy-add-original-uri-header

    +

    proxy-add-original-uri-header

    Adds an X-Original-Uri header with the original request URI to the backend request

    -

    generate-request-id

    +

    generate-request-id

    Ensures that X-Request-ID is defaulted to a random value, if no X-Request-ID is present in the request

    -

    enable-opentracing

    +

    enable-opentracing

    Enables the nginx Opentracing extension. default: is disabled

    References: https://github.com/opentracing-contrib/nginx-opentracing

    -

    zipkin-collector-host

    +

    zipkin-collector-host

    Specifies the host to use when uploading traces. It must be a valid URL.

    -

    zipkin-collector-port

    +

    zipkin-collector-port

    Specifies the port to use when uploading traces. default: 9411

    -

    zipkin-service-name

    +

    zipkin-service-name

    Specifies the service name to use for any traces created. default: nginx

    -

    zipkin-sample-rate

    +

    zipkin-sample-rate

    Specifies sample rate for any traces created. default: 1.0
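 A sketch combining the Zipkin keys above with enable-opentracing (host and sample rate values are illustrative): data: enable-opentracing: "true" zipkin-collector-host: "zipkin.default.svc.cluster.local" zipkin-sample-rate: "0.5" 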

    -

    jaeger-collector-host

    +

    jaeger-collector-host

    Specifies the host to use when uploading traces. It must be a valid URL.

    -

    jaeger-collector-port

    +

    jaeger-collector-port

    Specifies the port to use when uploading traces. default: 6831

    -

    jaeger-service-name

    +

    jaeger-service-name

    Specifies the service name to use for any traces created. default: nginx

    -

    jaeger-sampler-type

    +

    jaeger-sampler-type

    Specifies the sampler to be used when sampling traces. The available samplers are: const, probabilistic, ratelimiting, remote. default: const

    -

    jaeger-sampler-param

    +

    jaeger-sampler-param

    Specifies the argument to be passed to the sampler constructor. Must be a number. For const this should be 0 to never sample and 1 to always sample. default: 1

    -

    jaeger-sampler-host

    +

    jaeger-sampler-host

    Specifies the custom remote sampler host to be passed to the sampler constructor. Must be a valid URL. Leave blank to use default value (localhost). default: http://127.0.0.1

    -

    jaeger-sampler-port

    +

    jaeger-sampler-port

    Specifies the custom remote sampler port to be passed to the sampler constructor. Must be a number. default: 5778

    -

    jaeger-trace-context-header-name

    +

    jaeger-trace-context-header-name

    Specifies the header name used for passing trace context. default: uber-trace-id

    -

    jaeger-debug-header

    +

    jaeger-debug-header

    Specifies the header name used for force sampling. default: jaeger-debug-id

    -

    jaeger-baggage-header

    +

    jaeger-baggage-header

    Specifies the header name used to submit baggage if there is no root span. default: jaeger-baggage

    -

    jaeger-tracer-baggage-header-prefix

    +

    jaeger-tracer-baggage-header-prefix

    Specifies the header prefix used to propagate baggage. default: uberctx-
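 Similarly, a sketch for Jaeger (collector host and sampler values are illustrative): data: enable-opentracing: "true" jaeger-collector-host: "jaeger-agent.default.svc.cluster.local" jaeger-sampler-type: "probabilistic" jaeger-sampler-param: "0.1" 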

    -

    main-snippet

    +

    main-snippet

    Adds custom configuration to the main section of the nginx configuration.

    -

    http-snippet

    +

    http-snippet

    Adds custom configuration to the http section of the nginx configuration.

    -

    server-snippet

    +

    server-snippet

    Adds custom configuration to all the servers in the nginx configuration.

    -

    location-snippet

    +

    location-snippet

    Adds custom configuration to all the locations in the nginx configuration.

 You cannot use this to add new locations that proxy to the Kubernetes pods, as the snippet does not have access to the Go template functions. If you want to add custom locations you will have to provide your own nginx.tmpl. 

    -

    custom-http-errors

    +

    custom-http-errors

 Sets which HTTP status codes should be passed for processing with the error_page directive. Setting at least one code also enables proxy_intercept_errors, which is required to process error_page. 

    Example usage: custom-http-errors: 404,415

    -

    proxy-body-size

    +

    proxy-body-size

    Sets the maximum allowed size of the client request body. See NGINX client_max_body_size.

    -

    proxy-connect-timeout

    +

    proxy-connect-timeout

    Sets the timeout for establishing a connection with a proxied server. It should be noted that this timeout cannot usually exceed 75 seconds.

    -

    proxy-read-timeout

    +

    proxy-read-timeout

    Sets the timeout in seconds for reading a response from the proxied server. The timeout is set only between two successive read operations, not for the transmission of the whole response.

    -

    proxy-send-timeout

    +

    proxy-send-timeout

    Sets the timeout in seconds for transmitting a request to the proxied server. The timeout is set only between two successive write operations, not for the transmission of the whole request.

    -

    proxy-buffers-number

    +

    proxy-buffers-number

 Sets the number of buffers used for reading a response from the proxied server, for a single connection. 

    -

    proxy-buffer-size

    +

    proxy-buffer-size

    Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header.

 - proxy-cookie-path + proxy-cookie-path 

    Sets a text that should be changed in the path attribute of the “Set-Cookie” header fields of a proxied server response.

 - proxy-cookie-domain + proxy-cookie-domain 

    Sets a text that should be changed in the domain attribute of the “Set-Cookie” header fields of a proxied server response.

    -

    proxy-next-upstream

    +

    proxy-next-upstream

    Specifies in which cases a request should be passed to the next server.

    -

    proxy-next-upstream-timeout

    +

    proxy-next-upstream-timeout

    Limits the time in seconds during which a request can be passed to the next server.

    -

    proxy-next-upstream-tries

    +

    proxy-next-upstream-tries

 Limits the number of possible tries for passing a request to the next server. 

    -

    proxy-redirect-from

    +

    proxy-redirect-from

    Sets the original text that should be changed in the "Location" and "Refresh" header fields of a proxied server response. default: off

    References: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect

    -

    proxy-request-buffering

    +

    proxy-request-buffering

    Enables or disables buffering of a client request body.

    -

    ssl-redirect

    +

    ssl-redirect

    Sets the global value of redirects (301) to HTTPS if the server has a TLS certificate (defined in an Ingress rule). default: "true"

    -

    whitelist-source-range

    +

    whitelist-source-range

    Sets the default whitelisted IPs for each server block. This can be overwritten by an annotation on an Ingress rule. See ngx_http_access_module.
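 For example (CIDRs are illustrative): data: whitelist-source-range: "10.0.0.0/8,172.16.0.0/12" 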

    -

    skip-access-log-urls

    +

    skip-access-log-urls

 Sets a list of URLs that should not appear in the NGINX access log. This is useful for URLs like /health or health-check that would otherwise make reading the logs more difficult. default: is empty 

    -

    limit-rate

    +

    limit-rate

 Limits the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit. 

    References: http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate

    -

    limit-rate-after

    +

    limit-rate-after

    Sets the initial amount after which the further transmission of a response to a client will be rate limited.

    -

    lua-shared-dicts

    +

    lua-shared-dicts

    Customize default Lua shared dictionaries or define more. You can use the following syntax to do so:

    lua-shared-dicts: "<my dict name>: <my dict size>, [<my dict name>: <my dict size>], ..."
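 For instance (dictionary names and sizes are illustrative): lua-shared-dicts: "my_custom_plugin_dict: 5M, my_other_dict: 1M" 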
     
    @@ -4372,7 +4395,7 @@ See ngx_http

    References: http://nginx.org/en/docs/http/ngx_http_core_module.html#limit_rate_after

    -

    http-redirect-code

    +

    http-redirect-code

 Sets the HTTP status code to be used in redirects. Supported codes are 301, 302, 307 and 308. default: 308 

 @@ -4380,58 +4403,58 @@ Supported codes are RFC 7238 was created to define the 308 (Permanent Redirect) status code that is similar to 301 (Moved Permanently) but it keeps the payload in the redirect. This is important if we send a redirect in methods like POST. 

    -

    proxy-buffering

    +

    proxy-buffering

    Enables or disables buffering of responses from the proxied server.

    -

    limit-req-status-code

    +

    limit-req-status-code

    Sets the status code to return in response to rejected requests. default: 503

    -

    limit-conn-status-code

    +

    limit-conn-status-code

    Sets the status code to return in response to rejected connections. default: 503

    -

    no-tls-redirect-locations

    +

    no-tls-redirect-locations

    A comma-separated list of locations on which http requests will never get redirected to their https counterpart. default: "/.well-known/acme-challenge"

    -

    global-auth-url

    +

    global-auth-url

 A URL to an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-url. Locations that should not get authenticated can be listed using no-auth-locations (see no-auth-locations below). In addition, each service can be excluded from authentication via the annotation enable-global-auth set to "false". default: "" 

    References: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md#external-authentication
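 A sketch combining the global auth keys (the auth service URLs are illustrative): data: global-auth-url: "https://auth.example.com/oauth2/auth" global-auth-signin: "https://auth.example.com/oauth2/start" no-auth-locations: "/.well-known/acme-challenge,/healthz" 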

    -

    global-auth-method

    +

    global-auth-method

 An HTTP method to use for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-method. default: "" 

    -

    global-auth-signin

    +

    global-auth-signin

    Sets the location of the error page for an existing service that provides authentication for all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-signin. default: ""

    -

    global-auth-response-headers

    +

    global-auth-response-headers

    Sets the headers to pass to backend once authentication request completes. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-response-headers. default: ""

    -

    global-auth-request-redirect

    +

    global-auth-request-redirect

    Sets the X-Auth-Request-Redirect header value. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-request-redirect. default: ""

    -

    global-auth-snippet

    +

    global-auth-snippet

 Sets a custom snippet to use with external authentication. Applied to all the locations. Similar to the Ingress rule annotation nginx.ingress.kubernetes.io/auth-snippet. default: "" 

    -

    global-auth-cache-key

    +

    global-auth-cache-key

    Enables caching for global auth requests. Specify a lookup key for auth responses, e.g. $remote_user$http_authorization.

    -

    global-auth-cache-duration

    +

    global-auth-cache-duration

    Set a caching time for auth responses based on their response codes, e.g. 200 202 30m. See proxy_cache_valid for details. You may specify multiple, comma-separated values: 200 202 10m, 401 5m. defaults to 200 202 401 5m.

    -

    no-auth-locations

    +

    no-auth-locations

    A comma-separated list of locations that should not get authenticated. default: "/.well-known/acme-challenge"

    -

    block-cidrs

    +

    block-cidrs

 A comma-separated list of IP addresses (or subnets) from which requests have to be blocked globally. 

    References: http://nginx.org/en/docs/http/ngx_http_access_module.html#deny

    -

    block-user-agents

    +

    block-user-agents

 A comma-separated list of User-Agent values from which requests have to be blocked globally. Both full strings and regular expressions can be used here. More details about valid patterns can be found at map Nginx directive documentation. 

    References: http://nginx.org/en/docs/http/ngx_http_map_module.html#map
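 A sketch of the blocking options (values are illustrative): data: block-cidrs: "192.0.2.0/24,198.51.100.7" block-user-agents: "~*bad-bot.*,EvilScraper" 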

    -

    block-referers

    +

    block-referers

 A comma-separated list of Referers from which requests have to be blocked globally. Both full strings and regular expressions can be used here. More details about valid patterns can be found at map Nginx directive documentation. 

    References: diff --git a/user-guide/nginx-configuration/custom-template/index.html b/user-guide/nginx-configuration/custom-template/index.html index bdd66e158..87e25da98 100644 --- a/user-guide/nginx-configuration/custom-template/index.html +++ b/user-guide/nginx-configuration/custom-template/index.html @@ -1150,7 +1150,7 @@ -

    Custom NGINX template

    +

    Custom NGINX template

    The NGINX template is located in the file /etc/nginx/template/nginx.tmpl.

 Using a Volume it is possible to use a custom template. This includes using a ConfigMap as the source of the template. 
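 A sketch of the Deployment fragment for mounting such a ConfigMap (volume and ConfigMap names are illustrative; the mount path matches the template location above): volumeMounts: - name: nginx-template-volume mountPath: /etc/nginx/template readOnly: true volumes: - name: nginx-template-volume configMap: name: nginx-template items: - key: nginx.tmpl path: nginx.tmpl 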

    diff --git a/user-guide/nginx-configuration/index.html b/user-guide/nginx-configuration/index.html index 2a255479f..cd6bc845e 100644 --- a/user-guide/nginx-configuration/index.html +++ b/user-guide/nginx-configuration/index.html @@ -1150,7 +1150,7 @@ -

    NGINX Configuration

    +

    NGINX Configuration

    There are three ways to customize NGINX:

    1. ConfigMap: using a Configmap to set global configurations in NGINX.
    2. diff --git a/user-guide/nginx-configuration/log-format/index.html b/user-guide/nginx-configuration/log-format/index.html index 6cea88a38..aebf431fd 100644 --- a/user-guide/nginx-configuration/log-format/index.html +++ b/user-guide/nginx-configuration/log-format/index.html @@ -1150,7 +1150,7 @@ -

      Log format

      +

      Log format

      The default configuration uses a custom logging format to add additional information about upstreams, response time and status.

      log_format upstreaminfo
           '$remote_addr - $remote_user [$time_local] "$request" '
      diff --git a/user-guide/third-party-addons/modsecurity/index.html b/user-guide/third-party-addons/modsecurity/index.html
      index 58f83ce2b..36d1f1228 100644
      --- a/user-guide/third-party-addons/modsecurity/index.html
      +++ b/user-guide/third-party-addons/modsecurity/index.html
      @@ -1150,7 +1150,7 @@
                         
                       
                       
      -                

      ModSecurity Web Application Firewall

      +

      ModSecurity Web Application Firewall

      ModSecurity is an open source, cross platform web application firewall (WAF) engine for Apache, IIS and Nginx that is developed by Trustwave's SpiderLabs. It has a robust event-based programming language which provides protection from a range of attacks against web applications and allows for HTTP traffic monitoring, logging and real-time analysis - https://www.modsecurity.org

      The ModSecurity-nginx connector is the connection point between NGINX and libmodsecurity (ModSecurity v3).
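 As a sketch, ModSecurity is typically switched on through the controller configuration ConfigMap (assuming the enable-modsecurity and enable-owasp-modsecurity-crs keys): data: enable-modsecurity: "true" enable-owasp-modsecurity-crs: "true" 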

      The default ModSecurity configuration file is located in /etc/nginx/modsecurity/modsecurity.conf. This is the only file located in this directory and contains the default recommended configuration. Using a volume we can replace this file with the desired configuration. diff --git a/user-guide/third-party-addons/opentracing/index.html b/user-guide/third-party-addons/opentracing/index.html index 7158f63ac..ae829a66e 100644 --- a/user-guide/third-party-addons/opentracing/index.html +++ b/user-guide/third-party-addons/opentracing/index.html @@ -1249,11 +1249,11 @@ -

      OpenTracing

      +

      OpenTracing

 Enables distributed tracing for requests served by NGINX via The OpenTracing Project. 

 Using the third party module opentracing-contrib/nginx-opentracing, the NGINX Ingress controller can configure NGINX to enable OpenTracing instrumentation. By default this feature is disabled. 

      -

      Usage

      +

      Usage

      To enable the instrumentation we must enable OpenTracing in the configuration ConfigMap:

      data:
         enable-opentracing: "true"
      @@ -1321,10 +1321,10 @@ datadog-service-name
       datadog-operation-name-override
       

      All these options (including host) allow environment variables, such as $HOSTNAME or $HOST_IP. In the case of Jaeger, if you have a Jaeger agent running on each machine in your cluster, you can use something like $HOST_IP (which can be 'mounted' with the status.hostIP fieldpath, as described here) to make sure traces will be sent to the local agent.

      -

      Examples

      +

      Examples

 The following examples show how to deploy and test different distributed tracing systems. These examples can be performed using Minikube. 

      -

      Zipkin

      +

      Zipkin

 The rnburn/zipkin-date-server GitHub repository contains an example of a dockerized date service. To install the example and the Zipkin collector, run: 

      kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/zipkin.yaml
      @@ -1346,7 +1346,7 @@ kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/ma
       
       

      In the Zipkin interface we can see the details: zipkin screenshot

      -

      Jaeger

      +

      Jaeger

      1. Enable Ingress addon in Minikube: diff --git a/user-guide/tls/index.html b/user-guide/tls/index.html index ce4b2ffcd..7e063b316 100644 --- a/user-guide/tls/index.html +++ b/user-guide/tls/index.html @@ -1303,8 +1303,8 @@ -

        TLS/HTTPS

        -

        TLS Secrets

        +

        TLS/HTTPS

        +

        TLS Secrets

        Anytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret.

        You can generate a self-signed certificate and private key with:

        $ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ${KEY_FILE} -out ${CERT_FILE} -subj "/CN=${HOST}/O=${HOST}"
        @@ -1315,7 +1315,7 @@
         

        The resulting secret will be of type kubernetes.io/tls.
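 For example, with the variables used above, the secret would typically be created with (the secret name ${CERT_NAME} is illustrative): $ kubectl create secret tls ${CERT_NAME} --key ${KEY_FILE} --cert ${CERT_FILE} 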

        -

        Default SSL Certificate

        +

        Default SSL Certificate

        NGINX provides the option to configure a server as a catch-all with server_name for requests that do not match any of the configured server names. @@ -1329,8 +1329,8 @@ If this flag is not provided NGINX will use a self-signed certificate.

        add --default-ssl-certificate=default/foo-tls in the nginx-controller deployment.

        The default certificate will also be used for ingress tls: sections that do not have a secretName option.

        -

        SSL Passthrough

        -

 The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by + 

        SSL Passthrough

        +

        The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by default. This is required to enable passthrough backends in Ingress objects.
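 A sketch of adding the flag to the controller Deployment (container name and other args are illustrative): containers: - name: nginx-ingress-controller args: - /nginx-ingress-controller - --enable-ssl-passthrough 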

        Warning

        @@ -1347,14 +1347,14 @@ passthrough proxy port (default: 442), which proxies the request to the default

        Unlike HTTP backends, traffic to Passthrough backends is sent to the clusterIP of the backing Service instead of individual Endpoints.

        -

        HTTP Strict Transport Security

        +

        HTTP Strict Transport Security

        HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives this header that browser will prevent any communications from being sent over HTTP to the specified domain and will instead send all communications over HTTPS.

        HSTS is enabled by default.

        To disable this behavior use hsts: "false" in the configuration ConfigMap.
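 For example: data: hsts: "false" 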

        -

        Server-side HTTPS enforcement through redirect

        +

        Server-side HTTPS enforcement through redirect

        By default the controller redirects HTTP clients to the HTTPS port 443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.

        This can be disabled globally using ssl-redirect: "false" in the NGINX config map, @@ -1367,7 +1367,7 @@ redirect to HTTPS even when there is no TLS certificate available. This can be achieved by using the nginx.ingress.kubernetes.io/force-ssl-redirect: "true" annotation in the particular resource.

      -

      Automated Certificate Management with Kube-Lego

      +

      Automated Certificate Management with Kube-Lego

      Tip

      Kube-Lego has reached end-of-life and is being @@ -1381,10 +1381,10 @@ by monitoring ingress resources and their referenced secrets.

 To set up Kube-Lego you can take a look at this full example. The first version to fully support Kube-Lego is Nginx Ingress controller 0.8. 

      -

      Default TLS Version and Ciphers

      +

      Default TLS Version and Ciphers

      To provide the most secure baseline configuration possible,

      nginx-ingress defaults to using TLS 1.2 only and a secure set of TLS ciphers.

      -

      Legacy TLS

      +

      Legacy TLS

      The default configuration, though secure, does not support some older browsers and operating systems.

      For instance, TLS 1.1+ is only enabled by default from Android 5.0 on. At the time of writing, May 2018, approximately 15% of Android devices