In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide external clients with a single point of contact to the NGINX Ingress controller and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers.
The rest of this document describes a few recommended approaches to deploying the NGINX Ingress controller inside a Kubernetes cluster running on bare-metal.
MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.
This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes. In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details.
Note
The description of other supported configuration modes is out of scope for this document.
Warning
MetalLB is currently in beta. Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly.
MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions.
MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.
Example
Given the following 3-node Kubernetes cluster (the external IP is added as an example; in most bare-metal environments this value is <None>):
$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3
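MetalLB could then be configured with a ConfigMap similar to the following minimal sketch. It assumes MetalLB was installed in the metallb-system namespace; the address range is illustrative and must consist of addresses that are routable in your network and not otherwise in use:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      # illustrative range dedicated to MetalLB
      - 203.0.113.10-203.0.113.15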
cipherlist.eu (one of many forks of the now dead project cipherli.st)
This guide describes which of the configurations from those guides are already implemented by default in the NGINX implementation of Kubernetes Ingress, which need to be configured, which are obsolete because NGINX runs as a container (the CIS benchmark relates to a non-containerized installation), and which are difficult or not possible.
Be aware that this is only a guide and you are responsible for your own implementation. Some of the configurations may leave specific clients unable to reach your site, or have similar consequences.
This guide refers to chapters in the CIS Benchmark. For the full explanation, refer to the benchmark document itself.
4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored)
OK
HSTS is enabled by default
4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored)
ACTION NEEDED / RISK TO BE ACCEPTED
HPKP not enabled by default
If Let's Encrypt is not used, set the correct HPKP header. There are several ways to implement this; with the Helm charts it works via controller.add-headers (see the sketch below). If Let's Encrypt is used, this is complicated and a solution is not yet known.
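One way to add such a header to every response is the controller's add-headers ConfigMap key, which points at a separate ConfigMap containing the headers. A minimal sketch, assuming the controller runs in the ingress-nginx namespace; the ConfigMap names and the pin values are placeholders only, not a recommendation:

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-response-headers
  namespace: ingress-nginx
data:
  Public-Key-Pins: 'pin-sha256="<primary-pin>"; pin-sha256="<backup-pin>"; max-age=5184000'
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # add-headers references the namespace/name of the ConfigMap with the extra headers
  add-headers: "ingress-nginx/custom-response-headers"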
4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored)
DEPENDS ON BACKEND
Highly dependent on the backends; not every backend allows configuring this. It can also be mitigated via a service mesh.
The default configuration watches Ingress objects from all namespaces.
To change this behavior, use the flag --watch-namespace to limit the scope to a particular namespace.
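A minimal sketch of how the flag could be set on the controller container in its Deployment; the surrounding fields are illustrative, only the --watch-namespace flag itself is taken from the text above:

containers:
- name: controller
  args:
  - /nginx-ingress-controller
  # only reconcile Ingress objects from this namespace
  - --watch-namespace=my-namespace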
Warning
If multiple Ingresses define paths for the same host, the ingress controller merges the definitions.
Danger
The admission webhook requires connectivity between Kubernetes API server and the ingress controller.
If network policies or additional firewalls are in place, please allow access to port 8443.
Attention
The first time the ingress controller starts, two Jobs create the SSL Certificate used by the admission webhook. For this reason, there is an initial delay of up to two minutes until it is possible to create and validate Ingress definitions.
You can wait until it is ready before running the next command:
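For example, assuming the controller was deployed in the ingress-nginx namespace with the standard component labels, a wait similar to the following can be used:

$ kubectl wait --namespace ingress-nginx \
    --for=condition=ready pod \
    --selector=app.kubernetes.io/component=controller \
    --timeout=120s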
This example applies to nginx-ingress-controllers being deployed in an environment with RBAC enabled.
Role-Based Access Control (RBAC) is comprised of four layers:
ClusterRole - permissions assigned to a role that apply to an entire cluster
ClusterRoleBinding - binding a ClusterRole to a specific account
Role - permissions assigned to a role that apply to a specific namespace
RoleBinding - binding a Role to a specific account
In order for RBAC to be applied to an nginx-ingress-controller, that controller should be assigned to a ServiceAccount. That ServiceAccount should be bound to the Roles and ClusterRoles defined for the nginx-ingress-controller.
There are two sets of permissions defined in this example: cluster-wide permissions defined by the ClusterRole named nginx-ingress-clusterrole, and namespace-specific permissions defined by the Role named nginx-ingress-role.
These permissions are granted so that the nginx-ingress-controller can function as an ingress across the cluster. They are granted to the ClusterRole named nginx-ingress-clusterrole.
These permissions are specific to the nginx-ingress namespace. They are granted to the Role named nginx-ingress-role.
configmaps, pods, secrets: get
endpoints: get
Furthermore, to support leader election, the nginx-ingress-controller needs access to a ConfigMap using the resourceName ingress-controller-leader-nginx.
Note that resourceNames can NOT be used to limit requests using the “create” verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a “create” request are part of the request body).
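A minimal sketch of how these Role rules could look, illustrating why the create verb has to be granted without resourceNames while get and update can be scoped to the leader-election ConfigMap:

rules:
- apiGroups: [""]
  resources: ["configmaps"]
  # create cannot be limited by resourceNames (the name is only in the request body)
  verbs: ["create"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["ingress-controller-leader-nginx"]
  verbs: ["get", "update"]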
The ServiceAccount nginx-ingress-serviceaccount is bound to the Role nginx-ingress-role and the ClusterRole nginx-ingress-clusterrole.
The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the nginx-ingress namespace.
No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx.
This part of the code is responsible for the main logic of Ingress NGINX. It contains all the logic that parses Ingress objects and annotations, watches Endpoints, and turns them into a usable nginx.conf configuration.
Ingress-nginx has an internal model of the ingresses, secrets and endpoints in a given cluster. It maintains two copies of that model: (1) the currently running configuration model and (2) the one generated in response to some changes in the cluster.
The sync logic diffs the two models and if there's a change it tries to converge the running configuration to the new one.
There are static and dynamic configuration changes.
All endpoints and certificate changes are handled dynamically by posting the payload to an internal NGINX endpoint that is handled by Lua.
Contains the Dockerfiles and scripts used to build base images that are used in other parts of the repo. They are present in the images directory. Some examples:
* nginx - The base NGINX image ingress-nginx uses is not a vanilla NGINX. It bundles many libraries together and it is a job in itself to maintain that and keep things up to date.
* custom-error-pages - Used on the custom error page examples.
The image used to build the final ingress controller, used in deploy scripts and Helm charts.
This is NGINX with some Lua enhancements. We do dynamic certificate handling, endpoint handling, canary traffic splitting, custom load balancing, etc. in this component. One can also add new functionality using the Lua plugin system.
One of the functions of Ingress NGINX is to turn Ingress objects into an nginx.conf file.
To do so, the final step is to apply those configurations to nginx.tmpl, turning it into the final nginx.conf file.
Since release 0.19.0 it is possible to configure SSL certificates without the need for NGINX reloads (thanks to Lua), and since release 0.24.0 the dynamic mode is enabled by default.
Deprecate the flag.
Move the directives ssl_certificate and ssl_certificate_key from each server block to the http section. These settings are required to avoid NGINX errors in the logs.
Remove any action of the flag --enable-dynamic-certificates.
Teach ingress-nginx about the availability zones where endpoints are running. This way the ingress-nginx pod will do its best to proxy to a zone-local endpoint.
When users run their services across multiple availability zones they usually pay for egress traffic between zones. Providers such as GCP and Amazon EC2 usually charge extra for this. When picking an endpoint to route a request to, ingress-nginx does not consider whether the endpoint is in a different zone or the same one. That means it is at least equally likely to pick an endpoint from another zone and proxy the request to it. In this situation, the response from the endpoint to the ingress-nginx pod is considered inter-zone traffic and usually costs extra money.
At the time of this writing, GCP charges $0.01 per GB of inter-zone egress traffic according to https://cloud.google.com/compute/network-pricing. According to https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/ Amazon also charges the same amount of money as GCP for cross-zone, egress traffic.
This can be a lot of money depending on one's traffic. By teaching ingress-nginx about zones we can eliminate or at least decrease this cost.
Arguably, intra-zone network latency should also be better than cross-zone latency.
This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from the ingress-nginx pod(s) in that zone.
This feature will rely on https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone; it is not this KEP's goal to support other cases.
The idea here is to have the controller part of ingress-nginx (1) detect what zone its current pod is running in and (2) detect the zone for every endpoint it knows about. After that, it will post that data as part of the endpoints to Lua land. When picking an endpoint, the Lua balancer will try to pick a zone-local endpoint first, and if there is no zone-local endpoint it will fall back to the current behavior.
Initially, this feature should be optional since it is going to make it harder to reason about the load balancing and not everyone might want that.
How does the controller know what zone it runs in? We can have the pod spec pass the node name as an environment variable using the downward API. Upon startup, the controller can get node details from the API based on the node name. Once the node details are obtained, we can extract the zone from the failure-domain.beta.kubernetes.io/zone annotation. Then we can pass that value to Lua land through the NGINX configuration when loading the lua_ingress.lua module in the init_by_lua phase.
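A minimal sketch of the downward API part of the controller pod spec; the environment variable name is illustrative:

env:
- name: POD_NODE_NAME
  valueFrom:
    fieldRef:
      # exposes the name of the node the pod is scheduled on
      fieldPath: spec.nodeName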
How do we extract zones for endpoints? We can have the controller watch create and update events on nodes in the entire cluster and, based on that, keep a map of nodes to zones in memory. When we generate the endpoints list, we can access the node name using .subsets.addresses[i].nodeName, fetch the zone from the map in memory based on that, and store it as a field on the endpoint. This solution assumes the failure-domain.beta.kubernetes.io/zone annotation does not change until the end of the node's life. Otherwise, we would have to watch update events on the nodes as well, and that would add even more overhead.
Alternatively, we can get the list of nodes only when there is no node in memory for the given node name. This is probably a better solution because then we would avoid watching for API changes on node resources. We can eagerly fetch all the nodes and build the node-name-to-zone mapping on start. From there on, it will sync during endpoint building in the main event loop if there is no existing entry for the node of an endpoint. This means an extra API call in case the cluster has expanded.
How do we make sure we do our best to choose a zone-local endpoint? This will be done on the Lua side. For every backend, we will initialize two balancer instances: (1) with all endpoints and (2) with all endpoints corresponding to the current zone for the backend. Then, for a given request, once we choose which backend needs to serve it, we will first try to use the zonal balancer for that backend. If a zonal balancer does not exist (i.e. there is no zonal endpoint), then we will use the general balancer. In case of zonal outages, we assume that the readiness probe will fail and the controller will see no endpoints for the backend, and therefore we will use the general balancer.
We can enable the feature using a ConfigMap setting. Doing it this way makes it easier to roll back in case of a problem.
This is the title of the KEP. Keep it simple and descriptive. A good title can help communicate what the KEP is and should be considered as part of any review.
The title should be lowercased and spaces/punctuation should be replaced with -.
To get started with this template:
Make a copy of this template. Create a copy of this template and name it YYYYMMDD-my-title.md, where YYYYMMDD is the date the KEP was first drafted.
Fill out the "overview" sections. This includes the Summary and Motivation sections. These should be easy if you've preflighted the idea of the KEP in an issue.
Create a PR. Assign it to folks that are sponsoring this process.
Create an issue. When filing an enhancement tracking issue, please ensure you complete all fields in the template.
Merge early. Avoid getting hung up on specific details and instead aim to get the goal of the KEP merged quickly. The best way to do this is to just start with the "Overview" sections and fill out details incrementally in follow-on PRs. View anything marked as provisional as a working document and subject to change. Aim for single-topic PRs to keep discussions focused. If you disagree with what is already in a document, open a new PR with suggested changes.
The canonical place for the latest set of instructions (and the likely source of this file) is here.
The Metadata section above is intended to support the creation of tooling around the KEP process. This will be a YAML section that is fenced as a code block. See the KEP process for details on each of these items.
A table of contents is helpful for quickly jumping to sections of a KEP and for highlighting any additional information provided beyond the standard KEP template.
Ensure the TOC is wrapped with <!-- toc --><!-- /toc --> tags, and then generate with hack/update-toc.sh.
The Summary section is incredibly important for producing high quality user-focused documentation such as release notes or a development roadmap. It should be possible to collect this information before implementation begins in order to avoid requiring implementers to split their attention between writing release notes and implementing the feature itself.
A good summary is probably at least a paragraph in length.
This section is for explicitly listing the motivation, goals and non-goals of this KEP. Describe why the change is important and the benefits to users. The motivation section can optionally provide links to experience reports to demonstrate the interest in a KEP within the wider Kubernetes community.
Detail the things that people will be able to do if this KEP is implemented. Include as much detail as possible so that people can understand the "how" of the system. The goal here is to make this feel real for users without getting bogged down.
What are the caveats to the implementation? What are some important details that didn't come across above? Go into as much detail as necessary here. This might be a good place to talk about core concepts and how they relate.
What are the risks of this proposal and how do we mitigate them? Think broadly. For example, consider both security and how this will impact the larger Kubernetes ecosystem.
How will security be reviewed and by whom? How will UX be reviewed and by whom?
Consider including folks that also work outside the project.
Note: Section not required until targeted at a release.
Consider the following in developing a test plan for this enhancement:
Will there be e2e and integration tests, in addition to unit tests?
How will it be tested in isolation vs with other components?
No need to outline all of the test cases, just the general strategy. Anything that would count as tricky in the implementation and anything particularly challenging to test should be called out.
All code is expected to have adequate tests (eventually with coverage expectations). Please adhere to the Kubernetes testing guidelines when drafting this test plan.
Similar to the Drawbacks section the Alternatives section is used to highlight and record other possible approaches to delivering the value proposed by a KEP.
A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project. For this reason, the ingress-nginx project is adopting it.
No... but we hope that you will. Over time having a rich set of KEPs in one place will make it easier for people to track what is going on in the community and find a structured historic record.
KEPs are only required when the changes are wide-ranging and impact most of the project.
Our aim with KEPs is to clearly communicate new efforts to the Kubernetes contributor community. As such, we want to build a well curated set of clear proposals in a common format with useful metadata.
Benefits to KEP users (in the limit):
Exposure on a Kubernetes-blessed website that is findable via web search engines.
Cross indexing of KEPs so that users can find connections and the current status of any KEP.
A clear process with approvers and reviewers for making decisions. This will lead to more structured decisions that stick as there is a discoverable record around the decisions.
We are inspired by IETF RFCs, Python PEPs, and Rust RFCs.
Legacy version of the previous annotation, for compatibility with older browsers; generates an Expires cookie directive by adding the seconds to the current date.
When set to false, nginx ingress will send the request to the upstream pointed to by the sticky cookie even if the previous attempt failed. When set to true and the previous attempt failed, the sticky cookie will be changed to point to another upstream.
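A hedged sketch of an Ingress combining these session-affinity annotations; the host, service and values are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sticky-example
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    # Max-Age plus the legacy Expires variant for older browsers
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    # re-pin the cookie to another upstream if the previous attempt failed
    nginx.ingress.kubernetes.io/session-cookie-change-on-failure: "true"
spec:
  rules:
  - host: stickyingress.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80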
This example shows how to add authentication to an Ingress rule using a secret that contains a file generated with htpasswd. It's important that the generated file is named auth (actually, that the secret has a key data.auth); otherwise the ingress controller returns a 503.
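A hedged sketch of the flow; the user, secret, host and service names are illustrative:

$ htpasswd -c auth foo
$ kubectl create secret generic basic-auth --from-file=auth

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-with-auth
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    # name of the secret created from the auth file above
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80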
It is possible to enable Client-Certificate Authentication by adding additional annotations to your Ingress Resource. Before getting started, you must have the following certificates set up:
CA certificate and key (intermediate certs need to be in the CA)
Server certificate (signed by the CA) and key (the CN should equal the hostname you will use)
Client certificate (signed by the CA) and key
For more details on the generation process, check out the Prerequisite docs.
You can have as many certificates as you want. If they're in the binary DER format, you can convert them as follows:
openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem
Then, you can concatenate them all into a single file named 'ca.crt', as follows:
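For example (the input file names are illustrative):

$ cat certificate1.crt certificate2.crt certificate3.crt > ca.crt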
Note: Make sure that the key size is greater than 1024 and the hashing algorithm (digest) is something better than MD5 for each certificate generated. Otherwise you will receive an error.
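A hedged sketch of the Ingress annotations that enable client-certificate verification, assuming the concatenated ca.crt was stored in a Secret named ca-secret in the default namespace; the host, service and TLS secret names are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mtls-example
  annotations:
    # enable verification of client certificates
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    # namespace/name of the Secret containing the ca.crt used to verify clients
    nginx.ingress.kubernetes.io/auth-tls-secret: "default/ca-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
spec:
  tls:
  - hosts:
    - mtls.example.com
    secretName: tls-secret
  rules:
  - host: mtls.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80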
This functionality is enabled by deploying multiple Ingress objects for a single host. One Ingress object has no special annotations and handles authentication.
Other Ingress objects can then be annotated in such a way that requires the user to authenticate against the first Ingress's endpoint, and can redirect 401s to the same endpoint.
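A hedged sketch of how a protected Ingress could reference the authentication endpoint exposed by the first Ingress; the URLs, host and service names are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: protected-app
  annotations:
    # every request is first validated against this URL
    nginx.ingress.kubernetes.io/auth-url: "https://foo.bar.com/oauth2/auth"
    # where to send users that receive a 401 from the auth endpoint
    nginx.ingress.kubernetes.io/auth-signin: "https://foo.bar.com/oauth2/start?rd=$escaped_request_uri"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80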
The Ingress in this example adds a custom header to the NGINX configuration that only applies to that specific Ingress. If you want to add headers that apply globally to all Ingresses, please have a look at this example.
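A hedged sketch using the configuration-snippet annotation to set one extra response header for this Ingress only; the header, host and service are illustrative:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: custom-header-example
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      # added only to the configuration generated for this Ingress
      more_set_headers "Request-Id: $req_id";
spec:
  rules:
  - host: custom.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: http-svc
            port:
              number: 80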
Check if the contents of the annotation are present in the nginx.conf file using: kubectl exec ingress-nginx-controller-873061567-4n3k2 -n kube-system -- cat /etc/nginx/nginx.conf
configmap.yaml defines a ConfigMap in the ingress-nginx namespace named ingress-nginx-controller. This controls the global configuration of the ingress controller, and already exists in a standard installation. The key proxy-set-headers is set to cite the previously-created ingress-nginx/custom-headers ConfigMap.
The nginx ingress controller will read the ingress-nginx/ingress-nginx-controller ConfigMap, find the proxy-set-headers key, read HTTP headers from the ingress-nginx/custom-headers ConfigMap, and include those HTTP headers in all requests flowing from nginx to the backends.
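A hedged sketch of the two ConfigMaps involved; the header names and values are illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-headers
  namespace: ingress-nginx
data:
  X-Different-Name: "true"
  X-Using-Nginx-Controller: "true"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # namespace/name of the ConfigMap holding the headers to set on proxied requests
  proxy-set-headers: "ingress-nginx/custom-headers"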
Check the contents of the ConfigMaps are present in the nginx.conf file using: kubectl exec ingress-nginx-controller-873061567-4n3k2 -n ingress-nginx -- cat /etc/nginx/nginx.conf
Custom DH parameters for perfect forward secrecy
This example aims to demonstrate the deployment of an NGINX Ingress controller and the use of a ConfigMap to configure a custom Diffie-Hellman parameters file to help with "Perfect Forward Secrecy".
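A hedged sketch of the steps, assuming the controller runs in the ingress-nginx namespace and reads a ConfigMap named ingress-nginx-controller; the Secret name is illustrative:

$ openssl dhparam -out dhparam.pem 4096
$ kubectl create secret generic lb-dhparam --from-file=dhparam.pem -n ingress-nginx

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # namespace/name of the Secret containing dhparam.pem
  ssl-dh-param: "ingress-nginx/lb-dhparam"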