e2e test suite for NGINX Ingress Controller

[Default Backend] change default settings

[Default Backend]

[Default Backend] custom service

[Default Backend] SSL

[TCP] tcp-services

auth-*

affinitymode

proxy-*

mirror-*

canary-*

limit-rate

force-ssl-redirect

http2-push-preload

proxy-ssl-*

modsecurity owasp

backend-protocol - GRPC

cors-*

influxdb-*

Annotation - limit-connections

client-body-buffer-size

default-backend

connection-proxy-header

upstream-vhost

custom-http-errors

disable-access-log disable-http-access-log disable-stream-access-log

server-snippet

rewrite-target use-regex enable-rewrite-log

app-root

whitelist-source-range

enable-access-log enable-rewrite-log

x-forwarded-prefix

configuration-snippet

backend-protocol - FastCGI

from-to-www-redirect

permanent-redirect permanent-redirect-code

upstream-hash-by-*

annotation-global-rate-limit

backend-protocol

satisfy

server-alias

ssl-ciphers

auth-tls-*

[Status] status update

Debug CLI

[Memory Leak] Dynamic Certificates

[Ingress] [PathType] mix Exact and Prefix paths

[Ingress] definition without host

single ingress - multiple hosts

[Ingress] [PathType] exact

[Ingress] [PathType] prefix checks

[Security] request smuggling

[SSL] [Flag] default-ssl-certificate

enable-real-ip

access-log

[Lua] lua-shared-dicts

server-tokens

use-proxy-protocol

[Flag] custom HTTP and HTTPS ports

[Security] no-auth-locations

Dynamic $proxy_host

proxy-connect-timeout

[Security] Pod Security Policies

Geoip2

[Security] Pod Security Policies with volumes

enable-multi-accept

log-format-*

[Flag] ingress-class

ssl-ciphers

proxy-next-upstream

[Security] global-auth-url

[Security] block-*

plugins

Configmap - limit-rate

Configure OpenTracing

use-forwarded-headers

proxy-send-timeout

Add no tls redirect locations

settings-global-rate-limit

add-headers

hash size

keep-alive keep-alive-requests

[Flag] disable-catch-all

main-snippet

[SSL] TLS protocols, ciphers and headers

Configmap change

proxy-read-timeout

[Security] modsecurity-snippet

OCSP

reuse-port

[Shutdown] Graceful shutdown with pending request

[Shutdown] ingress controller

[Service] backend status code 503

[Service] Type ExternalName

Remove static SSL configuration mode

Table of Contents

Summary

Since release 0.19.0 it has been possible to configure SSL certificates without NGINX reloads (thanks to Lua), and since release 0.24.0 the dynamic mode is enabled by default.

Motivation

The static configuration implies NGINX reloads, something that affects the majority of users.

Goals

  • Deprecation of the flag --enable-dynamic-certificates.
  • Cleanup of the codebase.

Non-Goals

  • Features related to certificate authentication are not changed in any way.

Proposal

  • Remove static SSL configuration

Implementation Details/Notes/Constraints

  • Deprecate the flag --enable-dynamic-certificates.
  • Move the directives ssl_certificate and ssl_certificate_key from each server block to the http section (see the sketch below). These settings are required to avoid NGINX errors in the logs.
  • Remove any action of the flag --enable-dynamic-certificates
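
A minimal Go sketch of what the directive move could look like, assuming the controller keeps rendering nginx.conf from a text/template; the template text, field names, and the fallback-certificate path below are illustrative assumptions, not the controller's actual template data. The http block carries a single static fallback certificate, while the real per-server certificates are served dynamically from Lua.

```go
package main

import (
	"os"
	"text/template"
)

// Hypothetical template fragment: ssl_certificate and ssl_certificate_key live
// in the http section and point at a placeholder certificate; the real
// certificates are selected at handshake time by the Lua certificate handler.
const httpBlock = `
http {
    ssl_certificate     {{ .FallbackCertPath }};
    ssl_certificate_key {{ .FallbackKeyPath }};
}
`

type sslData struct {
	FallbackCertPath string
	FallbackKeyPath  string
}

func main() {
	t := template.Must(template.New("http").Parse(httpBlock))
	// Illustrative path for a self-signed fallback certificate bundle.
	data := sslData{
		FallbackCertPath: "/etc/ingress-controller/ssl/default-fake-certificate.pem",
		FallbackKeyPath:  "/etc/ingress-controller/ssl/default-fake-certificate.pem",
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
```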

Drawbacks

Alternatives

Keep both implementations

Availability zone aware routing

Table of Contents

Summary

Teach ingress-nginx about the availability zones where endpoints are running. This way the ingress-nginx pod will do its best to proxy to a zone-local endpoint.

Motivation

When users run their services across multiple availability zones they usually pay for the egress traffic between zones. Providers such as GCP and Amazon EC2 usually charge extra for this. When picking an endpoint to route a request to, ingress-nginx does not consider whether the endpoint is in the same zone or a different one. That means it is at least equally likely to pick an endpoint from another zone and proxy the request to it. In this situation the response from the endpoint to the ingress-nginx pod is considered inter-zone traffic and usually costs extra money.

At the time of this writing, GCP charges $0.01 per GB of inter-zone egress traffic according to https://cloud.google.com/compute/network-pricing. According to https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/ Amazon charges the same amount as GCP for cross-zone egress traffic.

This can be a lot of money depending on one's traffic. By teaching ingress-nginx about zones we can eliminate, or at least decrease, this cost.

Arguably, zone-local network latency should also be better than cross-zone latency.

Goals

  • Given a regional cluster running ingress-nginx, ingress-nginx should make a best effort to pick a zone-local endpoint when proxying
  • This should not impact the canary feature
  • ingress-nginx should be able to operate successfully if there are no zonal endpoints

Non-Goals

  • This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress-nginx pod(s) in that zone
  • This feature relies on https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone; it is not this KEP's goal to support other cases

Proposal

The idea here is to have the controller part of ingress-nginx (1) detect what zone its current pod is running in and (2) detect the zone of every endpoint it knows about. After that, it will post that data as part of the endpoints to Lua land. When picking an endpoint, the Lua balancer will try to pick a zone-local endpoint first; if there is no zone-local endpoint, it will fall back to the current behavior.

Initially, this feature should be optional since it is going to make it harder to reason about the load balancing and not everyone might want that.

How does the controller know what zone it runs in? We can have the pod spec pass the node name to the controller as an environment variable using the downward API. Upon startup, the controller can get the node details from the API based on the node name. Once the node details are obtained, we can extract the zone from the failure-domain.beta.kubernetes.io/zone label. Then we can pass that value to Lua land through the NGINX configuration when loading the lua_ingress.lua module in the init_by_lua phase.
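
A minimal sketch of that lookup, assuming the pod spec injects the node name as a NODE_NAME environment variable via the downward API and that the controller runs with in-cluster API access; the variable name and helper below are illustrative, not the controller's actual code.

```go
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// getZone fetches the Node object by name and returns its zone label.
func getZone(client kubernetes.Interface, nodeName string) (string, error) {
	node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	return node.Labels["failure-domain.beta.kubernetes.io/zone"], nil
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// NODE_NAME is assumed to be injected via the downward API (spec.nodeName).
	zone, err := getZone(client, os.Getenv("NODE_NAME"))
	if err != nil {
		panic(err)
	}
	fmt.Println("running in zone:", zone)
}
```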

How do we extract zones for endpoints? We can have the controller watch create events on nodes in the entire cluster and, based on that, keep a map of nodes to zones in memory. When we generate the endpoints list, we can access the node name using .subsets.addresses[i].nodeName, fetch the zone from the in-memory map, and store it as a field on the endpoint. This solution assumes the failure-domain.beta.kubernetes.io/zone label does not change until the end of the node's life. Otherwise, we would have to watch update events on the nodes as well, and that would add even more overhead.

Alternatively, we can fetch the node only when there is no entry in memory for the given node name. This is probably a better solution because it avoids watching for API changes on node resources. We can eagerly fetch all the nodes and build the node-name-to-zone mapping at startup. From there on, it will sync during endpoint building in the main event loop whenever there is no existing entry for the node of an endpoint. This means an extra API call in case the cluster has expanded.
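
A minimal sketch of this lazy node-to-zone cache, again assuming zones are exposed via the failure-domain.beta.kubernetes.io/zone node label; the type and method names are illustrative. The cache is only consulted and filled while building the endpoint list, so no node watch is required.

```go
package zoneaware

import (
	"context"
	"sync"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

type zoneCache struct {
	mu     sync.Mutex
	client kubernetes.Interface
	zones  map[string]string // node name -> zone
}

func newZoneCache(client kubernetes.Interface) *zoneCache {
	return &zoneCache{client: client, zones: map[string]string{}}
}

// ZoneFor returns the cached zone for nodeName, calling the API server only on
// a cache miss (e.g. after the cluster has expanded with new nodes).
func (c *zoneCache) ZoneFor(ctx context.Context, nodeName string) (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if zone, ok := c.zones[nodeName]; ok {
		return zone, nil
	}
	node, err := c.client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	zone := node.Labels["failure-domain.beta.kubernetes.io/zone"]
	c.zones[nodeName] = zone
	return zone, nil
}
```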

How do we make sure we do our best to choose a zone-local endpoint? This will be done on the Lua side. For every backend, we will initialize two balancer instances: (1) one with all endpoints and (2) one with only the endpoints in the current zone for that backend. Then, given a request, once we choose which backend needs to serve it, we will first try to use the zonal balancer for that backend. If the zonal balancer does not exist (i.e. there is no zonal endpoint), we will use the general balancer. In case of a zonal outage, we assume that the readiness probe will fail, the controller will see no endpoints for the backend, and therefore we will use the general balancer.
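
Per the proposal the selection logic lives in Lua; the following is only a Go-flavored sketch of the same rule, with hypothetical types: each backend carries a general balancer over all endpoints and an optional zonal balancer over the zone-local endpoints, and the zonal one is preferred whenever it exists.

```go
package main

import "fmt"

// Balancer is a stand-in for an endpoint picker (round-robin, EWMA, ...).
type Balancer interface {
	Pick() string // returns "ip:port" of the chosen endpoint
}

type roundRobin struct {
	endpoints []string
	next      int
}

func (rr *roundRobin) Pick() string {
	ep := rr.endpoints[rr.next%len(rr.endpoints)]
	rr.next++
	return ep
}

// backend holds both balancers; zonal is nil when no zone-local endpoint exists.
type backend struct {
	general Balancer
	zonal   Balancer
}

// pickEndpoint prefers the zonal balancer and falls back to the general one.
func (b *backend) pickEndpoint() string {
	if b.zonal != nil {
		return b.zonal.Pick()
	}
	return b.general.Pick()
}

func main() {
	b := &backend{
		general: &roundRobin{endpoints: []string{"10.0.1.5:8080", "10.0.2.7:8080"}},
		zonal:   &roundRobin{endpoints: []string{"10.0.1.5:8080"}},
	}
	fmt.Println(b.pickEndpoint()) // zone-local endpoint preferred
}
```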

We can enable the feature using a ConfigMap setting. Doing it this way makes it easier to roll back in case of a problem.

Implementation History

  • initial version of KEP is shipped
  • proposal and implementation details are done

Drawbacks [optional]

More load on the Kubernetes API server.

Title

This is the title of the KEP. Keep it simple and descriptive. A good title can help communicate what the KEP is and should be considered as part of any review.

The title should be lowercased and spaces/punctuation should be replaced with -.

To get started with this template:

  1. Make a copy of this template. Create a copy of this template and name it YYYYMMDD-my-title.md, where YYYYMMDD is the date the KEP was first drafted.
  2. Fill out the "overview" sections. This includes the Summary and Motivation sections. These should be easy if you've preflighted the idea of the KEP in an issue.
  3. Create a PR. Assign it to folks that are sponsoring this process.
  4. Create an issue When filing an enhancement tracking issue, please ensure to complete all fields in the template.
  5. Merge early. Avoid getting hung up on specific details and instead aim to get the goal of the KEP merged quickly. The best way to do this is to just start with the "Overview" sections and fill out details incrementally in follow-on PRs. View anything marked as provisional as a working document and subject to change. Aim for single-topic PRs to keep discussions focused. If you disagree with what is already in a document, open a new PR with suggested changes.

The canonical place for the latest set of instructions (and the likely source of this file) is here.

The Metadata section above is intended to support the creation of tooling around the KEP process. This will be a YAML section that is fenced as a code block. See the KEP process for details on each of these items.

Table of Contents

A table of contents is helpful for quickly jumping to sections of a KEP and for highlighting any additional information provided beyond the standard KEP template.

Ensure the TOC is wrapped with <!-- toc --><!-- /toc --> tags, and then generate it with hack/update-toc.sh.

Summary

The Summary section is incredibly important for producing high quality user-focused documentation such as release notes or a development roadmap. It should be possible to collect this information before implementation begins in order to avoid requiring implementers to split their attention between writing release notes and implementing the feature itself.

A good summary is probably at least a paragraph in length.

Motivation

This section is for explicitly listing the motivation, goals and non-goals of this KEP. Describe why the change is important and the benefits to users. The motivation section can optionally provide links to experience reports to demonstrate the interest in a KEP within the wider Kubernetes community.

Goals

List the specific goals of the KEP. How will we know that this has succeeded?

Non-Goals

What is out of scope for this KEP? Listing non-goals helps to focus discussion and make progress.

Proposal

This is where we get down to the nitty gritty of what the proposal actually is.

User Stories [optional]

Detail the things that people will be able to do if this KEP is implemented. Include as much detail as possible so that people can understand the "how" of the system. The goal here is to make this feel real for users without getting bogged down.

Story 1

Story 2

Implementation Details/Notes/Constraints [optional]

What are the caveats to the implementation? What are some important details that didn't come across above? Go into as much detail as necessary here. This might be a good place to talk about core concepts and how they relate.

Risks and Mitigations

What are the risks of this proposal and how do we mitigate them? Think broadly. For example, consider both security and how this will impact the larger Kubernetes ecosystem.

How will security be reviewed and by whom? How will UX be reviewed and by whom?

Consider including folks that also work outside the project.

Design Details

Test Plan

Note: Section not required until targeted at a release.

Consider the following in developing a test plan for this enhancement:

  • Will there be e2e and integration tests, in addition to unit tests?
  • How will it be tested in isolation vs with other components?

No need to outline all of the test cases, just the general strategy. Anything that would count as tricky in the implementation and anything particularly challenging to test should be called out.

All code is expected to have adequate tests (eventually with coverage expectations). Please adhere to the Kubernetes testing guidelines when drafting this test plan.

Removing a deprecated flag

  • Announce deprecation and support policy of the existing flag
  • Two versions passed since introducing the functionality which deprecates the flag (to address version skew)
  • Address feedback on usage/changed behavior, provided on GitHub issues
  • Deprecate the flag

Implementation History

Major milestones in the life cycle of a KEP should be tracked in Implementation History. Major milestones might include:

  • the Summary and Motivation sections being merged signaling acceptance
  • the Proposal section being merged signaling agreement on a proposed design
  • the date implementation started
  • the first Kubernetes release where an initial version of the KEP was available
  • the version of Kubernetes where the KEP graduated to general availability
  • when the KEP was retired or superseded

Drawbacks [optional]

Why should this KEP not be implemented?

Alternatives [optional]

Similar to the Drawbacks section the Alternatives section is used to highlight and record other possible approaches to delivering the value proposed by a KEP.

Kubernetes Enhancement Proposals (KEPs)

A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project. For this reason, the ingress-nginx project is adopting it.

Quick start for the KEP process

Follow the process outlined in the KEP template

Do I have to use the KEP process?

No... but we hope that you will. Over time, having a rich set of KEPs in one place will make it easier for people to track what is going on in the community and find a structured historical record.

KEPs are only required when the changes are wide ranging and impact most of the project.

Why would I want to use the KEP process?

Our aim with KEPs is to clearly communicate new efforts to the Kubernetes contributor community. As such, we want to build a well curated set of clear proposals in a common format with useful metadata.

Benefits to KEP users (in the limit):

  • Exposure on a Kubernetes-blessed web site that is findable via web search engines.
  • Cross indexing of KEPs so that users can find connections and the current status of any KEP.
  • A clear process with approvers and reviewers for making decisions. This will lead to more structured decisions that stick as there is a discoverable record around the decisions.

We are inspired by IETF RFCs, Python PEPs, and Rust RFCs.
