diff --git a/search/search_index.json b/search/search_index.json index 57a853c9e..c530a4d52 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Overview \u00b6 This is the documentation for the Ingress NGINX Controller. It is built around the Kubernetes Ingress resource , using a ConfigMap to store the controller configuration. You can learn more about using Ingress in the official Kubernetes documentation . Getting Started \u00b6 See Deployment for a whirlwind tour that will get you started.","title":"Welcome"},{"location":"#overview","text":"This is the documentation for the Ingress NGINX Controller. It is built around the Kubernetes Ingress resource , using a ConfigMap to store the controller configuration. You can learn more about using Ingress in the official Kubernetes documentation .","title":"Overview"},{"location":"#getting-started","text":"See Deployment for a whirlwind tour that will get you started.","title":"Getting Started"},{"location":"e2e-tests/","text":"e2e test suite for Ingress NGINX Controller \u00b6 \u00b6 \u00b6 \u00b6 should set backend protocol to https:// and use proxy_pass should set backend protocol to $scheme:// and use proxy_pass should set backend protocol to grpc:// and use grpc_pass should set backend protocol to grpcs:// and use grpc_pass should set backend protocol to '' and use fastcgi_pass \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 should ignore Ingress of namespace without label foo=bar and accept those of namespace with label foo=bar \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6","title":"E2e tests"},{"location":"e2e-tests/#e2e-test-suite-for-ingress-nginx-controller","text":"","title":"e2e test suite for Ingress NGINX Controller"},{"location":"e2e-tests/#_1","text":"","title":""},{"location":"e2e-tests/#_2","text":"","title":""},{"location":"e2e-tests/#_3","text":"should set backend protocol to https:// and use proxy_pass should set backend protocol to $scheme:// and use proxy_pass should set backend protocol to grpc:// and use grpc_pass should set backend protocol to grpcs:// and use grpc_pass","title":""},{"location":"e2e-tests/#should-set-backend-protocol-to-and-use-fastcgi_pass","text":"","title":"should set backend protocol to '' and use 
fastcgi_pass"},{"location":"e2e-tests/#_4","text":"","title":""},{"location":"e2e-tests/#_5","text":"","title":""},{"location":"e2e-tests/#_6","text":"","title":""},{"location":"e2e-tests/#_7","text":"","title":""},{"location":"e2e-tests/#_8","text":"","title":""},{"location":"e2e-tests/#_9","text":"","title":""},{"location":"e2e-tests/#_10","text":"","title":""},{"location":"e2e-tests/#_11","text":"","title":""},{"location":"e2e-tests/#_12","text":"","title":""},{"location":"e2e-tests/#_13","text":"","title":""},{"location":"e2e-tests/#_14","text":"","title":""},{"location":"e2e-tests/#_15","text":"","title":""},{"location":"e2e-tests/#_16","text":"","title":""},{"location":"e2e-tests/#_17","text":"","title":""},{"location":"e2e-tests/#_18","text":"","title":""},{"location":"e2e-tests/#_19","text":"","title":""},{"location":"e2e-tests/#_20","text":"","title":""},{"location":"e2e-tests/#_21","text":"","title":""},{"location":"e2e-tests/#_22","text":"","title":""},{"location":"e2e-tests/#_23","text":"","title":""},{"location":"e2e-tests/#_24","text":"","title":""},{"location":"e2e-tests/#_25","text":"","title":""},{"location":"e2e-tests/#_26","text":"","title":""},{"location":"e2e-tests/#_27","text":"","title":""},{"location":"e2e-tests/#_28","text":"","title":""},{"location":"e2e-tests/#_29","text":"","title":""},{"location":"e2e-tests/#_30","text":"","title":""},{"location":"e2e-tests/#_31","text":"","title":""},{"location":"e2e-tests/#_32","text":"","title":""},{"location":"e2e-tests/#_33","text":"","title":""},{"location":"e2e-tests/#_34","text":"","title":""},{"location":"e2e-tests/#_35","text":"","title":""},{"location":"e2e-tests/#_36","text":"","title":""},{"location":"e2e-tests/#_37","text":"","title":""},{"location":"e2e-tests/#_38","text":"","title":""},{"location":"e2e-tests/#_39","text":"","title":""},{"location":"e2e-tests/#_40","text":"","title":""},{"location":"e2e-tests/#_41","text":"","title":""},{"location":"e2e-tests/#_42","text":"","title":""},{"location":"e2e-tests/#_43","text":"","title":""},{"location":"e2e-tests/#_44","text":"","title":""},{"location":"e2e-tests/#_45","text":"","title":""},{"location":"e2e-tests/#_46","text":"","title":""},{"location":"e2e-tests/#_47","text":"","title":""},{"location":"e2e-tests/#_48","text":"","title":""},{"location":"e2e-tests/#_49","text":"","title":""},{"location":"e2e-tests/#_50","text":"","title":""},{"location":"e2e-tests/#_51","text":"","title":""},{"location":"e2e-tests/#_52","text":"","title":""},{"location":"e2e-tests/#_53","text":"","title":""},{"location":"e2e-tests/#_54","text":"","title":""},{"location":"e2e-tests/#_55","text":"","title":""},{"location":"e2e-tests/#_56","text":"","title":""},{"location":"e2e-tests/#_57","text":"","title":""},{"location":"e2e-tests/#_58","text":"","title":""},{"location":"e2e-tests/#_59","text":"","title":""},{"location":"e2e-tests/#_60","text":"","title":""},{"location":"e2e-tests/#_61","text":"","title":""},{"location":"e2e-tests/#_62","text":"","title":""},{"location":"e2e-tests/#_63","text":"","title":""},{"location":"e2e-tests/#_64","text":"","title":""},{"location":"e2e-tests/#_65","text":"","title":""},{"location":"e2e-tests/#_66","text":"","title":""},{"location":"e2e-tests/#_67","text":"","title":""},{"location":"e2e-tests/#_68","text":"","title":""},{"location":"e2e-tests/#_69","text":"","title":""},{"location":"e2e-tests/#_70","text":"","title":""},{"location":"e2e-tests/#_71","text":"","title":""},{"location":"e2e-tests/#_72","text":"","title":""},{"location":"e2e-tests/#_73
","text":"","title":""},{"location":"e2e-tests/#_74","text":"","title":""},{"location":"e2e-tests/#_75","text":"","title":""},{"location":"e2e-tests/#_76","text":"","title":""},{"location":"e2e-tests/#_77","text":"","title":""},{"location":"e2e-tests/#_78","text":"","title":""},{"location":"e2e-tests/#_79","text":"","title":""},{"location":"e2e-tests/#_80","text":"","title":""},{"location":"e2e-tests/#_81","text":"","title":""},{"location":"e2e-tests/#_82","text":"","title":""},{"location":"e2e-tests/#_83","text":"","title":""},{"location":"e2e-tests/#_84","text":"","title":""},{"location":"e2e-tests/#_85","text":"","title":""},{"location":"e2e-tests/#_86","text":"","title":""},{"location":"e2e-tests/#_87","text":"","title":""},{"location":"e2e-tests/#_88","text":"","title":""},{"location":"e2e-tests/#_89","text":"","title":""},{"location":"e2e-tests/#_90","text":"","title":""},{"location":"e2e-tests/#_91","text":"","title":""},{"location":"e2e-tests/#_92","text":"","title":""},{"location":"e2e-tests/#_93","text":"","title":""},{"location":"e2e-tests/#_94","text":"","title":""},{"location":"e2e-tests/#_95","text":"","title":""},{"location":"e2e-tests/#_96","text":"","title":""},{"location":"e2e-tests/#_97","text":"","title":""},{"location":"e2e-tests/#_98","text":"","title":""},{"location":"e2e-tests/#_99","text":"","title":""},{"location":"e2e-tests/#_100","text":"","title":""},{"location":"e2e-tests/#_101","text":"","title":""},{"location":"e2e-tests/#_102","text":"","title":""},{"location":"e2e-tests/#_103","text":"","title":""},{"location":"e2e-tests/#_104","text":"","title":""},{"location":"e2e-tests/#_105","text":"","title":""},{"location":"e2e-tests/#_106","text":"","title":""},{"location":"e2e-tests/#_107","text":"","title":""},{"location":"e2e-tests/#_108","text":"","title":""},{"location":"e2e-tests/#_109","text":"","title":""},{"location":"e2e-tests/#_110","text":"","title":""},{"location":"e2e-tests/#_111","text":"","title":""},{"location":"e2e-tests/#_112","text":"","title":""},{"location":"e2e-tests/#_113","text":"","title":""},{"location":"e2e-tests/#_114","text":"","title":""},{"location":"e2e-tests/#_115","text":"","title":""},{"location":"e2e-tests/#_116","text":"","title":""},{"location":"e2e-tests/#_117","text":"","title":""},{"location":"e2e-tests/#_118","text":"<<<<<<< Updated upstream - should ignore Ingress of namespace without label foo=bar and accept those of namespace with label foo=bar ======= - Stashed 
changes","title":""},{"location":"e2e-tests/#_119","text":"","title":""},{"location":"e2e-tests/#_120","text":"","title":""},{"location":"e2e-tests/#_121","text":"","title":""},{"location":"e2e-tests/#_122","text":"","title":""},{"location":"e2e-tests/#_123","text":"","title":""},{"location":"e2e-tests/#_124","text":"","title":""},{"location":"e2e-tests/#_125","text":"","title":""},{"location":"e2e-tests/#_126","text":"","title":""},{"location":"e2e-tests/#_127","text":"","title":""},{"location":"e2e-tests/#_128","text":"","title":""},{"location":"e2e-tests/#_129","text":"","title":""},{"location":"e2e-tests/#_130","text":"","title":""},{"location":"e2e-tests/#_131","text":"","title":""},{"location":"e2e-tests/#_132","text":"","title":""},{"location":"e2e-tests/#_133","text":"","title":""},{"location":"e2e-tests/#_134","text":"","title":""},{"location":"e2e-tests/#_135","text":"","title":""},{"location":"e2e-tests/#_136","text":"","title":""},{"location":"e2e-tests/#_137","text":"","title":""},{"location":"e2e-tests/#_138","text":"","title":""},{"location":"e2e-tests/#_139","text":"","title":""},{"location":"e2e-tests/#_140","text":"","title":""},{"location":"e2e-tests/#_141","text":"","title":""},{"location":"e2e-tests/#_142","text":"","title":""},{"location":"e2e-tests/#_143","text":"","title":""},{"location":"e2e-tests/#_144","text":"","title":""},{"location":"e2e-tests/#_145","text":"","title":""},{"location":"e2e-tests/#_146","text":"","title":""},{"location":"e2e-tests/#_147","text":"","title":""},{"location":"faq/","text":"FAQ \u00b6 Retaining Client IPAddress \u00b6 Please read Retain Client IPAddress Guide here . Kubernetes v1.22 Migration \u00b6 If you are using Ingress objects in your cluster (running Kubernetes older than v1.22), and you plan to upgrade your Kubernetes version to K8S 1.22 or above, then please read the migration guide here . Validation Of path \u00b6 For improving security and also following desired standards on Kubernetes API spec, the next release, scheduled for v1.8.0, will include a new & optional feature of validating the value for the key ingress.spec.rules.http.paths.path . This behavior will be disabled by default on the 1.8.0 release and enabled by default on the next breaking change release, set for 2.0.0. When \" ingress.spec.rules.http.pathType=Exact \" or \" pathType=Prefix \", this validation will limit the characters accepted on the field \" ingress.spec.rules.http.paths.path \", to \" alphanumeric characters \", and \"/,\" \"_,\" \"-.\" Also, in this case, the path should start with \"/.\" When the ingress resource path contains other characters (like on rewrite configurations), the pathType value should be \" ImplementationSpecific \". API Spec on pathType is documented here When this option is enabled, the validation will happen on the Admission Webhook. So if any new ingress object contains characters other than \" alphanumeric characters \", and \"/,\" \"_,\" \"-.\" , in the path field, but is not using pathType value as ImplementationSpecific , then the ingress object will be denied admission. The cluster admin should establish validation rules using mechanisms like \" Open Policy Agent \", to validate that only authorized users can use ImplementationSpecific pathType and that only the authorized characters can be used. 
The configmap value is here A complete example of an Openpolicyagent gatekeeper rule is available here If you have any issues or concerns, please do one of the following: Open a GitHub issue Comment in our Dev Slack Channel Open a thread in our Google Group ingress-nginx-dev@kubernetes.io","title":"FAQ"},{"location":"faq/#faq","text":"","title":"FAQ"},{"location":"faq/#retaining-client-ipaddress","text":"Please read Retain Client IPAddress Guide here .","title":"Retaining Client IPAddress"},{"location":"faq/#kubernetes-v122-migration","text":"If you are using Ingress objects in your cluster (running Kubernetes older than v1.22), and you plan to upgrade your Kubernetes version to K8S 1.22 or above, then please read the migration guide here .","title":"Kubernetes v1.22 Migration"},{"location":"faq/#validation-of-path","text":"For improving security and also following desired standards on Kubernetes API spec, the next release, scheduled for v1.8.0, will include a new & optional feature of validating the value for the key ingress.spec.rules.http.paths.path . This behavior will be disabled by default on the 1.8.0 release and enabled by default on the next breaking change release, set for 2.0.0. When \" ingress.spec.rules.http.pathType=Exact \" or \" pathType=Prefix \", this validation will limit the characters accepted on the field \" ingress.spec.rules.http.paths.path \", to \" alphanumeric characters \", and \"/,\" \"_,\" \"-.\" Also, in this case, the path should start with \"/.\" When the ingress resource path contains other characters (like on rewrite configurations), the pathType value should be \" ImplementationSpecific \". API Spec on pathType is documented here When this option is enabled, the validation will happen on the Admission Webhook. So if any new ingress object contains characters other than \" alphanumeric characters \", and \"/,\" \"_,\" \"-.\" , in the path field, but is not using pathType value as ImplementationSpecific , then the ingress object will be denied admission. The cluster admin should establish validation rules using mechanisms like \" Open Policy Agent \", to validate that only authorized users can use ImplementationSpecific pathType and that only the authorized characters can be used. The configmap value is here A complete example of an Openpolicyagent gatekeeper rule is available here If you have any issues or concerns, please do one of the following: Open a GitHub issue Comment in our Dev Slack Channel Open a thread in our Google Group ingress-nginx-dev@kubernetes.io","title":"Validation Of path"},{"location":"how-it-works/","text":"How it works \u00b6 The objective of this document is to explain how the Ingress-NGINX controller works, in particular how the NGINX model is built and why we need one. NGINX configuration \u00b6 The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app) . We use lua-nginx-module to achieve this. Check below to learn more about how it's done. NGINX model \u00b6 Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. 
To this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps to generate a point in time configuration file that reflects the state of the cluster. To get this object from the cluster, we use Kubernetes Informers , in particular, FilteredSharedInformer . These informers allow reacting to change in using callbacks to individual changes when a new object is added, modified or removed. Unfortunately, there is no way to know if a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of cluster and compare it to the current model. If the new model equals to the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so we then send the new list of Endpoints to a Lua handler running inside Nginx using HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between running and new model is about more than just Endpoints we create a new NGINX configuration based on the new model, replace the current model and trigger a reload. One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions. The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template. Building the NGINX model \u00b6 Building a model is an expensive operation, for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to not lose changes and remove the use of sync.Mutex to force a single execution of the sync loop and additionally it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller and one of the reasons for the work queue . Operations to build the model: Order Ingress rules by CreationTimestamp field, i.e., old rules first. If the same path for the same host is defined in more than one Ingress, the oldest rule wins. If more than one Ingress contains a TLS section for the same host, the oldest rule wins. If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins. Create a list of NGINX Servers (per hostname) Create a list of NGINX Upstreams If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. Annotations are applied to all the paths in the Ingress. Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses. When a reload is required \u00b6 The next list describes the scenarios when a reload is required: New Ingress Resource Created. TLS section is added to existing Ingress. Change in Ingress annotations that impacts more than just upstream configuration. For instance load-balance annotation does not require a reload. A path is added/removed from an Ingress. An Ingress, Service, Secret is removed. Some missing referenced object from the Ingress is available, like a Service or Secret. A Secret is updated. 
Avoiding reloads \u00b6 In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, basically, new changes do not replace worker processes. Avoiding reloads on Endpoints changes \u00b6 On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request Lua code running in balancer_by_lua context detects what endpoints it should choose upstream peer from and applies the configured load balancing algorithm to choose the peer. Then Nginx takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this includes annotation changes that affects only upstream configuration in Nginx as well. In a relatively big cluster with frequently deploying apps this feature saves significant number of Nginx reloads which can otherwise affect response latency, load balancing quality (after every reload Nginx resets the state of load balancing) and so on. Avoiding outage from wrong configuration \u00b6 Because the ingress controller works using the synchronization loop pattern , it is applying the configuration for all matching objects. In case some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, does not reload and hence no more ingresses will be taken into account. To prevent this situation to happen, the Ingress-Nginx Controller optionally exposes a validating admission webhook server to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress objects to the list of ingresses, generates the configuration and calls nginx to ensure the configuration has no syntax errors.","title":"How it works"},{"location":"how-it-works/#how-it-works","text":"The objective of this document is to explain how the Ingress-NGINX controller works, in particular how the NGINX model is built and why we need one.","title":"How it works"},{"location":"how-it-works/#nginx-configuration","text":"The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app) . We use lua-nginx-module to achieve this. Check below to learn more about how it's done.","title":"NGINX configuration"},{"location":"how-it-works/#nginx-model","text":"Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. To this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps to generate a point in time configuration file that reflects the state of the cluster. 
To get this object from the cluster, we use Kubernetes Informers , in particular, FilteredSharedInformer . These informers allow reacting to change in using callbacks to individual changes when a new object is added, modified or removed. Unfortunately, there is no way to know if a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of cluster and compare it to the current model. If the new model equals to the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so we then send the new list of Endpoints to a Lua handler running inside Nginx using HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between running and new model is about more than just Endpoints we create a new NGINX configuration based on the new model, replace the current model and trigger a reload. One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions. The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.","title":"NGINX model"},{"location":"how-it-works/#building-the-nginx-model","text":"Building a model is an expensive operation, for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to not lose changes and remove the use of sync.Mutex to force a single execution of the sync loop and additionally it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller and one of the reasons for the work queue . Operations to build the model: Order Ingress rules by CreationTimestamp field, i.e., old rules first. If the same path for the same host is defined in more than one Ingress, the oldest rule wins. If more than one Ingress contains a TLS section for the same host, the oldest rule wins. If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins. Create a list of NGINX Servers (per hostname) Create a list of NGINX Upstreams If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. Annotations are applied to all the paths in the Ingress. Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.","title":"Building the NGINX model"},{"location":"how-it-works/#when-a-reload-is-required","text":"The next list describes the scenarios when a reload is required: New Ingress Resource Created. TLS section is added to existing Ingress. Change in Ingress annotations that impacts more than just upstream configuration. For instance load-balance annotation does not require a reload. A path is added/removed from an Ingress. An Ingress, Service, Secret is removed. Some missing referenced object from the Ingress is available, like a Service or Secret. A Secret is updated.","title":"When a reload is required"},{"location":"how-it-works/#avoiding-reloads","text":"In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. 
It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, basically, new changes do not replace worker processes.","title":"Avoiding reloads"},{"location":"how-it-works/#avoiding-reloads-on-endpoints-changes","text":"On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request Lua code running in balancer_by_lua context detects what endpoints it should choose upstream peer from and applies the configured load balancing algorithm to choose the peer. Then Nginx takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this includes annotation changes that affects only upstream configuration in Nginx as well. In a relatively big cluster with frequently deploying apps this feature saves significant number of Nginx reloads which can otherwise affect response latency, load balancing quality (after every reload Nginx resets the state of load balancing) and so on.","title":"Avoiding reloads on Endpoints changes"},{"location":"how-it-works/#avoiding-outage-from-wrong-configuration","text":"Because the ingress controller works using the synchronization loop pattern , it is applying the configuration for all matching objects. In case some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, does not reload and hence no more ingresses will be taken into account. To prevent this situation to happen, the Ingress-Nginx Controller optionally exposes a validating admission webhook server to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress objects to the list of ingresses, generates the configuration and calls nginx to ensure the configuration has no syntax errors.","title":"Avoiding outage from wrong configuration"},{"location":"kubectl-plugin/","text":"The ingress-nginx kubectl plugin \u00b6 Installation \u00b6 Install krew , then run kubectl krew install ingress-nginx to install the plugin. Then run kubectl ingress-nginx --help to make sure the plugin is properly installed and to get a list of commands: kubectl ingress-nginx --help A kubectl plugin for inspecting your ingress-nginx deployments Usage: ingress-nginx [command] Available Commands: backends Inspect the dynamic backend information of an ingress-nginx instance certs Output the certificate data stored in an ingress-nginx pod conf Inspect the generated nginx.conf exec Execute a command inside an ingress-nginx pod general Inspect the other dynamic ingress-nginx information help Help about any command info Show information about the ingress-nginx service ingresses Provide a short summary of all of the ingress definitions lint Inspect kubernetes resources for possible issues logs Get the kubernetes logs for an ingress-nginx pod ssh ssh into a running ingress-nginx pod Flags: --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. 
--cache-dir string Default HTTP cache directory (default \"/Users/alexkursell/.kube/http-cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use -h, --help help for ingress-nginx --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use Use \"ingress-nginx [command] --help\" for more information about a command. Common Flags \u00b6 Every subcommand supports the basic kubectl configuration flags like --namespace , --context , --client-key and so on. Subcommands that act on a particular ingress-nginx pod ( backends , certs , conf , exec , general , logs , ssh ), support the --deployment , --pod , and --container flags to select either a pod from a deployment with the given name, or a pod with the given name (and the given container name). The --deployment flag defaults to ingress-nginx-controller , and the --container flag defaults to controller . Subcommands that inspect resources ( ingresses , lint ) support the --all-namespaces flag, which causes them to inspect resources in every namespace. Subcommands \u00b6 Note that backends , general , certs , and conf require ingress-nginx version 0.23.0 or higher. backends \u00b6 Run kubectl ingress-nginx backends to get a JSON array of the backends that an ingress-nginx controller currently knows about: $ kubectl ingress-nginx backends -n ingress-nginx [ { \"name\": \"default-apple-service-5678\", \"service\": { \"metadata\": { \"creationTimestamp\": null }, \"spec\": { \"ports\": [ { \"protocol\": \"TCP\", \"port\": 5678, \"targetPort\": 5678 } ], \"selector\": { \"app\": \"apple\" }, \"clusterIP\": \"10.97.230.121\", \"type\": \"ClusterIP\", \"sessionAffinity\": \"None\" }, \"status\": { \"loadBalancer\": {} } }, \"port\": 0, \"sslPassthrough\": false, \"endpoints\": [ { \"address\": \"10.1.3.86\", \"port\": \"5678\" } ], \"sessionAffinityConfig\": { \"name\": \"\", \"cookieSessionAffinity\": { \"name\": \"\" } }, \"upstreamHashByConfig\": { \"upstream-hash-by-subset-size\": 3 }, \"noServer\": false, \"trafficShapingPolicy\": { \"weight\": 0, \"header\": \"\", \"headerValue\": \"\", \"cookie\": \"\" } }, { \"name\": \"default-echo-service-8080\", ... }, { \"name\": \"upstream-default-backend\", ... } ] Add the --list option to show only the backend names. Add the --backend option to show only the backend with the given name. certs \u00b6 Use kubectl ingress-nginx certs --host to dump the SSL cert/key information for a given host. WARNING: This command will dump sensitive private key information. Don't blindly share the output, and certainly don't log it anywhere. 
$ kubectl ingress-nginx certs -n ingress-nginx --host testaddr.local -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- -----END RSA PRIVATE KEY----- conf \u00b6 Use kubectl ingress-nginx conf to dump the generated nginx.conf file. Add the --host option to view only the server block for that host: kubectl ingress-nginx conf -n ingress-nginx --host testaddr.local server { server_name testaddr.local ; listen 80; set $proxy_upstream_name \"-\"; set $pass_access_scheme $scheme; set $pass_server_port $server_port; set $best_http_host $http_host; set $pass_port $pass_server_port; location / { set $namespace \"\"; set $ingress_name \"\"; set $service_name \"\"; set $service_port \"0\"; set $location_path \"/\"; ... exec \u00b6 kubectl ingress-nginx exec is exactly the same as kubectl exec , with the same command flags. It will automatically choose an ingress-nginx pod to run the command in. $ kubectl ingress-nginx exec -i -n ingress-nginx -- ls /etc/nginx fastcgi_params geoip lua mime.types modsecurity modules nginx.conf opentracing.json opentelemetry.toml owasp-modsecurity-crs template info \u00b6 Shows the internal and external IP/CNAMES for an ingress-nginx service. $ kubectl ingress-nginx info -n ingress-nginx Service cluster IP address: 10.187.253.31 LoadBalancer IP|CNAME: 35.123.123.123 Use the --service flag if your ingress-nginx LoadBalancer service is not named ingress-nginx . ingresses \u00b6 kubectl ingress-nginx ingresses , alternately kubectl ingress-nginx ing , shows a more detailed view of the ingress definitions in a namespace. Compare: $ kubectl get ingresses --all-namespaces NAMESPACE NAME HOSTS ADDRESS PORTS AGE default example-ingress1 testaddr.local,testaddr2.local localhost 80 5d default test-ingress-2 * localhost 80 5d vs. $ kubectl ingress-nginx ingresses --all-namespaces NAMESPACE INGRESS NAME HOST+PATH ADDRESSES TLS SERVICE SERVICE PORT ENDPOINTS default example-ingress1 testaddr.local/etameta localhost NO pear-service 5678 5 default example-ingress1 testaddr2.local/otherpath localhost NO apple-service 5678 1 default example-ingress1 testaddr2.local/otherotherpath localhost NO pear-service 5678 5 default test-ingress-2 * localhost NO echo-service 8080 2 lint \u00b6 kubectl ingress-nginx lint can check a namespace or entire cluster for potential configuration issues. This command is especially useful when upgrading between ingress-nginx versions. $ kubectl ingress-nginx lint --all-namespaces --verbose Checking ingresses... \u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 \u2717 othernamespace/ingress-definition-blah - The rewrite-target annotation value does not reference a capture group Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3174 Checking deployments... \u2717 namespace2/ingress-nginx-controller - Uses removed config flag --sort-backends Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3655 - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808 To show the lints added only for a particular ingress-nginx release, use the --from-version and --to-version flags: $ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0 .24.0 --to-version 0 .24.0 Checking ingresses... 
\u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 Checking deployments... \u2717 namespace2/ingress-nginx-controller - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808 logs \u00b6 kubectl ingress-nginx logs is almost the same as kubectl logs , with fewer flags. It will automatically choose an ingress-nginx pod to read logs from. $ kubectl ingress-nginx logs -n ingress-nginx ------------------------------------------------------------------------------- NGINX Ingress controller Release: dev Build: git-48dc3a867 Repository: git@github.com:kubernetes/ingress-nginx.git ------------------------------------------------------------------------------- W0405 16:53:46.061589 7 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false) nginx version: nginx/1.15.9 W0405 16:53:46.070093 7 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I0405 16:53:46.070499 7 main.go:205] Creating API client for https://10.96.0.1:443 I0405 16:53:46.077784 7 main.go:249] Running in Kubernetes cluster version v1.10 (v1.10.11) - git (clean) commit 637c7e288581ee40ab4ca210618a89a555b6e7e9 - platform linux/amd64 I0405 16:53:46.183359 7 nginx.go:265] Starting NGINX Ingress controller I0405 16:53:46.193913 7 event.go:209] Event(v1.ObjectReference{Kind:\"ConfigMap\", Namespace:\"ingress-nginx\", Name:\"udp-services\", UID:\"82258915-563e-11e9-9c52-025000000001\", APIVersion:\"v1\", ResourceVersion:\"494\", FieldPath:\"\"}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services ... ssh \u00b6 kubectl ingress-nginx ssh is exactly the same as kubectl ingress-nginx exec -it -- /bin/bash . Use it when you want to quickly be dropped into a shell inside a running ingress-nginx container. $ kubectl ingress-nginx ssh -n ingress-nginx www-data@ingress-nginx-controller-7cbf77c976-wx5pn:/etc/nginx$","title":"kubectl plugin"},{"location":"kubectl-plugin/#the-ingress-nginx-kubectl-plugin","text":"","title":"The ingress-nginx kubectl plugin"},{"location":"kubectl-plugin/#installation","text":"Install krew , then run kubectl krew install ingress-nginx to install the plugin. Then run kubectl ingress-nginx --help to make sure the plugin is properly installed and to get a list of commands: kubectl ingress-nginx --help A kubectl plugin for inspecting your ingress-nginx deployments Usage: ingress-nginx [command] Available Commands: backends Inspect the dynamic backend information of an ingress-nginx instance certs Output the certificate data stored in an ingress-nginx pod conf Inspect the generated nginx.conf exec Execute a command inside an ingress-nginx pod general Inspect the other dynamic ingress-nginx information help Help about any command info Show information about the ingress-nginx service ingresses Provide a short summary of all of the ingress definitions lint Inspect kubernetes resources for possible issues logs Get the kubernetes logs for an ingress-nginx pod ssh ssh into a running ingress-nginx pod Flags: --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. 
--cache-dir string Default HTTP cache directory (default \"/Users/alexkursell/.kube/http-cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use -h, --help help for ingress-nginx --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use Use \"ingress-nginx [command] --help\" for more information about a command.","title":"Installation"},{"location":"kubectl-plugin/#common-flags","text":"Every subcommand supports the basic kubectl configuration flags like --namespace , --context , --client-key and so on. Subcommands that act on a particular ingress-nginx pod ( backends , certs , conf , exec , general , logs , ssh ), support the --deployment , --pod , and --container flags to select either a pod from a deployment with the given name, or a pod with the given name (and the given container name). The --deployment flag defaults to ingress-nginx-controller , and the --container flag defaults to controller . Subcommands that inspect resources ( ingresses , lint ) support the --all-namespaces flag, which causes them to inspect resources in every namespace.","title":"Common Flags"},{"location":"kubectl-plugin/#subcommands","text":"Note that backends , general , certs , and conf require ingress-nginx version 0.23.0 or higher.","title":"Subcommands"},{"location":"kubectl-plugin/#backends","text":"Run kubectl ingress-nginx backends to get a JSON array of the backends that an ingress-nginx controller currently knows about: $ kubectl ingress-nginx backends -n ingress-nginx [ { \"name\": \"default-apple-service-5678\", \"service\": { \"metadata\": { \"creationTimestamp\": null }, \"spec\": { \"ports\": [ { \"protocol\": \"TCP\", \"port\": 5678, \"targetPort\": 5678 } ], \"selector\": { \"app\": \"apple\" }, \"clusterIP\": \"10.97.230.121\", \"type\": \"ClusterIP\", \"sessionAffinity\": \"None\" }, \"status\": { \"loadBalancer\": {} } }, \"port\": 0, \"sslPassthrough\": false, \"endpoints\": [ { \"address\": \"10.1.3.86\", \"port\": \"5678\" } ], \"sessionAffinityConfig\": { \"name\": \"\", \"cookieSessionAffinity\": { \"name\": \"\" } }, \"upstreamHashByConfig\": { \"upstream-hash-by-subset-size\": 3 }, \"noServer\": false, \"trafficShapingPolicy\": { \"weight\": 0, \"header\": \"\", \"headerValue\": \"\", \"cookie\": \"\" } }, { \"name\": \"default-echo-service-8080\", ... }, { \"name\": \"upstream-default-backend\", ... } ] Add the --list option to show only the backend names. 
Add the --backend option to show only the backend with the given name.","title":"backends"},{"location":"kubectl-plugin/#certs","text":"Use kubectl ingress-nginx certs --host to dump the SSL cert/key information for a given host. WARNING: This command will dump sensitive private key information. Don't blindly share the output, and certainly don't log it anywhere. $ kubectl ingress-nginx certs -n ingress-nginx --host testaddr.local -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- -----END RSA PRIVATE KEY-----","title":"certs"},{"location":"kubectl-plugin/#conf","text":"Use kubectl ingress-nginx conf to dump the generated nginx.conf file. Add the --host option to view only the server block for that host: kubectl ingress-nginx conf -n ingress-nginx --host testaddr.local server { server_name testaddr.local ; listen 80; set $proxy_upstream_name \"-\"; set $pass_access_scheme $scheme; set $pass_server_port $server_port; set $best_http_host $http_host; set $pass_port $pass_server_port; location / { set $namespace \"\"; set $ingress_name \"\"; set $service_name \"\"; set $service_port \"0\"; set $location_path \"/\"; ...","title":"conf"},{"location":"kubectl-plugin/#exec","text":"kubectl ingress-nginx exec is exactly the same as kubectl exec , with the same command flags. It will automatically choose an ingress-nginx pod to run the command in. $ kubectl ingress-nginx exec -i -n ingress-nginx -- ls /etc/nginx fastcgi_params geoip lua mime.types modsecurity modules nginx.conf opentracing.json opentelemetry.toml owasp-modsecurity-crs template","title":"exec"},{"location":"kubectl-plugin/#info","text":"Shows the internal and external IP/CNAMES for an ingress-nginx service. $ kubectl ingress-nginx info -n ingress-nginx Service cluster IP address: 10.187.253.31 LoadBalancer IP|CNAME: 35.123.123.123 Use the --service flag if your ingress-nginx LoadBalancer service is not named ingress-nginx .","title":"info"},{"location":"kubectl-plugin/#ingresses","text":"kubectl ingress-nginx ingresses , alternately kubectl ingress-nginx ing , shows a more detailed view of the ingress definitions in a namespace. Compare: $ kubectl get ingresses --all-namespaces NAMESPACE NAME HOSTS ADDRESS PORTS AGE default example-ingress1 testaddr.local,testaddr2.local localhost 80 5d default test-ingress-2 * localhost 80 5d vs. $ kubectl ingress-nginx ingresses --all-namespaces NAMESPACE INGRESS NAME HOST+PATH ADDRESSES TLS SERVICE SERVICE PORT ENDPOINTS default example-ingress1 testaddr.local/etameta localhost NO pear-service 5678 5 default example-ingress1 testaddr2.local/otherpath localhost NO apple-service 5678 1 default example-ingress1 testaddr2.local/otherotherpath localhost NO pear-service 5678 5 default test-ingress-2 * localhost NO echo-service 8080 2","title":"ingresses"},{"location":"kubectl-plugin/#lint","text":"kubectl ingress-nginx lint can check a namespace or entire cluster for potential configuration issues. This command is especially useful when upgrading between ingress-nginx versions. $ kubectl ingress-nginx lint --all-namespaces --verbose Checking ingresses... \u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. 
Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 \u2717 othernamespace/ingress-definition-blah - The rewrite-target annotation value does not reference a capture group Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3174 Checking deployments... \u2717 namespace2/ingress-nginx-controller - Uses removed config flag --sort-backends Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3655 - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808 To show the lints added only for a particular ingress-nginx release, use the --from-version and --to-version flags: $ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0 .24.0 --to-version 0 .24.0 Checking ingresses... \u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 Checking deployments... \u2717 namespace2/ingress-nginx-controller - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808","title":"lint"},{"location":"kubectl-plugin/#logs","text":"kubectl ingress-nginx logs is almost the same as kubectl logs , with fewer flags. It will automatically choose an ingress-nginx pod to read logs from. $ kubectl ingress-nginx logs -n ingress-nginx ------------------------------------------------------------------------------- NGINX Ingress controller Release: dev Build: git-48dc3a867 Repository: git@github.com:kubernetes/ingress-nginx.git ------------------------------------------------------------------------------- W0405 16:53:46.061589 7 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false) nginx version: nginx/1.15.9 W0405 16:53:46.070093 7 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I0405 16:53:46.070499 7 main.go:205] Creating API client for https://10.96.0.1:443 I0405 16:53:46.077784 7 main.go:249] Running in Kubernetes cluster version v1.10 (v1.10.11) - git (clean) commit 637c7e288581ee40ab4ca210618a89a555b6e7e9 - platform linux/amd64 I0405 16:53:46.183359 7 nginx.go:265] Starting NGINX Ingress controller I0405 16:53:46.193913 7 event.go:209] Event(v1.ObjectReference{Kind:\"ConfigMap\", Namespace:\"ingress-nginx\", Name:\"udp-services\", UID:\"82258915-563e-11e9-9c52-025000000001\", APIVersion:\"v1\", ResourceVersion:\"494\", FieldPath:\"\"}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services ...","title":"logs"},{"location":"kubectl-plugin/#ssh","text":"kubectl ingress-nginx ssh is exactly the same as kubectl ingress-nginx exec -it -- /bin/bash . Use it when you want to quickly be dropped into a shell inside a running ingress-nginx container. $ kubectl ingress-nginx ssh -n ingress-nginx www-data@ingress-nginx-controller-7cbf77c976-wx5pn:/etc/nginx$","title":"ssh"},{"location":"lua_tests/","text":"Lua Tests \u00b6 Running the Lua Tests \u00b6 To run the Lua tests you can run the following from the root directory: make lua-test This command makes use of docker hence does not need any dependency installations besides docker Where are the Lua Tests? 
\u00b6 Lua Tests can be found in the rootfs/etc/nginx/lua/test directory","title":"Lua Tests"},{"location":"lua_tests/#lua-tests","text":"","title":"Lua Tests"},{"location":"lua_tests/#running-the-lua-tests","text":"To run the Lua tests you can run the following from the root directory: make lua-test This command makes use of docker hence does not need any dependency installations besides docker","title":"Running the Lua Tests"},{"location":"lua_tests/#where-are-the-lua-tests","text":"Lua Tests can be found in the rootfs/etc/nginx/lua/test directory","title":"Where are the Lua Tests?"},{"location":"troubleshooting/","text":"Troubleshooting \u00b6 Ingress-Controller Logs and Events \u00b6 There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information. Check the Ingress Resource Events \u00b6 $ kubectl get ing -n NAME HOSTS ADDRESS PORTS AGE cafe-ingress cafe.com 10.0.2.15 80 25s $ kubectl describe ing -n Name: cafe-ingress Namespace: default Address: 10.0.2.15 Default backend: default-http-backend:80 (172.17.0.5:8080) Rules: Host Path Backends ---- ---- -------- cafe.com /tea tea-svc:80 () /coffee coffee-svc:80 () Annotations: kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"/apis/networking/v1/namespaces/default/ingresses/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}} Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 1m ingress-nginx-controller Ingress default/cafe-ingress Normal UPDATE 58s ingress-nginx-controller Ingress default/cafe-ingress Check the Ingress Controller Logs \u00b6 $ kubectl get pods -n NAME READY STATUS RESTARTS AGE ingress-nginx-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl logs -n ingress-nginx-controller-67956bf89d-fv58j ------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.14.0 Build: git-734361d Repository: https://github.com/kubernetes/ingress-nginx ------------------------------------------------------------------------------- .... Check the Nginx Configuration \u00b6 $ kubectl get pods -n NAME READY STATUS RESTARTS AGE ingress-nginx-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl exec -it -n ingress-nginx-controller-67956bf89d-fv58j -- cat /etc/nginx/nginx.conf daemon off; worker_processes 2; pid /run/nginx.pid; worker_rlimit_nofile 523264; worker_shutdown_timeout 240s; events { multi_accept on; worker_connections 16384; use epoll; } http { .... Check if used Services Exist \u00b6 $ kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default coffee-svc ClusterIP 10.106.154.35 80/TCP 18m default kubernetes ClusterIP 10.96.0.1 443/TCP 30m default tea-svc ClusterIP 10.104.172.12 80/TCP 18m kube-system default-http-backend NodePort 10.108.189.236 80:30001/TCP 30m kube-system kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 30m kube-system kubernetes-dashboard NodePort 10.103.128.17 80:30000/TCP 30m Debug Logging \u00b6 Using the flag --v=XX it is possible to increase the level of logging. 
This is performed by editing the deployment. $ kubectl get deploy -n NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE default-http-backend 1 1 1 1 35m ingress-nginx-controller 1 1 1 1 35m $ kubectl edit deploy -n ingress-nginx-controller # Add --v = X to \"- args\" , where X is an integer --v=2 shows details using diff about the changes in the configuration in nginx --v=3 shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format --v=5 configures NGINX in debug mode Authentication to the Kubernetes API Server \u00b6 A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file. Both authentications must work: +-------------+ service +------------+ | | authentication | | + apiserver +<-------------------+ ingress | | | | controller | +-------------+ +------------+ Service authentication The Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in a couple of ways: Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details. Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the --kubeconfig does not requires the flag --apiserver-host . The format of the file is identical to ~/.kube/config which is used by kubectl to connect to the API server. See 'kubeconfig' section for details. Using the flag --apiserver-host : Using this flag --apiserver-host=http://localhost:8080 it is possible to specify an unsecured API server or reach a remote kubernetes cluster using kubectl proxy . Please do not use this approach in production. In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left hand side. Kubernetes Workstation +---------------------------------------------------+ +------------------+ | | | | | +-----------+ apiserver +------------+ | | +------------+ | | | | proxy | | | | | | | | | apiserver | | ingress | | | | ingress | | | | | | controller | | | | controller | | | | | | | | | | | | | | | | | | | | | | | | | service account/ | | | | | | | | | | kubeconfig | | | | | | | | | +<-------------------+ | | | | | | | | | | | | | | | | | +------+----+ kubeconfig +------+-----+ | | +------+-----+ | | |<--------------------------------------------------------| | | | | | +---------------------------------------------------+ +------------------+ Service Account \u00b6 If using a service account to connect to the API server, the ingress-controller expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server. 
Verify with the following commands: # start a container that contains curl $ kubectl run -it --rm test --image = curlimages/curl --restart = Never -- /bin/sh # check if secret exists / $ ls /var/run/secrets/kubernetes.io/serviceaccount/ ca.crt namespace token / $ # check base connectivity from cluster inside / $ curl -k https://kubernetes.default.svc.cluster.local { \"kind\": \"Status\", \"apiVersion\": \"v1\", \"metadata\": { }, \"status\": \"Failure\", \"message\": \"forbidden: User \\\"system:anonymous\\\" cannot get path \\\"/\\\"\", \"reason\": \"Forbidden\", \"details\": { }, \"code\": 403 }/ $ # connect using tokens }/ $ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H \"Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://kubernetes.default.svc.cluster.local && echo { \"paths\": [ \"/api\", \"/api/v1\", \"/apis\", \"/apis/\", ... TRUNCATED \"/readyz/shutdown\", \"/version\" ] } / $ # when you type ` exit ` or ` ^D ` the test pod will be deleted. If it is not working, there are two possible reasons: The contents of the tokens are invalid. Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret . It will automatically be recreated. You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter. Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers. More information: User Guide: Service Accounts Cluster Administrator Guide: Managing Service Accounts Kube-Config \u00b6 If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment. Using GDB with Nginx \u00b6 Gdb can be used to with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations. Note: The below is based on the nginx documentation . SSH into the worker $ ssh user@workerIP Obtain the Docker Container Running nginx $ docker ps | grep ingress-nginx-controller CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d9e1d243156a registry.k8s.io/ingress-nginx/controller \"/usr/bin/dumb-init \u2026\" 19 minutes ago Up 19 minutes k8s_ingress-nginx-controller_ingress-nginx-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0 Exec into the container $ docker exec -it --user = 0 --privileged d9e1d243156a bash Make sure nginx is running in --with-debug $ nginx -V 2 > & 1 | grep -- '--with-debug' Get list of processes running on container $ ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 20:23 ? 00:00:00 /usr/bin/dumb-init /nginx-ingres root 5 1 0 20:23 ? 00:00:05 /ingress-nginx-controller --defa root 21 5 0 20:23 ? 00:00:00 nginx: master process /usr/sbin/ nobody 106 21 0 20:23 ? 00:00:00 nginx: worker process nobody 107 21 0 20:23 ? 00:00:00 nginx: worker process root 172 0 0 20:43 pts/0 00:00:00 bash Attach gdb to the nginx master process $ gdb -p 21 .... Attaching to process 21 Reading symbols from /usr/sbin/nginx...done. .... 
(gdb) Copy and paste the following: set $cd = ngx_cycle->config_dump set $nelts = $cd.nelts set $elts = (ngx_conf_dump_t*)($cd.elts) while ($nelts-- > 0) set $name = $elts[$nelts]->name.data printf \"Dumping %s to nginx_conf.txt\\n\", $name append memory nginx_conf.txt \\ $ elts [ $nelts ] ->buffer.start $elts [ $nelts ] ->buffer.end end Quit GDB by pressing CTRL+D Open nginx_conf.txt cat nginx_conf.txt Image related issues faced on Nginx 4.2.5 or other versions (Helm chart versions) \u00b6 Incase you face below error while installing Nginx using helm chart (either by helm commands or helm_release terraform provider ) Warning Failed 5m5s (x4 over 6m34s) kubelet Failed to pull image \"registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to resolve reference \"registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to do request: Head \"https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": EOF Then please follow the below steps. During troubleshooting you can also execute the below commands to test the connectivities from you local machines and repositories details a. curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null (\u2388 |myprompt)\u279c ~ curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 (\u2388 |myprompt)\u279c ~ b. curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 (\u2388 |myprompt)\u279c ~ curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 HTTP/2 200 docker-distribution-api-version: registry/2.0 content-type: application/vnd.docker.distribution.manifest.list.v2+json docker-content-digest: sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 content-length: 1384 date: Wed, 28 Sep 2022 16:46:28 GMT server: Docker Registry x-xss-protection: 0 x-frame-options: SAMEORIGIN alt-svc: h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\" (\u2388 |myprompt)\u279c ~ Redirection in the proxy is implemented to ensure the pulling of the images. This is the solution recommended to whitelist the below image repositories : *.appspot.com *.k8s.io *.pkg.dev *.gcr.io More details about the above repos : a. *.k8s.io -> To ensure you can pull any images from registry.k8s.io b. *.gcr.io -> GCP services are used for image hosting. This is part of the domains suggested by GCP to allow and ensure users can pull images from their container registry services. c. *.appspot.com -> This a Google domain. part of the domain used for GCR. 
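As a rough connectivity check (a sketch only; it probes just the two hostnames that appear in the error above, not every domain behind the wildcards), you can run the following from the affected node or from your workstation:

$ for registry in registry.k8s.io eu.gcr.io; do echo "== $registry"; curl -sSI --max-time 10 "https://$registry/v2/" | head -n 1; done

Any HTTP status line in the output means that DNS, TCP and TLS connectivity to that registry works; a timeout or an EOF like the one in the error above points back at the proxy/firewall whitelisting described here.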
Unable to listen on port (80/443) \u00b6 One possible reason for this error is lack of permission to bind to the port. Ports 80, 443, and any other port < 1024 are Linux privileged ports which historically could only be bound by root. The ingress-nginx-controller uses the CAP_NET_BIND_SERVICE linux capability to allow binding these ports as a normal user (www-data / 101). This involves two components: 1. In the image, the /nginx-ingress-controller file has the cap_net_bind_service capability added (e.g. via setcap ) 2. The NET_BIND_SERVICE capability is added to the container in the containerSecurityContext of the deployment. If encountering this on one/some node(s) and not on others, try to purge and pull a fresh copy of the image to the affected node(s), in case there has been corruption of the underlying layers to lose the capability on the executable. Create a test pod \u00b6 The /nginx-ingress-controller process exits/crashes when encountering this error, making it difficult to troubleshoot what is happening inside the container. To get around this, start an equivalent container running \"sleep 3600\", and exec into it for further troubleshooting. For example: apiVersion : v1 kind : Pod metadata : name : ingress-nginx-sleep namespace : default labels : app : nginx spec : containers : - name : nginx image : ##_CONTROLLER_IMAGE_## resources : requests : memory : \"512Mi\" cpu : \"500m\" limits : memory : \"1Gi\" cpu : \"1\" command : [ \"sleep\" ] args : [ \"3600\" ] ports : - containerPort : 80 name : http protocol : TCP - containerPort : 443 name : https protocol : TCP securityContext : allowPrivilegeEscalation : true capabilities : add : - NET_BIND_SERVICE drop : - ALL runAsUser : 101 restartPolicy : Never nodeSelector : kubernetes.io/hostname : ##_NODE_NAME_## tolerations : - key : \"node.kubernetes.io/unschedulable\" operator : \"Exists\" effect : NoSchedule * update the namespace if applicable/desired * replace ##_NODE_NAME_## with the problematic node (or remove nodeSelector section if problem is not confined to one node) * replace ##_CONTROLLER_IMAGE_## with the same image as in use by your ingress-nginx deployment * confirm the securityContext section matches what is in place for ingress-nginx-controller pods in your cluster Apply the YAML and open a shell into the pod. Try to manually run the controller process: $ /nginx-ingress-controller You should get the same error as from the ingress controller pod logs. Confirm the capabilities are properly surfacing into the pod: $ grep CapBnd /proc/1/status CapBnd: 0000000000000400 The above value has only net_bind_service enabled (per security context in YAML which adds that and drops all). If you get a different value, then you can decode it on another linux box (capsh not available in this container) like below, and then figure out why specified capabilities are not propagating into the pod/container. $ capsh --decode = 0000000000000400 0x0000000000000400=cap_net_bind_service Create a test pod as root \u00b6 (Note, this may be restricted by PodSecurityPolicy, PodSecurityAdmission/Standards, OPA Gatekeeper, etc. in which case you will need to do the appropriate workaround for testing, e.g. deploy in a new namespace without the restrictions.) To test further you may want to install additional utilities, etc. Modify the pod yaml by: * changing runAsUser from 101 to 0 * removing the \"drop..ALL\" section from the capabilities. 
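For illustration only, after those two changes the securityContext of the test pod would look roughly like this (the same NET_BIND_SERVICE capability is added as before, but the pod now runs as root and no longer drops the remaining capabilities):

securityContext:
  allowPrivilegeEscalation: true
  capabilities:
    add:
      - NET_BIND_SERVICE
  runAsUser: 0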
Some things to try after shelling into this container: Try running the controller as the www-data (101) user: $ chmod 4755 /nginx-ingress-controller $ /nginx-ingress-controller Examine the errors to see if there is still an issue listening on the port or if it passed that and moved on to other expected errors due to running out of context. Install the libcap package and check capabilities on the file: $ apk add libcap (1/1) Installing libcap (2.50-r0) Executing busybox-1.33.1-r7.trigger OK: 26 MiB in 41 packages $ getcap /nginx-ingress-controller /nginx-ingress-controller cap_net_bind_service=ep (if missing, see above about purging image on the server and re-pulling) Strace the executable to see what system calls are being executed when it fails: $ apk add strace (1/1) Installing strace (5.12-r0) Executing busybox-1.33.1-r7.trigger OK: 28 MiB in 42 packages $ strace /nginx-ingress-controller execve(\"/nginx-ingress-controller\", [\"/nginx-ingress-controller\"], 0x7ffeb9eb3240 /* 131 vars */) = 0 arch_prctl(ARCH_SET_FS, 0x29ea690) = 0 ...","title":"Troubleshooting"},{"location":"troubleshooting/#troubleshooting","text":"","title":"Troubleshooting"},{"location":"troubleshooting/#ingress-controller-logs-and-events","text":"There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information.","title":"Ingress-Controller Logs and Events"},{"location":"troubleshooting/#check-the-ingress-resource-events","text":"$ kubectl get ing -n NAME HOSTS ADDRESS PORTS AGE cafe-ingress cafe.com 10.0.2.15 80 25s $ kubectl describe ing -n Name: cafe-ingress Namespace: default Address: 10.0.2.15 Default backend: default-http-backend:80 (172.17.0.5:8080) Rules: Host Path Backends ---- ---- -------- cafe.com /tea tea-svc:80 () /coffee coffee-svc:80 () Annotations: kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"/apis/networking/v1/namespaces/default/ingresses/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}} Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 1m ingress-nginx-controller Ingress default/cafe-ingress Normal UPDATE 58s ingress-nginx-controller Ingress default/cafe-ingress","title":"Check the Ingress Resource Events"},{"location":"troubleshooting/#check-the-ingress-controller-logs","text":"$ kubectl get pods -n NAME READY STATUS RESTARTS AGE ingress-nginx-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl logs -n ingress-nginx-controller-67956bf89d-fv58j ------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.14.0 Build: git-734361d Repository: https://github.com/kubernetes/ingress-nginx ------------------------------------------------------------------------------- ....","title":"Check the Ingress Controller Logs"},{"location":"troubleshooting/#check-the-nginx-configuration","text":"$ kubectl get pods -n NAME READY STATUS RESTARTS AGE ingress-nginx-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl exec -it -n ingress-nginx-controller-67956bf89d-fv58j -- cat /etc/nginx/nginx.conf daemon off; 
worker_processes 2; pid /run/nginx.pid; worker_rlimit_nofile 523264; worker_shutdown_timeout 240s; events { multi_accept on; worker_connections 16384; use epoll; } http { ....","title":"Check the Nginx Configuration"},{"location":"troubleshooting/#check-if-used-services-exist","text":"$ kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default coffee-svc ClusterIP 10.106.154.35 80/TCP 18m default kubernetes ClusterIP 10.96.0.1 443/TCP 30m default tea-svc ClusterIP 10.104.172.12 80/TCP 18m kube-system default-http-backend NodePort 10.108.189.236 80:30001/TCP 30m kube-system kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 30m kube-system kubernetes-dashboard NodePort 10.103.128.17 80:30000/TCP 30m","title":"Check if used Services Exist"},{"location":"troubleshooting/#debug-logging","text":"Using the flag --v=XX it is possible to increase the level of logging. This is performed by editing the deployment. $ kubectl get deploy -n NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE default-http-backend 1 1 1 1 35m ingress-nginx-controller 1 1 1 1 35m $ kubectl edit deploy -n ingress-nginx-controller # Add --v = X to \"- args\" , where X is an integer --v=2 shows details using diff about the changes in the configuration in nginx --v=3 shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format --v=5 configures NGINX in debug mode","title":"Debug Logging"},{"location":"troubleshooting/#authentication-to-the-kubernetes-api-server","text":"A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file. Both authentications must work: +-------------+ service +------------+ | | authentication | | + apiserver +<-------------------+ ingress | | | | controller | +-------------+ +------------+ Service authentication The Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in a couple of ways: Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details. Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the --kubeconfig does not requires the flag --apiserver-host . The format of the file is identical to ~/.kube/config which is used by kubectl to connect to the API server. See 'kubeconfig' section for details. Using the flag --apiserver-host : Using this flag --apiserver-host=http://localhost:8080 it is possible to specify an unsecured API server or reach a remote kubernetes cluster using kubectl proxy . Please do not use this approach in production. In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left hand side. 
Kubernetes Workstation +---------------------------------------------------+ +------------------+ | | | | | +-----------+ apiserver +------------+ | | +------------+ | | | | proxy | | | | | | | | | apiserver | | ingress | | | | ingress | | | | | | controller | | | | controller | | | | | | | | | | | | | | | | | | | | | | | | | service account/ | | | | | | | | | | kubeconfig | | | | | | | | | +<-------------------+ | | | | | | | | | | | | | | | | | +------+----+ kubeconfig +------+-----+ | | +------+-----+ | | |<--------------------------------------------------------| | | | | | +---------------------------------------------------+ +------------------+","title":"Authentication to the Kubernetes API Server"},{"location":"troubleshooting/#service-account","text":"If using a service account to connect to the API server, the ingress-controller expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server. Verify with the following commands: # start a container that contains curl $ kubectl run -it --rm test --image = curlimages/curl --restart = Never -- /bin/sh # check if secret exists / $ ls /var/run/secrets/kubernetes.io/serviceaccount/ ca.crt namespace token / $ # check base connectivity from cluster inside / $ curl -k https://kubernetes.default.svc.cluster.local { \"kind\": \"Status\", \"apiVersion\": \"v1\", \"metadata\": { }, \"status\": \"Failure\", \"message\": \"forbidden: User \\\"system:anonymous\\\" cannot get path \\\"/\\\"\", \"reason\": \"Forbidden\", \"details\": { }, \"code\": 403 }/ $ # connect using tokens }/ $ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H \"Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://kubernetes.default.svc.cluster.local && echo { \"paths\": [ \"/api\", \"/api/v1\", \"/apis\", \"/apis/\", ... TRUNCATED \"/readyz/shutdown\", \"/version\" ] } / $ # when you type ` exit ` or ` ^D ` the test pod will be deleted. If it is not working, there are two possible reasons: The contents of the tokens are invalid. Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret . It will automatically be recreated. You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter. Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers. More information: User Guide: Service Accounts Cluster Administrator Guide: Managing Service Accounts","title":"Service Account"},{"location":"troubleshooting/#kube-config","text":"If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment.","title":"Kube-Config"},{"location":"troubleshooting/#using-gdb-with-nginx","text":"Gdb can be used to with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations. Note: The below is based on the nginx documentation . 
SSH into the worker $ ssh user@workerIP Obtain the Docker Container Running nginx $ docker ps | grep ingress-nginx-controller CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d9e1d243156a registry.k8s.io/ingress-nginx/controller \"/usr/bin/dumb-init \u2026\" 19 minutes ago Up 19 minutes k8s_ingress-nginx-controller_ingress-nginx-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0 Exec into the container $ docker exec -it --user = 0 --privileged d9e1d243156a bash Make sure nginx is running in --with-debug $ nginx -V 2 > & 1 | grep -- '--with-debug' Get list of processes running on container $ ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 20:23 ? 00:00:00 /usr/bin/dumb-init /nginx-ingres root 5 1 0 20:23 ? 00:00:05 /ingress-nginx-controller --defa root 21 5 0 20:23 ? 00:00:00 nginx: master process /usr/sbin/ nobody 106 21 0 20:23 ? 00:00:00 nginx: worker process nobody 107 21 0 20:23 ? 00:00:00 nginx: worker process root 172 0 0 20:43 pts/0 00:00:00 bash Attach gdb to the nginx master process $ gdb -p 21 .... Attaching to process 21 Reading symbols from /usr/sbin/nginx...done. .... (gdb) Copy and paste the following: set $cd = ngx_cycle->config_dump set $nelts = $cd.nelts set $elts = (ngx_conf_dump_t*)($cd.elts) while ($nelts-- > 0) set $name = $elts[$nelts]->name.data printf \"Dumping %s to nginx_conf.txt\\n\", $name append memory nginx_conf.txt \\ $ elts [ $nelts ] ->buffer.start $elts [ $nelts ] ->buffer.end end Quit GDB by pressing CTRL+D Open nginx_conf.txt cat nginx_conf.txt","title":"Using GDB with Nginx"},{"location":"troubleshooting/#image-related-issues-faced-on-nginx-425-or-other-versions-helm-chart-versions","text":"Incase you face below error while installing Nginx using helm chart (either by helm commands or helm_release terraform provider ) Warning Failed 5m5s (x4 over 6m34s) kubelet Failed to pull image \"registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to resolve reference \"registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to do request: Head \"https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": EOF Then please follow the below steps. During troubleshooting you can also execute the below commands to test the connectivities from you local machines and repositories details a. curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null (\u2388 |myprompt)\u279c ~ curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 (\u2388 |myprompt)\u279c ~ b. 
curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 (\u2388 |myprompt)\u279c ~ curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 HTTP/2 200 docker-distribution-api-version: registry/2.0 content-type: application/vnd.docker.distribution.manifest.list.v2+json docker-content-digest: sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 content-length: 1384 date: Wed, 28 Sep 2022 16:46:28 GMT server: Docker Registry x-xss-protection: 0 x-frame-options: SAMEORIGIN alt-svc: h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\" (\u2388 |myprompt)\u279c ~ Redirection in the proxy is implemented to ensure the pulling of the images. This is the solution recommended to whitelist the below image repositories : *.appspot.com *.k8s.io *.pkg.dev *.gcr.io More details about the above repos : a. *.k8s.io -> To ensure you can pull any images from registry.k8s.io b. *.gcr.io -> GCP services are used for image hosting. This is part of the domains suggested by GCP to allow and ensure users can pull images from their container registry services. c. *.appspot.com -> This a Google domain. part of the domain used for GCR.","title":"Image related issues faced on Nginx 4.2.5 or other versions (Helm chart versions)"},{"location":"troubleshooting/#unable-to-listen-on-port-80443","text":"One possible reason for this error is lack of permission to bind to the port. Ports 80, 443, and any other port < 1024 are Linux privileged ports which historically could only be bound by root. The ingress-nginx-controller uses the CAP_NET_BIND_SERVICE linux capability to allow binding these ports as a normal user (www-data / 101). This involves two components: 1. In the image, the /nginx-ingress-controller file has the cap_net_bind_service capability added (e.g. via setcap ) 2. The NET_BIND_SERVICE capability is added to the container in the containerSecurityContext of the deployment. If encountering this on one/some node(s) and not on others, try to purge and pull a fresh copy of the image to the affected node(s), in case there has been corruption of the underlying layers to lose the capability on the executable.","title":"Unable to listen on port (80/443)"},{"location":"troubleshooting/#create-a-test-pod","text":"The /nginx-ingress-controller process exits/crashes when encountering this error, making it difficult to troubleshoot what is happening inside the container. To get around this, start an equivalent container running \"sleep 3600\", and exec into it for further troubleshooting. 
For example: apiVersion : v1 kind : Pod metadata : name : ingress-nginx-sleep namespace : default labels : app : nginx spec : containers : - name : nginx image : ##_CONTROLLER_IMAGE_## resources : requests : memory : \"512Mi\" cpu : \"500m\" limits : memory : \"1Gi\" cpu : \"1\" command : [ \"sleep\" ] args : [ \"3600\" ] ports : - containerPort : 80 name : http protocol : TCP - containerPort : 443 name : https protocol : TCP securityContext : allowPrivilegeEscalation : true capabilities : add : - NET_BIND_SERVICE drop : - ALL runAsUser : 101 restartPolicy : Never nodeSelector : kubernetes.io/hostname : ##_NODE_NAME_## tolerations : - key : \"node.kubernetes.io/unschedulable\" operator : \"Exists\" effect : NoSchedule * update the namespace if applicable/desired * replace ##_NODE_NAME_## with the problematic node (or remove nodeSelector section if problem is not confined to one node) * replace ##_CONTROLLER_IMAGE_## with the same image as in use by your ingress-nginx deployment * confirm the securityContext section matches what is in place for ingress-nginx-controller pods in your cluster Apply the YAML and open a shell into the pod. Try to manually run the controller process: $ /nginx-ingress-controller You should get the same error as from the ingress controller pod logs. Confirm the capabilities are properly surfacing into the pod: $ grep CapBnd /proc/1/status CapBnd: 0000000000000400 The above value has only net_bind_service enabled (per security context in YAML which adds that and drops all). If you get a different value, then you can decode it on another linux box (capsh not available in this container) like below, and then figure out why specified capabilities are not propagating into the pod/container. $ capsh --decode = 0000000000000400 0x0000000000000400=cap_net_bind_service","title":"Create a test pod"},{"location":"troubleshooting/#create-a-test-pod-as-root","text":"(Note, this may be restricted by PodSecurityPolicy, PodSecurityAdmission/Standards, OPA Gatekeeper, etc. in which case you will need to do the appropriate workaround for testing, e.g. deploy in a new namespace without the restrictions.) To test further you may want to install additional utilities, etc. Modify the pod yaml by: * changing runAsUser from 101 to 0 * removing the \"drop..ALL\" section from the capabilities. Some things to try after shelling into this container: Try running the controller as the www-data (101) user: $ chmod 4755 /nginx-ingress-controller $ /nginx-ingress-controller Examine the errors to see if there is still an issue listening on the port or if it passed that and moved on to other expected errors due to running out of context. 
Install the libcap package and check capabilities on the file: $ apk add libcap (1/1) Installing libcap (2.50-r0) Executing busybox-1.33.1-r7.trigger OK: 26 MiB in 41 packages $ getcap /nginx-ingress-controller /nginx-ingress-controller cap_net_bind_service=ep (if missing, see above about purging image on the server and re-pulling) Strace the executable to see what system calls are being executed when it fails: $ apk add strace (1/1) Installing strace (5.12-r0) Executing busybox-1.33.1-r7.trigger OK: 28 MiB in 42 packages $ strace /nginx-ingress-controller execve(\"/nginx-ingress-controller\", [\"/nginx-ingress-controller\"], 0x7ffeb9eb3240 /* 131 vars */) = 0 arch_prctl(ARCH_SET_FS, 0x29ea690) = 0 ...","title":"Create a test pod as root"},{"location":"deploy/","text":"Installation Guide \u00b6 There are multiple ways to install the Ingress-Nginx Controller: with Helm , using the project repository chart; with kubectl apply , using YAML manifests; with specific addons (e.g. for minikube or MicroK8s ). On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to get started as fast as possible, you can check the quick start instructions. However, in many environments, you can improve the performance or get better logs by enabling extra features. We recommend that you check the environment-specific instructions for details about optimizing the ingress controller for your particular environment or cloud provider. Contents \u00b6 Quick start Environment-specific instructions ... Docker Desktop ... Rancher Desktop ... minikube ... MicroK8s ... AWS ... GCE - GKE ... Azure ... Digital Ocean ... Scaleway ... Exoscale ... Oracle Cloud Infrastructure ... OVHcloud ... Bare-metal Miscellaneous Quick start \u00b6 If you have Helm, you can deploy the ingress controller with the following command: helm upgrade --install ingress-nginx ingress-nginx \\ --repo https://kubernetes.github.io/ingress-nginx \\ --namespace ingress-nginx --create-namespace It will install the controller in the ingress-nginx namespace, creating that namespace if it doesn't already exist. Info This command is idempotent : if the ingress controller is not installed, it will install it, if the ingress controller is already installed, it will upgrade it. If you want a full list of values that you can set, while installing with Helm, then run: helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx If you don't have Helm or if you prefer to use a YAML manifest, you can run the following command instead: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml Info The YAML manifest in the command above was generated with helm template , so you will end up with almost the same resources as if you had used Helm to install the controller. Attention If you are running an old version of Kubernetes (1.18 or earlier), please read this paragraph for specific instructions. Because of api deprecations, the default manifest may not work on your cluster. Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider. Firewall configuration \u00b6 To check which ports are used by your installation of ingress-nginx, look at the output of kubectl -n ingress-nginx get pod -o yaml . In general, you need: - Port 8443 open between all hosts on which the kubernetes nodes are running. This is used for the ingress-nginx admission controller . 
- Port 80 (for HTTP) and/or 443 (for HTTPS) open to the public on the kubernetes nodes to which the DNS of your apps are pointing. Pre-flight check \u00b6 A few pods should start in the ingress-nginx namespace: kubectl get pods --namespace=ingress-nginx After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready: kubectl wait --namespace ingress-nginx \\ --for=condition=ready pod \\ --selector=app.kubernetes.io/component=controller \\ --timeout=120s Local testing \u00b6 Let's create a simple web server and the associated service: kubectl create deployment demo --image=httpd --port=80 kubectl expose deployment demo Then create an ingress resource. The following example uses a host that maps to localhost : kubectl create ingress demo-localhost --class=nginx \\ --rule=\"demo.localdev.me/*=demo:80\" Now, forward a local port to the ingress controller: kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80 Info A note on DNS & network-connection. This documentation assumes that a user has awareness of the DNS and the network routing aspects involved in using ingress. The port-forwarding mentioned above, is the easiest way to demo the working of ingress. The \"kubectl port-forward...\" command above has forwarded the port number 8080, on the localhost's tcp/ip stack, where the command was typed, to the port number 80, of the service created by the installation of ingress-nginx controller. So now, the traffic sent to port number 8080 on localhost will reach the port number 80, of the ingress-controller's service. Port-forwarding is not for a production environment use-case. But here we use port-forwarding, to simulate a HTTP request, originating from outside the cluster, to reach the service of the ingress-nginx controller, that is exposed to receive traffic from outside the cluster. This issue shows a typical DNS problem and its solution. At this point, you can access your deployment using curl ; curl --resolve demo.localdev.me:8080:127.0.0.1 http://demo.localdev.me:8080 You should see a HTML response containing text like \"It works!\" . Online testing \u00b6 If your Kubernetes cluster is a \"real\" cluster that supports services of type LoadBalancer , it will have allocated an external IP address or FQDN to the ingress controller. You can see that IP address or FQDN with the following command: kubectl get service ingress-nginx-controller --namespace=ingress-nginx It will be the EXTERNAL-IP field. If that field shows , this means that your Kubernetes cluster wasn't able to provision the load balancer (generally, this is because it doesn't support services of type LoadBalancer ). Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress resource. The following example assumes that you have set up a DNS record for www.demo.io : kubectl create ingress demo --class=nginx \\ --rule=\"www.demo.io/*=demo:80\" Alternatively, the above command can be rewritten as follows for the --rule command and below. kubectl create ingress demo --class=nginx \\ --rule www.demo.io/=demo:80 You should then be able to see the \"It works!\" page when you connect to http://www.demo.io/. Congratulations, you are serving a public website hosted on a Kubernetes cluster! 
\ud83c\udf89 Environment-specific instructions \u00b6 Local development clusters \u00b6 minikube \u00b6 The ingress controller can be installed through minikube's addons system: minikube addons enable ingress MicroK8s \u00b6 The ingress controller can be installed through MicroK8s's addons system: microk8s enable ingress Please check the MicroK8s documentation page for details. Docker Desktop \u00b6 Kubernetes is available in Docker Desktop: Mac, from version 18.06.0-ce Windows, from version 18.06.0-ce First, make sure that Kubernetes is enabled in the Docker settings. The command kubectl get nodes should show a single node called docker-desktop . The ingress controller can be installed on Docker Desktop using the default quick start instructions. On most systems, if you don't have any other service of type LoadBalancer bound to port 80, the ingress controller will be assigned the EXTERNAL-IP of localhost , which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the kubectl port-forward method described in the local testing section . Rancher Desktop \u00b6 Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop. Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use Ingress-Nginx Controller in place of the default Traefik, disable Traefik from Preference > Kubernetes menu. Once traefik is disabled, the Ingress-Nginx Controller can be installed on Rancher Desktop using the default quick start instructions. Follow the instructions described in the local testing section to try a sample. Cloud deployments \u00b6 If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the externalTrafficPolicy of the ingress controller Service to Local (instead of the default Cluster ) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local to the helm install or helm upgrade command. Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. --set controller.config.use-proxy-protocol=true ) and in the cloud provider's load balancer configuration to function correctly. In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers. AWS \u00b6 In AWS, we use a Network load balancer (NLB) to expose the Ingress-Nginx Controller behind a Service of Type=LoadBalancer . Info The provided templates illustrate the setup for legacy in-tree service load balancer for AWS NLB. AWS provides the documentation on how to use Network load balancing on Amazon EKS with AWS Load Balancer Controller . Network Load Balancer (NLB) \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/aws/deploy.yaml TLS termination in AWS Load Balancer (NLB) \u00b6 By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB. 
Download the deploy.yaml template wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml Edit the file and change the VPC CIDR in use for the Kubernetes cluster: proxy-real-ip-cidr: XXX.XXX.XXX/XX Change the AWS Certificate Manager (ACM) ID as well: arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX Deploy the manifest: kubectl apply -f deploy.yaml NLB Idle Timeouts \u00b6 Idle timeout value for TCP flows is 350 seconds and cannot be modified . For this reason, you need to ensure the keepalive_timeout value is configured less than 350 seconds to work as expected. By default, NGINX keepalive_timeout is set to 75s . More information with regard to timeouts can be found in the official AWS documentation GCE-GKE \u00b6 First, your user needs to have cluster-admin permissions on the cluster. This can be done with the following command: kubectl create clusterrolebinding cluster-admin-binding \\ --clusterrole cluster-admin \\ --user $(gcloud config get-value account) Then, the ingress controller can be installed like this: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml Warning For private clusters, you will need to either add a firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to port 80/tcp , 443/tcp and 10254/tcp to also allow access to port 8443/tcp . More information can be found in the Official GCP Documentation . See the GKE documentation on adding rules and the Kubernetes issue for more detail. Proxy-protocol is supported in GCE check the Official Documentations on how to enable. Azure \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml More information with regard to Azure annotations for ingress controller can be found in the official AKS documentation . Digital Ocean \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/do/deploy.yaml - By default the service object of the ingress-nginx-controller for Digital-Ocean, only configures one annotation. Its this one service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: \"true\" . While this makes the service functional, it was reported that the Digital-Ocean LoadBalancer graphs shows no data , unless a few other annotations are also configured. Some of these other annotations require values that can not be generic and hence not forced in a out-of-the-box installation. These annotations and a discussion on them is well documented in this issue . Please refer to the issue to add annotations, with values specific to user, to get graphs of the DO-LB populated with data. Scaleway \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/scw/deploy.yaml Exoscale \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager documentation . 
Oracle Cloud Infrastructure \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation. OVHcloud \u00b6 helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm repo update helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --create-namespace You can find the complete tutorial . Bare metal clusters \u00b6 This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as \"raw\" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu...) For quick testing, you can use a NodePort . This should work on almost every cluster, but it will typically use a port in the range 30000-32767. kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/baremetal/deploy.yaml For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see bare-metal considerations . Miscellaneous \u00b6 Checking ingress controller version \u00b6 Run /nginx-ingress-controller --version within the pod, for instance with kubectl exec : POD_NAMESPACE=ingress-nginx POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name) kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version Scope \u00b6 By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior, use the flag --watch-namespace or check the Helm chart value controller.scope to limit the controller to a single namespace. See also \u201cHow to easily install multiple instances of the Ingress NGINX controller in the same cluster\u201d for more details. Webhook network access \u00b6 Warning The controller uses an admission webhook to validate Ingress definitions. Make sure that you don't have Network policies or additional firewalls preventing connections from the API server to the ingress-nginx-controller-admission service. Certificate generation \u00b6 Attention The first time the ingress controller starts, two Jobs create the SSL Certificate used by the admission webhook. This can cause an initial delay of up to two minutes until it is possible to create and validate Ingress definitions. You can wait until it is ready to run the next command: kubectl wait --namespace ingress-nginx \\ --for=condition=ready pod \\ --selector=app.kubernetes.io/component=controller \\ --timeout=120s Running on Kubernetes versions older than 1.19 \u00b6 Ingress resources evolved over time. They started with apiVersion: extensions/v1beta1 , then moved to apiVersion: networking.k8s.io/v1beta1 and more recently to apiVersion: networking.k8s.io/v1 . 
Here is how these Ingress versions are supported in Kubernetes: - before Kubernetes 1.19, only v1beta1 Ingress resources are supported - from Kubernetes 1.19 to 1.21, both v1beta1 and v1 Ingress resources are supported - in Kubernetes 1.22 and above, only v1 Ingress resources are supported And here is how these Ingress versions are supported in Ingress-Nginx Controller: - before version 1.0, only v1beta1 Ingress resources are supported - in version 1.0 and above, only v1 Ingress resources are As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X of the Ingress-Nginx Controller (e.g. version 0.49). The Helm chart of the Ingress-Nginx Controller switched to version 1 in version 4 of the chart. In other words, if you're running Kubernetes 1.19 or earlier, you should use version 3.X of the chart (this can be done by adding --version='<4' to the helm install command ).","title":"Installation Guide"},{"location":"deploy/#installation-guide","text":"There are multiple ways to install the Ingress-Nginx Controller: with Helm , using the project repository chart; with kubectl apply , using YAML manifests; with specific addons (e.g. for minikube or MicroK8s ). On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to get started as fast as possible, you can check the quick start instructions. However, in many environments, you can improve the performance or get better logs by enabling extra features. We recommend that you check the environment-specific instructions for details about optimizing the ingress controller for your particular environment or cloud provider.","title":"Installation Guide"},{"location":"deploy/#contents","text":"Quick start Environment-specific instructions ... Docker Desktop ... Rancher Desktop ... minikube ... MicroK8s ... AWS ... GCE - GKE ... Azure ... Digital Ocean ... Scaleway ... Exoscale ... Oracle Cloud Infrastructure ... OVHcloud ... Bare-metal Miscellaneous","title":"Contents"},{"location":"deploy/#quick-start","text":"If you have Helm, you can deploy the ingress controller with the following command: helm upgrade --install ingress-nginx ingress-nginx \\ --repo https://kubernetes.github.io/ingress-nginx \\ --namespace ingress-nginx --create-namespace It will install the controller in the ingress-nginx namespace, creating that namespace if it doesn't already exist. Info This command is idempotent : if the ingress controller is not installed, it will install it, if the ingress controller is already installed, it will upgrade it. If you want a full list of values that you can set, while installing with Helm, then run: helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx If you don't have Helm or if you prefer to use a YAML manifest, you can run the following command instead: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml Info The YAML manifest in the command above was generated with helm template , so you will end up with almost the same resources as if you had used Helm to install the controller. Attention If you are running an old version of Kubernetes (1.18 or earlier), please read this paragraph for specific instructions. Because of api deprecations, the default manifest may not work on your cluster. 
Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider.","title":"Quick start"},{"location":"deploy/#firewall-configuration","text":"To check which ports are used by your installation of ingress-nginx, look at the output of kubectl -n ingress-nginx get pod -o yaml . In general, you need: - Port 8443 open between all hosts on which the kubernetes nodes are running. This is used for the ingress-nginx admission controller . - Port 80 (for HTTP) and/or 443 (for HTTPS) open to the public on the kubernetes nodes to which the DNS of your apps are pointing.","title":"Firewall configuration"},{"location":"deploy/#pre-flight-check","text":"A few pods should start in the ingress-nginx namespace: kubectl get pods --namespace=ingress-nginx After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready: kubectl wait --namespace ingress-nginx \\ --for=condition=ready pod \\ --selector=app.kubernetes.io/component=controller \\ --timeout=120s","title":"Pre-flight check"},{"location":"deploy/#local-testing","text":"Let's create a simple web server and the associated service: kubectl create deployment demo --image=httpd --port=80 kubectl expose deployment demo Then create an ingress resource. The following example uses a host that maps to localhost : kubectl create ingress demo-localhost --class=nginx \\ --rule=\"demo.localdev.me/*=demo:80\" Now, forward a local port to the ingress controller: kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80 Info A note on DNS & network-connection. This documentation assumes that a user has awareness of the DNS and the network routing aspects involved in using ingress. The port-forwarding mentioned above, is the easiest way to demo the working of ingress. The \"kubectl port-forward...\" command above has forwarded the port number 8080, on the localhost's tcp/ip stack, where the command was typed, to the port number 80, of the service created by the installation of ingress-nginx controller. So now, the traffic sent to port number 8080 on localhost will reach the port number 80, of the ingress-controller's service. Port-forwarding is not for a production environment use-case. But here we use port-forwarding, to simulate a HTTP request, originating from outside the cluster, to reach the service of the ingress-nginx controller, that is exposed to receive traffic from outside the cluster. This issue shows a typical DNS problem and its solution. At this point, you can access your deployment using curl ; curl --resolve demo.localdev.me:8080:127.0.0.1 http://demo.localdev.me:8080 You should see a HTML response containing text like \"It works!\" .","title":"Local testing"},{"location":"deploy/#online-testing","text":"If your Kubernetes cluster is a \"real\" cluster that supports services of type LoadBalancer , it will have allocated an external IP address or FQDN to the ingress controller. You can see that IP address or FQDN with the following command: kubectl get service ingress-nginx-controller --namespace=ingress-nginx It will be the EXTERNAL-IP field. If that field shows , this means that your Kubernetes cluster wasn't able to provision the load balancer (generally, this is because it doesn't support services of type LoadBalancer ). Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress resource. 
The following example assumes that you have set up a DNS record for www.demo.io : kubectl create ingress demo --class=nginx \\ --rule=\"www.demo.io/*=demo:80\" Alternatively, the above command can be rewritten as follows for the --rule command and below. kubectl create ingress demo --class=nginx \\ --rule www.demo.io/=demo:80 You should then be able to see the \"It works!\" page when you connect to http://www.demo.io/. Congratulations, you are serving a public website hosted on a Kubernetes cluster! \ud83c\udf89","title":"Online testing"},{"location":"deploy/#environment-specific-instructions","text":"","title":"Environment-specific instructions"},{"location":"deploy/#local-development-clusters","text":"","title":"Local development clusters"},{"location":"deploy/#minikube","text":"The ingress controller can be installed through minikube's addons system: minikube addons enable ingress","title":"minikube"},{"location":"deploy/#microk8s","text":"The ingress controller can be installed through MicroK8s's addons system: microk8s enable ingress Please check the MicroK8s documentation page for details.","title":"MicroK8s"},{"location":"deploy/#docker-desktop","text":"Kubernetes is available in Docker Desktop: Mac, from version 18.06.0-ce Windows, from version 18.06.0-ce First, make sure that Kubernetes is enabled in the Docker settings. The command kubectl get nodes should show a single node called docker-desktop . The ingress controller can be installed on Docker Desktop using the default quick start instructions. On most systems, if you don't have any other service of type LoadBalancer bound to port 80, the ingress controller will be assigned the EXTERNAL-IP of localhost , which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the kubectl port-forward method described in the local testing section .","title":"Docker Desktop"},{"location":"deploy/#rancher-desktop","text":"Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop. Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use Ingress-Nginx Controller in place of the default Traefik, disable Traefik from Preference > Kubernetes menu. Once traefik is disabled, the Ingress-Nginx Controller can be installed on Rancher Desktop using the default quick start instructions. Follow the instructions described in the local testing section to try a sample.","title":"Rancher Desktop"},{"location":"deploy/#cloud-deployments","text":"If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the externalTrafficPolicy of the ingress controller Service to Local (instead of the default Cluster ) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local to the helm install or helm upgrade command. Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. --set controller.config.use-proxy-protocol=true ) and in the cloud provider's load balancer configuration to function correctly. 
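Putting the two options together, a Helm-based installation could be adjusted as follows (a sketch only, reusing the quick-start command from this guide; whether these settings are appropriate depends on your load balancer):

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.externalTrafficPolicy=Local \
  --set controller.config.use-proxy-protocol=true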
In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.","title":"Cloud deployments"},{"location":"deploy/#aws","text":"In AWS, we use a Network load balancer (NLB) to expose the Ingress-Nginx Controller behind a Service of Type=LoadBalancer . Info The provided templates illustrate the setup for legacy in-tree service load balancer for AWS NLB. AWS provides the documentation on how to use Network load balancing on Amazon EKS with AWS Load Balancer Controller .","title":"AWS"},{"location":"deploy/#network-load-balancer-nlb","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/aws/deploy.yaml","title":"Network Load Balancer (NLB)"},{"location":"deploy/#tls-termination-in-aws-load-balancer-nlb","text":"By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB. Download the deploy.yaml template wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml Edit the file and change the VPC CIDR in use for the Kubernetes cluster: proxy-real-ip-cidr: XXX.XXX.XXX/XX Change the AWS Certificate Manager (ACM) ID as well: arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX Deploy the manifest: kubectl apply -f deploy.yaml","title":"TLS termination in AWS Load Balancer (NLB)"},{"location":"deploy/#nlb-idle-timeouts","text":"Idle timeout value for TCP flows is 350 seconds and cannot be modified . For this reason, you need to ensure the keepalive_timeout value is configured less than 350 seconds to work as expected. By default, NGINX keepalive_timeout is set to 75s . More information with regard to timeouts can be found in the official AWS documentation","title":"NLB Idle Timeouts"},{"location":"deploy/#gce-gke","text":"First, your user needs to have cluster-admin permissions on the cluster. This can be done with the following command: kubectl create clusterrolebinding cluster-admin-binding \\ --clusterrole cluster-admin \\ --user $(gcloud config get-value account) Then, the ingress controller can be installed like this: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml Warning For private clusters, you will need to either add a firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to port 80/tcp , 443/tcp and 10254/tcp to also allow access to port 8443/tcp . More information can be found in the Official GCP Documentation . See the GKE documentation on adding rules and the Kubernetes issue for more detail. 
Proxy protocol is supported in GCE; check the Official Documentation on how to enable it.\",\"title\":\"GCE-GKE\"},{\"location\":\"deploy/#azure\",\"text\":\"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml More information about Azure annotations for the ingress controller can be found in the official AKS documentation .\",\"title\":\"Azure\"},{\"location\":\"deploy/#digital-ocean\",\"text\":\"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/do/deploy.yaml By default, the Service object of the ingress-nginx-controller for DigitalOcean only configures one annotation: service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: \"true\" . While this makes the Service functional, it was reported that the DigitalOcean LoadBalancer graphs show no data unless a few other annotations are also configured. Some of these other annotations require values that cannot be generic and are therefore not forced in an out-of-the-box installation. These annotations, and a discussion of them, are well documented in this issue . Please refer to the issue to add annotations, with values specific to your setup, to get the graphs of the DO-LB populated with data.\",\"title\":\"Digital Ocean\"},{\"location\":\"deploy/#scaleway\",\"text\":\"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/scw/deploy.yaml\",\"title\":\"Scaleway\"},{\"location\":\"deploy/#exoscale\",\"text\":\"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager documentation .\",\"title\":\"Exoscale\"},{\"location\":\"deploy/#oracle-cloud-infrastructure\",\"text\":\"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation.\",\"title\":\"Oracle Cloud Infrastructure\"},{\"location\":\"deploy/#ovhcloud\",\"text\":\"helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm repo update helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --create-namespace You can find the complete tutorial .\",\"title\":\"OVHcloud\"},{\"location\":\"deploy/#bare-metal-clusters\",\"text\":\"This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as \"raw\" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu...). For quick testing, you can use a NodePort . This should work on almost every cluster, but it will typically use a port in the range 30000-32767.
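Once the manifest below has been applied, a quick way to see which ports were actually allocated (assuming the default namespace and Service name it creates) is: kubectl -n ingress-nginx get svc ingress-nginx-controller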
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/baremetal/deploy.yaml For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see bare-metal considerations .\",\"title\":\"Bare metal clusters\"},{\"location\":\"deploy/#miscellaneous\",\"text\":\"\",\"title\":\"Miscellaneous\"},{\"location\":\"deploy/#checking-ingress-controller-version\",\"text\":\"Run /nginx-ingress-controller --version within the pod, for instance with kubectl exec : POD_NAMESPACE=ingress-nginx POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name) kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version\",\"title\":\"Checking ingress controller version\"},{\"location\":\"deploy/#scope\",\"text\":\"By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior, use the flag --watch-namespace or check the Helm chart value controller.scope to limit the controller to a single namespace. See also \u201cHow to easily install multiple instances of the Ingress NGINX controller in the same cluster\u201d for more details.\",\"title\":\"Scope\"},{\"location\":\"deploy/#webhook-network-access\",\"text\":\"Warning The controller uses an admission webhook to validate Ingress definitions. Make sure that you don't have network policies or additional firewalls preventing connections from the API server to the ingress-nginx-controller-admission service.\",\"title\":\"Webhook network access\"},{\"location\":\"deploy/#certificate-generation\",\"text\":\"Attention The first time the ingress controller starts, two Jobs create the SSL Certificate used by the admission webhook. This can cause an initial delay of up to two minutes until it is possible to create and validate Ingress definitions. You can run the following command to wait until it is ready: kubectl wait --namespace ingress-nginx \\ --for=condition=ready pod \\ --selector=app.kubernetes.io/component=controller \\ --timeout=120s\",\"title\":\"Certificate generation\"},{\"location\":\"deploy/#running-on-kubernetes-versions-older-than-119\",\"text\":\"Ingress resources evolved over time. They started with apiVersion: extensions/v1beta1 , then moved to apiVersion: networking.k8s.io/v1beta1 and more recently to apiVersion: networking.k8s.io/v1 . Here is how these Ingress versions are supported in Kubernetes: - before Kubernetes 1.19, only v1beta1 Ingress resources are supported - from Kubernetes 1.19 to 1.21, both v1beta1 and v1 Ingress resources are supported - in Kubernetes 1.22 and above, only v1 Ingress resources are supported And here is how these Ingress versions are supported in the Ingress-Nginx Controller: - before version 1.0, only v1beta1 Ingress resources are supported - in version 1.0 and above, only v1 Ingress resources are supported As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X of the Ingress-Nginx Controller (e.g. version 0.49). The Helm chart of the Ingress-Nginx Controller switched to version 1 in version 4 of the chart.
In other words, if you're running Kubernetes 1.18 or earlier, you should use version 3.X of the chart (this can be done by adding --version='<4' to the helm install command ).\",\"title\":\"Running on Kubernetes versions older than 1.19\"},{\"location\":\"deploy/baremetal/\",\"text\":\"Bare-metal considerations \u00b6 In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the Ingress-Nginx Controller to external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers. The rest of this document describes a few recommended approaches to deploying the Ingress-Nginx Controller inside a Kubernetes cluster running on bare-metal. A pure software solution: MetalLB \u00b6 MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the use of LoadBalancer Services within any cluster. This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes . In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details. Note The description of other supported configuration modes is out of scope for this document. Warning MetalLB is currently in beta . Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly. MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions, and that the Ingress-Nginx Controller was installed using the steps described in the quickstart section of the installation guide . MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined through IPAddressPool objects in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example; in most bare-metal environments this value is <none> ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 After creating the following objects, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly.
--- apiVersion : metallb.io/v1beta1 kind : IPAddressPool metadata : name : default namespace : metallb-system spec : addresses : - 203.0.113.10-203.0.113.15 autoAssign : true --- apiVersion : metallb.io/v1beta1 kind : L2Advertisement metadata : name : default namespace : metallb-system spec : ipAddressPools : - default $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx LoadBalancer 10.0.220.217 203.0.113.10 80:30100/TCP,443:30101/TCP As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service: $ curl -D- http://203.0.113.10 -H 'Host: myapp.example.com' HTTP/1.1 200 OK Server: nginx/1.15.2 Tip In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more details in Traffic policies as well as in the next section. Over a NodePort Service \u00b6 Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide . Info A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services . In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests. Example Given the NodePort 30100 allocated to the ingress-nginx Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 a client would reach an Ingress with host: myapp.example.com at http://myapp.example.com:30100 , where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address. Impact on the host system While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require. This practice is therefore discouraged . See the other approaches proposed in this page for alternatives. This approach has a few other limitations one ought to be aware of: Source IP address Services of type NodePort perform source address translation by default. This means the source IP of a HTTP request is always the IP address of the Kubernetes node that received the request from the perspective of NGINX. 
The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local ( example ). Warning This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the Ingress-Nginx Controller should be scheduled or not scheduled. Example In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 with a ingress-nginx-controller Deployment composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3 ingress-nginx-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2 Requests sent to host-2 and host-3 would be forwarded to NGINX and original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node. Ingress status Because NodePort Services do not get a LoadBalancerIP assigned by definition, the Ingress-Nginx Controller does not update the status of Ingress objects it manages . $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Despite the fact there is no load balancer providing a public IP address to the Ingress-Nginx Controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service. Warning There is more to setting externalIPs than just enabling the Ingress-Nginx Controller to update the status of Ingress objects. Please read about this option in the Services page of official Kubernetes documentation as well as the section about External IPs in this document for more information. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 one could edit the ingress-nginx Service and add the following field to the object spec spec : externalIPs : - 203.0.113.1 - 203.0.113.2 - 203.0.113.3 which would in turn be reflected on Ingress objects as follows: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80 Redirects As NGINX is not aware of the port translation operated by the NodePort Service , backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort. Example Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain , are generated without NodePort: $ curl -D- http://myapp.example.com:30100 ` HTTP/1.1 308 Permanent Redirect Server: nginx/1.15.2 Location: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect Via the host network \u00b6 In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. 
The benefit of this approach is that the Ingress-Nginx Controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services. Note This approach does not leverage any Service object to expose the Ingress-Nginx Controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it . This can be achieved by enabling the hostNetwork option in the Pods' spec. template : spec : hostNetwork : true Security considerations Enabling this option exposes every system daemon to the Ingress-Nginx Controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully. Example Consider this ingress-nginx-controller Deployment composed of 2 replicas, NGINX Pods inherit from the IP address of their host instead of an internal Pod IP. $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 One major limitation of this deployment approach is that only a single Ingress-Nginx Controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to such situation fail with the following event: $ kubectl -n ingress-nginx describe pod ... Events: Type Reason From Message ---- ------ ---- ------- Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports. One way to ensure only schedulable Pods are created is to deploy the Ingress-Nginx Controller as a DaemonSet instead of a traditional Deployment. Info A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods . For more information, see DaemonSet . Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion. Like with NodePorts, this approach has a few quirks it is important to be aware of. DNS resolution Pods configured with hostNetwork: true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS ), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet . Consider using this setting if NGINX is expected to resolve internal names for any reason. Ingress status Because there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank. $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the Ingress-Nginx Controller. 
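A minimal hedged sketch of enabling this flag (your manifest layout may differ): add - --report-node-internal-ip-address to the args of the controller container in the Deployment or DaemonSet, or, when installing with Helm, pass it through the chart's controller.extraArgs value, e.g. --set controller.extraArgs.report-node-internal-ip-address=true (verify the value name against your chart version).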
Example Given a ingress-nginx-controller DaemonSet composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 the controller sets the status of all Ingress objects it manages to the following value: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.2,203.0.113.3 80 Note Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments . Using a self-provisioned edge \u00b6 Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. HAproxy ) and is usually managed outside of the Kubernetes landscape by operations teams. Such deployment builds upon the NodePort Service described above in Over a NodePort Service , with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address. On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below: External IPs \u00b6 Source IP address This method does not allow preserving the source IP of HTTP requests in any manner, it is therefore not recommended to use it despite its apparent simplicity. The externalIPs Service option was previously mentioned in the NodePort section. As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node . Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 and the following ingress-nginx NodePort Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port: spec : externalIPs : - 203.0.113.2 - 203.0.113.3 $ curl -D- http://myapp.example.com:30100 HTTP/1.1 200 OK Server: nginx/1.15.2 $ curl -D- http://myapp.example.com HTTP/1.1 200 OK Server: nginx/1.15.2 We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.","title":"Bare-metal considerations"},{"location":"deploy/baremetal/#bare-metal-considerations","text":"In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the Ingress-Nginx Controller to external clients and, indirectly, to any application running inside the cluster. 
Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers. The rest of this document describes a few recommended approaches to deploying the Ingress-Nginx Controller inside a Kubernetes cluster running on bare-metal.","title":"Bare-metal considerations"},{"location":"deploy/baremetal/#a-pure-software-solution-metallb","text":"MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster. This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes . In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details. Note The description of other supported configuration modes is off-scope for this document. Warning MetalLB is currently in beta . Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly. MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions, and that the Ingress-Nginx Controller was installed using the steps described in the quickstart section of the installation guide . MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined through IPAddressPool objects in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use, you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 After creating the following objects, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly. --- apiVersion : metallb.io/v1beta1 kind : IPAddressPool metadata : name : default namespace : metallb-system spec : addresses : - 203.0.113.10-203.0.113.15 autoAssign : true --- apiVersion : metallb.io/v1beta1 kind : L2Advertisement metadata : name : default namespace : metallb-system spec : ipAddressPools : - default $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx LoadBalancer 10.0.220.217 203.0.113.10 80:30100/TCP,443:30101/TCP As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service: $ curl -D- http://203.0.113.10 -H 'Host: myapp.example.com' HTTP/1.1 200 OK Server: nginx/1.15.2 Tip In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. 
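For instance, a minimal sketch assuming the Service name used in this example: kubectl -n ingress-nginx patch svc ingress-nginx -p '{\"spec\":{\"externalTrafficPolicy\":\"Local\"}}'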
Traffic policies are described in more details in Traffic policies as well as in the next section.","title":"A pure software solution: MetalLB"},{"location":"deploy/baremetal/#over-a-nodeport-service","text":"Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide . Info A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services . In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests. Example Given the NodePort 30100 allocated to the ingress-nginx Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 a client would reach an Ingress with host: myapp.example.com at http://myapp.example.com:30100 , where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address. Impact on the host system While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require. This practice is therefore discouraged . See the other approaches proposed in this page for alternatives. This approach has a few other limitations one ought to be aware of: Source IP address Services of type NodePort perform source address translation by default. This means the source IP of a HTTP request is always the IP address of the Kubernetes node that received the request from the perspective of NGINX. The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local ( example ). Warning This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the Ingress-Nginx Controller should be scheduled or not scheduled. 
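One hedged way to do this (the label key and value below are arbitrary examples): label the chosen nodes with kubectl label node host-2 host-3 ingress-ready=true and add a matching nodeSelector to the controller Pod template, e.g. through a Helm values file: controller : nodeSelector : ingress-ready : \"true\"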
Example In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 with a ingress-nginx-controller Deployment composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3 ingress-nginx-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2 Requests sent to host-2 and host-3 would be forwarded to NGINX and original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node. Ingress status Because NodePort Services do not get a LoadBalancerIP assigned by definition, the Ingress-Nginx Controller does not update the status of Ingress objects it manages . $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Despite the fact there is no load balancer providing a public IP address to the Ingress-Nginx Controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service. Warning There is more to setting externalIPs than just enabling the Ingress-Nginx Controller to update the status of Ingress objects. Please read about this option in the Services page of official Kubernetes documentation as well as the section about External IPs in this document for more information. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 one could edit the ingress-nginx Service and add the following field to the object spec spec : externalIPs : - 203.0.113.1 - 203.0.113.2 - 203.0.113.3 which would in turn be reflected on Ingress objects as follows: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80 Redirects As NGINX is not aware of the port translation operated by the NodePort Service , backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort. Example Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain , are generated without NodePort: $ curl -D- http://myapp.example.com:30100 ` HTTP/1.1 308 Permanent Redirect Server: nginx/1.15.2 Location: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect","title":"Over a NodePort Service"},{"location":"deploy/baremetal/#via-the-host-network","text":"In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the Ingress-Nginx Controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services. Note This approach does not leverage any Service object to expose the Ingress-Nginx Controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it . 
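For example: kubectl -n ingress-nginx delete service ingress-nginx (or ingress-nginx-controller , depending on how the controller was installed; adjust the name to your setup).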
This can be achieved by enabling the hostNetwork option in the Pods' spec. template : spec : hostNetwork : true Security considerations Enabling this option exposes every system daemon to the Ingress-Nginx Controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully. Example Consider this ingress-nginx-controller Deployment composed of 2 replicas, NGINX Pods inherit from the IP address of their host instead of an internal Pod IP. $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 One major limitation of this deployment approach is that only a single Ingress-Nginx Controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to such situation fail with the following event: $ kubectl -n ingress-nginx describe pod ... Events: Type Reason From Message ---- ------ ---- ------- Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports. One way to ensure only schedulable Pods are created is to deploy the Ingress-Nginx Controller as a DaemonSet instead of a traditional Deployment. Info A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods . For more information, see DaemonSet . Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion. Like with NodePorts, this approach has a few quirks it is important to be aware of. DNS resolution Pods configured with hostNetwork: true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS ), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet . Consider using this setting if NGINX is expected to resolve internal names for any reason. Ingress status Because there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank. $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the Ingress-Nginx Controller. Example Given a ingress-nginx-controller DaemonSet composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 the controller sets the status of all Ingress objects it manages to the following value: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.2,203.0.113.3 80 Note Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. 
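For example, adding - --publish-status-address=203.0.113.100 to the controller container's args would publish that address instead; 203.0.113.100 is purely illustrative and stands in for the address of your edge or VIP.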
See Command line arguments .","title":"Via the host network"},{"location":"deploy/baremetal/#using-a-self-provisioned-edge","text":"Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. HAproxy ) and is usually managed outside of the Kubernetes landscape by operations teams. Such deployment builds upon the NodePort Service described above in Over a NodePort Service , with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address. On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below:","title":"Using a self-provisioned edge"},{"location":"deploy/baremetal/#external-ips","text":"Source IP address This method does not allow preserving the source IP of HTTP requests in any manner, it is therefore not recommended to use it despite its apparent simplicity. The externalIPs Service option was previously mentioned in the NodePort section. As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node . Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 and the following ingress-nginx NodePort Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port: spec : externalIPs : - 203.0.113.2 - 203.0.113.3 $ curl -D- http://myapp.example.com:30100 HTTP/1.1 200 OK Server: nginx/1.15.2 $ curl -D- http://myapp.example.com HTTP/1.1 200 OK Server: nginx/1.15.2 We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.","title":"External IPs"},{"location":"deploy/hardening-guide/","text":"Hardening Guide \u00b6 Overview \u00b6 There are several ways to do hardening and securing of nginx. In this documentation two guides are used, the guides are overlapping in some points: nginx CIS Benchmark cipherlist.eu (one of many forks of the now dead project cipherli.st) This guide describes, what of the different configurations described in those guides is already implemented as default in the nginx implementation of kubernetes ingress, what needs to be configured, what is obsolete due to the fact that the nginx is running as container (the CIS benchmark relates to a non-containerized installation) and what is difficult or not possible. Be aware that this is only a guide and you are responsible for your own implementation. Some of the configurations may lead to have specific clients unable to reach your site or similar consequences. 
This guide refers to chapters in the CIS Benchmark. For full explanation you should refer to the benchmark document itself Configuration Guide \u00b6 Chapter in CIS benchmark Status Default Action to do if not default 1 Initial Setup 1.1 Installation 1.1.1 Ensure NGINX is installed (Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.1.2 Ensure NGINX is installed from source (Not Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.2 Configure Software Updates 1.2.1 Ensure package manager repositories are properly configured (Not Scored) OK done via helm, nginx version could be overwritten, however compatibility is not ensured then 1.2.2 Ensure the latest software package is installed (Not Scored) ACTION NEEDED done via helm, nginx version could be overwritten, however compatibility is not ensured then Plan for periodic updates 2 Basic Configuration 2.1 Minimize NGINX Modules 2.1.1 Ensure only required modules are installed (Not Scored) OK Already only needed modules are installed, however proposals for further reduction are welcome 2.1.2 Ensure HTTP WebDAV module is not installed (Scored) OK 2.1.3 Ensure modules with gzip functionality are disabled (Scored) OK 2.1.4 Ensure the autoindex module is disabled (Scored) OK No autoindex configs so far in ingress defaults 2.2 Account Security 2.2.1 Ensure that NGINX is run using a non-privileged, dedicated service account (Not Scored) OK Pod configured as user www-data: See this line in helm chart values . Compiled with user www-data: See this line in build script 2.2.2 Ensure the NGINX service account is locked (Scored) OK Docker design ensures this 2.2.3 Ensure the NGINX service account has an invalid shell (Scored) OK Shell is nologin: see this line in build script 2.3 Permissions and Ownership 2.3.1 Ensure NGINX directories and files are owned by root (Scored) OK Obsolete through docker-design and ingress controller needs to update the configs dynamically 2.3.2 Ensure access to NGINX directories and files is restricted (Scored) OK See previous answer 2.3.3 Ensure the NGINX process ID (PID) file is secured (Scored) OK No PID-File due to docker design 2.3.4 Ensure the core dump directory is secured (Not Scored) OK No working_directory configured by default 2.4 Network Configuration 2.4.1 Ensure NGINX only listens for network connections on authorized ports (Not Scored) OK Ensured by automatic nginx.conf configuration 2.4.2 Ensure requests for unknown host names are rejected (Not Scored) OK They are not rejected but send to the \"default backend\" delivering appropriate errors (mostly 404) 2.4.3 Ensure keepalive_timeout is 10 seconds or less, but not 0 (Scored) ACTION NEEDED Default is 75s configure keep-alive to 10 seconds according to this documentation 2.4.4 Ensure send_timeout is set to 10 seconds or less, but not 0 (Scored) RISK TO BE ACCEPTED Not configured, however the nginx default is 60s Not configurable 2.5 Information Disclosure 2.5.1 Ensure server_tokens directive is set to off (Scored) OK server_tokens is configured to off by default 2.5.2 Ensure default error and index.html pages do not reference NGINX (Scored) ACTION NEEDED 404 shows no version at all, 503 and 403 show \"nginx\", which is hardcoded see this line in nginx source code configure custom error pages at least for 403, 404 and 503 and 500 2.5.3 Ensure hidden file serving is disabled (Not Scored) ACTION NEEDED config not set configure a config.server-snippet Snippet, but beware of .well-known 
challenges or similar. Refer to the benchmark here please 2.5.4 Ensure the NGINX reverse proxy does not enable information disclosure (Scored) ACTION NEEDED hide not configured configure hide-headers with array of \"X-Powered-By\" and \"Server\": according to this documentation 3 Logging 3.1 Ensure detailed logging is enabled (Not Scored) OK nginx ingress has a very detailed log format by default 3.2 Ensure access logging is enabled (Scored) OK Access log is enabled by default 3.3 Ensure error logging is enabled and set to the info logging level (Scored) OK Error log is configured by default. The log level does not matter, because it is all sent to STDOUT anyway 3.4 Ensure log files are rotated (Scored) OBSOLETE Log file handling is not part of the nginx ingress and should be handled separately 3.5 Ensure error logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.6 Ensure access logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.7 Ensure proxies pass source IP information (Scored) OK Headers are set by default 4 Encryption 4.1 TLS / SSL Configuration 4.1.1 Ensure HTTP is redirected to HTTPS (Scored) OK Redirect to TLS is default 4.1.2 Ensure a trusted certificate and trust chain is installed (Not Scored) ACTION NEEDED For installing certs there are enough manuals in the web. A good way is to use lets encrypt through cert-manager Install proper certificates or use lets encrypt with cert-manager 4.1.3 Ensure private key permissions are restricted (Scored) ACTION NEEDED See previous answer 4.1.4 Ensure only modern TLS protocols are used (Scored) OK/ACTION NEEDED Default is TLS 1.2 + 1.3, while this is okay for CIS Benchmark, cipherlist.eu only recommends 1.3. This may cut off old OS's Set controller.config.ssl-protocols to \"TLSv1.3\" 4.1.5 Disable weak ciphers (Scored) ACTION NEEDED Default ciphers are already good, but cipherlist.eu recommends even stronger ciphers Set controller.config.ssl-ciphers to \"EECDH+AESGCM:EDH+AESGCM\" 4.1.6 Ensure custom Diffie-Hellman parameters are used (Scored) ACTION NEEDED No custom DH parameters are generated Generate dh parameters for each ingress deployment you use - see here for a how to 4.1.7 Ensure Online Certificate Status Protocol (OCSP) stapling is enabled (Scored) ACTION NEEDED Not enabled set via this configuration parameter 4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored) OK HSTS is enabled by default 4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored) ACTION NEEDED / RISK TO BE ACCEPTED HKPK not enabled by default If lets encrypt is not used, set correct HPKP header. There are several ways to implement this - with the helm charts it works via controller.add-headers. 
If lets encrypt is used, this is complicated, a solution here is yet unknown 4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, manual is here 4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, see configuration here 4.1.12 Ensure your domain is preloaded (Not Scored) ACTION NEEDED Preload is not active by default Set controller.config.hsts-preload to true 4.1.13 Ensure session resumption is disabled to enable perfect forward security (Scored) OK Session tickets are disabled by default 4.1.14 Ensure HTTP/2.0 is used (Not Scored) OK http2 is set by default 5 Request Filtering and Restrictions 5.1 Access Control 5.1.1 Ensure allow and deny filters limit access to specific IP addresses (Not Scored) OK/ACTION NEEDED Depends on use case, geo ip module is compiled into Ingress-Nginx Controller, there are several ways to use it If needed set IP restrictions via annotations or work with config snippets (be careful with lets-encrypt-http-challenge!) 5.1.2 Ensure only whitelisted HTTP methods are allowed (Not Scored) OK/ACTION NEEDED Depends on use case If required it can be set via config snippet 5.2 Request Limits 5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) ACTION NEEDED Default timeout is 60s Set via this configuration parameter and respective body equivalent 5.2.2 Ensure the maximum request body size is set correctly (Scored) ACTION NEEDED Default is 1m set via this configuration parameter 5.2.3 Ensure the maximum buffer size for URIs is defined (Scored) ACTION NEEDED Default is 4 8k Set via this configuration parameter 5.2.4 Ensure the number of connections per IP address is limited (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.2.5 Ensure rate limits by IP address are set (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.3 Browser Security 5.3.1 Ensure X-Frame-Options header is configured and enabled (Scored) ACTION NEEDED Header not set by default Several ways to implement this - with the helm charts it works via controller.add-headers 5.3.2 Ensure X-Content-Type-Options header is configured and enabled (Scored) ACTION NEEDED See previous answer See previous answer 5.3.3 Ensure the X-XSS-Protection Header is enabled and configured properly (Scored) ACTION NEEDED See previous answer See previous answer 5.3.4 Ensure that Content Security Policy (CSP) is enabled and configured properly (Not Scored) ACTION NEEDED See previous answer See previous answer 5.3.5 Ensure the Referrer Policy is enabled and configured properly (Not Scored) ACTION NEEDED Depends on application. 
It should be handled in the application's webserver itself, not in the load-balancing ingress check backend webserver 6 Mandatory Access Control n/a too high level, depends on backends\",\"title\":\"Hardening guide\"},{\"location\":\"deploy/hardening-guide/#hardening-guide\",\"text\":\"\",\"title\":\"Hardening Guide\"},{\"location\":\"deploy/hardening-guide/#overview\",\"text\":\"There are several ways to harden and secure nginx. In this documentation, two guides are used; they overlap in some points: nginx CIS Benchmark cipherlist.eu (one of many forks of the now dead project cipherli.st) This guide describes which of the configurations from those guides are already implemented as defaults in the nginx implementation of the Kubernetes ingress, which need to be configured, which are obsolete because nginx runs as a container (the CIS benchmark relates to a non-containerized installation), and which are difficult or not possible. Be aware that this is only a guide and you are responsible for your own implementation. Some of the configurations may leave specific clients unable to reach your site, or have similar consequences. This guide refers to chapters in the CIS Benchmark. For a full explanation, refer to the benchmark document itself.\",\"title\":\"Overview\"},{\"location\":\"deploy/hardening-guide/#configuration-guide\",\"text\":\"Chapter in CIS benchmark Status Default Action to do if not default 1 Initial Setup 1.1 Installation 1.1.1 Ensure NGINX is installed (Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.1.2 Ensure NGINX is installed from source (Not Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.2 Configure Software Updates 1.2.1 Ensure package manager repositories are properly configured (Not Scored) OK done via helm, nginx version could be overwritten, however compatibility is not ensured then 1.2.2 Ensure the latest software package is installed (Not Scored) ACTION NEEDED done via helm, nginx version could be overwritten, however compatibility is not ensured then Plan for periodic updates 2 Basic Configuration 2.1 Minimize NGINX Modules 2.1.1 Ensure only required modules are installed (Not Scored) OK Already only needed modules are installed, however proposals for further reduction are welcome 2.1.2 Ensure HTTP WebDAV module is not installed (Scored) OK 2.1.3 Ensure modules with gzip functionality are disabled (Scored) OK 2.1.4 Ensure the autoindex module is disabled (Scored) OK No autoindex configs so far in ingress defaults 2.2 Account Security 2.2.1 Ensure that NGINX is run using a non-privileged, dedicated service account (Not Scored) OK Pod configured as user www-data: See this line in helm chart values .
Compiled with user www-data: See this line in build script 2.2.2 Ensure the NGINX service account is locked (Scored) OK Docker design ensures this 2.2.3 Ensure the NGINX service account has an invalid shell (Scored) OK Shell is nologin: see this line in build script 2.3 Permissions and Ownership 2.3.1 Ensure NGINX directories and files are owned by root (Scored) OK Obsolete through docker-design and ingress controller needs to update the configs dynamically 2.3.2 Ensure access to NGINX directories and files is restricted (Scored) OK See previous answer 2.3.3 Ensure the NGINX process ID (PID) file is secured (Scored) OK No PID-File due to docker design 2.3.4 Ensure the core dump directory is secured (Not Scored) OK No working_directory configured by default 2.4 Network Configuration 2.4.1 Ensure NGINX only listens for network connections on authorized ports (Not Scored) OK Ensured by automatic nginx.conf configuration 2.4.2 Ensure requests for unknown host names are rejected (Not Scored) OK They are not rejected but send to the \"default backend\" delivering appropriate errors (mostly 404) 2.4.3 Ensure keepalive_timeout is 10 seconds or less, but not 0 (Scored) ACTION NEEDED Default is 75s configure keep-alive to 10 seconds according to this documentation 2.4.4 Ensure send_timeout is set to 10 seconds or less, but not 0 (Scored) RISK TO BE ACCEPTED Not configured, however the nginx default is 60s Not configurable 2.5 Information Disclosure 2.5.1 Ensure server_tokens directive is set to off (Scored) OK server_tokens is configured to off by default 2.5.2 Ensure default error and index.html pages do not reference NGINX (Scored) ACTION NEEDED 404 shows no version at all, 503 and 403 show \"nginx\", which is hardcoded see this line in nginx source code configure custom error pages at least for 403, 404 and 503 and 500 2.5.3 Ensure hidden file serving is disabled (Not Scored) ACTION NEEDED config not set configure a config.server-snippet Snippet, but beware of .well-known challenges or similar. Refer to the benchmark here please 2.5.4 Ensure the NGINX reverse proxy does not enable information disclosure (Scored) ACTION NEEDED hide not configured configure hide-headers with array of \"X-Powered-By\" and \"Server\": according to this documentation 3 Logging 3.1 Ensure detailed logging is enabled (Not Scored) OK nginx ingress has a very detailed log format by default 3.2 Ensure access logging is enabled (Scored) OK Access log is enabled by default 3.3 Ensure error logging is enabled and set to the info logging level (Scored) OK Error log is configured by default. The log level does not matter, because it is all sent to STDOUT anyway 3.4 Ensure log files are rotated (Scored) OBSOLETE Log file handling is not part of the nginx ingress and should be handled separately 3.5 Ensure error logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.6 Ensure access logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.7 Ensure proxies pass source IP information (Scored) OK Headers are set by default 4 Encryption 4.1 TLS / SSL Configuration 4.1.1 Ensure HTTP is redirected to HTTPS (Scored) OK Redirect to TLS is default 4.1.2 Ensure a trusted certificate and trust chain is installed (Not Scored) ACTION NEEDED For installing certs there are enough manuals in the web. 
A good way is to use lets encrypt through cert-manager Install proper certificates or use lets encrypt with cert-manager 4.1.3 Ensure private key permissions are restricted (Scored) ACTION NEEDED See previous answer 4.1.4 Ensure only modern TLS protocols are used (Scored) OK/ACTION NEEDED Default is TLS 1.2 + 1.3, while this is okay for CIS Benchmark, cipherlist.eu only recommends 1.3. This may cut off old OS's Set controller.config.ssl-protocols to \"TLSv1.3\" 4.1.5 Disable weak ciphers (Scored) ACTION NEEDED Default ciphers are already good, but cipherlist.eu recommends even stronger ciphers Set controller.config.ssl-ciphers to \"EECDH+AESGCM:EDH+AESGCM\" 4.1.6 Ensure custom Diffie-Hellman parameters are used (Scored) ACTION NEEDED No custom DH parameters are generated Generate dh parameters for each ingress deployment you use - see here for a how to 4.1.7 Ensure Online Certificate Status Protocol (OCSP) stapling is enabled (Scored) ACTION NEEDED Not enabled set via this configuration parameter 4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored) OK HSTS is enabled by default 4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored) ACTION NEEDED / RISK TO BE ACCEPTED HKPK not enabled by default If lets encrypt is not used, set correct HPKP header. There are several ways to implement this - with the helm charts it works via controller.add-headers. If lets encrypt is used, this is complicated, a solution here is yet unknown 4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, manual is here 4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, see configuration here 4.1.12 Ensure your domain is preloaded (Not Scored) ACTION NEEDED Preload is not active by default Set controller.config.hsts-preload to true 4.1.13 Ensure session resumption is disabled to enable perfect forward security (Scored) OK Session tickets are disabled by default 4.1.14 Ensure HTTP/2.0 is used (Not Scored) OK http2 is set by default 5 Request Filtering and Restrictions 5.1 Access Control 5.1.1 Ensure allow and deny filters limit access to specific IP addresses (Not Scored) OK/ACTION NEEDED Depends on use case, geo ip module is compiled into Ingress-Nginx Controller, there are several ways to use it If needed set IP restrictions via annotations or work with config snippets (be careful with lets-encrypt-http-challenge!) 
5.1.2 Ensure only whitelisted HTTP methods are allowed (Not Scored) OK/ACTION NEEDED Depends on use case If required it can be set via config snippet 5.2 Request Limits 5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) ACTION NEEDED Default timeout is 60s Set via this configuration parameter and respective body equivalent 5.2.2 Ensure the maximum request body size is set correctly (Scored) ACTION NEEDED Default is 1m set via this configuration parameter 5.2.3 Ensure the maximum buffer size for URIs is defined (Scored) ACTION NEEDED Default is 4 8k Set via this configuration parameter 5.2.4 Ensure the number of connections per IP address is limited (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.2.5 Ensure rate limits by IP address are set (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.3 Browser Security 5.3.1 Ensure X-Frame-Options header is configured and enabled (Scored) ACTION NEEDED Header not set by default Several ways to implement this - with the helm charts it works via controller.add-headers 5.3.2 Ensure X-Content-Type-Options header is configured and enabled (Scored) ACTION NEEDED See previous answer See previous answer 5.3.3 Ensure the X-XSS-Protection Header is enabled and configured properly (Scored) ACTION NEEDED See previous answer See previous answer 5.3.4 Ensure that Content Security Policy (CSP) is enabled and configured properly (Not Scored) ACTION NEEDED See previous answer See previous answer 5.3.5 Ensure the Referrer Policy is enabled and configured properly (Not Scored) ACTION NEEDED Depends on application. It should be handled in the applications webserver itself, not in the load balancing ingress check backend webserver 6 Mandatory Access Control n/a too high level, depends on backends @media only screen and (min-width: 768px) { td:nth-child(1){ white-space:normal !important; } .md-typeset table:not([class]) td { padding: .2rem .3rem; } }","title":"Configuration Guide"},{"location":"deploy/rbac/","text":"Role Based Access Control (RBAC) \u00b6 Overview \u00b6 This example applies to ingress-nginx-controllers being deployed in an environment with RBAC enabled. Role Based Access Control is comprised of four layers: ClusterRole - permissions assigned to a role that apply to an entire cluster ClusterRoleBinding - binding a ClusterRole to a specific account Role - permissions assigned to a role that apply to a specific namespace RoleBinding - binding a Role to a specific account In order for RBAC to be applied to an ingress-nginx-controller, that controller should be assigned to a ServiceAccount . That ServiceAccount should be bound to the Role s and ClusterRole s defined for the ingress-nginx-controller. Service Accounts created in this example \u00b6 One ServiceAccount is created in this example, ingress-nginx . Permissions Granted in this example \u00b6 There are two sets of permissions defined in this example. Cluster-wide permissions defined by the ClusterRole named ingress-nginx , and namespace specific permissions defined by the Role named ingress-nginx . Cluster Permissions \u00b6 These permissions are granted in order for the ingress-nginx-controller to be able to function as an ingress across the cluster. 
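To consolidate the items marked ACTION NEEDED in the configuration guide above, the global settings can be gathered into the controller ConfigMap. This is a minimal sketch rather than a complete hardening profile: the ConfigMap name and namespace assume a default installation, and the values are the ones suggested in the guide.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller          # assumed default name/namespace
  namespace: ingress-nginx
data:
  keep-alive: "10"                         # 2.4.3 - keepalive_timeout of 10 seconds or less
  hide-headers: "Server,X-Powered-By"      # 2.5.4 - hide headers that disclose information
  ssl-protocols: "TLSv1.3"                 # 4.1.4 - only modern TLS protocols
  ssl-ciphers: "EECDH+AESGCM:EDH+AESGCM"   # 4.1.5 - stronger cipher list
  hsts-preload: "true"                     # 4.1.12 - HSTS preload
```

Per-Ingress items such as connection and rate limits (5.2.4, 5.2.5) or IP restrictions (5.1.1) are set with annotations on the individual Ingress objects rather than in this ConfigMap.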
These permissions are granted to the ClusterRole named ingress-nginx configmaps , endpoints , nodes , pods , secrets : list, watch nodes : get services , ingresses , ingressclasses , endpointslices : get, list, watch events : create, patch ingresses/status : update leases : list, watch Namespace Permissions \u00b6 These permissions are granted specific to the ingress-nginx namespace. These permissions are granted to the Role named ingress-nginx configmaps , pods , secrets : get endpoints : get Furthermore to support leader-election, the ingress-nginx-controller needs to have access to a leases using the resourceName ingress-nginx-leader Note that resourceNames can NOT be used to limit requests using the \u201ccreate\u201d verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a \u201ccreate\u201d request are part of the request body). leases : get, update (for resourceName ingress-controller-leader ) leases : create This resourceName is the election-id defined by the ingress-controller, which defaults to: election-id : ingress-controller-leader resourceName : Please adapt accordingly if you overwrite either parameter when launching the ingress-nginx-controller. Bindings \u00b6 The ServiceAccount ingress-nginx is bound to the Role ingress-nginx and the ClusterRole ingress-nginx . The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the ingress-nginx namespace.","title":"Role Based Access Control (RBAC)"},{"location":"deploy/rbac/#role-based-access-control-rbac","text":"","title":"Role Based Access Control (RBAC)"},{"location":"deploy/rbac/#overview","text":"This example applies to ingress-nginx-controllers being deployed in an environment with RBAC enabled. Role Based Access Control is comprised of four layers: ClusterRole - permissions assigned to a role that apply to an entire cluster ClusterRoleBinding - binding a ClusterRole to a specific account Role - permissions assigned to a role that apply to a specific namespace RoleBinding - binding a Role to a specific account In order for RBAC to be applied to an ingress-nginx-controller, that controller should be assigned to a ServiceAccount . That ServiceAccount should be bound to the Role s and ClusterRole s defined for the ingress-nginx-controller.","title":"Overview"},{"location":"deploy/rbac/#service-accounts-created-in-this-example","text":"One ServiceAccount is created in this example, ingress-nginx .","title":"Service Accounts created in this example"},{"location":"deploy/rbac/#permissions-granted-in-this-example","text":"There are two sets of permissions defined in this example. Cluster-wide permissions defined by the ClusterRole named ingress-nginx , and namespace specific permissions defined by the Role named ingress-nginx .","title":"Permissions Granted in this example"},{"location":"deploy/rbac/#cluster-permissions","text":"These permissions are granted in order for the ingress-nginx-controller to be able to function as an ingress across the cluster. 
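As a rough illustration of the cluster-wide permissions listed above, the ClusterRole rules could be sketched as follows. This is a simplified excerpt, not the canonical manifest shipped with the deploy files, and the API groups shown are the usual ones for these resources (core, networking.k8s.io, discovery.k8s.io, coordination.k8s.io).

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx
rules:
  - apiGroups: [""]
    resources: ["configmaps", "endpoints", "nodes", "pods", "secrets"]
    verbs: ["list", "watch"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses", "ingressclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses/status"]
    verbs: ["update"]
  - apiGroups: ["discovery.k8s.io"]
    resources: ["endpointslices"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["list", "watch"]
```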
These permissions are granted to the ClusterRole named ingress-nginx configmaps , endpoints , nodes , pods , secrets : list, watch nodes : get services , ingresses , ingressclasses , endpointslices : get, list, watch events : create, patch ingresses/status : update leases : list, watch","title":"Cluster Permissions"},{"location":"deploy/rbac/#namespace-permissions","text":"These permissions are granted specific to the ingress-nginx namespace. These permissions are granted to the Role named ingress-nginx configmaps , pods , secrets : get endpoints : get Furthermore to support leader-election, the ingress-nginx-controller needs to have access to a leases using the resourceName ingress-nginx-leader Note that resourceNames can NOT be used to limit requests using the \u201ccreate\u201d verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a \u201ccreate\u201d request are part of the request body). leases : get, update (for resourceName ingress-controller-leader ) leases : create This resourceName is the election-id defined by the ingress-controller, which defaults to: election-id : ingress-controller-leader resourceName : Please adapt accordingly if you overwrite either parameter when launching the ingress-nginx-controller.","title":"Namespace Permissions"},{"location":"deploy/rbac/#bindings","text":"The ServiceAccount ingress-nginx is bound to the Role ingress-nginx and the ClusterRole ingress-nginx . The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the ingress-nginx namespace.","title":"Bindings"},{"location":"deploy/upgrade/","text":"Upgrading \u00b6 Important No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx . Without Helm \u00b6 To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment. I.e. if your deployment resource looks like (partial example): kind : Deployment metadata : name : ingress-nginx-controller namespace : ingress-nginx spec : replicas : 1 selector : ... template : metadata : ... spec : containers : - name : ingress-nginx-controller image : registry.k8s.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef args : ... simply change the v1.0.4 tag to the version you wish to upgrade to. The easiest way to do this is e.g. (do note you may need to change the name parameter according to your installation): kubectl set image deployment/ingress-nginx-controller \\ controller=registry.k8s.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d \\ -n ingress-nginx For interactive editing, use kubectl edit deployment ingress-nginx-controller -n ingress-nginx . 
With Helm \u00b6 If you installed ingress-nginx using the Helm command in the deployment docs so its name is ingress-nginx , you should be able to upgrade using helm upgrade --reuse-values ingress-nginx ingress-nginx/ingress-nginx Migrating from stable/nginx-ingress \u00b6 See detailed steps in the upgrading section of the ingress-nginx chart README .","title":"Upgrade"},{"location":"deploy/upgrade/#upgrading","text":"Important No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx .","title":"Upgrading"},{"location":"deploy/upgrade/#without-helm","text":"To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment. I.e. if your deployment resource looks like (partial example): kind : Deployment metadata : name : ingress-nginx-controller namespace : ingress-nginx spec : replicas : 1 selector : ... template : metadata : ... spec : containers : - name : ingress-nginx-controller image : registry.k8s.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef args : ... simply change the v1.0.4 tag to the version you wish to upgrade to. The easiest way to do this is e.g. (do note you may need to change the name parameter according to your installation): kubectl set image deployment/ingress-nginx-controller \\ controller=registry.k8s.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d \\ -n ingress-nginx For interactive editing, use kubectl edit deployment ingress-nginx-controller -n ingress-nginx .","title":"Without Helm"},{"location":"deploy/upgrade/#with-helm","text":"If you installed ingress-nginx using the Helm command in the deployment docs so its name is ingress-nginx , you should be able to upgrade using helm upgrade --reuse-values ingress-nginx ingress-nginx/ingress-nginx","title":"With Helm"},{"location":"deploy/upgrade/#migrating-from-stablenginx-ingress","text":"See detailed steps in the upgrading section of the ingress-nginx chart README .","title":"Migrating from stable/nginx-ingress"},{"location":"developer-guide/code-overview/","text":"Ingress NGINX - Code Overview \u00b6 This document provides an overview of Ingress NGINX code. Core Golang code \u00b6 This part of the code is responsible for the main logic of Ingress NGINX. It contains all the logics that parses Ingress Objects , annotations , watches Endpoints and turn them into usable nginx.conf configuration. Core Sync Logics: \u00b6 Ingress-nginx has an internal model of the ingresses, secrets and endpoints in a given cluster. It maintains two copies of that: One copy is the currently running configuration model Second copy is the one generated in response to some changes in the cluster The sync logic diffs the two models and if there's a change it tries to converge the running configuration to the new one. There are static and dynamic configuration changes. All endpoints and certificate changes are handled dynamically by posting the payload to an internal NGINX endpoint that is handled by Lua. The following parts of the code can be found: Entrypoint \u00b6 The main package is responsible for starting ingress-nginx program, which can be found in cmd/nginx directory. Version \u00b6 Is the package of the code responsible for adding version subcommand, and can be found in version directory. 
Internal code \u00b6 This part of the code contains the internal logics that compose Ingress NGINX Controller, and it's split into: Admission Controller \u00b6 Contains the code of Kubernetes Admission Controller which validates the syntax of ingress objects before accepting it. This code can be found in internal/admission/controller directory. File functions \u00b6 Contains auxiliary codes that deal with files, such as generating the SHA1 checksum of a file, or creating required directories. This code can be found in internal/file directory. Ingress functions \u00b6 Contains all the logics from Ingress-Nginx Controller, with some examples being: Expected Golang structures that will be used in templates and other parts of the code - internal/ingress/types.go . supported annotations and its parsing logics - internal/ingress/annotations . reconciliation loops and logics - internal/ingress/controller defaults - define the default struct - internal/ingress/defaults . Error interface and types implementation - internal/ingress/errors Metrics collectors for Prometheus exporting - internal/ingress/metric . Resolver - Extracts information from a controller - internal/ingress/resolver . Ingress Object status publisher - internal/ingress/status . And other parts of the code that will be written in this document in a future. K8s functions \u00b6 Contains helper functions for parsing Kubernetes objects. This part of the code can be found in internal/k8s directory. Networking functions \u00b6 Contains helper functions for networking, such as IPv4 and IPv6 parsing, SSL certificate parsing, etc. This part of the code can be found in internal/net directory. NGINX functions \u00b6 Contains helper function to deal with NGINX, such as verify if it's running and reading it's configuration file parts. This part of the code can be found in internal/nginx directory. Tasks / Queue \u00b6 Contains the functions responsible for the sync queue part of the controller. This part of the code can be found in internal/task directory. Other parts of internal \u00b6 Other parts of internal code might not be covered here, like runtime and watch but they can be added in a future. E2E Test \u00b6 The e2e tests code is in test directory. Other programs \u00b6 Describe here kubectl plugin , dbg , waitshutdown and cover the hack scripts. kubectl plugin \u00b6 It contains kubectl plugin for inspecting your ingress-nginx deployments. This part of code can be found in cmd/plugin directory Detail functions flow and available flow can be found in kubectl-plugin Deploy files \u00b6 This directory contains the yaml deploy files used as examples or references in the docs to deploy Ingress NGINX and other components. Those files are in deploy directory. Helm Chart \u00b6 Used to generate the Helm chart published. Code is in charts/ingress-nginx . Documentation/Website \u00b6 The documentation used to generate the website https://kubernetes.github.io/ingress-nginx/ This code is available in docs and it's main \"language\" is Markdown , used by mkdocs file to generate static pages. Container Images \u00b6 Container images used to run ingress-nginx, or to build the final image. Base Images \u00b6 Contains the Dockerfiles and scripts used to build base images that are used in other parts of the repo. They are present in images repo. Some examples: * nginx - The base NGINX image ingress-nginx uses is not a vanilla NGINX. It bundles many libraries together and it is a job in itself to maintain that and keep things up-to-date. 
* custom-error-pages - Used on the custom error page examples. There are other images inside this directory. Ingress Controller Image \u00b6 The image used to build the final ingress controller, used in deploy scripts and Helm charts. This is NGINX with some Lua enhancement. We do dynamic certificate, endpoints handling, canary traffic split, custom load balancing etc at this component. One can also add new functionalities using Lua plugin system. The files are in rootfs directory and contains: The Dockerfile nginx config Ingress NGINX Lua Scripts \u00b6 Ingress NGINX uses Lua Scripts to enable features like hot reloading, rate limiting and monitoring. Some are written using the OpenResty helper. The directory containing Lua scripts is rootfs/etc/nginx/lua . Nginx Go template file \u00b6 One of the functions of Ingress NGINX is to turn Ingress objects into nginx.conf file. To do so, the final step is to apply those configurations in nginx.tmpl turning it into a final nginx.conf file.","title":"Code Overview"},{"location":"developer-guide/code-overview/#ingress-nginx-code-overview","text":"This document provides an overview of Ingress NGINX code.","title":"Ingress NGINX - Code Overview"},{"location":"developer-guide/code-overview/#core-golang-code","text":"This part of the code is responsible for the main logic of Ingress NGINX. It contains all the logics that parses Ingress Objects , annotations , watches Endpoints and turn them into usable nginx.conf configuration.","title":"Core Golang code"},{"location":"developer-guide/code-overview/#core-sync-logics","text":"Ingress-nginx has an internal model of the ingresses, secrets and endpoints in a given cluster. It maintains two copies of that: One copy is the currently running configuration model Second copy is the one generated in response to some changes in the cluster The sync logic diffs the two models and if there's a change it tries to converge the running configuration to the new one. There are static and dynamic configuration changes. All endpoints and certificate changes are handled dynamically by posting the payload to an internal NGINX endpoint that is handled by Lua. The following parts of the code can be found:","title":"Core Sync Logics:"},{"location":"developer-guide/code-overview/#entrypoint","text":"The main package is responsible for starting ingress-nginx program, which can be found in cmd/nginx directory.","title":"Entrypoint"},{"location":"developer-guide/code-overview/#version","text":"Is the package of the code responsible for adding version subcommand, and can be found in version directory.","title":"Version"},{"location":"developer-guide/code-overview/#internal-code","text":"This part of the code contains the internal logics that compose Ingress NGINX Controller, and it's split into:","title":"Internal code"},{"location":"developer-guide/code-overview/#admission-controller","text":"Contains the code of Kubernetes Admission Controller which validates the syntax of ingress objects before accepting it. This code can be found in internal/admission/controller directory.","title":"Admission Controller"},{"location":"developer-guide/code-overview/#file-functions","text":"Contains auxiliary codes that deal with files, such as generating the SHA1 checksum of a file, or creating required directories. 
This code can be found in internal/file directory.","title":"File functions"},{"location":"developer-guide/code-overview/#ingress-functions","text":"Contains all the logics from Ingress-Nginx Controller, with some examples being: Expected Golang structures that will be used in templates and other parts of the code - internal/ingress/types.go . supported annotations and its parsing logics - internal/ingress/annotations . reconciliation loops and logics - internal/ingress/controller defaults - define the default struct - internal/ingress/defaults . Error interface and types implementation - internal/ingress/errors Metrics collectors for Prometheus exporting - internal/ingress/metric . Resolver - Extracts information from a controller - internal/ingress/resolver . Ingress Object status publisher - internal/ingress/status . And other parts of the code that will be written in this document in a future.","title":"Ingress functions"},{"location":"developer-guide/code-overview/#k8s-functions","text":"Contains helper functions for parsing Kubernetes objects. This part of the code can be found in internal/k8s directory.","title":"K8s functions"},{"location":"developer-guide/code-overview/#networking-functions","text":"Contains helper functions for networking, such as IPv4 and IPv6 parsing, SSL certificate parsing, etc. This part of the code can be found in internal/net directory.","title":"Networking functions"},{"location":"developer-guide/code-overview/#nginx-functions","text":"Contains helper function to deal with NGINX, such as verify if it's running and reading it's configuration file parts. This part of the code can be found in internal/nginx directory.","title":"NGINX functions"},{"location":"developer-guide/code-overview/#tasks-queue","text":"Contains the functions responsible for the sync queue part of the controller. This part of the code can be found in internal/task directory.","title":"Tasks / Queue"},{"location":"developer-guide/code-overview/#other-parts-of-internal","text":"Other parts of internal code might not be covered here, like runtime and watch but they can be added in a future.","title":"Other parts of internal"},{"location":"developer-guide/code-overview/#e2e-test","text":"The e2e tests code is in test directory.","title":"E2E Test"},{"location":"developer-guide/code-overview/#other-programs","text":"Describe here kubectl plugin , dbg , waitshutdown and cover the hack scripts.","title":"Other programs"},{"location":"developer-guide/code-overview/#kubectl-plugin","text":"It contains kubectl plugin for inspecting your ingress-nginx deployments. This part of code can be found in cmd/plugin directory Detail functions flow and available flow can be found in kubectl-plugin","title":"kubectl plugin"},{"location":"developer-guide/code-overview/#deploy-files","text":"This directory contains the yaml deploy files used as examples or references in the docs to deploy Ingress NGINX and other components. Those files are in deploy directory.","title":"Deploy files"},{"location":"developer-guide/code-overview/#helm-chart","text":"Used to generate the Helm chart published. 
Code is in charts/ingress-nginx .","title":"Helm Chart"},{"location":"developer-guide/code-overview/#documentationwebsite","text":"The documentation used to generate the website https://kubernetes.github.io/ingress-nginx/ This code is available in docs and it's main \"language\" is Markdown , used by mkdocs file to generate static pages.","title":"Documentation/Website"},{"location":"developer-guide/code-overview/#container-images","text":"Container images used to run ingress-nginx, or to build the final image.","title":"Container Images"},{"location":"developer-guide/code-overview/#base-images","text":"Contains the Dockerfiles and scripts used to build base images that are used in other parts of the repo. They are present in images repo. Some examples: * nginx - The base NGINX image ingress-nginx uses is not a vanilla NGINX. It bundles many libraries together and it is a job in itself to maintain that and keep things up-to-date. * custom-error-pages - Used on the custom error page examples. There are other images inside this directory.","title":"Base Images"},{"location":"developer-guide/code-overview/#ingress-controller-image","text":"The image used to build the final ingress controller, used in deploy scripts and Helm charts. This is NGINX with some Lua enhancement. We do dynamic certificate, endpoints handling, canary traffic split, custom load balancing etc at this component. One can also add new functionalities using Lua plugin system. The files are in rootfs directory and contains: The Dockerfile nginx config","title":"Ingress Controller Image"},{"location":"developer-guide/code-overview/#ingress-nginx-lua-scripts","text":"Ingress NGINX uses Lua Scripts to enable features like hot reloading, rate limiting and monitoring. Some are written using the OpenResty helper. The directory containing Lua scripts is rootfs/etc/nginx/lua .","title":"Ingress NGINX Lua Scripts"},{"location":"developer-guide/code-overview/#nginx-go-template-file","text":"One of the functions of Ingress NGINX is to turn Ingress objects into nginx.conf file. To do so, the final step is to apply those configurations in nginx.tmpl turning it into a final nginx.conf file.","title":"Nginx Go template file"},{"location":"developer-guide/getting-started/","text":"Developing for Ingress-Nginx Controller This document explains how to get started with developing for Ingress-Nginx Controller. For the really new contributors, who want to contribute to the INGRESS-NGINX project, but need help with understanding some basic concepts, that are needed to work with the Kubernetes ingress resource, here is a link to the New Contributors Guide . This guide contains tips on how a http/https request travels, from a browser or a curl command, to the webserver process running inside a container, in a pod, in a Kubernetes cluster, but enters the cluster via a ingress resource. For those who are familiar with those basic networking concepts like routing of a packet with regards to a http request, termination of connection, reverseproxy etc. etc., you can skip this and move on to the sections below. (or read it anyways just for context and also provide feedbacks if any) Prerequisites \u00b6 Install Go 1.14 or later. 
Note The project uses Go Modules Install Docker (v19.03.0 or later with experimental feature on) Important The majority of make tasks run as docker containers Quick Start \u00b6 Fork the repository Clone the repository to any location in your work station Add a GO111MODULE environment variable with export GO111MODULE=on Run go mod download to install dependencies Local build \u00b6 Start a local Kubernetes cluster using kind , build and deploy the ingress controller make dev-env - If you are working on the v1.x.x version of this controller, and you want to create a cluster with kubernetes version 1.22, then please visit the documentation for kind , and look for how to set a custom image for the kind node (image: kindest/node...), in the kind config file. Testing \u00b6 Run go unit tests make test Run unit-tests for lua code make lua-test Lua tests are located in the directory rootfs/etc/nginx/lua/test Important Test files must follow the naming convention _test.lua or it will be ignored Run e2e test suite make kind-e2e-test To limit the scope of the tests to execute, we can use the environment variable FOCUS FOCUS=\"no-auth-locations\" make kind-e2e-test Note The variable FOCUS defines Ginkgo Focused Specs Valid values are defined in the describe definition of the e2e tests like Default Backend The complete list of tests can be found here Custom docker image \u00b6 In some cases, it can be useful to build a docker image and publish such an image to a private or custom registry location. This can be done setting two environment variables, REGISTRY and TAG export TAG=\"dev\" export REGISTRY=\"$USER\" make build image and then publish such version with docker push $REGISTRY/controller:$TAG","title":"Getting Started"},{"location":"developer-guide/getting-started/#prerequisites","text":"Install Go 1.14 or later. 
Note The project uses Go Modules Install Docker (v19.03.0 or later with experimental feature on) Important The majority of make tasks run as docker containers","title":"Prerequisites"},{"location":"developer-guide/getting-started/#quick-start","text":"Fork the repository Clone the repository to any location in your work station Add a GO111MODULE environment variable with export GO111MODULE=on Run go mod download to install dependencies","title":"Quick Start"},{"location":"developer-guide/getting-started/#local-build","text":"Start a local Kubernetes cluster using kind , build and deploy the ingress controller make dev-env - If you are working on the v1.x.x version of this controller, and you want to create a cluster with kubernetes version 1.22, then please visit the documentation for kind , and look for how to set a custom image for the kind node (image: kindest/node...), in the kind config file.","title":"Local build"},{"location":"developer-guide/getting-started/#testing","text":"Run go unit tests make test Run unit-tests for lua code make lua-test Lua tests are located in the directory rootfs/etc/nginx/lua/test Important Test files must follow the naming convention _test.lua or it will be ignored Run e2e test suite make kind-e2e-test To limit the scope of the tests to execute, we can use the environment variable FOCUS FOCUS=\"no-auth-locations\" make kind-e2e-test Note The variable FOCUS defines Ginkgo Focused Specs Valid values are defined in the describe definition of the e2e tests like Default Backend The complete list of tests can be found here","title":"Testing"},{"location":"developer-guide/getting-started/#custom-docker-image","text":"In some cases, it can be useful to build a docker image and publish such an image to a private or custom registry location. This can be done setting two environment variables, REGISTRY and TAG export TAG=\"dev\" export REGISTRY=\"$USER\" make build image and then publish such version with docker push $REGISTRY/controller:$TAG","title":"Custom docker image"},{"location":"enhancements/","text":"Kubernetes Enhancement Proposals (KEPs) \u00b6 A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project. For this reason, the ingress-nginx project is adopting it. Quick start for the KEP process \u00b6 Follow the process outlined in the KEP template Do I have to use the KEP process? \u00b6 No... but we hope that you will. Over time having a rich set of KEPs in one place will make it easier for people to track what is going on in the community and find a structured historic record. KEPs are only required when the changes are wide ranging and impact most of the project. Why would I want to use the KEP process? \u00b6 Our aim with KEPs is to clearly communicate new efforts to the Kubernetes contributor community. As such, we want to build a well curated set of clear proposals in a common format with useful metadata. Benefits to KEP users (in the limit): Exposure on a kubernetes blessed web site that is findable via web search engines. Cross indexing of KEPs so that users can find connections and the current status of any KEP. A clear process with approvers and reviewers for making decisions. This will lead to more structured decisions that stick as there is a discoverable record around the decisions. 
We are inspired by IETF RFCs, Python PEPs, and Rust RFCs.","title":"Kubernetes Enhancement Proposals (KEPs)"},{"location":"enhancements/#kubernetes-enhancement-proposals-keps","text":"A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project. For this reason, the ingress-nginx project is adopting it.","title":"Kubernetes Enhancement Proposals (KEPs)"},{"location":"enhancements/#quick-start-for-the-kep-process","text":"Follow the process outlined in the KEP template","title":"Quick start for the KEP process"},{"location":"enhancements/#do-i-have-to-use-the-kep-process","text":"No... but we hope that you will. Over time having a rich set of KEPs in one place will make it easier for people to track what is going on in the community and find a structured historic record. KEPs are only required when the changes are wide ranging and impact most of the project.","title":"Do I have to use the KEP process?"},{"location":"enhancements/#why-would-i-want-to-use-the-kep-process","text":"Our aim with KEPs is to clearly communicate new efforts to the Kubernetes contributor community. As such, we want to build a well curated set of clear proposals in a common format with useful metadata. Benefits to KEP users (in the limit): Exposure on a kubernetes blessed web site that is findable via web search engines. Cross indexing of KEPs so that users can find connections and the current status of any KEP. A clear process with approvers and reviewers for making decisions. This will lead to more structured decisions that stick as there is a discoverable record around the decisions. We are inspired by IETF RFCs, Python PEPs, and Rust RFCs.","title":"Why would I want to use the KEP process?"},{"location":"enhancements/20190724-only-dynamic-ssl/","text":"Remove static SSL configuration mode \u00b6 Table of Contents \u00b6 Summary Motivation Goals Non-Goals Proposal Implementation Details/Notes/Constraints Drawbacks Alternatives Summary \u00b6 Since release 0.19.0 is possible to configure SSL certificates without the need of NGINX reloads (thanks to lua) and after release 0.24.0 the default enabled mode is dynamic. Motivation \u00b6 The static configuration implies reloads, something that affects the majority of the users. Goals \u00b6 Deprecation of the flag --enable-dynamic-certificates . Cleanup of the codebase. Non-Goals \u00b6 Features related to certificate authentication are not changed in any way. Proposal \u00b6 Remove static SSL configuration Implementation Details/Notes/Constraints \u00b6 Deprecate the flag Move the directives ssl_certificate and ssl_certificate_key from each server block to the http section. These settings are required to avoid NGINX errors in the logs. 
Remove any action of the flag --enable-dynamic-certificates Drawbacks \u00b6 Alternatives \u00b6 Keep both implementations","title":"Remove static SSL configuration mode"},{"location":"enhancements/20190724-only-dynamic-ssl/#remove-static-ssl-configuration-mode","text":"","title":"Remove static SSL configuration mode"},{"location":"enhancements/20190724-only-dynamic-ssl/#table-of-contents","text":"Summary Motivation Goals Non-Goals Proposal Implementation Details/Notes/Constraints Drawbacks Alternatives","title":"Table of Contents"},{"location":"enhancements/20190724-only-dynamic-ssl/#summary","text":"Since release 0.19.0 is possible to configure SSL certificates without the need of NGINX reloads (thanks to lua) and after release 0.24.0 the default enabled mode is dynamic.","title":"Summary"},{"location":"enhancements/20190724-only-dynamic-ssl/#motivation","text":"The static configuration implies reloads, something that affects the majority of the users.","title":"Motivation"},{"location":"enhancements/20190724-only-dynamic-ssl/#goals","text":"Deprecation of the flag --enable-dynamic-certificates . Cleanup of the codebase.","title":"Goals"},{"location":"enhancements/20190724-only-dynamic-ssl/#non-goals","text":"Features related to certificate authentication are not changed in any way.","title":"Non-Goals"},{"location":"enhancements/20190724-only-dynamic-ssl/#proposal","text":"Remove static SSL configuration","title":"Proposal"},{"location":"enhancements/20190724-only-dynamic-ssl/#implementation-detailsnotesconstraints","text":"Deprecate the flag Move the directives ssl_certificate and ssl_certificate_key from each server block to the http section. These settings are required to avoid NGINX errors in the logs. Remove any action of the flag --enable-dynamic-certificates","title":"Implementation Details/Notes/Constraints"},{"location":"enhancements/20190724-only-dynamic-ssl/#drawbacks","text":"","title":"Drawbacks"},{"location":"enhancements/20190724-only-dynamic-ssl/#alternatives","text":"Keep both implementations","title":"Alternatives"},{"location":"enhancements/20190815-zone-aware-routing/","text":"Availability zone aware routing \u00b6 Table of Contents \u00b6 Availability zone aware routing Table of Contents Summary Motivation Goals Non-Goals Proposal Implementation History Drawbacks [optional] Summary \u00b6 Teach ingress-nginx about availability zones where endpoints are running in. This way ingress-nginx pod will do its best to proxy to zone-local endpoint. Motivation \u00b6 When users run their services across multiple availability zones they usually pay for egress traffic between zones. Providers such as GCP, and Amazon EC2 usually charge extra for this feature. ingress-nginx when picking an endpoint to route request to does not consider whether the endpoint is in a different zone or the same one. That means it's at least equally likely that it will pick an endpoint from another zone and proxy the request to it. In this situation response from the endpoint to the ingress-nginx pod is considered inter-zone traffic and usually costs extra money. At the time of this writing, GCP charges $0.01 per GB of inter-zone egress traffic according to https://cloud.google.com/compute/network-pricing. According to https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/ Amazon also charges the same amount of money as GCP for cross-zone, egress traffic. This can be a lot of money depending on once's traffic. 
By teaching ingress-nginx about zones we can eliminate or at least decrease this cost. Arguably inter-zone network latency should also be better than cross-zone. Goals \u00b6 Given a regional cluster running ingress-nginx, ingress-nginx should do best-effort to pick a zone-local endpoint when proxying This should not impact canary feature ingress-nginx should be able to operate successfully if there are no zonal endpoints Non-Goals \u00b6 This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress-nginx pod(s) in that zone This feature will be relying on https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone, it is not this KEP's goal to support other cases Proposal \u00b6 The idea here is to have the controller part of ingress-nginx (1) detect what zone its current pod is running in and (2) detect the zone for every endpoint it knows about. After that, it will post that data as part of endpoints to Lua land. When picking an endpoint, the Lua balancer will try to pick zone-local endpoint first and if there is no zone-local endpoint then it will fall back to current behavior. Initially, this feature should be optional since it is going to make it harder to reason about the load balancing and not everyone might want that. How does controller know what zone it runs in? We can have the pod spec pass the node name using downward API as an environment variable. Upon startup, the controller can get node details from the API based on the node name. Once the node details are obtained we can extract the zone from the failure-domain.beta.kubernetes.io/zone annotation. Then we can pass that value to Lua land through Nginx configuration when loading lua_ingress.lua module in init_by_lua phase. How do we extract zones for endpoints? We can have the controller watch create and update events on nodes in the entire cluster and based on that keep the map of nodes to zones in the memory. And when we generate endpoints list, we can access node name using .subsets.addresses[i].nodeName and based on that fetch zone from the map in memory and store it as a field on the endpoint. This solution assumes failure-domain.beta.kubernetes.io/zone annotation does not change until the end of the node's life. Otherwise, we have to watch update events as well on the nodes and that'll add even more overhead. Alternatively, we can get the list of nodes only when there's no node in the memory for the given node name. This is probably a better solution because then we would avoid watching for API changes on node resources. We can eagerly fetch all the nodes and build node name to zone mapping on start. From there on, it will sync during endpoint building in the main event loop if there's no existing entry for the node of an endpoint. This means an extra API call in case cluster has expanded. How do we make sure we do our best to choose zone-local endpoint? This will be done on the Lua side. For every backend, we will initialize two balancer instances: (1) with all endpoints (2) with all endpoints corresponding to the current zone for the backend. Then given the request once we choose what backend needs to serve the request, we will first try to use a zonal balancer for that backend. If a zonal balancer does not exist (i.e. there's no zonal endpoint) then we will use a general balancer. 
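As a concrete illustration of the downward API step described above, the controller Pod could receive its node name through an environment variable. This is a minimal sketch under the proposal's assumptions; the variable name NODE_NAME is illustrative and the image reference is only a placeholder.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  containers:
    - name: controller
      image: registry.k8s.io/ingress-nginx/controller:v1.0.4   # placeholder controller image
      env:
        - name: NODE_NAME                  # read on startup to look up the node and its zone
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName     # downward API: name of the node running this pod
```

On startup the controller would fetch that node from the API and read its failure-domain.beta.kubernetes.io/zone value, as the proposal describes.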
In case of zonal outages, we assume that the readiness probe will fail and the controller will see no endpoints for the backend and therefore we will use a general balancer. We can enable the feature using a configmap setting. Doing it this way makes it easier to rollback in case of a problem. Implementation History \u00b6 initial version of KEP is shipped proposal and implementation details are done Drawbacks [optional] \u00b6 More load on the Kubernetes API server.","title":"Availability zone aware routing"},{"location":"enhancements/20190815-zone-aware-routing/#availability-zone-aware-routing","text":"","title":"Availability zone aware routing"},{"location":"enhancements/20190815-zone-aware-routing/#table-of-contents","text":"Availability zone aware routing Table of Contents Summary Motivation Goals Non-Goals Proposal Implementation History Drawbacks [optional]","title":"Table of Contents"},{"location":"enhancements/20190815-zone-aware-routing/#summary","text":"Teach ingress-nginx about availability zones where endpoints are running in. This way ingress-nginx pod will do its best to proxy to zone-local endpoint.","title":"Summary"},{"location":"enhancements/20190815-zone-aware-routing/#motivation","text":"When users run their services across multiple availability zones they usually pay for egress traffic between zones. Providers such as GCP, and Amazon EC2 usually charge extra for this feature. ingress-nginx when picking an endpoint to route request to does not consider whether the endpoint is in a different zone or the same one. That means it's at least equally likely that it will pick an endpoint from another zone and proxy the request to it. In this situation response from the endpoint to the ingress-nginx pod is considered inter-zone traffic and usually costs extra money. At the time of this writing, GCP charges $0.01 per GB of inter-zone egress traffic according to https://cloud.google.com/compute/network-pricing. According to https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/ Amazon also charges the same amount of money as GCP for cross-zone, egress traffic. This can be a lot of money depending on once's traffic. By teaching ingress-nginx about zones we can eliminate or at least decrease this cost. Arguably inter-zone network latency should also be better than cross-zone.","title":"Motivation"},{"location":"enhancements/20190815-zone-aware-routing/#goals","text":"Given a regional cluster running ingress-nginx, ingress-nginx should do best-effort to pick a zone-local endpoint when proxying This should not impact canary feature ingress-nginx should be able to operate successfully if there are no zonal endpoints","title":"Goals"},{"location":"enhancements/20190815-zone-aware-routing/#non-goals","text":"This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress-nginx pod(s) in that zone This feature will be relying on https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone, it is not this KEP's goal to support other cases","title":"Non-Goals"},{"location":"enhancements/20190815-zone-aware-routing/#proposal","text":"The idea here is to have the controller part of ingress-nginx (1) detect what zone its current pod is running in and (2) detect the zone for every endpoint it knows about. After that, it will post that data as part of endpoints to Lua land. 
When picking an endpoint, the Lua balancer will try to pick zone-local endpoint first and if there is no zone-local endpoint then it will fall back to current behavior. Initially, this feature should be optional since it is going to make it harder to reason about the load balancing and not everyone might want that. How does controller know what zone it runs in? We can have the pod spec pass the node name using downward API as an environment variable. Upon startup, the controller can get node details from the API based on the node name. Once the node details are obtained we can extract the zone from the failure-domain.beta.kubernetes.io/zone annotation. Then we can pass that value to Lua land through Nginx configuration when loading lua_ingress.lua module in init_by_lua phase. How do we extract zones for endpoints? We can have the controller watch create and update events on nodes in the entire cluster and based on that keep the map of nodes to zones in the memory. And when we generate endpoints list, we can access node name using .subsets.addresses[i].nodeName and based on that fetch zone from the map in memory and store it as a field on the endpoint. This solution assumes failure-domain.beta.kubernetes.io/zone annotation does not change until the end of the node's life. Otherwise, we have to watch update events as well on the nodes and that'll add even more overhead. Alternatively, we can get the list of nodes only when there's no node in the memory for the given node name. This is probably a better solution because then we would avoid watching for API changes on node resources. We can eagerly fetch all the nodes and build node name to zone mapping on start. From there on, it will sync during endpoint building in the main event loop if there's no existing entry for the node of an endpoint. This means an extra API call in case cluster has expanded. How do we make sure we do our best to choose zone-local endpoint? This will be done on the Lua side. For every backend, we will initialize two balancer instances: (1) with all endpoints (2) with all endpoints corresponding to the current zone for the backend. Then given the request once we choose what backend needs to serve the request, we will first try to use a zonal balancer for that backend. If a zonal balancer does not exist (i.e. there's no zonal endpoint) then we will use a general balancer. In case of zonal outages, we assume that the readiness probe will fail and the controller will see no endpoints for the backend and therefore we will use a general balancer. We can enable the feature using a configmap setting. Doing it this way makes it easier to rollback in case of a problem.","title":"Proposal"},{"location":"enhancements/20190815-zone-aware-routing/#implementation-history","text":"initial version of KEP is shipped proposal and implementation details are done","title":"Implementation History"},{"location":"enhancements/20190815-zone-aware-routing/#drawbacks-optional","text":"More load on the Kubernetes API server.","title":"Drawbacks [optional]"},{"location":"enhancements/YYYYMMDD-kep-template/","text":"Title \u00b6 This is the title of the KEP. Keep it simple and descriptive. A good title can help communicate what the KEP is and should be considered as part of any review. The title should be lowercased and spaces/punctuation should be replaced with - . To get started with this template: Make a copy of this template. 
Create a copy of this template and name it YYYYMMDD-my-title.md , where YYYYMMDD is the date the KEP was first drafted. Fill out the \"overview\" sections. This includes the Summary and Motivation sections. These should be easy if you've preflighted the idea of the KEP in an issue. Create a PR. Assign it to folks that are sponsoring this process. Create an issue When filing an enhancement tracking issue, please ensure to complete all fields in the template. Merge early. Avoid getting hung up on specific details and instead aim to get the goal of the KEP merged quickly. The best way to do this is to just start with the \"Overview\" sections and fill out details incrementally in follow on PRs. View anything marked as a provisional as a working document and subject to change. Aim for single topic PRs to keep discussions focused. If you disagree with what is already in a document, open a new PR with suggested changes. The canonical place for the latest set of instructions (and the likely source of this file) is here . The Metadata section above is intended to support the creation of tooling around the KEP process. This will be a YAML section that is fenced as a code block. See the KEP process for details on each of these items. Table of Contents \u00b6 A table of contents is helpful for quickly jumping to sections of a KEP and for highlighting any additional information provided beyond the standard KEP template. Ensure the TOC is wrapped with |name: nginx| sb sb --> |hello nginx!| sa end subgraph otel otc[\"Otel Collector\"] end subgraph observability tempo[\"Tempo\"] grafana[\"Grafana\"] backend[\"Jaeger\"] zipkin[\"Zipkin\"] end subgraph ingress-nginx ngx[nginx] end subgraph ngx[nginx] ng[nginx] om[OpenTelemetry module] end subgraph Node app otel observability ingress-nginx om --> |otlp-gRPC| otc --> |jaeger| backend otc --> |zipkin| zipkin otc --> |otlp-gRPC| tempo --> grafana sa --> |otlp-gRPC| otc sb --> |otlp-gRPC| otc start --> ng --> sa end To install the example and collectors run: Enable Ingress addon with: opentelemetry : enabled : true image : registry.k8s.io/ingress-nginx/opentelemetry:v20230527@sha256:fd7ec835f31b7b37187238eb4fdad4438806e69f413a203796263131f4f02ed0 containerSecurityContext : allowPrivilegeEscalation : false Enable OpenTelemetry and set the otlp-collector-host: $ echo ' apiVersion : v1 kind : ConfigMap data : enable-opentelemetry : \"true\" opentelemetry-config : \"/etc/nginx/opentelemetry.toml\" opentelemetry-operation-name : \"HTTP $request_method $service_name $uri\" opentelemetry-trust-incoming-span : \"true\" otlp-collector-host : \"otel-coll-collector.otel.svc\" otlp-collector-port : \"4317\" otel-max-queuesize : \"2048\" otel-schedule-delay-millis : \"5000\" otel-max-export-batch-size : \"512\" otel-service-name : \"nginx-proxy\" # Opentelemetry resource name otel-sampler : \"AlwaysOn\" # Also: AlwaysOff, TraceIdRatioBased otel-sampler-ratio : \"1.0\" otel-sampler-parent-based : \"false\" metadata : name : ingress-nginx-controller namespace : ingress-nginx ' | kubectl replace -f - Deploy otel-collector, grafana and Jaeger backend: # add helm charts needed for grafana and OpenTelemetry collector helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts helm repo add grafana https://grafana.github.io/helm-charts helm repo update # deply cert-manager needed for OpenTelemetry collector operator kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.yaml # create observability namespace 
kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/namespace.yaml # install OpenTelemetry collector operator helm upgrade --install otel-collector-operator -n otel --create-namespace open-telemetry/opentelemetry-operator # deploy OpenTelemetry collector kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/collector.yaml # deploy Jaeger all-in-one kubectl apply -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.37.0/jaeger-operator.yaml -n observability kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/jaeger.yaml -n observability # deploy zipkin kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/zipkin.yaml -n observability # deploy tempo and grafana helm upgrade --install tempo grafana/tempo --create-namespace -n observability helm upgrade -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/grafana/grafana-values.yaml --install grafana grafana/grafana --create-namespace -n observability Build and deploy demo app: # build images make images # deploy demo app: make deploy-app Make a few requests to the Service: kubectl port-forward --namespace = ingress-nginx service/ingress-nginx-controller 8090 :80 curl http://esigo.dev:8090/hello/nginx StatusCode : 200 StatusDescription : OK Content : { \"v\" : \"hello nginx!\" } RawContent : HTTP/1.1 200 OK Connection: keep-alive Content-Length: 21 Content-Type: text/plain ; charset = utf-8 Date: Mon, 10 Oct 2022 17 :43:33 GMT { \"v\" : \"hello nginx!\" } Forms : {} Headers : {[ Connection, keep-alive ] , [ Content-Length, 21 ] , [ Content-Type, text/plain ; charset = utf-8 ] , [ Date, Mon, 10 Oct 2022 17 :43:33 GMT ]} Images : {} InputFields : {} Links : {} ParsedHtml : System.__ComObject RawContentLength : 21 View the Grafana UI: kubectl port-forward --namespace = observability service/grafana 3000 :80 In the Grafana interface we can see the details: View the Jaeger UI: kubectl port-forward --namespace = observability service/jaeger-all-in-one-query 16686 :16686 In the Jaeger interface we can see the details: View the Zipkin UI: kubectl port-forward --namespace = observability service/zipkin 9411 :9411 In the Zipkin interface we can see the details: Migration from OpenTracing, Jaeger, Zipkin and Datadog \u00b6 If you are migrating from OpenTracing, Jaeger, Zipkin, or Datadog to OpenTelemetry, you may need to update various annotations and configurations. 
Here are the mappings for common annotations and configurations: Annotations \u00b6 Legacy OpenTelemetry nginx.ingress.kubernetes.io/enable-opentracing nginx.ingress.kubernetes.io/enable-opentelemetry opentracing-trust-incoming-span opentracing-trust-incoming-span Configs \u00b6 Legacy OpenTelemetry opentracing-operation-name opentelemetry-operation-name opentracing-location-operation-name opentelemetry-operation-name opentracing-trust-incoming-span opentelemetry-trust-incoming-span zipkin-collector-port otlp-collector-port zipkin-service-name otel-service-name zipkin-sample-rate otel-sampler-ratio jaeger-collector-port otlp-collector-port jaeger-endpoint otlp-collector-port , otlp-collector-host jaeger-service-name otel-service-name jaeger-propagation-format N/A jaeger-sampler-type otel-sampler jaeger-sampler-param otel-sampler jaeger-sampler-host N/A jaeger-sampler-port N/A jaeger-trace-context-header-name N/A jaeger-debug-header N/A jaeger-baggage-header N/A jaeger-tracer-baggage-header-prefix N/A datadog-collector-port otlp-collector-port datadog-service-name otel-service-name datadog-environment N/A datadog-operation-name-override N/A datadog-priority-sampling otel-sampler datadog-sample-rate otel-sampler-ratio","title":"OpenTelemetry"},{"location":"user-guide/third-party-addons/opentelemetry/#opentelemetry","text":"Enables requests served by NGINX for distributed telemetry via The OpenTelemetry Project. Using the third party module opentelemetry-cpp-contrib/nginx the Ingress-Nginx Controller can configure NGINX to enable OpenTelemetry instrumentation. By default this feature is disabled. Check out this demo showcasing OpenTelemetry in Ingress NGINX. The video provides an overview and practical demonstration of how OpenTelemetry can be utilized in Ingress NGINX for observability and monitoring purposes. Demo: OpenTelemetry in Ingress NGINX.","title":"OpenTelemetry"},{"location":"user-guide/third-party-addons/opentelemetry/#usage","text":"To enable the instrumentation we must enable OpenTelemetry in the configuration ConfigMap: data : enable-opentelemetry : \"true\" To enable or disable instrumentation for a single Ingress, use the enable-opentelemetry annotation: kind : Ingress metadata : annotations : nginx.ingress.kubernetes.io/enable-opentelemetry : \"true\" We must also set the host to use when uploading traces: otlp-collector-host : \"otel-coll-collector.otel.svc\" NOTE: While the option is called otlp-collector-host , you will need to point this to any backend that receives otlp-grpc. Next you will need to deploy a distributed telemetry system which uses OpenTelemetry. opentelemetry-collector , Jaeger Tempo , and zipkin have been tested. Other optional configuration options: # specifies the name to use for the server span opentelemetry-operation-name # sets whether or not to trust incoming telemetry spans opentelemetry-trust-incoming-span # specifies the port to use when uploading traces, Default : 4317 otlp-collector-port # specifies the service name to use for any traces created, Default: nginx otel-service-name # The maximum queue size. After the size is reached data are dropped. otel-max-queuesize # The delay interval in milliseconds between two consecutive exports. otel-schedule-delay-millis # How long the export can run before it is cancelled. otel-schedule-delay-millis # The maximum batch size of every export. It must be smaller or equal to maxQueueSize. 
otel-max-export-batch-size # specifies sample rate for any traces created, Default: 0.01 otel-sampler-ratio # specifies the sampler to be used when sampling traces. # The available samplers are: AlwaysOn, AlwaysOff, TraceIdRatioBased, Default: AlwaysOff otel-sampler # Uses sampler implementation which by default will take a sample if parent Activity is sampled, Default: false otel-sampler-parent-based Note that you can also set whether to trust incoming spans (global default is true) per-location using annotations like the following: kind : Ingress metadata : annotations : nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span : \"true\"","title":"Usage"},{"location":"user-guide/third-party-addons/opentelemetry/#examples","text":"The following examples show how to deploy and test different distributed telemetry systems. These example can be performed using Docker Desktop. In the esigo/nginx-example GitHub repository is an example of a simple hello service: graph TB subgraph Browser start[\"http://esigo.dev/hello/nginx\"] end subgraph app sa[service-a] sb[service-b] sa --> |name: nginx| sb sb --> |hello nginx!| sa end subgraph otel otc[\"Otel Collector\"] end subgraph observability tempo[\"Tempo\"] grafana[\"Grafana\"] backend[\"Jaeger\"] zipkin[\"Zipkin\"] end subgraph ingress-nginx ngx[nginx] end subgraph ngx[nginx] ng[nginx] om[OpenTelemetry module] end subgraph Node app otel observability ingress-nginx om --> |otlp-gRPC| otc --> |jaeger| backend otc --> |zipkin| zipkin otc --> |otlp-gRPC| tempo --> grafana sa --> |otlp-gRPC| otc sb --> |otlp-gRPC| otc start --> ng --> sa end To install the example and collectors run: Enable Ingress addon with: opentelemetry : enabled : true image : registry.k8s.io/ingress-nginx/opentelemetry:v20230527@sha256:fd7ec835f31b7b37187238eb4fdad4438806e69f413a203796263131f4f02ed0 containerSecurityContext : allowPrivilegeEscalation : false Enable OpenTelemetry and set the otlp-collector-host: $ echo ' apiVersion : v1 kind : ConfigMap data : enable-opentelemetry : \"true\" opentelemetry-config : \"/etc/nginx/opentelemetry.toml\" opentelemetry-operation-name : \"HTTP $request_method $service_name $uri\" opentelemetry-trust-incoming-span : \"true\" otlp-collector-host : \"otel-coll-collector.otel.svc\" otlp-collector-port : \"4317\" otel-max-queuesize : \"2048\" otel-schedule-delay-millis : \"5000\" otel-max-export-batch-size : \"512\" otel-service-name : \"nginx-proxy\" # Opentelemetry resource name otel-sampler : \"AlwaysOn\" # Also: AlwaysOff, TraceIdRatioBased otel-sampler-ratio : \"1.0\" otel-sampler-parent-based : \"false\" metadata : name : ingress-nginx-controller namespace : ingress-nginx ' | kubectl replace -f - Deploy otel-collector, grafana and Jaeger backend: # add helm charts needed for grafana and OpenTelemetry collector helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts helm repo add grafana https://grafana.github.io/helm-charts helm repo update # deply cert-manager needed for OpenTelemetry collector operator kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.yaml # create observability namespace kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/namespace.yaml # install OpenTelemetry collector operator helm upgrade --install otel-collector-operator -n otel --create-namespace open-telemetry/opentelemetry-operator # deploy OpenTelemetry collector kubectl apply -f 
https://raw.githubusercontent.com/esigo/nginx-example/main/observability/collector.yaml # deploy Jaeger all-in-one kubectl apply -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.37.0/jaeger-operator.yaml -n observability kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/jaeger.yaml -n observability # deploy zipkin kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/zipkin.yaml -n observability # deploy tempo and grafana helm upgrade --install tempo grafana/tempo --create-namespace -n observability helm upgrade -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/grafana/grafana-values.yaml --install grafana grafana/grafana --create-namespace -n observability Build and deploy demo app: # build images make images # deploy demo app: make deploy-app Make a few requests to the Service: kubectl port-forward --namespace = ingress-nginx service/ingress-nginx-controller 8090 :80 curl http://esigo.dev:8090/hello/nginx StatusCode : 200 StatusDescription : OK Content : { \"v\" : \"hello nginx!\" } RawContent : HTTP/1.1 200 OK Connection: keep-alive Content-Length: 21 Content-Type: text/plain ; charset = utf-8 Date: Mon, 10 Oct 2022 17 :43:33 GMT { \"v\" : \"hello nginx!\" } Forms : {} Headers : {[ Connection, keep-alive ] , [ Content-Length, 21 ] , [ Content-Type, text/plain ; charset = utf-8 ] , [ Date, Mon, 10 Oct 2022 17 :43:33 GMT ]} Images : {} InputFields : {} Links : {} ParsedHtml : System.__ComObject RawContentLength : 21 View the Grafana UI: kubectl port-forward --namespace = observability service/grafana 3000 :80 In the Grafana interface we can see the details: View the Jaeger UI: kubectl port-forward --namespace = observability service/jaeger-all-in-one-query 16686 :16686 In the Jaeger interface we can see the details: View the Zipkin UI: kubectl port-forward --namespace = observability service/zipkin 9411 :9411 In the Zipkin interface we can see the details:","title":"Examples"},{"location":"user-guide/third-party-addons/opentelemetry/#migration-from-opentracing-jaeger-zipkin-and-datadog","text":"If you are migrating from OpenTracing, Jaeger, Zipkin, or Datadog to OpenTelemetry, you may need to update various annotations and configurations. 
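For example, a controller ConfigMap that used the legacy enable-opentracing and zipkin-collector-host keys would typically be replaced by its OpenTelemetry equivalent. This is a hedged sketch only: the collector address is the one used in the example above, and the ConfigMap name and namespace must match your installation.

```
echo '
apiVersion: v1
kind: ConfigMap
data:
  enable-opentelemetry: "true"
  otlp-collector-host: "otel-coll-collector.otel.svc"
  otlp-collector-port: "4317"
  otel-service-name: "nginx"
  otel-sampler: "TraceIdRatioBased"
  otel-sampler-ratio: "1.0"
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
' | kubectl replace -f -
```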
Here are the mappings for common annotations and configurations:","title":"Migration from OpenTracing, Jaeger, Zipkin and Datadog"},{"location":"user-guide/third-party-addons/opentelemetry/#annotations","text":"Legacy OpenTelemetry nginx.ingress.kubernetes.io/enable-opentracing nginx.ingress.kubernetes.io/enable-opentelemetry opentracing-trust-incoming-span opentelemetry-trust-incoming-span","title":"Annotations"},{"location":"user-guide/third-party-addons/opentelemetry/#configs","text":"Legacy OpenTelemetry opentracing-operation-name opentelemetry-operation-name opentracing-location-operation-name opentelemetry-operation-name opentracing-trust-incoming-span opentelemetry-trust-incoming-span zipkin-collector-port otlp-collector-port zipkin-service-name otel-service-name zipkin-sample-rate otel-sampler-ratio jaeger-collector-port otlp-collector-port jaeger-endpoint otlp-collector-port , otlp-collector-host jaeger-service-name otel-service-name jaeger-propagation-format N/A jaeger-sampler-type otel-sampler jaeger-sampler-param otel-sampler jaeger-sampler-host N/A jaeger-sampler-port N/A jaeger-trace-context-header-name N/A jaeger-debug-header N/A jaeger-baggage-header N/A jaeger-tracer-baggage-header-prefix N/A datadog-collector-port otlp-collector-port datadog-service-name otel-service-name datadog-environment N/A datadog-operation-name-override N/A datadog-priority-sampling otel-sampler datadog-sample-rate otel-sampler-ratio","title":"Configs"},{"location":"user-guide/third-party-addons/opentracing/","text":"OpenTracing \u00b6 Enables requests served by NGINX for distributed tracing via The OpenTracing Project. Using the third party module opentracing-contrib/nginx-opentracing the Ingress-Nginx Controller can configure NGINX to enable OpenTracing instrumentation. By default this feature is disabled. Usage \u00b6 To enable the instrumentation we must enable OpenTracing in the configuration ConfigMap: data: enable-opentracing: \"true\" To enable or disable instrumentation for a single Ingress, use the enable-opentracing annotation: kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/enable-opentracing: \"true\" We must also set the host to use when uploading traces: zipkin-collector-host: zipkin.default.svc.cluster.local jaeger-collector-host: jaeger-agent.default.svc.cluster.local datadog-collector-host: datadog-agent.default.svc.cluster.local NOTE: While the option is called jaeger-collector-host , you will need to point this to a jaeger-agent , and not the jaeger-collector component. Alternatively, you can set jaeger-endpoint and specify the full endpoint for uploading traces. This will use TCP and should be used for a collector rather than an agent. Next you will need to deploy a distributed tracing system which uses OpenTracing. Zipkin, Jaeger, and Datadog have been tested.
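For example, assuming a Datadog agent reachable at the address shown above and a controller ConfigMap named ingress-nginx-controller in the kube-system namespace (as in the examples later on this page), enabling tracing is a single ConfigMap change. A sketch:

```
echo '
apiVersion: v1
kind: ConfigMap
data:
  enable-opentracing: "true"
  datadog-collector-host: datadog-agent.default.svc.cluster.local
metadata:
  name: ingress-nginx-controller
  namespace: kube-system
' | kubectl replace -f -
```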
Other optional configuration options: # specifies the name to use for the server span opentracing-operation-name # specifies the name to use for the location span opentracing-location-operation-name # sets whether or not to trust incoming tracing spans opentracing-trust-incoming-span # specifies the port to use when uploading traces, Default: 9411 zipkin-collector-port # specifies the service name to use for any traces created, Default: nginx zipkin-service-name # specifies sample rate for any traces created, Default: 1.0 zipkin-sample-rate # specifies the port to use when uploading traces, Default: 6831 jaeger-collector-port # specifies the endpoint to use when uploading traces to a collector instead of an agent jaeger-endpoint # specifies the service name to use for any traces created, Default: nginx jaeger-service-name # specifies the traceparent/tracestate propagation format jaeger-propagation-format # specifies the sampler to be used when sampling traces. # The available samplers are: const, probabilistic, ratelimiting, remote, Default: const jaeger-sampler-type # specifies the argument to be passed to the sampler constructor, Default: 1 jaeger-sampler-param # Specifies the custom remote sampler host to be passed to the sampler constructor. Must be a valid URL. # Default: http://127.0.0.1 jaeger-sampler-host # Specifies the custom remote sampler port to be passed to the sampler constructor. Must be a number. Default: 5778 jaeger-sampler-port # Specifies the header name used for passing trace context. Must be a string. Default: uber-trace-id jaeger-trace-context-header-name # Specifies the header name used for force sampling. Must be a string. Default: jaeger-debug-id jaeger-debug-header # Specifies the header name used to submit baggage if there is no root span. Must be a string. Default: jaeger-baggage jaeger-baggage-header # Specifies the header prefix used to propagate baggage. Must be a string. Default: uberctx- jaeger-tracer-baggage-header-prefix # specifies the port to use when uploading traces, Default: 8126 datadog-collector-port # specifies the service name to use for any traces created, Default: nginx datadog-service-name # specifies the environment this trace belongs to, Default: prod datadog-environment # specifies the operation name to use for any traces collected, Default: nginx.handle datadog-operation-name-override # Specifies to use client-side sampling for distributed priority sampling and ignore sample rate, Default: true datadog-priority-sampling # specifies sample rate for any traces created, Default: 1.0 datadog-sample-rate All these options (including host) allow environment variables, such as $HOSTNAME or $HOST_IP . In the case of Jaeger, if you have a Jaeger agent running on each machine in your cluster, you can use something like $HOST_IP (which can be 'mounted' with the status.hostIP fieldpath, as described here ) to make sure traces will be sent to the local agent. Note that you can also set whether to trust incoming spans (global default is true) per-location using annotations like the following: kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/opentracing-trust-incoming-span: \"true\" Examples \u00b6 The following examples show how to deploy and test different distributed tracing systems. These examples can be performed using Minikube. Zipkin \u00b6 In the rnburn/zipkin-date-server GitHub repository is an example of a dockerized date service.
To install the example and Zipkin collector run: kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/zipkin.yaml kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/deployment.yaml Also we need to configure the Ingress-NGINX controller ConfigMap with the required values: $ echo ' apiVersion: v1 kind: ConfigMap data: enable-opentracing: \"true\" zipkin-collector-host: zipkin.default.svc.cluster.local metadata: name: ingress-nginx-controller namespace: kube-system ' | kubectl replace -f - In the Zipkin interface we can see the details: Jaeger \u00b6 Enable Ingress addon in Minikube: $ minikube addons enable ingress Add Minikube IP to /etc/hosts: $ echo \"$(minikube ip) example.com\" | sudo tee -a /etc/hosts Apply a basic Service and Ingress Resource: # Create Echoheaders Deployment $ kubectl run echoheaders --image=registry.k8s.io/echoserver:1.4 --replicas=1 --port=8080 # Expose as a Cluster-IP $ kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x # Apply the Ingress Resource $ echo ' apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: echo-ingress spec: ingressClassName: nginx rules: - host: example.com http: paths: - path: /echo pathType: Prefix backend: service: name: echoheaders-x port: number: 80 ' | kubectl apply -f - Enable OpenTracing and set the jaeger-collector-host: $ echo ' apiVersion: v1 kind: ConfigMap data: enable-opentracing: \"true\" jaeger-collector-host: jaeger-agent.default.svc.cluster.local metadata: name: ingress-nginx-controller namespace: kube-system ' | kubectl replace -f - Apply the Jaeger All-In-One Template: $ kubectl apply -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/all-in-one/jaeger-all-in-one-template.yml Make a few requests to the Service: $ curl example.com/echo -d \"meow\" CLIENT VALUES: client_address=172.17.0.5 command=POST real path=/echo query=nil request_version=1.1 request_uri=http://example.com:8080/echo SERVER VALUES: server_version=nginx: 1.10.0 - lua: 10001 HEADERS RECEIVED: accept=*/* connection=close content-length=4 content-type=application/x-www-form-urlencoded host=example.com user-agent=curl/7.54.0 x-forwarded-for=192.168.99.1 x-forwarded-host=example.com x-forwarded-port=80 x-forwarded-proto=http x-original-uri=/echo x-real-ip=192.168.99.1 x-scheme=http BODY: meow View the Jaeger UI: $ minikube service jaeger-query --url http://192.168.99.100:30183 In the Jaeger interface we can see the details:","title":"OpenTracing"},{"location":"user-guide/third-party-addons/opentracing/#opentracing","text":"Enables requests served by NGINX for distributed tracing via The OpenTracing Project. Using the third party module opentracing-contrib/nginx-opentracing the Ingress-Nginx Controller can configure NGINX to enable OpenTracing instrumentation. 
By default this feature is disabled.","title":"OpenTracing"},{"location":"user-guide/third-party-addons/opentracing/#usage","text":"To enable the instrumentation we must enable OpenTracing in the configuration ConfigMap: data: enable-opentracing: \"true\" To enable or disable instrumentation for a single Ingress, use the enable-opentracing annotation: kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/enable-opentracing: \"true\" We must also set the host to use when uploading traces: zipkin-collector-host: zipkin.default.svc.cluster.local jaeger-collector-host: jaeger-agent.default.svc.cluster.local datadog-collector-host: datadog-agent.default.svc.cluster.local NOTE: While the option is called jaeger-collector-host , you will need to point this to a jaeger-agent , and not the jaeger-collector component. Alternatively, you can set jaeger-endpoint and specify the full endpoint for uploading traces. This will use TCP and should be used for a collector rather than an agent. Next you will need to deploy a distributed tracing system which uses OpenTracing. Zipkin and Jaeger and Datadog have been tested. Other optional configuration options: # specifies the name to use for the server span opentracing-operation-name # specifies specifies the name to use for the location span opentracing-location-operation-name # sets whether or not to trust incoming tracing spans opentracing-trust-incoming-span # specifies the port to use when uploading traces, Default: 9411 zipkin-collector-port # specifies the service name to use for any traces created, Default: nginx zipkin-service-name # specifies sample rate for any traces created, Default: 1.0 zipkin-sample-rate # specifies the port to use when uploading traces, Default: 6831 jaeger-collector-port # specifies the endpoint to use when uploading traces to a collector instead of an agent jaeger-endpoint # specifies the service name to use for any traces created, Default: nginx jaeger-service-name # specifies the traceparent/tracestate propagation format jaeger-propagation-format # specifies the sampler to be used when sampling traces. # The available samplers are: const, probabilistic, ratelimiting, remote, Default: const jaeger-sampler-type # specifies the argument to be passed to the sampler constructor, Default: 1 jaeger-sampler-param # Specifies the custom remote sampler host to be passed to the sampler constructor. Must be a valid URL. # Default: http://127.0.0.1 jaeger-sampler-host # Specifies the custom remote sampler port to be passed to the sampler constructor. Must be a number. Default: 5778 jaeger-sampler-port # Specifies the header name used for passing trace context. Must be a string. Default: uber-trace-id jaeger-trace-context-header-name # Specifies the header name used for force sampling. Must be a string. Default: jaeger-debug-id jaeger-debug-header # Specifies the header name used to submit baggage if there is no root span. Must be a string. Default: jaeger-baggage jaeger-baggage-header # Specifies the header prefix used to propagate baggage. Must be a string. 
Default: uberctx- jaeger-tracer-baggage-header-prefix # specifies the port to use when uploading traces, Default 8126 datadog-collector-port # specifies the service name to use for any traces created, Default: nginx datadog-service-name # specifies the environment this trace belongs to, Default: prod datadog-environment # specifies the operation name to use for any traces collected, Default: nginx.handle datadog-operation-name-override # Specifies to use client-side sampling for distributed priority sampling and ignore sample rate, Default: true datadog-priority-sampling # specifies sample rate for any traces created, Default: 1.0 datadog-sample-rate All these options (including host) allow environment variables, such as $HOSTNAME or $HOST_IP . In the case of Jaeger, if you have a Jaeger agent running on each machine in your cluster, you can use something like $HOST_IP (which can be 'mounted' with the status.hostIP fieldpath, as described here ) to make sure traces will be sent to the local agent. Note that you can also set whether to trust incoming spans (global default is true) per-location using annotations like the following: kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/opentracing-trust-incoming-span: \"true\"","title":"Usage"},{"location":"user-guide/third-party-addons/opentracing/#examples","text":"The following examples show how to deploy and test different distributed tracing systems. These example can be performed using Minikube.","title":"Examples"},{"location":"user-guide/third-party-addons/opentracing/#zipkin","text":"In the rnburn/zipkin-date-server GitHub repository is an example of a dockerized date service. To install the example and Zipkin collector run: kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/zipkin.yaml kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/deployment.yaml Also we need to configure the Ingress-NGINX controller ConfigMap with the required values: $ echo ' apiVersion: v1 kind: ConfigMap data: enable-opentracing: \"true\" zipkin-collector-host: zipkin.default.svc.cluster.local metadata: name: ingress-nginx-controller namespace: kube-system ' | kubectl replace -f - In the Zipkin interface we can see the details:","title":"Zipkin"},{"location":"user-guide/third-party-addons/opentracing/#jaeger","text":"Enable Ingress addon in Minikube: $ minikube addons enable ingress Add Minikube IP to /etc/hosts: $ echo \"$(minikube ip) example.com\" | sudo tee -a /etc/hosts Apply a basic Service and Ingress Resource: # Create Echoheaders Deployment $ kubectl run echoheaders --image=registry.k8s.io/echoserver:1.4 --replicas=1 --port=8080 # Expose as a Cluster-IP $ kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x # Apply the Ingress Resource $ echo ' apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: echo-ingress spec: ingressClassName: nginx rules: - host: example.com http: paths: - path: /echo pathType: Prefix backend: service: name: echoheaders-x port: number: 80 ' | kubectl apply -f - Enable OpenTracing and set the jaeger-collector-host: $ echo ' apiVersion: v1 kind: ConfigMap data: enable-opentracing: \"true\" jaeger-collector-host: jaeger-agent.default.svc.cluster.local metadata: name: ingress-nginx-controller namespace: kube-system ' | kubectl replace -f - Apply the Jaeger All-In-One Template: $ kubectl apply -f 
https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/all-in-one/jaeger-all-in-one-template.yml Make a few requests to the Service: $ curl example.com/echo -d \"meow\" CLIENT VALUES: client_address=172.17.0.5 command=POST real path=/echo query=nil request_version=1.1 request_uri=http://example.com:8080/echo SERVER VALUES: server_version=nginx: 1.10.0 - lua: 10001 HEADERS RECEIVED: accept=*/* connection=close content-length=4 content-type=application/x-www-form-urlencoded host=example.com user-agent=curl/7.54.0 x-forwarded-for=192.168.99.1 x-forwarded-host=example.com x-forwarded-port=80 x-forwarded-proto=http x-original-uri=/echo x-real-ip=192.168.99.1 x-scheme=http BODY: meow View the Jaeger UI: $ minikube service jaeger-query --url http://192.168.99.100:30183 In the Jaeger interface we can see the details:","title":"Jaeger"}]} \ No newline at end of file +{"config":{"lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Overview \u00b6 This is the documentation for the Ingress NGINX Controller. It is built around the Kubernetes Ingress resource , using a ConfigMap to store the controller configuration. You can learn more about using Ingress in the official Kubernetes documentation . Getting Started \u00b6 See Deployment for a whirlwind tour that will get you started.","title":"Welcome"},{"location":"#overview","text":"This is the documentation for the Ingress NGINX Controller. It is built around the Kubernetes Ingress resource , using a ConfigMap to store the controller configuration. You can learn more about using Ingress in the official Kubernetes documentation .","title":"Overview"},{"location":"#getting-started","text":"See Deployment for a whirlwind tour that will get you started.","title":"Getting Started"},{"location":"e2e-tests/","text":"e2e test suite for Ingress NGINX Controller \u00b6 \u00b6 \u00b6 \u00b6 should set backend protocol to https:// and use proxy_pass should set backend protocol to $scheme:// and use proxy_pass should set backend protocol to grpc:// and use grpc_pass should set backend protocol to grpcs:// and use grpc_pass should set backend protocol to '' and use fastcgi_pass \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 <<<<<<< Updated upstream - should ignore Ingress of namespace without label foo=bar and accept those of namespace with label foo=bar ======= - Stashed changes \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6 \u00b6","title":"E2e tests"},{"location":"e2e-tests/#e2e-test-suite-for-ingress-nginx-controller","text":"","title":"e2e test suite for Ingress NGINX 
Controller"},{"location":"e2e-tests/#_1","text":"","title":""},{"location":"e2e-tests/#_2","text":"","title":""},{"location":"e2e-tests/#_3","text":"should set backend protocol to https:// and use proxy_pass should set backend protocol to $scheme:// and use proxy_pass should set backend protocol to grpc:// and use grpc_pass should set backend protocol to grpcs:// and use grpc_pass","title":""},{"location":"e2e-tests/#should-set-backend-protocol-to-and-use-fastcgi_pass","text":"","title":"should set backend protocol to '' and use fastcgi_pass"},{"location":"e2e-tests/#_4","text":"","title":""},{"location":"e2e-tests/#_5","text":"","title":""},{"location":"e2e-tests/#_6","text":"","title":""},{"location":"e2e-tests/#_7","text":"","title":""},{"location":"e2e-tests/#_8","text":"","title":""},{"location":"e2e-tests/#_9","text":"","title":""},{"location":"e2e-tests/#_10","text":"","title":""},{"location":"e2e-tests/#_11","text":"","title":""},{"location":"e2e-tests/#_12","text":"","title":""},{"location":"e2e-tests/#_13","text":"","title":""},{"location":"e2e-tests/#_14","text":"","title":""},{"location":"e2e-tests/#_15","text":"","title":""},{"location":"e2e-tests/#_16","text":"","title":""},{"location":"e2e-tests/#_17","text":"","title":""},{"location":"e2e-tests/#_18","text":"","title":""},{"location":"e2e-tests/#_19","text":"","title":""},{"location":"e2e-tests/#_20","text":"","title":""},{"location":"e2e-tests/#_21","text":"","title":""},{"location":"e2e-tests/#_22","text":"","title":""},{"location":"e2e-tests/#_23","text":"","title":""},{"location":"e2e-tests/#_24","text":"","title":""},{"location":"e2e-tests/#_25","text":"","title":""},{"location":"e2e-tests/#_26","text":"","title":""},{"location":"e2e-tests/#_27","text":"","title":""},{"location":"e2e-tests/#_28","text":"","title":""},{"location":"e2e-tests/#_29","text":"","title":""},{"location":"e2e-tests/#_30","text":"","title":""},{"location":"e2e-tests/#_31","text":"","title":""},{"location":"e2e-tests/#_32","text":"","title":""},{"location":"e2e-tests/#_33","text":"","title":""},{"location":"e2e-tests/#_34","text":"","title":""},{"location":"e2e-tests/#_35","text":"","title":""},{"location":"e2e-tests/#_36","text":"","title":""},{"location":"e2e-tests/#_37","text":"","title":""},{"location":"e2e-tests/#_38","text":"","title":""},{"location":"e2e-tests/#_39","text":"","title":""},{"location":"e2e-tests/#_40","text":"","title":""},{"location":"e2e-tests/#_41","text":"","title":""},{"location":"e2e-tests/#_42","text":"","title":""},{"location":"e2e-tests/#_43","text":"","title":""},{"location":"e2e-tests/#_44","text":"","title":""},{"location":"e2e-tests/#_45","text":"","title":""},{"location":"e2e-tests/#_46","text":"","title":""},{"location":"e2e-tests/#_47","text":"","title":""},{"location":"e2e-tests/#_48","text":"","title":""},{"location":"e2e-tests/#_49","text":"","title":""},{"location":"e2e-tests/#_50","text":"","title":""},{"location":"e2e-tests/#_51","text":"","title":""},{"location":"e2e-tests/#_52","text":"","title":""},{"location":"e2e-tests/#_53","text":"","title":""},{"location":"e2e-tests/#_54","text":"","title":""},{"location":"e2e-tests/#_55","text":"","title":""},{"location":"e2e-tests/#_56","text":"","title":""},{"location":"e2e-tests/#_57","text":"","title":""},{"location":"e2e-tests/#_58","text":"","title":""},{"location":"e2e-tests/#_59","text":"","title":""},{"location":"e2e-tests/#_60","text":"","title":""},{"location":"e2e-tests/#_61","text":"","title":""},{"location":"e2e-tests/#_62","text":"","title":""},{"
location":"e2e-tests/#_63","text":"","title":""},{"location":"e2e-tests/#_64","text":"","title":""},{"location":"e2e-tests/#_65","text":"","title":""},{"location":"e2e-tests/#_66","text":"","title":""},{"location":"e2e-tests/#_67","text":"","title":""},{"location":"e2e-tests/#_68","text":"","title":""},{"location":"e2e-tests/#_69","text":"","title":""},{"location":"e2e-tests/#_70","text":"","title":""},{"location":"e2e-tests/#_71","text":"","title":""},{"location":"e2e-tests/#_72","text":"","title":""},{"location":"e2e-tests/#_73","text":"","title":""},{"location":"e2e-tests/#_74","text":"","title":""},{"location":"e2e-tests/#_75","text":"","title":""},{"location":"e2e-tests/#_76","text":"","title":""},{"location":"e2e-tests/#_77","text":"","title":""},{"location":"e2e-tests/#_78","text":"","title":""},{"location":"e2e-tests/#_79","text":"","title":""},{"location":"e2e-tests/#_80","text":"","title":""},{"location":"e2e-tests/#_81","text":"","title":""},{"location":"e2e-tests/#_82","text":"","title":""},{"location":"e2e-tests/#_83","text":"","title":""},{"location":"e2e-tests/#_84","text":"","title":""},{"location":"e2e-tests/#_85","text":"","title":""},{"location":"e2e-tests/#_86","text":"","title":""},{"location":"e2e-tests/#_87","text":"","title":""},{"location":"e2e-tests/#_88","text":"","title":""},{"location":"e2e-tests/#_89","text":"","title":""},{"location":"e2e-tests/#_90","text":"","title":""},{"location":"e2e-tests/#_91","text":"","title":""},{"location":"e2e-tests/#_92","text":"","title":""},{"location":"e2e-tests/#_93","text":"","title":""},{"location":"e2e-tests/#_94","text":"","title":""},{"location":"e2e-tests/#_95","text":"","title":""},{"location":"e2e-tests/#_96","text":"","title":""},{"location":"e2e-tests/#_97","text":"","title":""},{"location":"e2e-tests/#_98","text":"","title":""},{"location":"e2e-tests/#_99","text":"","title":""},{"location":"e2e-tests/#_100","text":"","title":""},{"location":"e2e-tests/#_101","text":"","title":""},{"location":"e2e-tests/#_102","text":"","title":""},{"location":"e2e-tests/#_103","text":"","title":""},{"location":"e2e-tests/#_104","text":"","title":""},{"location":"e2e-tests/#_105","text":"","title":""},{"location":"e2e-tests/#_106","text":"","title":""},{"location":"e2e-tests/#_107","text":"","title":""},{"location":"e2e-tests/#_108","text":"","title":""},{"location":"e2e-tests/#_109","text":"","title":""},{"location":"e2e-tests/#_110","text":"","title":""},{"location":"e2e-tests/#_111","text":"","title":""},{"location":"e2e-tests/#_112","text":"","title":""},{"location":"e2e-tests/#_113","text":"","title":""},{"location":"e2e-tests/#_114","text":"","title":""},{"location":"e2e-tests/#_115","text":"","title":""},{"location":"e2e-tests/#_116","text":"","title":""},{"location":"e2e-tests/#_117","text":"","title":""},{"location":"e2e-tests/#_118","text":"<<<<<<< Updated upstream - should ignore Ingress of namespace without label foo=bar and accept those of namespace with label foo=bar ======= - Stashed 
changes","title":""},{"location":"e2e-tests/#_119","text":"","title":""},{"location":"e2e-tests/#_120","text":"","title":""},{"location":"e2e-tests/#_121","text":"","title":""},{"location":"e2e-tests/#_122","text":"","title":""},{"location":"e2e-tests/#_123","text":"","title":""},{"location":"e2e-tests/#_124","text":"","title":""},{"location":"e2e-tests/#_125","text":"","title":""},{"location":"e2e-tests/#_126","text":"","title":""},{"location":"e2e-tests/#_127","text":"","title":""},{"location":"e2e-tests/#_128","text":"","title":""},{"location":"e2e-tests/#_129","text":"","title":""},{"location":"e2e-tests/#_130","text":"","title":""},{"location":"e2e-tests/#_131","text":"","title":""},{"location":"e2e-tests/#_132","text":"","title":""},{"location":"e2e-tests/#_133","text":"","title":""},{"location":"e2e-tests/#_134","text":"","title":""},{"location":"e2e-tests/#_135","text":"","title":""},{"location":"e2e-tests/#_136","text":"","title":""},{"location":"e2e-tests/#_137","text":"","title":""},{"location":"e2e-tests/#_138","text":"","title":""},{"location":"e2e-tests/#_139","text":"","title":""},{"location":"e2e-tests/#_140","text":"","title":""},{"location":"e2e-tests/#_141","text":"","title":""},{"location":"e2e-tests/#_142","text":"","title":""},{"location":"e2e-tests/#_143","text":"","title":""},{"location":"e2e-tests/#_144","text":"","title":""},{"location":"e2e-tests/#_145","text":"","title":""},{"location":"e2e-tests/#_146","text":"","title":""},{"location":"e2e-tests/#_147","text":"","title":""},{"location":"faq/","text":"FAQ \u00b6 Retaining Client IPAddress \u00b6 Please read Retain Client IPAddress Guide here . Kubernetes v1.22 Migration \u00b6 If you are using Ingress objects in your cluster (running Kubernetes older than v1.22), and you plan to upgrade your Kubernetes version to K8S 1.22 or above, then please read the migration guide here . Validation Of path \u00b6 For improving security and also following desired standards on Kubernetes API spec, the next release, scheduled for v1.8.0, will include a new & optional feature of validating the value for the key ingress.spec.rules.http.paths.path . This behavior will be disabled by default on the 1.8.0 release and enabled by default on the next breaking change release, set for 2.0.0. When \" ingress.spec.rules.http.pathType=Exact \" or \" pathType=Prefix \", this validation will limit the characters accepted on the field \" ingress.spec.rules.http.paths.path \", to \" alphanumeric characters \", and \"/,\" \"_,\" \"-.\" Also, in this case, the path should start with \"/.\" When the ingress resource path contains other characters (like on rewrite configurations), the pathType value should be \" ImplementationSpecific \". API Spec on pathType is documented here When this option is enabled, the validation will happen on the Admission Webhook. So if any new ingress object contains characters other than \" alphanumeric characters \", and \"/,\" \"_,\" \"-.\" , in the path field, but is not using pathType value as ImplementationSpecific , then the ingress object will be denied admission. The cluster admin should establish validation rules using mechanisms like \" Open Policy Agent \", to validate that only authorized users can use ImplementationSpecific pathType and that only the authorized characters can be used. 
The configmap value is here A complete example of an Openpolicyagent gatekeeper rule is available here If you have any issues or concerns, please do one of the following: Open a GitHub issue Comment in our Dev Slack Channel Open a thread in our Google Group ingress-nginx-dev@kubernetes.io","title":"FAQ"},{"location":"faq/#faq","text":"","title":"FAQ"},{"location":"faq/#retaining-client-ipaddress","text":"Please read Retain Client IPAddress Guide here .","title":"Retaining Client IPAddress"},{"location":"faq/#kubernetes-v122-migration","text":"If you are using Ingress objects in your cluster (running Kubernetes older than v1.22), and you plan to upgrade your Kubernetes version to K8S 1.22 or above, then please read the migration guide here .","title":"Kubernetes v1.22 Migration"},{"location":"faq/#validation-of-path","text":"For improving security and also following desired standards on Kubernetes API spec, the next release, scheduled for v1.8.0, will include a new & optional feature of validating the value for the key ingress.spec.rules.http.paths.path . This behavior will be disabled by default on the 1.8.0 release and enabled by default on the next breaking change release, set for 2.0.0. When \" ingress.spec.rules.http.pathType=Exact \" or \" pathType=Prefix \", this validation will limit the characters accepted on the field \" ingress.spec.rules.http.paths.path \", to \" alphanumeric characters \", and \"/,\" \"_,\" \"-.\" Also, in this case, the path should start with \"/.\" When the ingress resource path contains other characters (like on rewrite configurations), the pathType value should be \" ImplementationSpecific \". API Spec on pathType is documented here When this option is enabled, the validation will happen on the Admission Webhook. So if any new ingress object contains characters other than \" alphanumeric characters \", and \"/,\" \"_,\" \"-.\" , in the path field, but is not using pathType value as ImplementationSpecific , then the ingress object will be denied admission. The cluster admin should establish validation rules using mechanisms like \" Open Policy Agent \", to validate that only authorized users can use ImplementationSpecific pathType and that only the authorized characters can be used. The configmap value is here A complete example of an Openpolicyagent gatekeeper rule is available here If you have any issues or concerns, please do one of the following: Open a GitHub issue Comment in our Dev Slack Channel Open a thread in our Google Group ingress-nginx-dev@kubernetes.io","title":"Validation Of path"},{"location":"how-it-works/","text":"How it works \u00b6 The objective of this document is to explain how the Ingress-NGINX controller works, in particular how the NGINX model is built and why we need one. NGINX configuration \u00b6 The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app) . We use lua-nginx-module to achieve this. Check below to learn more about how it's done. NGINX model \u00b6 Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. 
To this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps to generate a point in time configuration file that reflects the state of the cluster. To get this object from the cluster, we use Kubernetes Informers , in particular, FilteredSharedInformer . These informers allow reacting to change in using callbacks to individual changes when a new object is added, modified or removed. Unfortunately, there is no way to know if a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of cluster and compare it to the current model. If the new model equals to the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so we then send the new list of Endpoints to a Lua handler running inside Nginx using HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between running and new model is about more than just Endpoints we create a new NGINX configuration based on the new model, replace the current model and trigger a reload. One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions. The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template. Building the NGINX model \u00b6 Building a model is an expensive operation, for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to not lose changes and remove the use of sync.Mutex to force a single execution of the sync loop and additionally it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller and one of the reasons for the work queue . Operations to build the model: Order Ingress rules by CreationTimestamp field, i.e., old rules first. If the same path for the same host is defined in more than one Ingress, the oldest rule wins. If more than one Ingress contains a TLS section for the same host, the oldest rule wins. If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins. Create a list of NGINX Servers (per hostname) Create a list of NGINX Upstreams If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. Annotations are applied to all the paths in the Ingress. Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses. When a reload is required \u00b6 The next list describes the scenarios when a reload is required: New Ingress Resource Created. TLS section is added to existing Ingress. Change in Ingress annotations that impacts more than just upstream configuration. For instance load-balance annotation does not require a reload. A path is added/removed from an Ingress. An Ingress, Service, Secret is removed. Some missing referenced object from the Ingress is available, like a Service or Secret. A Secret is updated. 
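As a concrete illustration of the ordering and merge rules described above (hypothetical names, a sketch only), two Ingresses that share a host simply contribute locations to the same NGINX server block, and the older Ingress wins any conflicting annotation or TLS definition:

```
echo '
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-api          # created first: wins on conflicting definitions
spec:
  ingressClassName: nginx
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shop-web          # created later: merged into the same server block
spec:
  ingressClassName: nginx
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 80
' | kubectl apply -f -
```

Adding or removing one of these paths is a reload-triggering change from the list above, whereas a change in the Endpoints behind the api or web Services is not, which is what the next section is about.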
Avoiding reloads \u00b6 In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, basically, new changes do not replace worker processes. Avoiding reloads on Endpoints changes \u00b6 On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request Lua code running in balancer_by_lua context detects what endpoints it should choose upstream peer from and applies the configured load balancing algorithm to choose the peer. Then Nginx takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this includes annotation changes that affects only upstream configuration in Nginx as well. In a relatively big cluster with frequently deploying apps this feature saves significant number of Nginx reloads which can otherwise affect response latency, load balancing quality (after every reload Nginx resets the state of load balancing) and so on. Avoiding outage from wrong configuration \u00b6 Because the ingress controller works using the synchronization loop pattern , it is applying the configuration for all matching objects. In case some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, does not reload and hence no more ingresses will be taken into account. To prevent this situation to happen, the Ingress-Nginx Controller optionally exposes a validating admission webhook server to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress objects to the list of ingresses, generates the configuration and calls nginx to ensure the configuration has no syntax errors.","title":"How it works"},{"location":"how-it-works/#how-it-works","text":"The objective of this document is to explain how the Ingress-NGINX controller works, in particular how the NGINX model is built and why we need one.","title":"How it works"},{"location":"how-it-works/#nginx-configuration","text":"The goal of this Ingress controller is the assembly of a configuration file (nginx.conf). The main implication of this requirement is the need to reload NGINX after any change in the configuration file. Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app) . We use lua-nginx-module to achieve this. Check below to learn more about how it's done.","title":"NGINX configuration"},{"location":"how-it-works/#nginx-model","text":"Usually, a Kubernetes Controller utilizes the synchronization loop pattern to check if the desired state in the controller is updated or a change is required. To this purpose, we need to build a model using different objects from the cluster, in particular (in no special order) Ingresses, Services, Endpoints, Secrets, and Configmaps to generate a point in time configuration file that reflects the state of the cluster. 
To get this object from the cluster, we use Kubernetes Informers , in particular, FilteredSharedInformer . These informers allow reacting to change in using callbacks to individual changes when a new object is added, modified or removed. Unfortunately, there is no way to know if a particular change is going to affect the final configuration file. Therefore on every change, we have to rebuild a new model from scratch based on the state of cluster and compare it to the current model. If the new model equals to the current one, then we avoid generating a new NGINX configuration and triggering a reload. Otherwise, we check if the difference is only about Endpoints. If so we then send the new list of Endpoints to a Lua handler running inside Nginx using HTTP POST request and again avoid generating a new NGINX configuration and triggering a reload. If the difference between running and new model is about more than just Endpoints we create a new NGINX configuration based on the new model, replace the current model and trigger a reload. One of the uses of the model is to avoid unnecessary reloads when there's no change in the state and to detect conflicts in definitions. The final representation of the NGINX configuration is generated from a Go template using the new model as input for the variables required by the template.","title":"NGINX model"},{"location":"how-it-works/#building-the-nginx-model","text":"Building a model is an expensive operation, for this reason, the use of the synchronization loop is a must. By using a work queue it is possible to not lose changes and remove the use of sync.Mutex to force a single execution of the sync loop and additionally it is possible to create a time window between the start and end of the sync loop that allows us to discard unnecessary updates. It is important to understand that any change in the cluster could generate events that the informer will send to the controller and one of the reasons for the work queue . Operations to build the model: Order Ingress rules by CreationTimestamp field, i.e., old rules first. If the same path for the same host is defined in more than one Ingress, the oldest rule wins. If more than one Ingress contains a TLS section for the same host, the oldest rule wins. If multiple Ingresses define an annotation that affects the configuration of the Server block, the oldest rule wins. Create a list of NGINX Servers (per hostname) Create a list of NGINX Upstreams If multiple Ingresses define different paths for the same host, the ingress controller will merge the definitions. Annotations are applied to all the paths in the Ingress. Multiple Ingresses can define different annotations. These definitions are not shared between Ingresses.","title":"Building the NGINX model"},{"location":"how-it-works/#when-a-reload-is-required","text":"The next list describes the scenarios when a reload is required: New Ingress Resource Created. TLS section is added to existing Ingress. Change in Ingress annotations that impacts more than just upstream configuration. For instance load-balance annotation does not require a reload. A path is added/removed from an Ingress. An Ingress, Service, Secret is removed. Some missing referenced object from the Ingress is available, like a Service or Secret. A Secret is updated.","title":"When a reload is required"},{"location":"how-it-works/#avoiding-reloads","text":"In some cases, it is possible to avoid reloads, in particular when there is a change in the endpoints, i.e., a pod is started or replaced. 
It is out of the scope of this Ingress controller to remove reloads completely. This would require an incredible amount of work and at some point makes no sense. This can change only if NGINX changes the way new configurations are read, basically, new changes do not replace worker processes.","title":"Avoiding reloads"},{"location":"how-it-works/#avoiding-reloads-on-endpoints-changes","text":"On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request Lua code running in balancer_by_lua context detects what endpoints it should choose upstream peer from and applies the configured load balancing algorithm to choose the peer. Then Nginx takes care of the rest. This way we avoid reloading Nginx on endpoint changes. Note that this includes annotation changes that affects only upstream configuration in Nginx as well. In a relatively big cluster with frequently deploying apps this feature saves significant number of Nginx reloads which can otherwise affect response latency, load balancing quality (after every reload Nginx resets the state of load balancing) and so on.","title":"Avoiding reloads on Endpoints changes"},{"location":"how-it-works/#avoiding-outage-from-wrong-configuration","text":"Because the ingress controller works using the synchronization loop pattern , it is applying the configuration for all matching objects. In case some Ingress objects have a broken configuration, for example a syntax error in the nginx.ingress.kubernetes.io/configuration-snippet annotation, the generated configuration becomes invalid, does not reload and hence no more ingresses will be taken into account. To prevent this situation to happen, the Ingress-Nginx Controller optionally exposes a validating admission webhook server to ensure the validity of incoming ingress objects. This webhook appends the incoming ingress objects to the list of ingresses, generates the configuration and calls nginx to ensure the configuration has no syntax errors.","title":"Avoiding outage from wrong configuration"},{"location":"kubectl-plugin/","text":"The ingress-nginx kubectl plugin \u00b6 Installation \u00b6 Install krew , then run kubectl krew install ingress-nginx to install the plugin. Then run kubectl ingress-nginx --help to make sure the plugin is properly installed and to get a list of commands: kubectl ingress-nginx --help A kubectl plugin for inspecting your ingress-nginx deployments Usage: ingress-nginx [command] Available Commands: backends Inspect the dynamic backend information of an ingress-nginx instance certs Output the certificate data stored in an ingress-nginx pod conf Inspect the generated nginx.conf exec Execute a command inside an ingress-nginx pod general Inspect the other dynamic ingress-nginx information help Help about any command info Show information about the ingress-nginx service ingresses Provide a short summary of all of the ingress definitions lint Inspect kubernetes resources for possible issues logs Get the kubernetes logs for an ingress-nginx pod ssh ssh into a running ingress-nginx pod Flags: --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. 
--cache-dir string Default HTTP cache directory (default \"/Users/alexkursell/.kube/http-cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use -h, --help help for ingress-nginx --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use Use \"ingress-nginx [command] --help\" for more information about a command. Common Flags \u00b6 Every subcommand supports the basic kubectl configuration flags like --namespace , --context , --client-key and so on. Subcommands that act on a particular ingress-nginx pod ( backends , certs , conf , exec , general , logs , ssh ), support the --deployment , --pod , and --container flags to select either a pod from a deployment with the given name, or a pod with the given name (and the given container name). The --deployment flag defaults to ingress-nginx-controller , and the --container flag defaults to controller . Subcommands that inspect resources ( ingresses , lint ) support the --all-namespaces flag, which causes them to inspect resources in every namespace. Subcommands \u00b6 Note that backends , general , certs , and conf require ingress-nginx version 0.23.0 or higher. backends \u00b6 Run kubectl ingress-nginx backends to get a JSON array of the backends that an ingress-nginx controller currently knows about: $ kubectl ingress-nginx backends -n ingress-nginx [ { \"name\": \"default-apple-service-5678\", \"service\": { \"metadata\": { \"creationTimestamp\": null }, \"spec\": { \"ports\": [ { \"protocol\": \"TCP\", \"port\": 5678, \"targetPort\": 5678 } ], \"selector\": { \"app\": \"apple\" }, \"clusterIP\": \"10.97.230.121\", \"type\": \"ClusterIP\", \"sessionAffinity\": \"None\" }, \"status\": { \"loadBalancer\": {} } }, \"port\": 0, \"sslPassthrough\": false, \"endpoints\": [ { \"address\": \"10.1.3.86\", \"port\": \"5678\" } ], \"sessionAffinityConfig\": { \"name\": \"\", \"cookieSessionAffinity\": { \"name\": \"\" } }, \"upstreamHashByConfig\": { \"upstream-hash-by-subset-size\": 3 }, \"noServer\": false, \"trafficShapingPolicy\": { \"weight\": 0, \"header\": \"\", \"headerValue\": \"\", \"cookie\": \"\" } }, { \"name\": \"default-echo-service-8080\", ... }, { \"name\": \"upstream-default-backend\", ... } ] Add the --list option to show only the backend names. Add the --backend option to show only the backend with the given name. certs \u00b6 Use kubectl ingress-nginx certs --host to dump the SSL cert/key information for a given host. WARNING: This command will dump sensitive private key information. Don't blindly share the output, and certainly don't log it anywhere. 
$ kubectl ingress-nginx certs -n ingress-nginx --host testaddr.local -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- -----END RSA PRIVATE KEY----- conf \u00b6 Use kubectl ingress-nginx conf to dump the generated nginx.conf file. Add the --host option to view only the server block for that host: kubectl ingress-nginx conf -n ingress-nginx --host testaddr.local server { server_name testaddr.local ; listen 80; set $proxy_upstream_name \"-\"; set $pass_access_scheme $scheme; set $pass_server_port $server_port; set $best_http_host $http_host; set $pass_port $pass_server_port; location / { set $namespace \"\"; set $ingress_name \"\"; set $service_name \"\"; set $service_port \"0\"; set $location_path \"/\"; ... exec \u00b6 kubectl ingress-nginx exec is exactly the same as kubectl exec , with the same command flags. It will automatically choose an ingress-nginx pod to run the command in. $ kubectl ingress-nginx exec -i -n ingress-nginx -- ls /etc/nginx fastcgi_params geoip lua mime.types modsecurity modules nginx.conf opentracing.json opentelemetry.toml owasp-modsecurity-crs template info \u00b6 Shows the internal and external IP/CNAMES for an ingress-nginx service. $ kubectl ingress-nginx info -n ingress-nginx Service cluster IP address: 10.187.253.31 LoadBalancer IP|CNAME: 35.123.123.123 Use the --service flag if your ingress-nginx LoadBalancer service is not named ingress-nginx . ingresses \u00b6 kubectl ingress-nginx ingresses , alternately kubectl ingress-nginx ing , shows a more detailed view of the ingress definitions in a namespace. Compare: $ kubectl get ingresses --all-namespaces NAMESPACE NAME HOSTS ADDRESS PORTS AGE default example-ingress1 testaddr.local,testaddr2.local localhost 80 5d default test-ingress-2 * localhost 80 5d vs. $ kubectl ingress-nginx ingresses --all-namespaces NAMESPACE INGRESS NAME HOST+PATH ADDRESSES TLS SERVICE SERVICE PORT ENDPOINTS default example-ingress1 testaddr.local/etameta localhost NO pear-service 5678 5 default example-ingress1 testaddr2.local/otherpath localhost NO apple-service 5678 1 default example-ingress1 testaddr2.local/otherotherpath localhost NO pear-service 5678 5 default test-ingress-2 * localhost NO echo-service 8080 2 lint \u00b6 kubectl ingress-nginx lint can check a namespace or entire cluster for potential configuration issues. This command is especially useful when upgrading between ingress-nginx versions. $ kubectl ingress-nginx lint --all-namespaces --verbose Checking ingresses... \u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 \u2717 othernamespace/ingress-definition-blah - The rewrite-target annotation value does not reference a capture group Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3174 Checking deployments... \u2717 namespace2/ingress-nginx-controller - Uses removed config flag --sort-backends Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3655 - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808 To show the lints added only for a particular ingress-nginx release, use the --from-version and --to-version flags: $ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0 .24.0 --to-version 0 .24.0 Checking ingresses... 
\u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 Checking deployments... \u2717 namespace2/ingress-nginx-controller - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808 logs \u00b6 kubectl ingress-nginx logs is almost the same as kubectl logs , with fewer flags. It will automatically choose an ingress-nginx pod to read logs from. $ kubectl ingress-nginx logs -n ingress-nginx ------------------------------------------------------------------------------- NGINX Ingress controller Release: dev Build: git-48dc3a867 Repository: git@github.com:kubernetes/ingress-nginx.git ------------------------------------------------------------------------------- W0405 16:53:46.061589 7 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false) nginx version: nginx/1.15.9 W0405 16:53:46.070093 7 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I0405 16:53:46.070499 7 main.go:205] Creating API client for https://10.96.0.1:443 I0405 16:53:46.077784 7 main.go:249] Running in Kubernetes cluster version v1.10 (v1.10.11) - git (clean) commit 637c7e288581ee40ab4ca210618a89a555b6e7e9 - platform linux/amd64 I0405 16:53:46.183359 7 nginx.go:265] Starting NGINX Ingress controller I0405 16:53:46.193913 7 event.go:209] Event(v1.ObjectReference{Kind:\"ConfigMap\", Namespace:\"ingress-nginx\", Name:\"udp-services\", UID:\"82258915-563e-11e9-9c52-025000000001\", APIVersion:\"v1\", ResourceVersion:\"494\", FieldPath:\"\"}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services ... ssh \u00b6 kubectl ingress-nginx ssh is exactly the same as kubectl ingress-nginx exec -it -- /bin/bash . Use it when you want to quickly be dropped into a shell inside a running ingress-nginx container. $ kubectl ingress-nginx ssh -n ingress-nginx www-data@ingress-nginx-controller-7cbf77c976-wx5pn:/etc/nginx$","title":"kubectl plugin"},{"location":"kubectl-plugin/#the-ingress-nginx-kubectl-plugin","text":"","title":"The ingress-nginx kubectl plugin"},{"location":"kubectl-plugin/#installation","text":"Install krew , then run kubectl krew install ingress-nginx to install the plugin. Then run kubectl ingress-nginx --help to make sure the plugin is properly installed and to get a list of commands: kubectl ingress-nginx --help A kubectl plugin for inspecting your ingress-nginx deployments Usage: ingress-nginx [command] Available Commands: backends Inspect the dynamic backend information of an ingress-nginx instance certs Output the certificate data stored in an ingress-nginx pod conf Inspect the generated nginx.conf exec Execute a command inside an ingress-nginx pod general Inspect the other dynamic ingress-nginx information help Help about any command info Show information about the ingress-nginx service ingresses Provide a short summary of all of the ingress definitions lint Inspect kubernetes resources for possible issues logs Get the kubernetes logs for an ingress-nginx pod ssh ssh into a running ingress-nginx pod Flags: --as string Username to impersonate for the operation --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups. 
--cache-dir string Default HTTP cache directory (default \"/Users/alexkursell/.kube/http-cache\") --certificate-authority string Path to a cert file for the certificate authority --client-certificate string Path to a client certificate file for TLS --client-key string Path to a client key file for TLS --cluster string The name of the kubeconfig cluster to use --context string The name of the kubeconfig context to use -h, --help help for ingress-nginx --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure --kubeconfig string Path to the kubeconfig file to use for CLI requests. -n, --namespace string If present, the namespace scope for this CLI request --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default \"0\") -s, --server string The address and port of the Kubernetes API server --token string Bearer token for authentication to the API server --user string The name of the kubeconfig user to use Use \"ingress-nginx [command] --help\" for more information about a command.","title":"Installation"},{"location":"kubectl-plugin/#common-flags","text":"Every subcommand supports the basic kubectl configuration flags like --namespace , --context , --client-key and so on. Subcommands that act on a particular ingress-nginx pod ( backends , certs , conf , exec , general , logs , ssh ), support the --deployment , --pod , and --container flags to select either a pod from a deployment with the given name, or a pod with the given name (and the given container name). The --deployment flag defaults to ingress-nginx-controller , and the --container flag defaults to controller . Subcommands that inspect resources ( ingresses , lint ) support the --all-namespaces flag, which causes them to inspect resources in every namespace.","title":"Common Flags"},{"location":"kubectl-plugin/#subcommands","text":"Note that backends , general , certs , and conf require ingress-nginx version 0.23.0 or higher.","title":"Subcommands"},{"location":"kubectl-plugin/#backends","text":"Run kubectl ingress-nginx backends to get a JSON array of the backends that an ingress-nginx controller currently knows about: $ kubectl ingress-nginx backends -n ingress-nginx [ { \"name\": \"default-apple-service-5678\", \"service\": { \"metadata\": { \"creationTimestamp\": null }, \"spec\": { \"ports\": [ { \"protocol\": \"TCP\", \"port\": 5678, \"targetPort\": 5678 } ], \"selector\": { \"app\": \"apple\" }, \"clusterIP\": \"10.97.230.121\", \"type\": \"ClusterIP\", \"sessionAffinity\": \"None\" }, \"status\": { \"loadBalancer\": {} } }, \"port\": 0, \"sslPassthrough\": false, \"endpoints\": [ { \"address\": \"10.1.3.86\", \"port\": \"5678\" } ], \"sessionAffinityConfig\": { \"name\": \"\", \"cookieSessionAffinity\": { \"name\": \"\" } }, \"upstreamHashByConfig\": { \"upstream-hash-by-subset-size\": 3 }, \"noServer\": false, \"trafficShapingPolicy\": { \"weight\": 0, \"header\": \"\", \"headerValue\": \"\", \"cookie\": \"\" } }, { \"name\": \"default-echo-service-8080\", ... }, { \"name\": \"upstream-default-backend\", ... } ] Add the --list option to show only the backend names. 
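For example, combining the common flags with --list (a sketch; the --deployment flag is only needed if you changed the default deployment name): $ kubectl ingress-nginx backends -n ingress-nginx --deployment ingress-nginx-controller --list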
Add the --backend option to show only the backend with the given name.","title":"backends"},{"location":"kubectl-plugin/#certs","text":"Use kubectl ingress-nginx certs --host to dump the SSL cert/key information for a given host. WARNING: This command will dump sensitive private key information. Don't blindly share the output, and certainly don't log it anywhere. $ kubectl ingress-nginx certs -n ingress-nginx --host testaddr.local -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- -----BEGIN RSA PRIVATE KEY----- -----END RSA PRIVATE KEY-----","title":"certs"},{"location":"kubectl-plugin/#conf","text":"Use kubectl ingress-nginx conf to dump the generated nginx.conf file. Add the --host option to view only the server block for that host: kubectl ingress-nginx conf -n ingress-nginx --host testaddr.local server { server_name testaddr.local ; listen 80; set $proxy_upstream_name \"-\"; set $pass_access_scheme $scheme; set $pass_server_port $server_port; set $best_http_host $http_host; set $pass_port $pass_server_port; location / { set $namespace \"\"; set $ingress_name \"\"; set $service_name \"\"; set $service_port \"0\"; set $location_path \"/\"; ...","title":"conf"},{"location":"kubectl-plugin/#exec","text":"kubectl ingress-nginx exec is exactly the same as kubectl exec , with the same command flags. It will automatically choose an ingress-nginx pod to run the command in. $ kubectl ingress-nginx exec -i -n ingress-nginx -- ls /etc/nginx fastcgi_params geoip lua mime.types modsecurity modules nginx.conf opentracing.json opentelemetry.toml owasp-modsecurity-crs template","title":"exec"},{"location":"kubectl-plugin/#info","text":"Shows the internal and external IP/CNAMES for an ingress-nginx service. $ kubectl ingress-nginx info -n ingress-nginx Service cluster IP address: 10.187.253.31 LoadBalancer IP|CNAME: 35.123.123.123 Use the --service flag if your ingress-nginx LoadBalancer service is not named ingress-nginx .","title":"info"},{"location":"kubectl-plugin/#ingresses","text":"kubectl ingress-nginx ingresses , alternately kubectl ingress-nginx ing , shows a more detailed view of the ingress definitions in a namespace. Compare: $ kubectl get ingresses --all-namespaces NAMESPACE NAME HOSTS ADDRESS PORTS AGE default example-ingress1 testaddr.local,testaddr2.local localhost 80 5d default test-ingress-2 * localhost 80 5d vs. $ kubectl ingress-nginx ingresses --all-namespaces NAMESPACE INGRESS NAME HOST+PATH ADDRESSES TLS SERVICE SERVICE PORT ENDPOINTS default example-ingress1 testaddr.local/etameta localhost NO pear-service 5678 5 default example-ingress1 testaddr2.local/otherpath localhost NO apple-service 5678 1 default example-ingress1 testaddr2.local/otherotherpath localhost NO pear-service 5678 5 default test-ingress-2 * localhost NO echo-service 8080 2","title":"ingresses"},{"location":"kubectl-plugin/#lint","text":"kubectl ingress-nginx lint can check a namespace or entire cluster for potential configuration issues. This command is especially useful when upgrading between ingress-nginx versions. $ kubectl ingress-nginx lint --all-namespaces --verbose Checking ingresses... \u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. 
Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 \u2717 othernamespace/ingress-definition-blah - The rewrite-target annotation value does not reference a capture group Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3174 Checking deployments... \u2717 namespace2/ingress-nginx-controller - Uses removed config flag --sort-backends Lint added for version 0.22.0 https://github.com/kubernetes/ingress-nginx/issues/3655 - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808 To show the lints added only for a particular ingress-nginx release, use the --from-version and --to-version flags: $ kubectl ingress-nginx lint --all-namespaces --verbose --from-version 0 .24.0 --to-version 0 .24.0 Checking ingresses... \u2717 anamespace/this-nginx - Contains the removed session-cookie-hash annotation. Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3743 Checking deployments... \u2717 namespace2/ingress-nginx-controller - Uses removed config flag --enable-dynamic-certificates Lint added for version 0.24.0 https://github.com/kubernetes/ingress-nginx/issues/3808","title":"lint"},{"location":"kubectl-plugin/#logs","text":"kubectl ingress-nginx logs is almost the same as kubectl logs , with fewer flags. It will automatically choose an ingress-nginx pod to read logs from. $ kubectl ingress-nginx logs -n ingress-nginx ------------------------------------------------------------------------------- NGINX Ingress controller Release: dev Build: git-48dc3a867 Repository: git@github.com:kubernetes/ingress-nginx.git ------------------------------------------------------------------------------- W0405 16:53:46.061589 7 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false) nginx version: nginx/1.15.9 W0405 16:53:46.070093 7 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work. I0405 16:53:46.070499 7 main.go:205] Creating API client for https://10.96.0.1:443 I0405 16:53:46.077784 7 main.go:249] Running in Kubernetes cluster version v1.10 (v1.10.11) - git (clean) commit 637c7e288581ee40ab4ca210618a89a555b6e7e9 - platform linux/amd64 I0405 16:53:46.183359 7 nginx.go:265] Starting NGINX Ingress controller I0405 16:53:46.193913 7 event.go:209] Event(v1.ObjectReference{Kind:\"ConfigMap\", Namespace:\"ingress-nginx\", Name:\"udp-services\", UID:\"82258915-563e-11e9-9c52-025000000001\", APIVersion:\"v1\", ResourceVersion:\"494\", FieldPath:\"\"}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services ...","title":"logs"},{"location":"kubectl-plugin/#ssh","text":"kubectl ingress-nginx ssh is exactly the same as kubectl ingress-nginx exec -it -- /bin/bash . Use it when you want to quickly be dropped into a shell inside a running ingress-nginx container. $ kubectl ingress-nginx ssh -n ingress-nginx www-data@ingress-nginx-controller-7cbf77c976-wx5pn:/etc/nginx$","title":"ssh"},{"location":"lua_tests/","text":"Lua Tests \u00b6 Running the Lua Tests \u00b6 To run the Lua tests you can run the following from the root directory: make lua-test This command makes use of docker hence does not need any dependency installations besides docker Where are the Lua Tests? 
\u00b6 Lua Tests can be found in the rootfs/etc/nginx/lua/test directory","title":"Lua Tests"},{"location":"lua_tests/#lua-tests","text":"","title":"Lua Tests"},{"location":"lua_tests/#running-the-lua-tests","text":"To run the Lua tests you can run the following from the root directory: make lua-test This command makes use of docker hence does not need any dependency installations besides docker","title":"Running the Lua Tests"},{"location":"lua_tests/#where-are-the-lua-tests","text":"Lua Tests can be found in the rootfs/etc/nginx/lua/test directory","title":"Where are the Lua Tests?"},{"location":"troubleshooting/","text":"Troubleshooting \u00b6 Ingress-Controller Logs and Events \u00b6 There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information. Check the Ingress Resource Events \u00b6 $ kubectl get ing -n NAME HOSTS ADDRESS PORTS AGE cafe-ingress cafe.com 10.0.2.15 80 25s $ kubectl describe ing -n Name: cafe-ingress Namespace: default Address: 10.0.2.15 Default backend: default-http-backend:80 (172.17.0.5:8080) Rules: Host Path Backends ---- ---- -------- cafe.com /tea tea-svc:80 () /coffee coffee-svc:80 () Annotations: kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"/apis/networking/v1/namespaces/default/ingresses/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}} Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 1m ingress-nginx-controller Ingress default/cafe-ingress Normal UPDATE 58s ingress-nginx-controller Ingress default/cafe-ingress Check the Ingress Controller Logs \u00b6 $ kubectl get pods -n NAME READY STATUS RESTARTS AGE ingress-nginx-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl logs -n ingress-nginx-controller-67956bf89d-fv58j ------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.14.0 Build: git-734361d Repository: https://github.com/kubernetes/ingress-nginx ------------------------------------------------------------------------------- .... Check the Nginx Configuration \u00b6 $ kubectl get pods -n NAME READY STATUS RESTARTS AGE ingress-nginx-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl exec -it -n ingress-nginx-controller-67956bf89d-fv58j -- cat /etc/nginx/nginx.conf daemon off; worker_processes 2; pid /run/nginx.pid; worker_rlimit_nofile 523264; worker_shutdown_timeout 240s; events { multi_accept on; worker_connections 16384; use epoll; } http { .... Check if used Services Exist \u00b6 $ kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default coffee-svc ClusterIP 10.106.154.35 80/TCP 18m default kubernetes ClusterIP 10.96.0.1 443/TCP 30m default tea-svc ClusterIP 10.104.172.12 80/TCP 18m kube-system default-http-backend NodePort 10.108.189.236 80:30001/TCP 30m kube-system kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 30m kube-system kubernetes-dashboard NodePort 10.103.128.17 80:30000/TCP 30m Debug Logging \u00b6 Using the flag --v=XX it is possible to increase the level of logging. 
This is performed by editing the deployment. $ kubectl get deploy -n NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE default-http-backend 1 1 1 1 35m ingress-nginx-controller 1 1 1 1 35m $ kubectl edit deploy -n ingress-nginx-controller # Add --v = X to \"- args\" , where X is an integer --v=2 shows details using diff about the changes in the configuration in nginx --v=3 shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format --v=5 configures NGINX in debug mode Authentication to the Kubernetes API Server \u00b6 A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file. Both authentications must work: +-------------+ service +------------+ | | authentication | | + apiserver +<-------------------+ ingress | | | | controller | +-------------+ +------------+ Service authentication The Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in a couple of ways: Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details. Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the --kubeconfig does not requires the flag --apiserver-host . The format of the file is identical to ~/.kube/config which is used by kubectl to connect to the API server. See 'kubeconfig' section for details. Using the flag --apiserver-host : Using this flag --apiserver-host=http://localhost:8080 it is possible to specify an unsecured API server or reach a remote kubernetes cluster using kubectl proxy . Please do not use this approach in production. In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left hand side. Kubernetes Workstation +---------------------------------------------------+ +------------------+ | | | | | +-----------+ apiserver +------------+ | | +------------+ | | | | proxy | | | | | | | | | apiserver | | ingress | | | | ingress | | | | | | controller | | | | controller | | | | | | | | | | | | | | | | | | | | | | | | | service account/ | | | | | | | | | | kubeconfig | | | | | | | | | +<-------------------+ | | | | | | | | | | | | | | | | | +------+----+ kubeconfig +------+-----+ | | +------+-----+ | | |<--------------------------------------------------------| | | | | | +---------------------------------------------------+ +------------------+ Service Account \u00b6 If using a service account to connect to the API server, the ingress-controller expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server. 
Verify with the following commands: # start a container that contains curl $ kubectl run -it --rm test --image = curlimages/curl --restart = Never -- /bin/sh # check if secret exists / $ ls /var/run/secrets/kubernetes.io/serviceaccount/ ca.crt namespace token / $ # check base connectivity from cluster inside / $ curl -k https://kubernetes.default.svc.cluster.local { \"kind\": \"Status\", \"apiVersion\": \"v1\", \"metadata\": { }, \"status\": \"Failure\", \"message\": \"forbidden: User \\\"system:anonymous\\\" cannot get path \\\"/\\\"\", \"reason\": \"Forbidden\", \"details\": { }, \"code\": 403 }/ $ # connect using tokens }/ $ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H \"Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://kubernetes.default.svc.cluster.local && echo { \"paths\": [ \"/api\", \"/api/v1\", \"/apis\", \"/apis/\", ... TRUNCATED \"/readyz/shutdown\", \"/version\" ] } / $ # when you type ` exit ` or ` ^D ` the test pod will be deleted. If it is not working, there are two possible reasons: The contents of the tokens are invalid. Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret . It will automatically be recreated. You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter. Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers. More information: User Guide: Service Accounts Cluster Administrator Guide: Managing Service Accounts Kube-Config \u00b6 If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment. Using GDB with Nginx \u00b6 Gdb can be used to with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations. Note: The below is based on the nginx documentation . SSH into the worker $ ssh user@workerIP Obtain the Docker Container Running nginx $ docker ps | grep ingress-nginx-controller CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d9e1d243156a registry.k8s.io/ingress-nginx/controller \"/usr/bin/dumb-init \u2026\" 19 minutes ago Up 19 minutes k8s_ingress-nginx-controller_ingress-nginx-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0 Exec into the container $ docker exec -it --user = 0 --privileged d9e1d243156a bash Make sure nginx is running in --with-debug $ nginx -V 2 > & 1 | grep -- '--with-debug' Get list of processes running on container $ ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 20:23 ? 00:00:00 /usr/bin/dumb-init /nginx-ingres root 5 1 0 20:23 ? 00:00:05 /ingress-nginx-controller --defa root 21 5 0 20:23 ? 00:00:00 nginx: master process /usr/sbin/ nobody 106 21 0 20:23 ? 00:00:00 nginx: worker process nobody 107 21 0 20:23 ? 00:00:00 nginx: worker process root 172 0 0 20:43 pts/0 00:00:00 bash Attach gdb to the nginx master process $ gdb -p 21 .... Attaching to process 21 Reading symbols from /usr/sbin/nginx...done. .... 
(gdb) Copy and paste the following: set $cd = ngx_cycle->config_dump set $nelts = $cd.nelts set $elts = (ngx_conf_dump_t*)($cd.elts) while ($nelts-- > 0) set $name = $elts[$nelts]->name.data printf \"Dumping %s to nginx_conf.txt\\n\", $name append memory nginx_conf.txt \\ $ elts [ $nelts ] ->buffer.start $elts [ $nelts ] ->buffer.end end Quit GDB by pressing CTRL+D Open nginx_conf.txt cat nginx_conf.txt Image related issues faced on Nginx 4.2.5 or other versions (Helm chart versions) \u00b6 Incase you face below error while installing Nginx using helm chart (either by helm commands or helm_release terraform provider ) Warning Failed 5m5s (x4 over 6m34s) kubelet Failed to pull image \"registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to resolve reference \"registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to do request: Head \"https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": EOF Then please follow the below steps. During troubleshooting you can also execute the below commands to test the connectivities from you local machines and repositories details a. curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null (\u2388 |myprompt)\u279c ~ curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 (\u2388 |myprompt)\u279c ~ b. curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 (\u2388 |myprompt)\u279c ~ curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 HTTP/2 200 docker-distribution-api-version: registry/2.0 content-type: application/vnd.docker.distribution.manifest.list.v2+json docker-content-digest: sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 content-length: 1384 date: Wed, 28 Sep 2022 16:46:28 GMT server: Docker Registry x-xss-protection: 0 x-frame-options: SAMEORIGIN alt-svc: h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\" (\u2388 |myprompt)\u279c ~ Redirection in the proxy is implemented to ensure the pulling of the images. This is the solution recommended to whitelist the below image repositories : *.appspot.com *.k8s.io *.pkg.dev *.gcr.io More details about the above repos : a. *.k8s.io -> To ensure you can pull any images from registry.k8s.io b. *.gcr.io -> GCP services are used for image hosting. This is part of the domains suggested by GCP to allow and ensure users can pull images from their container registry services. c. *.appspot.com -> This a Google domain. part of the domain used for GCR. 
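As a quick check that the recommended registry domains are reachable at all from a given machine, plain curl against the registry API roots is usually enough. This is only a sketch: any HTTP status code in the output (including 401 or 403) means the endpoint could be reached, while 000 points at a blocked or intercepted connection. $ curl -s -o /dev/null -w '%{http_code}\n' https://registry.k8s.io/v2/ $ curl -s -o /dev/null -w '%{http_code}\n' https://eu.gcr.io/v2/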
Unable to listen on port (80/443) \u00b6 One possible reason for this error is lack of permission to bind to the port. Ports 80, 443, and any other port < 1024 are Linux privileged ports which historically could only be bound by root. The ingress-nginx-controller uses the CAP_NET_BIND_SERVICE linux capability to allow binding these ports as a normal user (www-data / 101). This involves two components: 1. In the image, the /nginx-ingress-controller file has the cap_net_bind_service capability added (e.g. via setcap ) 2. The NET_BIND_SERVICE capability is added to the container in the containerSecurityContext of the deployment. If encountering this on one/some node(s) and not on others, try to purge and pull a fresh copy of the image to the affected node(s), in case there has been corruption of the underlying layers to lose the capability on the executable. Create a test pod \u00b6 The /nginx-ingress-controller process exits/crashes when encountering this error, making it difficult to troubleshoot what is happening inside the container. To get around this, start an equivalent container running \"sleep 3600\", and exec into it for further troubleshooting. For example: apiVersion : v1 kind : Pod metadata : name : ingress-nginx-sleep namespace : default labels : app : nginx spec : containers : - name : nginx image : ##_CONTROLLER_IMAGE_## resources : requests : memory : \"512Mi\" cpu : \"500m\" limits : memory : \"1Gi\" cpu : \"1\" command : [ \"sleep\" ] args : [ \"3600\" ] ports : - containerPort : 80 name : http protocol : TCP - containerPort : 443 name : https protocol : TCP securityContext : allowPrivilegeEscalation : true capabilities : add : - NET_BIND_SERVICE drop : - ALL runAsUser : 101 restartPolicy : Never nodeSelector : kubernetes.io/hostname : ##_NODE_NAME_## tolerations : - key : \"node.kubernetes.io/unschedulable\" operator : \"Exists\" effect : NoSchedule * update the namespace if applicable/desired * replace ##_NODE_NAME_## with the problematic node (or remove nodeSelector section if problem is not confined to one node) * replace ##_CONTROLLER_IMAGE_## with the same image as in use by your ingress-nginx deployment * confirm the securityContext section matches what is in place for ingress-nginx-controller pods in your cluster Apply the YAML and open a shell into the pod. Try to manually run the controller process: $ /nginx-ingress-controller You should get the same error as from the ingress controller pod logs. Confirm the capabilities are properly surfacing into the pod: $ grep CapBnd /proc/1/status CapBnd: 0000000000000400 The above value has only net_bind_service enabled (per security context in YAML which adds that and drops all). If you get a different value, then you can decode it on another linux box (capsh not available in this container) like below, and then figure out why specified capabilities are not propagating into the pod/container. $ capsh --decode = 0000000000000400 0x0000000000000400=cap_net_bind_service Create a test pod as root \u00b6 (Note, this may be restricted by PodSecurityPolicy, PodSecurityAdmission/Standards, OPA Gatekeeper, etc. in which case you will need to do the appropriate workaround for testing, e.g. deploy in a new namespace without the restrictions.) To test further you may want to install additional utilities, etc. Modify the pod yaml by: * changing runAsUser from 101 to 0 * removing the \"drop..ALL\" section from the capabilities. 
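Independently of the test pods, it is also worth confirming what capability configuration the live deployment actually requests. The query below is a sketch that assumes the default deployment name ingress-nginx-controller in the ingress-nginx namespace and that the controller is the first container in the pod template; adjust both if your installation differs. The expected result lists NET_BIND_SERVICE under add and ALL under drop. $ kubectl -n ingress-nginx get deployment ingress-nginx-controller -o jsonpath='{.spec.template.spec.containers[0].securityContext.capabilities}'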
Some things to try after shelling into this container: Try running the controller as the www-data (101) user: $ chmod 4755 /nginx-ingress-controller $ /nginx-ingress-controller Examine the errors to see if there is still an issue listening on the port or if it passed that and moved on to other expected errors due to running out of context. Install the libcap package and check capabilities on the file: $ apk add libcap (1/1) Installing libcap (2.50-r0) Executing busybox-1.33.1-r7.trigger OK: 26 MiB in 41 packages $ getcap /nginx-ingress-controller /nginx-ingress-controller cap_net_bind_service=ep (if missing, see above about purging image on the server and re-pulling) Strace the executable to see what system calls are being executed when it fails: $ apk add strace (1/1) Installing strace (5.12-r0) Executing busybox-1.33.1-r7.trigger OK: 28 MiB in 42 packages $ strace /nginx-ingress-controller execve(\"/nginx-ingress-controller\", [\"/nginx-ingress-controller\"], 0x7ffeb9eb3240 /* 131 vars */) = 0 arch_prctl(ARCH_SET_FS, 0x29ea690) = 0 ...","title":"Troubleshooting"},{"location":"troubleshooting/#troubleshooting","text":"","title":"Troubleshooting"},{"location":"troubleshooting/#ingress-controller-logs-and-events","text":"There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information.","title":"Ingress-Controller Logs and Events"},{"location":"troubleshooting/#check-the-ingress-resource-events","text":"$ kubectl get ing -n NAME HOSTS ADDRESS PORTS AGE cafe-ingress cafe.com 10.0.2.15 80 25s $ kubectl describe ing -n Name: cafe-ingress Namespace: default Address: 10.0.2.15 Default backend: default-http-backend:80 (172.17.0.5:8080) Rules: Host Path Backends ---- ---- -------- cafe.com /tea tea-svc:80 () /coffee coffee-svc:80 () Annotations: kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{},\"name\":\"cafe-ingress\",\"namespace\":\"default\",\"selfLink\":\"/apis/networking/v1/namespaces/default/ingresses/cafe-ingress\"},\"spec\":{\"rules\":[{\"host\":\"cafe.com\",\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"tea-svc\",\"servicePort\":80},\"path\":\"/tea\"},{\"backend\":{\"serviceName\":\"coffee-svc\",\"servicePort\":80},\"path\":\"/coffee\"}]}}]},\"status\":{\"loadBalancer\":{\"ingress\":[{\"ip\":\"169.48.142.110\"}]}}} Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 1m ingress-nginx-controller Ingress default/cafe-ingress Normal UPDATE 58s ingress-nginx-controller Ingress default/cafe-ingress","title":"Check the Ingress Resource Events"},{"location":"troubleshooting/#check-the-ingress-controller-logs","text":"$ kubectl get pods -n NAME READY STATUS RESTARTS AGE ingress-nginx-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl logs -n ingress-nginx-controller-67956bf89d-fv58j ------------------------------------------------------------------------------- NGINX Ingress controller Release: 0.14.0 Build: git-734361d Repository: https://github.com/kubernetes/ingress-nginx ------------------------------------------------------------------------------- ....","title":"Check the Ingress Controller Logs"},{"location":"troubleshooting/#check-the-nginx-configuration","text":"$ kubectl get pods -n NAME READY STATUS RESTARTS AGE ingress-nginx-controller-67956bf89d-fv58j 1/1 Running 0 1m $ kubectl exec -it -n ingress-nginx-controller-67956bf89d-fv58j -- cat /etc/nginx/nginx.conf daemon off; 
worker_processes 2; pid /run/nginx.pid; worker_rlimit_nofile 523264; worker_shutdown_timeout 240s; events { multi_accept on; worker_connections 16384; use epoll; } http { ....","title":"Check the Nginx Configuration"},{"location":"troubleshooting/#check-if-used-services-exist","text":"$ kubectl get svc --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default coffee-svc ClusterIP 10.106.154.35 80/TCP 18m default kubernetes ClusterIP 10.96.0.1 443/TCP 30m default tea-svc ClusterIP 10.104.172.12 80/TCP 18m kube-system default-http-backend NodePort 10.108.189.236 80:30001/TCP 30m kube-system kube-dns ClusterIP 10.96.0.10 53/UDP,53/TCP 30m kube-system kubernetes-dashboard NodePort 10.103.128.17 80:30000/TCP 30m","title":"Check if used Services Exist"},{"location":"troubleshooting/#debug-logging","text":"Using the flag --v=XX it is possible to increase the level of logging. This is performed by editing the deployment. $ kubectl get deploy -n NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE default-http-backend 1 1 1 1 35m ingress-nginx-controller 1 1 1 1 35m $ kubectl edit deploy -n ingress-nginx-controller # Add --v = X to \"- args\" , where X is an integer --v=2 shows details using diff about the changes in the configuration in nginx --v=3 shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format --v=5 configures NGINX in debug mode","title":"Debug Logging"},{"location":"troubleshooting/#authentication-to-the-kubernetes-api-server","text":"A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file. Both authentications must work: +-------------+ service +------------+ | | authentication | | + apiserver +<-------------------+ ingress | | | | controller | +-------------+ +------------+ Service authentication The Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in a couple of ways: Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details. Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. Using the --kubeconfig does not requires the flag --apiserver-host . The format of the file is identical to ~/.kube/config which is used by kubectl to connect to the API server. See 'kubeconfig' section for details. Using the flag --apiserver-host : Using this flag --apiserver-host=http://localhost:8080 it is possible to specify an unsecured API server or reach a remote kubernetes cluster using kubectl proxy . Please do not use this approach in production. In the diagram below you can see the full authentication flow with all options, starting with the browser on the lower left hand side. 
Kubernetes Workstation +---------------------------------------------------+ +------------------+ | | | | | +-----------+ apiserver +------------+ | | +------------+ | | | | proxy | | | | | | | | | apiserver | | ingress | | | | ingress | | | | | | controller | | | | controller | | | | | | | | | | | | | | | | | | | | | | | | | service account/ | | | | | | | | | | kubeconfig | | | | | | | | | +<-------------------+ | | | | | | | | | | | | | | | | | +------+----+ kubeconfig +------+-----+ | | +------+-----+ | | |<--------------------------------------------------------| | | | | | +---------------------------------------------------+ +------------------+","title":"Authentication to the Kubernetes API Server"},{"location":"troubleshooting/#service-account","text":"If using a service account to connect to the API server, the ingress-controller expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server. Verify with the following commands: # start a container that contains curl $ kubectl run -it --rm test --image = curlimages/curl --restart = Never -- /bin/sh # check if secret exists / $ ls /var/run/secrets/kubernetes.io/serviceaccount/ ca.crt namespace token / $ # check base connectivity from cluster inside / $ curl -k https://kubernetes.default.svc.cluster.local { \"kind\": \"Status\", \"apiVersion\": \"v1\", \"metadata\": { }, \"status\": \"Failure\", \"message\": \"forbidden: User \\\"system:anonymous\\\" cannot get path \\\"/\\\"\", \"reason\": \"Forbidden\", \"details\": { }, \"code\": 403 }/ $ # connect using tokens }/ $ curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H \"Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)\" https://kubernetes.default.svc.cluster.local && echo { \"paths\": [ \"/api\", \"/api/v1\", \"/apis\", \"/apis/\", ... TRUNCATED \"/readyz/shutdown\", \"/version\" ] } / $ # when you type ` exit ` or ` ^D ` the test pod will be deleted. If it is not working, there are two possible reasons: The contents of the tokens are invalid. Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret . It will automatically be recreated. You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter. Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers. More information: User Guide: Service Accounts Cluster Administrator Guide: Managing Service Accounts","title":"Service Account"},{"location":"troubleshooting/#kube-config","text":"If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the flag --kubeconfig=/etc/kubernetes/kubeconfig.yaml to the args section of the deployment.","title":"Kube-Config"},{"location":"troubleshooting/#using-gdb-with-nginx","text":"Gdb can be used to with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations. Note: The below is based on the nginx documentation . 
SSH into the worker $ ssh user@workerIP Obtain the Docker Container Running nginx $ docker ps | grep ingress-nginx-controller CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES d9e1d243156a registry.k8s.io/ingress-nginx/controller \"/usr/bin/dumb-init \u2026\" 19 minutes ago Up 19 minutes k8s_ingress-nginx-controller_ingress-nginx-controller-67956bf89d-mqxzt_kube-system_079f31ec-aa37-11e8-ad39-080027a227db_0 Exec into the container $ docker exec -it --user = 0 --privileged d9e1d243156a bash Make sure nginx is running in --with-debug $ nginx -V 2 > & 1 | grep -- '--with-debug' Get list of processes running on container $ ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 20:23 ? 00:00:00 /usr/bin/dumb-init /nginx-ingres root 5 1 0 20:23 ? 00:00:05 /ingress-nginx-controller --defa root 21 5 0 20:23 ? 00:00:00 nginx: master process /usr/sbin/ nobody 106 21 0 20:23 ? 00:00:00 nginx: worker process nobody 107 21 0 20:23 ? 00:00:00 nginx: worker process root 172 0 0 20:43 pts/0 00:00:00 bash Attach gdb to the nginx master process $ gdb -p 21 .... Attaching to process 21 Reading symbols from /usr/sbin/nginx...done. .... (gdb) Copy and paste the following: set $cd = ngx_cycle->config_dump set $nelts = $cd.nelts set $elts = (ngx_conf_dump_t*)($cd.elts) while ($nelts-- > 0) set $name = $elts[$nelts]->name.data printf \"Dumping %s to nginx_conf.txt\\n\", $name append memory nginx_conf.txt \\ $ elts [ $nelts ] ->buffer.start $elts [ $nelts ] ->buffer.end end Quit GDB by pressing CTRL+D Open nginx_conf.txt cat nginx_conf.txt","title":"Using GDB with Nginx"},{"location":"troubleshooting/#image-related-issues-faced-on-nginx-425-or-other-versions-helm-chart-versions","text":"Incase you face below error while installing Nginx using helm chart (either by helm commands or helm_release terraform provider ) Warning Failed 5m5s (x4 over 6m34s) kubelet Failed to pull image \"registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to resolve reference \"registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": failed to do request: Head \"https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47\": EOF Then please follow the below steps. During troubleshooting you can also execute the below commands to test the connectivities from you local machines and repositories details a. curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null (\u2388 |myprompt)\u279c ~ curl registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 > /dev/null % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 (\u2388 |myprompt)\u279c ~ b. 
curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 (\u2388 |myprompt)\u279c ~ curl -I https://eu.gcr.io/v2/k8s-artifacts-prod/ingress-nginx/kube-webhook-certgen/manifests/sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 HTTP/2 200 docker-distribution-api-version: registry/2.0 content-type: application/vnd.docker.distribution.manifest.list.v2+json docker-content-digest: sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47 content-length: 1384 date: Wed, 28 Sep 2022 16:46:28 GMT server: Docker Registry x-xss-protection: 0 x-frame-options: SAMEORIGIN alt-svc: h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\" (\u2388 |myprompt)\u279c ~ Redirection in the proxy is implemented to ensure the pulling of the images. This is the solution recommended to whitelist the below image repositories : *.appspot.com *.k8s.io *.pkg.dev *.gcr.io More details about the above repos : a. *.k8s.io -> To ensure you can pull any images from registry.k8s.io b. *.gcr.io -> GCP services are used for image hosting. This is part of the domains suggested by GCP to allow and ensure users can pull images from their container registry services. c. *.appspot.com -> This a Google domain. part of the domain used for GCR.","title":"Image related issues faced on Nginx 4.2.5 or other versions (Helm chart versions)"},{"location":"troubleshooting/#unable-to-listen-on-port-80443","text":"One possible reason for this error is lack of permission to bind to the port. Ports 80, 443, and any other port < 1024 are Linux privileged ports which historically could only be bound by root. The ingress-nginx-controller uses the CAP_NET_BIND_SERVICE linux capability to allow binding these ports as a normal user (www-data / 101). This involves two components: 1. In the image, the /nginx-ingress-controller file has the cap_net_bind_service capability added (e.g. via setcap ) 2. The NET_BIND_SERVICE capability is added to the container in the containerSecurityContext of the deployment. If encountering this on one/some node(s) and not on others, try to purge and pull a fresh copy of the image to the affected node(s), in case there has been corruption of the underlying layers to lose the capability on the executable.","title":"Unable to listen on port (80/443)"},{"location":"troubleshooting/#create-a-test-pod","text":"The /nginx-ingress-controller process exits/crashes when encountering this error, making it difficult to troubleshoot what is happening inside the container. To get around this, start an equivalent container running \"sleep 3600\", and exec into it for further troubleshooting. 
For example: apiVersion : v1 kind : Pod metadata : name : ingress-nginx-sleep namespace : default labels : app : nginx spec : containers : - name : nginx image : ##_CONTROLLER_IMAGE_## resources : requests : memory : \"512Mi\" cpu : \"500m\" limits : memory : \"1Gi\" cpu : \"1\" command : [ \"sleep\" ] args : [ \"3600\" ] ports : - containerPort : 80 name : http protocol : TCP - containerPort : 443 name : https protocol : TCP securityContext : allowPrivilegeEscalation : true capabilities : add : - NET_BIND_SERVICE drop : - ALL runAsUser : 101 restartPolicy : Never nodeSelector : kubernetes.io/hostname : ##_NODE_NAME_## tolerations : - key : \"node.kubernetes.io/unschedulable\" operator : \"Exists\" effect : NoSchedule * update the namespace if applicable/desired * replace ##_NODE_NAME_## with the problematic node (or remove nodeSelector section if problem is not confined to one node) * replace ##_CONTROLLER_IMAGE_## with the same image as in use by your ingress-nginx deployment * confirm the securityContext section matches what is in place for ingress-nginx-controller pods in your cluster Apply the YAML and open a shell into the pod. Try to manually run the controller process: $ /nginx-ingress-controller You should get the same error as from the ingress controller pod logs. Confirm the capabilities are properly surfacing into the pod: $ grep CapBnd /proc/1/status CapBnd: 0000000000000400 The above value has only net_bind_service enabled (per security context in YAML which adds that and drops all). If you get a different value, then you can decode it on another linux box (capsh not available in this container) like below, and then figure out why specified capabilities are not propagating into the pod/container. $ capsh --decode = 0000000000000400 0x0000000000000400=cap_net_bind_service","title":"Create a test pod"},{"location":"troubleshooting/#create-a-test-pod-as-root","text":"(Note, this may be restricted by PodSecurityPolicy, PodSecurityAdmission/Standards, OPA Gatekeeper, etc. in which case you will need to do the appropriate workaround for testing, e.g. deploy in a new namespace without the restrictions.) To test further you may want to install additional utilities, etc. Modify the pod yaml by: * changing runAsUser from 101 to 0 * removing the \"drop..ALL\" section from the capabilities. Some things to try after shelling into this container: Try running the controller as the www-data (101) user: $ chmod 4755 /nginx-ingress-controller $ /nginx-ingress-controller Examine the errors to see if there is still an issue listening on the port or if it passed that and moved on to other expected errors due to running out of context. 
Install the libcap package and check capabilities on the file: $ apk add libcap (1/1) Installing libcap (2.50-r0) Executing busybox-1.33.1-r7.trigger OK: 26 MiB in 41 packages $ getcap /nginx-ingress-controller /nginx-ingress-controller cap_net_bind_service=ep (if missing, see above about purging image on the server and re-pulling) Strace the executable to see what system calls are being executed when it fails: $ apk add strace (1/1) Installing strace (5.12-r0) Executing busybox-1.33.1-r7.trigger OK: 28 MiB in 42 packages $ strace /nginx-ingress-controller execve(\"/nginx-ingress-controller\", [\"/nginx-ingress-controller\"], 0x7ffeb9eb3240 /* 131 vars */) = 0 arch_prctl(ARCH_SET_FS, 0x29ea690) = 0 ...","title":"Create a test pod as root"},{"location":"deploy/","text":"Installation Guide \u00b6 There are multiple ways to install the Ingress-Nginx Controller: with Helm , using the project repository chart; with kubectl apply , using YAML manifests; with specific addons (e.g. for minikube or MicroK8s ). On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to get started as fast as possible, you can check the quick start instructions. However, in many environments, you can improve the performance or get better logs by enabling extra features. We recommend that you check the environment-specific instructions for details about optimizing the ingress controller for your particular environment or cloud provider. Contents \u00b6 Quick start Environment-specific instructions ... Docker Desktop ... Rancher Desktop ... minikube ... MicroK8s ... AWS ... GCE - GKE ... Azure ... Digital Ocean ... Scaleway ... Exoscale ... Oracle Cloud Infrastructure ... OVHcloud ... Bare-metal Miscellaneous Quick start \u00b6 If you have Helm, you can deploy the ingress controller with the following command: helm upgrade --install ingress-nginx ingress-nginx \\ --repo https://kubernetes.github.io/ingress-nginx \\ --namespace ingress-nginx --create-namespace It will install the controller in the ingress-nginx namespace, creating that namespace if it doesn't already exist. Info This command is idempotent : if the ingress controller is not installed, it will install it, if the ingress controller is already installed, it will upgrade it. If you want a full list of values that you can set, while installing with Helm, then run: helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx If you don't have Helm or if you prefer to use a YAML manifest, you can run the following command instead: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml Info The YAML manifest in the command above was generated with helm template , so you will end up with almost the same resources as if you had used Helm to install the controller. Attention If you are running an old version of Kubernetes (1.18 or earlier), please read this paragraph for specific instructions. Because of api deprecations, the default manifest may not work on your cluster. Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider. Firewall configuration \u00b6 To check which ports are used by your installation of ingress-nginx, look at the output of kubectl -n ingress-nginx get pod -o yaml . In general, you need: - Port 8443 open between all hosts on which the kubernetes nodes are running. This is used for the ingress-nginx admission controller . 
- Port 80 (for HTTP) and/or 443 (for HTTPS) open to the public on the kubernetes nodes to which the DNS of your apps are pointing. Pre-flight check \u00b6 A few pods should start in the ingress-nginx namespace: kubectl get pods --namespace=ingress-nginx After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready: kubectl wait --namespace ingress-nginx \\ --for=condition=ready pod \\ --selector=app.kubernetes.io/component=controller \\ --timeout=120s Local testing \u00b6 Let's create a simple web server and the associated service: kubectl create deployment demo --image=httpd --port=80 kubectl expose deployment demo Then create an ingress resource. The following example uses a host that maps to localhost : kubectl create ingress demo-localhost --class=nginx \\ --rule=\"demo.localdev.me/*=demo:80\" Now, forward a local port to the ingress controller: kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80 Info A note on DNS & network-connection. This documentation assumes that a user has awareness of the DNS and the network routing aspects involved in using ingress. The port-forwarding mentioned above, is the easiest way to demo the working of ingress. The \"kubectl port-forward...\" command above has forwarded the port number 8080, on the localhost's tcp/ip stack, where the command was typed, to the port number 80, of the service created by the installation of ingress-nginx controller. So now, the traffic sent to port number 8080 on localhost will reach the port number 80, of the ingress-controller's service. Port-forwarding is not for a production environment use-case. But here we use port-forwarding, to simulate a HTTP request, originating from outside the cluster, to reach the service of the ingress-nginx controller, that is exposed to receive traffic from outside the cluster. This issue shows a typical DNS problem and its solution. At this point, you can access your deployment using curl ; curl --resolve demo.localdev.me:8080:127.0.0.1 http://demo.localdev.me:8080 You should see a HTML response containing text like \"It works!\" . Online testing \u00b6 If your Kubernetes cluster is a \"real\" cluster that supports services of type LoadBalancer , it will have allocated an external IP address or FQDN to the ingress controller. You can see that IP address or FQDN with the following command: kubectl get service ingress-nginx-controller --namespace=ingress-nginx It will be the EXTERNAL-IP field. If that field shows , this means that your Kubernetes cluster wasn't able to provision the load balancer (generally, this is because it doesn't support services of type LoadBalancer ). Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress resource. The following example assumes that you have set up a DNS record for www.demo.io : kubectl create ingress demo --class=nginx \\ --rule=\"www.demo.io/*=demo:80\" Alternatively, the above command can be rewritten as follows for the --rule command and below. kubectl create ingress demo --class=nginx \\ --rule www.demo.io/=demo:80 You should then be able to see the \"It works!\" page when you connect to http://www.demo.io/. Congratulations, you are serving a public website hosted on a Kubernetes cluster! 
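To capture the external address in a script instead of reading it from the table output, a jsonpath query works as well. This is a sketch only; depending on the cloud provider the address is exposed as either ip or hostname, so both fields are printed: $ kubectl get service ingress-nginx-controller --namespace=ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}'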
\ud83c\udf89 Environment-specific instructions \u00b6 Local development clusters \u00b6 minikube \u00b6 The ingress controller can be installed through minikube's addons system: minikube addons enable ingress MicroK8s \u00b6 The ingress controller can be installed through MicroK8s's addons system: microk8s enable ingress Please check the MicroK8s documentation page for details. Docker Desktop \u00b6 Kubernetes is available in Docker Desktop: Mac, from version 18.06.0-ce Windows, from version 18.06.0-ce First, make sure that Kubernetes is enabled in the Docker settings. The command kubectl get nodes should show a single node called docker-desktop . The ingress controller can be installed on Docker Desktop using the default quick start instructions. On most systems, if you don't have any other service of type LoadBalancer bound to port 80, the ingress controller will be assigned the EXTERNAL-IP of localhost , which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the kubectl port-forward method described in the local testing section . Rancher Desktop \u00b6 Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop. Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use Ingress-Nginx Controller in place of the default Traefik, disable Traefik from Preference > Kubernetes menu. Once traefik is disabled, the Ingress-Nginx Controller can be installed on Rancher Desktop using the default quick start instructions. Follow the instructions described in the local testing section to try a sample. Cloud deployments \u00b6 If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the externalTrafficPolicy of the ingress controller Service to Local (instead of the default Cluster ) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local to the helm install or helm upgrade command. Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. --set controller.config.use-proxy-protocol=true ) and in the cloud provider's load balancer configuration to function correctly. In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers. AWS \u00b6 In AWS, we use a Network load balancer (NLB) to expose the Ingress-Nginx Controller behind a Service of Type=LoadBalancer . Info The provided templates illustrate the setup for legacy in-tree service load balancer for AWS NLB. AWS provides the documentation on how to use Network load balancing on Amazon EKS with AWS Load Balancer Controller . Network Load Balancer (NLB) \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/aws/deploy.yaml TLS termination in AWS Load Balancer (NLB) \u00b6 By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB. 
Download the deploy.yaml template wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml Edit the file and change the VPC CIDR in use for the Kubernetes cluster: proxy-real-ip-cidr: XXX.XXX.XXX/XX Change the AWS Certificate Manager (ACM) ID as well: arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX Deploy the manifest: kubectl apply -f deploy.yaml NLB Idle Timeouts \u00b6 Idle timeout value for TCP flows is 350 seconds and cannot be modified . For this reason, you need to ensure the keepalive_timeout value is configured less than 350 seconds to work as expected. By default, NGINX keepalive_timeout is set to 75s . More information with regard to timeouts can be found in the official AWS documentation GCE-GKE \u00b6 First, your user needs to have cluster-admin permissions on the cluster. This can be done with the following command: kubectl create clusterrolebinding cluster-admin-binding \\ --clusterrole cluster-admin \\ --user $(gcloud config get-value account) Then, the ingress controller can be installed like this: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml Warning For private clusters, you will need to either add a firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to port 80/tcp , 443/tcp and 10254/tcp to also allow access to port 8443/tcp . More information can be found in the Official GCP Documentation . See the GKE documentation on adding rules and the Kubernetes issue for more detail. Proxy-protocol is supported in GCE check the Official Documentations on how to enable. Azure \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml More information with regard to Azure annotations for ingress controller can be found in the official AKS documentation . Digital Ocean \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/do/deploy.yaml - By default the service object of the ingress-nginx-controller for Digital-Ocean, only configures one annotation. Its this one service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: \"true\" . While this makes the service functional, it was reported that the Digital-Ocean LoadBalancer graphs shows no data , unless a few other annotations are also configured. Some of these other annotations require values that can not be generic and hence not forced in a out-of-the-box installation. These annotations and a discussion on them is well documented in this issue . Please refer to the issue to add annotations, with values specific to user, to get graphs of the DO-LB populated with data. Scaleway \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/scw/deploy.yaml Exoscale \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager documentation . 
Oracle Cloud Infrastructure \u00b6 kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation. OVHcloud \u00b6 helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm repo update helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --create-namespace You can find the complete tutorial . Bare metal clusters \u00b6 This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as \"raw\" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu...) For quick testing, you can use a NodePort . This should work on almost every cluster, but it will typically use a port in the range 30000-32767. kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/baremetal/deploy.yaml For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see bare-metal considerations . Miscellaneous \u00b6 Checking ingress controller version \u00b6 Run /nginx-ingress-controller --version within the pod, for instance with kubectl exec : POD_NAMESPACE=ingress-nginx POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name) kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version Scope \u00b6 By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior, use the flag --watch-namespace or check the Helm chart value controller.scope to limit the controller to a single namespace. See also \u201cHow to easily install multiple instances of the Ingress NGINX controller in the same cluster\u201d for more details. Webhook network access \u00b6 Warning The controller uses an admission webhook to validate Ingress definitions. Make sure that you don't have Network policies or additional firewalls preventing connections from the API server to the ingress-nginx-controller-admission service. Certificate generation \u00b6 Attention The first time the ingress controller starts, two Jobs create the SSL Certificate used by the admission webhook. This can cause an initial delay of up to two minutes until it is possible to create and validate Ingress definitions. You can wait until it is ready to run the next command: kubectl wait --namespace ingress-nginx \\ --for=condition=ready pod \\ --selector=app.kubernetes.io/component=controller \\ --timeout=120s Running on Kubernetes versions older than 1.19 \u00b6 Ingress resources evolved over time. They started with apiVersion: extensions/v1beta1 , then moved to apiVersion: networking.k8s.io/v1beta1 and more recently to apiVersion: networking.k8s.io/v1 . 
Here is how these Ingress versions are supported in Kubernetes: - before Kubernetes 1.19, only v1beta1 Ingress resources are supported - from Kubernetes 1.19 to 1.21, both v1beta1 and v1 Ingress resources are supported - in Kubernetes 1.22 and above, only v1 Ingress resources are supported And here is how these Ingress versions are supported in Ingress-Nginx Controller: - before version 1.0, only v1beta1 Ingress resources are supported - in version 1.0 and above, only v1 Ingress resources are As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X of the Ingress-Nginx Controller (e.g. version 0.49). The Helm chart of the Ingress-Nginx Controller switched to version 1 in version 4 of the chart. In other words, if you're running Kubernetes 1.19 or earlier, you should use version 3.X of the chart (this can be done by adding --version='<4' to the helm install command ).","title":"Installation Guide"},{"location":"deploy/#installation-guide","text":"There are multiple ways to install the Ingress-Nginx Controller: with Helm , using the project repository chart; with kubectl apply , using YAML manifests; with specific addons (e.g. for minikube or MicroK8s ). On most Kubernetes clusters, the ingress controller will work without requiring any extra configuration. If you want to get started as fast as possible, you can check the quick start instructions. However, in many environments, you can improve the performance or get better logs by enabling extra features. We recommend that you check the environment-specific instructions for details about optimizing the ingress controller for your particular environment or cloud provider.","title":"Installation Guide"},{"location":"deploy/#contents","text":"Quick start Environment-specific instructions ... Docker Desktop ... Rancher Desktop ... minikube ... MicroK8s ... AWS ... GCE - GKE ... Azure ... Digital Ocean ... Scaleway ... Exoscale ... Oracle Cloud Infrastructure ... OVHcloud ... Bare-metal Miscellaneous","title":"Contents"},{"location":"deploy/#quick-start","text":"If you have Helm, you can deploy the ingress controller with the following command: helm upgrade --install ingress-nginx ingress-nginx \\ --repo https://kubernetes.github.io/ingress-nginx \\ --namespace ingress-nginx --create-namespace It will install the controller in the ingress-nginx namespace, creating that namespace if it doesn't already exist. Info This command is idempotent : if the ingress controller is not installed, it will install it, if the ingress controller is already installed, it will upgrade it. If you want a full list of values that you can set, while installing with Helm, then run: helm show values ingress-nginx --repo https://kubernetes.github.io/ingress-nginx If you don't have Helm or if you prefer to use a YAML manifest, you can run the following command instead: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml Info The YAML manifest in the command above was generated with helm template , so you will end up with almost the same resources as if you had used Helm to install the controller. Attention If you are running an old version of Kubernetes (1.18 or earlier), please read this paragraph for specific instructions. Because of api deprecations, the default manifest may not work on your cluster. 
Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider.","title":"Quick start"},{"location":"deploy/#firewall-configuration","text":"To check which ports are used by your installation of ingress-nginx, look at the output of kubectl -n ingress-nginx get pod -o yaml . In general, you need: - Port 8443 open between all hosts on which the kubernetes nodes are running. This is used for the ingress-nginx admission controller . - Port 80 (for HTTP) and/or 443 (for HTTPS) open to the public on the kubernetes nodes to which the DNS of your apps are pointing.","title":"Firewall configuration"},{"location":"deploy/#pre-flight-check","text":"A few pods should start in the ingress-nginx namespace: kubectl get pods --namespace=ingress-nginx After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready: kubectl wait --namespace ingress-nginx \\ --for=condition=ready pod \\ --selector=app.kubernetes.io/component=controller \\ --timeout=120s","title":"Pre-flight check"},{"location":"deploy/#local-testing","text":"Let's create a simple web server and the associated service: kubectl create deployment demo --image=httpd --port=80 kubectl expose deployment demo Then create an ingress resource. The following example uses a host that maps to localhost : kubectl create ingress demo-localhost --class=nginx \\ --rule=\"demo.localdev.me/*=demo:80\" Now, forward a local port to the ingress controller: kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80 Info A note on DNS & network-connection. This documentation assumes that a user has awareness of the DNS and the network routing aspects involved in using ingress. The port-forwarding mentioned above, is the easiest way to demo the working of ingress. The \"kubectl port-forward...\" command above has forwarded the port number 8080, on the localhost's tcp/ip stack, where the command was typed, to the port number 80, of the service created by the installation of ingress-nginx controller. So now, the traffic sent to port number 8080 on localhost will reach the port number 80, of the ingress-controller's service. Port-forwarding is not for a production environment use-case. But here we use port-forwarding, to simulate a HTTP request, originating from outside the cluster, to reach the service of the ingress-nginx controller, that is exposed to receive traffic from outside the cluster. This issue shows a typical DNS problem and its solution. At this point, you can access your deployment using curl ; curl --resolve demo.localdev.me:8080:127.0.0.1 http://demo.localdev.me:8080 You should see a HTML response containing text like \"It works!\" .","title":"Local testing"},{"location":"deploy/#online-testing","text":"If your Kubernetes cluster is a \"real\" cluster that supports services of type LoadBalancer , it will have allocated an external IP address or FQDN to the ingress controller. You can see that IP address or FQDN with the following command: kubectl get service ingress-nginx-controller --namespace=ingress-nginx It will be the EXTERNAL-IP field. If that field shows , this means that your Kubernetes cluster wasn't able to provision the load balancer (generally, this is because it doesn't support services of type LoadBalancer ). Once you have the external IP address (or FQDN), set up a DNS record pointing to it. Then you can create an ingress resource. 
The following example assumes that you have set up a DNS record for www.demo.io : kubectl create ingress demo --class=nginx \\ --rule=\"www.demo.io/*=demo:80\" Alternatively, the above command can be rewritten as follows for the --rule command and below. kubectl create ingress demo --class=nginx \\ --rule www.demo.io/=demo:80 You should then be able to see the \"It works!\" page when you connect to http://www.demo.io/. Congratulations, you are serving a public website hosted on a Kubernetes cluster! \ud83c\udf89","title":"Online testing"},{"location":"deploy/#environment-specific-instructions","text":"","title":"Environment-specific instructions"},{"location":"deploy/#local-development-clusters","text":"","title":"Local development clusters"},{"location":"deploy/#minikube","text":"The ingress controller can be installed through minikube's addons system: minikube addons enable ingress","title":"minikube"},{"location":"deploy/#microk8s","text":"The ingress controller can be installed through MicroK8s's addons system: microk8s enable ingress Please check the MicroK8s documentation page for details.","title":"MicroK8s"},{"location":"deploy/#docker-desktop","text":"Kubernetes is available in Docker Desktop: Mac, from version 18.06.0-ce Windows, from version 18.06.0-ce First, make sure that Kubernetes is enabled in the Docker settings. The command kubectl get nodes should show a single node called docker-desktop . The ingress controller can be installed on Docker Desktop using the default quick start instructions. On most systems, if you don't have any other service of type LoadBalancer bound to port 80, the ingress controller will be assigned the EXTERNAL-IP of localhost , which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the kubectl port-forward method described in the local testing section .","title":"Docker Desktop"},{"location":"deploy/#rancher-desktop","text":"Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop. Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use Ingress-Nginx Controller in place of the default Traefik, disable Traefik from Preference > Kubernetes menu. Once traefik is disabled, the Ingress-Nginx Controller can be installed on Rancher Desktop using the default quick start instructions. Follow the instructions described in the local testing section to try a sample.","title":"Rancher Desktop"},{"location":"deploy/#cloud-deployments","text":"If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the externalTrafficPolicy of the ingress controller Service to Local (instead of the default Cluster ) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local to the helm install or helm upgrade command. Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. --set controller.config.use-proxy-protocol=true ) and in the cloud provider's load balancer configuration to function correctly. 
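For illustration, the two Helm options mentioned above can be combined into a single install or upgrade command. This is only a sketch based on the quick start command; remember that the PROXY protocol must also be enabled on the cloud load balancer itself, otherwise the controller will receive malformed requests.

# Sketch: preserve client IPs (Local traffic policy) and accept PROXY protocol, set at install/upgrade time
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.externalTrafficPolicy=Local \
  --set controller.config.use-proxy-protocol=true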
In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.","title":"Cloud deployments"},{"location":"deploy/#aws","text":"In AWS, we use a Network load balancer (NLB) to expose the Ingress-Nginx Controller behind a Service of Type=LoadBalancer . Info The provided templates illustrate the setup for legacy in-tree service load balancer for AWS NLB. AWS provides the documentation on how to use Network load balancing on Amazon EKS with AWS Load Balancer Controller .","title":"AWS"},{"location":"deploy/#network-load-balancer-nlb","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/aws/deploy.yaml","title":"Network Load Balancer (NLB)"},{"location":"deploy/#tls-termination-in-aws-load-balancer-nlb","text":"By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB. Download the deploy.yaml template wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml Edit the file and change the VPC CIDR in use for the Kubernetes cluster: proxy-real-ip-cidr: XXX.XXX.XXX/XX Change the AWS Certificate Manager (ACM) ID as well: arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX Deploy the manifest: kubectl apply -f deploy.yaml","title":"TLS termination in AWS Load Balancer (NLB)"},{"location":"deploy/#nlb-idle-timeouts","text":"Idle timeout value for TCP flows is 350 seconds and cannot be modified . For this reason, you need to ensure the keepalive_timeout value is configured less than 350 seconds to work as expected. By default, NGINX keepalive_timeout is set to 75s . More information with regard to timeouts can be found in the official AWS documentation","title":"NLB Idle Timeouts"},{"location":"deploy/#gce-gke","text":"First, your user needs to have cluster-admin permissions on the cluster. This can be done with the following command: kubectl create clusterrolebinding cluster-admin-binding \\ --clusterrole cluster-admin \\ --user $(gcloud config get-value account) Then, the ingress controller can be installed like this: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml Warning For private clusters, you will need to either add a firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to port 80/tcp , 443/tcp and 10254/tcp to also allow access to port 8443/tcp . More information can be found in the Official GCP Documentation . See the GKE documentation on adding rules and the Kubernetes issue for more detail. 
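As a rough sketch only (the rule name, network, control-plane CIDR and node tag below are placeholders that you must replace with the values of your own cluster), such a firewall rule could be created with gcloud:

# Allow the GKE control plane to reach the admission webhook port on the worker nodes
gcloud compute firewall-rules create allow-master-to-ingress-nginx-webhook \
  --network <cluster-network> \
  --source-ranges <master-ipv4-cidr> \
  --target-tags <node-tag> \
  --allow tcp:8443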
Proxy-protocol is supported in GCE check the Official Documentations on how to enable.","title":"GCE-GKE"},{"location":"deploy/#azure","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml More information with regard to Azure annotations for ingress controller can be found in the official AKS documentation .","title":"Azure"},{"location":"deploy/#digital-ocean","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/do/deploy.yaml - By default the service object of the ingress-nginx-controller for Digital-Ocean, only configures one annotation. Its this one service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: \"true\" . While this makes the service functional, it was reported that the Digital-Ocean LoadBalancer graphs shows no data , unless a few other annotations are also configured. Some of these other annotations require values that can not be generic and hence not forced in a out-of-the-box installation. These annotations and a discussion on them is well documented in this issue . Please refer to the issue to add annotations, with values specific to user, to get graphs of the DO-LB populated with data.","title":"Digital Ocean"},{"location":"deploy/#scaleway","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/scw/deploy.yaml","title":"Scaleway"},{"location":"deploy/#exoscale","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager documentation .","title":"Exoscale"},{"location":"deploy/#oracle-cloud-infrastructure","text":"kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation.","title":"Oracle Cloud Infrastructure"},{"location":"deploy/#ovhcloud","text":"helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx helm repo update helm -n ingress-nginx install ingress-nginx ingress-nginx/ingress-nginx --create-namespace You can find the complete tutorial .","title":"OVHcloud"},{"location":"deploy/#bare-metal-clusters","text":"This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as \"raw\" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu...) For quick testing, you can use a NodePort . This should work on almost every cluster, but it will typically use a port in the range 30000-32767. 
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/baremetal/deploy.yaml For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see bare-metal considerations .","title":"Bare metal clusters"},{"location":"deploy/#miscellaneous","text":"","title":"Miscellaneous"},{"location":"deploy/#checking-ingress-controller-version","text":"Run /nginx-ingress-controller --version within the pod, for instance with kubectl exec : POD_NAMESPACE=ingress-nginx POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name) kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version","title":"Checking ingress controller version"},{"location":"deploy/#scope","text":"By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior, use the flag --watch-namespace or check the Helm chart value controller.scope to limit the controller to a single namespace. See also \u201cHow to easily install multiple instances of the Ingress NGINX controller in the same cluster\u201d for more details.","title":"Scope"},{"location":"deploy/#webhook-network-access","text":"Warning The controller uses an admission webhook to validate Ingress definitions. Make sure that you don't have Network policies or additional firewalls preventing connections from the API server to the ingress-nginx-controller-admission service.","title":"Webhook network access"},{"location":"deploy/#certificate-generation","text":"Attention The first time the ingress controller starts, two Jobs create the SSL Certificate used by the admission webhook. This can cause an initial delay of up to two minutes until it is possible to create and validate Ingress definitions. You can wait until it is ready to run the next command: kubectl wait --namespace ingress-nginx \\ --for=condition=ready pod \\ --selector=app.kubernetes.io/component=controller \\ --timeout=120s","title":"Certificate generation"},{"location":"deploy/#running-on-kubernetes-versions-older-than-119","text":"Ingress resources evolved over time. They started with apiVersion: extensions/v1beta1 , then moved to apiVersion: networking.k8s.io/v1beta1 and more recently to apiVersion: networking.k8s.io/v1 . Here is how these Ingress versions are supported in Kubernetes: - before Kubernetes 1.19, only v1beta1 Ingress resources are supported - from Kubernetes 1.19 to 1.21, both v1beta1 and v1 Ingress resources are supported - in Kubernetes 1.22 and above, only v1 Ingress resources are supported And here is how these Ingress versions are supported in Ingress-Nginx Controller: - before version 1.0, only v1beta1 Ingress resources are supported - in version 1.0 and above, only v1 Ingress resources are As a result, if you're running Kubernetes 1.19 or later, you should be able to use the latest version of the NGINX Ingress Controller; but if you're using an old version of Kubernetes (1.18 or earlier) you will have to use version 0.X of the Ingress-Nginx Controller (e.g. version 0.49). The Helm chart of the Ingress-Nginx Controller switched to version 1 in version 4 of the chart. 
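For reference, here is a minimal Ingress using the current networking.k8s.io/v1 API, reusing the demo Service, host and ingress class from the examples earlier on this page. It is a sketch, roughly equivalent to what the imperative kubectl create ingress command shown above produces:

# Sketch: a networking.k8s.io/v1 Ingress for the demo Service used earlier on this page
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx
  rules:
    - host: www.demo.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo
                port:
                  number: 80
EOF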
In other words, if you're running Kubernetes 1.19 or earlier, you should use version 3.X of the chart (this can be done by adding --version='<4' to the helm install command ).","title":"Running on Kubernetes versions older than 1.19"},{"location":"deploy/baremetal/","text":"Bare-metal considerations \u00b6 In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the Ingress-Nginx Controller to external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers. The rest of this document describes a few recommended approaches to deploying the Ingress-Nginx Controller inside a Kubernetes cluster running on bare-metal. A pure software solution: MetalLB \u00b6 MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster. This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes . In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details. Note The description of other supported configuration modes is off-scope for this document. Warning MetalLB is currently in beta . Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly. MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions, and that the Ingress-Nginx Controller was installed using the steps described in the quickstart section of the installation guide . MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined through IPAddressPool objects in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use, you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 After creating the following objects, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly. 
--- apiVersion : metallb.io/v1beta1 kind : IPAddressPool metadata : name : default namespace : metallb-system spec : addresses : - 203.0.113.10-203.0.113.15 autoAssign : true --- apiVersion : metallb.io/v1beta1 kind : L2Advertisement metadata : name : default namespace : metallb-system spec : ipAddressPools : - default $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx LoadBalancer 10.0.220.217 203.0.113.10 80:30100/TCP,443:30101/TCP As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service: $ curl -D- http://203.0.113.10 -H 'Host: myapp.example.com' HTTP/1.1 200 OK Server: nginx/1.15.2 Tip In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more details in Traffic policies as well as in the next section. Over a NodePort Service \u00b6 Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide . Info A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services . In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests. Example Given the NodePort 30100 allocated to the ingress-nginx Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 a client would reach an Ingress with host: myapp.example.com at http://myapp.example.com:30100 , where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address. Impact on the host system While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require. This practice is therefore discouraged . See the other approaches proposed in this page for alternatives. This approach has a few other limitations one ought to be aware of: Source IP address Services of type NodePort perform source address translation by default. This means the source IP of a HTTP request is always the IP address of the Kubernetes node that received the request from the perspective of NGINX. 
The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local ( example ). Warning This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the Ingress-Nginx Controller should be scheduled or not scheduled. Example In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 with a ingress-nginx-controller Deployment composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3 ingress-nginx-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2 Requests sent to host-2 and host-3 would be forwarded to NGINX and original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node. Ingress status Because NodePort Services do not get a LoadBalancerIP assigned by definition, the Ingress-Nginx Controller does not update the status of Ingress objects it manages . $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Despite the fact there is no load balancer providing a public IP address to the Ingress-Nginx Controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service. Warning There is more to setting externalIPs than just enabling the Ingress-Nginx Controller to update the status of Ingress objects. Please read about this option in the Services page of official Kubernetes documentation as well as the section about External IPs in this document for more information. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 one could edit the ingress-nginx Service and add the following field to the object spec spec : externalIPs : - 203.0.113.1 - 203.0.113.2 - 203.0.113.3 which would in turn be reflected on Ingress objects as follows: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80 Redirects As NGINX is not aware of the port translation operated by the NodePort Service , backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort. Example Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain , are generated without NodePort: $ curl -D- http://myapp.example.com:30100 ` HTTP/1.1 308 Permanent Redirect Server: nginx/1.15.2 Location: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect Via the host network \u00b6 In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. 
The benefit of this approach is that the Ingress-Nginx Controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services. Note This approach does not leverage any Service object to expose the Ingress-Nginx Controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it . This can be achieved by enabling the hostNetwork option in the Pods' spec. template : spec : hostNetwork : true Security considerations Enabling this option exposes every system daemon to the Ingress-Nginx Controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully. Example Consider this ingress-nginx-controller Deployment composed of 2 replicas, NGINX Pods inherit from the IP address of their host instead of an internal Pod IP. $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 One major limitation of this deployment approach is that only a single Ingress-Nginx Controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to such situation fail with the following event: $ kubectl -n ingress-nginx describe pod ... Events: Type Reason From Message ---- ------ ---- ------- Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports. One way to ensure only schedulable Pods are created is to deploy the Ingress-Nginx Controller as a DaemonSet instead of a traditional Deployment. Info A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods . For more information, see DaemonSet . Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion. Like with NodePorts, this approach has a few quirks it is important to be aware of. DNS resolution Pods configured with hostNetwork: true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS ), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet . Consider using this setting if NGINX is expected to resolve internal names for any reason. Ingress status Because there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank. $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the Ingress-Nginx Controller. 
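If the controller was installed with the Helm chart, one way to pass this flag is through the chart's controller.extraArgs value. This is only a sketch; the exact mechanism depends on how you deployed the controller (for a manifest-based install, add the flag to the container arguments of the DaemonSet instead).

# Sketch: add --report-node-internal-ip-address=true to the controller arguments via Helm
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.extraArgs.report-node-internal-ip-address=true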
Example Given a ingress-nginx-controller DaemonSet composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 the controller sets the status of all Ingress objects it manages to the following value: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.2,203.0.113.3 80 Note Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. See Command line arguments . Using a self-provisioned edge \u00b6 Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. HAproxy ) and is usually managed outside of the Kubernetes landscape by operations teams. Such deployment builds upon the NodePort Service described above in Over a NodePort Service , with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address. On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below: External IPs \u00b6 Source IP address This method does not allow preserving the source IP of HTTP requests in any manner, it is therefore not recommended to use it despite its apparent simplicity. The externalIPs Service option was previously mentioned in the NodePort section. As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node . Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 and the following ingress-nginx NodePort Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port: spec : externalIPs : - 203.0.113.2 - 203.0.113.3 $ curl -D- http://myapp.example.com:30100 HTTP/1.1 200 OK Server: nginx/1.15.2 $ curl -D- http://myapp.example.com HTTP/1.1 200 OK Server: nginx/1.15.2 We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.","title":"Bare-metal considerations"},{"location":"deploy/baremetal/#bare-metal-considerations","text":"In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the Ingress-Nginx Controller to external clients and, indirectly, to any application running inside the cluster. 
Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers. The rest of this document describes a few recommended approaches to deploying the Ingress-Nginx Controller inside a Kubernetes cluster running on bare-metal.","title":"Bare-metal considerations"},{"location":"deploy/baremetal/#a-pure-software-solution-metallb","text":"MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster. This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes . In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details. Note The description of other supported configuration modes is off-scope for this document. Warning MetalLB is currently in beta . Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly. MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions, and that the Ingress-Nginx Controller was installed using the steps described in the quickstart section of the installation guide . MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined through IPAddressPool objects in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use, you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 After creating the following objects, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly. --- apiVersion : metallb.io/v1beta1 kind : IPAddressPool metadata : name : default namespace : metallb-system spec : addresses : - 203.0.113.10-203.0.113.15 autoAssign : true --- apiVersion : metallb.io/v1beta1 kind : L2Advertisement metadata : name : default namespace : metallb-system spec : ipAddressPools : - default $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx LoadBalancer 10.0.220.217 203.0.113.10 80:30100/TCP,443:30101/TCP As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service: $ curl -D- http://203.0.113.10 -H 'Host: myapp.example.com' HTTP/1.1 200 OK Server: nginx/1.15.2 Tip In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. 
Traffic policies are described in more details in Traffic policies as well as in the next section.","title":"A pure software solution: MetalLB"},{"location":"deploy/baremetal/#over-a-nodeport-service","text":"Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide . Info A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services . In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests. Example Given the NodePort 30100 allocated to the ingress-nginx Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 a client would reach an Ingress with host: myapp.example.com at http://myapp.example.com:30100 , where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address. Impact on the host system While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require. This practice is therefore discouraged . See the other approaches proposed in this page for alternatives. This approach has a few other limitations one ought to be aware of: Source IP address Services of type NodePort perform source address translation by default. This means the source IP of a HTTP request is always the IP address of the Kubernetes node that received the request from the perspective of NGINX. The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local ( example ). Warning This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the Ingress-Nginx Controller should be scheduled or not scheduled. 
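One way to set externalTrafficPolicy as recommended above is a merge patch against the existing Service. This is a sketch, assuming the Service is named ingress-nginx and lives in the ingress-nginx namespace as in the examples on this page:

# Sketch: switch the Service to the Local traffic policy so client source IPs are preserved
kubectl -n ingress-nginx patch svc ingress-nginx \
  --type merge \
  -p '{"spec": {"externalTrafficPolicy": "Local"}}'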
Example In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 with a ingress-nginx-controller Deployment composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3 ingress-nginx-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2 Requests sent to host-2 and host-3 would be forwarded to NGINX and original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node. Ingress status Because NodePort Services do not get a LoadBalancerIP assigned by definition, the Ingress-Nginx Controller does not update the status of Ingress objects it manages . $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Despite the fact there is no load balancer providing a public IP address to the Ingress-Nginx Controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service. Warning There is more to setting externalIPs than just enabling the Ingress-Nginx Controller to update the status of Ingress objects. Please read about this option in the Services page of official Kubernetes documentation as well as the section about External IPs in this document for more information. Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 one could edit the ingress-nginx Service and add the following field to the object spec spec : externalIPs : - 203.0.113.1 - 203.0.113.2 - 203.0.113.3 which would in turn be reflected on Ingress objects as follows: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80 Redirects As NGINX is not aware of the port translation operated by the NodePort Service , backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort. Example Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain , are generated without NodePort: $ curl -D- http://myapp.example.com:30100 ` HTTP/1.1 308 Permanent Redirect Server: nginx/1.15.2 Location: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect","title":"Over a NodePort Service"},{"location":"deploy/baremetal/#via-the-host-network","text":"In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure ingress-nginx Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the Ingress-Nginx Controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services. Note This approach does not leverage any Service object to expose the Ingress-Nginx Controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it . 
This can be achieved by enabling the hostNetwork option in the Pods' spec. template : spec : hostNetwork : true Security considerations Enabling this option exposes every system daemon to the Ingress-Nginx Controller on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully. Example Consider this ingress-nginx-controller Deployment composed of 2 replicas, NGINX Pods inherit from the IP address of their host instead of an internal Pod IP. $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 One major limitation of this deployment approach is that only a single Ingress-Nginx Controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to such situation fail with the following event: $ kubectl -n ingress-nginx describe pod ... Events: Type Reason From Message ---- ------ ---- ------- Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports. One way to ensure only schedulable Pods are created is to deploy the Ingress-Nginx Controller as a DaemonSet instead of a traditional Deployment. Info A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods . For more information, see DaemonSet . Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion. Like with NodePorts, this approach has a few quirks it is important to be aware of. DNS resolution Pods configured with hostNetwork: true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS ), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet . Consider using this setting if NGINX is expected to resolve internal names for any reason. Ingress status Because there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank. $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the Ingress-Nginx Controller. Example Given a ingress-nginx-controller DaemonSet composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 ingress-nginx-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3 ingress-nginx-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2 the controller sets the status of all Ingress objects it manages to the following value: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.2,203.0.113.3 80 Note Alternatively, it is possible to override the address written to Ingress objects using the --publish-status-address flag. 
See Command line arguments .","title":"Via the host network"},{"location":"deploy/baremetal/#using-a-self-provisioned-edge","text":"Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. HAproxy ) and is usually managed outside of the Kubernetes landscape by operations teams. Such deployment builds upon the NodePort Service described above in Over a NodePort Service , with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address. On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below:","title":"Using a self-provisioned edge"},{"location":"deploy/baremetal/#external-ips","text":"Source IP address This method does not allow preserving the source IP of HTTP requests in any manner, it is therefore not recommended to use it despite its apparent simplicity. The externalIPs Service option was previously mentioned in the NodePort section. As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node . Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is ) $ kubectl get node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 and the following ingress-nginx NodePort Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port: spec : externalIPs : - 203.0.113.2 - 203.0.113.3 $ curl -D- http://myapp.example.com:30100 HTTP/1.1 200 OK Server: nginx/1.15.2 $ curl -D- http://myapp.example.com HTTP/1.1 200 OK Server: nginx/1.15.2 We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.","title":"External IPs"},{"location":"deploy/hardening-guide/","text":"Hardening Guide \u00b6 Overview \u00b6 There are several ways to do hardening and securing of nginx. In this documentation two guides are used, the guides are overlapping in some points: nginx CIS Benchmark cipherlist.eu (one of many forks of the now dead project cipherli.st) This guide describes, what of the different configurations described in those guides is already implemented as default in the nginx implementation of kubernetes ingress, what needs to be configured, what is obsolete due to the fact that the nginx is running as container (the CIS benchmark relates to a non-containerized installation) and what is difficult or not possible. Be aware that this is only a guide and you are responsible for your own implementation. Some of the configurations may lead to have specific clients unable to reach your site or similar consequences. 
This guide refers to chapters in the CIS Benchmark. For full explanation you should refer to the benchmark document itself Configuration Guide \u00b6 Chapter in CIS benchmark Status Default Action to do if not default 1 Initial Setup 1.1 Installation 1.1.1 Ensure NGINX is installed (Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.1.2 Ensure NGINX is installed from source (Not Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.2 Configure Software Updates 1.2.1 Ensure package manager repositories are properly configured (Not Scored) OK done via helm, nginx version could be overwritten, however compatibility is not ensured then 1.2.2 Ensure the latest software package is installed (Not Scored) ACTION NEEDED done via helm, nginx version could be overwritten, however compatibility is not ensured then Plan for periodic updates 2 Basic Configuration 2.1 Minimize NGINX Modules 2.1.1 Ensure only required modules are installed (Not Scored) OK Already only needed modules are installed, however proposals for further reduction are welcome 2.1.2 Ensure HTTP WebDAV module is not installed (Scored) OK 2.1.3 Ensure modules with gzip functionality are disabled (Scored) OK 2.1.4 Ensure the autoindex module is disabled (Scored) OK No autoindex configs so far in ingress defaults 2.2 Account Security 2.2.1 Ensure that NGINX is run using a non-privileged, dedicated service account (Not Scored) OK Pod configured as user www-data: See this line in helm chart values . Compiled with user www-data: See this line in build script 2.2.2 Ensure the NGINX service account is locked (Scored) OK Docker design ensures this 2.2.3 Ensure the NGINX service account has an invalid shell (Scored) OK Shell is nologin: see this line in build script 2.3 Permissions and Ownership 2.3.1 Ensure NGINX directories and files are owned by root (Scored) OK Obsolete through docker-design and ingress controller needs to update the configs dynamically 2.3.2 Ensure access to NGINX directories and files is restricted (Scored) OK See previous answer 2.3.3 Ensure the NGINX process ID (PID) file is secured (Scored) OK No PID-File due to docker design 2.3.4 Ensure the core dump directory is secured (Not Scored) OK No working_directory configured by default 2.4 Network Configuration 2.4.1 Ensure NGINX only listens for network connections on authorized ports (Not Scored) OK Ensured by automatic nginx.conf configuration 2.4.2 Ensure requests for unknown host names are rejected (Not Scored) OK They are not rejected but send to the \"default backend\" delivering appropriate errors (mostly 404) 2.4.3 Ensure keepalive_timeout is 10 seconds or less, but not 0 (Scored) ACTION NEEDED Default is 75s configure keep-alive to 10 seconds according to this documentation 2.4.4 Ensure send_timeout is set to 10 seconds or less, but not 0 (Scored) RISK TO BE ACCEPTED Not configured, however the nginx default is 60s Not configurable 2.5 Information Disclosure 2.5.1 Ensure server_tokens directive is set to off (Scored) OK server_tokens is configured to off by default 2.5.2 Ensure default error and index.html pages do not reference NGINX (Scored) ACTION NEEDED 404 shows no version at all, 503 and 403 show \"nginx\", which is hardcoded see this line in nginx source code configure custom error pages at least for 403, 404 and 503 and 500 2.5.3 Ensure hidden file serving is disabled (Not Scored) ACTION NEEDED config not set configure a config.server-snippet Snippet, but beware of .well-known 
challenges or similar. Refer to the benchmark here please 2.5.4 Ensure the NGINX reverse proxy does not enable information disclosure (Scored) ACTION NEEDED hide not configured configure hide-headers with array of \"X-Powered-By\" and \"Server\": according to this documentation 3 Logging 3.1 Ensure detailed logging is enabled (Not Scored) OK nginx ingress has a very detailed log format by default 3.2 Ensure access logging is enabled (Scored) OK Access log is enabled by default 3.3 Ensure error logging is enabled and set to the info logging level (Scored) OK Error log is configured by default. The log level does not matter, because it is all sent to STDOUT anyway 3.4 Ensure log files are rotated (Scored) OBSOLETE Log file handling is not part of the nginx ingress and should be handled separately 3.5 Ensure error logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.6 Ensure access logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.7 Ensure proxies pass source IP information (Scored) OK Headers are set by default 4 Encryption 4.1 TLS / SSL Configuration 4.1.1 Ensure HTTP is redirected to HTTPS (Scored) OK Redirect to TLS is default 4.1.2 Ensure a trusted certificate and trust chain is installed (Not Scored) ACTION NEEDED For installing certs there are enough manuals in the web. A good way is to use lets encrypt through cert-manager Install proper certificates or use lets encrypt with cert-manager 4.1.3 Ensure private key permissions are restricted (Scored) ACTION NEEDED See previous answer 4.1.4 Ensure only modern TLS protocols are used (Scored) OK/ACTION NEEDED Default is TLS 1.2 + 1.3, while this is okay for CIS Benchmark, cipherlist.eu only recommends 1.3. This may cut off old OS's Set controller.config.ssl-protocols to \"TLSv1.3\" 4.1.5 Disable weak ciphers (Scored) ACTION NEEDED Default ciphers are already good, but cipherlist.eu recommends even stronger ciphers Set controller.config.ssl-ciphers to \"EECDH+AESGCM:EDH+AESGCM\" 4.1.6 Ensure custom Diffie-Hellman parameters are used (Scored) ACTION NEEDED No custom DH parameters are generated Generate dh parameters for each ingress deployment you use - see here for a how to 4.1.7 Ensure Online Certificate Status Protocol (OCSP) stapling is enabled (Scored) ACTION NEEDED Not enabled set via this configuration parameter 4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored) OK HSTS is enabled by default 4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored) ACTION NEEDED / RISK TO BE ACCEPTED HKPK not enabled by default If lets encrypt is not used, set correct HPKP header. There are several ways to implement this - with the helm charts it works via controller.add-headers. 
If lets encrypt is used, this is complicated, a solution here is yet unknown 4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, manual is here 4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, see configuration here 4.1.12 Ensure your domain is preloaded (Not Scored) ACTION NEEDED Preload is not active by default Set controller.config.hsts-preload to true 4.1.13 Ensure session resumption is disabled to enable perfect forward security (Scored) OK Session tickets are disabled by default 4.1.14 Ensure HTTP/2.0 is used (Not Scored) OK http2 is set by default 5 Request Filtering and Restrictions 5.1 Access Control 5.1.1 Ensure allow and deny filters limit access to specific IP addresses (Not Scored) OK/ACTION NEEDED Depends on use case, geo ip module is compiled into Ingress-Nginx Controller, there are several ways to use it If needed set IP restrictions via annotations or work with config snippets (be careful with lets-encrypt-http-challenge!) 5.1.2 Ensure only whitelisted HTTP methods are allowed (Not Scored) OK/ACTION NEEDED Depends on use case If required it can be set via config snippet 5.2 Request Limits 5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) ACTION NEEDED Default timeout is 60s Set via this configuration parameter and respective body equivalent 5.2.2 Ensure the maximum request body size is set correctly (Scored) ACTION NEEDED Default is 1m set via this configuration parameter 5.2.3 Ensure the maximum buffer size for URIs is defined (Scored) ACTION NEEDED Default is 4 8k Set via this configuration parameter 5.2.4 Ensure the number of connections per IP address is limited (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.2.5 Ensure rate limits by IP address are set (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.3 Browser Security 5.3.1 Ensure X-Frame-Options header is configured and enabled (Scored) ACTION NEEDED Header not set by default Several ways to implement this - with the helm charts it works via controller.add-headers 5.3.2 Ensure X-Content-Type-Options header is configured and enabled (Scored) ACTION NEEDED See previous answer See previous answer 5.3.3 Ensure the X-XSS-Protection Header is enabled and configured properly (Scored) ACTION NEEDED See previous answer See previous answer 5.3.4 Ensure that Content Security Policy (CSP) is enabled and configured properly (Not Scored) ACTION NEEDED See previous answer See previous answer 5.3.5 Ensure the Referrer Policy is enabled and configured properly (Not Scored) ACTION NEEDED Depends on application. 
It should be handled in the applications webserver itself, not in the load balancing ingress check backend webserver 6 Mandatory Access Control n/a too high level, depends on backends @media only screen and (min-width: 768px) { td:nth-child(1){ white-space:normal !important; } .md-typeset table:not([class]) td { padding: .2rem .3rem; } }","title":"Hardening guide"},{"location":"deploy/hardening-guide/#hardening-guide","text":"","title":"Hardening Guide"},{"location":"deploy/hardening-guide/#overview","text":"There are several ways to do hardening and securing of nginx. In this documentation two guides are used, the guides are overlapping in some points: nginx CIS Benchmark cipherlist.eu (one of many forks of the now dead project cipherli.st) This guide describes, what of the different configurations described in those guides is already implemented as default in the nginx implementation of kubernetes ingress, what needs to be configured, what is obsolete due to the fact that the nginx is running as container (the CIS benchmark relates to a non-containerized installation) and what is difficult or not possible. Be aware that this is only a guide and you are responsible for your own implementation. Some of the configurations may lead to have specific clients unable to reach your site or similar consequences. This guide refers to chapters in the CIS Benchmark. For full explanation you should refer to the benchmark document itself","title":"Overview"},{"location":"deploy/hardening-guide/#configuration-guide","text":"Chapter in CIS benchmark Status Default Action to do if not default 1 Initial Setup 1.1 Installation 1.1.1 Ensure NGINX is installed (Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.1.2 Ensure NGINX is installed from source (Not Scored) OK done through helm charts / following documentation to deploy nginx ingress 1.2 Configure Software Updates 1.2.1 Ensure package manager repositories are properly configured (Not Scored) OK done via helm, nginx version could be overwritten, however compatibility is not ensured then 1.2.2 Ensure the latest software package is installed (Not Scored) ACTION NEEDED done via helm, nginx version could be overwritten, however compatibility is not ensured then Plan for periodic updates 2 Basic Configuration 2.1 Minimize NGINX Modules 2.1.1 Ensure only required modules are installed (Not Scored) OK Already only needed modules are installed, however proposals for further reduction are welcome 2.1.2 Ensure HTTP WebDAV module is not installed (Scored) OK 2.1.3 Ensure modules with gzip functionality are disabled (Scored) OK 2.1.4 Ensure the autoindex module is disabled (Scored) OK No autoindex configs so far in ingress defaults 2.2 Account Security 2.2.1 Ensure that NGINX is run using a non-privileged, dedicated service account (Not Scored) OK Pod configured as user www-data: See this line in helm chart values . 
Compiled with user www-data: See this line in build script 2.2.2 Ensure the NGINX service account is locked (Scored) OK Docker design ensures this 2.2.3 Ensure the NGINX service account has an invalid shell (Scored) OK Shell is nologin: see this line in build script 2.3 Permissions and Ownership 2.3.1 Ensure NGINX directories and files are owned by root (Scored) OK Obsolete through docker-design and ingress controller needs to update the configs dynamically 2.3.2 Ensure access to NGINX directories and files is restricted (Scored) OK See previous answer 2.3.3 Ensure the NGINX process ID (PID) file is secured (Scored) OK No PID-File due to docker design 2.3.4 Ensure the core dump directory is secured (Not Scored) OK No working_directory configured by default 2.4 Network Configuration 2.4.1 Ensure NGINX only listens for network connections on authorized ports (Not Scored) OK Ensured by automatic nginx.conf configuration 2.4.2 Ensure requests for unknown host names are rejected (Not Scored) OK They are not rejected but send to the \"default backend\" delivering appropriate errors (mostly 404) 2.4.3 Ensure keepalive_timeout is 10 seconds or less, but not 0 (Scored) ACTION NEEDED Default is 75s configure keep-alive to 10 seconds according to this documentation 2.4.4 Ensure send_timeout is set to 10 seconds or less, but not 0 (Scored) RISK TO BE ACCEPTED Not configured, however the nginx default is 60s Not configurable 2.5 Information Disclosure 2.5.1 Ensure server_tokens directive is set to off (Scored) OK server_tokens is configured to off by default 2.5.2 Ensure default error and index.html pages do not reference NGINX (Scored) ACTION NEEDED 404 shows no version at all, 503 and 403 show \"nginx\", which is hardcoded see this line in nginx source code configure custom error pages at least for 403, 404 and 503 and 500 2.5.3 Ensure hidden file serving is disabled (Not Scored) ACTION NEEDED config not set configure a config.server-snippet Snippet, but beware of .well-known challenges or similar. Refer to the benchmark here please 2.5.4 Ensure the NGINX reverse proxy does not enable information disclosure (Scored) ACTION NEEDED hide not configured configure hide-headers with array of \"X-Powered-By\" and \"Server\": according to this documentation 3 Logging 3.1 Ensure detailed logging is enabled (Not Scored) OK nginx ingress has a very detailed log format by default 3.2 Ensure access logging is enabled (Scored) OK Access log is enabled by default 3.3 Ensure error logging is enabled and set to the info logging level (Scored) OK Error log is configured by default. The log level does not matter, because it is all sent to STDOUT anyway 3.4 Ensure log files are rotated (Scored) OBSOLETE Log file handling is not part of the nginx ingress and should be handled separately 3.5 Ensure error logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.6 Ensure access logs are sent to a remote syslog server (Not Scored) OBSOLETE See previous answer 3.7 Ensure proxies pass source IP information (Scored) OK Headers are set by default 4 Encryption 4.1 TLS / SSL Configuration 4.1.1 Ensure HTTP is redirected to HTTPS (Scored) OK Redirect to TLS is default 4.1.2 Ensure a trusted certificate and trust chain is installed (Not Scored) ACTION NEEDED For installing certs there are enough manuals in the web. 
A good way is to use lets encrypt through cert-manager Install proper certificates or use lets encrypt with cert-manager 4.1.3 Ensure private key permissions are restricted (Scored) ACTION NEEDED See previous answer 4.1.4 Ensure only modern TLS protocols are used (Scored) OK/ACTION NEEDED Default is TLS 1.2 + 1.3, while this is okay for CIS Benchmark, cipherlist.eu only recommends 1.3. This may cut off old OS's Set controller.config.ssl-protocols to \"TLSv1.3\" 4.1.5 Disable weak ciphers (Scored) ACTION NEEDED Default ciphers are already good, but cipherlist.eu recommends even stronger ciphers Set controller.config.ssl-ciphers to \"EECDH+AESGCM:EDH+AESGCM\" 4.1.6 Ensure custom Diffie-Hellman parameters are used (Scored) ACTION NEEDED No custom DH parameters are generated Generate dh parameters for each ingress deployment you use - see here for a how to 4.1.7 Ensure Online Certificate Status Protocol (OCSP) stapling is enabled (Scored) ACTION NEEDED Not enabled set via this configuration parameter 4.1.8 Ensure HTTP Strict Transport Security (HSTS) is enabled (Scored) OK HSTS is enabled by default 4.1.9 Ensure HTTP Public Key Pinning is enabled (Not Scored) ACTION NEEDED / RISK TO BE ACCEPTED HKPK not enabled by default If lets encrypt is not used, set correct HPKP header. There are several ways to implement this - with the helm charts it works via controller.add-headers. If lets encrypt is used, this is complicated, a solution here is yet unknown 4.1.10 Ensure upstream server traffic is authenticated with a client certificate (Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, manual is here 4.1.11 Ensure the upstream traffic server certificate is trusted (Not Scored) DEPENDS ON BACKEND Highly dependent on backends, not every backend allows configuring this, can also be mitigated via a service mesh If backend allows it, see configuration here 4.1.12 Ensure your domain is preloaded (Not Scored) ACTION NEEDED Preload is not active by default Set controller.config.hsts-preload to true 4.1.13 Ensure session resumption is disabled to enable perfect forward security (Scored) OK Session tickets are disabled by default 4.1.14 Ensure HTTP/2.0 is used (Not Scored) OK http2 is set by default 5 Request Filtering and Restrictions 5.1 Access Control 5.1.1 Ensure allow and deny filters limit access to specific IP addresses (Not Scored) OK/ACTION NEEDED Depends on use case, geo ip module is compiled into Ingress-Nginx Controller, there are several ways to use it If needed set IP restrictions via annotations or work with config snippets (be careful with lets-encrypt-http-challenge!) 
5.1.2 Ensure only whitelisted HTTP methods are allowed (Not Scored) OK/ACTION NEEDED Depends on use case If required it can be set via config snippet 5.2 Request Limits 5.2.1 Ensure timeout values for reading the client header and body are set correctly (Scored) ACTION NEEDED Default timeout is 60s Set via this configuration parameter and respective body equivalent 5.2.2 Ensure the maximum request body size is set correctly (Scored) ACTION NEEDED Default is 1m set via this configuration parameter 5.2.3 Ensure the maximum buffer size for URIs is defined (Scored) ACTION NEEDED Default is 4 8k Set via this configuration parameter 5.2.4 Ensure the number of connections per IP address is limited (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.2.5 Ensure rate limits by IP address are set (Not Scored) OK/ACTION NEEDED No limit set Depends on use case, limit can be set via these annotations 5.3 Browser Security 5.3.1 Ensure X-Frame-Options header is configured and enabled (Scored) ACTION NEEDED Header not set by default Several ways to implement this - with the helm charts it works via controller.add-headers 5.3.2 Ensure X-Content-Type-Options header is configured and enabled (Scored) ACTION NEEDED See previous answer See previous answer 5.3.3 Ensure the X-XSS-Protection Header is enabled and configured properly (Scored) ACTION NEEDED See previous answer See previous answer 5.3.4 Ensure that Content Security Policy (CSP) is enabled and configured properly (Not Scored) ACTION NEEDED See previous answer See previous answer 5.3.5 Ensure the Referrer Policy is enabled and configured properly (Not Scored) ACTION NEEDED Depends on application. It should be handled in the applications webserver itself, not in the load balancing ingress check backend webserver 6 Mandatory Access Control n/a too high level, depends on backends @media only screen and (min-width: 768px) { td:nth-child(1){ white-space:normal !important; } .md-typeset table:not([class]) td { padding: .2rem .3rem; } }","title":"Configuration Guide"},{"location":"deploy/rbac/","text":"Role Based Access Control (RBAC) \u00b6 Overview \u00b6 This example applies to ingress-nginx-controllers being deployed in an environment with RBAC enabled. Role Based Access Control is comprised of four layers: ClusterRole - permissions assigned to a role that apply to an entire cluster ClusterRoleBinding - binding a ClusterRole to a specific account Role - permissions assigned to a role that apply to a specific namespace RoleBinding - binding a Role to a specific account In order for RBAC to be applied to an ingress-nginx-controller, that controller should be assigned to a ServiceAccount . That ServiceAccount should be bound to the Role s and ClusterRole s defined for the ingress-nginx-controller. Service Accounts created in this example \u00b6 One ServiceAccount is created in this example, ingress-nginx . Permissions Granted in this example \u00b6 There are two sets of permissions defined in this example. Cluster-wide permissions defined by the ClusterRole named ingress-nginx , and namespace specific permissions defined by the Role named ingress-nginx . Cluster Permissions \u00b6 These permissions are granted in order for the ingress-nginx-controller to be able to function as an ingress across the cluster. 
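Several of the ACTION NEEDED rows in the hardening table above map to keys of the controller ConfigMap. The following is a hedged sketch consolidating them: the key names follow the NGINX configuration ConfigMap documentation and the values are the ones suggested by the table, but both should be double-checked against your controller version and your clients before use.

```yaml
# Sketch of a controller ConfigMap covering some ACTION NEEDED items from the CIS table.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed name, typically set by the Helm chart
  namespace: ingress-nginx
data:
  keep-alive: "10"                        # 2.4.3 keepalive_timeout <= 10s
  ssl-protocols: "TLSv1.3"                # 4.1.4 modern TLS only (may cut off old clients)
  ssl-ciphers: "EECDH+AESGCM:EDH+AESGCM"  # 4.1.5 stronger ciphers per cipherlist.eu
  hsts-preload: "true"                    # 4.1.12 HSTS preload
  hide-headers: "X-Powered-By,Server"     # 2.5.4 reduce information disclosure
  client-header-timeout: "10"             # 5.2.1 client header read timeout
  client-body-timeout: "10"               # 5.2.1 client body read timeout
  proxy-body-size: "1m"                   # 5.2.2 explicit request body limit
  large-client-header-buffers: "4 8k"     # 5.2.3 explicit URI buffer size
```

The browser-security headers (5.3.x) are not ConfigMap values themselves; they are usually provided through a separate headers ConfigMap referenced via the add-headers option or the Helm chart's add-headers mechanism, as noted in the table.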
These permissions are granted to the ClusterRole named ingress-nginx configmaps , endpoints , nodes , pods , secrets : list, watch nodes : get services , ingresses , ingressclasses , endpointslices : get, list, watch events : create, patch ingresses/status : update leases : list, watch Namespace Permissions \u00b6 These permissions are granted specific to the ingress-nginx namespace. These permissions are granted to the Role named ingress-nginx configmaps , pods , secrets : get endpoints : get Furthermore to support leader-election, the ingress-nginx-controller needs to have access to a leases using the resourceName ingress-nginx-leader Note that resourceNames can NOT be used to limit requests using the \u201ccreate\u201d verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a \u201ccreate\u201d request are part of the request body). leases : get, update (for resourceName ingress-controller-leader ) leases : create This resourceName is the election-id defined by the ingress-controller, which defaults to: election-id : ingress-controller-leader resourceName : Please adapt accordingly if you overwrite either parameter when launching the ingress-nginx-controller. Bindings \u00b6 The ServiceAccount ingress-nginx is bound to the Role ingress-nginx and the ClusterRole ingress-nginx . The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the ingress-nginx namespace.","title":"Role Based Access Control (RBAC)"},{"location":"deploy/rbac/#role-based-access-control-rbac","text":"","title":"Role Based Access Control (RBAC)"},{"location":"deploy/rbac/#overview","text":"This example applies to ingress-nginx-controllers being deployed in an environment with RBAC enabled. Role Based Access Control is comprised of four layers: ClusterRole - permissions assigned to a role that apply to an entire cluster ClusterRoleBinding - binding a ClusterRole to a specific account Role - permissions assigned to a role that apply to a specific namespace RoleBinding - binding a Role to a specific account In order for RBAC to be applied to an ingress-nginx-controller, that controller should be assigned to a ServiceAccount . That ServiceAccount should be bound to the Role s and ClusterRole s defined for the ingress-nginx-controller.","title":"Overview"},{"location":"deploy/rbac/#service-accounts-created-in-this-example","text":"One ServiceAccount is created in this example, ingress-nginx .","title":"Service Accounts created in this example"},{"location":"deploy/rbac/#permissions-granted-in-this-example","text":"There are two sets of permissions defined in this example. Cluster-wide permissions defined by the ClusterRole named ingress-nginx , and namespace specific permissions defined by the Role named ingress-nginx .","title":"Permissions Granted in this example"},{"location":"deploy/rbac/#cluster-permissions","text":"These permissions are granted in order for the ingress-nginx-controller to be able to function as an ingress across the cluster. 
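For the leader-election permissions described above, the namespace-scoped Role rules might look roughly as follows. This is a hedged sketch: the text above mentions both ingress-nginx-leader and ingress-controller-leader as the lease name, so the resourceName shown here is an assumption and must match the election-id / controller name actually used by your installation.

```yaml
# Sketch of the leases rules for leader election in the ingress-nginx namespace Role.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    resourceNames: ["ingress-nginx-leader"]   # assumed <election-id>; adjust if overridden
    verbs: ["get", "update"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create"]                         # create cannot be restricted by resourceName
```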
These permissions are granted to the ClusterRole named ingress-nginx configmaps , endpoints , nodes , pods , secrets : list, watch nodes : get services , ingresses , ingressclasses , endpointslices : get, list, watch events : create, patch ingresses/status : update leases : list, watch","title":"Cluster Permissions"},{"location":"deploy/rbac/#namespace-permissions","text":"These permissions are granted specific to the ingress-nginx namespace. These permissions are granted to the Role named ingress-nginx configmaps , pods , secrets : get endpoints : get Furthermore to support leader-election, the ingress-nginx-controller needs to have access to a leases using the resourceName ingress-nginx-leader Note that resourceNames can NOT be used to limit requests using the \u201ccreate\u201d verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a \u201ccreate\u201d request are part of the request body). leases : get, update (for resourceName ingress-controller-leader ) leases : create This resourceName is the election-id defined by the ingress-controller, which defaults to: election-id : ingress-controller-leader resourceName : Please adapt accordingly if you overwrite either parameter when launching the ingress-nginx-controller.","title":"Namespace Permissions"},{"location":"deploy/rbac/#bindings","text":"The ServiceAccount ingress-nginx is bound to the Role ingress-nginx and the ClusterRole ingress-nginx . The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the ingress-nginx namespace.","title":"Bindings"},{"location":"deploy/upgrade/","text":"Upgrading \u00b6 Important No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx . Without Helm \u00b6 To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment. I.e. if your deployment resource looks like (partial example): kind : Deployment metadata : name : ingress-nginx-controller namespace : ingress-nginx spec : replicas : 1 selector : ... template : metadata : ... spec : containers : - name : ingress-nginx-controller image : registry.k8s.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef args : ... simply change the v1.0.4 tag to the version you wish to upgrade to. The easiest way to do this is e.g. (do note you may need to change the name parameter according to your installation): kubectl set image deployment/ingress-nginx-controller \\ controller=registry.k8s.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d \\ -n ingress-nginx For interactive editing, use kubectl edit deployment ingress-nginx-controller -n ingress-nginx . 
With Helm \u00b6 If you installed ingress-nginx using the Helm command in the deployment docs so its name is ingress-nginx , you should be able to upgrade using helm upgrade --reuse-values ingress-nginx ingress-nginx/ingress-nginx Migrating from stable/nginx-ingress \u00b6 See detailed steps in the upgrading section of the ingress-nginx chart README .","title":"Upgrade"},{"location":"deploy/upgrade/#upgrading","text":"Important No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx .","title":"Upgrading"},{"location":"deploy/upgrade/#without-helm","text":"To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment. I.e. if your deployment resource looks like (partial example): kind : Deployment metadata : name : ingress-nginx-controller namespace : ingress-nginx spec : replicas : 1 selector : ... template : metadata : ... spec : containers : - name : ingress-nginx-controller image : registry.k8s.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef args : ... simply change the v1.0.4 tag to the version you wish to upgrade to. The easiest way to do this is e.g. (do note you may need to change the name parameter according to your installation): kubectl set image deployment/ingress-nginx-controller \\ controller=registry.k8s.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d \\ -n ingress-nginx For interactive editing, use kubectl edit deployment ingress-nginx-controller -n ingress-nginx .","title":"Without Helm"},{"location":"deploy/upgrade/#with-helm","text":"If you installed ingress-nginx using the Helm command in the deployment docs so its name is ingress-nginx , you should be able to upgrade using helm upgrade --reuse-values ingress-nginx ingress-nginx/ingress-nginx","title":"With Helm"},{"location":"deploy/upgrade/#migrating-from-stablenginx-ingress","text":"See detailed steps in the upgrading section of the ingress-nginx chart README .","title":"Migrating from stable/nginx-ingress"},{"location":"developer-guide/code-overview/","text":"Ingress NGINX - Code Overview \u00b6 This document provides an overview of Ingress NGINX code. Core Golang code \u00b6 This part of the code is responsible for the main logic of Ingress NGINX. It contains all the logics that parses Ingress Objects , annotations , watches Endpoints and turn them into usable nginx.conf configuration. Core Sync Logics: \u00b6 Ingress-nginx has an internal model of the ingresses, secrets and endpoints in a given cluster. It maintains two copies of that: One copy is the currently running configuration model Second copy is the one generated in response to some changes in the cluster The sync logic diffs the two models and if there's a change it tries to converge the running configuration to the new one. There are static and dynamic configuration changes. All endpoints and certificate changes are handled dynamically by posting the payload to an internal NGINX endpoint that is handled by Lua. The following parts of the code can be found: Entrypoint \u00b6 The main package is responsible for starting ingress-nginx program, which can be found in cmd/nginx directory. Version \u00b6 Is the package of the code responsible for adding version subcommand, and can be found in version directory. 
Internal code \u00b6 This part of the code contains the internal logics that compose Ingress NGINX Controller, and it's split into: Admission Controller \u00b6 Contains the code of Kubernetes Admission Controller which validates the syntax of ingress objects before accepting it. This code can be found in internal/admission/controller directory. File functions \u00b6 Contains auxiliary codes that deal with files, such as generating the SHA1 checksum of a file, or creating required directories. This code can be found in internal/file directory. Ingress functions \u00b6 Contains all the logics from Ingress-Nginx Controller, with some examples being: Expected Golang structures that will be used in templates and other parts of the code - internal/ingress/types.go . supported annotations and its parsing logics - internal/ingress/annotations . reconciliation loops and logics - internal/ingress/controller defaults - define the default struct - internal/ingress/defaults . Error interface and types implementation - internal/ingress/errors Metrics collectors for Prometheus exporting - internal/ingress/metric . Resolver - Extracts information from a controller - internal/ingress/resolver . Ingress Object status publisher - internal/ingress/status . And other parts of the code that will be written in this document in a future. K8s functions \u00b6 Contains helper functions for parsing Kubernetes objects. This part of the code can be found in internal/k8s directory. Networking functions \u00b6 Contains helper functions for networking, such as IPv4 and IPv6 parsing, SSL certificate parsing, etc. This part of the code can be found in internal/net directory. NGINX functions \u00b6 Contains helper function to deal with NGINX, such as verify if it's running and reading it's configuration file parts. This part of the code can be found in internal/nginx directory. Tasks / Queue \u00b6 Contains the functions responsible for the sync queue part of the controller. This part of the code can be found in internal/task directory. Other parts of internal \u00b6 Other parts of internal code might not be covered here, like runtime and watch but they can be added in a future. E2E Test \u00b6 The e2e tests code is in test directory. Other programs \u00b6 Describe here kubectl plugin , dbg , waitshutdown and cover the hack scripts. kubectl plugin \u00b6 It contains kubectl plugin for inspecting your ingress-nginx deployments. This part of code can be found in cmd/plugin directory Detail functions flow and available flow can be found in kubectl-plugin Deploy files \u00b6 This directory contains the yaml deploy files used as examples or references in the docs to deploy Ingress NGINX and other components. Those files are in deploy directory. Helm Chart \u00b6 Used to generate the Helm chart published. Code is in charts/ingress-nginx . Documentation/Website \u00b6 The documentation used to generate the website https://kubernetes.github.io/ingress-nginx/ This code is available in docs and it's main \"language\" is Markdown , used by mkdocs file to generate static pages. Container Images \u00b6 Container images used to run ingress-nginx, or to build the final image. Base Images \u00b6 Contains the Dockerfiles and scripts used to build base images that are used in other parts of the repo. They are present in images repo. Some examples: * nginx - The base NGINX image ingress-nginx uses is not a vanilla NGINX. It bundles many libraries together and it is a job in itself to maintain that and keep things up-to-date. 
* custom-error-pages - Used on the custom error page examples. There are other images inside this directory. Ingress Controller Image \u00b6 The image used to build the final ingress controller, used in deploy scripts and Helm charts. This is NGINX with some Lua enhancement. We do dynamic certificate, endpoints handling, canary traffic split, custom load balancing etc at this component. One can also add new functionalities using Lua plugin system. The files are in rootfs directory and contains: The Dockerfile nginx config Ingress NGINX Lua Scripts \u00b6 Ingress NGINX uses Lua Scripts to enable features like hot reloading, rate limiting and monitoring. Some are written using the OpenResty helper. The directory containing Lua scripts is rootfs/etc/nginx/lua . Nginx Go template file \u00b6 One of the functions of Ingress NGINX is to turn Ingress objects into nginx.conf file. To do so, the final step is to apply those configurations in nginx.tmpl turning it into a final nginx.conf file.","title":"Code Overview"},{"location":"developer-guide/code-overview/#ingress-nginx-code-overview","text":"This document provides an overview of Ingress NGINX code.","title":"Ingress NGINX - Code Overview"},{"location":"developer-guide/code-overview/#core-golang-code","text":"This part of the code is responsible for the main logic of Ingress NGINX. It contains all the logics that parses Ingress Objects , annotations , watches Endpoints and turn them into usable nginx.conf configuration.","title":"Core Golang code"},{"location":"developer-guide/code-overview/#core-sync-logics","text":"Ingress-nginx has an internal model of the ingresses, secrets and endpoints in a given cluster. It maintains two copies of that: One copy is the currently running configuration model Second copy is the one generated in response to some changes in the cluster The sync logic diffs the two models and if there's a change it tries to converge the running configuration to the new one. There are static and dynamic configuration changes. All endpoints and certificate changes are handled dynamically by posting the payload to an internal NGINX endpoint that is handled by Lua. The following parts of the code can be found:","title":"Core Sync Logics:"},{"location":"developer-guide/code-overview/#entrypoint","text":"The main package is responsible for starting ingress-nginx program, which can be found in cmd/nginx directory.","title":"Entrypoint"},{"location":"developer-guide/code-overview/#version","text":"Is the package of the code responsible for adding version subcommand, and can be found in version directory.","title":"Version"},{"location":"developer-guide/code-overview/#internal-code","text":"This part of the code contains the internal logics that compose Ingress NGINX Controller, and it's split into:","title":"Internal code"},{"location":"developer-guide/code-overview/#admission-controller","text":"Contains the code of Kubernetes Admission Controller which validates the syntax of ingress objects before accepting it. This code can be found in internal/admission/controller directory.","title":"Admission Controller"},{"location":"developer-guide/code-overview/#file-functions","text":"Contains auxiliary codes that deal with files, such as generating the SHA1 checksum of a file, or creating required directories. 
This code can be found in internal/file directory.","title":"File functions"},{"location":"developer-guide/code-overview/#ingress-functions","text":"Contains all the logics from Ingress-Nginx Controller, with some examples being: Expected Golang structures that will be used in templates and other parts of the code - internal/ingress/types.go . supported annotations and its parsing logics - internal/ingress/annotations . reconciliation loops and logics - internal/ingress/controller defaults - define the default struct - internal/ingress/defaults . Error interface and types implementation - internal/ingress/errors Metrics collectors for Prometheus exporting - internal/ingress/metric . Resolver - Extracts information from a controller - internal/ingress/resolver . Ingress Object status publisher - internal/ingress/status . And other parts of the code that will be written in this document in a future.","title":"Ingress functions"},{"location":"developer-guide/code-overview/#k8s-functions","text":"Contains helper functions for parsing Kubernetes objects. This part of the code can be found in internal/k8s directory.","title":"K8s functions"},{"location":"developer-guide/code-overview/#networking-functions","text":"Contains helper functions for networking, such as IPv4 and IPv6 parsing, SSL certificate parsing, etc. This part of the code can be found in internal/net directory.","title":"Networking functions"},{"location":"developer-guide/code-overview/#nginx-functions","text":"Contains helper function to deal with NGINX, such as verify if it's running and reading it's configuration file parts. This part of the code can be found in internal/nginx directory.","title":"NGINX functions"},{"location":"developer-guide/code-overview/#tasks-queue","text":"Contains the functions responsible for the sync queue part of the controller. This part of the code can be found in internal/task directory.","title":"Tasks / Queue"},{"location":"developer-guide/code-overview/#other-parts-of-internal","text":"Other parts of internal code might not be covered here, like runtime and watch but they can be added in a future.","title":"Other parts of internal"},{"location":"developer-guide/code-overview/#e2e-test","text":"The e2e tests code is in test directory.","title":"E2E Test"},{"location":"developer-guide/code-overview/#other-programs","text":"Describe here kubectl plugin , dbg , waitshutdown and cover the hack scripts.","title":"Other programs"},{"location":"developer-guide/code-overview/#kubectl-plugin","text":"It contains kubectl plugin for inspecting your ingress-nginx deployments. This part of code can be found in cmd/plugin directory Detail functions flow and available flow can be found in kubectl-plugin","title":"kubectl plugin"},{"location":"developer-guide/code-overview/#deploy-files","text":"This directory contains the yaml deploy files used as examples or references in the docs to deploy Ingress NGINX and other components. Those files are in deploy directory.","title":"Deploy files"},{"location":"developer-guide/code-overview/#helm-chart","text":"Used to generate the Helm chart published. 
Code is in charts/ingress-nginx .","title":"Helm Chart"},{"location":"developer-guide/code-overview/#documentationwebsite","text":"The documentation used to generate the website https://kubernetes.github.io/ingress-nginx/ This code is available in docs and it's main \"language\" is Markdown , used by mkdocs file to generate static pages.","title":"Documentation/Website"},{"location":"developer-guide/code-overview/#container-images","text":"Container images used to run ingress-nginx, or to build the final image.","title":"Container Images"},{"location":"developer-guide/code-overview/#base-images","text":"Contains the Dockerfiles and scripts used to build base images that are used in other parts of the repo. They are present in images repo. Some examples: * nginx - The base NGINX image ingress-nginx uses is not a vanilla NGINX. It bundles many libraries together and it is a job in itself to maintain that and keep things up-to-date. * custom-error-pages - Used on the custom error page examples. There are other images inside this directory.","title":"Base Images"},{"location":"developer-guide/code-overview/#ingress-controller-image","text":"The image used to build the final ingress controller, used in deploy scripts and Helm charts. This is NGINX with some Lua enhancement. We do dynamic certificate, endpoints handling, canary traffic split, custom load balancing etc at this component. One can also add new functionalities using Lua plugin system. The files are in rootfs directory and contains: The Dockerfile nginx config","title":"Ingress Controller Image"},{"location":"developer-guide/code-overview/#ingress-nginx-lua-scripts","text":"Ingress NGINX uses Lua Scripts to enable features like hot reloading, rate limiting and monitoring. Some are written using the OpenResty helper. The directory containing Lua scripts is rootfs/etc/nginx/lua .","title":"Ingress NGINX Lua Scripts"},{"location":"developer-guide/code-overview/#nginx-go-template-file","text":"One of the functions of Ingress NGINX is to turn Ingress objects into nginx.conf file. To do so, the final step is to apply those configurations in nginx.tmpl turning it into a final nginx.conf file.","title":"Nginx Go template file"},{"location":"developer-guide/getting-started/","text":"Developing for Ingress-Nginx Controller This document explains how to get started with developing for Ingress-Nginx Controller. For the really new contributors, who want to contribute to the INGRESS-NGINX project, but need help with understanding some basic concepts, that are needed to work with the Kubernetes ingress resource, here is a link to the New Contributors Guide . This guide contains tips on how a http/https request travels, from a browser or a curl command, to the webserver process running inside a container, in a pod, in a Kubernetes cluster, but enters the cluster via a ingress resource. For those who are familiar with those basic networking concepts like routing of a packet with regards to a http request, termination of connection, reverseproxy etc. etc., you can skip this and move on to the sections below. (or read it anyways just for context and also provide feedbacks if any) Prerequisites \u00b6 Install Go 1.14 or later. 
Note The project uses Go Modules Install Docker (v19.03.0 or later with experimental feature on) Important The majority of make tasks run as docker containers Quick Start \u00b6 Fork the repository Clone the repository to any location in your work station Add a GO111MODULE environment variable with export GO111MODULE=on Run go mod download to install dependencies Local build \u00b6 Start a local Kubernetes cluster using kind , build and deploy the ingress controller make dev-env - If you are working on the v1.x.x version of this controller, and you want to create a cluster with kubernetes version 1.22, then please visit the documentation for kind , and look for how to set a custom image for the kind node (image: kindest/node...), in the kind config file. Testing \u00b6 Run go unit tests make test Run unit-tests for lua code make lua-test Lua tests are located in the directory rootfs/etc/nginx/lua/test Important Test files must follow the naming convention _test.lua or it will be ignored Run e2e test suite make kind-e2e-test To limit the scope of the tests to execute, we can use the environment variable FOCUS FOCUS=\"no-auth-locations\" make kind-e2e-test Note The variable FOCUS defines Ginkgo Focused Specs Valid values are defined in the describe definition of the e2e tests like Default Backend The complete list of tests can be found here Custom docker image \u00b6 In some cases, it can be useful to build a docker image and publish such an image to a private or custom registry location. This can be done setting two environment variables, REGISTRY and TAG export TAG=\"dev\" export REGISTRY=\"$USER\" make build image and then publish such version with docker push $REGISTRY/controller:$TAG","title":"Getting Started"},{"location":"developer-guide/getting-started/#prerequisites","text":"Install Go 1.14 or later. 
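For the local-build note above about pinning the kind node version, a cluster config along these lines could be passed to kind. This is a hedged sketch: the node image tag is illustrative, and the exact tag or digest should be taken from the release notes of the kind version you are using.

```yaml
# Sketch of a kind cluster config pinning the Kubernetes node image (e.g. a 1.22 node).
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    image: kindest/node:v1.22.0   # illustrative tag; use the digest recommended for your kind release
```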
Note The project uses Go Modules Install Docker (v19.03.0 or later with experimental feature on) Important The majority of make tasks run as docker containers","title":"Prerequisites"},{"location":"developer-guide/getting-started/#quick-start","text":"Fork the repository Clone the repository to any location in your work station Add a GO111MODULE environment variable with export GO111MODULE=on Run go mod download to install dependencies","title":"Quick Start"},{"location":"developer-guide/getting-started/#local-build","text":"Start a local Kubernetes cluster using kind , build and deploy the ingress controller make dev-env - If you are working on the v1.x.x version of this controller, and you want to create a cluster with kubernetes version 1.22, then please visit the documentation for kind , and look for how to set a custom image for the kind node (image: kindest/node...), in the kind config file.","title":"Local build"},{"location":"developer-guide/getting-started/#testing","text":"Run go unit tests make test Run unit-tests for lua code make lua-test Lua tests are located in the directory rootfs/etc/nginx/lua/test Important Test files must follow the naming convention _test.lua or it will be ignored Run e2e test suite make kind-e2e-test To limit the scope of the tests to execute, we can use the environment variable FOCUS FOCUS=\"no-auth-locations\" make kind-e2e-test Note The variable FOCUS defines Ginkgo Focused Specs Valid values are defined in the describe definition of the e2e tests like Default Backend The complete list of tests can be found here","title":"Testing"},{"location":"developer-guide/getting-started/#custom-docker-image","text":"In some cases, it can be useful to build a docker image and publish such an image to a private or custom registry location. This can be done setting two environment variables, REGISTRY and TAG export TAG=\"dev\" export REGISTRY=\"$USER\" make build image and then publish such version with docker push $REGISTRY/controller:$TAG","title":"Custom docker image"},{"location":"enhancements/","text":"Kubernetes Enhancement Proposals (KEPs) \u00b6 A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project. For this reason, the ingress-nginx project is adopting it. Quick start for the KEP process \u00b6 Follow the process outlined in the KEP template Do I have to use the KEP process? \u00b6 No... but we hope that you will. Over time having a rich set of KEPs in one place will make it easier for people to track what is going on in the community and find a structured historic record. KEPs are only required when the changes are wide ranging and impact most of the project. Why would I want to use the KEP process? \u00b6 Our aim with KEPs is to clearly communicate new efforts to the Kubernetes contributor community. As such, we want to build a well curated set of clear proposals in a common format with useful metadata. Benefits to KEP users (in the limit): Exposure on a kubernetes blessed web site that is findable via web search engines. Cross indexing of KEPs so that users can find connections and the current status of any KEP. A clear process with approvers and reviewers for making decisions. This will lead to more structured decisions that stick as there is a discoverable record around the decisions. 
We are inspired by IETF RFCs, Python PEPs, and Rust RFCs.","title":"Kubernetes Enhancement Proposals (KEPs)"},{"location":"enhancements/#kubernetes-enhancement-proposals-keps","text":"A Kubernetes Enhancement Proposal (KEP) is a way to propose, communicate and coordinate on new efforts for the Kubernetes project. For this reason, the ingress-nginx project is adopting it.","title":"Kubernetes Enhancement Proposals (KEPs)"},{"location":"enhancements/#quick-start-for-the-kep-process","text":"Follow the process outlined in the KEP template","title":"Quick start for the KEP process"},{"location":"enhancements/#do-i-have-to-use-the-kep-process","text":"No... but we hope that you will. Over time having a rich set of KEPs in one place will make it easier for people to track what is going on in the community and find a structured historic record. KEPs are only required when the changes are wide ranging and impact most of the project.","title":"Do I have to use the KEP process?"},{"location":"enhancements/#why-would-i-want-to-use-the-kep-process","text":"Our aim with KEPs is to clearly communicate new efforts to the Kubernetes contributor community. As such, we want to build a well curated set of clear proposals in a common format with useful metadata. Benefits to KEP users (in the limit): Exposure on a kubernetes blessed web site that is findable via web search engines. Cross indexing of KEPs so that users can find connections and the current status of any KEP. A clear process with approvers and reviewers for making decisions. This will lead to more structured decisions that stick as there is a discoverable record around the decisions. We are inspired by IETF RFCs, Python PEPs, and Rust RFCs.","title":"Why would I want to use the KEP process?"},{"location":"enhancements/20190724-only-dynamic-ssl/","text":"Remove static SSL configuration mode \u00b6 Table of Contents \u00b6 Summary Motivation Goals Non-Goals Proposal Implementation Details/Notes/Constraints Drawbacks Alternatives Summary \u00b6 Since release 0.19.0 is possible to configure SSL certificates without the need of NGINX reloads (thanks to lua) and after release 0.24.0 the default enabled mode is dynamic. Motivation \u00b6 The static configuration implies reloads, something that affects the majority of the users. Goals \u00b6 Deprecation of the flag --enable-dynamic-certificates . Cleanup of the codebase. Non-Goals \u00b6 Features related to certificate authentication are not changed in any way. Proposal \u00b6 Remove static SSL configuration Implementation Details/Notes/Constraints \u00b6 Deprecate the flag Move the directives ssl_certificate and ssl_certificate_key from each server block to the http section. These settings are required to avoid NGINX errors in the logs. 
Remove any action of the flag --enable-dynamic-certificates Drawbacks \u00b6 Alternatives \u00b6 Keep both implementations","title":"Remove static SSL configuration mode"},{"location":"enhancements/20190724-only-dynamic-ssl/#remove-static-ssl-configuration-mode","text":"","title":"Remove static SSL configuration mode"},{"location":"enhancements/20190724-only-dynamic-ssl/#table-of-contents","text":"Summary Motivation Goals Non-Goals Proposal Implementation Details/Notes/Constraints Drawbacks Alternatives","title":"Table of Contents"},{"location":"enhancements/20190724-only-dynamic-ssl/#summary","text":"Since release 0.19.0 is possible to configure SSL certificates without the need of NGINX reloads (thanks to lua) and after release 0.24.0 the default enabled mode is dynamic.","title":"Summary"},{"location":"enhancements/20190724-only-dynamic-ssl/#motivation","text":"The static configuration implies reloads, something that affects the majority of the users.","title":"Motivation"},{"location":"enhancements/20190724-only-dynamic-ssl/#goals","text":"Deprecation of the flag --enable-dynamic-certificates . Cleanup of the codebase.","title":"Goals"},{"location":"enhancements/20190724-only-dynamic-ssl/#non-goals","text":"Features related to certificate authentication are not changed in any way.","title":"Non-Goals"},{"location":"enhancements/20190724-only-dynamic-ssl/#proposal","text":"Remove static SSL configuration","title":"Proposal"},{"location":"enhancements/20190724-only-dynamic-ssl/#implementation-detailsnotesconstraints","text":"Deprecate the flag Move the directives ssl_certificate and ssl_certificate_key from each server block to the http section. These settings are required to avoid NGINX errors in the logs. Remove any action of the flag --enable-dynamic-certificates","title":"Implementation Details/Notes/Constraints"},{"location":"enhancements/20190724-only-dynamic-ssl/#drawbacks","text":"","title":"Drawbacks"},{"location":"enhancements/20190724-only-dynamic-ssl/#alternatives","text":"Keep both implementations","title":"Alternatives"},{"location":"enhancements/20190815-zone-aware-routing/","text":"Availability zone aware routing \u00b6 Table of Contents \u00b6 Availability zone aware routing Table of Contents Summary Motivation Goals Non-Goals Proposal Implementation History Drawbacks [optional] Summary \u00b6 Teach ingress-nginx about availability zones where endpoints are running in. This way ingress-nginx pod will do its best to proxy to zone-local endpoint. Motivation \u00b6 When users run their services across multiple availability zones they usually pay for egress traffic between zones. Providers such as GCP, and Amazon EC2 usually charge extra for this feature. ingress-nginx when picking an endpoint to route request to does not consider whether the endpoint is in a different zone or the same one. That means it's at least equally likely that it will pick an endpoint from another zone and proxy the request to it. In this situation response from the endpoint to the ingress-nginx pod is considered inter-zone traffic and usually costs extra money. At the time of this writing, GCP charges $0.01 per GB of inter-zone egress traffic according to https://cloud.google.com/compute/network-pricing. According to https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/ Amazon also charges the same amount of money as GCP for cross-zone, egress traffic. This can be a lot of money depending on once's traffic. 
By teaching ingress-nginx about zones we can eliminate or at least decrease this cost. Arguably inter-zone network latency should also be better than cross-zone. Goals \u00b6 Given a regional cluster running ingress-nginx, ingress-nginx should do best-effort to pick a zone-local endpoint when proxying This should not impact canary feature ingress-nginx should be able to operate successfully if there are no zonal endpoints Non-Goals \u00b6 This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress-nginx pod(s) in that zone This feature will be relying on https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone, it is not this KEP's goal to support other cases Proposal \u00b6 The idea here is to have the controller part of ingress-nginx (1) detect what zone its current pod is running in and (2) detect the zone for every endpoint it knows about. After that, it will post that data as part of endpoints to Lua land. When picking an endpoint, the Lua balancer will try to pick zone-local endpoint first and if there is no zone-local endpoint then it will fall back to current behavior. Initially, this feature should be optional since it is going to make it harder to reason about the load balancing and not everyone might want that. How does controller know what zone it runs in? We can have the pod spec pass the node name using downward API as an environment variable. Upon startup, the controller can get node details from the API based on the node name. Once the node details are obtained we can extract the zone from the failure-domain.beta.kubernetes.io/zone annotation. Then we can pass that value to Lua land through Nginx configuration when loading lua_ingress.lua module in init_by_lua phase. How do we extract zones for endpoints? We can have the controller watch create and update events on nodes in the entire cluster and based on that keep the map of nodes to zones in the memory. And when we generate endpoints list, we can access node name using .subsets.addresses[i].nodeName and based on that fetch zone from the map in memory and store it as a field on the endpoint. This solution assumes failure-domain.beta.kubernetes.io/zone annotation does not change until the end of the node's life. Otherwise, we have to watch update events as well on the nodes and that'll add even more overhead. Alternatively, we can get the list of nodes only when there's no node in the memory for the given node name. This is probably a better solution because then we would avoid watching for API changes on node resources. We can eagerly fetch all the nodes and build node name to zone mapping on start. From there on, it will sync during endpoint building in the main event loop if there's no existing entry for the node of an endpoint. This means an extra API call in case cluster has expanded. How do we make sure we do our best to choose zone-local endpoint? This will be done on the Lua side. For every backend, we will initialize two balancer instances: (1) with all endpoints (2) with all endpoints corresponding to the current zone for the backend. Then given the request once we choose what backend needs to serve the request, we will first try to use a zonal balancer for that backend. If a zonal balancer does not exist (i.e. there's no zonal endpoint) then we will use a general balancer. 
In case of zonal outages, we assume that the readiness probe will fail and the controller will see no endpoints for the backend and therefore we will use a general balancer. We can enable the feature using a configmap setting. Doing it this way makes it easier to rollback in case of a problem. Implementation History \u00b6 initial version of KEP is shipped proposal and implementation details are done Drawbacks [optional] \u00b6 More load on the Kubernetes API server.","title":"Availability zone aware routing"},{"location":"enhancements/20190815-zone-aware-routing/#availability-zone-aware-routing","text":"","title":"Availability zone aware routing"},{"location":"enhancements/20190815-zone-aware-routing/#table-of-contents","text":"Availability zone aware routing Table of Contents Summary Motivation Goals Non-Goals Proposal Implementation History Drawbacks [optional]","title":"Table of Contents"},{"location":"enhancements/20190815-zone-aware-routing/#summary","text":"Teach ingress-nginx about availability zones where endpoints are running in. This way ingress-nginx pod will do its best to proxy to zone-local endpoint.","title":"Summary"},{"location":"enhancements/20190815-zone-aware-routing/#motivation","text":"When users run their services across multiple availability zones they usually pay for egress traffic between zones. Providers such as GCP, and Amazon EC2 usually charge extra for this feature. ingress-nginx when picking an endpoint to route request to does not consider whether the endpoint is in a different zone or the same one. That means it's at least equally likely that it will pick an endpoint from another zone and proxy the request to it. In this situation response from the endpoint to the ingress-nginx pod is considered inter-zone traffic and usually costs extra money. At the time of this writing, GCP charges $0.01 per GB of inter-zone egress traffic according to https://cloud.google.com/compute/network-pricing. According to https://datapath.io/resources/blog/what-are-aws-data-transfer-costs-and-how-to-minimize-them/ Amazon also charges the same amount of money as GCP for cross-zone, egress traffic. This can be a lot of money depending on once's traffic. By teaching ingress-nginx about zones we can eliminate or at least decrease this cost. Arguably inter-zone network latency should also be better than cross-zone.","title":"Motivation"},{"location":"enhancements/20190815-zone-aware-routing/#goals","text":"Given a regional cluster running ingress-nginx, ingress-nginx should do best-effort to pick a zone-local endpoint when proxying This should not impact canary feature ingress-nginx should be able to operate successfully if there are no zonal endpoints","title":"Goals"},{"location":"enhancements/20190815-zone-aware-routing/#non-goals","text":"This feature inherently assumes that endpoints are distributed across zones in a way that they can handle all the traffic from ingress-nginx pod(s) in that zone This feature will be relying on https://kubernetes.io/docs/reference/kubernetes-api/labels-annotations-taints/#failure-domainbetakubernetesiozone, it is not this KEP's goal to support other cases","title":"Non-Goals"},{"location":"enhancements/20190815-zone-aware-routing/#proposal","text":"The idea here is to have the controller part of ingress-nginx (1) detect what zone its current pod is running in and (2) detect the zone for every endpoint it knows about. After that, it will post that data as part of endpoints to Lua land. 
When picking an endpoint, the Lua balancer will try to pick zone-local endpoint first and if there is no zone-local endpoint then it will fall back to current behavior. Initially, this feature should be optional since it is going to make it harder to reason about the load balancing and not everyone might want that. How does controller know what zone it runs in? We can have the pod spec pass the node name using downward API as an environment variable. Upon startup, the controller can get node details from the API based on the node name. Once the node details are obtained we can extract the zone from the failure-domain.beta.kubernetes.io/zone annotation. Then we can pass that value to Lua land through Nginx configuration when loading lua_ingress.lua module in init_by_lua phase. How do we extract zones for endpoints? We can have the controller watch create and update events on nodes in the entire cluster and based on that keep the map of nodes to zones in the memory. And when we generate endpoints list, we can access node name using .subsets.addresses[i].nodeName and based on that fetch zone from the map in memory and store it as a field on the endpoint. This solution assumes failure-domain.beta.kubernetes.io/zone annotation does not change until the end of the node's life. Otherwise, we have to watch update events as well on the nodes and that'll add even more overhead. Alternatively, we can get the list of nodes only when there's no node in the memory for the given node name. This is probably a better solution because then we would avoid watching for API changes on node resources. We can eagerly fetch all the nodes and build node name to zone mapping on start. From there on, it will sync during endpoint building in the main event loop if there's no existing entry for the node of an endpoint. This means an extra API call in case cluster has expanded. How do we make sure we do our best to choose zone-local endpoint? This will be done on the Lua side. For every backend, we will initialize two balancer instances: (1) with all endpoints (2) with all endpoints corresponding to the current zone for the backend. Then given the request once we choose what backend needs to serve the request, we will first try to use a zonal balancer for that backend. If a zonal balancer does not exist (i.e. there's no zonal endpoint) then we will use a general balancer. In case of zonal outages, we assume that the readiness probe will fail and the controller will see no endpoints for the backend and therefore we will use a general balancer. We can enable the feature using a configmap setting. Doing it this way makes it easier to rollback in case of a problem.","title":"Proposal"},{"location":"enhancements/20190815-zone-aware-routing/#implementation-history","text":"initial version of KEP is shipped proposal and implementation details are done","title":"Implementation History"},{"location":"enhancements/20190815-zone-aware-routing/#drawbacks-optional","text":"More load on the Kubernetes API server.","title":"Drawbacks [optional]"},{"location":"enhancements/YYYYMMDD-kep-template/","text":"Title \u00b6 This is the title of the KEP. Keep it simple and descriptive. A good title can help communicate what the KEP is and should be considered as part of any review. The title should be lowercased and spaces/punctuation should be replaced with - . To get started with this template: Make a copy of this template. 
Create a copy of this template and name it YYYYMMDD-my-title.md , where YYYYMMDD is the date the KEP was first drafted. Fill out the \"overview\" sections. This includes the Summary and Motivation sections. These should be easy if you've preflighted the idea of the KEP in an issue. Create a PR. Assign it to folks that are sponsoring this process. Create an issue When filing an enhancement tracking issue, please ensure to complete all fields in the template. Merge early. Avoid getting hung up on specific details and instead aim to get the goal of the KEP merged quickly. The best way to do this is to just start with the \"Overview\" sections and fill out details incrementally in follow on PRs. View anything marked as a provisional as a working document and subject to change. Aim for single topic PRs to keep discussions focused. If you disagree with what is already in a document, open a new PR with suggested changes. The canonical place for the latest set of instructions (and the likely source of this file) is here . The Metadata section above is intended to support the creation of tooling around the KEP process. This will be a YAML section that is fenced as a code block. See the KEP process for details on each of these items. Table of Contents \u00b6 A table of contents is helpful for quickly jumping to sections of a KEP and for highlighting any additional information provided beyond the standard KEP template. Ensure the TOC is wrapped with |name: nginx| sb sb --> |hello nginx!| sa end subgraph otel otc[\"Otel Collector\"] end subgraph observability tempo[\"Tempo\"] grafana[\"Grafana\"] backend[\"Jaeger\"] zipkin[\"Zipkin\"] end subgraph ingress-nginx ngx[nginx] end subgraph ngx[nginx] ng[nginx] om[OpenTelemetry module] end subgraph Node app otel observability ingress-nginx om --> |otlp-gRPC| otc --> |jaeger| backend otc --> |zipkin| zipkin otc --> |otlp-gRPC| tempo --> grafana sa --> |otlp-gRPC| otc sb --> |otlp-gRPC| otc start --> ng --> sa end To install the example and collectors run: Enable Ingress addon with: opentelemetry : enabled : true image : registry.k8s.io/ingress-nginx/opentelemetry:v20230527@sha256:fd7ec835f31b7b37187238eb4fdad4438806e69f413a203796263131f4f02ed0 containerSecurityContext : allowPrivilegeEscalation : false Enable OpenTelemetry and set the otlp-collector-host: $ echo ' apiVersion : v1 kind : ConfigMap data : enable-opentelemetry : \"true\" opentelemetry-config : \"/etc/nginx/opentelemetry.toml\" opentelemetry-operation-name : \"HTTP $request_method $service_name $uri\" opentelemetry-trust-incoming-span : \"true\" otlp-collector-host : \"otel-coll-collector.otel.svc\" otlp-collector-port : \"4317\" otel-max-queuesize : \"2048\" otel-schedule-delay-millis : \"5000\" otel-max-export-batch-size : \"512\" otel-service-name : \"nginx-proxy\" # Opentelemetry resource name otel-sampler : \"AlwaysOn\" # Also: AlwaysOff, TraceIdRatioBased otel-sampler-ratio : \"1.0\" otel-sampler-parent-based : \"false\" metadata : name : ingress-nginx-controller namespace : ingress-nginx ' | kubectl replace -f - Deploy otel-collector, grafana and Jaeger backend: # add helm charts needed for grafana and OpenTelemetry collector helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts helm repo add grafana https://grafana.github.io/helm-charts helm repo update # deply cert-manager needed for OpenTelemetry collector operator kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.yaml # create observability namespace 
kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/namespace.yaml # install OpenTelemetry collector operator helm upgrade --install otel-collector-operator -n otel --create-namespace open-telemetry/opentelemetry-operator # deploy OpenTelemetry collector kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/collector.yaml # deploy Jaeger all-in-one kubectl apply -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.37.0/jaeger-operator.yaml -n observability kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/jaeger.yaml -n observability # deploy zipkin kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/zipkin.yaml -n observability # deploy tempo and grafana helm upgrade --install tempo grafana/tempo --create-namespace -n observability helm upgrade -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/grafana/grafana-values.yaml --install grafana grafana/grafana --create-namespace -n observability Build and deploy demo app: # build images make images # deploy demo app: make deploy-app Make a few requests to the Service: kubectl port-forward --namespace = ingress-nginx service/ingress-nginx-controller 8090 :80 curl http://esigo.dev:8090/hello/nginx StatusCode : 200 StatusDescription : OK Content : { \"v\" : \"hello nginx!\" } RawContent : HTTP/1.1 200 OK Connection: keep-alive Content-Length: 21 Content-Type: text/plain ; charset = utf-8 Date: Mon, 10 Oct 2022 17 :43:33 GMT { \"v\" : \"hello nginx!\" } Forms : {} Headers : {[ Connection, keep-alive ] , [ Content-Length, 21 ] , [ Content-Type, text/plain ; charset = utf-8 ] , [ Date, Mon, 10 Oct 2022 17 :43:33 GMT ]} Images : {} InputFields : {} Links : {} ParsedHtml : System.__ComObject RawContentLength : 21 View the Grafana UI: kubectl port-forward --namespace = observability service/grafana 3000 :80 In the Grafana interface we can see the details: View the Jaeger UI: kubectl port-forward --namespace = observability service/jaeger-all-in-one-query 16686 :16686 In the Jaeger interface we can see the details: View the Zipkin UI: kubectl port-forward --namespace = observability service/zipkin 9411 :9411 In the Zipkin interface we can see the details: Migration from OpenTracing, Jaeger, Zipkin and Datadog \u00b6 If you are migrating from OpenTracing, Jaeger, Zipkin, or Datadog to OpenTelemetry, you may need to update various annotations and configurations. 
Here are the mappings for common annotations and configurations: Annotations \u00b6 Legacy OpenTelemetry nginx.ingress.kubernetes.io/enable-opentracing nginx.ingress.kubernetes.io/enable-opentelemetry opentracing-trust-incoming-span opentracing-trust-incoming-span Configs \u00b6 Legacy OpenTelemetry opentracing-operation-name opentelemetry-operation-name opentracing-location-operation-name opentelemetry-operation-name opentracing-trust-incoming-span opentelemetry-trust-incoming-span zipkin-collector-port otlp-collector-port zipkin-service-name otel-service-name zipkin-sample-rate otel-sampler-ratio jaeger-collector-port otlp-collector-port jaeger-endpoint otlp-collector-port , otlp-collector-host jaeger-service-name otel-service-name jaeger-propagation-format N/A jaeger-sampler-type otel-sampler jaeger-sampler-param otel-sampler jaeger-sampler-host N/A jaeger-sampler-port N/A jaeger-trace-context-header-name N/A jaeger-debug-header N/A jaeger-baggage-header N/A jaeger-tracer-baggage-header-prefix N/A datadog-collector-port otlp-collector-port datadog-service-name otel-service-name datadog-environment N/A datadog-operation-name-override N/A datadog-priority-sampling otel-sampler datadog-sample-rate otel-sampler-ratio","title":"OpenTelemetry"},{"location":"user-guide/third-party-addons/opentelemetry/#opentelemetry","text":"Enables requests served by NGINX for distributed telemetry via The OpenTelemetry Project. Using the third party module opentelemetry-cpp-contrib/nginx the Ingress-Nginx Controller can configure NGINX to enable OpenTelemetry instrumentation. By default this feature is disabled. Check out this demo showcasing OpenTelemetry in Ingress NGINX. The video provides an overview and practical demonstration of how OpenTelemetry can be utilized in Ingress NGINX for observability and monitoring purposes. Demo: OpenTelemetry in Ingress NGINX.","title":"OpenTelemetry"},{"location":"user-guide/third-party-addons/opentelemetry/#usage","text":"To enable the instrumentation we must enable OpenTelemetry in the configuration ConfigMap: data : enable-opentelemetry : \"true\" To enable or disable instrumentation for a single Ingress, use the enable-opentelemetry annotation: kind : Ingress metadata : annotations : nginx.ingress.kubernetes.io/enable-opentelemetry : \"true\" We must also set the host to use when uploading traces: otlp-collector-host : \"otel-coll-collector.otel.svc\" NOTE: While the option is called otlp-collector-host , you will need to point this to any backend that receives otlp-grpc. Next you will need to deploy a distributed telemetry system which uses OpenTelemetry. opentelemetry-collector , Jaeger Tempo , and zipkin have been tested. Other optional configuration options: # specifies the name to use for the server span opentelemetry-operation-name # sets whether or not to trust incoming telemetry spans opentelemetry-trust-incoming-span # specifies the port to use when uploading traces, Default : 4317 otlp-collector-port # specifies the service name to use for any traces created, Default: nginx otel-service-name # The maximum queue size. After the size is reached data are dropped. otel-max-queuesize # The delay interval in milliseconds between two consecutive exports. otel-schedule-delay-millis # How long the export can run before it is cancelled. otel-schedule-delay-millis # The maximum batch size of every export. It must be smaller or equal to maxQueueSize. 
otel-max-export-batch-size # specifies sample rate for any traces created, Default: 0.01 otel-sampler-ratio # specifies the sampler to be used when sampling traces. # The available samplers are: AlwaysOn, AlwaysOff, TraceIdRatioBased, Default: AlwaysOff otel-sampler # Uses sampler implementation which by default will take a sample if parent Activity is sampled, Default: false otel-sampler-parent-based Note that you can also set whether to trust incoming spans (global default is true) per-location using annotations like the following: kind : Ingress metadata : annotations : nginx.ingress.kubernetes.io/opentelemetry-trust-incoming-span : \"true\"","title":"Usage"},{"location":"user-guide/third-party-addons/opentelemetry/#examples","text":"The following examples show how to deploy and test different distributed telemetry systems. These example can be performed using Docker Desktop. In the esigo/nginx-example GitHub repository is an example of a simple hello service: graph TB subgraph Browser start[\"http://esigo.dev/hello/nginx\"] end subgraph app sa[service-a] sb[service-b] sa --> |name: nginx| sb sb --> |hello nginx!| sa end subgraph otel otc[\"Otel Collector\"] end subgraph observability tempo[\"Tempo\"] grafana[\"Grafana\"] backend[\"Jaeger\"] zipkin[\"Zipkin\"] end subgraph ingress-nginx ngx[nginx] end subgraph ngx[nginx] ng[nginx] om[OpenTelemetry module] end subgraph Node app otel observability ingress-nginx om --> |otlp-gRPC| otc --> |jaeger| backend otc --> |zipkin| zipkin otc --> |otlp-gRPC| tempo --> grafana sa --> |otlp-gRPC| otc sb --> |otlp-gRPC| otc start --> ng --> sa end To install the example and collectors run: Enable Ingress addon with: opentelemetry : enabled : true image : registry.k8s.io/ingress-nginx/opentelemetry:v20230527@sha256:fd7ec835f31b7b37187238eb4fdad4438806e69f413a203796263131f4f02ed0 containerSecurityContext : allowPrivilegeEscalation : false Enable OpenTelemetry and set the otlp-collector-host: $ echo ' apiVersion : v1 kind : ConfigMap data : enable-opentelemetry : \"true\" opentelemetry-config : \"/etc/nginx/opentelemetry.toml\" opentelemetry-operation-name : \"HTTP $request_method $service_name $uri\" opentelemetry-trust-incoming-span : \"true\" otlp-collector-host : \"otel-coll-collector.otel.svc\" otlp-collector-port : \"4317\" otel-max-queuesize : \"2048\" otel-schedule-delay-millis : \"5000\" otel-max-export-batch-size : \"512\" otel-service-name : \"nginx-proxy\" # Opentelemetry resource name otel-sampler : \"AlwaysOn\" # Also: AlwaysOff, TraceIdRatioBased otel-sampler-ratio : \"1.0\" otel-sampler-parent-based : \"false\" metadata : name : ingress-nginx-controller namespace : ingress-nginx ' | kubectl replace -f - Deploy otel-collector, grafana and Jaeger backend: # add helm charts needed for grafana and OpenTelemetry collector helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts helm repo add grafana https://grafana.github.io/helm-charts helm repo update # deply cert-manager needed for OpenTelemetry collector operator kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.9.1/cert-manager.yaml # create observability namespace kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/namespace.yaml # install OpenTelemetry collector operator helm upgrade --install otel-collector-operator -n otel --create-namespace open-telemetry/opentelemetry-operator # deploy OpenTelemetry collector kubectl apply -f 
https://raw.githubusercontent.com/esigo/nginx-example/main/observability/collector.yaml # deploy Jaeger all-in-one kubectl apply -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.37.0/jaeger-operator.yaml -n observability kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/jaeger.yaml -n observability # deploy zipkin kubectl apply -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/zipkin.yaml -n observability # deploy tempo and grafana helm upgrade --install tempo grafana/tempo --create-namespace -n observability helm upgrade -f https://raw.githubusercontent.com/esigo/nginx-example/main/observability/grafana/grafana-values.yaml --install grafana grafana/grafana --create-namespace -n observability Build and deploy demo app: # build images make images # deploy demo app: make deploy-app Make a few requests to the Service: kubectl port-forward --namespace = ingress-nginx service/ingress-nginx-controller 8090 :80 curl http://esigo.dev:8090/hello/nginx StatusCode : 200 StatusDescription : OK Content : { \"v\" : \"hello nginx!\" } RawContent : HTTP/1.1 200 OK Connection: keep-alive Content-Length: 21 Content-Type: text/plain ; charset = utf-8 Date: Mon, 10 Oct 2022 17 :43:33 GMT { \"v\" : \"hello nginx!\" } Forms : {} Headers : {[ Connection, keep-alive ] , [ Content-Length, 21 ] , [ Content-Type, text/plain ; charset = utf-8 ] , [ Date, Mon, 10 Oct 2022 17 :43:33 GMT ]} Images : {} InputFields : {} Links : {} ParsedHtml : System.__ComObject RawContentLength : 21 View the Grafana UI: kubectl port-forward --namespace = observability service/grafana 3000 :80 In the Grafana interface we can see the details: View the Jaeger UI: kubectl port-forward --namespace = observability service/jaeger-all-in-one-query 16686 :16686 In the Jaeger interface we can see the details: View the Zipkin UI: kubectl port-forward --namespace = observability service/zipkin 9411 :9411 In the Zipkin interface we can see the details:","title":"Examples"},{"location":"user-guide/third-party-addons/opentelemetry/#migration-from-opentracing-jaeger-zipkin-and-datadog","text":"If you are migrating from OpenTracing, Jaeger, Zipkin, or Datadog to OpenTelemetry, you may need to update various annotations and configurations. 
Here are the mappings for common annotations and configurations:","title":"Migration from OpenTracing, Jaeger, Zipkin and Datadog"},{"location":"user-guide/third-party-addons/opentelemetry/#annotations","text":"Legacy OpenTelemetry nginx.ingress.kubernetes.io/enable-opentracing nginx.ingress.kubernetes.io/enable-opentelemetry opentracing-trust-incoming-span opentracing-trust-incoming-span","title":"Annotations"},{"location":"user-guide/third-party-addons/opentelemetry/#configs","text":"Legacy OpenTelemetry opentracing-operation-name opentelemetry-operation-name opentracing-location-operation-name opentelemetry-operation-name opentracing-trust-incoming-span opentelemetry-trust-incoming-span zipkin-collector-port otlp-collector-port zipkin-service-name otel-service-name zipkin-sample-rate otel-sampler-ratio jaeger-collector-port otlp-collector-port jaeger-endpoint otlp-collector-port , otlp-collector-host jaeger-service-name otel-service-name jaeger-propagation-format N/A jaeger-sampler-type otel-sampler jaeger-sampler-param otel-sampler jaeger-sampler-host N/A jaeger-sampler-port N/A jaeger-trace-context-header-name N/A jaeger-debug-header N/A jaeger-baggage-header N/A jaeger-tracer-baggage-header-prefix N/A datadog-collector-port otlp-collector-port datadog-service-name otel-service-name datadog-environment N/A datadog-operation-name-override N/A datadog-priority-sampling otel-sampler datadog-sample-rate otel-sampler-ratio","title":"Configs"},{"location":"user-guide/third-party-addons/opentracing/","text":"OpenTracing \u00b6 Enables requests served by NGINX for distributed tracing via The OpenTracing Project. Using the third party module opentracing-contrib/nginx-opentracing the Ingress-Nginx Controller can configure NGINX to enable OpenTracing instrumentation. By default this feature is disabled. Usage \u00b6 To enable the instrumentation we must enable OpenTracing in the configuration ConfigMap: data: enable-opentracing: \"true\" To enable or disable instrumentation for a single Ingress, use the enable-opentracing annotation: kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/enable-opentracing: \"true\" We must also set the host to use when uploading traces: zipkin-collector-host: zipkin.default.svc.cluster.local jaeger-collector-host: jaeger-agent.default.svc.cluster.local datadog-collector-host: datadog-agent.default.svc.cluster.local NOTE: While the option is called jaeger-collector-host , you will need to point this to a jaeger-agent , and not the jaeger-collector component. Alternatively, you can set jaeger-endpoint and specify the full endpoint for uploading traces. This will use TCP and should be used for a collector rather than an agent. Next you will need to deploy a distributed tracing system which uses OpenTracing. Zipkin and Jaeger and Datadog have been tested. 
Other optional configuration options: # specifies the name to use for the server span opentracing-operation-name # specifies specifies the name to use for the location span opentracing-location-operation-name # sets whether or not to trust incoming tracing spans opentracing-trust-incoming-span # specifies the port to use when uploading traces, Default: 9411 zipkin-collector-port # specifies the service name to use for any traces created, Default: nginx zipkin-service-name # specifies sample rate for any traces created, Default: 1.0 zipkin-sample-rate # specifies the port to use when uploading traces, Default: 6831 jaeger-collector-port # specifies the endpoint to use when uploading traces to a collector instead of an agent jaeger-endpoint # specifies the service name to use for any traces created, Default: nginx jaeger-service-name # specifies the traceparent/tracestate propagation format jaeger-propagation-format # specifies the sampler to be used when sampling traces. # The available samplers are: const, probabilistic, ratelimiting, remote, Default: const jaeger-sampler-type # specifies the argument to be passed to the sampler constructor, Default: 1 jaeger-sampler-param # Specifies the custom remote sampler host to be passed to the sampler constructor. Must be a valid URL. # Default: http://127.0.0.1 jaeger-sampler-host # Specifies the custom remote sampler port to be passed to the sampler constructor. Must be a number. Default: 5778 jaeger-sampler-port # Specifies the header name used for passing trace context. Must be a string. Default: uber-trace-id jaeger-trace-context-header-name # Specifies the header name used for force sampling. Must be a string. Default: jaeger-debug-id jaeger-debug-header # Specifies the header name used to submit baggage if there is no root span. Must be a string. Default: jaeger-baggage jaeger-baggage-header # Specifies the header prefix used to propagate baggage. Must be a string. Default: uberctx- jaeger-tracer-baggage-header-prefix # specifies the port to use when uploading traces, Default 8126 datadog-collector-port # specifies the service name to use for any traces created, Default: nginx datadog-service-name # specifies the environment this trace belongs to, Default: prod datadog-environment # specifies the operation name to use for any traces collected, Default: nginx.handle datadog-operation-name-override # Specifies to use client-side sampling for distributed priority sampling and ignore sample rate, Default: true datadog-priority-sampling # specifies sample rate for any traces created, Default: 1.0 datadog-sample-rate All these options (including host) allow environment variables, such as $HOSTNAME or $HOST_IP . In the case of Jaeger, if you have a Jaeger agent running on each machine in your cluster, you can use something like $HOST_IP (which can be 'mounted' with the status.hostIP fieldpath, as described here ) to make sure traces will be sent to the local agent. Note that you can also set whether to trust incoming spans (global default is true) per-location using annotations like the following: kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/opentracing-trust-incoming-span: \"true\" Examples \u00b6 The following examples show how to deploy and test different distributed tracing systems. These example can be performed using Minikube. Zipkin \u00b6 In the rnburn/zipkin-date-server GitHub repository is an example of a dockerized date service. 
To install the example and Zipkin collector run: kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/zipkin.yaml kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/deployment.yaml Also we need to configure the Ingress-NGINX controller ConfigMap with the required values: $ echo ' apiVersion: v1 kind: ConfigMap data: enable-opentracing: \"true\" zipkin-collector-host: zipkin.default.svc.cluster.local metadata: name: ingress-nginx-controller namespace: kube-system ' | kubectl replace -f - In the Zipkin interface we can see the details: Jaeger \u00b6 Enable Ingress addon in Minikube: $ minikube addons enable ingress Add Minikube IP to /etc/hosts: $ echo \"$(minikube ip) example.com\" | sudo tee -a /etc/hosts Apply a basic Service and Ingress Resource: # Create Echoheaders Deployment $ kubectl run echoheaders --image=registry.k8s.io/echoserver:1.4 --replicas=1 --port=8080 # Expose as a Cluster-IP $ kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x # Apply the Ingress Resource $ echo ' apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: echo-ingress spec: ingressClassName: nginx rules: - host: example.com http: paths: - path: /echo pathType: Prefix backend: service: name: echoheaders-x port: number: 80 ' | kubectl apply -f - Enable OpenTracing and set the jaeger-collector-host: $ echo ' apiVersion: v1 kind: ConfigMap data: enable-opentracing: \"true\" jaeger-collector-host: jaeger-agent.default.svc.cluster.local metadata: name: ingress-nginx-controller namespace: kube-system ' | kubectl replace -f - Apply the Jaeger All-In-One Template: $ kubectl apply -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/all-in-one/jaeger-all-in-one-template.yml Make a few requests to the Service: $ curl example.com/echo -d \"meow\" CLIENT VALUES: client_address=172.17.0.5 command=POST real path=/echo query=nil request_version=1.1 request_uri=http://example.com:8080/echo SERVER VALUES: server_version=nginx: 1.10.0 - lua: 10001 HEADERS RECEIVED: accept=*/* connection=close content-length=4 content-type=application/x-www-form-urlencoded host=example.com user-agent=curl/7.54.0 x-forwarded-for=192.168.99.1 x-forwarded-host=example.com x-forwarded-port=80 x-forwarded-proto=http x-original-uri=/echo x-real-ip=192.168.99.1 x-scheme=http BODY: meow View the Jaeger UI: $ minikube service jaeger-query --url http://192.168.99.100:30183 In the Jaeger interface we can see the details:","title":"OpenTracing"},{"location":"user-guide/third-party-addons/opentracing/#opentracing","text":"Enables requests served by NGINX for distributed tracing via The OpenTracing Project. Using the third party module opentracing-contrib/nginx-opentracing the Ingress-Nginx Controller can configure NGINX to enable OpenTracing instrumentation. 
By default this feature is disabled.","title":"OpenTracing"},{"location":"user-guide/third-party-addons/opentracing/#usage","text":"To enable the instrumentation we must enable OpenTracing in the configuration ConfigMap: data: enable-opentracing: \"true\" To enable or disable instrumentation for a single Ingress, use the enable-opentracing annotation: kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/enable-opentracing: \"true\" We must also set the host to use when uploading traces: zipkin-collector-host: zipkin.default.svc.cluster.local jaeger-collector-host: jaeger-agent.default.svc.cluster.local datadog-collector-host: datadog-agent.default.svc.cluster.local NOTE: While the option is called jaeger-collector-host , you will need to point this to a jaeger-agent , and not the jaeger-collector component. Alternatively, you can set jaeger-endpoint and specify the full endpoint for uploading traces. This will use TCP and should be used for a collector rather than an agent. Next you will need to deploy a distributed tracing system which uses OpenTracing. Zipkin and Jaeger and Datadog have been tested. Other optional configuration options: # specifies the name to use for the server span opentracing-operation-name # specifies specifies the name to use for the location span opentracing-location-operation-name # sets whether or not to trust incoming tracing spans opentracing-trust-incoming-span # specifies the port to use when uploading traces, Default: 9411 zipkin-collector-port # specifies the service name to use for any traces created, Default: nginx zipkin-service-name # specifies sample rate for any traces created, Default: 1.0 zipkin-sample-rate # specifies the port to use when uploading traces, Default: 6831 jaeger-collector-port # specifies the endpoint to use when uploading traces to a collector instead of an agent jaeger-endpoint # specifies the service name to use for any traces created, Default: nginx jaeger-service-name # specifies the traceparent/tracestate propagation format jaeger-propagation-format # specifies the sampler to be used when sampling traces. # The available samplers are: const, probabilistic, ratelimiting, remote, Default: const jaeger-sampler-type # specifies the argument to be passed to the sampler constructor, Default: 1 jaeger-sampler-param # Specifies the custom remote sampler host to be passed to the sampler constructor. Must be a valid URL. # Default: http://127.0.0.1 jaeger-sampler-host # Specifies the custom remote sampler port to be passed to the sampler constructor. Must be a number. Default: 5778 jaeger-sampler-port # Specifies the header name used for passing trace context. Must be a string. Default: uber-trace-id jaeger-trace-context-header-name # Specifies the header name used for force sampling. Must be a string. Default: jaeger-debug-id jaeger-debug-header # Specifies the header name used to submit baggage if there is no root span. Must be a string. Default: jaeger-baggage jaeger-baggage-header # Specifies the header prefix used to propagate baggage. Must be a string. 
Default: uberctx- jaeger-tracer-baggage-header-prefix # specifies the port to use when uploading traces, Default 8126 datadog-collector-port # specifies the service name to use for any traces created, Default: nginx datadog-service-name # specifies the environment this trace belongs to, Default: prod datadog-environment # specifies the operation name to use for any traces collected, Default: nginx.handle datadog-operation-name-override # Specifies to use client-side sampling for distributed priority sampling and ignore sample rate, Default: true datadog-priority-sampling # specifies sample rate for any traces created, Default: 1.0 datadog-sample-rate All these options (including host) allow environment variables, such as $HOSTNAME or $HOST_IP . In the case of Jaeger, if you have a Jaeger agent running on each machine in your cluster, you can use something like $HOST_IP (which can be 'mounted' with the status.hostIP fieldpath, as described here ) to make sure traces will be sent to the local agent. Note that you can also set whether to trust incoming spans (global default is true) per-location using annotations like the following: kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/opentracing-trust-incoming-span: \"true\"","title":"Usage"},{"location":"user-guide/third-party-addons/opentracing/#examples","text":"The following examples show how to deploy and test different distributed tracing systems. These example can be performed using Minikube.","title":"Examples"},{"location":"user-guide/third-party-addons/opentracing/#zipkin","text":"In the rnburn/zipkin-date-server GitHub repository is an example of a dockerized date service. To install the example and Zipkin collector run: kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/zipkin.yaml kubectl create -f https://raw.githubusercontent.com/rnburn/zipkin-date-server/master/kubernetes/deployment.yaml Also we need to configure the Ingress-NGINX controller ConfigMap with the required values: $ echo ' apiVersion: v1 kind: ConfigMap data: enable-opentracing: \"true\" zipkin-collector-host: zipkin.default.svc.cluster.local metadata: name: ingress-nginx-controller namespace: kube-system ' | kubectl replace -f - In the Zipkin interface we can see the details:","title":"Zipkin"},{"location":"user-guide/third-party-addons/opentracing/#jaeger","text":"Enable Ingress addon in Minikube: $ minikube addons enable ingress Add Minikube IP to /etc/hosts: $ echo \"$(minikube ip) example.com\" | sudo tee -a /etc/hosts Apply a basic Service and Ingress Resource: # Create Echoheaders Deployment $ kubectl run echoheaders --image=registry.k8s.io/echoserver:1.4 --replicas=1 --port=8080 # Expose as a Cluster-IP $ kubectl expose deployment echoheaders --port=80 --target-port=8080 --name=echoheaders-x # Apply the Ingress Resource $ echo ' apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: echo-ingress spec: ingressClassName: nginx rules: - host: example.com http: paths: - path: /echo pathType: Prefix backend: service: name: echoheaders-x port: number: 80 ' | kubectl apply -f - Enable OpenTracing and set the jaeger-collector-host: $ echo ' apiVersion: v1 kind: ConfigMap data: enable-opentracing: \"true\" jaeger-collector-host: jaeger-agent.default.svc.cluster.local metadata: name: ingress-nginx-controller namespace: kube-system ' | kubectl replace -f - Apply the Jaeger All-In-One Template: $ kubectl apply -f 
https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/all-in-one/jaeger-all-in-one-template.yml Make a few requests to the Service: $ curl example.com/echo -d \"meow\" CLIENT VALUES: client_address=172.17.0.5 command=POST real path=/echo query=nil request_version=1.1 request_uri=http://example.com:8080/echo SERVER VALUES: server_version=nginx: 1.10.0 - lua: 10001 HEADERS RECEIVED: accept=*/* connection=close content-length=4 content-type=application/x-www-form-urlencoded host=example.com user-agent=curl/7.54.0 x-forwarded-for=192.168.99.1 x-forwarded-host=example.com x-forwarded-port=80 x-forwarded-proto=http x-original-uri=/echo x-real-ip=192.168.99.1 x-scheme=http BODY: meow View the Jaeger UI: $ minikube service jaeger-query --url http://192.168.99.100:30183 In the Jaeger interface we can see the details:","title":"Jaeger"}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml index 17515081d..df8d27fa9 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -1,223 +1,223 @@ https://kubernetes.github.io/ingress-nginx/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/how-it-works/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/troubleshooting/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/kubectl-plugin/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/deploy/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/deploy/baremetal/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/deploy/rbac/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/deploy/upgrade/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/deploy/hardening-guide/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/log-format/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/cli-arguments/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/custom-errors/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/default-backend/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/fcgi-services/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/ingress-path-matching/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/external-articles/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/monitoring/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/ - 2023-09-09 + 2023-09-11 daily 
https://kubernetes.github.io/ingress-nginx/user-guide/tls/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/third-party-addons/modsecurity/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/third-party-addons/opentracing/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/user-guide/third-party-addons/opentelemetry/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/PREREQUISITES/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/affinity/cookie/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/auth/basic/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/auth/client-certs/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/auth/external-auth/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/auth/oauth-external-auth/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/customization/configuration-snippets/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/customization/custom-configuration/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/customization/custom-errors/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/customization/custom-headers/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/customization/external-auth-headers/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/customization/ssl-dh-param/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/customization/sysctl/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/docker-registry/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/grpc/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/multi-tls/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/rewrite/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/static-ip/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/tls-termination/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/psp/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/openpolicyagent/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/examples/canary/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/developer-guide/getting-started/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/developer-guide/code-overview/ - 2023-09-09 + 2023-09-11 daily https://kubernetes.github.io/ingress-nginx/faq/ - 2023-09-09 + 2023-09-11 daily \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz index 964d0c8c8..e8a575f21 100644 Binary files a/sitemap.xml.gz and b/sitemap.xml.gz differ diff --git a/user-guide/nginx-configuration/configmap/index.html b/user-guide/nginx-configuration/configmap/index.html index 27cee0ebf..ab477c35e 100644 --- a/user-guide/nginx-configuration/configmap/index.html +++ b/user-guide/nginx-configuration/configmap/index.html @@ -1,7 +1,7 @@ 
ConfigMap - Ingress-Nginx Controller

ConfigMaps

ConfigMaps allow you to decouple configuration artifacts from image content to keep containerized applications portable.

The ConfigMap API resource stores configuration data as key-value pairs. The data provides the configuration for the system components of the nginx-controller.

In order to overwrite nginx-controller configuration values as seen in config.go, you can add key-value pairs to the data section of the ConfigMap. For example:

data:
   map-hash-bucket-size: "128"
   ssl-protocols: SSLv2

Important

The keys and values in a ConfigMap can only be strings. This means that if we want a boolean value, we need to quote it, like "true" or "false". The same applies to numbers, like "100".
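
A minimal sketch combining these quoting rules (the keys are real options from the table below; the values are placeholders, and slice types are explained just below):

data:
   enable-brotli: "true"                      # boolean value, quoted
   keep-alive: "75"                           # numeric value, quoted
   block-cidrs: "10.0.0.0/8,192.168.0.0/16"   # []string provided as a comma-delimited string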

"Slice" types (defined below as []string or []int) can be provided as a comma-delimited string.

Configuration options

The following table shows each configuration option's name, type, and default value:

name type default notes
add-headers string ""
allow-backend-server-header bool "false"
allow-cross-namespace-resources bool "true"
allow-snippet-annotations bool true
annotations-risk-level string Critical
annotation-value-word-blocklist string array ""
hide-headers string array empty
access-log-params string ""
access-log-path string "/var/log/nginx/access.log"
http-access-log-path string ""
stream-access-log-path string ""
enable-access-log-for-default-backend bool "false"
error-log-path string "/var/log/nginx/error.log"
enable-modsecurity bool "false"
modsecurity-snippet string ""
enable-owasp-modsecurity-crs bool "false"
client-header-buffer-size string "1k"
client-header-timeout int 60
client-body-buffer-size string "8k"
client-body-timeout int 60
disable-access-log bool false
disable-ipv6 bool false
disable-ipv6-dns bool false
enable-underscores-in-headers bool false
enable-ocsp bool false
ignore-invalid-headers bool true
retry-non-idempotent bool "false"
error-log-level string "notice"
http2-max-field-size string "" DEPRECATED in favour of large_client_header_buffers
http2-max-header-size string "" DEPRECATED in favour of large_client_header_buffers
http2-max-requests int 0 DEPRECATED in favour of keepalive_requests
http2-max-concurrent-streams int 128
hsts bool "true"
hsts-include-subdomains bool "true"
hsts-max-age string "15724800"
hsts-preload bool "false"
keep-alive int 75
keep-alive-requests int 1000
large-client-header-buffers string "4 8k"
log-format-escape-none bool "false"
log-format-escape-json bool "false"
log-format-upstream string $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id
log-format-stream string [$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time
enable-multi-accept bool "true"
max-worker-connections int 16384
max-worker-open-files int 0
map-hash-bucket-size int 64
nginx-status-ipv4-whitelist []string "127.0.0.1"
nginx-status-ipv6-whitelist []string "::1"
proxy-real-ip-cidr []string "0.0.0.0/0"
proxy-set-headers string ""
server-name-hash-max-size int 1024
server-name-hash-bucket-size int <size of the processor’s cache line>
proxy-headers-hash-max-size int 512
proxy-headers-hash-bucket-size int 64
plugins []string
reuse-port bool "true"
server-tokens bool "false"
ssl-ciphers string "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384"
ssl-ecdh-curve string "auto"
ssl-dh-param string ""
ssl-protocols string "TLSv1.2 TLSv1.3"
ssl-session-cache bool "true"
ssl-session-cache-size string "10m"
ssl-session-tickets bool "false"
ssl-session-ticket-key string <Randomly Generated>
ssl-session-timeout string "10m"
ssl-buffer-size string "4k"
use-proxy-protocol bool "false"
proxy-protocol-header-timeout string "5s"
use-gzip bool "false"
use-geoip bool "true"
use-geoip2 bool "false"
enable-brotli bool "false"
brotli-level int 4
brotli-min-length int 20
brotli-types string "application/xml+rss application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component"
use-http2 bool "true"
gzip-disable string ""
gzip-level int 1
gzip-min-length int 256
gzip-types string "application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/javascript text/plain text/x-component"
worker-processes string <Number of CPUs>
worker-cpu-affinity string ""
worker-shutdown-timeout string "240s"
load-balance string "round_robin"
variables-hash-bucket-size int 128
variables-hash-max-size int 2048
upstream-keepalive-connections int 320
upstream-keepalive-time string "1h"
upstream-keepalive-timeout int 60
upstream-keepalive-requests int 10000
limit-conn-zone-variable string "$binary_remote_addr"
proxy-stream-timeout string "600s"
proxy-stream-next-upstream bool "true"
proxy-stream-next-upstream-timeout string "600s"
proxy-stream-next-upstream-tries int 3
proxy-stream-responses int 1
bind-address []string ""
use-forwarded-headers bool "false"
enable-real-ip bool "false"
forwarded-for-header string "X-Forwarded-For"
compute-full-forwarded-for bool "false"
proxy-add-original-uri-header bool "false"
generate-request-id bool "true"
enable-opentracing bool "false"
opentracing-operation-name string ""
opentracing-location-operation-name string ""
zipkin-collector-host string ""
zipkin-collector-port int 9411
zipkin-service-name string "nginx"
zipkin-sample-rate float 1.0
jaeger-collector-host string ""
jaeger-collector-port int 6831
jaeger-endpoint string ""
jaeger-service-name string "nginx"
jaeger-propagation-format string "jaeger"
jaeger-sampler-type string "const"
jaeger-sampler-param string "1"
jaeger-sampler-host string "http://127.0.0.1"
jaeger-sampler-port int 5778
jaeger-trace-context-header-name string uber-trace-id
jaeger-debug-header string uber-debug-id
jaeger-baggage-header string jaeger-baggage
jaeger-trace-baggage-header-prefix string uberctx-
datadog-collector-host string ""
datadog-collector-port int 8126
datadog-service-name string "nginx"
datadog-environment string "prod"
datadog-operation-name-override string "nginx.handle"
datadog-priority-sampling bool "true"
datadog-sample-rate float 1.0
enable-opentelemetry bool "false"
opentelemetry-trust-incoming-span bool "true"
opentelemetry-operation-name string ""
opentelemetry-config string "/etc/nginx/opentelemetry.toml"
otlp-collector-host string ""
otlp-collector-port int 4317
otel-max-queuesize int
otel-schedule-delay-millis int
otel-max-export-batch-size int
otel-service-name string "nginx"
otel-sampler string "AlwaysOff"
otel-sampler-parent-based bool "false"
otel-sampler-ratio float 0.01
main-snippet string ""
http-snippet string ""
server-snippet string ""
stream-snippet string ""
location-snippet string ""
custom-http-errors []int []int{}
proxy-body-size string "1m"
proxy-connect-timeout int 5
proxy-read-timeout int 60
proxy-send-timeout int 60
proxy-buffers-number int 4
proxy-buffer-size string "4k"
proxy-cookie-path string "off"
proxy-cookie-domain string "off"
proxy-next-upstream string "error timeout"
proxy-next-upstream-timeout int 0
proxy-next-upstream-tries int 3
proxy-redirect-from string "off"
proxy-request-buffering string "on"
ssl-redirect bool "true"
force-ssl-redirect bool "false"
denylist-source-range []string []string{}
whitelist-source-range []string []string{}
skip-access-log-urls []string []string{}
limit-rate int 0
limit-rate-after int 0
lua-shared-dicts string ""
http-redirect-code int 308
proxy-buffering string "off"
limit-req-status-code int 503
limit-conn-status-code int 503
enable-syslog bool false
syslog-host string ""
syslog-port int 514
no-tls-redirect-locations string "/.well-known/acme-challenge"
global-auth-url string ""
global-auth-method string ""
global-auth-signin string ""
global-auth-signin-redirect-param string "rd"
global-auth-response-headers string ""
global-auth-request-redirect string ""
global-auth-snippet string ""
global-auth-cache-key string ""
global-auth-cache-duration string "200 202 401 5m"
no-auth-locations string "/.well-known/acme-challenge"
block-cidrs []string ""
block-user-agents []string ""
block-referers []string ""
proxy-ssl-location-only bool "false"
default-type string "text/html"
global-rate-limit-memcached-host string ""
global-rate-limit-memcached-port int 11211
global-rate-limit-memcached-connect-timeout int 50
global-rate-limit-memcached-max-idle-timeout int 10000
global-rate-limit-memcached-pool-size int 50
global-rate-limit-status-code int 429
service-upstream bool "false"
ssl-reject-handshake bool "false"
debug-connections []string "127.0.0.1,1.1.1.1/24"
strict-validate-path-type bool "false" (v1.7.x)

add-headers

Sets custom headers from a named ConfigMap before sending traffic to the client. See also proxy-set-headers and the sketch below.
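
A minimal sketch, assuming a hypothetical custom-headers ConfigMap in the ingress-nginx namespace; the option takes a namespace/name reference, and each key of the referenced ConfigMap becomes a response header:

# in the controller ConfigMap
data:
   add-headers: "ingress-nginx/custom-headers"

# the referenced ConfigMap (hypothetical name); each key/value pair is added to client responses
apiVersion: v1
kind: ConfigMap
metadata:
   name: custom-headers
   namespace: ingress-nginx
data:
   X-Custom-Header: "my-value"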

allow-backend-server-header

Enables the return of the header Server from the backend instead of the generic nginx string. default: is disabled

allow-cross-namespace-resources

Enables users to consume cross-namespace resources in annotations, as was previously allowed by default. default: true

Annotations that may be impacted by this change: auth-secret, auth-proxy-set-header, auth-tls-secret, fastcgi-params-configmap, and proxy-ssl-secret.

This option will be defaulted to false in the next major release

allow-snippet-annotations

Enables Ingress to parse and add -snippet annotations/directives created by the user. default: true

Warning: We recommend enabling this option only if you TRUST users with permission to create Ingress objects, as this may allow a user to add restricted configurations to the final nginx.conf file

This option will be defaulted to false in the next major release
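
A minimal sketch of opting in via the controller ConfigMap and then using a snippet annotation on an Ingress (weigh the warning above first; the snippet contents are purely illustrative):

# controller ConfigMap
data:
   allow-snippet-annotations: "true"

# an Ingress using a snippet annotation
kind: Ingress
metadata:
   annotations:
      nginx.ingress.kubernetes.io/configuration-snippet: |
         more_set_headers "X-Request-Id: $req_id";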

annotations-risk-level

Represents the maximum risk level accepted for an annotation. If the accepted risk is, for instance, Medium, annotations with risk High or Critical will not be accepted.

Accepted values are Critical, High, Medium and Low.

Defaults to Critical but will be changed to High on the next minor release
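
For example, to reject any annotation classified above High risk (a one-line sketch):

data:
   annotations-risk-level: High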

annotation-value-word-blocklist

Contains a comma-separated list of chars/words that are well known to be used to abuse Ingress configuration and must be blocked. Related to CVE-2021-25742

When an annotation is detected with a value that matches one of the blocked bad words, the whole Ingress won't be configured.

default: ""

When doing this, the default blocklist is overridden, which means that the Ingress admin should add all the words that should be blocked; here is a suggested blocklist.

suggested: "load_module,lua_package,_by_lua,location,root,proxy_pass,serviceaccount,{,},',\""

hide-headers

Sets additional headers that will not be passed from the upstream server to the client response. default: empty

References: https://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_hide_header
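
For example, to strip headers that reveal backend implementation details (the header names are placeholders):

data:
   hide-headers: "X-Powered-By,X-AspNet-Version"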

access-log-params

Additional params for access_log. For example, buffer=16k, gzip, flush=1m

References: https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log
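
A sketch, assuming the value is appended verbatim to nginx's access_log directive, so the parameters are space-separated exactly as nginx expects them:

data:
   access-log-params: "buffer=16k gzip flush=1m"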

access-log-path

Access log path for both http and stream context. Goes to /var/log/nginx/access.log by default.

Note: the file /var/log/nginx/access.log is a symlink to /dev/stdout

http-access-log-path

Access log path for http context globally. default: ""

Note: If not specified, the access-log-path will be used.

stream-access-log-path

Access log path for stream context globally. default: ""

Note: If not specified, the access-log-path will be used.

enable-access-log-for-default-backend

Enables access logging for the default backend. default: is disabled.

error-log-path

Error log path. Goes to /var/log/nginx/error.log by default.

Note: the file /var/log/nginx/error.log is a symlink to /dev/stderr

References: https://nginx.org/en/docs/ngx_core_module.html#error_log
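
Tying the logging keys above together, a hedged sketch (paths and access_log parameters are illustrative; ConfigMap name as assumed earlier):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  access-log-path: "/var/log/nginx/access.log"
  # Separate path for the stream context; falls back to access-log-path if unset.
  stream-access-log-path: "/var/log/nginx/stream-access.log"
  error-log-path: "/var/log/nginx/error.log"
  # Extra access_log parameters, passed through to nginx.
  access-log-params: "buffer=16k flush=1m"
  enable-access-log-for-default-backend: "true"
```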

enable-modsecurity

Enables the modsecurity module for NGINX. default: is disabled

enable-owasp-modsecurity-crs

Enables the OWASP ModSecurity Core Rule Set (CRS). default: is disabled

modsecurity-snippet

Adds custom rules to the modsecurity section of the nginx configuration.
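
A hedged sketch combining the three ModSecurity keys above (the directives shown are standard ModSecurity settings used as an example, not a recommendation; ConfigMap name as assumed earlier):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  enable-modsecurity: "true"
  enable-owasp-modsecurity-crs: "true"
  # Multi-line value appended to the generated ModSecurity configuration.
  modsecurity-snippet: |
    SecRuleEngine On
    SecRequestBodyAccess On
```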

client-header-buffer-size

Allows configuring a custom buffer size for reading the client request header.

References: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size

client-header-timeout

Defines a timeout for reading client request header, in seconds.

References: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_timeout

client-body-buffer-size

Sets buffer size for reading client request body.

References: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_buffer_size

client-body-timeout

Defines a timeout for reading client request body, in seconds.

References: https://nginx.org/en/docs/http/ngx_http_core_module.html#client_body_timeout
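
Since ConfigMap values can only be strings, numeric timeouts are quoted as well. An illustrative sketch of the four client buffer/timeout keys above, using the documented defaults (ConfigMap name as assumed earlier):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  client-header-buffer-size: "1k"
  client-header-timeout: "60"
  client-body-buffer-size: "8k"
  client-body-timeout: "60"
```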

disable-access-log

Disables the access log for the entire Ingress Controller. default: false

References: https://nginx.org/en/docs/http/ngx_http_log_module.html#access_log

disable-ipv6

Disables listening on IPv6. default: false; IPv6 listening is enabled.

disable-ipv6-dns

Disables IPv6 for the nginx DNS resolver. default: false; IPv6 resolving is enabled.

enable-underscores-in-headers

Enables underscores in header names. default: is disabled

enable-ocsp

Enables Online Certificate Status Protocol (OCSP) stapling support. default: is disabled

ignore-invalid-headers

Sets whether header fields with invalid names should be ignored. default: is enabled

retry-non-idempotent

Since 1.9.13 NGINX will not retry non-idempotent requests (POST, LOCK, PATCH) in case of an error in the upstream server. The previous behavior can be restored using the value "true".

error-log-level

Configures the logging level of errors. The valid levels, in order of increasing severity, are debug, info, notice, warn, error, crit, alert and emerg.

References: https://nginx.org/en/docs/ngx_core_module.html#error_log

http2-max-field-size

Warning

This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use large-client-header-buffers instead.

Limits the maximum size of an HPACK-compressed request header field.

References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_field_size

http2-max-header-size

Warning

This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use large-client-header-buffers instead.

Limits the maximum size of the entire request header list after HPACK decompression.

References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_header_size

http2-max-requests

Warning

This feature was deprecated in 1.1.3 and will be removed in 1.3.0. Use upstream-keepalive-requests instead.

Sets the maximum number of requests (including push requests) that can be served through one HTTP/2 connection, after which the next client request will lead to connection closing and the need of establishing a new connection.

References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_requests

http2-max-concurrent-streams

Sets the maximum number of concurrent HTTP/2 streams in a connection.

References: https://nginx.org/en/docs/http/ngx_http_v2_module.html#http2_max_concurrent_streams
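
Since the three deprecated http2-* keys above point at replacements, a hedged sketch of what a migrated configuration might look like (sizes and counts are illustrative; ConfigMap name as assumed earlier):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  # Replaces http2-max-field-size / http2-max-header-size.
  large-client-header-buffers: "4 16k"
  # Replaces http2-max-requests.
  upstream-keepalive-requests: "10000"
  http2-max-concurrent-streams: "128"
```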

hsts

Enables or disables the HSTS header in servers running SSL. HTTP Strict Transport Security (often abbreviated as HSTS) is a security feature (HTTP header) that tells browsers that the site should only be accessed using HTTPS instead of HTTP. It provides protection against protocol downgrade attacks and cookie theft.

References:

hsts-include-subdomains

Enables or disables the use of HSTS in all the subdomains of the server-name.

hsts-max-age

Sets the time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS.

hsts-preload

Enables or disables the preload attribute in the HSTS feature (when it is enabled).
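
Putting the four HSTS keys above together; a sketch only, with an illustrative one-year max-age rather than the documented default (ConfigMap name as assumed earlier):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  hsts: "true"
  hsts-include-subdomains: "false"
  hsts-max-age: "31536000"   # one year, illustrative
  hsts-preload: "false"
```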

keep-alive

Sets the time, in seconds, during which a keep-alive client connection will stay open on the server side. The zero value disables keep-alive client connections.

References: https://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout
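
As a small illustration (see the Important note just below before lowering this), the documented default spelled out as a quoted string:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  keep-alive: "75"   # seconds; avoid "0" with HTTP/2 (see the note below)
```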

Important

Setting keep-alive: '0' will most likely break concurrent http/2 requests due to changes introduced with nginx 1.19.7

Changes with nginx 1.19.7                                        16 Feb 2021
 
     *) Change: connections handling in HTTP/2 has been changed to better
        match HTTP/1.x; the "http2_recv_timeout", "http2_idle_timeout", and