diff --git a/deploy/baremetal/index.html b/deploy/baremetal/index.html
index 924a8964e..b74eaf6e8 100644
--- a/deploy/baremetal/index.html
+++ b/deploy/baremetal/index.html
@@ -387,6 +387,13 @@
The rest of this document describes a few recommended approaches to deploying the NGINX Ingress controller inside a Kubernetes cluster running on bare-metal.
MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the use of LoadBalancer Services within any cluster.

This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes. In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details.
Note
Describing the other supported configuration modes is out of the scope of this document.
Warning

MetalLB is currently in beta. Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly.

MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions.
MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. In the simplest possible scenario, the pool is composed of the IP addresses of Kubernetes nodes, but IP addresses can also be handed out by a DHCP server.
Example
Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3
After creating the following ConfigMap, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly.
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.2-203.0.113.3
$ kubectl -n ingress-nginx get svc
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)
default-http-backend   ClusterIP      10.0.64.249    <none>        80/TCP
ingress-nginx          LoadBalancer   10.0.220.217   203.0.113.3   80:30100/TCP,443:30101/TCP
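MetalLB took 203.0.113.3 from the two-address pool defined above. As a standalone sanity check (not part of MetalLB), a pool string in the start-end form used by the ConfigMap can be expanded with Python's ipaddress module to see exactly which addresses it covers:

```python
import ipaddress

def expand_pool(pool: str) -> list[str]:
    """Expand a MetalLB-style 'start-end' IPv4 range into individual addresses."""
    start_s, end_s = pool.split("-")
    start = int(ipaddress.IPv4Address(start_s))
    end = int(ipaddress.IPv4Address(end_s))
    return [str(ipaddress.IPv4Address(i)) for i in range(start, end + 1)]

# The pool from the ConfigMap above:
print(expand_pool("203.0.113.2-203.0.113.3"))  # → ['203.0.113.2', '203.0.113.3']
```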
As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service:
$ curl -D- http://203.0.113.3 -H 'Host: myapp.example.com'
HTTP/1.1 200 OK
Server: nginx/1.15.2
Tip
In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more detail in Traffic policies as well as in the next section.
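For reference, the traffic policy is a field of the Service spec itself. A minimal fragment follows; the selector label and named target ports are assumptions and must match your actual ingress-nginx deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed label
  ports:
  - name: http
    port: 80
    targetPort: http    # assumed named container port
  - name: https
    port: 443
    targetPort: https   # assumed named container port
```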
Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide.
@@ -1166,7 +1264,7 @@ requests.
...and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2

@@ -1203,7 +1301,7 @@
...the NGINX Ingress controller should be scheduled or not scheduled.

Example
In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is <None>)
$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2

@@ -1234,11 +1332,17 @@
...update the status of Ingress objects it manages.

Despite the fact that there is no load balancer providing a public IP address to the NGINX Ingress controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service.

Warning

There is more to setting externalIPs than just enabling the NGINX Ingress controller to update the status of Ingress objects. Please read about this option in the Services page of the official Kubernetes documentation as well as the section about External IPs in this document for more information.

Example
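Such a change amounts to merging a small fragment into the ingress-nginx Service spec; the addresses below are the example node IPs used throughout this document:

```yaml
# Fragment to merge into the ingress-nginx Service spec
# (203.0.113.x are the example node addresses from this document)
spec:
  externalIPs:
  - 203.0.113.2
  - 203.0.113.3
```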
Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)
$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2

@@ -1269,7 +1373,7 @@
...for generating redirect URLs that take into account the URL used by external clients.

Example
Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain, are generated without the NodePort:

-$ curl http://myapp.example.com:30100
+$ curl -D- http://myapp.example.com:30100
 HTTP/1.1 308 Permanent Redirect
 Server: nginx/1.15.2
 Location: https://myapp.example.com/  #-> missing NodePort in HTTPS redirect

@@ -1384,7 +1488,52 @@
This is particularly suitable for private Kubernetes clusters where none of the nodes and/or masters... Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below:
External IPs

Source IP address

This method does not allow preserving the source IP of HTTP requests in any manner, so it is not recommended despite its apparent simplicity.

The externalIPs Service option was previously mentioned in the NodePort section.

As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that Service. These IP addresses must belong to the target node.

Example

Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

and the following ingress-nginx NodePort Service

$ kubectl -n ingress-nginx get svc
NAME            TYPE       CLUSTER-IP     PORT(S)
ingress-nginx   NodePort   10.0.220.217   80:30100/TCP,443:30101/TCP

One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port:

spec:
  externalIPs:
  - 203.0.113.2
  - 203.0.113.3

$ curl -D- http://myapp.example.com:30100
HTTP/1.1 200 OK
Server: nginx/1.15.2

$ curl -D- http://myapp.example.com
HTTP/1.1 200 OK
Server: nginx/1.15.2

We assume the myapp.example.com subdomain above resolves to both the 203.0.113.2 and 203.0.113.3 IP addresses.

MetalLB: Layer 2

diff --git a/examples/PREREQUISITES/index.html b/examples/PREREQUISITES/index.html
index d41c60b3b..cfd283079 100644
--- a/examples/PREREQUISITES/index.html
+++ b/examples/PREREQUISITES/index.html
@@ -1320,7 +1320,7 @@
...which you can deploy as follows

1m    1m    1   {service-controller}   Normal   CreatingLoadBalancer   Creating load balancer
16s   16s   1   {service-controller}   Normal   CreatedLoadBalancer    Created load balancer

-$ curl 108.59.87.126
+$ curl 108.59.87.136

CLIENT VALUES:
client_address=10.240.0.3
command=GET

diff --git a/images/baremetal/metallb.gliffy b/images/baremetal/metallb.gliffy
new file mode 100644
index 000000000..2ad2de0ec
--- /dev/null
+++ b/images/baremetal/metallb.gliffy
@@ -0,0 +1 @@
+(machine-generated Gliffy JSON source for the "MetalLB: Layer 2" diagram: a 3-node cluster, the MetalLB controller, an IP pool, and the node addresses 203.0.113.1-203.0.113.3)
","tid":null,"valign":"middle","vposition":"none","hposition":"none"}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":640.0,"y":230.0,"rotation":29.999999999999996,"id":382,"width":85.0,"height":19.0,"uid":"com.gliffy.shape.basic.basic_v1.default.text","order":85,"lockAspectRatio":false,"lockShape":false,"graphic":{"type":"Text","Text":{"overflow":"none","paddingTop":2,"paddingRight":2,"paddingBottom":2,"paddingLeft":2,"outerPaddingTop":6,"outerPaddingRight":6,"outerPaddingBottom":2,"outerPaddingLeft":6,"type":"fixed","lineTValue":null,"linePerpValue":null,"cardinalityType":null,"html":"203.0.113.3
","tid":null,"valign":"middle","vposition":"none","hposition":"none"}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":582.8644243004547,"y":236.5,"rotation":0.0,"id":229,"width":88.0,"height":100.0,"uid":"com.gliffy.shape.basic.basic_v1.default.group","order":54,"lockAspectRatio":false,"lockShape":false,"children":[{"x":61.5,"y":56.0,"rotation":0.0,"id":199,"width":3.0,"height":3.0,"uid":"com.gliffy.shape.basic.basic_v1.default.circle","order":53,"lockAspectRatio":true,"lockShape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.ellipse.basic_v1","strokeWidth":2.0,"strokeColor":"#333333","fillColor":"#000000","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shadowX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":19.0,"y":52.5,"rotation":0.0,"id":200,"width":50.0,"height":10.0,"uid":"com.gliffy.shape.basic.basic_v1.default.rectangle","order":51,"lockAspectRatio":false,"lockShape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.rectangle.basic_v1","strokeWidth":0.0,"strokeColor":"#333333","fillColor":"#FFFFFF","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shadowX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":61.5,"y":43.0,"rotation":0.0,"id":202,"width":3.0,"height":3.0,"uid":"com.gliffy.shape.basic.basic_v1.default.circle","order":49,"lockAspectRatio":true,"lockShape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.ellipse.basic_v1","strokeWidth":2.0,"strokeColor":"#333333","fillColor":"#000000","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shadowX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":19.0,"y":39.5,"rotation":0.0,"id":203,"width":50.0,"height":10.0,"uid":"com.gliffy.shape.basic.basic_v1.default.rectangle","order":47,"lockAspectRatio":false,"lockSh
ape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.rectangle.basic_v1","strokeWidth":0.0,"strokeColor":"#333333","fillColor":"#FFFFFF","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shadowX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":-6.0,"y":6.0,"rotation":90.0,"id":204,"width":100.0,"height":88.0,"uid":"com.gliffy.shape.basic.basic_v1.default.hexagon","order":45,"lockAspectRatio":false,"lockShape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.hexagon.basic_v1","strokeWidth":0.0,"strokeColor":"#000000","fillColor":"#3a5de9","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shadowX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"}],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":599.3644243004546,"y":256.5,"rotation":0.0,"id":383,"width":53.00000000000006,"height":60.0,"uid":"com.gliffy.shape.basic.basic_v1.default.group","order":86,"lockAspectRatio":false,"lockShape":false,"children":[{"x":0.0,"y":5.5,"rotation":0.0,"id":384,"width":53.0,"height":49.0,"uid":"com.gliffy.shape.basic.basic_v1.default.text","order":90,"lockAspectRatio":false,"lockShape":false,"graphic":{"type":"Text","Text":{"overflow":"none","paddingTop":2,"paddingRight":2,"paddingBottom":2,"paddingLeft":2,"outerPaddingTop":6,"outerPaddingRight":6,"outerPaddingBottom":2,"outerPaddingLeft":6,"type":"fixed","lineTValue":null,"linePerpValue":null,"cardinalityType":null,"html":"N
","tid":null,"valign":"middle","vposition":"none","hposition":"none"}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":-3.599999999999966,"y":3.6000000000000227,"rotation":90.0,"id":385,"width":60.0,"height":52.8,"uid":"com.gliffy.shape.basic.basic_v1.default.hexagon","order":88,"lockAspectRatio":true,"lockShape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.hexagon.basic_v1","strokeWidth":0.0,"strokeColor":"#000000","fillColor":"#00963a","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shadowX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"}],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":289.5310909671214,"y":236.5,"rotation":0.0,"id":233,"width":88.0,"height":100.0,"uid":"com.gliffy.shape.basic.basic_v1.default.group","order":58,"lockAspectRatio":false,"lockShape":false,"children":[{"x":61.5,"y":56.0,"rotation":0.0,"id":161,"width":3.0,"height":3.0,"uid":"com.gliffy.shape.basic.basic_v1.default.circle","order":33,"lockAspectRatio":true,"lockShape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.ellipse.basic_v1","strokeWidth":2.0,"strokeColor":"#333333","fillColor":"#000000","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shadowX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":19.0,"y":52.5,"rotation":0.0,"id":162,"width":50.0,"height":10.0,"uid":"com.gliffy.shape.basic.basic_v1.default.rectangle","order":31,"lockAspectRatio":false,"lockShape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.rectangle.basic_v1","strokeWidth":0.0,"strokeColor":"#333333","fillColor":"#FFFFFF","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shadowX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":61.5,"y":43.0,"rotation":0.0,"id":164,"width":3.0,"height":3.0,"uid":"com.gliffy.shape.basi
c.basic_v1.default.circle","order":29,"lockAspectRatio":true,"lockShape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.ellipse.basic_v1","strokeWidth":2.0,"strokeColor":"#333333","fillColor":"#000000","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shadowX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":19.0,"y":39.5,"rotation":0.0,"id":165,"width":50.0,"height":10.0,"uid":"com.gliffy.shape.basic.basic_v1.default.rectangle","order":27,"lockAspectRatio":false,"lockShape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.rectangle.basic_v1","strokeWidth":0.0,"strokeColor":"#333333","fillColor":"#FFFFFF","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shadowX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":-6.0,"y":6.0,"rotation":90.0,"id":166,"width":100.0,"height":88.0,"uid":"com.gliffy.shape.basic.basic_v1.default.hexagon","order":25,"lockAspectRatio":false,"lockShape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.hexagon.basic_v1","strokeWidth":0.0,"strokeColor":"#000000","fillColor":"#3a5de9","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shadowX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"}],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":306.0310909671214,"y":256.5,"rotation":0.0,"id":387,"width":53.00000000000006,"height":60.0,"uid":"com.gliffy.shape.basic.basic_v1.default.group","order":91,"lockAspectRatio":false,"lockShape":false,"children":[{"x":0.0,"y":5.5,"rotation":0.0,"id":388,"width":53.0,"height":49.0,"uid":"com.gliffy.shape.basic.basic_v1.default.text","order":95,"lockAspectRatio":false,"lockShape":false,"graphic":{"type":"Text","Text":{"overflow":"none","paddingTop":2,"paddingRight":2,"paddingBottom":2,"paddingLeft":2,"outerPaddingTop":6,"outerPaddingRight":6,"outerPadding
Bottom":2,"outerPaddingLeft":6,"type":"fixed","lineTValue":null,"linePerpValue":null,"cardinalityType":null,"html":"N
","tid":null,"valign":"middle","vposition":"none","hposition":"none"}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":-3.599999999999966,"y":3.6000000000000227,"rotation":90.0,"id":389,"width":60.0,"height":52.8,"uid":"com.gliffy.shape.basic.basic_v1.default.hexagon","order":93,"lockAspectRatio":true,"lockShape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.hexagon.basic_v1","strokeWidth":0.0,"strokeColor":"#000000","fillColor":"#00963a","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shadowX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"}],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":587.75,"y":778.75,"rotation":0.0,"id":393,"width":46.0,"height":91.0,"uid":"com.gliffy.shape.basic.basic_v1.default.line","order":96,"lockAspectRatio":false,"lockShape":false,"constraints":{"constraints":[],"startConstraint":{"type":"StartPositionConstraint","StartPositionConstraint":{"nodeId":399,"py":0.0,"px":0.5}},"endConstraint":{"type":"EndPositionConstraint","EndPositionConstraint":{"nodeId":394,"py":1.0,"px":0.5}}},"graphic":{"type":"Line","Line":{"strokeWidth":2.0,"strokeColor":"#f50056","fillColor":"none","dashStyle":null,"startArrow":0,"endArrow":17,"startArrowRotation":"auto","endArrowRotation":"auto","interpolationType":"linear","cornerRadius":null,"controlPath":[[-57.75,-490.75],[-42.08333333333337,-490.75],[-26.41666666666663,-490.75],[-10.75,-490.75]],"lockSegments":{},"ortho":true}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":565.0,"y":280.0,"rotation":90.0,"id":394,"width":40.0,"height":16.0,"uid":"com.gliffy.shape.basic.basic_v1.default.rectangle","order":0,"lockAspectRatio":false,"lockShape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.rectangle.basic_v1","strokeWidth":0.0,"strokeColor":"#333333","fillColor":"#ff0000","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shado
wX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":419.0,"y":280.0,"rotation":90.0,"id":396,"width":40.0,"height":16.0,"uid":"com.gliffy.shape.basic.basic_v1.default.rectangle","order":3,"lockAspectRatio":false,"lockShape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.rectangle.basic_v1","strokeWidth":0.0,"strokeColor":"#333333","fillColor":"#ff0000","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shadowX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":355.0,"y":280.0,"rotation":90.0,"id":398,"width":40.0,"height":16.0,"uid":"com.gliffy.shape.basic.basic_v1.default.rectangle","order":2,"lockAspectRatio":false,"lockShape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.rectangle.basic_v1","strokeWidth":0.0,"strokeColor":"#333333","fillColor":"#ff0000","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shadowX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":502.0,"y":280.0,"rotation":90.0,"id":399,"width":40.0,"height":16.0,"uid":"com.gliffy.shape.basic.basic_v1.default.rectangle","order":1,"lockAspectRatio":false,"lockShape":false,"graphic":{"type":"Shape","Shape":{"tid":"com.gliffy.stencil.rectangle.basic_v1","strokeWidth":0.0,"strokeColor":"#333333","fillColor":"#ff0000","gradient":false,"dashStyle":null,"dropShadow":false,"state":0,"opacity":1.0,"shadowX":0.0,"shadowY":0.0}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":597.75,"y":788.75,"rotation":0.0,"id":400,"width":46.0,"height":91.0,"uid":"com.gliffy.shape.basic.basic_v1.default.line","order":97,"lockAspectRatio":false,"lockShape":false,"constraints":{"constraints":[],"startConstraint":{"type":"StartPositionConstraint","StartPositionConstraint":{"nodeId":396,"py":1.0,"px":0.5}},"endConstraint":{"type":"EndPositionConstraint","EndPositionConstraint":{"n
odeId":398,"py":0.0,"px":0.5}}},"graphic":{"type":"Line","Line":{"strokeWidth":2.0,"strokeColor":"#f50056","fillColor":"none","dashStyle":null,"startArrow":0,"endArrow":17,"startArrowRotation":"auto","endArrowRotation":"auto","interpolationType":"linear","cornerRadius":null,"controlPath":[[-166.75,-500.75],[-182.75,-500.75],[-198.75,-500.75],[-214.75,-500.75]],"lockSegments":{},"ortho":true}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"},{"x":314.94392033916137,"y":118.0,"rotation":0.0,"id":401,"width":60.0,"height":38.0,"uid":"com.gliffy.shape.basic.basic_v1.default.text","order":98,"lockAspectRatio":false,"lockShape":false,"graphic":{"type":"Text","Text":{"overflow":"none","paddingTop":2,"paddingRight":2,"paddingBottom":2,"paddingLeft":2,"outerPaddingTop":6,"outerPaddingRight":6,"outerPaddingBottom":2,"outerPaddingLeft":6,"type":"fixed","lineTValue":null,"linePerpValue":null,"cardinalityType":null,"html":"80/tcp
443/tcp
","tid":null,"valign":"middle","vposition":"none","hposition":"none"}},"linkMap":[],"children":[],"hidden":false,"layerId":"mcqBzoFMGdMI"}],"layers":[{"guid":"mcqBzoFMGdMI","order":0,"name":"Layer 0","active":true,"locked":false,"visible":true,"nodeIndex":99}],"shapeStyles":{},"lineStyles":{"global":{"stroke":"#f50056","strokeWidth":1,"orthoMode":0,"endArrow":17,"startArrow":0}},"textStyles":{"global":{"bold":false,"face":"Roboto Slab","size":"13px","color":"#676767"}}},"metadata":{"title":"untitled","revision":0,"exportBorder":false,"loadPosition":"default","autosaveDisabled":false,"lastSerialized":1536676662123,"libraries":["com.gliffy.libraries.basic.basic_v1.default","com.gliffy.libraries.flowchart.flowchart_v1.default","com.gliffy.libraries.swimlanes.swimlanes_v1.default","com.gliffy.libraries.uml.uml_v2.class","com.gliffy.libraries.uml.uml_v2.sequence","com.gliffy.libraries.uml.uml_v2.activity","com.gliffy.libraries.erd.erd_v1.default","com.gliffy.libraries.ui.ui_v3.containers_content","com.gliffy.libraries.ui.ui_v3.forms_controls","com.gliffy.libraries.android.android_v1.icons","com.gliffy.libraries.images"]},"embeddedResources":{"index":0,"resources":[]}} \ No newline at end of file diff --git a/images/baremetal/metallb.jpg b/images/baremetal/metallb.jpg new file mode 100644 index 000000000..28dad6dea Binary files /dev/null and b/images/baremetal/metallb.jpg differ diff --git a/search/search_index.json b/search/search_index.json index 93af905b9..476f6cecc 100644 --- a/search/search_index.json +++ b/search/search_index.json @@ -107,7 +107,7 @@ }, { "location": "/deploy/baremetal/", - "text": "Bare-metal considerations\n\u00b6\n\n\nIn traditional \ncloud\n environments, where network load balancers are available on-demand, a single Kubernetes manifest\nsuffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to\nany application running inside the cluster. 
\nBare-metal\n environments lack this commodity, requiring a slightly\ndifferent setup to offer the same kind of access to external consumers.\n\n\n\n\n\n\nThe rest of this document describes a few recommended approaches to deploying the NGINX Ingress controller inside a\nKubernetes cluster running on bare-metal.\n\n\nOver a NodePort Service\n\u00b6\n\n\nDue to its simplicity, this is the setup a user will deploy by default when following the steps described in the\n\ninstallation guide\n.\n\n\n\n\nInfo\n\n\nA Service of type \nNodePort\n exposes, via the \nkube-proxy\n component, the \nsame unprivileged\n port (default:\n30000-32767) on every Kubernetes node, masters included. For more information, see \nServices\n.\n\n\n\n\nIn this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to\nany port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client\nlocated outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports\n80 and 443. 
Instead, the external client must append the NodePort allocated to the \ningress-nginx\n Service to HTTP\nrequests.\n\n\n\n\n\n\nExample\n\n\nGiven the NodePort \n30100\n allocated to the \ningress-nginx\n Service\n\n\n$\n kubectl -n ingress-nginx get svc\n\nNAME TYPE CLUSTER-IP PORT(S)\n\n\ndefault-http-backend ClusterIP 10.0.64.249 80/TCP\n\n\ningress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP\n\n\n\n\n\nand a Kubernetes node with the public IP address \n203.0.113.2\n (the external IP is added as an example, in most\nbare-metal environments this value is)\n\n\n$\n kubectl describe node \n\nNAME STATUS ROLES EXTERNAL-IP\n\n\nhost-1 Ready master 203.0.113.1\n\n\nhost-2 Ready node 203.0.113.2\n\n\nhost-3 Ready node 203.0.113.3\n\n\n\n\n\na client would reach an Ingress with \nhost\n:\n \nmyapp\n.\nexample\n.\ncom\n at \nhttp://myapp.example.com:30100\n, where the\nmyapp.example.com subdomain resolves to the 203.0.113.2 IP address.\n\n\n\n\n\n\nImpact on the host system\n\n\nWhile it may sound tempting to reconfigure the NodePort range using the \n--service-node-port-range\n API server flag\nto include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues\nincluding (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant\n\nkube-proxy\n privileges it may otherwise not require.\n\n\nThis practice is therefore \ndiscouraged\n. See the other approaches proposed in this page for alternatives.\n\n\n\n\nThis approach has a few other limitations one ought to be aware of:\n\n\n\n\nSource IP address\n\n\n\n\nServices of type NodePort perform \nsource address translation\n by default. 
This means the source IP of a\nHTTP request is always \nthe IP address of the Kubernetes node that received the request\n from the perspective of\nNGINX.\n\n\nThe recommended way to preserve the source IP in a NodePort setup is to set the value of the \nexternalTrafficPolicy\n\nfield of the \ningress-nginx\n Service spec to \nLocal\n (\nexample\n).\n\n\n\n\nWarning\n\n\nThis setting effectively \ndrops packets\n sent to Kubernetes nodes which are not running any instance of the NGINX\nIngress controller. Consider \nassigning NGINX Pods to specific nodes\n in order to control on what nodes\nthe NGINX Ingress controller should be scheduled or not scheduled.\n\n\n\n\n\n\nExample\n\n\nIn a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments\nthis value is )\n\n\n$\n kubectl describe node \n\nNAME STATUS ROLES EXTERNAL-IP\n\n\nhost-1 Ready master 203.0.113.1\n\n\nhost-2 Ready node 203.0.113.2\n\n\nhost-3 Ready node 203.0.113.3\n\n\n\n\n\nwith a \nnginx-ingress-controller\n Deployment composed of 2 replicas\n\n\n$\n kubectl -n ingress-nginx get pod -o wide\n\nNAME READY STATUS IP NODE\n\n\ndefault-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2\n\n\nnginx-ingress-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3\n\n\nnginx-ingress-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2\n\n\n\n\n\nRequests sent to \nhost-2\n and \nhost-3\n would be forwarded to NGINX and original client's IP would be preserved,\nwhile requests to \nhost-1\n would get dropped because there is no NGINX replica running on that node.\n\n\n\n\n\n\nIngress status\n\n\n\n\nBecause NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller \ndoes not\nupdate the status of Ingress objects it manages\n.\n\n\n$\n kubectl get ingress\n\nNAME HOSTS ADDRESS PORTS\n\n\ntest-ingress myapp.example.com 80\n\n\n\n\n\nDespite the fact there is no load balancer providing a public IP address 
to the NGINX Ingress controller, it is possible\nto force the status update of all managed Ingress objects by setting the \nexternalIPs\n field of the \ningress-nginx\n\nService.\n\n\n\n\nExample\n\n\nGiven the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal\nenvironments this value is )\n\n\n$\n kubectl describe node \n\nNAME STATUS ROLES EXTERNAL-IP\n\n\nhost-1 Ready master 203.0.113.1\n\n\nhost-2 Ready node 203.0.113.2\n\n\nhost-3 Ready node 203.0.113.3\n\n\n\n\n\none could edit the \ningress-nginx\n Service and add the following field to the object spec\n\n\nspec\n:\n\n \nexternalIPs\n:\n\n \n-\n \n203.0.113.1\n\n \n-\n \n203.0.113.2\n\n \n-\n \n203.0.113.3\n\n\n\n\n\nwhich would in turn be reflected on Ingress objects as follows:\n\n\n$\n kubectl get ingress -o wide\n\nNAME HOSTS ADDRESS PORTS\n\n\ntest-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80\n\n\n\n\n\n\n\n\n\nRedirects\n\n\n\n\nAs NGINX is \nnot aware of the port translation operated by the NodePort Service\n, backend applications are responsible\nfor generating redirect URLs that take into account the URL used by external clients, including the NodePort.\n\n\n\n\nExample\n\n\nRedirects generated by NGINX, for instance HTTP to HTTPS or \ndomain\n to \nwww.domain\n, are generated without\nNodePort:\n\n\n$\n curl http://myapp.example.com:30100\n`\n\n\nHTTP/1.1 308 Permanent Redirect\n\n\nServer: nginx/1.15.2\n\n\nLocation: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect\n\n\n\n\n\n\n\nVia the host network\n\u00b6\n\n\nIn a setup where there is no external load balancer available but using NodePorts is not an option, one can configure\n\ningress-nginx\n Pods to use the network of the host they run on instead of a dedicated network namespace. 
The benefit of\nthis approach is that the NGINX Ingress controller can bind ports 80 and 443 directly to Kubernetes nodes' network\ninterfaces, without the extra network translation imposed by NodePort Services.\n\n\n\n\nNote\n\n\nThis approach does not leverage any Service object to expose the NGINX Ingress controller. If the \ningress-nginx\n\nService exists in the target cluster, it is \nrecommended to delete it\n.\n\n\n\n\nThis can be achieved by enabling the \nhostNetwork\n option in the Pods' spec.\n\n\ntemplate\n:\n\n \nspec\n:\n\n \nhostNetwork\n:\n \ntrue\n\n\n\n\n\n\n\nSecurity considerations\n\n\nEnabling this option \nexposes every system daemon to the NGINX Ingress controller\n on any network interface,\nincluding the host's loopback. Please evaluate the impact this may have on the security of your system carefully.\n\n\n\n\n\n\nExample\n\n\nConsider this \nnginx-ingress-controller\n Deployment composed of 2 replicas, NGINX Pods inherit from the IP address\nof their host instead of an internal Pod IP.\n\n\n$\n kubectl -n ingress-nginx get pod -o wide\n\nNAME READY STATUS IP NODE\n\n\ndefault-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2\n\n\nnginx-ingress-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3\n\n\nnginx-ingress-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2\n\n\n\n\n\n\n\nOne major limitation of this deployment approach is that only \na single NGINX Ingress controller Pod\n may be scheduled\non each cluster node, because binding the same port multiple times on the same network interface is technically\nimpossible. 
Pods that are unschedulable due to such situation fail with the following event:\n\n\n$\n kubectl -n ingress-nginx describe pod \n\n...\n\n\nEvents:\n\n\n Type Reason From Message\n\n\n ---- ------ ---- -------\n\n\n Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.\n\n\n\n\n\nOne way to ensure only schedulable Pods are created is to deploy the NGINX Ingress controller as a \nDaemonSet\n instead\nof a traditional Deployment.\n\n\n\n\nInfo\n\n\nA DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to\n\nrepel those Pods\n. For more information, see \nDaemonSet\n.\n\n\n\n\nBecause most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the\nconfiguration of the corresponding manifest at the user's discretion.\n\n\n\n\nLike with NodePorts, this approach has a few quirks it is important to be aware of.\n\n\n\n\nDNS resolution\n\n\n\n\nPods configured with \nhostNetwork\n:\n \ntrue\n do not use the internal DNS resolver (i.e. \nkube-dns\n or \nCoreDNS\n), unless\ntheir \ndnsPolicy\n spec field is set to \nClusterFirstWithHostNet\n. 
Consider using this setting if NGINX is\nexpected to resolve internal names for any reason.\n\n\n\n\nIngress status\n\n\n\n\nBecause there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default\n\n--publish-service\n flag used in standard cloud setups \ndoes not apply\n and the status of all Ingress objects remains\nblank.\n\n\n$\n kubectl get ingress\n\nNAME HOSTS ADDRESS PORTS\n\n\ntest-ingress myapp.example.com 80\n\n\n\n\n\nInstead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the\n\n--report-node-internal-ip-address\n flag, which sets the status of all Ingress objects to the internal IP\naddress of all nodes running the NGINX Ingress controller.\n\n\n\n\nExample\n\n\nGiven a \nnginx-ingress-controller\n DaemonSet composed of 2 replicas\n\n\n$\n kubectl -n ingress-nginx get pod -o wide\n\nNAME READY STATUS IP NODE\n\n\ndefault-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2\n\n\nnginx-ingress-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3\n\n\nnginx-ingress-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2\n\n\n\n\n\nthe controller sets the status of all Ingress objects it manages to the following value:\n\n\n$\n kubectl get ingress -o wide\n\nNAME HOSTS ADDRESS PORTS\n\n\ntest-ingress myapp.example.com 203.0.113.2,203.0.113.3 80\n\n\n\n\n\n\n\n\n\nNote\n\n\nAlternatively, it is possible to override the address written to Ingress objects using the\n\n--publish-status-address\n flag. See \nCommand line arguments\n.\n\n\n\n\nUsing a self-provisioned edge\n\u00b6\n\n\nSimilarly to cloud environments, this deployment approach requires an edge network component providing a public\nentrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software\n(e.g. 
\nHAproxy\n) and is usually managed outside of the Kubernetes landscape by operations teams.\n\n\nSuch deployment builds upon the NodePort Service described above in \nOver a NodePort Service\n,\nwith one significant difference: external clients do not access cluster nodes directly, only the edge component does.\nThis is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address.\n\n\nOn the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes\nnodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort\non the target nodes as shown in the diagram below:", + "text": "Bare-metal considerations\n\u00b6\n\n\nIn traditional \ncloud\n environments, where network load balancers are available on-demand, a single Kubernetes manifest\nsuffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to\nany application running inside the cluster. \nBare-metal\n environments lack this commodity, requiring a slightly\ndifferent setup to offer the same kind of access to external consumers.\n\n\n\n\n\n\nThe rest of this document describes a few recommended approaches to deploying the NGINX Ingress controller inside a\nKubernetes cluster running on bare-metal.\n\n\nA pure software solution: MetalLB\n\u00b6\n\n\nMetalLB\n provides a network load-balancer implementation for Kubernetes clusters that do not run on a\nsupported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.\n\n\nThis section demonstrates how to use the \nLayer 2 configuration mode\n of MetalLB together with the NGINX\nIngress controller in a Kubernetes cluster that has \npublicly accessible nodes\n. In this mode, one node attracts all\nthe traffic for the \ningress-nginx\n Service IP. 
See \nTraffic policies\n for more details.\n\n\n\n\n\n\nNote\n\n\nThe description of other supported configuration modes is out of scope for this document.\n\n\n\n\n\n\nWarning\n\n\nMetalLB is currently in \nbeta\n. Read about the \nProject maturity\n and make sure you inform\nyourself by reading the official documentation thoroughly.\n\n\n\n\nMetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB\nwas deployed following the \nInstallation\n instructions.\n\n\nMetalLB requires a pool of IP addresses in order to be able to take ownership of the \ningress-nginx\n Service. This pool\ncan be defined in a ConfigMap named \nconfig\n located in the same namespace as the MetalLB controller. In the simplest\npossible scenario, the pool is composed of the IP addresses of Kubernetes nodes, but IP addresses can also be handed out\nby a DHCP server.\n\n\n\n\nExample\n\n\nGiven the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal\nenvironments this value is <None>)\n\n\n$\n kubectl describe node\n\nNAME     STATUS   ROLES    EXTERNAL-IP\n\n\nhost-1   Ready    master   203.0.113.1\n\n\nhost-2   Ready    node     203.0.113.2\n\n\nhost-3   Ready    node     203.0.113.3\n\n\n\n\n\nAfter creating the following ConfigMap, MetalLB takes ownership of one of the IP addresses in the pool and updates\nthe \nloadBalancer\n IP field of the \ningress-nginx\n Service accordingly.\n\n\napiVersion\n:\n \nv1\n\n\nkind\n:\n \nConfigMap\n\n\nmetadata\n:\n\n \nnamespace\n:\n \nmetallb-system\n\n \nname\n:\n \nconfig\n\n\ndata\n:\n\n \nconfig\n:\n \n|\n\n \naddress-pools:\n\n \n- name: default\n\n \nprotocol: layer2\n\n \naddresses:\n\n \n- 203.0.113.2-203.0.113.3\n\n\n\n\n\n$\n kubectl -n ingress-nginx get svc\n\nNAME                   TYPE           CLUSTER-IP     EXTERNAL-IP  PORT(S)\n\n\ndefault-http-backend   ClusterIP      10.0.64.249    80/TCP\n\n\ningress-nginx          LoadBalancer   10.0.220.217   203.0.113.3  80:30100/TCP,443:30101/TCP\n\n\n\n\n\n\n\nAs soon as MetalLB sets the 
external IP address of the \ningress-nginx\n LoadBalancer Service, the corresponding entries\nare created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on\nthe ports configured in the LoadBalancer Service:\n\n\n$\n curl -D- http://203.0.113.3 -H \n'Host: myapp.example.com'\n\n\nHTTP/1.1 200 OK\n\n\nServer: nginx/1.15.2\n\n\n\n\n\n\n\nTip\n\n\nIn order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the \nLocal\n\ntraffic policy. Traffic policies are described in more details in \nTraffic policies\n as\nwell as in the next section.\n\n\n\n\nOver a NodePort Service\n\u00b6\n\n\nDue to its simplicity, this is the setup a user will deploy by default when following the steps described in the\n\ninstallation guide\n.\n\n\n\n\nInfo\n\n\nA Service of type \nNodePort\n exposes, via the \nkube-proxy\n component, the \nsame unprivileged\n port (default:\n30000-32767) on every Kubernetes node, masters included. For more information, see \nServices\n.\n\n\n\n\nIn this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to\nany port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client\nlocated outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports\n80 and 443. 
Instead, the external client must append the NodePort allocated to the \ningress-nginx\n Service to HTTP\nrequests.\n\n\n\n\n\n\nExample\n\n\nGiven the NodePort \n30100\n allocated to the \ningress-nginx\n Service\n\n\n$\n kubectl -n ingress-nginx get svc\n\nNAME TYPE CLUSTER-IP PORT(S)\n\n\ndefault-http-backend ClusterIP 10.0.64.249 80/TCP\n\n\ningress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP\n\n\n\n\n\nand a Kubernetes node with the public IP address \n203.0.113.2\n (the external IP is added as an example, in most\nbare-metal environments this value is )\n\n\n$\n kubectl describe node\n\nNAME STATUS ROLES EXTERNAL-IP\n\n\nhost-1 Ready master 203.0.113.1\n\n\nhost-2 Ready node 203.0.113.2\n\n\nhost-3 Ready node 203.0.113.3\n\n\n\n\n\na client would reach an Ingress with \nhost\n:\n \nmyapp\n.\nexample\n.\ncom\n at \nhttp://myapp.example.com:30100\n, where the\nmyapp.example.com subdomain resolves to the 203.0.113.2 IP address.\n\n\n\n\n\n\nImpact on the host system\n\n\nWhile it may sound tempting to reconfigure the NodePort range using the \n--service-node-port-range\n API server flag\nto include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues\nincluding (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant\n\nkube-proxy\n privileges it may otherwise not require.\n\n\nThis practice is therefore \ndiscouraged\n. See the other approaches proposed in this page for alternatives.\n\n\n\n\nThis approach has a few other limitations one ought to be aware of:\n\n\n\n\nSource IP address\n\n\n\n\nServices of type NodePort perform \nsource address translation\n by default. 
This means the source IP of a\nHTTP request is always \nthe IP address of the Kubernetes node that received the request\n from the perspective of\nNGINX.\n\n\nThe recommended way to preserve the source IP in a NodePort setup is to set the value of the \nexternalTrafficPolicy\n\nfield of the \ningress-nginx\n Service spec to \nLocal\n (\nexample\n).\n\n\n\n\nWarning\n\n\nThis setting effectively \ndrops packets\n sent to Kubernetes nodes which are not running any instance of the NGINX\nIngress controller. Consider \nassigning NGINX Pods to specific nodes\n in order to control on what nodes\nthe NGINX Ingress controller should be scheduled or not scheduled.\n\n\n\n\n\n\nExample\n\n\nIn a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments\nthis value is )\n\n\n$\n kubectl describe node\n\nNAME STATUS ROLES EXTERNAL-IP\n\n\nhost-1 Ready master 203.0.113.1\n\n\nhost-2 Ready node 203.0.113.2\n\n\nhost-3 Ready node 203.0.113.3\n\n\n\n\n\nwith a \nnginx-ingress-controller\n Deployment composed of 2 replicas\n\n\n$\n kubectl -n ingress-nginx get pod -o wide\n\nNAME READY STATUS IP NODE\n\n\ndefault-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2\n\n\nnginx-ingress-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3\n\n\nnginx-ingress-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2\n\n\n\n\n\nRequests sent to \nhost-2\n and \nhost-3\n would be forwarded to NGINX and original client's IP would be preserved,\nwhile requests to \nhost-1\n would get dropped because there is no NGINX replica running on that node.\n\n\n\n\n\n\nIngress status\n\n\n\n\nBecause NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller \ndoes not\nupdate the status of Ingress objects it manages\n.\n\n\n$\n kubectl get ingress\n\nNAME HOSTS ADDRESS PORTS\n\n\ntest-ingress myapp.example.com 80\n\n\n\n\n\nDespite the fact there is no load balancer providing a public IP address 
to the NGINX Ingress controller, it is possible\nto force the status update of all managed Ingress objects by setting the \nexternalIPs\n field of the \ningress-nginx\n\nService.\n\n\n\n\nWarning\n\n\nThere is more to setting \nexternalIPs\n than just enabling the NGINX Ingress controller to update the status of\nIngress objects. Please read about this option in the \nServices\n page of official Kubernetes\ndocumentation as well as the section about \nExternal IPs\n in this document for more information.\n\n\n\n\n\n\nExample\n\n\nGiven the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal\nenvironments this value is )\n\n\n$\n kubectl describe node\n\nNAME STATUS ROLES EXTERNAL-IP\n\n\nhost-1 Ready master 203.0.113.1\n\n\nhost-2 Ready node 203.0.113.2\n\n\nhost-3 Ready node 203.0.113.3\n\n\n\n\n\none could edit the \ningress-nginx\n Service and add the following field to the object spec\n\n\nspec\n:\n\n \nexternalIPs\n:\n\n \n-\n \n203.0.113.1\n\n \n-\n \n203.0.113.2\n\n \n-\n \n203.0.113.3\n\n\n\n\n\nwhich would in turn be reflected on Ingress objects as follows:\n\n\n$\n kubectl get ingress -o wide\n\nNAME HOSTS ADDRESS PORTS\n\n\ntest-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80\n\n\n\n\n\n\n\n\n\nRedirects\n\n\n\n\nAs NGINX is \nnot aware of the port translation operated by the NodePort Service\n, backend applications are responsible\nfor generating redirect URLs that take into account the URL used by external clients, including the NodePort.\n\n\n\n\nExample\n\n\nRedirects generated by NGINX, for instance HTTP to HTTPS or \ndomain\n to \nwww.domain\n, are generated without\nNodePort:\n\n\n$\n curl -D- http://myapp.example.com:30100\n`\n\n\nHTTP/1.1 308 Permanent Redirect\n\n\nServer: nginx/1.15.2\n\n\nLocation: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect\n\n\n\n\n\n\n\nVia the host network\n\u00b6\n\n\nIn a setup where there is no external load balancer available but using 
NodePorts is not an option, one can configure\n\ningress-nginx\n Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of\nthis approach is that the NGINX Ingress controller can bind ports 80 and 443 directly to Kubernetes nodes' network\ninterfaces, without the extra network translation imposed by NodePort Services.\n\n\n\n\nNote\n\n\nThis approach does not leverage any Service object to expose the NGINX Ingress controller. If the \ningress-nginx\n\nService exists in the target cluster, it is \nrecommended to delete it\n.\n\n\n\n\nThis can be achieved by enabling the \nhostNetwork\n option in the Pods' spec.\n\n\ntemplate\n:\n\n  \nspec\n:\n\n    \nhostNetwork\n:\n \ntrue\n\n\n\n\n\n\n\nSecurity considerations\n\n\nEnabling this option \nexposes every system daemon to the NGINX Ingress controller\n on any network interface,\nincluding the host's loopback. Please evaluate the impact this may have on the security of your system carefully.\n\n\n\n\n\n\nExample\n\n\nIn this \nnginx-ingress-controller\n Deployment composed of 2 replicas, NGINX Pods inherit the IP address\nof their host instead of an internal Pod IP.\n\n\n$\n kubectl -n ingress-nginx get pod -o wide\n\nNAME                                       READY  STATUS   IP            NODE\n\n\ndefault-http-backend-7c5bc89cc9-p86md      1/1    Running  172.17.1.1    host-2\n\n\nnginx-ingress-controller-5b4cf5fc6-7lg6c   1/1    Running  203.0.113.3   host-3\n\n\nnginx-ingress-controller-5b4cf5fc6-lzrls   1/1    Running  203.0.113.2   host-2\n\n\n\n\n\n\n\nOne major limitation of this deployment approach is that only \na single NGINX Ingress controller Pod\n may be scheduled\non each cluster node, because binding the same port multiple times on the same network interface is technically\nimpossible. 
Pods that are unschedulable due to such a situation fail with the following event:\n\n\n$\n kubectl -n ingress-nginx describe pod \n\n...\n\n\nEvents:\n\n\n  Type     Reason            From               Message\n\n\n  ----     ------            ----               -------\n\n\n  Warning  FailedScheduling  default-scheduler  0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.\n\n\n\n\n\nOne way to ensure only schedulable Pods are created is to deploy the NGINX Ingress controller as a \nDaemonSet\n instead\nof a traditional Deployment.\n\n\n\n\nInfo\n\n\nA DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to\n\nrepel those Pods\n. For more information, see \nDaemonSet\n.\n\n\n\n\nBecause most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the\nconfiguration of the corresponding manifest at the user's discretion.\n\n\n\n\nLike with NodePorts, this approach has a few quirks it is important to be aware of.\n\n\n\n\nDNS resolution\n\n\n\n\nPods configured with \nhostNetwork\n:\n \ntrue\n do not use the internal DNS resolver (i.e. \nkube-dns\n or \nCoreDNS\n), unless\ntheir \ndnsPolicy\n spec field is set to \nClusterFirstWithHostNet\n. 
Consider using this setting if NGINX is\nexpected to resolve internal names for any reason.\n\n\n\n\nIngress status\n\n\n\n\nBecause there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default\n\n--publish-service\n flag used in standard cloud setups \ndoes not apply\n and the status of all Ingress objects remains\nblank.\n\n\n$\n kubectl get ingress\n\nNAME HOSTS ADDRESS PORTS\n\n\ntest-ingress myapp.example.com 80\n\n\n\n\n\nInstead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the\n\n--report-node-internal-ip-address\n flag, which sets the status of all Ingress objects to the internal IP\naddress of all nodes running the NGINX Ingress controller.\n\n\n\n\nExample\n\n\nGiven a \nnginx-ingress-controller\n DaemonSet composed of 2 replicas\n\n\n$\n kubectl -n ingress-nginx get pod -o wide\n\nNAME READY STATUS IP NODE\n\n\ndefault-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2\n\n\nnginx-ingress-controller-5b4cf5fc6-7lg6c 1/1 Running 203.0.113.3 host-3\n\n\nnginx-ingress-controller-5b4cf5fc6-lzrls 1/1 Running 203.0.113.2 host-2\n\n\n\n\n\nthe controller sets the status of all Ingress objects it manages to the following value:\n\n\n$\n kubectl get ingress -o wide\n\nNAME HOSTS ADDRESS PORTS\n\n\ntest-ingress myapp.example.com 203.0.113.2,203.0.113.3 80\n\n\n\n\n\n\n\n\n\nNote\n\n\nAlternatively, it is possible to override the address written to Ingress objects using the\n\n--publish-status-address\n flag. See \nCommand line arguments\n.\n\n\n\n\nUsing a self-provisioned edge\n\u00b6\n\n\nSimilarly to cloud environments, this deployment approach requires an edge network component providing a public\nentrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software\n(e.g. 
\nHAproxy\n) and is usually managed outside of the Kubernetes landscape by operations teams.\n\n\nSuch a deployment builds upon the NodePort Service described above in \nOver a NodePort Service\n,\nwith one significant difference: external clients do not access cluster nodes directly, only the edge component does.\nThis is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address.\n\n\nOn the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes\nnodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort\non the target nodes as shown in the diagram below:\n\n\n\n\nExternal IPs\n\u00b6\n\n\n\n\nSource IP address\n\n\nThis method does not allow preserving the source IP of HTTP requests in any manner; it is therefore \nnot\nrecommended\n to use it despite its apparent simplicity.\n\n\nThe \nexternalIPs\n Service option was previously mentioned in the \nNodePort\n section.\n\n\nAs per the \nServices\n page of the official Kubernetes documentation, the \nexternalIPs\n option causes\n\nkube-proxy\n to route traffic sent to arbitrary IP addresses \nand on the Service ports\n to the endpoints of that\nService. 
These IP addresses \nmust belong to the target node\n.\n\n\n\n\nExample\n\n\nGiven the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal\nenvironments this value is )\n\n\n$\n kubectl describe node\n\nNAME STATUS ROLES EXTERNAL-IP\n\n\nhost-1 Ready master 203.0.113.1\n\n\nhost-2 Ready node 203.0.113.2\n\n\nhost-3 Ready node 203.0.113.3\n\n\n\n\n\nand the following \ningress-nginx\n NodePort Service\n\n\n$\n kubectl -n ingress-nginx get svc\n\nNAME TYPE CLUSTER-IP PORT(S)\n\n\ningress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP\n\n\n\n\n\nOne could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort\nand the Service port:\n\n\nspec\n:\n\n \nexternalIPs\n:\n\n \n-\n \n203.0.113.2\n\n \n-\n \n203.0.113.3\n\n\n\n\n\n$\n curl -D- http://myapp.example.com:30100\n\nHTTP/1.1 200 OK\n\n\nServer: nginx/1.15.2\n\n\n\n$\n curl -D- http://myapp.example.com\n\nHTTP/1.1 200 OK\n\n\nServer: nginx/1.15.2\n\n\n\n\n\nWe assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.", "title": "Bare-metal considerations" }, { @@ -115,9 +115,14 @@ "text": "In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest\nsuffices to provide a single point of contact to the NGINX Ingress controller to external clients and, indirectly, to\nany application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly\ndifferent setup to offer the same kind of access to external consumers. 
The rest of this document describes a few recommended approaches to deploying the NGINX Ingress controller inside a\nKubernetes cluster running on bare-metal.", "title": "Bare-metal considerations" }, + { + "location": "/deploy/baremetal/#a-pure-software-solution-metallb", + "text": "MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a\nsupported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster. This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX\nIngress controller in a Kubernetes cluster that has publicly accessible nodes . In this mode, one node attracts all\nthe traffic for the ingress-nginx Service IP. See Traffic policies for more details. Note The description of other supported configuration modes is off-scope for this document. Warning MetalLB is currently in beta . Read about the Project maturity and make sure you inform\nyourself by reading the official documentation thoroughly. MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB\nwas deployed following the Installation instructions. MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool\ncan be defined in a ConfigMap named config located in the same namespace as the MetalLB controller. In the simplest\npossible scenario, the pool is composed of the IP addresses of Kubernetes nodes, but IP addresses can also be handed out\nby a DHCP server. 
Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal\nenvironments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 After creating the following ConfigMap, MetalLB takes ownership of one of the IP addresses in the pool and updates\nthe loadBalancer IP field of the ingress-nginx Service accordingly. apiVersion : v1 kind : ConfigMap metadata : \n namespace : metallb-system \n name : config data : \n config : | \n address-pools: \n - name: default \n protocol: layer2 \n addresses: \n - 203.0.113.2-203.0.113.3 $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx LoadBalancer 10.0.220.217 203.0.113.3 80:30100/TCP,443:30101/TCP As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries\nare created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on\nthe ports configured in the LoadBalancer Service: $ curl -D- http://203.0.113.3 -H 'Host: myapp.example.com' HTTP/1.1 200 OK Server: nginx/1.15.2 Tip In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local \ntraffic policy. Traffic policies are described in more details in Traffic policies as\nwell as in the next section.", + "title": "A pure software solution: MetalLB" + }, { "location": "/deploy/baremetal/#over-a-nodeport-service", - "text": "Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide . Info A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default:\n30000-32767) on every Kubernetes node, masters included. For more information, see Services . 
In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to\nany port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client\nlocated outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports\n80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP\nrequests. Example Given the NodePort 30100 allocated to the ingress-nginx Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most\nbare-metal environments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 a client would reach an Ingress with host : myapp . example . com at http://myapp.example.com:30100 , where the\nmyapp.example.com subdomain resolves to the 203.0.113.2 IP address. Impact on the host system While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag\nto include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues\nincluding (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require. This practice is therefore discouraged . See the other approaches proposed in this page for alternatives. This approach has a few other limitations one ought to be aware of: Source IP address Services of type NodePort perform source address translation by default. 
This means the source IP of a\nHTTP request is always the IP address of the Kubernetes node that received the request from the perspective of\nNGINX. The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy \nfield of the ingress-nginx Service spec to Local ( example ). Warning This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX\nIngress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes\nthe NGINX Ingress controller should be scheduled or not scheduled. Example In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments\nthis value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 with a nginx-ingress-controller Deployment composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3 nginx-ingress-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2 Requests sent to host-2 and host-3 would be forwarded to NGINX and original client's IP would be preserved,\nwhile requests to host-1 would get dropped because there is no NGINX replica running on that node. Ingress status Because NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller does not\nupdate the status of Ingress objects it manages . $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Despite the fact there is no load balancer providing a public IP address to the NGINX Ingress controller, it is possible\nto force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx \nService. 
Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal\nenvironments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 one could edit the ingress-nginx Service and add the following field to the object spec spec : \n externalIPs : \n - 203.0.113.1 \n - 203.0.113.2 \n - 203.0.113.3 which would in turn be reflected on Ingress objects as follows: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80 Redirects As NGINX is not aware of the port translation operated by the NodePort Service , backend applications are responsible\nfor generating redirect URLs that take into account the URL used by external clients, including the NodePort. Example Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain , are generated without\nNodePort: $ curl http://myapp.example.com:30100 ` HTTP/1.1 308 Permanent Redirect Server: nginx/1.15.2 Location: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect", + "text": "Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide . Info A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default:\n30000-32767) on every Kubernetes node, masters included. For more information, see Services . In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to\nany port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client\nlocated outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports\n80 and 443. 
Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP\nrequests. Example Given the NodePort 30100 allocated to the ingress-nginx Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) default-http-backend ClusterIP 10.0.64.249 80/TCP ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most\nbare-metal environments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 a client would reach an Ingress with host : myapp . example . com at http://myapp.example.com:30100 , where the\nmyapp.example.com subdomain resolves to the 203.0.113.2 IP address. Impact on the host system While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag\nto include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues\nincluding (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require. This practice is therefore discouraged . See the other approaches proposed in this page for alternatives. This approach has a few other limitations one ought to be aware of: Source IP address Services of type NodePort perform source address translation by default. This means the source IP of a\nHTTP request is always the IP address of the Kubernetes node that received the request from the perspective of\nNGINX. The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy \nfield of the ingress-nginx Service spec to Local ( example ). Warning This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX\nIngress controller. 
Consider assigning NGINX Pods to specific nodes in order to control on what nodes\nthe NGINX Ingress controller should be scheduled or not scheduled. Example In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments\nthis value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 with a nginx-ingress-controller Deployment composed of 2 replicas $ kubectl -n ingress-nginx get pod -o wide NAME READY STATUS IP NODE default-http-backend-7c5bc89cc9-p86md 1/1 Running 172.17.1.1 host-2 nginx-ingress-controller-cf9ff8c96-8vvf8 1/1 Running 172.17.0.3 host-3 nginx-ingress-controller-cf9ff8c96-pxsds 1/1 Running 172.17.1.4 host-2 Requests sent to host-2 and host-3 would be forwarded to NGINX and original client's IP would be preserved,\nwhile requests to host-1 would get dropped because there is no NGINX replica running on that node. Ingress status Because NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller does not\nupdate the status of Ingress objects it manages . $ kubectl get ingress NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 80 Despite the fact there is no load balancer providing a public IP address to the NGINX Ingress controller, it is possible\nto force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx \nService. Warning There is more to setting externalIPs than just enabling the NGINX Ingress controller to update the status of\nIngress objects. Please read about this option in the Services page of official Kubernetes\ndocumentation as well as the section about External IPs in this document for more information. 
Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal\nenvironments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 one could edit the ingress-nginx Service and add the following field to the object spec spec : \n externalIPs : \n - 203.0.113.1 \n - 203.0.113.2 \n - 203.0.113.3 which would in turn be reflected on Ingress objects as follows: $ kubectl get ingress -o wide NAME HOSTS ADDRESS PORTS test-ingress myapp.example.com 203.0.113.1,203.0.113.2,203.0.113.3 80 Redirects As NGINX is not aware of the port translation operated by the NodePort Service , backend applications are responsible\nfor generating redirect URLs that take into account the URL used by external clients, including the NodePort. Example Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain , are generated without\nNodePort: $ curl -D- http://myapp.example.com:30100 ` HTTP/1.1 308 Permanent Redirect Server: nginx/1.15.2 Location: https://myapp.example.com/ #-> missing NodePort in HTTPS redirect", "title": "Over a NodePort Service" }, { @@ -130,6 +135,11 @@ "text": "Similarly to cloud environments, this deployment approach requires an edge network component providing a public\nentrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software\n(e.g. HAproxy ) and is usually managed outside of the Kubernetes landscape by operations teams. Such deployment builds upon the NodePort Service described above in Over a NodePort Service ,\nwith one significant difference: external clients do not access cluster nodes directly, only the edge component does.\nThis is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address. 
On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes\nnodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort\non the target nodes as shown in the diagram below:", "title": "Using a self-provisioned edge" }, + { + "location": "/deploy/baremetal/#external-ips", + "text": "Source IP address This method does not allow preserving the source IP of HTTP requests in any manner, it is therefore not\nrecommended to use it despite its apparent simplicity. The externalIPs Service option was previously mentioned in the NodePort section. As per the Services page of the official Kubernetes documentation, the externalIPs option causes kube-proxy to route traffic sent to arbitrary IP addresses and on the Service ports to the endpoints of that\nService. These IP addresses must belong to the target node . Example Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal\nenvironments this value is ) $ kubectl describe node NAME STATUS ROLES EXTERNAL-IP host-1 Ready master 203.0.113.1 host-2 Ready node 203.0.113.2 host-3 Ready node 203.0.113.3 and the following ingress-nginx NodePort Service $ kubectl -n ingress-nginx get svc NAME TYPE CLUSTER-IP PORT(S) ingress-nginx NodePort 10.0.220.217 80:30100/TCP,443:30101/TCP One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort\nand the Service port: spec : \n externalIPs : \n - 203.0.113.2 \n - 203.0.113.3 $ curl -D- http://myapp.example.com:30100 HTTP/1.1 200 OK Server: nginx/1.15.2 $ curl -D- http://myapp.example.com HTTP/1.1 200 OK Server: nginx/1.15.2 We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.", + "title": "External IPs" + }, { "location": "/deploy/rbac/", "text": "Role Based Access Control (RBAC)\n\u00b6\n\n\nOverview\n\u00b6\n\n\nThis 
example applies to nginx-ingress-controllers being deployed in an environment with RBAC enabled.\n\n\nRole Based Access Control is comprised of four layers:\n\n\n\n\nClusterRole\n - permissions assigned to a role that apply to an entire cluster\n\n\nClusterRoleBinding\n - binding a ClusterRole to a specific account\n\n\nRole\n - permissions assigned to a role that apply to a specific namespace\n\n\nRoleBinding\n - binding a Role to a specific account\n\n\n\n\nIn order for RBAC to be applied to an nginx-ingress-controller, that controller\nshould be assigned to a \nServiceAccount\n. That \nServiceAccount\n should be\nbound to the \nRole\ns and \nClusterRole\ns defined for the nginx-ingress-controller.\n\n\nService Accounts created in this example\n\u00b6\n\n\nOne ServiceAccount is created in this example, \nnginx-ingress-serviceaccount\n.\n\n\nPermissions Granted in this example\n\u00b6\n\n\nThere are two sets of permissions defined in this example. Cluster-wide\npermissions defined by the \nClusterRole\n named \nnginx-ingress-clusterrole\n, and\nnamespace specific permissions defined by the \nRole\n named \nnginx-ingress-role\n.\n\n\nCluster Permissions\n\u00b6\n\n\nThese permissions are granted in order for the nginx-ingress-controller to be\nable to function as an ingress across the cluster. These permissions are\ngranted to the ClusterRole named \nnginx-ingress-clusterrole\n\n\n\n\nconfigmaps\n, \nendpoints\n, \nnodes\n, \npods\n, \nsecrets\n: list, watch\n\n\nnodes\n: get\n\n\nservices\n, \ningresses\n: get, list, watch\n\n\nevents\n: create, patch\n\n\ningresses/status\n: update\n\n\n\n\nNamespace Permissions\n\u00b6\n\n\nThese permissions are granted specific to the nginx-ingress namespace. 
These\npermissions are granted to the Role named \nnginx-ingress-role\n\n\n\n\nconfigmaps\n, \npods\n, \nsecrets\n: get\n\n\nendpoints\n: get\n\n\n\n\nFurthermore to support leader-election, the nginx-ingress-controller needs to\nhave access to a \nconfigmap\n using the resourceName \ningress-controller-leader-nginx\n\n\n\n\nNote that resourceNames can NOT be used to limit requests using the \u201ccreate\u201d\nverb because authorizers only have access to information that can be obtained\nfrom the request URL, method, and headers (resource names in a \u201ccreate\u201d request\nare part of the request body).\n\n\n\n\n\n\nconfigmaps\n: get, update (for resourceName \ningress-controller-leader-nginx\n)\n\n\nconfigmaps\n: create\n\n\n\n\nThis resourceName is the concatenation of the \nelection-id\n and the\n\ningress-class\n as defined by the ingress-controller, which defaults to:\n\n\n\n\nelection-id\n: \ningress-controller-leader\n\n\ningress-class\n: \nnginx\n\n\nresourceName\n : \n - \n\n\n\n\nPlease adapt accordingly if you overwrite either parameter when launching the\nnginx-ingress-controller.\n\n\nBindings\n\u00b6\n\n\nThe ServiceAccount \nnginx-ingress-serviceaccount\n is bound to the Role\n\nnginx-ingress-role\n and the ClusterRole \nnginx-ingress-clusterrole\n.\n\n\nThe serviceAccountName associated with the containers in the deployment must\nmatch the serviceAccount. The namespace references in the Deployment metadata, \ncontainer arguments, and POD_NAMESPACE should be in the nginx-ingress namespace.", @@ -202,7 +212,7 @@ }, { "location": "/user-guide/nginx-configuration/annotations/", - "text": "Annotations\n\u00b6\n\n\nYou can add these Kubernetes annotations to specific Ingress objects to customize their behavior.\n\n\n\n\nTip\n\n\nAnnotation keys and values can only be strings.\nOther types, such as boolean or numeric values must be quoted,\ni.e. 
\n\"true\"\n, \n\"false\"\n, \n\"100\"\n.\n\n\n\n\n\n\nNote\n\n\nThe annotation prefix can be changed using the\n\n--annotations-prefix\n command line argument\n,\nbut the default is \nnginx.ingress.kubernetes.io\n, as described in the\ntable below.\n\n\n\n\n\n\n\n\n\n\nName\n\n\ntype\n\n\n\n\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/add-base-url\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/app-root\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/affinity\n\n\ncookie\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-realm\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-secret\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-type\n\n\nbasic or digest\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-secret\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-depth\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-client\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-error-page\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-url\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/backend-protocol\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/base-url-scheme\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/configuration-snippet\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/default-backend\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/enable-cors\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-origin\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-methods\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-headers\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-credentials\n\n\n\"true\" or 
\"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-max-age\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/force-ssl-redirect\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/from-to-www-redirect\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/grpc-backend\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/limit-connections\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/limit-rps\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/permanent-redirect\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/permanent-redirect-code\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-body-size\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-cookie-domain\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-connect-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-send-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-read-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-next-upstream\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-next-upstream-tries\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-request-buffering\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-redirect-from\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-redirect-to\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/rewrite-log\n\n\nURI\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/rewrite-target\n\n\nURI\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/secure-backends\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/secure-verify-ca-secret\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/server-alias\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/server-snippet\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/service-upstream\n\n\n\"true\" or 
\"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/session-cookie-name\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/session-cookie-hash\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/ssl-redirect\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/ssl-passthrough\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-max-fails\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-fail-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-hash-by\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/load-balance\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-vhost\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/whitelist-source-range\n\n\nCIDR\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-buffering\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-buffer-size\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/ssl-ciphers\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/connection-proxy-header\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/enable-access-log\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-debug\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-extra-rules\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/enable-influxdb\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/influxdb-measurement\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/influxdb-port\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/influxdb-host\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/influxdb-server-name\n\n\nstring\n\n\n\n\n\n\n\n\nRewrite\n\u00b6\n\n\nIn some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. 
Without a rewrite any request will return 404.\nSet the annotation \nnginx.ingress.kubernetes.io/rewrite-target\n to the path expected by the service.\n\n\nIf the application contains relative links it is possible to add an additional annotation \nnginx.ingress.kubernetes.io/add-base-url\n that will prepend a \nbase\n tag\n in the header of the returned HTML from the backend.\n\n\nIf the scheme of the \nbase\n tag\n needs to be specific, set the annotation \nnginx.ingress.kubernetes.io/base-url-scheme\n to the scheme, such as \nhttp\n or \nhttps\n.\n\n\nIf the Application Root is exposed in a different path and needs to be redirected, set the annotation \nnginx.ingress.kubernetes.io/app-root\n to redirect requests for \n/\n.\n\n\n\n\nExample\n\n\nPlease check the \nrewrite\n example.\n\n\n\n\nSession Affinity\n\u00b6\n\n\nThe annotation \nnginx.ingress.kubernetes.io/affinity\n enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server.\nThe only affinity type available for NGINX is \ncookie\n.\n\n\n\n\nExample\n\n\nPlease check the \naffinity\n example.\n\n\n\n\nCookie affinity\n\u00b6\n\n\nIf you use the \ncookie\n affinity type, you can also specify the name of the cookie that will be used to route the requests with the annotation \nnginx.ingress.kubernetes.io/session-cookie-name\n. The default is to create a cookie named 'INGRESSCOOKIE'.\n\n\nIn the case of NGINX, the annotation \nnginx.ingress.kubernetes.io/session-cookie-hash\n defines which algorithm will be used to hash the used upstream. 
The default value is \nmd5\n and possible values are \nmd5\n, \nsha1\n and \nindex\n.\n\n\n\n\nAttention\n\n\nThe \nindex\n option is not an actual hash; an in-memory index is used instead, which has less overhead.\nHowever, with \nindex\n, matching against a changing upstream server list is inconsistent.\nSo, at reload, if upstream servers have changed, index values are not guaranteed to correspond to the same server as before!\n\nUse \nindex\n with caution\n and only if you need to!\n\n\n\n\nIn NGINX this feature is implemented by the third party module \nnginx-sticky-module-ng\n. The workflow used to define which upstream server will be used is explained \nhere\n.\n\n\nAuthentication\n\u00b6\n\n\nIt is possible to add authentication by adding additional annotations to the Ingress rule. The source of the authentication is a secret that contains usernames and passwords inside the key \nauth\n.\n\n\nThe annotations are:\n\nnginx.ingress.kubernetes.io/auth-type: [basic|digest]\n\n\n\nIndicates the \nHTTP Authentication Type: Basic or Digest Access Authentication\n.\n\n\nnginx.ingress.kubernetes.io/auth-secret: secretName\n\n\n\n\nThe name of the Secret that contains the usernames and passwords which are granted access to the \npath\ns defined in the Ingress rules.\nThis annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.\n\n\nnginx.ingress.kubernetes.io/auth-realm: \"realm string\"\n\n\n\n\n\n\nExample\n\n\nPlease check the \nauth\n example.\n\n\n\n\nCustom NGINX upstream checks\n\u00b6\n\n\nNGINX exposes some flags in the \nupstream configuration\n that enable the configuration of each server in the upstream. The Ingress controller allows custom \nmax_fails\n and \nfail_timeout\n parameters in a global context using \nupstream-max-fails\n and \nupstream-fail-timeout\n in the NGINX ConfigMap or in a particular Ingress rule. 
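As an illustrative sketch (the Ingress name, host and service names are hypothetical, and the values are examples only), an Ingress rule overriding both upstream check parameters could look like:

```yaml
# Hypothetical Ingress using the upstream-max-fails / upstream-fail-timeout
# annotations described above; adjust names and values to your environment.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myapp                     # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/upstream-max-fails: "3"
    nginx.ingress.kubernetes.io/upstream-fail-timeout: "10"
spec:
  rules:
    - host: myapp.example.com    # hypothetical host
      http:
        paths:
          - path: /
            backend:
              serviceName: myapp # hypothetical service
              servicePort: 80
```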
\nupstream-max-fails\n defaults to 0. This means NGINX will respect the container's \nreadinessProbe\n if it is defined. If there is no probe and no values for \nupstream-max-fails\n, NGINX will continue to send traffic to the container.\n\n\n\n\nTip\n\n\nWith the default configuration NGINX will not health check your backends. Whenever the endpoints controller notices a readiness probe failure, that pod's IP will be removed from the list of endpoints. This will trigger the NGINX controller to also remove it from the upstreams.\n\n\n\n\nTo use custom values in an Ingress rule define these annotations:\n\n\nnginx.ingress.kubernetes.io/upstream-max-fails\n: number of unsuccessful attempts to communicate with the server that should occur in the duration set by the \nupstream-fail-timeout\n parameter to consider the server unavailable.\n\n\nnginx.ingress.kubernetes.io/upstream-fail-timeout\n: time in seconds during which the specified number of unsuccessful attempts to communicate with the server should occur to consider the server unavailable. This is also the period of time the server will be considered unavailable.\n\n\nIn NGINX, backend server pools are called \"\nupstreams\n\". Each upstream contains the endpoints for a service. An upstream is created for each service that has Ingress rules defined.\n\n\n\n\nAttention\n\n\nAll Ingress rules using the same service will use the same upstream.\n\nOnly one of the Ingress rules should define annotations to configure the upstream servers.\n\n\n\n\n\n\nExample\n\n\nPlease check the \ncustom upstream check\n example.\n\n\n\n\nCustom NGINX upstream hashing\n\u00b6\n\n\nNGINX supports load balancing by client-server mapping based on \nconsistent hashing\n for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. 
The \nketama\n consistent hashing method will be used, which ensures only a few keys would be remapped to different servers on upstream group changes.\n\n\nTo enable consistent hashing for a backend:\n\n\nnginx.ingress.kubernetes.io/upstream-hash-by\n: the nginx variable, text value or any combination thereof to use for consistent hashing. For example \nnginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri\"\n to consistently hash upstream requests by the current request URI.\n\n\nCustom NGINX load balancing\n\u00b6\n\n\nThis is similar to the global \nload-balance\n setting (https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#load-balance) but configures the load-balancing algorithm per ingress.\n\n\n\n\nNote that \nnginx.ingress.kubernetes.io/upstream-hash-by\n takes precedence over this. If neither this nor \nnginx.ingress.kubernetes.io/upstream-hash-by\n is set, the controller falls back to the globally configured load-balancing algorithm.\n\n\n\n\nCustom NGINX upstream vhost\n\u00b6\n\n\nThis configuration setting allows you to control the value for host in the following statement: \nproxy_set_header Host $host\n, which forms part of the location block. 
This is useful if you need to call the upstream server by something other than \n$host\n.\n\n\nClient Certificate Authentication\n\u00b6\n\n\nIt is possible to enable Client Certificate Authentication using additional annotations in the Ingress rule.\n\n\nThe annotations are:\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-secret: secretName\n:\n The name of the Secret that contains the full Certificate Authority chain \nca.crt\n that is enabled to authenticate against this Ingress.\n This annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-depth\n:\n The validation depth between the provided client certificate and the Certification Authority chain.\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-client\n:\n Enables verification of client certificates.\n\n\nnginx.ingress.kubernetes.io/auth-tls-error-page\n:\n The URL/Page that the user should be redirected to in case of a Certificate Authentication Error.\n\n\nnginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream\n:\n Indicates if the received certificates should be passed or not to the upstream server. 
By default this is disabled.\n\n\n\n\n\n\nExample\n\n\nPlease check the \nclient-certs\n example.\n\n\n\n\n\n\nAttention\n\n\nTLS with Client Authentication is \nnot\n possible in Cloudflare and might result in unexpected behavior.\n\n\nCloudflare only allows Authenticated Origin Pulls and is required to use their own certificate: \nhttps://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/\n\n\nOnly Authenticated Origin Pulls are allowed and can be configured by following their tutorial: \nhttps://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls\n\n\n\n\nConfiguration snippet\n\u00b6\n\n\nUsing this annotation you can add additional configuration to the NGINX location. For example:\n\n\nnginx.ingress.kubernetes.io/configuration-snippet\n:\n \n|\n\n \nmore_set_headers \"Request-Id: $req_id\";\n\n\n\n\n\nDefault Backend\n\u00b6\n\n\nThe ingress controller requires a \ndefault backend\n.\nThis service handles the response when the service in the Ingress rule does not have endpoints.\nThis is a global configuration for the ingress controller. In some cases could be required to return a custom content or format. In this scenario we can use the annotation \nnginx.ingress.kubernetes.io/default-backend: \n to specify a custom default backend.\n\n\nEnable CORS\n\u00b6\n\n\nTo enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation\n\nnginx.ingress.kubernetes.io/enable-cors: \"true\"\n. This will add a section in the server\nlocation enabling this functionality.\n\n\nCORS can be controlled with the following annotations:\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-methods\n\n controls which methods are accepted. 
This is a multi-valued field, separated by ',' and\n accepts only letters (upper and lower case).\n\n\nDefault: \nGET, PUT, POST, DELETE, PATCH, OPTIONS\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-methods: \"PUT, GET, POST, OPTIONS\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-headers\n\n controls which headers are accepted. This is a multi-valued field, separated by ',' and accepts letters,\n numbers, _ and -.\n\n\n\n\nDefault: \nDNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-headers: \"X-Forwarded-For, X-app123-XPTO\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-origin\n\n controls what's the accepted Origin for CORS.\n This is a single field value, with the following format: \nhttp(s)://origin-site.com\n or \nhttp(s)://origin-site.com:port\n\n\n\n\nDefault: \n*\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com:4443\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-credentials\n\n controls if credentials can be passed during CORS operations.\n\n\n\n\nDefault: \ntrue\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-credentials: \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-max-age\n\n controls how long preflight requests can be cached.\n Default: \n1728000\n\n Example: \nnginx.ingress.kubernetes.io/cors-max-age: 600\n\n\n\n\n\n\n\n\nNote\n\n\nFor more information please see \nhttps://enable-cors.org\n \n\n\n\n\nServer Alias\n\u00b6\n\n\nTo add Server Aliases to an Ingress rule add the annotation \nnginx.ingress.kubernetes.io/server-alias: \" \"\n.\nThis will create a server with the same configuration, but a different \nserver_name\n as the provided host.\n\n\n\n\nNote\n\n\nA server-alias name cannot conflict with the hostname of an existing server. 
If it does the server-alias annotation will be ignored.\nIf a server-alias is created and later a new server with the same hostname is created,\nthe new server configuration will take place over the alias configuration.\n\n\n\n\nFor more information please see \nthe \nserver_name\n documentation\n.\n\n\nServer snippet\n\u00b6\n\n\nUsing the annotation \nnginx.ingress.kubernetes.io/server-snippet\n it is possible to add custom configuration in the server configuration block.\n\n\napiVersion\n:\n \nextensions/v1beta1\n\n\nkind\n:\n \nIngress\n\n\nmetadata\n:\n\n \nannotations\n:\n\n \nnginx.ingress.kubernetes.io/server-snippet\n:\n \n|\n\n\nset $agentflag 0;\n\n\n\nif ($http_user_agent ~* \"(Mobile)\" ){\n\n \nset $agentflag 1;\n\n\n}\n\n\n\nif ( $agentflag = 1 ) {\n\n \nreturn 301 https://m.example.com;\n\n\n}\n\n\n\n\n\n\n\nAttention\n\n\nThis annotation can be used only once per host.\n\n\n\n\nClient Body Buffer Size\n\u00b6\n\n\nSets buffer size for reading client request body per location. In case the request body is larger than the buffer,\nthe whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages.\nThis is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. 
This annotation is\napplied to each location provided in the ingress rule.\n\n\n\n\nNote\n\n\nThe annotation value must be given in a format understood by Nginx.\n\n\n\n\n\n\nExample\n\n\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: \"1000\"\n # 1000 bytes\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1k\n # 1 kilobyte\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1K\n # 1 kilobyte\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1m\n # 1 megabyte\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1M\n # 1 megabyte\n\n\n\n\n\n\nFor more information please see \nhttp://nginx.org\n\n\nExternal Authentication\n\u00b6\n\n\nTo use an existing service that provides authentication the Ingress rule can be annotated with \nnginx.ingress.kubernetes.io/auth-url\n to indicate the URL where the HTTP request should be sent.\n\n\nnginx.ingress.kubernetes.io/auth-url\n:\n \n\"URL\n \nto\n \nthe\n \nauthentication\n \nservice\"\n\n\n\n\n\nAdditionally it is possible to set:\n\n\n\n\nnginx.ingress.kubernetes.io/auth-method\n:\n \n \n to specify the HTTP method to use.\n\n\nnginx.ingress.kubernetes.io/auth-signin\n:\n \n \n to specify the location of the error page.\n\n\nnginx.ingress.kubernetes.io/auth-response-headers\n:\n \n \n to specify headers to pass to backend once authentication request completes.\n\n\nnginx.ingress.kubernetes.io/auth-request-redirect\n:\n \n \n to specify the X-Auth-Request-Redirect header value.\n\n\n\n\n\n\nExample\n\n\nPlease check the \nexternal-auth\n example.\n\n\n\n\nRate limiting\n\u00b6\n\n\nThese annotations define a limit on the connections that can be opened by a single client IP address.\nThis can be used to mitigate \nDDoS Attacks\n.\n\n\n\n\nnginx.ingress.kubernetes.io/limit-connections\n: number of concurrent connections allowed from a single IP address.\n\n\nnginx.ingress.kubernetes.io/limit-rps\n: number of connections that may be accepted from a given IP each 
second.\n\n\nnginx.ingress.kubernetes.io/limit-rpm\n: number of connections that may be accepted from a given IP each minute.\n\n\nnginx.ingress.kubernetes.io/limit-rate-after\n: sets the initial amount after which the further transmission of a response to a client will be rate limited.\n\n\nnginx.ingress.kubernetes.io/limit-rate\n: rate of requests accepted from a client each second.\n\n\n\n\nYou can specify the client IP source ranges to be excluded from rate-limiting through the \nnginx.ingress.kubernetes.io/limit-whitelist\n annotation. The value is a comma separated list of CIDRs.\n\n\nIf you specify multiple annotations in a single Ingress rule, \nlimit-rpm\n takes precedence over \nlimit-rps\n.\n\n\nThe annotations \nnginx.ingress.kubernetes.io/limit-rate\n and \nnginx.ingress.kubernetes.io/limit-rate-after\n define a limit on the rate of response transmission to a client. The rate is specified in bytes per second. A value of zero disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.\n\n\nTo configure this setting globally for all Ingress rules, the \nlimit-rate-after\n and \nlimit-rate\n values may be set in the \nNGINX ConfigMap\n. A value set in an Ingress annotation overrides the global setting.\n\n\nPermanent Redirect\n\u00b6\n\n\nThis annotation allows returning a permanent redirect instead of sending data to the upstream. For example \nnginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com\n would redirect everything to Google.\n\n\nPermanent Redirect Code\n\u00b6\n\n\nThis annotation allows you to modify the status code used for permanent redirects. 
For example \nnginx.ingress.kubernetes.io/permanent-redirect-code: '308'\n would return your permanent-redirect with a 308.\n\n\nSSL Passthrough\n\u00b6\n\n\nThe annotation \nnginx.ingress.kubernetes.io/ssl-passthrough\n allows TLS termination to be configured in the pod instead of in NGINX.\n\n\n\n\nAttention\n\n\nUsing the annotation \nnginx.ingress.kubernetes.io/ssl-passthrough\n invalidates all the other available annotations.\nThis is because SSL Passthrough works at layer 4 of the OSI model (TCP), not on the HTTP/HTTPS level.\n\n\n\n\n\n\nAttention\n\n\nThe use of this annotation requires the flag \n--enable-ssl-passthrough\n (by default it is disabled).\n\n\n\n\nSecure backends DEPRECATED (since 0.18.0)\n\u00b6\n\n\nPlease use \nnginx.ingress.kubernetes.io/backend-protocol: \"HTTPS\"\n\n\nBy default NGINX uses plain HTTP to reach the services.\nAdding the annotation \nnginx.ingress.kubernetes.io/secure-backends: \"true\"\n in the Ingress rule changes the protocol to HTTPS.\nIf you want to validate the upstream against a specific certificate, you can create a secret with it and reference the secret with the annotation \nnginx.ingress.kubernetes.io/secure-verify-ca-secret\n.\n\n\n\n\nAttention\n\n\nNote that if an invalid or non-existent secret is given,\nthe ingress controller will ignore the \nsecure-backends\n annotation.\n\n\n\n\nService Upstream\n\u00b6\n\n\nBy default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration.\n\n\nThe \nnginx.ingress.kubernetes.io/service-upstream\n annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port.\n\n\nThis can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. 
See issue \n#257\n.\n\n\nKnown Issues\n\u00b6\n\n\nIf the \nservice-upstream\n annotation is specified, the following things should be taken into consideration:\n\n\n\n\nSticky Sessions will not work as only round-robin load balancing is supported.\n\n\nThe \nproxy_next_upstream\n directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.\n\n\n\n\nServer-side HTTPS enforcement through redirect\n\u00b6\n\n\nBy default the controller redirects (308) to HTTPS if TLS is enabled for that ingress.\nIf you want to disable this behavior globally, you can use \nssl-redirect: \"false\"\n in the NGINX \nconfig map\n.\n\n\nTo configure this feature for specific ingress resources, you can use the \nnginx.ingress.kubernetes.io/ssl-redirect: \"false\"\n\nannotation in the particular resource.\n\n\nWhen using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS\neven when there is no TLS certificate available.\nThis can be achieved by using the \nnginx.ingress.kubernetes.io/force-ssl-redirect: \"true\"\n annotation in the particular resource.\n\n\nRedirect from/to www.\n\u00b6\n\n\nIn some scenarios it is required to redirect from \nwww.domain.com\n to \ndomain.com\n or vice versa.\nTo enable this feature, use the annotation \nnginx.ingress.kubernetes.io/from-to-www-redirect: \"true\"\n\n\n\n\nAttention\n\n\nIf at some point a new Ingress is created with a host equal to one of the options (like \ndomain.com\n) the annotation will be omitted.\n\n\n\n\nWhitelist source range\n\u00b6\n\n\nYou can specify allowed client IP source ranges through the \nnginx.ingress.kubernetes.io/whitelist-source-range\n annotation.\nThe value is a comma separated list of \nCIDRs\n, e.g. 
\n10.0.0.0/24,172.10.0.1\n.\n\n\nTo configure this setting globally for all Ingress rules, the \nwhitelist-source-range\n value may be set in the \nNGINX ConfigMap\n.\n\n\n\n\nNote\n\n\nAdding an annotation to an Ingress rule overrides any global restriction.\n\n\n\n\nCustom timeouts\n\u00b6\n\n\nUsing the configuration configmap it is possible to set the default global timeout for connections to the upstream servers.\nIn some scenarios different values are required. To allow this, the following annotations are provided for customization:\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-connect-timeout\n\n\nnginx.ingress.kubernetes.io/proxy-send-timeout\n\n\nnginx.ingress.kubernetes.io/proxy-read-timeout\n\n\nnginx.ingress.kubernetes.io/proxy-next-upstream\n\n\nnginx.ingress.kubernetes.io/proxy-next-upstream-tries\n\n\nnginx.ingress.kubernetes.io/proxy-request-buffering\n\n\n\n\nProxy redirect\n\u00b6\n\n\nWith the annotations \nnginx.ingress.kubernetes.io/proxy-redirect-from\n and \nnginx.ingress.kubernetes.io/proxy-redirect-to\n it is possible to\nset the text that should be changed in the \nLocation\n and \nRefresh\n header fields of a proxied server response (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect)\n\n\nSetting \"off\" or \"default\" in the annotation \nnginx.ingress.kubernetes.io/proxy-redirect-from\n disables \nnginx.ingress.kubernetes.io/proxy-redirect-to\n,\notherwise, both annotations must be used in unison. Note that each annotation must be a string without spaces.\n\n\nBy default the value of each annotation is \"off\".\n\n\nCustom max body size\n\u00b6\n\n\nFor NGINX, a 413 error will be returned to the client when the size in a request exceeds the maximum allowed size of the client request body. 
This size can be configured by the parameter \nclient_max_body_size\n.\n\n\nTo configure this setting globally for all Ingress rules, the \nproxy-body-size\n value may be set in the \nNGINX ConfigMap\n.\nTo use custom values in an Ingress rule, define this annotation:\n\n\nnginx.ingress.kubernetes.io/proxy-body-size\n:\n \n8m\n\n\n\n\n\nProxy cookie domain\n\u00b6\n\n\nSets a text that \nshould be changed in the domain attribute\n of the \"Set-Cookie\" header fields of a proxied server response.\n\n\nTo configure this setting globally for all Ingress rules, the \nproxy-cookie-domain\n value may be set in the \nNGINX ConfigMap\n.\n\n\nProxy buffering\n\u00b6\n\n\nEnable or disable proxy buffering \nproxy_buffering\n.\nBy default proxy buffering is disabled in the NGINX config.\n\n\nTo configure this setting globally for all Ingress rules, the \nproxy-buffering\n value may be set in the \nNGINX ConfigMap\n.\nTo use custom values in an Ingress rule, define this annotation:\n\n\nnginx.ingress.kubernetes.io/proxy-buffering\n:\n \n\"on\"\n\n\n\n\n\nProxy buffer size\n\u00b6\n\n\nSets the size of the buffer \nproxy_buffer_size\n used for reading the first part of the response received from the proxied server.\nBy default the proxy buffer size is set to \"4k\".\n\n\nTo configure this setting globally, set \nproxy-buffer-size\n in \nNGINX ConfigMap\n. To use custom values in an Ingress rule, define this annotation:\n\nnginx.ingress.kubernetes.io/proxy-buffer-size\n:\n \n\"8k\"\n\n\n\n\nSSL ciphers\n\u00b6\n\n\nSpecifies the \nenabled ciphers\n.\n\n\nUsing this annotation will set the \nssl_ciphers\n directive at the server level. 
This configuration is active for all the paths in the host.\n\n\nnginx.ingress.kubernetes.io/ssl-ciphers\n:\n \n\"ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP\"\n\n\n\n\n\nConnection proxy header\n\u00b6\n\n\nUsing this annotation will override the default connection header set by NGINX.\nTo use custom values in an Ingress rule, define the annotation:\n\n\nnginx.ingress.kubernetes.io/connection-proxy-header\n:\n \n\"keep-alive\"\n\n\n\n\n\nEnable Access Log\n\u00b6\n\n\nAccess logs are enabled by default, but in some scenarios it might be required to disable access logs for a given\ningress. To do this, use the annotation:\n\n\nnginx.ingress.kubernetes.io/enable-access-log\n:\n \n\"false\"\n\n\n\n\n\nEnable Rewrite Log\n\u00b6\n\n\nRewrite logs are not enabled by default. In some scenarios it could be required to enable NGINX rewrite logs.\nNote that rewrite logs are sent to the error_log file at the notice level. To enable this feature, use the annotation:\n\n\nnginx.ingress.kubernetes.io/enable-rewrite-log\n:\n \n\"true\"\n\n\n\n\n\nLua Resty WAF\n\u00b6\n\n\nUsing \nlua-resty-waf-*\n annotations we can enable and control the \nlua-resty-waf\n\nWeb Application Firewall per location.\n\n\nThe following configuration will enable the WAF for the paths defined in the corresponding ingress:\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf\n:\n \n\"active\"\n\n\n\n\n\nIn order to run it in debugging mode you can set \nnginx.ingress.kubernetes.io/lua-resty-waf-debug\n to \n\"true\"\n in addition to the above configuration.\nThe other possible values for \nnginx.ingress.kubernetes.io/lua-resty-waf\n are \ninactive\n and \nsimulate\n.\nIn \ninactive\n mode the WAF won't do anything, whereas in \nsimulate\n mode it will log a warning message if there's a matching WAF rule for a given request. 
This is useful to debug a rule and eliminate possible false positives before fully deploying it.\n\n\nlua-resty-waf\n comes with predefined set of rules \nhttps://github.com/p0pr0ck5/lua-resty-waf/tree/84b4f40362500dd0cb98b9e71b5875cb1a40f1ad/rules\n that covers ModSecurity CRS.\nYou can use \nnginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets\n to ignore a subset of those rulesets. For an example:\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets\n:\n \n\"41000_sqli,\n \n42000_xss\"\n\n\n\n\n\nwill ignore the two mentioned rulesets.\n\n\nIt is also possible to configure custom WAF rules per ingress using the \nnginx.ingress.kubernetes.io/lua-resty-waf-extra-rules\n annotation. For an example the following snippet will configure a WAF rule to deny requests with query string value that contains word \nfoo\n:\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-extra-rules\n:\n \n'[=[\n \n{\n \n\"access\":\n \n[\n \n{\n \n\"actions\":\n \n{\n \n\"disrupt\"\n \n:\n \n\"DENY\"\n \n},\n \n\"id\":\n \n10001,\n \n\"msg\":\n \n\"my\n \ncustom\n \nrule\",\n \n\"operator\":\n \n\"STR_CONTAINS\",\n \n\"pattern\":\n \n\"foo\",\n \n\"vars\":\n \n[\n \n{\n \n\"parse\":\n \n[\n \n\"values\",\n \n1\n \n],\n \n\"type\":\n \n\"REQUEST_ARGS\"\n \n}\n \n]\n \n}\n \n],\n \n\"body_filter\":\n \n[],\n \n\"header_filter\":[]\n \n}\n \n]=]'\n\n\n\n\n\nFor details on how to write WAF rules, please refer to \nhttps://github.com/p0pr0ck5/lua-resty-waf\n.\n\n\ngRPC backend DEPRECATED (since 0.18.0)\n\u00b6\n\n\nPlease use \nnginx.ingress.kubernetes.io/backend-protocol: \"GRPC\"\n or \nnginx.ingress.kubernetes.io/backend-protocol: \"GRPCS\"\n\n\nSince NGINX 1.13.10 it is possible to expose \ngRPC services natively\n\n\nYou only need to add the annotation \nnginx.ingress.kubernetes.io/grpc-backend: \"true\"\n to enable this feature.\nAdditionally, if the gRPC service requires TLS, add \nnginx.ingress.kubernetes.io/secure-backends: \"true\"\n.\n\n\n\n\nAttention\n\n\nThis feature 
requires HTTP/2 to work, which means we need to expose this service using HTTPS.\nExposing a gRPC service using HTTP is not supported.\n\n\nInfluxDB\n\u00b6\n\n\nUsing \ninfluxdb-*\n annotations we can monitor requests passing through a Location by sending them to an InfluxDB backend exposing the UDP socket\nusing the \nnginx-influxdb-module\n.\n\n\nnginx.ingress.kubernetes.io/enable-influxdb\n:\n \n\"true\"\n\n\nnginx.ingress.kubernetes.io/influxdb-measurement\n:\n \n\"nginx-reqs\"\n\n\nnginx.ingress.kubernetes.io/influxdb-port\n:\n \n\"8089\"\n\n\nnginx.ingress.kubernetes.io/influxdb-host\n:\n \n\"127.0.0.1\"\n\n\nnginx.ingress.kubernetes.io/influxdb-server-name\n:\n \n\"nginx-ingress\"\n\n\n\n\n\nFor the \ninfluxdb-host\n parameter you have two options:\n\n\n\n\nUse an InfluxDB server configured with the \nUDP protocol\n enabled. \n\n\nDeploy Telegraf as a sidecar proxy to the Ingress controller configured to listen UDP with the \nsocket listener input\n and to write using\nany one of the \noutput plugins\n like InfluxDB, Apache Kafka,\nPrometheus, etc. (recommended)\n\n\n\n\nIt's important to remember that there's no DNS resolver at this stage, so you will have to configure\nan IP address for \nnginx.ingress.kubernetes.io/influxdb-host\n. 
If you deploy Influx or Telegraf as sidecar (another container in the same pod) this becomes straightforward since you can directly use \n127.0.0.1\n.\n\n\nBackend Protocol\n\u00b6\n\n\nUsing \nbackend-protocol\n annotations it is possible to indicate how NGINX should communicate with the backend service.\nValid Values: HTTP, HTTPS, GRPC, GRPCS and AJP\n\n\nBy default NGINX uses \nHTTP\n.\n\n\nExample:\n\n\nnginx.ingress.kubernetes.io/backend-protocol\n:\n \n\"HTTPS\"\n\n\nAnnotations\n\u00b6\n\n\nYou can add these Kubernetes annotations to specific Ingress objects to customize their behavior.\n\n\n\n\nTip\n\n\nAnnotation keys and values can only be strings.\nOther types, such as boolean or numeric values must be quoted,\ni.e. \n\"true\"\n, \n\"false\"\n, \n\"100\"\n.\n\n\n\n\n\n\nNote\n\n\nThe annotation prefix can be changed using the\n\n--annotations-prefix\n command line argument\n,\nbut the default is \nnginx.ingress.kubernetes.io\n, as described in the\ntable below.\n\n\n\n\n\n\n\n\n\n\nName\n\n\ntype\n\n\n\n\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/add-base-url\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/app-root\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/affinity\n\n\ncookie\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-realm\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-secret\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-type\n\n\nbasic or digest\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-secret\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-depth\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-client\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-error-page\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream\n\n\n\"true\" or 
\"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/auth-url\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/backend-protocol\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/base-url-scheme\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/configuration-snippet\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/default-backend\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/enable-cors\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-origin\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-methods\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-headers\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-credentials\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-max-age\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/force-ssl-redirect\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/from-to-www-redirect\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/grpc-backend\n\n\n\"true\" or 
\"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/limit-connections\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/limit-rps\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/permanent-redirect\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/permanent-redirect-code\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-body-size\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-cookie-domain\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-connect-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-send-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-read-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-next-upstream\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-next-upstream-tries\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-request-buffering\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-redirect-from\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-redirect-to\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/rewrite-log\n\n\nURI\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/rewrite-target\n\n\nURI\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/secure-backends\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/secure-verify-ca-secret\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/server-alias\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/server-snippet\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/service-upstream\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/session-cookie-name\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/session-cookie-hash\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/ssl-redirect\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/ssl-passthrough\n\n\n\"true\" or 
\"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-max-fails\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-fail-timeout\n\n\nnumber\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-hash-by\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/load-balance\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/upstream-vhost\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/whitelist-source-range\n\n\nCIDR\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-buffering\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-buffer-size\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/ssl-ciphers\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/connection-proxy-header\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/enable-access-log\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-debug\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-extra-rules\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/enable-influxdb\n\n\n\"true\" or \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/influxdb-measurement\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/influxdb-port\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/influxdb-host\n\n\nstring\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/influxdb-server-name\n\n\nstring\n\n\n\n\n\n\n\n\nRewrite\n\u00b6\n\n\nIn some scenarios the exposed URL in the backend service differs from the specified path in the Ingress rule. 
Without a rewrite any request will return 404.\nSet the annotation \nnginx.ingress.kubernetes.io/rewrite-target\n to the path expected by the service.\n\n\nIf the application contains relative links it is possible to add an additional annotation \nnginx.ingress.kubernetes.io/add-base-url\n that will prepend a \nbase\n tag\n in the header of the returned HTML from the backend.\n\n\nIf the scheme of the \nbase\n tag\n needs to be specific, set the annotation \nnginx.ingress.kubernetes.io/base-url-scheme\n to the scheme, such as \nhttp\n or \nhttps\n.\n\n\nIf the Application Root is exposed in a different path and needs to be redirected, set the annotation \nnginx.ingress.kubernetes.io/app-root\n to redirect requests for \n/\n.\n\n\n\n\nExample\n\n\nPlease check the \nrewrite\n example.\n\n\n\n\nSession Affinity\n\u00b6\n\n\nThe annotation \nnginx.ingress.kubernetes.io/affinity\n enables and sets the affinity type in all Upstreams of an Ingress. This way, a request will always be directed to the same upstream server.\nThe only affinity type available for NGINX is \ncookie\n.\n\n\n\n\nExample\n\n\nPlease check the \naffinity\n example.\n\n\n\n\nCookie affinity\n\u00b6\n\n\nIf you use the \ncookie\n affinity type you can also specify the name of the cookie that will be used to route the requests with the annotation \nnginx.ingress.kubernetes.io/session-cookie-name\n. The default is to create a cookie named 'INGRESSCOOKIE'.\n\n\nIn the case of NGINX, the annotation \nnginx.ingress.kubernetes.io/session-cookie-hash\n defines which algorithm will be used to hash the used upstream. 
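Putting the cookie affinity annotations together, a minimal sketch of an Ingress might look like the following (the hostname, Ingress name and service name are hypothetical):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: affinity-example        # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
  rules:
  - host: example.com           # hypothetical host
    http:
      paths:
      - path: /
        backend:
          serviceName: echo-svc # hypothetical service
          servicePort: 80
```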
Default value is \nmd5\n and possible values are \nmd5\n, \nsha1\n and \nindex\n.\n\n\n\n\nAttention\n\n\nThe \nindex\n option is not an actual hash; an in-memory index is used instead, which has less overhead.\nHowever, with \nindex\n, matching against a changing upstream server list is inconsistent.\nSo, at reload, if upstream servers have changed, index values are not guaranteed to correspond to the same server as before!\n\nUse \nindex\n with caution\n and only if you need to!\n\n\n\n\nIn NGINX this feature is implemented by the third party module \nnginx-sticky-module-ng\n. The workflow used to define which upstream server will be used is explained \nhere\n.\n\n\nAuthentication\n\u00b6\n\n\nIt is possible to add authentication by adding additional annotations to the Ingress rule. The source of the authentication is a secret that contains usernames and passwords inside the key \nauth\n.\n\n\nThe annotations are:\n\nnginx.ingress.kubernetes.io/auth-type: [basic|digest]\n\n\n\nIndicates the \nHTTP Authentication Type: Basic or Digest Access Authentication\n.\n\n\nnginx.ingress.kubernetes.io/auth-secret: secretName\n\n\n\n\nThe name of the Secret that contains the usernames and passwords which are granted access to the \npath\ns defined in the Ingress rules.\nThis annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.\n\n\nnginx.ingress.kubernetes.io/auth-realm: \"realm string\"\n\n\n\n\n\n\nExample\n\n\nPlease check the \nauth\n example.\n\n\n\n\nCustom NGINX upstream checks\n\u00b6\n\n\nNGINX exposes some flags in the \nupstream configuration\n that enable the configuration of each server in the upstream. The Ingress controller allows custom \nmax_fails\n and \nfail_timeout\n parameters in a global context using \nupstream-max-fails\n and \nupstream-fail-timeout\n in the NGINX ConfigMap or in a particular Ingress rule. 
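As a sketch, the per-Ingress form of these parameters might look like this (the values are illustrative, not recommendations):

```yaml
nginx.ingress.kubernetes.io/upstream-max-fails: "3"
nginx.ingress.kubernetes.io/upstream-fail-timeout: "10"
```

Because all Ingress rules that reference the same service share one upstream, such annotations should be set on only one of those rules.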
\nupstream-max-fails\n defaults to 0. This means NGINX will respect the container's \nreadinessProbe\n if it is defined. If there is no probe and no values for \nupstream-max-fails\n, NGINX will continue to send traffic to the container.\n\n\n\n\nTip\n\n\nWith the default configuration NGINX will not health check your backends. Whenever the endpoints controller notices a readiness probe failure, that pod's IP will be removed from the list of endpoints. This will trigger the NGINX controller to also remove it from the upstreams.\n\n\n\n\nTo use custom values in an Ingress rule define these annotations:\n\n\nnginx.ingress.kubernetes.io/upstream-max-fails\n: number of unsuccessful attempts to communicate with the server that should occur in the duration set by the \nupstream-fail-timeout\n parameter to consider the server unavailable.\n\n\nnginx.ingress.kubernetes.io/upstream-fail-timeout\n: time in seconds during which the specified number of unsuccessful attempts to communicate with the server should occur to consider the server unavailable. This is also the period of time the server will be considered unavailable.\n\n\nIn NGINX, backend server pools are called \"\nupstreams\n\". Each upstream contains the endpoints for a service. An upstream is created for each service that has Ingress rules defined.\n\n\n\n\nAttention\n\n\nAll Ingress rules using the same service will use the same upstream.\n\nOnly one of the Ingress rules should define annotations to configure the upstream servers.\n\n\n\n\n\n\nExample\n\n\nPlease check the \ncustom upstream check\n example.\n\n\n\n\nCustom NGINX upstream hashing\n\u00b6\n\n\nNGINX supports load balancing by client-server mapping based on \nconsistent hashing\n for a given key. The key can contain text, variables or any combination thereof. This feature allows for request stickiness other than client IP or cookies. 
The \nketama\n consistent hashing method will be used, which ensures only a few keys would be remapped to different servers on upstream group changes.\n\n\nTo enable consistent hashing for a backend:\n\n\nnginx.ingress.kubernetes.io/upstream-hash-by\n: the nginx variable, text value or any combination thereof to use for consistent hashing. For example \nnginx.ingress.kubernetes.io/upstream-hash-by: \"$request_uri\"\n to consistently hash upstream requests by the current request URI.\n\n\nCustom NGINX load balancing\n\u00b6\n\n\nThis is similar to \nload-balance\n (https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#load-balance) but configures the load balancing algorithm per Ingress.\n\n\n\n\nNote that \nnginx.ingress.kubernetes.io/upstream-hash-by\n takes precedence over this. If neither this nor \nnginx.ingress.kubernetes.io/upstream-hash-by\n is set, we fall back to the globally configured load balancing algorithm.\n\n\n\n\nCustom NGINX upstream vhost\n\u00b6\n\n\nThis configuration setting allows you to control the value for host in the following statement: \nproxy_set_header Host $host\n, which forms part of the location block. 
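A sketch of how the annotation might be set, using a hypothetical internal hostname:

```yaml
nginx.ingress.kubernetes.io/upstream-vhost: "internal-host.example.svc.cluster.local"
```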
This is useful if you need to call the upstream server by something other than \n$host\n.\n\n\nClient Certificate Authentication\n\u00b6\n\n\nIt is possible to enable Client Certificate Authentication using additional annotations in the Ingress rule.\n\n\nThe annotations are:\n\n\n\n\nnginx.ingress.kubernetes.io/auth-tls-secret: secretName\n:\n The name of the Secret that contains the full Certificate Authority chain \nca.crt\n that is enabled to authenticate against this Ingress.\n This annotation also accepts the alternative form \"namespace/secretName\", in which case the Secret lookup is performed in the referenced namespace instead of the Ingress namespace.\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-depth\n:\n The validation depth between the provided client certificate and the Certification Authority chain.\n\n\nnginx.ingress.kubernetes.io/auth-tls-verify-client\n:\n Enables verification of client certificates.\n\n\nnginx.ingress.kubernetes.io/auth-tls-error-page\n:\n The URL/Page that the user should be redirected to in case of a Certificate Authentication Error.\n\n\nnginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream\n:\n Indicates if the received certificates should be passed or not to the upstream server. 
By default this is disabled.\n\n\n\n\n\n\nExample\n\n\nPlease check the \nclient-certs\n example.\n\n\n\n\n\n\nAttention\n\n\nTLS with Client Authentication is \nnot\n possible in Cloudflare and might result in unexpected behavior.\n\n\nCloudflare only allows Authenticated Origin Pulls and is required to use their own certificate: \nhttps://blog.cloudflare.com/protecting-the-origin-with-tls-authenticated-origin-pulls/\n\n\nOnly Authenticated Origin Pulls are allowed and can be configured by following their tutorial: \nhttps://support.cloudflare.com/hc/en-us/articles/204494148-Setting-up-NGINX-to-use-TLS-Authenticated-Origin-Pulls\n\n\n\n\nConfiguration snippet\n\u00b6\n\n\nUsing this annotation you can add additional configuration to the NGINX location. For example:\n\n\nnginx.ingress.kubernetes.io/configuration-snippet\n:\n \n|\n\n \nmore_set_headers \"Request-Id: $req_id\";\n\n\n\n\n\nDefault Backend\n\u00b6\n\n\nThe ingress controller requires a \ndefault backend\n.\nThis service handles the response when the service in the Ingress rule does not have endpoints.\nThis is a global configuration for the ingress controller. In some cases it could be required to return custom content or a custom format. In this scenario we can use the annotation \nnginx.ingress.kubernetes.io/default-backend: \n to specify a custom default backend.\n\n\nEnable CORS\n\u00b6\n\n\nTo enable Cross-Origin Resource Sharing (CORS) in an Ingress rule, add the annotation\n\nnginx.ingress.kubernetes.io/enable-cors: \"true\"\n. This will add a section in the server\nlocation enabling this functionality.\n\n\nCORS can be controlled with the following annotations:\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-methods\n\n controls which methods are accepted. 
This is a multi-valued field, separated by ',' and\n accepts only letters (upper and lower case).\n\n\nDefault: \nGET, PUT, POST, DELETE, PATCH, OPTIONS\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-methods: \"PUT, GET, POST, OPTIONS\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-headers\n\n controls which headers are accepted. This is a multi-valued field, separated by ',' and accepts letters,\n numbers, _ and -.\n\n\n\n\nDefault: \nDNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Authorization\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-headers: \"X-Forwarded-For, X-app123-XPTO\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-origin\n\n controls what's the accepted Origin for CORS.\n This is a single field value, with the following format: \nhttp(s)://origin-site.com\n or \nhttp(s)://origin-site.com:port\n\n\n\n\nDefault: \n*\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-origin: \"https://origin-site.com:4443\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-allow-credentials\n\n controls if credentials can be passed during CORS operations.\n\n\n\n\nDefault: \ntrue\n\n\n\n\nExample: \nnginx.ingress.kubernetes.io/cors-allow-credentials: \"false\"\n\n\n\n\n\n\nnginx.ingress.kubernetes.io/cors-max-age\n\n controls how long preflight requests can be cached.\n Default: \n1728000\n\n Example: \nnginx.ingress.kubernetes.io/cors-max-age: 600\n\n\n\n\n\n\n\n\nNote\n\n\nFor more information please see \nhttps://enable-cors.org\n \n\n\n\n\nServer Alias\n\u00b6\n\n\nTo add Server Aliases to an Ingress rule add the annotation \nnginx.ingress.kubernetes.io/server-alias: \" \"\n.\nThis will create a server with the same configuration, but a different \nserver_name\n as the provided host.\n\n\n\n\nNote\n\n\nA server-alias name cannot conflict with the hostname of an existing server. 
If it does the server-alias annotation will be ignored.\nIf a server-alias is created and later a new server with the same hostname is created,\nthe new server configuration will take place over the alias configuration.\n\n\n\n\nFor more information please see \nthe \nserver_name\n documentation\n.\n\n\nServer snippet\n\u00b6\n\n\nUsing the annotation \nnginx.ingress.kubernetes.io/server-snippet\n it is possible to add custom configuration in the server configuration block.\n\n\napiVersion\n:\n \nextensions/v1beta1\n\n\nkind\n:\n \nIngress\n\n\nmetadata\n:\n\n \nannotations\n:\n\n \nnginx.ingress.kubernetes.io/server-snippet\n:\n \n|\n\n\nset $agentflag 0;\n\n\n\nif ($http_user_agent ~* \"(Mobile)\" ){\n\n \nset $agentflag 1;\n\n\n}\n\n\n\nif ( $agentflag = 1 ) {\n\n \nreturn 301 https://m.example.com;\n\n\n}\n\n\n\n\n\n\n\nAttention\n\n\nThis annotation can be used only once per host.\n\n\n\n\nClient Body Buffer Size\n\u00b6\n\n\nSets buffer size for reading client request body per location. In case the request body is larger than the buffer,\nthe whole body or only its part is written to a temporary file. By default, buffer size is equal to two memory pages.\nThis is 8K on x86, other 32-bit platforms, and x86-64. It is usually 16K on other 64-bit platforms. 
This annotation is\napplied to each location provided in the ingress rule.\n\n\n\n\nNote\n\n\nThe annotation value must be given in a format understood by Nginx.\n\n\n\n\n\n\nExample\n\n\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: \"1000\"\n # 1000 bytes\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1k\n # 1 kilobyte\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1K\n # 1 kilobyte\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1m\n # 1 megabyte\n\n\nnginx.ingress.kubernetes.io/client-body-buffer-size: 1M\n # 1 megabyte\n\n\n\n\n\n\nFor more information please see \nhttp://nginx.org\n\n\nExternal Authentication\n\u00b6\n\n\nTo use an existing service that provides authentication the Ingress rule can be annotated with \nnginx.ingress.kubernetes.io/auth-url\n to indicate the URL where the HTTP request should be sent.\n\n\nnginx.ingress.kubernetes.io/auth-url\n:\n \n\"URL\n \nto\n \nthe\n \nauthentication\n \nservice\"\n\n\n\n\n\nAdditionally it is possible to set:\n\n\n\n\nnginx.ingress.kubernetes.io/auth-method\n:\n \n \n to specify the HTTP method to use.\n\n\nnginx.ingress.kubernetes.io/auth-signin\n:\n \n \n to specify the location of the error page.\n\n\nnginx.ingress.kubernetes.io/auth-response-headers\n:\n \n \n to specify headers to pass to backend once authentication request completes.\n\n\nnginx.ingress.kubernetes.io/auth-request-redirect\n:\n \n \n to specify the X-Auth-Request-Redirect header value.\n\n\n\n\n\n\nExample\n\n\nPlease check the \nexternal-auth\n example.\n\n\n\n\nRate limiting\n\u00b6\n\n\nThese annotations define a limit on the connections that can be opened by a single client IP address.\nThis can be used to mitigate \nDDoS Attacks\n.\n\n\n\n\nnginx.ingress.kubernetes.io/limit-connections\n: number of concurrent connections allowed from a single IP address.\n\n\nnginx.ingress.kubernetes.io/limit-rps\n: number of connections that may be accepted from a given IP each 
second.\n\n\nnginx.ingress.kubernetes.io/limit-rpm\n: number of connections that may be accepted from a given IP each minute.\n\n\nnginx.ingress.kubernetes.io/limit-rate-after\n: sets the initial amount after which the further transmission of a response to a client will be rate limited.\n\n\nnginx.ingress.kubernetes.io/limit-rate\n: the rate, in bytes per second, at which responses are transmitted to a client.\n\n\n\n\nYou can specify the client IP source ranges to be excluded from rate-limiting through the \nnginx.ingress.kubernetes.io/limit-whitelist\n annotation. The value is a comma separated list of CIDRs.\n\n\nIf you specify multiple of these annotations in a single Ingress rule, \nlimit-rpm\n takes precedence, and then \nlimit-rps\n.\n\n\nThe annotations \nnginx.ingress.kubernetes.io/limit-rate\n and \nnginx.ingress.kubernetes.io/limit-rate-after\n define a limit on the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, and so if a client simultaneously opens two connections, the overall rate will be twice as much as the specified limit.\n\n\nTo configure this setting globally for all Ingress rules, the \nlimit-rate-after\n and \nlimit-rate\n values may be set in the \nNGINX ConfigMap\n. A value set in an Ingress annotation overrides the global setting.\n\n\nPermanent Redirect\n\u00b6\n\n\nThis annotation allows you to return a permanent redirect instead of sending data to the upstream. For example \nnginx.ingress.kubernetes.io/permanent-redirect: https://www.google.com\n would redirect everything to Google.\n\n\nPermanent Redirect Code\n\u00b6\n\n\nThis annotation allows you to modify the status code used for permanent redirects. 
For example \nnginx.ingress.kubernetes.io/permanent-redirect-code: '308'\n would return your permanent-redirect with a 308.\n\n\nSSL Passthrough\n\u00b6\n\n\nThe annotation \nnginx.ingress.kubernetes.io/ssl-passthrough\n instructs the controller to send TLS connections directly\nto the backend instead of letting NGINX decrypt the communication. See also \nTLS/HTTPS\n in\nthe User guide.\n\n\n\n\nNote\n\n\nSSL Passthrough is \ndisabled by default\n and requires starting the controller with the\n\n--enable-ssl-passthrough\n flag.\n\n\n\n\n\n\nAttention\n\n\nBecause SSL Passthrough works on layer 4 of the OSI model (TCP) and not on the layer 7 (HTTP), using SSL Passthrough\ninvalidates all the other annotations set on an Ingress object.\n\n\n\n\nSecure backends DEPRECATED (since 0.18.0)\n\u00b6\n\n\nPlease use \nnginx.ingress.kubernetes.io/backend-protocol: \"HTTPS\"\n\n\nBy default NGINX uses plain HTTP to reach the services.\nAdding the annotation \nnginx.ingress.kubernetes.io/secure-backends: \"true\"\n in the Ingress rule changes the protocol to HTTPS.\nIf you want to validate the upstream against a specific certificate, you can create a secret with it and reference the secret with the annotation \nnginx.ingress.kubernetes.io/secure-verify-ca-secret\n.\n\n\n\n\nAttention\n\n\nNote that if an invalid or non-existent secret is given,\nthe ingress controller will ignore the \nsecure-backends\n annotation.\n\n\n\n\nService Upstream\n\u00b6\n\n\nBy default the NGINX ingress controller uses a list of all endpoints (Pod IP/port) in the NGINX upstream configuration.\n\n\nThe \nnginx.ingress.kubernetes.io/service-upstream\n annotation disables that behavior and instead uses a single upstream in NGINX, the service's Cluster IP and port.\n\n\nThis can be desirable for things like zero-downtime deployments as it reduces the need to reload NGINX configuration when Pods come up and down. 
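The annotation takes a boolean string, e.g.:

```yaml
nginx.ingress.kubernetes.io/service-upstream: "true"
```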
See issue \n#257\n.\n\n\nKnown Issues\n\u00b6\n\n\nIf the \nservice-upstream\n annotation is specified the following things should be taken into consideration:\n\n\n\n\nSticky Sessions will not work as only round-robin load balancing is supported.\n\n\nThe \nproxy_next_upstream\n directive will not have any effect, meaning that on error the request will not be dispatched to another upstream.\n\n\n\n\nServer-side HTTPS enforcement through redirect\n\u00b6\n\n\nBy default the controller redirects (308) to HTTPS if TLS is enabled for that ingress.\nIf you want to disable this behavior globally, you can use \nssl-redirect: \"false\"\n in the NGINX \nconfig map\n.\n\n\nTo configure this feature for specific ingress resources, you can use the \nnginx.ingress.kubernetes.io/ssl-redirect: \"false\"\n\nannotation in the particular resource.\n\n\nWhen using SSL offloading outside of the cluster (e.g. AWS ELB) it may be useful to enforce a redirect to HTTPS\neven when there is no TLS certificate available.\nThis can be achieved by using the \nnginx.ingress.kubernetes.io/force-ssl-redirect: \"true\"\n annotation in the particular resource.\n\n\nRedirect from/to www.\n\u00b6\n\n\nIn some scenarios it is required to redirect from \nwww.domain.com\n to \ndomain.com\n or vice versa.\nTo enable this feature use the annotation \nnginx.ingress.kubernetes.io/from-to-www-redirect: \"true\"\n.\n\n\n\n\nAttention\n\n\nIf at some point a new Ingress is created with a host equal to one of the options (like \ndomain.com\n) the annotation will be omitted.\n\n\n\n\nWhitelist source range\n\u00b6\n\n\nYou can specify allowed client IP source ranges through the \nnginx.ingress.kubernetes.io/whitelist-source-range\n annotation.\nThe value is a comma separated list of \nCIDRs\n, e.g. 
\n10.0.0.0/24,172.10.0.1\n.\n\n\nTo configure this setting globally for all Ingress rules, the \nwhitelist-source-range\n value may be set in the \nNGINX ConfigMap\n.\n\n\n\n\nNote\n\n\nAdding an annotation to an Ingress rule overrides any global restriction.\n\n\n\n\nCustom timeouts\n\u00b6\n\n\nUsing the configuration ConfigMap it is possible to set the default global timeout for connections to the upstream servers.\nIn some scenarios it is required to have different values. To allow this, we provide annotations that allow this customization:\n\n\n\n\nnginx.ingress.kubernetes.io/proxy-connect-timeout\n\n\nnginx.ingress.kubernetes.io/proxy-send-timeout\n\n\nnginx.ingress.kubernetes.io/proxy-read-timeout\n\n\nnginx.ingress.kubernetes.io/proxy-next-upstream\n\n\nnginx.ingress.kubernetes.io/proxy-next-upstream-tries\n\n\nnginx.ingress.kubernetes.io/proxy-request-buffering\n\n\n\n\nProxy redirect\n\u00b6\n\n\nWith the annotations \nnginx.ingress.kubernetes.io/proxy-redirect-from\n and \nnginx.ingress.kubernetes.io/proxy-redirect-to\n it is possible to\nset the text that should be changed in the \nLocation\n and \nRefresh\n header fields of a proxied server response (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_redirect)\n\n\nSetting \"off\" or \"default\" in the annotation \nnginx.ingress.kubernetes.io/proxy-redirect-from\n disables \nnginx.ingress.kubernetes.io/proxy-redirect-to\n,\notherwise, both annotations must be used in unison. Note that each annotation must be a string without spaces.\n\n\nBy default the value of each annotation is \"off\".\n\n\nCustom max body size\n\u00b6\n\n\nFor NGINX, a 413 error will be returned to the client when the size in a request exceeds the maximum allowed size of the client request body. 
This size can be configured by the parameter \nclient_max_body_size\n.\n\n\nTo configure this setting globally for all Ingress rules, the \nproxy-body-size\n value may be set in the \nNGINX ConfigMap\n.\nTo use custom values in an Ingress rule define this annotation:\n\n\nnginx.ingress.kubernetes.io/proxy-body-size\n:\n \n8m\n\n\n\n\n\nProxy cookie domain\n\u00b6\n\n\nSets a text that \nshould be changed in the domain attribute\n of the \"Set-Cookie\" header fields of a proxied server response.\n\n\nTo configure this setting globally for all Ingress rules, the \nproxy-cookie-domain\n value may be set in the \nNGINX ConfigMap\n.\n\n\nProxy buffering\n\u00b6\n\n\nEnable or disable proxy buffering \nproxy_buffering\n.\nBy default proxy buffering is disabled in the NGINX config.\n\n\nTo configure this setting globally for all Ingress rules, the \nproxy-buffering\n value may be set in the \nNGINX ConfigMap\n.\nTo use custom values in an Ingress rule define this annotation:\n\n\nnginx.ingress.kubernetes.io/proxy-buffering\n:\n \n\"on\"\n\n\n\n\n\nProxy buffer size\n\u00b6\n\n\nSets the size of the buffer \nproxy_buffer_size\n used for reading the first part of the response received from the proxied server.\nBy default the proxy buffer size is set to \"4k\".\n\n\nTo configure this setting globally, set \nproxy-buffer-size\n in \nNGINX ConfigMap\n. To use custom values in an Ingress rule, define this annotation:\n\nnginx.ingress.kubernetes.io/proxy-buffer-size\n:\n \n\"8k\"\n\n\n\n\nSSL ciphers\n\u00b6\n\n\nSpecifies the \nenabled ciphers\n.\n\n\nUsing this annotation will set the \nssl_ciphers\n directive at the server level. 
This configuration is active for all the paths in the host.\n\n\nnginx.ingress.kubernetes.io/ssl-ciphers\n:\n \n\"ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP\"\n\n\n\n\n\nConnection proxy header\n\u00b6\n\n\nUsing this annotation will override the default connection header set by NGINX.\nTo use custom values in an Ingress rule, define the annotation:\n\n\nnginx.ingress.kubernetes.io/connection-proxy-header\n:\n \n\"keep-alive\"\n\n\n\n\n\nEnable Access Log\n\u00b6\n\n\nAccess logs are enabled by default, but in some scenarios access logs might need to be disabled for a given\ningress. To do this, use the annotation:\n\n\nnginx.ingress.kubernetes.io/enable-access-log\n:\n \n\"false\"\n\n\n\n\n\nEnable Rewrite Log\n\u00b6\n\n\nRewrite logs are not enabled by default. In some scenarios it could be required to enable NGINX rewrite logs.\nNote that rewrite logs are sent to the error_log file at the notice level. To enable this feature use the annotation:\n\n\nnginx.ingress.kubernetes.io/enable-rewrite-log\n:\n \n\"true\"\n\n\n\n\n\nLua Resty WAF\n\u00b6\n\n\nUsing \nlua-resty-waf-*\n annotations we can enable and control the \nlua-resty-waf\n\nWeb Application Firewall per location.\n\n\nThe following configuration will enable the WAF for the paths defined in the corresponding ingress:\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf\n:\n \n\"active\"\n\n\n\n\n\nIn order to run it in debugging mode you can set \nnginx.ingress.kubernetes.io/lua-resty-waf-debug\n to \n\"true\"\n in addition to the above configuration.\nThe other possible values for \nnginx.ingress.kubernetes.io/lua-resty-waf\n are \ninactive\n and \nsimulate\n.\nIn \ninactive\n mode the WAF won't do anything, whereas in \nsimulate\n mode it will log a warning message if there's a matching WAF rule for a given request. 
This is useful to debug a rule and eliminate possible false positives before fully deploying it.\n\n\nlua-resty-waf\n comes with a predefined set of rules \nhttps://github.com/p0pr0ck5/lua-resty-waf/tree/84b4f40362500dd0cb98b9e71b5875cb1a40f1ad/rules\n that cover ModSecurity CRS.\nYou can use \nnginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets\n to ignore a subset of those rulesets. For example:\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-ignore-rulesets\n:\n \n\"41000_sqli,\n \n42000_xss\"\n\n\n\n\n\nwill ignore the two mentioned rulesets.\n\n\nIt is also possible to configure custom WAF rules per ingress using the \nnginx.ingress.kubernetes.io/lua-resty-waf-extra-rules\n annotation. For example, the following snippet will configure a WAF rule to deny requests with a query string value that contains the word \nfoo\n:\n\n\nnginx.ingress.kubernetes.io/lua-resty-waf-extra-rules\n:\n \n'[=[\n \n{\n \n\"access\":\n \n[\n \n{\n \n\"actions\":\n \n{\n \n\"disrupt\"\n \n:\n \n\"DENY\"\n \n},\n \n\"id\":\n \n10001,\n \n\"msg\":\n \n\"my\n \ncustom\n \nrule\",\n \n\"operator\":\n \n\"STR_CONTAINS\",\n \n\"pattern\":\n \n\"foo\",\n \n\"vars\":\n \n[\n \n{\n \n\"parse\":\n \n[\n \n\"values\",\n \n1\n \n],\n \n\"type\":\n \n\"REQUEST_ARGS\"\n \n}\n \n]\n \n}\n \n],\n \n\"body_filter\":\n \n[],\n \n\"header_filter\":[]\n \n}\n \n]=]'\n\n\n\n\n\nFor details on how to write WAF rules, please refer to \nhttps://github.com/p0pr0ck5/lua-resty-waf\n.\n\n\ngRPC backend DEPRECATED (since 0.18.0)\n\u00b6\n\n\nPlease use \nnginx.ingress.kubernetes.io/backend-protocol: \"GRPC\"\n or \nnginx.ingress.kubernetes.io/backend-protocol: \"GRPCS\"\n\n\nSince NGINX 1.13.10 it is possible to expose \ngRPC services natively\n\n\nYou only need to add the annotation \nnginx.ingress.kubernetes.io/grpc-backend: \"true\"\n to enable this feature.\nAdditionally, if the gRPC service requires TLS, add \nnginx.ingress.kubernetes.io/secure-backends: \"true\"\n.\n\n\n\n\nAttention\n\n\nThis feature 
requires HTTP2 to work, which means we need to expose this service using HTTPS.\nExposing a gRPC service using HTTP is not supported.\n\n\n\n\nInfluxDB\n\u00b6\n\n\nUsing \ninfluxdb-*\n annotations we can monitor requests passing through a Location by sending them to an InfluxDB backend exposing the UDP socket\nusing the \nnginx-influxdb-module\n.\n\n\nnginx.ingress.kubernetes.io/enable-influxdb\n:\n \n\"true\"\n\n\nnginx.ingress.kubernetes.io/influxdb-measurement\n:\n \n\"nginx-reqs\"\n\n\nnginx.ingress.kubernetes.io/influxdb-port\n:\n \n\"8089\"\n\n\nnginx.ingress.kubernetes.io/influxdb-host\n:\n \n\"127.0.0.1\"\n\n\nnginx.ingress.kubernetes.io/influxdb-server-name\n:\n \n\"nginx-ingress\"\n\n\n\n\n\nFor the \ninfluxdb-host\n parameter you have two options:\n\n\n\n\nUse an InfluxDB server configured with the \nUDP protocol\n enabled. \n\n\nDeploy Telegraf as a sidecar proxy to the Ingress controller configured to listen UDP with the \nsocket listener input\n and to write using\nany one of the \noutput plugins\n like InfluxDB, Apache Kafka,\nPrometheus, etc. (recommended)\n\n\n\n\nIt's important to remember that there's no DNS resolver at this stage so you will have to configure\nan IP address to \nnginx.ingress.kubernetes.io/influxdb-host\n. If you deploy Influx or Telegraf as a sidecar (another container in the same pod) this becomes straightforward since you can directly use \n127.0.0.1\n.\n\n\nBackend Protocol\n\u00b6\n\n\nUsing \nbackend-protocol\n annotations it is possible to indicate how NGINX should communicate with the backend service.\nValid Values: HTTP, HTTPS, GRPC, GRPCS and AJP\n\n\nBy default NGINX uses \nHTTP\n.\n\n\nExample:\n\n\nnginx.ingress.kubernetes.io/backend-protocol\n:\n \n\"HTTPS\"", "title": "Annotations" }, { @@ -307,7 +317,7 @@ }, { "location": "/user-guide/nginx-configuration/annotations/#ssl-passthrough", - "text": "The annotation nginx.ingress.kubernetes.io/ssl-passthrough allows to configure TLS termination in the pod and not in NGINX. 
Attention Using the annotation nginx.ingress.kubernetes.io/ssl-passthrough invalidates all the other available annotations.\nThis is because SSL Passthrough works on level 4 of the OSI stack (TCP), not on the HTTP/HTTPS level. Attention The use of this annotation requires the flag --enable-ssl-passthrough (By default it is disabled).", + "text": "The annotation nginx.ingress.kubernetes.io/ssl-passthrough instructs the controller to send TLS connections directly\nto the backend instead of letting NGINX decrypt the communication. See also TLS/HTTPS in\nthe User guide. Note SSL Passthrough is disabled by default and requires starting the controller with the --enable-ssl-passthrough flag. Attention Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on the layer 7 (HTTP), using SSL Passthrough\ninvalidates all the other annotations set on an Ingress object.", "title": "SSL Passthrough" }, { @@ -1162,7 +1172,7 @@ }, { "location": "/user-guide/tls/", - "text": "TLS/HTTPS\n\u00b6\n\n\nTLS Secrets\n\u00b6\n\n\nAnytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret.\n\n\nYou can generate a self-signed certificate and private key with with:\n\n\n$ openssl req -x509 -nodes -days \n365\n -newkey rsa:2048 -keyout \n${\nKEY_FILE\n}\n -out \n${\nCERT_FILE\n}\n -subj \n\"/CN=\n${\nHOST\n}\n/O=\n${\nHOST\n}\n\"\n`\n\n\n\n\n\nThen create the secret in the cluster via:\n\n\nkubectl create secret tls \n${\nCERT_NAME\n}\n --key \n${\nKEY_FILE\n}\n --cert \n${\nCERT_FILE\n}\n\n\n\n\n\nThe resulting secret will be of type \nkubernetes.io/tls\n.\n\n\nDefault SSL Certificate\n\u00b6\n\n\nNGINX provides the option to configure a server as a catch-all with\n\nserver_name\n\nfor requests that do not match any of the configured server names.\nThis configuration works without out-of-the-box for HTTP traffic.\nFor HTTPS, a certificate is naturally required.\n\n\nFor this reason the Ingress controller provides the flag 
\n--default-ssl-certificate\n.\nThe secret referred to by this flag contains the default certificate to be used when\naccessing the catch-all server.\nIf this flag is not provided NGINX will use a self-signed certificate.\n\n\nFor instance, if you have a TLS secret \nfoo-tls\n in the \ndefault\n namespace,\nadd \n--default-ssl-certificate=default/foo-tls\n in the \nnginx-controller\n deployment.\n\n\nSSL Passthrough\n\u00b6\n\n\nThe flag \n--enable-ssl-passthrough\n enables the SSL passthrough feature.\nBy default this feature is disabled.\n\n\nThis is required to enable passthrough backends in Ingress configurations.\n\n\nTODO: Improve this documentation.\n\n\nHTTP Strict Transport Security\n\u00b6\n\n\nHTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified\nthrough the use of a special response header. Once a supported browser receives\nthis header that browser will prevent any communications from being sent over\nHTTP to the specified domain and will instead send all communications over HTTPS.\n\n\nHSTS is enabled by default.\n\n\nTo disable this behavior use \nhsts: \"false\"\n in the configuration \nConfigMap\n.\n\n\nServer-side HTTPS enforcement through redirect\n\u00b6\n\n\nBy default the controller redirects HTTP clients to the HTTPS port\n443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.\n\n\nThis can be disabled globally using \nssl-redirect: \"false\"\n in the NGINX \nconfig map\n,\nor per-Ingress with the \nnginx.ingress.kubernetes.io/ssl-redirect: \"false\"\n\nannotation in the particular resource.\n\n\n\n\nTip\n\n\nWhen using SSL offloading outside of cluster (e.g. 
AWS ELB) it may be useful to enforce a\nredirect to HTTPS even when there is no TLS certificate available.\nThis can be achieved by using the \nnginx.ingress.kubernetes.io/force-ssl-redirect: \"true\"\n\nannotation in the particular resource.\n\n\n\n\nAutomated Certificate Management with Kube-Lego\n\u00b6\n\n\n\n\nTip\n\n\nKube-Lego has reached end-of-life and is being\nreplaced by \ncert-manager\n.\n\n\n\n\nKube-Lego\n automatically requests missing or expired certificates from \nLet's Encrypt\n\nby monitoring ingress resources and their referenced secrets.\n\n\nTo enable this for an ingress resource you have to add an annotation:\n\n\nkubectl annotate ing ingress-demo kubernetes.io/tls-acme=\"true\"\n\n\n\n\n\nTo setup Kube-Lego you can take a look at this \nfull example\n.\nThe first version to fully support Kube-Lego is Nginx Ingress controller 0.8.\n\n\nDefault TLS Version and Ciphers\n\u00b6\n\n\nTo provide the most secure baseline configuration possible,\n\n\nnginx-ingress defaults to using TLS 1.2 only and a \nsecure set of TLS ciphers\n.\n\n\nLegacy TLS\n\u00b6\n\n\nThe default configuration, though secure, does not support some older browsers and operating systems.\n\n\nFor instance, TLS 1.1+ is only enabled by default from Android 5.0 on. 
At the time of writing,\nMay 2018, \napproximately 15% of Android devices\n\nare not compatible with nginx-ingress's default configuration.\n\n\nTo change this default behavior, use a \nConfigMap\n.\n\n\nA sample ConfigMap fragment to allow these older clients to connect could look something like the following:\n\n\nkind\n:\n \nConfigMap\n\n\napiVersion\n:\n \nv1\n\n\nmetadata\n:\n\n \nname\n:\n \nnginx\n-\nconfig\n\n\ndata\n:\n\n \nssl\n-\nciphers\n:\n \n\"ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA\"\n\n \nssl\n-\nprotocols\n:\n \n\"TLSv1 TLSv1.1 TLSv1.2\"",
+ "text": "TLS/HTTPS\n\u00b6\n\n\nTLS Secrets\n\u00b6\n\n\nAnytime we reference a TLS secret, we mean a PEM-encoded X.509, RSA (2048) secret.\n\n\nYou can generate a self-signed certificate and private key with:\n\n\n$ openssl req -x509 -nodes -days \n365\n -newkey rsa:2048 -keyout \n${\nKEY_FILE\n}\n -out \n${\nCERT_FILE\n}\n -subj \n\"/CN=\n${\nHOST\n}\n/O=\n${\nHOST\n}\n\"\n\n\n\n\n\nThen create the secret in the cluster via:\n\n\nkubectl create secret tls \n${\nCERT_NAME\n}\n --key \n${\nKEY_FILE\n}\n --cert \n${\nCERT_FILE\n}\n\n\n\n\n\nThe resulting secret will be of type \nkubernetes.io/tls\n.\n\n\nDefault SSL Certificate\n\u00b6\n\n\nNGINX provides the option to configure a server as a catch-all with\n\nserver_name\n\nfor requests that 
do not match any of the configured server names.\nThis configuration works out-of-the-box for HTTP traffic.\nFor HTTPS, a certificate is naturally required.\n\n\nFor this reason the Ingress controller provides the flag \n--default-ssl-certificate\n.\nThe secret referred to by this flag contains the default certificate to be used when\naccessing the catch-all server.\nIf this flag is not provided NGINX will use a self-signed certificate.\n\n\nFor instance, if you have a TLS secret \nfoo-tls\n in the \ndefault\n namespace,\nadd \n--default-ssl-certificate=default/foo-tls\n in the \nnginx-controller\n deployment.\n\n\nSSL Passthrough\n\u00b6\n\n\nThe \n--enable-ssl-passthrough\n flag enables the SSL Passthrough feature, which is disabled by\ndefault. This is required to enable passthrough backends in Ingress objects.\n\n\n\n\nWarning\n\n\nThis feature is implemented by intercepting \nall traffic\n on the configured HTTPS port (default: 443) and handing\nit over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.\n\n\n\n\nSSL Passthrough leverages \nSNI\n and reads the virtual domain from the TLS negotiation, which requires compatible\nclients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back\nand forth between the backend and the client.\n\n\nIf there is no hostname matching the requested host name, the request is handed over to NGINX on the configured\npassthrough proxy port (default: 442), which proxies the request to the default backend.\n\n\n\n\nNote\n\n\nUnlike HTTP backends, traffic to Passthrough backends is sent to the \nclusterIP\n of the backing Service instead of\nindividual Endpoints.\n\n\n\n\nHTTP Strict Transport Security\n\u00b6\n\n\nHTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified\nthrough the use of a special response header. 
Once a supported browser receives\nthis header that browser will prevent any communications from being sent over\nHTTP to the specified domain and will instead send all communications over HTTPS.\n\n\nHSTS is enabled by default.\n\n\nTo disable this behavior use \nhsts: \"false\"\n in the configuration \nConfigMap\n.\n\n\nServer-side HTTPS enforcement through redirect\n\u00b6\n\n\nBy default the controller redirects HTTP clients to the HTTPS port\n443 using a 308 Permanent Redirect response if TLS is enabled for that Ingress.\n\n\nThis can be disabled globally using \nssl-redirect: \"false\"\n in the NGINX \nconfig map\n,\nor per-Ingress with the \nnginx.ingress.kubernetes.io/ssl-redirect: \"false\"\n\nannotation in the particular resource.\n\n\n\n\nTip\n\n\nWhen using SSL offloading outside of cluster (e.g. AWS ELB) it may be useful to enforce a\nredirect to HTTPS even when there is no TLS certificate available.\nThis can be achieved by using the \nnginx.ingress.kubernetes.io/force-ssl-redirect: \"true\"\n\nannotation in the particular resource.\n\n\n\n\nAutomated Certificate Management with Kube-Lego\n\u00b6\n\n\n\n\nTip\n\n\nKube-Lego has reached end-of-life and is being\nreplaced by \ncert-manager\n.\n\n\n\n\nKube-Lego\n automatically requests missing or expired certificates from \nLet's Encrypt\n\nby monitoring ingress resources and their referenced secrets.\n\n\nTo enable this for an ingress resource you have to add an annotation:\n\n\nkubectl annotate ing ingress-demo kubernetes.io/tls-acme=\"true\"\n\n\n\n\n\nTo setup Kube-Lego you can take a look at this \nfull example\n.\nThe first version to fully support Kube-Lego is Nginx Ingress controller 0.8.\n\n\nDefault TLS Version and Ciphers\n\u00b6\n\n\nTo provide the most secure baseline configuration possible,\n\n\nnginx-ingress defaults to using TLS 1.2 only and a \nsecure set of TLS ciphers\n.\n\n\nLegacy TLS\n\u00b6\n\n\nThe default configuration, though secure, does not support some older browsers and 
operating systems.\n\n\nFor instance, TLS 1.1+ is only enabled by default from Android 5.0 on. At the time of writing,\nMay 2018, \napproximately 15% of Android devices\n\nare not compatible with nginx-ingress's default configuration.\n\n\nTo change this default behavior, use a \nConfigMap\n.\n\n\nA sample ConfigMap fragment to allow these older clients to connect could look something like the following:\n\n\nkind\n:\n \nConfigMap\n\n\napiVersion\n:\n \nv1\n\n\nmetadata\n:\n\n \nname\n:\n \nnginx\n-\nconfig\n\n\ndata\n:\n\n \nssl\n-\nciphers\n:\n \n\"ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA\"\n\n \nssl\n-\nprotocols\n:\n \n\"TLSv1 TLSv1.1 TLSv1.2\"", "title": "TLS/HTTPS" }, { @@ -1182,7 +1192,7 @@ }, { "location": "/user-guide/tls/#ssl-passthrough", - "text": "The flag --enable-ssl-passthrough enables the SSL passthrough feature.\nBy default this feature is disabled. This is required to enable passthrough backends in Ingress configurations. TODO: Improve this documentation.", + "text": "The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by\ndefault. This is required to enable passthrough backends in Ingress objects. Warning This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing\nit over to a local TCP proxy. 
This bypasses NGINX completely and introduces a non-negligible performance penalty. SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation, which requires compatible\nclients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back\nand forth between the backend and the client. If there is no hostname matching the requested host name, the request is handed over to NGINX on the configured\npassthrough proxy port (default: 442), which proxies the request to the default backend. Note Unlike HTTP backends, traffic to Passthrough backends is sent to the clusterIP of the backing Service instead of\nindividual Endpoints.", "title": "SSL Passthrough" }, { @@ -1262,7 +1272,7 @@ }, { "location": "/examples/PREREQUISITES/", - "text": "Prerequisites\n\u00b6\n\n\nMany of the examples in this directory have common prerequisites.\n\n\nTLS certificates\n\u00b6\n\n\nUnless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA\nkey/cert pair with an arbitrarily chosen hostname, created as follows\n\n\n$\n openssl req -x509 -nodes -days \n365\n -newkey rsa:2048 -keyout tls.key -out tls.crt -subj \n\"/CN=nginxsvc/O=nginxsvc\"\n\n\nGenerating a 2048 bit RSA private key\n\n\n................+++\n\n\n................+++\n\n\nwriting new private key to 'tls.key'\n\n\n-----\n\n\n\n$\n kubectl create secret tls tls-secret --key tls.key --cert tls.crt\n\nsecret \"tls-secret\" created\n\n\n\n\n\nCA Authentication\n\u00b6\n\n\nYou can act as your very own CA, or use an existing one. As an exercise / learning, we're going to generate our\nown CA, and also generate a client certificate.\n\n\nThese instructions are based on CoreOS OpenSSL. \nSee live doc.\n\n\nGenerating a CA\n\u00b6\n\n\nFirst of all, you've to generate a CA. 
This is going to be the one who will sign your client certificates.\nIn real production world, you may face CAs with intermediate certificates, as the following:\n\n\n$\n openssl s_client -connect www.google.com:443\n\n[...]\n\n\n---\n\n\nCertificate chain\n\n\n 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com\n\n\n i:/C=US/O=Google Inc/CN=Google Internet Authority G2\n\n\n 1 s:/C=US/O=Google Inc/CN=Google Internet Authority G2\n\n\n i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA\n\n\n 2 s:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA\n\n\n i:/C=US/O=Equifax/OU=Equifax Secure Certificate Authority\n\n\n\n\n\nTo generate our CA Certificate, we've to run the following commands:\n\n\n$\n openssl genrsa -out ca.key \n2048\n\n\n$\n openssl req -x509 -new -nodes -key ca.key -days \n10000\n -out ca.crt -subj \n\"/CN=example-ca\"\n\n\n\n\n\nThis will generate two files: A private key (ca.key) and a public key (ca.crt). This CA is valid for 10000 days.\nThe ca.crt can be used later in the step of creation of CA authentication secret.\n\n\nGenerating the client certificate\n\u00b6\n\n\nThe following steps generate a client certificate signed by the CA generated above. 
This client can be\nused to authenticate in a tls-auth configured ingress.\n\n\nFirst, we need to generate an 'openssl.cnf' file that will be used while signing the keys:\n\n\n[req]\n\n\nreq_extensions = v3_req\n\n\ndistinguished_name = req_distinguished_name\n\n\n[req_distinguished_name]\n\n\n[ v3_req ]\n\n\nbasicConstraints = CA:FALSE\n\n\nkeyUsage = nonRepudiation, digitalSignature, keyEncipherment\n\n\n\n\n\nThen, a user generates his very own private key (that he needs to keep secret)\nand a CSR (Certificate Signing Request) that will be sent to the CA to sign and generate a certificate.\n\n\n$\n openssl genrsa -out client1.key \n2048\n\n\n$\n openssl req -new -key client1.key -out client1.csr -subj \n\"/CN=client1\"\n -config openssl.cnf\n\n\n\n\nAs the CA receives the generated 'client1.csr' file, it signs it and generates a client.crt certificate:\n\n\n$\n openssl x509 -req -in client1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client1.crt -days \n365\n -extensions v3_req -extfile openssl.cnf\n\n\n\n\nThen, you'll have 3 files: the client.key (user's private key), client.crt (user's public key) and client.csr (disposable CSR).\n\n\nCreating the CA Authentication secret\n\u00b6\n\n\nIf you're using the CA Authentication feature, you need to generate a secret containing \nall the authorized CAs. You must download them from your CA site in PEM format (like the following):\n\n\n-----BEGIN CERTIFICATE-----\n[....]\n-----END CERTIFICATE-----\n\n\n\n\nYou can have as many certificates as you want. If they're in the binary DER format, \nyou can convert them as the following:\n\n\n$\n openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem\n\n\n\n\nThen, you've to concatenate them all in only one file, named 'ca.crt' as the following:\n\n\n$\n cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt\n\n\n\n\nThe final step is to create a secret with the content of this file. 
This secret is going to be used in \nthe TLS Auth directive:\n\n\n$\n kubectl create secret generic caingress --namespace\n=\ndefault --from-file\n=\nca.crt\n=\n \n\n\n\n\nNote:\n You can also generate the CA Authentication Secret along with the TLS Secret by using:\n\n$\n kubectl create secret generic caingress --namespace\n=\ndefault --from-file\n=\nca.crt\n=\n --from-file\n=\ntls.crt\n=\n --from-file\n=\ntls.key\n=\n \n\n\n\nTest HTTP Service\n\u00b6\n\n\nAll examples that require a test HTTP Service use the standard http-svc pod,\nwhich you can deploy as follows\n\n\n$\n kubectl create -f http-svc.yaml\n\nservice \"http-svc\" created\n\n\nreplicationcontroller \"http-svc\" created\n\n\n\n$\n kubectl get po\n\nNAME READY STATUS RESTARTS AGE\n\n\nhttp-svc-p1t3t 1/1 Running 0 1d\n\n\n\n$\n kubectl get svc\n\nNAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE\n\n\nhttp-svc 10.0.122.116 80:30301/TCP 1d\n\n\n\n\n\nYou can test that the HTTP Service works by exposing it temporarily\n\n\n$\n kubectl patch svc http-svc -p \n'{\"spec\":{\"type\": \"LoadBalancer\"}}'\n\n\n\"http-svc\" patched\n\n\n\n$\n kubectl get svc http-svc\n\nNAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE\n\n\nhttp-svc 10.0.122.116 80:30301/TCP 1d\n\n\n\n$\n kubectl describe svc http-svc\n\nName: http-svc\n\n\nNamespace: default\n\n\nLabels: app=http-svc\n\n\nSelector: app=http-svc\n\n\nType: LoadBalancer\n\n\nIP: 10.0.122.116\n\n\nLoadBalancer Ingress: 108.59.87.136\n\n\nPort: http 80/TCP\n\n\nNodePort: http 30301/TCP\n\n\nEndpoints: 10.180.1.6:8080\n\n\nSession Affinity: None\n\n\nEvents:\n\n\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n\n\n --------- -------- ----- ---- ------------- -------- ------ -------\n\n\n 1m 1m 1 {service-controller } Normal Type ClusterIP -> LoadBalancer\n\n\n 1m 1m 1 {service-controller } Normal CreatingLoadBalancer Creating load balancer\n\n\n 16s 16s 1 {service-controller } Normal CreatedLoadBalancer Created load balancer\n\n\n\n$\n curl \n108\n.59.87.126\n\nCLIENT 
VALUES:\n\n\nclient_address=10.240.0.3\n\n\ncommand=GET\n\n\nreal path=/\n\n\nquery=nil\n\n\nrequest_version=1.1\n\n\nrequest_uri=http://108.59.87.136:8080/\n\n\n\nSERVER VALUES:\n\n\nserver_version=nginx: 1.9.11 - lua: 10001\n\n\n\nHEADERS RECEIVED:\n\n\naccept=*/*\n\n\nhost=108.59.87.136\n\n\nuser-agent=curl/7.46.0\n\n\nBODY:\n\n\n-no body in request-\n\n\n\n$\n kubectl patch svc http-svc -p \n'{\"spec\":{\"type\": \"NodePort\"}}'\n\n\n\"http-svc\" patched", + "text": "Prerequisites\n\u00b6\n\n\nMany of the examples in this directory have common prerequisites.\n\n\nTLS certificates\n\u00b6\n\n\nUnless otherwise mentioned, the TLS secret used in examples is a 2048 bit RSA\nkey/cert pair with an arbitrarily chosen hostname, created as follows\n\n\n$\n openssl req -x509 -nodes -days \n365\n -newkey rsa:2048 -keyout tls.key -out tls.crt -subj \n\"/CN=nginxsvc/O=nginxsvc\"\n\n\nGenerating a 2048 bit RSA private key\n\n\n................+++\n\n\n................+++\n\n\nwriting new private key to 'tls.key'\n\n\n-----\n\n\n\n$\n kubectl create secret tls tls-secret --key tls.key --cert tls.crt\n\nsecret \"tls-secret\" created\n\n\n\n\n\nCA Authentication\n\u00b6\n\n\nYou can act as your very own CA, or use an existing one. As an exercise / learning, we're going to generate our\nown CA, and also generate a client certificate.\n\n\nThese instructions are based on CoreOS OpenSSL. \nSee live doc.\n\n\nGenerating a CA\n\u00b6\n\n\nFirst of all, you've to generate a CA. 
This is going to be the one who will sign your client certificates.\nIn the real production world, you may face CAs with intermediate certificates, as in the following:\n\n\n$\n openssl s_client -connect www.google.com:443\n\n[...]\n\n\n---\n\n\nCertificate chain\n\n\n 0 s:/C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com\n\n\n i:/C=US/O=Google Inc/CN=Google Internet Authority G2\n\n\n 1 s:/C=US/O=Google Inc/CN=Google Internet Authority G2\n\n\n i:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA\n\n\n 2 s:/C=US/O=GeoTrust Inc./CN=GeoTrust Global CA\n\n\n i:/C=US/O=Equifax/OU=Equifax Secure Certificate Authority\n\n\n\n\n\nTo generate our CA Certificate, we have to run the following commands:\n\n\n$\n openssl genrsa -out ca.key \n2048\n\n\n$\n openssl req -x509 -new -nodes -key ca.key -days \n10000\n -out ca.crt -subj \n\"/CN=example-ca\"\n\n\n\n\n\nThis will generate two files: a private key (ca.key) and a public key (ca.crt). This CA is valid for 10000 days.\nThe ca.crt can be used later in the step of creating the CA authentication secret.\n\n\nGenerating the client certificate\n\u00b6\n\n\nThe following steps generate a client certificate signed by the CA generated above. 
This client can be\nused to authenticate in a tls-auth configured ingress.\n\n\nFirst, we need to generate an 'openssl.cnf' file that will be used while signing the keys:\n\n\n[req]\n\n\nreq_extensions = v3_req\n\n\ndistinguished_name = req_distinguished_name\n\n\n[req_distinguished_name]\n\n\n[ v3_req ]\n\n\nbasicConstraints = CA:FALSE\n\n\nkeyUsage = nonRepudiation, digitalSignature, keyEncipherment\n\n\n\n\n\nThen, a user generates his very own private key (that he needs to keep secret)\nand a CSR (Certificate Signing Request) that will be sent to the CA to sign and generate a certificate.\n\n\n$\n openssl genrsa -out client1.key \n2048\n\n\n$\n openssl req -new -key client1.key -out client1.csr -subj \n\"/CN=client1\"\n -config openssl.cnf\n\n\n\n\nAs the CA receives the generated 'client1.csr' file, it signs it and generates a client.crt certificate:\n\n\n$\n openssl x509 -req -in client1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client1.crt -days \n365\n -extensions v3_req -extfile openssl.cnf\n\n\n\n\nThen, you'll have 3 files: the client.key (user's private key), client.crt (user's public key) and client.csr (disposable CSR).\n\n\nCreating the CA Authentication secret\n\u00b6\n\n\nIf you're using the CA Authentication feature, you need to generate a secret containing \nall the authorized CAs. You must download them from your CA site in PEM format (like the following):\n\n\n-----BEGIN CERTIFICATE-----\n[....]\n-----END CERTIFICATE-----\n\n\n\n\nYou can have as many certificates as you want. If they're in the binary DER format, \nyou can convert them as the following:\n\n\n$\n openssl x509 -in certificate.der -inform der -out certificate.crt -outform pem\n\n\n\n\nThen, you've to concatenate them all in only one file, named 'ca.crt' as the following:\n\n\n$\n cat certificate1.crt certificate2.crt certificate3.crt >> ca.crt\n\n\n\n\nThe final step is to create a secret with the content of this file. 
This secret is going to be used in \nthe TLS Auth directive:\n\n\n$\n kubectl create secret generic caingress --namespace\n=\ndefault --from-file\n=\nca.crt\n=\n \n\n\n\n\nNote:\n You can also generate the CA Authentication Secret along with the TLS Secret by using:\n\n$\n kubectl create secret generic caingress --namespace\n=\ndefault --from-file\n=\nca.crt\n=\n --from-file\n=\ntls.crt\n=\n --from-file\n=\ntls.key\n=\n \n\n\n\nTest HTTP Service\n\u00b6\n\n\nAll examples that require a test HTTP Service use the standard http-svc pod,\nwhich you can deploy as follows\n\n\n$\n kubectl create -f http-svc.yaml\n\nservice \"http-svc\" created\n\n\nreplicationcontroller \"http-svc\" created\n\n\n\n$\n kubectl get po\n\nNAME READY STATUS RESTARTS AGE\n\n\nhttp-svc-p1t3t 1/1 Running 0 1d\n\n\n\n$\n kubectl get svc\n\nNAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE\n\n\nhttp-svc 10.0.122.116 80:30301/TCP 1d\n\n\n\n\n\nYou can test that the HTTP Service works by exposing it temporarily\n\n\n$\n kubectl patch svc http-svc -p \n'{\"spec\":{\"type\": \"LoadBalancer\"}}'\n\n\n\"http-svc\" patched\n\n\n\n$\n kubectl get svc http-svc\n\nNAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE\n\n\nhttp-svc 10.0.122.116 80:30301/TCP 1d\n\n\n\n$\n kubectl describe svc http-svc\n\nName: http-svc\n\n\nNamespace: default\n\n\nLabels: app=http-svc\n\n\nSelector: app=http-svc\n\n\nType: LoadBalancer\n\n\nIP: 10.0.122.116\n\n\nLoadBalancer Ingress: 108.59.87.136\n\n\nPort: http 80/TCP\n\n\nNodePort: http 30301/TCP\n\n\nEndpoints: 10.180.1.6:8080\n\n\nSession Affinity: None\n\n\nEvents:\n\n\n FirstSeen LastSeen Count From SubObjectPath Type Reason Message\n\n\n --------- -------- ----- ---- ------------- -------- ------ -------\n\n\n 1m 1m 1 {service-controller } Normal Type ClusterIP -> LoadBalancer\n\n\n 1m 1m 1 {service-controller } Normal CreatingLoadBalancer Creating load balancer\n\n\n 16s 16s 1 {service-controller } Normal CreatedLoadBalancer Created load balancer\n\n\n\n$\n curl \n108\n.59.87.136\n\nCLIENT 
VALUES:\n\n\nclient_address=10.240.0.3\n\n\ncommand=GET\n\n\nreal path=/\n\n\nquery=nil\n\n\nrequest_version=1.1\n\n\nrequest_uri=http://108.59.87.136:8080/\n\n\n\nSERVER VALUES:\n\n\nserver_version=nginx: 1.9.11 - lua: 10001\n\n\n\nHEADERS RECEIVED:\n\n\naccept=*/*\n\n\nhost=108.59.87.136\n\n\nuser-agent=curl/7.46.0\n\n\nBODY:\n\n\n-no body in request-\n\n\n\n$\n kubectl patch svc http-svc -p \n'{\"spec\":{\"type\": \"NodePort\"}}'\n\n\n\"http-svc\" patched", "title": "Prerequisites" }, { @@ -1297,7 +1307,7 @@ }, { "location": "/examples/PREREQUISITES/#test-http-service", - "text": "All examples that require a test HTTP Service use the standard http-svc pod,\nwhich you can deploy as follows $ kubectl create -f http-svc.yaml service \"http-svc\" created replicationcontroller \"http-svc\" created $ kubectl get po NAME READY STATUS RESTARTS AGE http-svc-p1t3t 1/1 Running 0 1d $ kubectl get svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-svc 10.0.122.116 80:30301/TCP 1d You can test that the HTTP Service works by exposing it temporarily $ kubectl patch svc http-svc -p '{\"spec\":{\"type\": \"LoadBalancer\"}}' \"http-svc\" patched $ kubectl get svc http-svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-svc 10.0.122.116 80:30301/TCP 1d $ kubectl describe svc http-svc Name: http-svc Namespace: default Labels: app=http-svc Selector: app=http-svc Type: LoadBalancer IP: 10.0.122.116 LoadBalancer Ingress: 108.59.87.136 Port: http 80/TCP NodePort: http 30301/TCP Endpoints: 10.180.1.6:8080 Session Affinity: None Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1m 1m 1 {service-controller } Normal Type ClusterIP -> LoadBalancer 1m 1m 1 {service-controller } Normal CreatingLoadBalancer Creating load balancer 16s 16s 1 {service-controller } Normal CreatedLoadBalancer Created load balancer $ curl 108 .59.87.126 CLIENT VALUES: client_address=10.240.0.3 command=GET real path=/ query=nil 
request_version=1.1 request_uri=http://108.59.87.136:8080/ SERVER VALUES: server_version=nginx: 1.9.11 - lua: 10001 HEADERS RECEIVED: accept=*/* host=108.59.87.136 user-agent=curl/7.46.0 BODY: -no body in request- $ kubectl patch svc http-svc -p '{\"spec\":{\"type\": \"NodePort\"}}' \"http-svc\" patched", + "text": "All examples that require a test HTTP Service use the standard http-svc pod,\nwhich you can deploy as follows $ kubectl create -f http-svc.yaml service \"http-svc\" created replicationcontroller \"http-svc\" created $ kubectl get po NAME READY STATUS RESTARTS AGE http-svc-p1t3t 1/1 Running 0 1d $ kubectl get svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-svc 10.0.122.116 80:30301/TCP 1d You can test that the HTTP Service works by exposing it temporarily $ kubectl patch svc http-svc -p '{\"spec\":{\"type\": \"LoadBalancer\"}}' \"http-svc\" patched $ kubectl get svc http-svc NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-svc 10.0.122.116 80:30301/TCP 1d $ kubectl describe svc http-svc Name: http-svc Namespace: default Labels: app=http-svc Selector: app=http-svc Type: LoadBalancer IP: 10.0.122.116 LoadBalancer Ingress: 108.59.87.136 Port: http 80/TCP NodePort: http 30301/TCP Endpoints: 10.180.1.6:8080 Session Affinity: None Events: FirstSeen LastSeen Count From SubObjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 1m 1m 1 {service-controller } Normal Type ClusterIP -> LoadBalancer 1m 1m 1 {service-controller } Normal CreatingLoadBalancer Creating load balancer 16s 16s 1 {service-controller } Normal CreatedLoadBalancer Created load balancer $ curl 108 .59.87.136 CLIENT VALUES: client_address=10.240.0.3 command=GET real path=/ query=nil request_version=1.1 request_uri=http://108.59.87.136:8080/ SERVER VALUES: server_version=nginx: 1.9.11 - lua: 10001 HEADERS RECEIVED: accept=*/* host=108.59.87.136 user-agent=curl/7.46.0 BODY: -no body in request- $ kubectl patch svc http-svc -p '{\"spec\":{\"type\": 
\"NodePort\"}}' \"http-svc\" patched", "title": "Test HTTP Service" }, { diff --git a/sitemap.xml b/sitemap.xml index 5af13e186..008dd2059 100644 --- a/sitemap.xml +++ b/sitemap.xml @@ -2,227 +2,227 @@ \ No newline at end of file diff --git a/user-guide/nginx-configuration/annotations/index.html b/user-guide/nginx-configuration/annotations/index.html index b124484e8..b623def6d 100644 --- a/user-guide/nginx-configuration/annotations/index.html +++ b/user-guide/nginx-configuration/annotations/index.html @@ -2238,15 +2238,18 @@ This can be used to mitigate Permanent Redirect Code¶ / -2018-09-06 +2018-09-12 daily /deploy/ -2018-09-06 +2018-09-12 daily /deploy/baremetal/ -2018-09-06 +2018-09-12 daily /deploy/rbac/ -2018-09-06 +2018-09-12 daily /deploy/upgrade/ -2018-09-06 +2018-09-12 daily /user-guide/nginx-configuration/ -2018-09-06 +2018-09-12 daily /user-guide/nginx-configuration/annotations/ -2018-09-06 +2018-09-12 daily /user-guide/nginx-configuration/configmap/ -2018-09-06 +2018-09-12 daily /user-guide/nginx-configuration/custom-template/ -2018-09-06 +2018-09-12 daily /user-guide/nginx-configuration/log-format/ -2018-09-06 +2018-09-12 daily /user-guide/cli-arguments/ -2018-09-06 +2018-09-12 daily /user-guide/custom-errors/ -2018-09-06 +2018-09-12 daily /user-guide/default-backend/ -2018-09-06 +2018-09-12 daily /user-guide/exposing-tcp-udp-services/ -2018-09-06 +2018-09-12 daily /user-guide/external-articles/ -2018-09-06 +2018-09-12 daily /user-guide/miscellaneous/ -2018-09-06 +2018-09-12 daily /user-guide/monitoring/ -2018-09-06 +2018-09-12 daily /user-guide/multiple-ingress/ -2018-09-06 +2018-09-12 daily /user-guide/tls/ -2018-09-06 +2018-09-12 daily /user-guide/third-party-addons/modsecurity/ -2018-09-06 +2018-09-12 daily /user-guide/third-party-addons/opentracing/ -2018-09-06 +2018-09-12 daily /examples/ -2018-09-06 +2018-09-12 daily /examples/PREREQUISITES/ -2018-09-06 +2018-09-12 daily /examples/affinity/cookie/README/ -2018-09-06 +2018-09-12 daily 
This annotation allows you to modify the status code used for permanent redirects. For example
nginx.ingress.kubernetes.io/permanent-redirect-code: '308'
would return your permanent redirect with a 308 status code.
SSL Passthrough¶
-The annotation
-nginx.ingress.kubernetes.io/ssl-passthrough
-allows to configure TLS termination in the pod and not in NGINX.
-Attention
-Using the annotation
-nginx.ingress.kubernetes.io/ssl-passthrough
-invalidates all the other available annotations.
-This is because SSL Passthrough works on level 4 of the OSI stack (TCP), not on the HTTP/HTTPS level.
-Attention
-The use of this annotation requires the flag
---enable-ssl-passthrough
-(By default it is disabled).
+The annotation nginx.ingress.kubernetes.io/ssl-passthrough instructs the controller to send TLS connections directly
+to the backend instead of letting NGINX decrypt the communication. See also TLS/HTTPS in
+the User guide.
+
+Note
+SSL Passthrough is disabled by default and requires starting the controller with the
+--enable-ssl-passthrough flag.
+
+Attention
+Because SSL Passthrough works on layer 4 of the OSI model (TCP) and not on layer 7 (HTTP), using SSL Passthrough
+invalidates all the other annotations set on an Ingress object.
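As a usage sketch (not part of this diff), an Ingress relying on the annotation could look like the manifest below. The name, host, and backend Service are hypothetical, and the Service must terminate TLS itself:

```yaml
apiVersion: extensions/v1beta1        # Ingress API group used by the docs of this era
kind: Ingress
metadata:
  name: passthrough-example           # hypothetical name
  annotations:
    # hand the raw TLS stream to the backend; NGINX does not decrypt it
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: app.example.com             # matched via SNI during the TLS handshake
    http:
      paths:
      - path: /
        backend:
          serviceName: tls-app        # hypothetical Service that terminates TLS itself
          servicePort: 443
```

Note that no spec.tls section is needed: the controller never handles the certificate, the backend does.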
Secure backends DEPRECATED (since 0.18.0)¶
Please use nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"

diff --git a/user-guide/tls/index.html b/user-guide/tls/index.html
index ea8f0d515..f932be5a1 100644
--- a/user-guide/tls/index.html
+++ b/user-guide/tls/index.html
@@ -1239,10 +1239,23 @@
If this flag is not provided NGINX will use a self-signed certificate.
For instance, if you have a TLS secret foo-tls in the default namespace, add --default-ssl-certificate=default/foo-tls in the nginx-controller deployment.
SSL Passthrough¶
-The flag
---enable-ssl-passthrough
-enables the SSL passthrough feature.
-By default this feature is disabled.
-This is required to enable passthrough backends in Ingress configurations.
-TODO: Improve this documentation.
+The --enable-ssl-passthrough flag enables the SSL Passthrough feature, which is disabled by
+default. This is required to enable passthrough backends in Ingress objects.
+
+Warning
+This feature is implemented by intercepting all traffic on the configured HTTPS port (default: 443) and handing
+it over to a local TCP proxy. This bypasses NGINX completely and introduces a non-negligible performance penalty.
+SSL Passthrough leverages SNI and reads the virtual domain from the TLS negotiation, which requires compatible
+clients. After a connection has been accepted by the TLS listener, it is handled by the controller itself and piped back
+and forth between the backend and the client.
+If there is no hostname matching the requested host name, the request is handed over to NGINX on the configured
+passthrough proxy port (default: 442), which proxies the request to the default backend.
+
+Note
+Unlike HTTP backends, traffic to Passthrough backends is sent to the clusterIP of the backing Service instead of
+individual Endpoints.
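Enabling the flag is a matter of adding it to the controller container's arguments. A minimal sketch, assuming a standard controller Deployment named nginx-ingress-controller (the other args shown are typical and only for context):

```yaml
# hypothetical excerpt of the ingress-nginx controller Deployment spec
containers:
- name: nginx-ingress-controller
  args:
  - /nginx-ingress-controller
  - --configmap=$(POD_NAMESPACE)/nginx-configuration   # typical existing argument
  - --enable-ssl-passthrough                           # enables the TLS listener for passthrough backends
```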
+HTTP Strict Transport Security¶
HTTP Strict Transport Security (HSTS) is an opt-in security enhancement specified through the use of a special response header. Once a supported browser receives