    ```

    After creating the following objects, MetalLB takes ownership of one of the IP addresses in the pool and updates the *loadBalancer* IP field of the `ingress-nginx` Service accordingly.

    ```yaml
    ---
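    # A sketch of the MetalLB objects, assuming the layer-2 mode used in this
    # example. The object names and the address pool are illustrative; the
    # range must be dedicated to MetalLB (no overlap with node or DHCP IPs)
    # and is chosen here to cover the 203.0.113.10 address used below.
    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: default
      namespace: metallb-system
    spec:
      addresses:
      - 203.0.113.10-203.0.113.15
      autoAssign: true
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: default
      namespace: metallb-system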
    ```

As soon as MetalLB sets the external IP address of the `ingress-nginx` LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service:

```console
$ curl -D- http://203.0.113.10 -H 'Host: myapp.example.com'
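# illustrative response headers (abridged); values depend on your controller build
HTTP/1.1 200 OK
Server: nginx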
</code></pre></div><divclass="admonition tip"><pclass=admonition-title>Tip</p><p>In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the <code>Local</code> traffic policy. Traffic policies are described in more details in <ahref=https://metallb.universe.tf/usage/#traffic-policies>Traffic policies</a> as well as in the next section.</p></div><h2id=over-a-nodeport-service>Over a NodePort Service<aclass=headerlinkhref=#over-a-nodeport-servicetitle="Permanent link"> ¶</a></h2><p>Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the <ahref=../#bare-metal>installation guide</a>.</p><divclass="admonition info"><pclass=admonition-title>Info</p><p>A Service of type <code>NodePort</code> exposes, via the <code>kube-proxy</code> component, the <strong>same unprivileged</strong> port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see <ahref=https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport>Services</a>.</p></div><p>In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the <code>ingress-nginx</code> Service to HTTP requests.</p><p><imgalt="NodePort request flow"src=../../images/baremetal/nodeport.jpg></p><p>You can <strong>customize the exposed node port numbers</strong> by setting the <code>controller.service.nodePorts.*</code> Helm values, but they still have to be in the 30000-32767 range.</p><divclass="admonition example"><pclass=admonition-title>Example</p><p>Given the NodePort <code>30100</code> allocated to the <code>ingress-nginx</code> Service</p><divclass=highlight><pre><span></span><code><spanclass=gp>$ </span>kubectl<spanclass=w></span>-n<spanclass=w></span>ingress-nginx<spanclass=w></span>get<spanclass=w></span>svc
    ```

    and a Kubernetes node with the public IP address `203.0.113.2` (the external IP is added as an example, in most bare-metal environments this value is `<None>`)

    ```console
    $ kubectl get node
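    # illustrative output; host names and addresses are environment-specific
    NAME     STATUS   ROLES   EXTERNAL-IP
    host-2   Ready    node    203.0.113.2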
    ```

    a client would reach an Ingress with `host: myapp.example.com` at `http://myapp.example.com:30100`, where the `myapp.example.com` subdomain resolves to the 203.0.113.2 IP address.

!!! danger "Impact on the host system"
    While it may sound tempting to reconfigure the NodePort range using the `--service-node-port-range` API server flag to include privileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved for system daemons and the necessity to grant `kube-proxy` privileges it may otherwise not require.

    This practice is therefore **discouraged**. See the other approaches proposed on this page for alternatives.

This approach has a few other limitations one ought to be aware of:

### Source IP address

Services of type NodePort perform [source address translation](https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport) by default. This means the source IP of an HTTP request is always, from the perspective of NGINX, **the IP address of the Kubernetes node that received the request**.

The recommended way to preserve the source IP in a NodePort setup is to set the value of the `externalTrafficPolicy` field of the `ingress-nginx` Service spec to `Local` ([example](https://github.com/kubernetes/ingress-nginx/blob/ingress-nginx-3.15.2/deploy/static/provider/aws/deploy.yaml#L290)).

!!! warning
    This setting effectively **drops packets** sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider [assigning NGINX Pods to specific nodes](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/) in order to control on what nodes the Ingress-Nginx Controller should be scheduled or not scheduled.

!!! example
    In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is `<None>`)

    ```console
    $ kubectl get node
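    # illustrative output; a 3-node cluster with example external IPs
    NAME     STATUS   ROLES    EXTERNAL-IP
    host-1   Ready    master   203.0.113.1
    host-2   Ready    node     203.0.113.2
    host-3   Ready    node     203.0.113.3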
    ```

    with an `ingress-nginx-controller` Deployment composed of 2 replicas

    ```console
    $ kubectl -n ingress-nginx get pod -o wide
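    # illustrative output; Pod names and IPs will differ
    NAME                                       READY   STATUS    IP           NODE
    ingress-nginx-controller-cf9ff8c96-8vvf8   1/1     Running   172.17.0.3   host-3
    ingress-nginx-controller-cf9ff8c96-pxsds   1/1     Running   172.17.1.4   host-2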
    ```

    Requests sent to `host-2` and `host-3` would be forwarded to NGINX and the original client's IP would be preserved, while requests to `host-1` would get dropped because there is no NGINX replica running on that node.

Other ways to preserve the source IP in a NodePort setup are described here: [Source IP address](https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#source-ip-address).

### Ingress status

Because NodePort Services do not get a LoadBalancerIP assigned by definition, the Ingress-Nginx Controller **does not update the status of Ingress objects it manages**.

```console
$ kubectl get ingress
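# illustrative output; the ADDRESS column remains empty
NAME           HOSTS               ADDRESS   PORTS
test-ingress   myapp.example.com             80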
```

Despite the fact there is no load balancer providing a public IP address to the Ingress-Nginx Controller, it is possible to force the status update of all managed Ingress objects by setting the `externalIPs` field of the `ingress-nginx` Service.

!!! warning
    There is more to setting `externalIPs` than just enabling the Ingress-Nginx Controller to update the status of Ingress objects. Please read about this option in the [Services](https://kubernetes.io/docs/concepts/services-networking/service/#external-ips) page of the official Kubernetes documentation as well as the section about [External IPs](#external-ips) in this document for more information.

!!! example
    Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is `<None>`)

    ```console
    $ kubectl get node
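    # illustrative output; a 3-node cluster with example external IPs
    NAME     STATUS   ROLES    EXTERNAL-IP
    host-1   Ready    master   203.0.113.1
    host-2   Ready    node     203.0.113.2
    host-3   Ready    node     203.0.113.3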
    ```

    one could edit the `ingress-nginx` Service and add the following field to the object spec

    ```yaml
    spec:
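      # illustrative addresses matching the example nodes above
      externalIPs:
      - 203.0.113.1
      - 203.0.113.2
      - 203.0.113.3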
    ```

    which would in turn be reflected on Ingress objects as follows:

    ```console
    $ kubectl get ingress -o wide
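    # illustrative output
    NAME           HOSTS               ADDRESS                               PORTS
    test-ingress   myapp.example.com   203.0.113.1,203.0.113.2,203.0.113.3   80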
    ```

### Redirects

As NGINX is **not aware of the port translation operated by the NodePort Service**, backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort.

!!! example
    Redirects generated by NGINX, for instance HTTP to HTTPS or `domain` to `www.domain`, are generated without the NodePort:

    ```console
    $ curl -D- http://myapp.example.com:30100
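    # illustrative response; note the Location header is missing the NodePort
    HTTP/1.1 308 Permanent Redirect
    Server: nginx
    Location: https://myapp.example.com/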
    ```

## Via the host network

In a setup where there is no external load balancer available but using NodePorts is not an option, one can configure `ingress-nginx` Pods to use the network of the host they run on instead of a dedicated network namespace. The benefit of this approach is that the Ingress-Nginx Controller can bind ports 80 and 443 directly to Kubernetes nodes' network interfaces, without the extra network translation imposed by NodePort Services.

!!! note
    This approach does not leverage any Service object to expose the Ingress-Nginx Controller. If the `ingress-nginx` Service exists in the target cluster, it is **recommended to delete it**.

This can be achieved by enabling the `hostNetwork` option in the Pods' spec.

```yaml
template:
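  # enable host networking on the Pod template (remaining fields elided)
  spec:
    hostNetwork: true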
</code></pre></div><divclass="admonition danger"><pclass=admonition-title>Security considerations</p><p>Enabling this option <strong>exposes every system daemon to the Ingress-Nginx Controller</strong> on any network interface, including the host's loopback. Please evaluate the impact this may have on the security of your system carefully.</p></div><divclass="admonition example"><pclass=admonition-title>Example</p><p>Consider this <code>ingress-nginx-controller</code> Deployment composed of 2 replicas, NGINX Pods inherit from the IP address of their host instead of an internal Pod IP.</p><divclass=highlight><pre><span></span><code><spanclass=gp>$ </span>kubectl<spanclass=w></span>-n<spanclass=w></span>ingress-nginx<spanclass=w></span>get<spanclass=w></span>pod<spanclass=w></span>-o<spanclass=w></span>wide
    ```

One major limitation of this deployment approach is that only **a single Ingress-Nginx Controller Pod** may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable due to this situation fail with the following event:

```console
$ kubectl -n ingress-nginx describe pod <unschedulable-ingress-nginx-controller-pod>
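# abridged describe output; only the relevant scheduling event is shown
...
Events:
 Type     Reason            From               Message
 ----     ------            ----               -------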
 Warning  FailedScheduling  default-scheduler  0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.
```

One way to ensure only schedulable Pods are created is to deploy the Ingress-Nginx Controller as a *DaemonSet* instead of a traditional Deployment.

!!! info
    A DaemonSet schedules exactly one instance of a given Pod on every cluster node, masters included, unless a node is configured to [repel those Pods](https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/). For more information, see [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/).

Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion.

![DaemonSet with hostNetwork flow](../../images/baremetal/hostnetwork.jpg)

Like with NodePorts, this approach has a few quirks it is important to be aware of.

### DNS resolution

Pods configured with `hostNetwork: true` do not use the internal DNS resolver (i.e. *kube-dns* or *CoreDNS*), unless their `dnsPolicy` spec field is set to [`ClusterFirstWithHostNet`](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy). Consider using this setting if NGINX is expected to resolve internal names for any reason.

### Ingress status

Because there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default `--publish-service` flag used in standard cloud setups **does not apply** and the status of all Ingress objects remains blank.

```console
$ kubectl get ingress
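# illustrative output; the ADDRESS column remains empty
NAME           HOSTS               ADDRESS   PORTS
test-ingress   myapp.example.com             80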
```

Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the [`--report-node-internal-ip-address`](../../user-guide/cli-arguments/) flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the Ingress-Nginx Controller.

!!! example
    Given an `ingress-nginx-controller` DaemonSet composed of 2 replicas

    ```console
    $ kubectl -n ingress-nginx get pod -o wide
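    # illustrative output; with hostNetwork the Pod IPs equal the node addresses
    NAME                             READY   STATUS    IP            NODE
    ingress-nginx-controller-5b4cf   1/1     Running   203.0.113.2   host-2
    ingress-nginx-controller-8vvf8   1/1     Running   203.0.113.3   host-3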
    ```

    the controller sets the status of all Ingress objects it manages to the following value:

    ```console
    $ kubectl get ingress -o wide
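    # illustrative output; the internal IPs of both nodes running the controller
    NAME           HOSTS               ADDRESS                   PORTS
    test-ingress   myapp.example.com   203.0.113.2,203.0.113.3   80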
</code></pre></div></div><divclass="admonition note"><pclass=admonition-title>Note</p><p>Alternatively, it is possible to override the address written to Ingress objects using the <code>--publish-status-address</code> flag. See <ahref=../../user-guide/cli-arguments/>Command line arguments</a>.</p></div><h2id=using-a-self-provisioned-edge>Using a self-provisioned edge<aclass=headerlinkhref=#using-a-self-provisioned-edgetitle="Permanent link"> ¶</a></h2><p>Similarly to cloud environments, this deployment approach requires an edge network component providing a public entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. vendor appliance) or software (e.g. <em>HAproxy</em>) and is usually managed outside of the Kubernetes landscape by operations teams.</p><p>Such deployment builds upon the NodePort Service described above in <ahref=#over-a-nodeport-service>Over a NodePort Service</a>, with one significant difference: external clients do not access cluster nodes directly, only the edge component does. This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address.</p><p>On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort on the target nodes as shown in the diagram below:</p><p><imgalt="User edge"src=../../images/baremetal/user_edge.jpg></p><h2id=external-ips>External IPs<aclass=headerlinkhref=#external-ipstitle="Permanent link"> ¶</a></h2><divclass="admonition danger"><pclass=admonition-title>Source IP address</p><p>This method does not allow preserving the source IP of HTTP requests in any manner, it is therefore <strong>not recommended</strong> to use it despite its apparent simplicity.</p></div><p>The <code>externalIPs</code> Service option was previously mentioned in the <ahref=#over-a-nodeport-service>NodePort</a> section.</p><p>As per the <ahref=https://kubernetes.io/docs/concepts/services-networking/service/#external-ips>Services</a> page of the official Kubernetes documentation, the <code>externalIPs</code> option causes <code>kube-proxy</code> to route traffic sent to arbitrary IP addresses <strong>and on the Service ports</strong> to the endpoints of that Service. These IP addresses <strong>must belong to the target node</strong>.</p><divclass="admonition example"><pclass=admonition-title>Example</p><p>Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)</p><divclass=highlight><pre><span></span><code><spanclass=gp>$ </span>kubectl<spanclass=w></span>get<spanclass=w></span>node
    ```

    and the following `ingress-nginx` NodePort Service

    ```console
    $ kubectl -n ingress-nginx get svc
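    # illustrative output; the cluster IP and NodePorts will differ
    NAME            TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)
    ingress-nginx   NodePort   10.0.220.217   <none>        80:30100/TCP,443:30101/TCP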
    ```

    One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port:

    ```yaml
    spec:
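      # illustrative addresses; they must belong to the target nodes, here the
      # two example nodes the myapp.example.com subdomain resolves to
      externalIPs:
      - 203.0.113.2
      - 203.0.113.3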
    ```

    We assume the `myapp.example.com` subdomain above resolves to both the 203.0.113.2 and 203.0.113.3 IP addresses.
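    Assuming DNS resolves as described, a hedged sketch of the result (reusing the NodePort `30100` from the earlier example) shows NGINX answering on both the Service port and the NodePort:

    ```console
    $ curl -D- http://myapp.example.com -H 'Host: myapp.example.com'
    HTTP/1.1 200 OK
    Server: nginx

    $ curl -D- http://myapp.example.com:30100 -H 'Host: myapp.example.com'
    HTTP/1.1 200 OK
    Server: nginx
    ```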