Deploy GitHub Pages
This commit is contained in: parent ed94aa9319, commit 34c4174895.
3 changed files with 9 additions and 9 deletions.
<span class=go> --rule www.demo.io/=demo:80</span>
</code></pre></div></p> <p>You should then be able to see the "It works!" page when you connect to http://www.demo.io/. Congratulations, you are serving a public web site hosted on a Kubernetes cluster! 🎉</p> <h2 id=environment-specific-instructions>Environment-specific instructions<a class=headerlink href=#environment-specific-instructions title="Permanent link"> ¶</a></h2> <h3 id=local-development-clusters>Local development clusters<a class=headerlink href=#local-development-clusters title="Permanent link"> ¶</a></h3> <h4 id=minikube>minikube<a class=headerlink href=#minikube title="Permanent link"> ¶</a></h4> <p>The ingress controller can be installed through minikube's addons system:</p> <div class=highlight><pre><span></span><code><span class=go>minikube addons enable ingress</span>
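</code></pre></div> <p>Once the addon is enabled, you can check that the controller pod comes up. (The namespace used by the addon has varied across minikube versions, so the command below filters by label rather than naming a namespace; it needs a running cluster.)</p> <div class=highlight><pre><span></span><code><span class=go>kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx</span>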
</code></pre></div> <h4 id=microk8s>MicroK8s<a class=headerlink href=#microk8s title="Permanent link"> ¶</a></h4> <p>The ingress controller can be installed through MicroK8s's addons system:</p> <div class=highlight><pre><span></span><code><span class=go>microk8s enable ingress</span>
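</code></pre></div> <p>To verify that the addon's controller pod is running (MicroK8s deploys the ingress addon into the <code>ingress</code> namespace; adjust the namespace if your version differs):</p> <div class=highlight><pre><span></span><code><span class=go>microk8s kubectl get pods -n ingress</span>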
</code></pre></div> <p>Please check the MicroK8s <a href=https://microk8s.io/docs/addon-ingress>documentation page</a> for details.</p> <h4 id=docker-desktop>Docker Desktop<a class=headerlink href=#docker-desktop title="Permanent link"> ¶</a></h4> <p>Kubernetes is available in Docker Desktop:</p> <ul> <li>Mac, from <a href=https://docs.docker.com/docker-for-mac/release-notes/#stable-releases-of-2018>version 18.06.0-ce</a></li> <li>Windows, from <a href=https://docs.docker.com/docker-for-windows/release-notes/#docker-community-edition-18060-ce-win70-2018-07-25>version 18.06.0-ce</a></li> </ul> <p>First, make sure that Kubernetes is enabled in the Docker settings. The command <code>kubectl get nodes</code> should show a single node called <code>docker-desktop</code>.</p> <p>The ingress controller can be installed on Docker Desktop using the default <a href=#quick-start>quick start</a> instructions.</p> <p>On most systems, if you don't have any other service of type <code>LoadBalancer</code> bound to port 80, the ingress controller will be assigned the <code>EXTERNAL-IP</code> of <code>localhost</code>, which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the <code>kubectl port-forward</code> method described in the <a href=#local-testing>local testing section</a>.</p> <h3 id=cloud-deployments>Cloud deployments<a class=headerlink href=#cloud-deployments title="Permanent link"> ¶</a></h3> <p>If the load balancers of your cloud provider perform active health checks on their backends (most do), you can change the <code>externalTrafficPolicy</code> of the ingress controller Service to <code>Local</code> (instead of the default <code>Cluster</code>) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding <code>--set controller.service.externalTrafficPolicy=Local</code> to the <code>helm install</code> or <code>helm upgrade</code> command.</p> <p>Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it so that the ingress controller sees the real IP address of the clients; otherwise, it will generally see the IP address of the upstream load balancer. For this to work correctly, the PROXY protocol must be enabled both in the ingress controller (with e.g. <code>--set controller.config.use-proxy-protocol=true</code>) and in the cloud provider's load balancer configuration.</p> <p>In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.</p> <h4 id=aws>AWS<a class=headerlink href=#aws title="Permanent link"> ¶</a></h4> <p>On AWS, we use a Network Load Balancer (NLB) to expose the NGINX Ingress controller behind a Service of <code>Type=LoadBalancer</code>.</p> <div class="admonition info"> <p class=admonition-title>Info</p> <p>The provided templates illustrate the setup for the legacy in-tree service load balancer for AWS NLB. AWS provides documentation on how to use <a href=https://docs.aws.amazon.com/eks/latest/userguide/network-load-balancing.html>Network load balancing on Amazon EKS</a> with the <a href=https://github.com/kubernetes-sigs/aws-load-balancer-controller>AWS Load Balancer Controller</a>.</p> </div> <h5 id=network-load-balancer-nlb>Network Load Balancer (NLB)<a class=headerlink href=#network-load-balancer-nlb title="Permanent link"> ¶</a></h5> <div class=highlight><pre><span></span><code><span class=go>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.3/deploy/static/provider/aws/deploy.yaml</span>
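</code></pre></div> <p>The controller can take a minute or two to become ready while the NLB is being provisioned. One way to wait for it is the following command, which uses the namespace and labels created by the manifest above (it requires a live cluster):</p> <div class=highlight><pre><span></span><code><span class=go>kubectl wait --namespace ingress-nginx \</span>
<span class=go>  --for=condition=ready pod \</span>
<span class=go>  --selector=app.kubernetes.io/component=controller \</span>
<span class=go>  --timeout=120s</span>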
</code></pre></div> <h5 id=tls-termination-in-aws-load-balancer-nlb>TLS termination in AWS Load Balancer (NLB)<a class=headerlink href=#tls-termination-in-aws-load-balancer-nlb title="Permanent link"> ¶</a></h5> <p>By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB.</p> <ol> <li>Download the <a href=https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.3/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml>deploy.yaml</a> template</li> </ol> <div class=highlight><pre><span></span><code><span class=go>wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.3/deploy/static/provider/aws/nlb-with-tls-termination/deploy.yaml</span>
</code></pre></div> <ol> <li> <p>Edit the file and change the VPC CIDR in use for the Kubernetes cluster: <div class=highlight><pre><span></span><code>proxy-real-ip-cidr: XXX.XXX.XXX/XX
</code></pre></div></p> </li> <li> <p>Change the AWS Certificate Manager (ACM) ID as well: <div class=highlight><pre><span></span><code>arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX
</code></pre></div></p> </li> <li> <p>Deploy the manifest: <div class=highlight><pre><span></span><code><span class=go>kubectl apply -f deploy.yaml</span>
</code></pre></div></p> </li> </ol> <h5 id=nlb-idle-timeouts>NLB Idle Timeouts<a class=headerlink href=#nlb-idle-timeouts title="Permanent link"> ¶</a></h5> <p>The idle timeout value for TCP flows on an NLB is 350 seconds and <a href=https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#connection-idle-timeout>cannot be modified</a>.</p> <p>For this reason, you need to ensure that the <a href=http://nginx.org/en/docs/http/ngx_http_core_module.html#keepalive_timeout>keepalive_timeout</a> value is configured to less than 350 seconds for the controller to work as expected.</p> <p>By default, NGINX's <code>keepalive_timeout</code> is set to <code>75s</code>, which is within this limit.</p> <p>More information about timeouts can be found in the <a href=https://docs.aws.amazon.com/elasticloadbalancing/latest/network/network-load-balancers.html#connection-idle-timeout>official AWS documentation</a>.</p> <h4 id=gce-gke>GCE-GKE<a class=headerlink href=#gce-gke title="Permanent link"> ¶</a></h4> <p>First, your user needs to have <code>cluster-admin</code> permissions on the cluster. This can be done with the following command:</p> <div class=highlight><pre><span></span><code><span class=go>kubectl create clusterrolebinding cluster-admin-binding \</span>
<span class=go> --clusterrole cluster-admin \</span>
<span class=go> --user $(gcloud config get-value account)</span>
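</code></pre></div> <p>You can confirm that the binding took effect with <code>kubectl auth can-i</code>; once the binding is active, this should print <code>yes</code>:</p> <div class=highlight><pre><span></span><code><span class=go>kubectl auth can-i '*' '*' --all-namespaces</span>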
</code></pre></div> <p>Then, the ingress controller can be installed like this:</p> <div class=highlight><pre><span></span><code><span class=go>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.3/deploy/static/provider/cloud/deploy.yaml</span>
</code></pre></div> <div class="admonition warning"> <p class=admonition-title>Warning</p> <p>For private clusters, you will need to either add an additional firewall rule that allows master nodes access to port <code>8443/tcp</code> on worker nodes, or change the existing rule that allows access to ports <code>80/tcp</code>, <code>443/tcp</code> and <code>10254/tcp</code> to also allow access to port <code>8443/tcp</code>.</p> <p>See the <a href=https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules>GKE documentation</a> on adding rules and the <a href=https://github.com/kubernetes/kubernetes/issues/79739>Kubernetes issue</a> for more detail.</p> </div> <div class="admonition warning"> <p class=admonition-title>Warning</p> <p>Proxy protocol is not supported in GCE/GKE.</p> </div> <h4 id=azure>Azure<a class=headerlink href=#azure title="Permanent link"> ¶</a></h4> <div class=highlight><pre><span></span><code><span class=go>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.3/deploy/static/provider/cloud/deploy.yaml</span>
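</code></pre></div> <p>As an example of an Azure-specific option: to expose the controller on an internal (private) load balancer instead of a public one, the controller Service can carry the following annotation. This is a snippet you would merge into the Service in <code>deploy.yaml</code>; see the AKS documentation for details:</p> <div class=highlight><pre><span></span><code>metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"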
</code></pre></div> <p>More information about Azure annotations for the ingress controller can be found in the <a href=https://docs.microsoft.com/en-us/azure/aks/ingress-internal-ip#create-an-ingress-controller>official AKS documentation</a>.</p> <h4 id=digital-ocean>Digital Ocean<a class=headerlink href=#digital-ocean title="Permanent link"> ¶</a></h4> <div class=highlight><pre><span></span><code><span class=go>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.3/deploy/static/provider/do/deploy.yaml</span>
</code></pre></div> <h4 id=scaleway>Scaleway<a class=headerlink href=#scaleway title="Permanent link"> ¶</a></h4> <div class=highlight><pre><span></span><code><span class=go>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.3/deploy/static/provider/scw/deploy.yaml</span>
</code></pre></div> <h4 id=exoscale>Exoscale<a class=headerlink href=#exoscale title="Permanent link"> ¶</a></h4> <div class=highlight><pre><span></span><code><span class=go>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml</span>
</code></pre></div> <p>The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager <a href=https://github.com/exoscale/exoscale-cloud-controller-manager/blob/master/docs/service-loadbalancer.md>documentation</a>.</p> <h4 id=oracle-cloud-infrastructure>Oracle Cloud Infrastructure<a class=headerlink href=#oracle-cloud-infrastructure title="Permanent link"> ¶</a></h4> <div class=highlight><pre><span></span><code><span class=go>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.3/deploy/static/provider/cloud/deploy.yaml</span>
</code></pre></div> <p>A <a href=https://github.com/oracle/oci-cloud-controller-manager/blob/master/docs/load-balancer-annotations.md>complete list of available annotations for Oracle Cloud Infrastructure</a> can be found in the <a href=https://github.com/oracle/oci-cloud-controller-manager>OCI Cloud Controller Manager</a> documentation.</p> <h3 id=bare-metal-clusters>Bare metal clusters<a class=headerlink href=#bare-metal-clusters title="Permanent link"> ¶</a></h3> <p>This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as "raw" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu, etc.).</p> <p>For quick testing, you can use a <a href=https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport>NodePort</a>. This should work on almost every cluster, but it will typically use a port in the range 30000-32767.</p> <div class=highlight><pre><span></span><code><span class=go>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.3/deploy/static/provider/baremetal/deploy.yaml</span>
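</code></pre></div> <p>To find out which ports in the 30000-32767 range were assigned, inspect the Service created by the manifest; the Service and namespace names below match the ones it creates (this needs a live cluster):</p> <div class=highlight><pre><span></span><code><span class=go>kubectl get service ingress-nginx-controller --namespace ingress-nginx</span>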
</code></pre></div> <p>For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see <a href=baremetal/ >bare-metal considerations</a>.</p> <h2 id=miscellaneous>Miscellaneous<a class=headerlink href=#miscellaneous title="Permanent link"> ¶</a></h2> <h3 id=checking-ingress-controller-version>Checking ingress controller version<a class=headerlink href=#checking-ingress-controller-version title="Permanent link"> ¶</a></h3> <p>Run <code>/nginx-ingress-controller --version</code> within the pod, for instance with <code>kubectl exec</code>:</p> <div class=highlight><pre><span></span><code><span class=go>POD_NAMESPACE=ingress-nginx</span>
<span class=go>POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name)</span>
<span class=go>kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version</span>
</code></pre></div>