Fix YAML linting
Signed-off-by: Scott Rigby <scott@r6by.com>
parent ae136cac92
commit 58c1ca6176

1 changed file with 9 additions and 7 deletions
@@ -13,9 +13,9 @@ This chart bootstraps an ingress-nginx deployment on a [Kubernetes](http://kuber
 ## Get Repo Info
 
 ```console
-$ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
-$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
-$ helm repo update
+helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
+helm repo add stable https://kubernetes-charts.storage.googleapis.com/
+helm repo update
 ```
 
 ## Install Chart
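For readers outside the diff: dropping the `$` prompts makes the fenced block directly copy-pasteable. A minimal sketch of the workflow these commands feed into, assuming Helm 3 syntax; the release name `my-release` is a placeholder, not part of this commit:

```console
# Add the chart repository and refresh the local chart index
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Install the chart from that repo ("my-release" is a hypothetical release name)
helm install my-release ingress-nginx/ingress-nginx
```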
@@ -101,8 +101,9 @@ You can add Prometheus annotations to the metrics service using `controller.metr
 ### ingress-nginx nginx_status page/stats server
 
 Previous versions of this chart had a `controller.stats.*` configuration block, which is now obsolete due to the following changes in nginx ingress controller:
-* in [0.16.1](https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md#0161), the vts (virtual host traffic status) dashboard was removed
-* in [0.23.0](https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md#0230), the status page at port 18080 is now a unix socket webserver only available at localhost.
+
+- In [0.16.1](https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md#0161), the vts (virtual host traffic status) dashboard was removed
+- In [0.23.0](https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md#0230), the status page at port 18080 is now a unix socket webserver only available at localhost.
 You can use `curl --unix-socket /tmp/nginx-status-server.sock http://localhost/nginx_status` inside the controller container to access it locally, or use the snippet from [nginx-ingress changelog](https://github.com/kubernetes/ingress-nginx/blob/master/Changelog.md#0230) to re-enable the http server
 
 ### ExternalDNS Service Configuration
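A hedged illustration of running that curl command against a live controller pod; the pod name and namespace below are placeholders, not taken from this commit:

```console
# Exec into the controller pod and query the localhost-only status socket
# (pod name and namespace are hypothetical)
kubectl -n ingress-nginx exec -it ingress-nginx-controller-xxxxx -- \
  curl --unix-socket /tmp/nginx-status-server.sock http://localhost/nginx_status
```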
@@ -162,6 +163,7 @@ If one of them is missing the internal load balancer will not be deployed. Examp
 `controller.service.internal.annotations` varies with the cloud service you're using.
 
 Example for AWS:
+
 ```yaml
 controller:
   service:
@@ -174,6 +176,7 @@ controller:
 ```
 
 Example for GCE:
+
 ```yaml
 controller:
   service:
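Both hunks cut off before the annotation values themselves. For orientation, a sketch of what a filled-in internal-service block can look like; the annotation key and value are a common AWS convention and an assumption here, not text from this commit:

```yaml
controller:
  service:
    internal:
      # both fields must be set or the internal load balancer is not deployed
      enabled: true
      annotations:
        # illustrative AWS annotation marking the balancer as internal
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```

On GCE/GKE the commonly used equivalent is the `cloud.google.com/load-balancer-type: "Internal"` annotation.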
@@ -187,7 +190,6 @@ controller:
 
 An use case for this scenario is having a split-view DNS setup where the public zone CNAME records point to the external balancer URL while the private zone CNAME records point to the internal balancer URL. This way, you only need one ingress kubernetes object.
 
-
 ### Ingress Admission Webhooks
 
 With nginx-ingress-controller version 0.25+, the nginx ingress controller pod exposes an endpoint that will integrate with the `validatingwebhookconfiguration` Kubernetes feature to prevent bad ingress from being added to the cluster.
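A quick, hedged way to confirm the webhook is registered once the chart is installed (the exact resource name depends on the release):

```console
# The chart's admission webhook should appear in this list after install
kubectl get validatingwebhookconfigurations
```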
@@ -199,7 +201,7 @@ With nginx-ingress-controller in 0.25.* work only with kubernetes 1.14+, 0.26 fi
 
 If you are upgrading this chart from a version between 0.31.0 and 1.2.2 then you may get an error like this:
 
-```
+```console
 Error: UPGRADE FAILED: Service "?????-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable
 ```
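For context, a sketch of the kind of command that can surface this error when moving between the affected chart versions; the release name mirrors the hypothetical one used earlier:

```console
# Upgrading an existing release across the affected chart versions
helm upgrade my-release ingress-nginx/ingress-nginx
```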