commit 76056c10b0
1 changed file with 3 additions and 3 deletions
@@ -245,7 +245,7 @@ spec:
     app: nginxtest
 ```

-Running kubectl create against this manifest will given you a service with multiple endpoints:
+Running kubectl create against this manifest will give you a service with multiple endpoints:

 ```shell
 $ kubectl get svc nginxtest -o yaml | grep -i nodeport:
   nodePort: 30404
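# --- editor's sketch, not part of the commit or of the file being diffed ---
# The "multiple endpoints" claim can be checked directly; this assumes the pods
# behind the nginxtest Service carry the app: nginxtest label visible in the hunk
# above and that more than one replica is running:
$ kubectl get endpoints nginxtest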

@@ -300,7 +300,7 @@ As before, wait a while for the update to take effect, and try accessing `loadba

 #### Deletion

-Most production loadbalancers live as long as the nodes in the cluster and are torn down when the nodes are destroyed. That said, there are plenty of use cases for deleting an Ingress, deleting a loadbalancer controller, or just purging external loadbalancer resources alltogether. Deleting a loadbalancer controller pod will not affect the loadbalancers themselves, this way your backends won't suffer a loss of availability if the scheduler pre-empts your controller pod. Deleting a single loadbalancer is as easy as deleting an Ingress via kubectl:
+Most production loadbalancers live as long as the nodes in the cluster and are torn down when the nodes are destroyed. That said, there are plenty of use cases for deleting an Ingress, deleting a loadbalancer controller, or just purging external loadbalancer resources altogether. Deleting a loadbalancer controller pod will not affect the loadbalancers themselves, this way your backends won't suffer a loss of availability if the scheduler pre-empts your controller pod. Deleting a single loadbalancer is as easy as deleting an Ingress via kubectl:

 ```shell
 $ kubectl delete ing echomap
 $ kubectl logs --follow glbc-6m6b6 l7-lb-controller
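# --- editor's sketch, not part of the commit or of the file being diffed ---
# After the delete, the Ingress object should be gone; on GCE with the glbc
# controller, the forwarding rule it manages should eventually be cleaned up as
# well (assumes gcloud is configured for the same project):
$ kubectl get ing
$ gcloud compute forwarding-rules list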