diff --git a/deploy/index.html b/deploy/index.html

Exoscale

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/exoscale/deploy.yaml
 

The full list of annotations supported by Exoscale is available in the Exoscale Cloud Controller Manager documentation.

Oracle Cloud Infrastructure

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml
 

A complete list of available annotations for Oracle Cloud Infrastructure can be found in the OCI Cloud Controller Manager documentation.

Bare metal clusters

Using NodePort:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/baremetal/deploy.yaml

Tip

Applicable on Kubernetes clusters deployed on bare metal with a generic Linux distribution (such as CentOS, Ubuntu, etc.).

Info

For extended notes regarding deployments on bare-metal, see Bare-metal considerations.

Miscellaneous

Checking ingress controller version

Run ingress-nginx-controller --version within the pod, for instance with kubectl exec:

POD_NAMESPACE=ingress-nginx
 POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name)
 kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
 

Scope

By default, the controller watches Ingress objects from all namespaces. If you want to change this behavior, use the flag --watch-namespace or check the Helm chart value controller.scope to limit the controller to a single namespace.

See also “How to easily install multiple instances of the Ingress NGINX controller in the same cluster” for more details.
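If you use the Helm chart, the scoping described above can be sketched in a values file. This is a hedged sketch: the `controller.scope.enabled` and `controller.scope.namespace` keys and the example namespace should be verified against your chart version.

```yaml
controller:
  scope:
    enabled: true            # watch a single namespace instead of all namespaces
    namespace: my-namespace  # example value: the namespace to watch
```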

Webhook network access

Warning

The controller uses an admission webhook to validate Ingress definitions. Make sure that you don't have Network policies or additional firewalls preventing connections from the API server to the ingress-nginx-controller-admission service.
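If you do run NetworkPolicies in the controller's namespace, a minimal sketch of a policy keeping the webhook reachable might look like the following. The selector labels and the 8443 webhook port are assumptions to check against your installation.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-admission-webhook
  namespace: ingress-nginx
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - port: 8443      # admission webhook container port (assumption)
          protocol: TCP
```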

Certificate generation

Attention

The first time the ingress controller starts, two Jobs create the SSL Certificate used by the admission webhook.

This can cause an initial delay of up to two minutes before it is possible to create and validate Ingress definitions.

You can wait until it is ready to run the next command:

 kubectl wait --namespace ingress-nginx \
   --for=condition=ready pod \
   --selector=app.kubernetes.io/component=controller \
   --timeout=120s
diff --git a/deploy/rbac/index.html b/deploy/rbac/index.html

Role Based Access Control (RBAC)

Overview

This example applies to ingress-nginx-controllers being deployed in an environment with RBAC enabled.

Role Based Access Control is comprised of four layers:

  1. ClusterRole - permissions assigned to a role that apply to an entire cluster
  2. ClusterRoleBinding - binding a ClusterRole to a specific account
  3. Role - permissions assigned to a role that apply to a specific namespace
  4. RoleBinding - binding a Role to a specific account

In order for RBAC to be applied to an ingress-nginx-controller, that controller should be assigned to a ServiceAccount. That ServiceAccount should be bound to the Roles and ClusterRoles defined for the ingress-nginx-controller.

Service Accounts created in this example

One ServiceAccount is created in this example, ingress-nginx.

Permissions Granted in this example

There are two sets of permissions defined in this example. Cluster-wide permissions defined by the ClusterRole named ingress-nginx, and namespace specific permissions defined by the Role named ingress-nginx.

Cluster Permissions

These permissions are granted in order for the ingress-nginx-controller to be able to function as an ingress across the cluster. These permissions are granted to the ClusterRole named ingress-nginx

  • configmaps, endpoints, nodes, pods, secrets: list, watch
  • nodes: get
  • services, ingresses: get, list, watch
  • events: create, patch
  • ingresses/status: update

Namespace Permissions

These permissions are granted specific to the ingress-nginx namespace. These permissions are granted to the Role named ingress-nginx

  • configmaps, pods, secrets: get
  • endpoints: get

Furthermore, to support leader election, the ingress-nginx-controller needs access to a ConfigMap using the resourceName ingress-controller-leader-nginx.

Note that resourceNames can NOT be used to limit requests using the “create” verb because authorizers only have access to information that can be obtained from the request URL, method, and headers (resource names in a “create” request are part of the request body).

  • configmaps: get, update (for resourceName ingress-controller-leader-nginx)
  • configmaps: create

This resourceName is the concatenation of the election-id and the ingress-class as defined by the ingress-controller, which defaults to:

  • election-id: ingress-controller-leader
  • ingress-class: nginx
  • resourceName : <election-id>-<ingress-class>

Please adapt accordingly if you overwrite either parameter when launching the ingress-nginx-controller.
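The concatenation above can be sketched in shell, using the defaults from the list (adjust the two variables if you override either parameter):

```shell
# Defaults as listed above
ELECTION_ID="ingress-controller-leader"
INGRESS_CLASS="nginx"

# resourceName = <election-id>-<ingress-class>
RESOURCE_NAME="${ELECTION_ID}-${INGRESS_CLASS}"
echo "$RESOURCE_NAME"   # prints ingress-controller-leader-nginx
```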

Bindings

The ServiceAccount ingress-nginx is bound to the Role ingress-nginx and the ClusterRole ingress-nginx.

The serviceAccountName associated with the containers in the deployment must match the serviceAccount. The namespace references in the Deployment metadata, container arguments, and POD_NAMESPACE should be in the ingress-nginx namespace.
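As a sketch, the namespace-scoped half of those bindings looks like this minimal RoleBinding (the ClusterRoleBinding is analogous, with kind: ClusterRole in roleRef); names follow the example above:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx        # must match serviceAccountName in the Deployment
    namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
```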


Upgrading

Important

No matter the method you use for upgrading, if you use template overrides, make sure your templates are compatible with the new version of ingress-nginx.

Without Helm

To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment.

For example, if your Deployment resource looks like this (partial example):

kind: Deployment
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  ...
    metadata: ...
    spec:
      containers:
        - name: ingress-nginx-controller
          image: k8s.gcr.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef
          args: ...


Simply change the v1.0.4 tag to the version you wish to upgrade to. The easiest way to do this is, for example (note that you may need to change the name parameter according to your installation):

kubectl set image deployment/ingress-nginx-controller \
  controller=k8s.gcr.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d \
  -n ingress-nginx


For interactive editing, use kubectl edit deployment ingress-nginx-controller -n ingress-nginx.

With Helm

If you installed ingress-nginx using the Helm command in the deployment docs so its name is ingress-nginx, you should be able to upgrade using

helm upgrade --reuse-values ingress-nginx ingress-nginx/ingress-nginx
 

Migrating from stable/nginx-ingress

See detailed steps in the upgrading section of the ingress-nginx chart README.


External OAUTH Authentication

Overview

The auth-url and auth-signin annotations allow you to use an external authentication provider to protect your Ingress resources.

Important

This annotation requires ingress-nginx-controller v0.9.0 or greater.

Key Detail

This functionality is enabled by deploying multiple Ingress objects for a single host. One Ingress object has no special annotations and handles authentication.

Other Ingress objects can then be annotated in such a way as to require the user to authenticate against the first Ingress's endpoint, and can redirect 401s to the same endpoint.
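As a hedged illustration, the annotations on a protected Ingress typically look like the following; the concrete URL values are assumptions modeled on a typical oauth2-proxy setup and must be adapted to your provider:

```yaml
metadata:
  annotations:
    # URL the controller queries to check whether the request is authenticated
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    # Where unauthenticated (401) requests are redirected to sign in
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
```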

Sample:

...
 metadata:
   name: application
   annotations:
diff --git a/examples/customization/sysctl/index.html b/examples/customization/sysctl/index.html

Sysctl tuning

This example aims to demonstrate the use of an Init Container to adjust sysctl default values using kubectl patch.

kubectl patch deployment -n ingress-nginx ingress-nginx-controller \
     --patch="$(curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/docs/examples/customization/sysctl/patch.json)"
 

Changes:

  • Backlog Queue setting net.core.somaxconn from 128 to 32768
  • Ephemeral Ports setting net.ipv4.ip_local_port_range from 32768 60999 to 1024 65000

In a post from the NGINX blog, it is possible to see an explanation for the changes.
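In rough shape, the patch referenced above is a strategic merge patch that adds a privileged init container running sysctl. This is a hedged sketch only; the image and exact structure are illustrative, so consult the linked patch.json for the real file:

```json
{
  "spec": {
    "template": {
      "spec": {
        "initContainers": [
          {
            "name": "sysctl",
            "image": "alpine:3",
            "securityContext": { "privileged": true },
            "command": [
              "sh", "-c",
              "sysctl -w net.core.somaxconn=32768; sysctl -w net.ipv4.ip_local_port_range='1024 65000'"
            ]
          }
        ]
      }
    }
  }
}
```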


    gRPC

    This example demonstrates how to route traffic to a gRPC service through the nginx controller.

    Prerequisites

    1. You have a kubernetes cluster running.
    2. You have a domain name such as example.com that is configured to route traffic to the ingress controller.
    3. You have the ingress-nginx-controller installed as per docs.
    4. You have a backend application running a gRPC server and listening for TCP traffic. If you want, you can use https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go as an example.
    5. You're also responsible for provisioning an SSL certificate for the ingress. So you need to have a valid SSL certificate, deployed as a Kubernetes secret of type tls, in the same namespace as the gRPC application.

    Step 1: Create a Kubernetes Deployment for gRPC app

    • Make sure your gRPC application pod is running and listening for connections. For example you can try a kubectl command like this below:
      $ kubectl get po -A -o wide | grep go-grpc-greeter-server
       
    • If you have a gRPC app deployed in your cluster, then skip further notes in this Step 1, and continue from Step 2 below.

    • As an example gRPC application, we can use this app https://github.com/grpc/grpc-go/blob/91e0aeb192456225adf27966d04ada4cf8599915/examples/features/reflection/server/main.go .

    • To create a container image for this app, you can use this Dockerfile.

    • If you use the Dockerfile mentioned above to create an image, then below is an example of a Kubernetes manifest that creates a Deployment resource using that image. If needed, edit this manifest to suit your needs. Assuming the name of this yaml file is deployment.go-grpc-greeter-server.yaml ;

    cat <<EOF | kubectl apply -f -
     apiVersion: apps/v1
     kind: Deployment
     ...
     EOF

     ...

     {
       "message": "
     }

    Debugging Hints

    1. Obviously, watch the logs on your app.
    2. Watch the logs for the ingress-nginx-controller (increasing verbosity as needed).
    3. Double-check your address and ports.
    4. Set the GODEBUG=http2debug=2 environment variable to get detailed http/2 logging on the client and/or server.
    5. Study RFC 7540 (http/2) https://tools.ietf.org/html/rfc7540.

    If you are developing public gRPC endpoints, check out https://proto.stack.build, a protocol buffer / gRPC build service that you can use to make it easier for your users to consume your API.

    See also the specific GRPC settings of NGINX: https://nginx.org/en/docs/http/ngx_http_grpc_module.html

    Notes on using response/request streams

    1. If your server does only response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the grpc_read_timeout to accommodate for this.
    2. If your service does only request streaming and you expect a stream to be open longer than 60 seconds, you have to change the grpc_send_timeout and the client_body_timeout.
    3. If you do both response and request streaming with an open stream longer than 60 seconds, you have to change all three timeouts: grpc_read_timeout, grpc_send_timeout and client_body_timeout.

    Values for the timeouts must be specified as e.g. "1200s".

    On the most recent versions of nginx-ingress, changing these timeouts requires using the nginx.ingress.kubernetes.io/server-snippet annotation. There are plans for future releases to allow using the Kubernetes annotations to define each timeout separately.
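The server-snippet approach mentioned above can be sketched as follows; this is a hedged example, and the 1200s values are illustrative rather than recommendations:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      grpc_read_timeout 1200s;
      grpc_send_timeout 1200s;
      client_body_timeout 1200s;
```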


    Multi TLS certificate termination

    This example uses 2 different certificates to terminate SSL for 2 hostnames.

    1. Deploy the controller by creating the rc in the parent dir
    2. Create tls secrets for foo.bar.com and bar.baz.com as indicated in the yaml
    3. Create multi-tls.yaml

    This should generate a segment like:

    $ kubectl exec -it ingress-nginx-controller-6vwd1 -- cat /etc/nginx/nginx.conf | grep "foo.bar.com" -B 7 -A 35
         server {
             listen 80;
             listen 443 ssl http2;
    diff --git a/examples/static-ip/index.html b/examples/static-ip/index.html

    Static IPs

    This example demonstrates how to assign a static IP to an Ingress through the NGINX controller.

    Prerequisites

    You need a TLS cert and a test HTTP service for this example. You will also need to make sure your Ingress targets exactly one Ingress controller by specifying the ingress.class annotation, and that you have an ingress controller running in your cluster.

    Acquiring an IP

    Since instances of the nginx controller actually run on nodes in your cluster, by default nginx Ingresses will only get static IPs if your cloudprovider supports static IP assignments to nodes. On GKE/GCE for example, even though nodes get static IPs, the IPs are not retained across upgrade.

    To acquire a static IP for the ingress-nginx-controller, simply put it behind a Service of Type=LoadBalancer.

    First, create a loadbalancer Service and wait for it to acquire an IP

    $ kubectl create -f static-ip-svc.yaml
    service "ingress-nginx-lb" created

    $ kubectl get svc ingress-nginx-lb
    NAME               CLUSTER-IP     EXTERNAL-IP       PORT(S)                      AGE
    ingress-nginx-lb   10.0.138.113   104.154.109.191   80:31457/TCP,443:32240/TCP   15m


    then, update the ingress controller so it adopts the static IP of the Service by passing the --publish-service flag (the example yaml used in the next step already has it set to "ingress-nginx-lb").

    $ kubectl create -f ingress-nginx-controller.yaml
    deployment "ingress-nginx-controller" created

    Assigning the IP to an Ingress

    From here on every Ingress created with the ingress.class annotation set to nginx will get the IP allocated in the previous step

    $ kubectl create -f ingress-nginx.yaml
    ingress "ingress-nginx" created
     
     $ kubectl get ing ingress-nginx
     NAME            HOSTS     ADDRESS           PORTS     AGE
    ingress-nginx   *         104.154.109.191   80, 443   13m
     
     $ curl 104.154.109.191 -kL
     CLIENT VALUES:
     request_version=1.1
     request_uri=http://104.154.109.191:8080/
     ...

    Retaining the IP

    You can test retention by deleting the Ingress

    $ kubectl delete ing ingress-nginx
    ingress "ingress-nginx" deleted
     
    $ kubectl create -f ingress-nginx.yaml
    ingress "ingress-nginx" created
     
    $ kubectl get ing ingress-nginx
     NAME            HOSTS     ADDRESS           PORTS     AGE
    ingress-nginx   *         104.154.109.191   80, 443   13m


    Note that unlike the GCE Ingress, the same loadbalancer IP is shared amongst all Ingresses, because all requests are proxied through the same set of nginx controllers.

    Promote ephemeral to static IP

    To promote the allocated IP to static, you can update the Service manifest

    $ kubectl patch svc ingress-nginx-lb -p '{"spec": {"loadBalancerIP": "104.154.109.191"}}'
    "ingress-nginx-lb" patched

    and promote the IP to static (promotion works differently for cloudproviders, provided example is for GKE/GCE) `

    $ gcloud compute addresses create ingress-nginx-lb --addresses 104.154.109.191 --region us-central1
    Created [https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/ingress-nginx-lb].
     ---
     address: 104.154.109.191
     creationTimestamp: '2017-01-31T16:34:50.089-08:00'
     description: ''
     id: '5208037144487826373'
     kind: compute#address
     name: ingress-nginx-lb
     region: us-central1
     selfLink: https://www.googleapis.com/compute/v1/projects/kubernetesdev/regions/us-central1/addresses/ingress-nginx-lb
     status: IN_USE
     users:
     - us-central1/forwardingRules/a09f6913ae80e11e6a8c542010af0000
    diff --git a/examples/static-ip/nginx-ingress-controller.yaml b/examples/static-ip/nginx-ingress-controller.yaml
    index 30885ec54..61c3a8f7f 100644
    --- a/examples/static-ip/nginx-ingress-controller.yaml
    +++ b/examples/static-ip/nginx-ingress-controller.yaml
    @@ -1,7 +1,7 @@
     apiVersion: apps/v1
     kind: Deployment
     metadata:
    -  name: nginx-ingress-controller
    +  name: ingress-nginx-controller
       labels:
         app.kubernetes.io/name: ingress-nginx
         app.kubernetes.io/part-of: ingress-nginx
    @@ -18,14 +18,14 @@ spec:
             app.kubernetes.io/part-of: ingress-nginx
         spec:
           # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
    -      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
    +      # however, it is not a hard dependency of the ingress-nginx-controller itself and it may cause issues if port 10254 already is taken on the host
           # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
           # like with kubeadm
           # hostNetwork: true
           terminationGracePeriodSeconds: 60
           containers:
    -      - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.33.0
    -        name: nginx-ingress-controller
    +      - image: k8s.gcr.io/ingress-nginx/controller:v1.0.5
    +        name: controller
             readinessProbe:
               httpGet:
                 path: /healthz
    @@ -54,4 +54,4 @@ spec:
                     fieldPath: metadata.namespace
             args:
             - /nginx-ingress-controller
    -        - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
    +        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-lb
    diff --git a/examples/static-ip/static-ip-svc.yaml b/examples/static-ip/static-ip-svc.yaml
    index b64cf96cb..ee803951f 100644
    --- a/examples/static-ip/static-ip-svc.yaml
    +++ b/examples/static-ip/static-ip-svc.yaml
    @@ -2,7 +2,7 @@
     apiVersion: v1
     kind: Service
     metadata:
    -  name: nginx-ingress-lb
    +  name: ingress-nginx-lb
       labels:
         app.kubernetes.io/name: ingress-nginx
         app.kubernetes.io/part-of: ingress-nginx
    @@ -18,6 +18,6 @@ spec:
         name: https
         targetPort: 443
       selector:
    -    # Selects nginx-ingress-controller pods
    +    # Selects ingress-nginx-controller pods
         app.kubernetes.io/name: ingress-nginx
         app.kubernetes.io/part-of: ingress-nginx
    diff --git a/examples/tls-termination/index.html b/examples/tls-termination/index.html
    index 35ec158b0..d1a80cbbb 100644
    --- a/examples/tls-termination/index.html
    +++ b/examples/tls-termination/index.html
    @@ -39,10 +39,10 @@
     Events:
       FirstSeen LastSeen    Count   From                SubObjectPath   Type        Reason  Message
       --------- --------    -----   ----                -------------   --------    ------  -------
    -  7s        7s      1   {nginx-ingress-controller }         Normal      CREATE  default/nginx-test
    -  7s        7s      1   {nginx-ingress-controller }         Normal      UPDATE  default/nginx-test
    -  7s        7s      1   {nginx-ingress-controller }         Normal      CREATE  ip: 104.198.183.6
    -  7s        7s      1   {nginx-ingress-controller }         Warning     MAPPING Ingress rule 'default/nginx-test' contains no path definition. Assuming /
    +  7s        7s      1   {ingress-nginx-controller }         Normal      CREATE  default/nginx-test
    +  7s        7s      1   {ingress-nginx-controller }         Normal      UPDATE  default/nginx-test
    +  7s        7s      1   {ingress-nginx-controller }         Normal      CREATE  ip: 104.198.183.6
    +  7s        7s      1   {ingress-nginx-controller }         Warning     MAPPING Ingress rule 'default/nginx-test' contains no path definition. Assuming /
     
     $ curl 104.198.183.6 -L
     curl: (60) SSL certificate problem: self signed certificate
    diff --git a/kubectl-plugin/index.html b/kubectl-plugin/index.html
    index 616a90a34..0ce56f460 100644
    --- a/kubectl-plugin/index.html
    +++ b/kubectl-plugin/index.html
    @@ -44,7 +44,7 @@ Do not move it without providing redirects.
           --user string                    The name of the kubeconfig user to use
     
     Use "ingress-nginx [command] --help" for more information about a command.

    Common Flags

    • Every subcommand supports the basic kubectl configuration flags like --namespace, --context, --client-key and so on.
    • Subcommands that act on a particular ingress-nginx pod (backends, certs, conf, exec, general, logs, ssh), support the --deployment <deployment> and --pod <pod> flags to select either a pod from a deployment with the given name, or a pod with the given name. The --deployment flag defaults to ingress-nginx-controller.
    • Subcommands that inspect resources (ingresses, lint) support the --all-namespaces flag, which causes them to inspect resources in every namespace.

    Subcommands

    Note that backends, general, certs, and conf require ingress-nginx version 0.23.0 or higher.

    backends

    Run kubectl ingress-nginx backends to get a JSON array of the backends that an ingress-nginx controller currently knows about:

    $ kubectl ingress-nginx backends -n ingress-nginx
     [
       {
         "name": "default-apple-service-5678",
    @@ -174,7 +174,7 @@ Do not move it without providing redirects.
           https://github.com/kubernetes/ingress-nginx/issues/3174
     
     Checking deployments...
    -✗ namespace2/nginx-ingress-controller
    +✗ namespace2/ingress-nginx-controller
       - Uses removed config flag --sort-backends
           Lint added for version 0.22.0
           https://github.com/kubernetes/ingress-nginx/issues/3655
    @@ -189,7 +189,7 @@ Do not move it without providing redirects.
            https://github.com/kubernetes/ingress-nginx/issues/3743
     
     Checking deployments...
    -✗ namespace2/nginx-ingress-controller
    +✗ namespace2/ingress-nginx-controller
       - Uses removed config flag --enable-dynamic-certificates
           Lint added for version 0.24.0
           https://github.com/kubernetes/ingress-nginx/issues/3808
    @@ -210,7 +210,7 @@ Do not move it without providing redirects.
     I0405 16:53:46.193913       7 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"82258915-563e-11e9-9c52-025000000001", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
     ...
     

    ssh

    kubectl ingress-nginx ssh is exactly the same as kubectl ingress-nginx exec -it -- /bin/bash. Use it when you want to quickly be dropped into a shell inside a running ingress-nginx container.

    $ kubectl ingress-nginx ssh -n ingress-nginx
    -www-data@nginx-ingress-controller-7cbf77c976-wx5pn:/etc/nginx$
    +www-data@ingress-nginx-controller-7cbf77c976-wx5pn:/etc/nginx$