diff --git a/404.html b/404.html
index 4376d9939..94ef55bc6 100644
--- a/404.html
+++ b/404.html
@@ -590,8 +590,8 @@
-    Exposing TCP and UDP services
+    Regular expressions in paths
@@ -908,18 +908,6 @@
-    Custom Upstream server checks
     External authentication
diff --git a/deploy/baremetal/index.html b/deploy/baremetal/index.html
index 900e622ec..5b21c5611 100644
--- a/deploy/baremetal/index.html
+++ b/deploy/baremetal/index.html
@@ -1216,32 +1204,16 @@ by a DHCP server.

    Example

    Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl describe node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

    After creating the following ConfigMap, MetalLB takes ownership of one of the IP addresses in the pool and updates the loadBalancer IP field of the ingress-nginx Service accordingly.

apiVersion: v1
     kind: ConfigMap
     metadata:
       namespace: metallb-system
    @@ -1254,29 +1226,21 @@ the loadBalancer IP field of the ingress-nginx
           addresses:
           - 203.0.113.2-203.0.113.3
     
$ kubectl -n ingress-nginx get svc
NAME                   TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)
default-http-backend   ClusterIP     10.0.64.249    <none>       80/TCP
ingress-nginx          LoadBalancer  10.0.220.217   203.0.113.3  80:30100/TCP,443:30101/TCP

    As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service:

$ curl -D- http://203.0.113.3 -H 'Host: myapp.example.com'
HTTP/1.1 200 OK
Server: nginx/1.15.2

    Tip

    @@ -1301,29 +1265,20 @@ requests.

    Example

    Given the NodePort 30100 allocated to the ingress-nginx Service

$ kubectl -n ingress-nginx get svc
NAME                   TYPE        CLUSTER-IP     PORT(S)
default-http-backend   ClusterIP   10.0.64.249    80/TCP
ingress-nginx          NodePort    10.0.220.217   80:30100/TCP,443:30101/TCP

    and a Kubernetes node with the public IP address 203.0.113.2 (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl describe node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

    a client would reach an Ingress with host: myapp.example.com at http://myapp.example.com:30100, where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address.

    @@ -1355,30 +1310,20 @@ the NGINX Ingress controller should be scheduled or not scheduled.

    Example

    In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl describe node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

    with a nginx-ingress-controller Deployment composed of 2 replicas

$ kubectl -n ingress-nginx get pod -o wide
NAME                                       READY   STATUS    IP           NODE
default-http-backend-7c5bc89cc9-p86md      1/1     Running   172.17.1.1   host-2
nginx-ingress-controller-cf9ff8c96-8vvf8   1/1     Running   172.17.0.3   host-3
nginx-ingress-controller-cf9ff8c96-pxsds   1/1     Running   172.17.1.4   host-2

Requests sent to host-2 and host-3 would be forwarded to NGINX and the original client's IP would be preserved, while requests to host-1 would be dropped because there is no NGINX replica running on that node.
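
This behavior matches a Service whose external traffic policy is Local. A minimal sketch of the relevant fields, assuming the ingress-nginx NodePort Service from the example above (the rest of the spec is elided):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  # Only nodes running a controller Pod accept traffic; the client source IP is preserved
  externalTrafficPolicy: Local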

@@ -1388,13 +1333,10 @@ while requests to host-1 would get dropped because

    Because NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller does not update the status of Ingress objects it manages.

$ kubectl get ingress
NAME           HOSTS               ADDRESS   PORTS
test-ingress   myapp.example.com             80

Despite the fact there is no load balancer providing a public IP address to the NGINX Ingress controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx
@@ -1409,39 +1351,26 @@
documentation as well as the section about External IPs

    Example

    Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl describe node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

    one could edit the ingress-nginx Service and add the following field to the object spec

spec:
  externalIPs:
  - 203.0.113.1
  - 203.0.113.2
  - 203.0.113.3

    which would in turn be reflected on Ingress objects as follows:

$ kubectl get ingress -o wide
NAME           HOSTS               ADDRESS                               PORTS
test-ingress   myapp.example.com   203.0.113.1,203.0.113.2,203.0.113.3   80
@@ -1453,15 +1382,11 @@ for generating redirect URLs that take into account the URL used by external clients

      Example

      Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain, are generated without NodePort:

$ curl -D- http://myapp.example.com:30100
HTTP/1.1 308 Permanent Redirect
Server: nginx/1.15.2
Location: https://myapp.example.com/  #-> missing NodePort in HTTPS redirect

    Via the host network

@@ -1475,13 +1400,10 @@ interfaces, without the extra network translation imposed by NodePort Services.
If the ingress-nginx Service exists in the target cluster, it is recommended to delete it.

    This can be achieved by enabling the hostNetwork option in the Pods' spec.

template:
  spec:
    hostNetwork: true

    Security considerations

@@ -1492,35 +1414,24 @@ including the host's loopback. Please evaluate the impact this may have on the security of your system.

    Example

Consider this nginx-ingress-controller Deployment composed of 2 replicas: NGINX Pods inherit the IP address of their host instead of an internal Pod IP.

$ kubectl -n ingress-nginx get pod -o wide
NAME                                       READY   STATUS    IP            NODE
default-http-backend-7c5bc89cc9-p86md      1/1     Running   172.17.1.1    host-2
nginx-ingress-controller-5b4cf5fc6-7lg6c   1/1     Running   203.0.113.3   host-3
nginx-ingress-controller-5b4cf5fc6-lzrls   1/1     Running   203.0.113.2   host-2

One major limitation of this deployment approach is that only a single NGINX Ingress controller Pod may be scheduled on each cluster node, because binding the same port multiple times on the same network interface is technically impossible. Pods that are unschedulable for this reason fail with the following event:

$ kubectl -n ingress-nginx describe pod <unschedulable-nginx-ingress-controller-pod>
...
Events:
  Type     Reason            From               Message
  ----     ------            ----               -------
  Warning  FailedScheduling  default-scheduler  0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.

    One way to ensure only schedulable Pods are created is to deploy the NGINX Ingress controller as a DaemonSet instead of a traditional Deployment.
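
A minimal sketch of such a DaemonSet, not the project's reference manifest: the image tag, labels and the omission of RBAC wiring are illustrative assumptions:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      # One Pod per node, bound directly to the node's network interfaces
      hostNetwork: true
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.18.0
        args:
        - /nginx-ingress-controller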

    @@ -1545,13 +1456,10 @@ expected to resolve internal names for any reason.

    Because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank.

$ kubectl get ingress
NAME           HOSTS               ADDRESS   PORTS
test-ingress   myapp.example.com             80

Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP
@@ -1559,26 +1467,18 @@
address of all nodes running the NGINX Ingress controller.
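
A sketch of how this flag might be passed to the controller; everything except the flag name itself is an illustrative fragment of a Deployment or DaemonSet Pod template:

containers:
- name: nginx-ingress-controller
  args:
  - /nginx-ingress-controller
  # Publish the nodes' internal IPs in the status of managed Ingress objects
  - --report-node-internal-ip-address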

    Example

    Given a nginx-ingress-controller DaemonSet composed of 2 replicas

$ kubectl -n ingress-nginx get pod -o wide
NAME                                       READY   STATUS    IP            NODE
default-http-backend-7c5bc89cc9-p86md      1/1     Running   172.17.1.1    host-2
nginx-ingress-controller-5b4cf5fc6-7lg6c   1/1     Running   203.0.113.3   host-3
nginx-ingress-controller-5b4cf5fc6-lzrls   1/1     Running   203.0.113.2   host-2

    the controller sets the status of all Ingress objects it manages to the following value:

$ kubectl get ingress -o wide
NAME           HOSTS               ADDRESS                   PORTS
test-ingress   myapp.example.com   203.0.113.2,203.0.113.3   80
    @@ -1611,46 +1511,28 @@ Service. These IP addresses must belong to the target node.

    Example

    Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)

$ kubectl describe node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3

    and the following ingress-nginx NodePort Service

$ kubectl -n ingress-nginx get svc
NAME                   TYPE        CLUSTER-IP     PORT(S)
ingress-nginx          NodePort    10.0.220.217   80:30100/TCP,443:30101/TCP

    One could set the following external IPs in the Service spec, and NGINX would become available on both the NodePort and the Service port:

spec:
  externalIPs:
  - 203.0.113.2
  - 203.0.113.3
$ curl -D- http://myapp.example.com:30100
HTTP/1.1 200 OK
Server: nginx/1.15.2
@@ -1658,7 +1540,6 @@ and the Service port:
HTTP/1.1 200 OK
Server: nginx/1.15.2

    We assume the myapp.example.com subdomain above resolves to both 203.0.113.2 and 203.0.113.3 IP addresses.

diff --git a/deploy/index.html b/deploy/index.html
index b076e07db..4b02a2728 100644
--- a/deploy/index.html
+++ b/deploy/index.html
@@ -1392,9 +1380,8 @@

    Generic Deployment

    The following resources are required for a generic deployment.

    Mandatory command

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

    Attention

@@ -1410,37 +1397,30 @@ To change this behavior use the flag --watch-namespace
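
The note above mentions the --watch-namespace flag; a sketch of passing it in the controller arguments (the namespace value is illustrative):

args:
- /nginx-ingress-controller
# Restrict the controller to Ingress objects in a single namespace
- --watch-namespace=my-namespace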

    Docker for Mac

    Kubernetes is available in Docker for Mac (from version 18.06.0-ce)

    Create a service

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

    minikube

    For standard usage:

minikube addons enable ingress

    For development:

1. Disable the ingress addon:

$ minikube addons disable ingress

2. Execute make dev-env
3. Confirm the nginx-ingress-controller deployment exists:
$ kubectl get pods -n ingress-nginx
NAME                                       READY     STATUS    RESTARTS   AGE
default-http-backend-66b447d9cf-rrlf9      1/1       Running   0          12s
nginx-ingress-controller-fdcdcd6dd-vvpgs   1/1       Running   0          11s

    AWS

In AWS we use an Elastic Load Balancer (ELB) to expose the NGINX Ingress controller behind a Service of Type=LoadBalancer.
@@ -1455,21 +1435,17 @@ Please check the

    For L4:

    Check that no change is necessary with regards to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the ELB Idle Timeouts section for additional information. If a change is required, users will need to update the value of service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout in provider/aws/service-l4.yaml
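
For reference, this annotation lives in the Service metadata; a sketch with an assumed value of 60 seconds:

metadata:
  annotations:
    # Idle timeout in seconds; "60" is an illustrative value, not a recommendation
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"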

    Then execute:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l4.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l4.yaml

    For L7:

Change the line in the file provider/aws/service-l7.yaml that contains the dummy id "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX", replacing it with a valid certificate ARN.
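
The line in question is the standard AWS certificate annotation on the Service; a sketch reusing the same dummy ARN:

metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"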

    Check that no change is necessary with regards to the ELB idle timeout. In some scenarios, users may want to modify the ELB idle timeout, so please check the ELB Idle Timeouts section for additional information. If a change is required, users will need to update the value of service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout in provider/aws/service-l7.yaml

    Then execute:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l7.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l7.yaml

This example creates an ELB with just two listeners, one on port 80 and another on port 443.

    Listeners

    @@ -1480,26 +1456,22 @@ Please check the

    More information with regards to idle timeouts for your Load Balancer can be found in the official AWS documentation.

    Network Load Balancer (NLB)

    This type of load balancer is supported since v1.10.0 as an ALPHA feature.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-nlb.yaml

    GCE - GKE

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

    Important Note: proxy protocol is not supported in GCE/GKE

    Azure

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml

    Bare-metal

    Using NodePort:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml

    Tip

    @@ -1507,40 +1479,32 @@ Please check the

    Verify installation

    To check if the ingress controller pods have started, run the following command:

kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx --watch

Once the ingress controller pods are running, you can cancel the above command by typing Ctrl+C. Now, you are ready to create your first ingress.
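
A minimal first Ingress might look like the following sketch; the host, backend Service name and port are illustrative:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp
          servicePort: 80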

    Detect installed version

To detect which version of the ingress controller is running, exec into the pod and run the nginx-ingress-controller --version command.

POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version

    Using Helm

    NGINX Ingress controller can be installed via Helm using the chart stable/nginx-ingress from the official charts repository. To install the chart with the release name my-nginx:

helm install stable/nginx-ingress --name my-nginx

If the Kubernetes cluster has RBAC enabled, then run:

helm install stable/nginx-ingress --name my-nginx --set rbac.create=true

    Detect installed version:

POD_NAME=$(kubectl get pods -l app.kubernetes.io/name=ingress-nginx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD_NAME -- /nginx-ingress-controller --version

diff --git a/deploy/upgrade/index.html b/deploy/upgrade/index.html
index c13fcb01c..3b0caaab5 100644
--- a/deploy/upgrade/index.html
+++ b/deploy/upgrade/index.html
@@ -1149,20 +1137,7 @@ make sure your templates are compatible with the new version of ingress-nginx

To upgrade your ingress-nginx installation, it should be enough to change the version of the image in the controller Deployment.

    I.e. if your deployment resource looks like (partial example):

kind: Deployment
     metadata:
       name: nginx-ingress-controller
       namespace: ingress-nginx
    @@ -1177,23 +1152,19 @@ in the controller Deployment.

        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
        args: ...

    simply change the 0.9.0 tag to the version you wish to upgrade to. The easiest way to do this is e.g. (do note you may need to change the name parameter according to your installation):

kubectl set image deployment/nginx-ingress-controller \
  nginx-ingress-controller=quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.18.0

    For interactive editing, use kubectl edit deployment nginx-ingress-controller.

    With Helm

    If you installed ingress-nginx using the Helm command in the deployment docs so its name is ngx-ingress, you should be able to upgrade using

helm upgrade --reuse-values ngx-ingress stable/nginx-ingress

diff --git a/development/index.html b/development/index.html
index a056c3819..87bd133be 100644
--- a/development/index.html
+++ b/development/index.html
@@ -1281,17 +1269,12 @@ It includes how to build, test, and release ingress controllers.

    Quick Start

    Getting the code

    The code must be checked out as a subdirectory of k8s.io, and not github.com.

mkdir -p $GOPATH/src/k8s.io
cd $GOPATH/src/k8s.io
# Replace "$YOUR_GITHUB_USERNAME" below with your github username
git clone https://github.com/$YOUR_GITHUB_USERNAME/ingress-nginx.git
cd ingress-nginx

    Initial developer environment build

@@ -1299,35 +1282,24 @@ cd ingress-nginx
See releases for installation instructions.

    If you are using MacOS and deploying to minikube, the following command will build the local nginx controller container image and deploy the ingress controller onto a minikube cluster with RBAC enabled in the namespace ingress-nginx:

$ make dev-env

    Updating the deployment

The nginx controller container image can be rebuilt using:

$ ARCH=amd64 TAG=dev REGISTRY=$USER/ingress-controller make build container

The image will only be used by pods created after the rebuild. To delete old pods, which will cause new ones to spin up:

$ kubectl get pods -n ingress-nginx
$ kubectl delete pod -n ingress-nginx nginx-ingress-controller-<unique-pod-id>

    Dependencies

    The build uses dependencies in the vendor directory, which must be installed before building a binary/image. Occasionally, you might need to update the dependencies.

    This guide requires you to install the dep dependency tool.

    Check the version of dep you are using and make sure it is up to date.

$ dep version
     dep:
      version     : devel
      build date  : 
    @@ -1336,84 +1308,63 @@ might need to update the dependencies.

 go compiler : gc
 platform    : linux/amd64

    If you have an older version of dep, you can update it as follows:

$ go get -u github.com/golang/dep

    This will automatically save the dependencies to the vendor/ directory.

$ cd $GOPATH/src/k8s.io/ingress-nginx
$ dep ensure
$ dep ensure -update
$ dep prune

    Building

    All ingress controllers are built through a Makefile. Depending on your requirements you can build a raw server binary, a local container image, or push an image to a remote repository.

    In order to use your local Docker, you may need to set the following environment variables:

# "gcloud docker" (default) or "docker"
$ export DOCKER=<docker>

# "quay.io/kubernetes-ingress-controller" (default), "index.docker.io", or your own registry
$ export REGISTRY=<your-docker-registry>

    To find the registry simply run: docker system info | grep Registry

    Nginx Controller

Build a raw server binary

$ make build

    TODO: add more specific instructions needed for raw server binary.

    Build a local container image

$ TAG=<tag> REGISTRY=$USER/ingress-controller make docker-build

    Push the container image to a remote repository

$ TAG=<tag> REGISTRY=$USER/ingress-controller make docker-push

    Deploying

There are several ways to deploy the ingress controller onto a cluster. Please check the deployment guide.

    Testing

    To run unit-tests, just run

$ cd $GOPATH/src/k8s.io/ingress-nginx
$ make test

    If you have access to a Kubernetes cluster, you can also run e2e tests using ginkgo.

$ cd $GOPATH/src/k8s.io/ingress-nginx
$ make e2e-test

    To run unit-tests for lua code locally, run:

$ cd $GOPATH/src/k8s.io/ingress-nginx
$ ./rootfs/etc/nginx/lua/test/up.sh
$ make lua-test

    Lua tests are located in $GOPATH/src/k8s.io/ingress-nginx/rootfs/etc/nginx/lua/test. When creating a new test file it must follow the naming convention <mytest>_test.lua or it will be ignored.

    Releasing

    @@ -1422,9 +1373,8 @@ to a wider Kubernetes user base, push the image to a container registry, like gcr.io. All release images are hosted under gcr.io/google_containers and tagged according to a semver scheme.

An example release might look like:

$ make release

    Please follow these guidelines to cut a release: