Welcome to the Ingress Nginx new contributor tips.
This guide briefly outlines the necessary knowledge and tools required to start working on Ingress-NGINX issues.
### Prerequisites
- Basic understanding of Linux
- Familiarity with the command line on Linux
- OSI model (links below)
### Introduction
It all starts with the OSI model...
> The Open Systems Interconnection (OSI) model describes seven layers that computer systems use to communicate over a network. It was the first standard model for network communications, adopted by all major computer and telecommunication companies

#### Reading material for OSI Model
[OSI Model CertificationKits](https://www.certificationkits.com/cisco-certification/cisco-ccna-640-802-exam-certification-guide/cisco-ccna-the-osi-model/)
Not everybody knows everything, and a love or passion for the subject is a good start. But to move forward, it's the approach and not the knowledge that sustains prolonged joy while working on issues. If the approach is simple and driven by goodwill for the community, then the information and tools you need are forthcoming and easy to find.
Here we take a bird's-eye view of the hops in the network plumbing that a packet takes, from source to destination, when we run `curl` from a laptop to an nginx webserver process running in a container, inside a pod, inside a Kubernetes cluster created using `kind`, `minikube`, or any other cluster-management tool.
### [Kind](https://kind.sigs.k8s.io/) cluster example on a Linux Host
The destination of the packet from the curl command is looked up in the `routing table`. Based on the route, the packet first travels to the virtual bridge interface `172.18.0.1`, created by docker when we created the kind cluster on a laptop. Next, the packet is forwarded to `172.18.0.2` (see below for how we got this IP address), within the kind cluster. The `kube-proxy` container creates iptables rules that make sure the packet goes to the correct pod IP, in this case `10.244.0.5`.
If this part is confusing, you would first need to understand what a [bridge](https://tldp.org/HOWTO/BRIDGE-STP-HOWTO/what-is-a-bridge.html) is and what [docker network](https://docs.docker.com/network/) is.
#### The journey of a curl packet.
Let's begin with creating a [Kind](https://kind.sigs.k8s.io/docs/user/quick-start/) Cluster on your laptop
```
# kind create cluster
```
This will create a cluster called `kind`. To view the clusters, type:
```
# kind get clusters
kind
```
kind configures `kubectl` access to the newly created cluster automatically, so we can use `kubectl` to communicate with it.
```
# kubectl get no -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
```
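The walkthrough assumes an nginx deployment exposed as a `NodePort` service. A minimal sketch of creating it is below; the names are illustrative, and the NodePort (`32329`) simply matches the one used in the rest of this walkthrough:
```
# Create an nginx deployment and expose it as a NodePort service
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=NodePort --port=80
# Illustrative output; your ClusterIP and NodePort will differ
kubectl get svc nginx
NAME    TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.96.134.7   <none>        80:32329/TCP   2m
```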
Output Relevance: From the above output, we can see that our nginx pod is being exposed as the `NodePort` service type, and now we can curl the node IP `172.18.0.2` with the exposed port `32329`.
Command: The pod has an IP as shown below
```
# kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
```
On the host, you can see that there are two bridges connected to our system's network interface: one is the docker default bridge `docker0`, and the other, `br-2fffe5cd5d9e`, was created by kind.
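A quick way to list those bridges is shown below; the interface names and MAC addresses on your machine will differ:
```
$ ip -brief link show type bridge
docker0           DOWN   02:42:a1:b2:c3:d4 <NO-CARRIER,BROADCAST,MULTICAST,UP>
br-2fffe5cd5d9e   UP     02:42:11:22:33:44 <BROADCAST,MULTICAST,UP,LOWER_UP>
```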
Since kind creates nodes as containers, the node is easily accessible via `docker ps`.
```
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
230e7246a32c kindest/node:v1.24.1 "/usr/local/bin/entr…" 6 days ago Up 33 hours 127.0.0.1:38143->6443/tcp kind-control-plane
```
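The bridge gateway (`172.18.0.1`) and the node container's address (`172.18.0.2`) can also be confirmed by inspecting the docker network that kind created; the output below is illustrative:
```
$ docker network inspect kind | grep -E '"Gateway"|"IPv4Address"'
            "Gateway": "172.18.0.1",
            "IPv4Address": "172.18.0.2/16",
```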
If we do a docker `exec`, we can enter the container and also see the network interfaces within it.
```
# docker exec -it 230e7246a32c bash
root@kind-control-plane:/# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
```
Back on the host, the routing table shows which interface handles traffic destined for the node's network:
```
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-2fffe5cd5d9e
172.19.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-be5b544733a3
192.168.31.0    0.0.0.0         255.255.255.0   U     0      0        0 ethbr0
192.168.31.0    0.0.0.0         255.255.255.0   U     0      0        0 ethbr0
192.168.39.0    0.0.0.0         255.255.255.0   U     0      0        0 virbr2
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
```
Output Relevance: From the above output, you can see that the `Iface` (interface) for `172.18.0.0` is `br-2fffe5cd5d9e`. This means traffic destined for `172.18.0.0/16` goes through `br-2fffe5cd5d9e`, which was created by docker for the kind container (this is the node in the case of a kind cluster).
Now we need to understand how the packet travels from the container interface to the pod with IP `10.244.0.5`. The component that handles this is called `kube-proxy`.
So what exactly is [kube-proxy](https://kubernetes.io/docs/concepts/overview/components/#kube-proxy):
> Kube-Proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
> kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
Since kube-proxy manages the network rules required to enable communication to the pods, let's look at [iptables](https://linux.die.net/man/8/iptables):
> `iptables` is a command line interface used to set up and maintain tables for the Netfilter firewall for IPv4, included in the Linux kernel. The firewall matches packets with rules defined in these tables and then takes the specified action on a possible match. A table is a set of chains.
Command:
```
# iptables -t nat -L PREROUTING -n
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
DOCKER_OUTPUT all -- 0.0.0.0/0 172.18.0.1
CNI-HOSTPORT-DNAT all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
```
```
# iptables-save | grep PREROUTING
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
```
Output Relevance:
> -A: append new iptable rule
> -j: jump to the target
> KUBE-SERVICES: target
> The above output appends a new rule to PREROUTING, which every network packet goes through first when it tries to access any Kubernetes Service.
What is `PREROUTING` in iptables?
>PREROUTING: This chain processes packets as soon as they arrive, before (PRE) any routing decision is made.
To dig in further, we need to follow the target `KUBE-SERVICES` chain for our nginx service, as sketched below.
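A hedged sketch of following that chain inside the kind node container; the `KUBE-SVC-*` and `KUBE-SEP-*` chain names are generated by kube-proxy and will differ on your cluster:
```
# Run these inside the kind node (docker exec), as before.
# 1. Find the KUBE-SVC-* chain for the nginx service
iptables -t nat -L KUBE-SERVICES -n | grep nginx
# 2. List that chain; it jumps to one KUBE-SEP-* chain per endpoint
iptables -t nat -L KUBE-SVC-XXXXXXXXXXXXXXXX -n
# 3. The KUBE-SEP-* chain DNATs the packet to the pod IP, e.g. 10.244.0.5:80
iptables -t nat -L KUBE-SEP-XXXXXXXXXXXXXXXX -n
```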
As you can see, the rules added by `kube-proxy` help the packet reach its destination service.
### Minikube KVM VM Example on Linux
#### TL;DR
Now we look at the curl packet's journey on minikube. The destination of the curl packet is looked up in the `routing table`. The packet first travels to the virtual bridge `192.168.39.1`, created by the minikube kvm2 driver when we created the minikube cluster on a Linux laptop. This packet is then forwarded to `192.168.39.57`, within the minikube VM. We have docker containers running in the VM; among them, the `kube-proxy` container creates iptables rules that make sure the packet goes to the correct pod IP, in this case `172.17.0.4`.
To begin with the minikube example, we first need to create a minikube cluster on a Linux laptop. In this example the `kvm2` driver is used as the default driver for the `minikube start` command.
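If `kvm2` is not already your default driver, it can be set beforehand with minikube's config command:
```
# Make kvm2 the default driver for future `minikube start` runs
minikube config set driver kvm2
```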
```
minikube start
😄 minikube v1.26.0 on Arch "rolling"
🆕 Kubernetes 1.24.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.24.2
✨ Using the kvm2 driver based on existing profile
👍 Starting control plane node minikube in cluster minikube
🏃 Updating the running kvm2 "minikube" VM ...
🐳 Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
```
Minikube creates a virtual machine using the KVM2 driver (other drivers such as VirtualBox also exist; see `minikube start --help` for more information). You should be able to see the VM with the following output (you may have to use sudo to get this output):
```
$ virsh --connect qemu:///system list
Id Name State
--------------------------
1 minikube running
or
$ sudo virsh list
Id Name State
--------------------------
1 minikube running
```
Moving on, create an nginx deployment and expose it as a `NodePort` service using `kubectl`, as sketched below.
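A minimal sketch of that step; the names are illustrative, and the NodePort (`32007`) simply matches the one used in the rest of this walkthrough:
```
# Create an nginx deployment and expose it as a NodePort service
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --type=NodePort --port=80
# Illustrative output; your ClusterIP and NodePort will differ
kubectl get svc nginx
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.101.22.115   <none>        80:32007/TCP   1m
```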
Output Relevance: From the above output, we can see that our nginx pod is being exposed as the `NodePort` service type, and now we can curl the node IP `192.168.39.57` with the exposed port `32007`.
Command: The pod has an IP as shown below
```
# kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
```
Output Relevance: Looking at the host's routing table below, you can see two virtual bridges that were created when we set up the minikube cluster: `virbr0` is the default NAT network bridge, while `virbr2` is an isolated network bridge on which the minikube VM runs.
```
# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
172.18.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-2fffe5cd5d9e
172.19.0.0      0.0.0.0         255.255.0.0     U     0      0        0 br-be5b544733a3
192.168.31.0    0.0.0.0         255.255.255.0   U     0      0        0 ethbr0
192.168.31.0    0.0.0.0         255.255.255.0   U     0      0        0 ethbr0
192.168.39.0    0.0.0.0         255.255.255.0   U     0      0        0 virbr2
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
```
Output Relevance: As you can see, multiple routes are defined here; the route for `192.168.39.0/24` (via `virbr2`) covers our virtual machine's node IP (`192.168.39.57`), so the packet now knows where it has to go.
With that clear, we now know how the packet goes from the laptop to the virtual bridge and then enters the virtual machine.
Inside the virtual machine, [kube-proxy](https://kubernetes.io/docs/concepts/overview/components/#kube-proxy) handles the routing using iptables.
So what exactly is [kube-proxy](https://kubernetes.io/docs/concepts/overview/components/#kube-proxy) (for those who skipped the kind example):
> Kube-Proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept.
> kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
Since kube-proxy manages the network rules required to enable communication to the pods, let's look at [iptables](https://linux.die.net/man/8/iptables):
> `iptables` is a command line interface used to set up and maintain tables for the Netfilter firewall for IPv4, included in the Linux kernel. The firewall matches packets with rules defined in these tables and then takes the specified action on a possible match. A table is a set of chains.
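To see the same rules in the minikube example, enter the VM and inspect the NAT table; a minimal sketch (the output mirrors the kind example, so it is omitted here):
```
# Enter the minikube virtual machine
minikube ssh
# Inspect the rules kube-proxy created
sudo iptables -t nat -L PREROUTING -n
sudo iptables-save | grep PREROUTING
```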
As you can see, the rules added by `kube-proxy` help the packet reach its destination service.
### Connection termination
Connection termination is an event that occurs when load balancers are present. Information on this topic is quite scarce; however, the following article, [IBM - Network Termination](https://www.ibm.com/docs/en/sva/9.0.4?topic=balancer-network-termination), describes what connection termination means between clients (the laptop), the server (the load balancer), and the various other services.
### Different types of connection errors.
The following article on [TCP/IP errors](https://www.ibm.com/docs/en/db2/11.1?topic=message-tcpip-errors) lists the important TCP/IP errors that we need to know about.

| Error | What it usually means |
|---|---|
| No space is left on a device or system table. | The disk partition is full. |
| No route to the host is available. | The routing table doesn't know where to route the packet. |
| Connection was reset by the partner. | The packet was dropped as soon as it reached the server, often due to a firewall. |
| The connection was timed out. | A firewall is blocking the connection, or the connection took too long. |
## OSI Model Layer 7 (Application Layer)
[What is layer 7?](https://www.cloudflare.com/learning/ddos/what-is-layer-7/)
#### Summary
Layer 7 refers to the seventh and topmost layer of the Open Systems Interconnection (OSI) model, known as the application layer. This is the highest layer, which supports end-user processes and applications. Layer 7 identifies the communicating parties and the quality of service between them, considers privacy and user authentication, and identifies any constraints on the data syntax. This layer is wholly application-specific.
## Setting up Ingress-Nginx Controller
Since we are doing this on our local laptop, we are going to use the following tools:
- [Minikube using KVM driver](https://minikube.sigs.k8s.io/docs/start/) - The host is linux-based in our example
- [MetalLB](https://metallb.universe.tf/) - A bare-metal load balancer, so that we get a `LoadBalancer` IP on the laptop
- [Ingress-NGINX Controller](https://kubernetes.github.io/ingress-nginx/deploy/) - The ingress controller itself
### So let's begin with Metallb and Ingress-Nginx setup.
For setting up metallb, we are going to follow the below steps:
- To begin the installation, we will execute:
```
minikube start
```
- To install Metallb, one can use the [manifest](https://metallb.universe.tf/installation/#installation-by-manifest) or [helm](https://metallb.universe.tf/installation/#installation-with-helm); for now we will use the manifest method, sketched below:
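A hedged sketch of the manifest install; the version in the URL changes over time, so copy the exact command from the MetalLB installation page linked above:
```
# Apply the MetalLB native manifest (check the docs for the current version)
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
```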
- We now need to configure Metallb. We are using the [Layer 2 configuration](https://metallb.universe.tf/configuration/#announce-the-service-ips); head over to the [Metallb Configuration](https://metallb.universe.tf/configuration/) page to see how to set it up.
>Layer 2 mode does not require the IPs to be bound to the network interfaces of your worker nodes. It works by responding to ARP requests on your local network directly, to give the machine’s MAC address to clients.
> In order to advertise the IP coming from an IPAddressPool, an L2Advertisement instance must be associated to the IPAddressPool.
- We have modified the IP address pool so that our load balancer knows which subnet to choose an IP from. Since we have only one minikube IP, we need to modify the example given in the documentation.
Save this as `metallb-config.yaml`:
```
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
  # The configuration website shows you this
  #- 192.168.10.0/24
  #- 192.168.9.1-192.168.9.5
  #- fc00:f853:0ccd:e799::/124
  # We are going to change this to `minikube ip` as such
  - 192.168.39.57/32
```
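As the quoted configuration docs note, an `L2Advertisement` must reference the pool. A minimal sketch (the resource name is illustrative) that can be appended to the same `metallb-config.yaml`, separated by `---`:
```
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: first-pool-advertisement
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool
```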
Now deploy it using `kubectl`
```
kubectl apply -f metallb-config.yaml
```
- Now that metallb is set up, let's install [ingress-nginx](https://kubernetes.github.io/ingress-nginx/deploy/#quick-start) on the laptop.
Note: We are using the install-by-manifest option from the installation manual, as sketched below.
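A hedged sketch of the remaining steps. The controller version in the manifest URL changes over time (copy the exact command from the quick-start page linked above), and the backend deployment and hostname below simply match the ones used in this walkthrough:
```
# Install the controller by manifest (check the deploy docs for the current URL)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml

# Create a backend and an Ingress rule for httpd.dev.leonnunes.com
kubectl create deployment httpd --image=httpd
kubectl expose deployment httpd --port=80
kubectl create ingress httpd --class=nginx --rule="httpd.dev.leonnunes.com/*=httpd:80"
```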
The above creates an Ingress for us with a rule that matches the service when the host is `httpd.dev.leonnunes.com`. The class here is retrieved from the command below.
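For example (the age shown is illustrative):
```
$ kubectl get ingressclass
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       5m
```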
We have set up `Ingress-Nginx` using `nginx` as the class and `httpd` as the backend for this example.
In order to display the Layer 7 information, we extracted it from a simple `curl` request, and then, using the `tcpdump` command within the `httpd` pod, we captured the network packets and opened them with the `Wireshark` utility.
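A hedged sketch of how such a capture can be taken (the pod name is illustrative, and `tcpdump` must be available in the image):
```
# Capture HTTP traffic inside the httpd pod
kubectl exec -it httpd-757fb56c8d-w52qx -- tcpdump -i eth0 -w /tmp/http.pcap port 80
# Copy the capture to the laptop and open it with Wireshark
kubectl cp httpd-757fb56c8d-w52qx:/tmp/http.pcap ./http.pcap
wireshark ./http.pcap
```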
The captured packets show the information that the `httpd` pod receives. The `curl` command sends the host header, `Host: httpd.dev.leonnunes.com`, to the nginx controller, which then matches the rule and routes the request to the right service.
```
$ curl -v --resolve httpd.dev.leonnunes.com:80:192.168.39.57 http://httpd.dev.leonnunes.com/
* Added httpd.dev.leonnunes.com:80:192.168.39.57 to DNS cache
* Trying 192.168.39.57:80...
* Connected to 192.168.39.57 (192.168.39.57) port 80 (#0)
> GET / HTTP/1.1
> Host: httpd.dev.leonnunes.com
> User-Agent: curl/7.84.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Date: Mon, 22 Aug 2022 16:05:27 GMT
< Content-Type: text/html
< Content-Length: 45
< Connection: keep-alive
< Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
< ETag: "2d-432a5e4a73a80"
< Accept-Ranges: bytes
<
<html><body><h1>It works!</h1></body></html>
* Connection #0 to host 192.168.39.57 left intact
```
As you can see from the above output, there are several headers added to the curl output after it reaches the `httpd` pod; these headers are added by the Ingress-NGINX Controller.
Here are some useful tools for exploring the network path described above:
- [netstat](https://www.lifewire.com/netstat-command-2618098) - List network details
- [curl](https://linuxhandbook.com/curl-command-examples/) - Curl a website from the command line
- [ifconfig](https://www.tecmint.com/ifconfig-command-examples/)/[ip](https://www.geeksforgeeks.org/ip-command-in-linux-with-examples/) - Show ip address configuration