# gRPC

This example demonstrates how to route traffic to a gRPC service through the Ingress-NGINX controller.

## Prerequisites

1. You have a Kubernetes cluster running.
2. You have a domain name such as `example.com` that is configured to route traffic to the Ingress-NGINX controller.
3. You have the ingress-nginx-controller installed as per the docs.
4. You have a backend application running a gRPC server and listening for TCP traffic. If you want, you can use the example application described in Step 1 below.
5. You're also responsible for provisioning an SSL certificate for the ingress. So you need to have a valid SSL certificate, deployed as a Kubernetes secret of type `tls`, in the same namespace as the gRPC application. (A sketch of creating such a secret appears at the end of this page.)

### Step 1: Create a Kubernetes `Deployment` for the gRPC app

- Make sure your gRPC application pod is running and listening for connections. For example, you can try a kubectl command like the one below:

```console
$ kubectl get po -A -o wide | grep go-grpc-greeter-server
```

- If you already have a gRPC app deployed in your cluster, skip the rest of this step and continue from Step 2 below.
- As an example gRPC application, we can use the `go-grpc-greeter-server` app referenced in the kubectl command above.
- To create a container image for this app, you can use [this Dockerfile](https://github.com/kubernetes/ingress-nginx/blob/main/images/go-grpc-greeter-server/rootfs/Dockerfile).
- If you use the Dockerfile mentioned above to create an image, then you can use the following example Kubernetes manifest to create a Deployment resource that uses that image. If necessary, edit this manifest to suit your needs.

```
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: go-grpc-greeter-server
  name: go-grpc-greeter-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-grpc-greeter-server
  template:
    metadata:
      labels:
        app: go-grpc-greeter-server
    spec:
      containers:
      - image: <reponame>/go-grpc-greeter-server  # Edit this for your reponame
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 50m
            memory: 50Mi
        name: go-grpc-greeter-server
        ports:
        - containerPort: 50051
EOF
```

### Step 2: Create the Kubernetes `Service` for the gRPC app

- You can use the following example manifest to create a Service of type ClusterIP. Edit the name/namespace/label/port to match your deployment/pod.

```
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  labels:
    app: go-grpc-greeter-server
  name: go-grpc-greeter-server
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 50051
  selector:
    app: go-grpc-greeter-server
  type: ClusterIP
EOF
```

- An example Ingress that routes gRPC traffic to this Service, plus a quick connectivity test, is sketched at the end of this page.

> If you are developing public gRPC endpoints, check out
> https://proto.stack.build, a protocol buffer / gRPC build service that you can
> use to help make it easier for your users to consume your API.

> See also the specific gRPC settings of NGINX: https://nginx.org/en/docs/http/ngx_http_grpc_module.html

### Notes on using response/request streams

> `grpc_read_timeout` and `grpc_send_timeout` will be set as `proxy_read_timeout` and `proxy_send_timeout` when you set the backend protocol to `GRPC` or `GRPCS`.

1. If your server only does response streaming and you expect a stream to be open longer than 60 seconds, you will have to change the `grpc_read_timeout` to accommodate this.
2. If your service only does request streaming and you expect a stream to be open longer than 60 seconds, you have to change the `grpc_send_timeout` and the `client_body_timeout`.
3. If you do both response and request streaming with an open stream longer than 60 seconds, you have to change all three timeouts: `grpc_read_timeout`, `grpc_send_timeout` and `client_body_timeout`.

An annotation sketch for raising these timeouts is also included at the end of this page.
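### Example: Create the TLS secret

Prerequisite 5 above requires a secret of type `tls` in the same namespace as the gRPC application. How you obtain the certificate is up to you; as a minimal sketch, assuming you already have the certificate and key on disk, and using a placeholder secret name and placeholder file paths:

```console
$ kubectl create secret tls wildcard.dev.mydomain.com \
    --cert=path/to/tls.crt \
    --key=path/to/tls.key \
    --namespace=default   # use the namespace of your gRPC app
```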
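### Example: Ingress for the gRPC service

To route gRPC traffic from the controller to the Service created in Step 2, you need an Ingress with the gRPC backend protocol annotation. Here is a minimal sketch, assuming the `go-grpc-greeter-server` Service from Step 2, a hypothetical host `grpctest.dev.mydomain.com`, and a hypothetical TLS secret named `wildcard.dev.mydomain.com` in the same namespace. The key piece is the `nginx.ingress.kubernetes.io/backend-protocol: "GRPC"` annotation, which tells the controller to proxy to the backend over gRPC rather than HTTP/1.1 (use `GRPCS` if your backend itself terminates TLS):

```
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    # Proxy to the backend Service using gRPC instead of HTTP/1.1.
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
  name: go-grpc-greeter-server
spec:
  ingressClassName: nginx
  rules:
  - host: grpctest.dev.mydomain.com   # hypothetical host, replace with yours
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: go-grpc-greeter-server
            port:
              number: 80
  tls:
  - hosts:
    - grpctest.dev.mydomain.com
    secretName: wildcard.dev.mydomain.com   # hypothetical secret name (prerequisite 5)
EOF
```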
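Once the Ingress is admitted and DNS for the host points at the controller, you can check connectivity with any gRPC client. For example, with `grpcurl` (assuming the hypothetical host above, a certificate the client trusts, and gRPC reflection enabled on the backend), listing the exposed services should succeed:

```console
$ grpcurl grpctest.dev.mydomain.com:443 list
```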
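### Example: Raising the stream timeouts

For the streaming notes above, one way to raise the first two timeouts on a per-Ingress basis is the `proxy-read-timeout` / `proxy-send-timeout` annotations, since those values are applied as `grpc_read_timeout` / `grpc_send_timeout` when the backend protocol is `GRPC` or `GRPCS`. A sketch, assuming you want streams to stay open for up to one hour:

```
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    # Applied as grpc_read_timeout / grpc_send_timeout (values in seconds)
    # because the backend protocol is GRPC.
    nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "3600"
```

`client_body_timeout` is not set per Ingress in this sketch; it can be raised controller-wide via the `client-body-timeout` option in the Ingress-NGINX ConfigMap.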