Troubleshooting Kubernetes Ingress Controller

Ovadiah Myrgorod
Dec 2, 2020


Troubleshooting the Kubernetes ingress controller is easy, and knowing when to do it can save you a lot of time. Here is a typical case where you may need it: a request produces an error, but no error logs are reported by the pod responsible for serving it.

When I was building a Drupal site, I noticed that adding a new field to a page produced a 502 Bad Gateway error. I thought the problem was a field type causing a fatal PHP error, but such errors are usually reported, and I could not find anything in the logs of the pod running Apache and PHP. After going back and forth, I decided to check the logs of the Nginx ingress controller.

Typically, an Nginx ingress controller pod runs on each Kubernetes node in the cluster, so the first step is to find all such pods.

Run the following command:

kubectl get pods --all-namespaces | grep ingress-nginx

It will output something like:

ingress-nginx    default-http-backend-49cd995543-kaff4
ingress-nginx nginx-ingress-controller-bqb4a
ingress-nginx nginx-ingress-controller-gqrb3
ingress-nginx nginx-ingress-controller-agsrd
ingress-nginx nginx-ingress-controller-24w2g
ingress-nginx nginx-ingress-controller-q7pzs

The first column is the namespace; the second is the pod name.
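Since the controller pods typically run one per node, it can also help to see which node each pod is scheduled on. A quick way to do that, assuming the standard ingress-nginx namespace:

```shell
# List the ingress controller pods together with the node each one runs on
kubectl get pods -n ingress-nginx -o wide
```

The extra NODE column from `-o wide` lets you correlate a failing request with the node that handled it.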

For each pod (since you are not sure which one handled the request), you can get the logs with the following command:

kubectl logs -n ingress-nginx <nginx-pod-name> | grep <url>

For example:

kubectl logs -n ingress-nginx nginx-ingress-controller-bqb4a | grep "node/add/person"

Outputs this:

2020/12/02 14:18:53 [error] 605#605: *436444 upstream sent too big header while reading response header from upstream, client:, server:, request: "GET /node/add/person HTTP/1.1", upstream: "", host: "", referrer: ""
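Checking each pod by hand gets tedious, so the per-pod steps above can be sketched as a single loop. This is a minimal sketch assuming the ingress-nginx namespace and the `/node/add/person` path from the example; adjust both to your setup:

```shell
# Grep the logs of every ingress controller pod for a given URL path,
# since we don't know in advance which pod handled the request.
for pod in $(kubectl get pods -n ingress-nginx -o name | grep nginx-ingress-controller); do
  echo "=== $pod ==="
  kubectl logs -n ingress-nginx "$pod" | grep "node/add/person"
done
```

`kubectl get pods -o name` emits names in the `pod/<name>` form, which `kubectl logs` accepts directly.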

As we can see, the actual error is upstream sent too big header while reading response header from upstream. It is caused by Drupal setting quite a long Cache-Tags header, which exceeds the default Nginx proxy buffer size.

The solution was to update the Ingress configuration to increase the proxy buffer size via an annotation, to something like this:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  labels:
    app: web
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffer-size: "10k"
spec:
  rules:
  - host:
    http:
      paths:
      - backend:
          serviceName: web-service
          servicePort: http

Depending on the issue, you can tweak other Nginx settings; see the ingress-nginx annotations documentation to get an idea of what you can set.
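If you prefer to change a setting for the whole cluster rather than per Ingress, the controller also reads options from its ConfigMap. A minimal sketch, assuming the controller was installed with the common `nginx-configuration` ConfigMap name in the ingress-nginx namespace (both depend on your installation):

```yaml
# Cluster-wide proxy buffer size for the Nginx ingress controller.
# ConfigMap name and namespace are assumptions; match them to your install.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-buffer-size: "16k"
```

Per-Ingress annotations override the ConfigMap value, so the annotation approach above is usually enough when only one site is affected.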


