concurrency – Managing concurrent TCP connections with Go, Docker and Kubernetes

I need to consume several APIs concurrently. To do that, I decided to containerize each API client and manage the containers with Kubernetes. Some of those APIs need to be “walked”: they have one endpoint from which you get more endpoints, and so on. These “trees” are not that deep, but they change constantly, so hard-coding them is not an option. The main challenge is limiting the number of open TCP connections. If I just spawn a goroutine as soon as I get the endpoints, the program dies with “too many open file descriptors”. So the obvious solution is to implement a worker pool. The question is: how big should it be?

As far as my understanding of these technologies goes, the whole Kubernetes cluster has its own limit on open TCP connections, so the sum of connections across all containers should not exceed it. I should also be able to set a maximum number of TCP connections per Docker container. I think that somehow reading the system limit and then sizing the worker pool from it makes sense.