I have a Node.js back-end built on a micro-service architecture. Services communicate with each other via PubSub. E.g. upon a request to service A, service A messages either service B or service C via PubSub; B and C can message other services, and so on.
What exactly happens is up to the user. The problem is that in rare but unavoidable edge cases, this set-up can result in an endless loop of PubSub messages, e.g.
A->B->C->A and so on.
Infinite loops are, of course, bad: they bloat the system and could eventually crash it.
Now, one way to prevent that would be to pass a counter in the PubSub payload, increment it each time a service messages another service, and abort any further processing once a certain threshold is reached.
However, the user would then not get the result they expect.
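The counter approach could be sketched like this. A minimal sketch, assuming a plain object payload with a `hops` field and an arbitrary threshold of 25 (the field name, threshold, and `forwardMessage` helper are all assumptions, not part of any PubSub library):

```javascript
// Maximum number of service-to-service hops before processing is aborted.
// The value 25 is an arbitrary assumption; tune it to your deepest legitimate chain.
const MAX_HOPS = 25;

// Called by every service before it republishes a message.
// Returns true if the message was forwarded, false if it was dropped.
function forwardMessage(payload, publish) {
  const hops = (payload.hops ?? 0) + 1;
  if (hops > MAX_HOPS) {
    // Abort: the request has been passed around too often, likely a loop.
    return false;
  }
  publish({ ...payload, hops });
  return true;
}
```

A simulated A->B->C->A loop then stops after `MAX_HOPS` republishes instead of running forever, which is exactly the trade-off described above: the loop is broken, but the user's request is silently cut short.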
So I was thinking about just adding delays based on the counter value instead: the more often a given request is passed around between services, the longer the delay before proceeding, up to a fairly long maximum of e.g. 5 minutes.
However, I’m not really sure if that would be a good solution.
- What would be the risks of that approach?
- How would I measure whether the delay is long enough, and how many looping messages could be in flight at once without the system breaking down?