Worried about too many joins when fetching multiple data types in a single Postgres query

It's not exactly my use case (it would take too long to explain), but imagine a task management application where tasks for employees are displayed in a list along with some information about each task. These are some of the task types:

  • Simple Note like “take out the trash” or “refill fridge”
  • Customer Support Ticket for a customer
  • Delivery Job with multiple destinations

Each task can be assigned to multiple employees until someone claims it and later marks it as completed.

Here's how I would model this:

(1): https://i.stack.imgur.com/tgbv7.png
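
In case the diagram doesn't load, this is roughly the structure it shows (all table and column names here are illustrative, not my actual ones):

    CREATE TABLE employee (
        id   serial PRIMARY KEY,
        name text NOT NULL
    );

    CREATE TABLE task (
        id         serial PRIMARY KEY,
        claimed_by int REFERENCES employee (id),
        completed  boolean NOT NULL DEFAULT false
    );

    -- n:m between employees and tasks, until someone claims the task
    CREATE TABLE assignment (
        task_id     int NOT NULL REFERENCES task (id),
        employee_id int NOT NULL REFERENCES employee (id),
        PRIMARY KEY (task_id, employee_id)
    );

    -- each task has exactly one matching row in ONE of these subtype tables
    CREATE TABLE note (
        task_id int PRIMARY KEY REFERENCES task (id),
        content text NOT NULL
    );

    CREATE TABLE customer (
        id   serial PRIMARY KEY,
        name text NOT NULL
    );

    CREATE TABLE customer_support (
        task_id     int PRIMARY KEY REFERENCES task (id),
        customer_id int NOT NULL REFERENCES customer (id),
        issue       text NOT NULL
    );

    CREATE TABLE delivery (
        task_id int PRIMARY KEY REFERENCES task (id)
    );

    CREATE TABLE delivery_destination (
        id       serial PRIMARY KEY,
        task_id  int NOT NULL REFERENCES delivery (task_id),
        position int NOT NULL,
        address  text NOT NULL
    );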

Now, in order to get all the information for all the tasks of a given employee, I would first have to join every assignment relevant to that employee, then join the task for each assignment. Then, for every task, I need to check whether there is a matching row in any of the task type tables (keep in mind there are more of them than shown in the diagram); for CustomerSupport the customer info needs to be joined as well, and for each Delivery every destination is joined (as a JSON array in the query).
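
To make the join structure concrete, here is a sketch of the query I would generate against the illustrative schema above (one extra LEFT JOIN per task type):

    SELECT t.id,
           n.content AS note_content,
           cs.issue,
           c.name    AS customer_name,
           dest.destinations
    FROM   assignment a
    JOIN   task t                   ON t.id = a.task_id
    LEFT   JOIN note n              ON n.task_id = t.id
    LEFT   JOIN customer_support cs ON cs.task_id = t.id
    LEFT   JOIN customer c          ON c.id = cs.customer_id
    LEFT   JOIN LATERAL (           -- collapse the 1:n destinations into one JSON array
        SELECT json_agg(dd.address ORDER BY dd.position) AS destinations
        FROM   delivery_destination dd
        WHERE  dd.task_id = t.id
    ) dest ON true
    WHERE  a.employee_id = $1;

The lateral aggregate keeps the result at one row per task, but every additional task type adds another LEFT JOIN to this.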

Now, I am able to write the query needed for this (it would take some procedural generation, though); that is not the problem. I am just worried that this many cascading joins might seriously hurt performance, especially as the number of task types grows, since with DeliveryDestination we are already three levels of nested one-to-many relations deep, which naively looks like O(N³) to me.

So my question simply is: can I expect PostgreSQL to optimize this properly? Or should I find a different way of modelling it, or even cache the customer and delivery information within the CustomerSupport and Delivery tables?