The server config:
The Postgres config:
- There is only 1 Postgres instance, i.e. no partitioning, no clustering, …
- shared_buffers = 5GB
- work_mem = 10MB
- maintenance_work_mem = 800MB
- wal_buffers = 16MB
- effective_cache_size = 10GB
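For reference, the effective values of these settings can be checked at runtime via `pg_settings`:

```sql
-- Show the runtime values of the settings listed above
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
               'wal_buffers', 'effective_cache_size');
```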
My users are experiencing poor query performance on Postgres (v13). An index exists and is used, but while it is used there are many READs and comparatively few HITs, i.e. large parts of the index have to be loaded from disk into the cache. Immediately re-executing the same query gives the expected result: the relevant parts of the index were cached by the first run, so there are no READs at all and the query finishes in well under 1 second, whereas the first execution took roughly 120 seconds.
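This is how I observe the READ/HIT behaviour; a sketch with placeholder table and column names, using `EXPLAIN (ANALYZE, BUFFERS)`:

```sql
-- Placeholder query: "my_table" and "my_indexed_col" stand in for the
-- real schema. In the plan output, "shared read" counts blocks fetched
-- from disk and "shared hit" counts blocks already in shared_buffers.
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM my_table
WHERE my_indexed_col = 42;

-- Cold run: the index scan node shows a large "read" count.
-- Immediate warm rerun: almost everything shows up as "hit".
```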
The reason why Postgres does not keep my index in cache also makes sense: other indices are used much more often and push the rarely used ones out of the cache. In fact, only 10% of the cache holds indices; the remaining 90% holds parts of the actual main table. Many background jobs query the Postgres instance and use those other indices quite frequently, while the indices used when users interact with the system (on a web frontend via an API) are hit far less often – which leads to the really bad user experience described above.
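The 10%/90% split above comes from inspecting `shared_buffers` with the `pg_buffercache` extension (requires appropriate privileges); a sketch of the kind of query I use:

```sql
-- Requires the contrib extension; typically superuser/pg_monitor rights.
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Which relations (tables and indices) occupy the most shared buffers
-- in the current database; 8192 is the default block size.
SELECT c.relname,
       count(*)                        AS buffers,
       pg_size_pretty(count(*) * 8192) AS buffered
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 20;
```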
Is there a way, configuration, or best-practice architecture that somehow tackles this problem, e.g. by telling Postgres to keep specific indices in cache at all times?
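The closest thing I have found is the `pg_prewarm` extension, which loads a relation into `shared_buffers` once but, as far as I can tell, does not pin it there (the index name below is a placeholder):

```sql
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Load the index into shared_buffers; returns the number of blocks read.
-- "my_index_name" is a placeholder for the real index.
SELECT pg_prewarm('my_index_name');
```

This helps right after a restart, but under load the warmed pages get evicted again, which is exactly the problem described above.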
I also suspect that the overall load on the Postgres instance contributes to the very bad performance, with disk READs taking ages.
Thanks in advance to everyone considering this question!