I was thinking of caching some of the tables that have lots of reads and few writes in the client application. To make sure the cached table matches the one in the database, I want the client application to send a precalculated hash of its cached table with each request. The server application would compare it against its own precalculated hash stored in the DB (which I'd keep updated through triggers), and if they match it would send a short confirmation instead of the whole table.
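For what it's worth, here's a minimal sketch of the compare-and-respond idea in Python. All the names (`table_hash`, `handle_request`, `fetch_rows`) are hypothetical, and the canonical JSON serialization is just one assumption for making client and server hash the same bytes:

```python
import hashlib
import json

def table_hash(rows):
    """Deterministic hash of a table snapshot. Rows are serialized with
    sorted keys and in a canonical row order so the client and server
    produce identical digests for identical data."""
    canonical = json.dumps(
        sorted(rows, key=lambda r: json.dumps(r, sort_keys=True)),
        sort_keys=True,
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def handle_request(client_hash, stored_hash, fetch_rows):
    """If the client's hash matches the precalculated one, send a cheap
    confirmation; otherwise send the full table."""
    if client_hash == stored_hash:
        return ("not_modified", None)
    return ("full_table", fetch_rows())
```

Since the client is browser-based, note that this is essentially the semantics of HTTP conditional requests (`ETag` / `If-None-Match` with a `304 Not Modified` response), so you may be able to lean on that existing machinery rather than inventing a custom field.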
Am I over-engineering this? My main reason for doing it this way is to give the user a smoother experience. I've used a dozen applications similar to the one I want to build, and there's always a delay of a second or two in the ones where the client app is browser-based.
The others I used had an on-site server or even kept the data on the same computer, so they're not really comparable.
I'm using Postgres and the client application will be a PWA. I expect peaks of about a thousand reads per hour, and the tables shouldn't exceed 200 rows and 10 columns.