postgresql – How to design a database for search filters?

We have a normalized relational database with many tables. We plan on adding search filters that can be combined with each other. We don’t know yet which filters will be used the most, and we may add more later on. We have a low number of reads and very few writes. Certain fields need write/read consistency and responsiveness for a web app. We have records in the millions.

  1. Is there anything we can do now to prepare for performance without knowing how the queries will look? E.g. filter by artist and song title, or filter for songs longer than 1 minute and sort by popularity (a sketch of such a query is below the list). There are many possibilities.
  2. If we need to improve read performance on some queries, how do we decide whether to denormalize or to create indexes?
  3. The usage patterns for filters might change, so a filter that is popular one day may not be popular the next. If we created indexes, I’d imagine this would get very difficult to manage. We might also add filters from time to time and have to add more indexes. How does one handle this complexity? The same question goes for denormalization.
  4. If multicolumn indexes are used, we really need to predict which filter combinations will be used. Could single-column indexes on the popular filters be sufficient? Will we only know by testing both multicolumn and single-column indexes? Should we start out with single-column indexes? (Both variants are sketched below.)
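
For concreteness, item 1 has queries like the following in mind. The `songs`/`artists` schema here is hypothetical and simplified; the real tables have more columns and joins:

```sql
-- Hypothetical schema: artists(id, name), songs(id, title, artist_id, duration_seconds, popularity)
-- One possible filter combination: songs longer than 1 minute, sorted by popularity
SELECT s.id, s.title, a.name AS artist
FROM songs s
JOIN artists a ON a.id = s.artist_id
WHERE s.duration_seconds > 60
ORDER BY s.popularity DESC
LIMIT 50;
```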
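And for item 4, these are the two indexing approaches we are weighing, again against the hypothetical schema above:

```sql
-- Single-column indexes: the planner can combine these via bitmap index scans
CREATE INDEX songs_artist_id_idx ON songs (artist_id);
CREATE INDEX songs_duration_idx  ON songs (duration_seconds);

-- Multicolumn index: helps only when the query filters on the leading column(s),
-- so the column order has to match the filter combinations we expect
CREATE INDEX songs_artist_duration_idx ON songs (artist_id, duration_seconds);
```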