postgresql – Reduce Count(*) time in Postgres for Many Records

Query:

EXPLAIN ANALYZE select count(*) from product;

ROWS: 534965

EXPLAIN ANALYZE output:

Finalize Aggregate  (cost=53840.85..53840.86 rows=1 width=8) (actual time=5014.774..5014.774 rows=1 loops=1)
  ->  Gather  (cost=53840.64..53840.85 rows=2 width=8) (actual time=5011.623..5015.480 rows=3 loops=1)
        Workers Planned: 2
        Workers Launched: 2
        ->  Partial Aggregate  (cost=52840.64..52840.65 rows=1 width=8) (actual time=4951.366..4951.367 rows=1 loops=3)
              ->  Parallel Seq Scan on product prod  (cost=0.00..52296.71 rows=217571 width=0) (actual time=0.511..4906.569 rows=178088 loops=3)
Planning Time: 34.814 ms
Execution Time: 5015.580 ms
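
Almost all of the 5 seconds above is spent in the Parallel Seq Scan node. One way to check whether that scan is disk-bound or CPU-bound (not shown in the original post; a minimal diagnostic sketch, assuming the same product table) is to rerun the plan with buffer statistics:

EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM product;

In the resulting Buffers: line, a large "read" figure relative to "hit" suggests the table is not cached in shared_buffers and the scan is paying for disk I/O.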

How can we optimize the above query so that the count is returned quickly?

This is a simple query; however, variations of it can include different WHERE conditions and joins with other tables.
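
For the unfiltered case, one commonly cited workaround (not part of the original question; the product table name is taken from it) is to read the planner's row estimate from the pg_class catalog instead of counting every row. The figure is approximate and only as fresh as the last ANALYZE or autovacuum run:

-- Approximate row count from planner statistics; an estimate, not an exact figure.
SELECT reltuples::bigint AS approximate_rows
FROM pg_class
WHERE relname = 'product';

For the filtered and joined variations mentioned above this shortcut does not apply; there the usual levers are suitable indexes (count(*) can be satisfied by an index-only scan when the visibility map is current, i.e. after VACUUM) or, if an exact and instant count is required, a summary table maintained by triggers.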