The idea of using SANs makes sense if you have, for example, one web server that serves several smaller websites (e.g. example.com and internal.example.com). In this case, having one certificate per web server, with SANs for each website, reduces the configuration overhead.
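As a sketch of that setup – assuming nginx and hypothetical file paths – both sites can point at the same certificate:

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    # Hypothetical paths; the certificate carries SANs for both hostnames
    ssl_certificate     /etc/ssl/certs/example.pem;
    ssl_certificate_key /etc/ssl/private/example.key;
}

server {
    listen 443 ssl;
    server_name internal.example.com;
    # Same certificate and key reused
    ssl_certificate     /etc/ssl/certs/example.pem;
    ssl_certificate_key /etc/ssl/private/example.key;
}
```

There is only one certificate to renew and deploy, instead of one per site.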
However, once you start to create what I like to call a “megacertificate” – one certificate with a huge number of SANs – you will run into problems. This certificate, and its associated private key, will likely be distributed to many different servers, meaning your private key ends up in many different places.
This in turn means that the attack surface has grown considerably: a single compromised server now compromises all domains on the certificate, even those hosted on entirely different physical servers.
> (+) its easier to standardise the certificate options (i.e. hashing/cyphers) on a shared cert
This is false. First of all, the certificate itself does not define which ciphers TLS will use; it only determines the cryptographic operations necessary to validate the certificate (e.g. the signature algorithm and the type of public key).
This means that two different physical servers – let’s call them alice.com and bob.com – can use the same certificate (with SAN entries for both), and yet still support completely different sets of cipher suites. alice.com could offer only state-of-the-art suites, while bob.com may be stuck offering insecure legacy ciphers for interoperability reasons.
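A small sketch with Python’s standard `ssl` module illustrates this: cipher suites are a property of each server’s TLS configuration, not of the certificate. The two contexts below are hypothetical stand-ins for alice.com’s and bob.com’s servers; both could load the exact same cert/key pair.

```python
import ssl

# Two independent server configurations; in practice each would call
# load_cert_chain() with the *same* certificate and private key.
ctx_alice = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx_bob = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

# Cipher suites are configured on the context, per server --
# the certificate has no say in this at all.
ctx_alice.set_ciphers("ECDHE+AESGCM")  # modern AEAD suites only
ctx_bob.set_ciphers("DEFAULT")         # OpenSSL's broader default set

ciphers_alice = {c["name"] for c in ctx_alice.get_ciphers()}
ciphers_bob = {c["name"] for c in ctx_bob.get_ciphers()}

# The two servers end up offering different cipher sets, despite
# (hypothetically) sharing one certificate.
print(ciphers_alice != ciphers_bob)
```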
The only valid reason I can think of at the moment for a certificate with many SANs – aside from convenience during manual setup – is that an appliance or application may not support uploading several different certificates, yet needs to serve multiple domains.