I have inherited a SQL Server 2017 system with 114 databases synced to a second instance via an Always On availability group (AG), with another 13 databases needing to be added. It was supposed to be providing high availability.
Long story short, it ended in tears as I started adding the additional databases.
I know that Microsoft has tested Always On with up to 100 databases. What I have not been certain about is whether that limit of 100 is per SQL Server instance or per AG.
What I would like to know is whether anyone has had experience/success with Always On and a large number of databases, say 200 or 300? If so, was this in a single group or spread across multiple groups?
This installation failed due to insufficient memory: I found out the secondary has half the memory of the primary, so it is no surprise it came a cropper. There were HADR waits indicating throttling of requests being sent from the primary.
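For anyone hitting similar symptoms, this is roughly the kind of check I used to spot the HADR waits. It is a minimal sketch using only the standard `sys.dm_os_wait_stats` DMV; the interpretation comments reflect my situation, not a general rule:

```sql
-- Top HADR-related waits since the last stats reset / service restart.
-- Sustained HADR_SYNC_COMMIT or HADR_DATABASE_FLOW_CONTROL waits suggest
-- the secondary cannot keep up and the primary is being throttled.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_type LIKE 'HADR%'
  AND  waiting_tasks_count > 0
ORDER BY wait_time_ms DESC;
```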
There is a great blog article by Bob Dorr about Always On and worker pool usage: https://docs.microsoft.com/en-gb/archive/blogs/psssql/alwayson-hadron-learning-series-worker-pool-usage-for-hadron-enabled-databases
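Since each AG-enabled database consumes worker threads on both replicas, a quick sanity check along these lines (standard DMVs only; a rough sketch, not a full accounting of HADR thread usage) shows how much headroom the instance has before the pool is exhausted:

```sql
-- Worker-thread ceiling for the instance versus workers currently alive.
-- With a few hundred AG databases, HADR threads alone can approach
-- max_workers_count, which is when requests start to stall.
SELECT si.max_workers_count,
       (SELECT COUNT(*) FROM sys.dm_os_workers) AS current_workers
FROM   sys.dm_os_sys_info AS si;
```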
If the secondary server is beefed up memory-wise, and possibly the primary too, is it worth attempting to get this working using the info from Bob Dorr's article? Or do I cut my losses and go with a SQL Server failover cluster, falling back to log shipping when management realise the costs involved?
Thanks for any thoughts