I have an RDS PostgreSQL multi-AZ instance (in the "production" AWS account). The database size is 43 GB.
I want to create a backup in an S3 bucket located in another account (the "backup" account), at least once every night.
The first part of the problem is extracting the data from the RDS instance in a format that can be restored both inside and outside AWS, and that can be stored outside AWS. Automated backups and snapshot copies are therefore not sufficient.
The second part of the problem is having an efficient way to restore the instance from that backup when necessary.
Solutions I’m thinking of are:
Periodically run pg_dump (from a Lambda?) into an S3 bucket in the prod account, then use cross-account S3 replication into the backup account. When needed, use this dump to restore the instance. Will dumping from prod affect performance? Restoring a dump of this size seems to take too much time. Do you know ways to optimize both the dump and the restore, and specifically restoring from a dump stored in another account?
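For reference, here is the kind of flow I have in mind. This is only a sketch: the bucket name, database name, and environment variables are placeholders, and note that Lambda's limits (15-minute timeout, 10 GB max ephemeral storage) probably rule it out for a 43 GB database, so something like an ECS/Fargate task or a small EC2 instance would run this instead. The directory format with --jobs lets both the dump and the restore run in parallel:

```shell
# --- Dump side (runs in the prod account) ---
# Parallel, directory-format dump; $RDS_ENDPOINT, $DB_USER, mydb
# and prod-backup-bucket are placeholders.
pg_dump \
  --host "$RDS_ENDPOINT" --username "$DB_USER" --dbname mydb \
  --format=directory --jobs=4 \
  --file /tmp/mydb_dump

# Pack and upload; cross-account S3 replication then copies the
# object into the backup account's bucket.
tar -C /tmp -czf /tmp/mydb_dump.tar.gz mydb_dump
aws s3 cp /tmp/mydb_dump.tar.gz \
  "s3://prod-backup-bucket/$(date +%F)/mydb_dump.tar.gz"

# --- Restore side (pull from the backup account's bucket) ---
aws s3 cp "s3://backup-bucket/2024-01-01/mydb_dump.tar.gz" /tmp/
tar -C /tmp -xzf /tmp/mydb_dump.tar.gz
pg_restore \
  --host "$NEW_RDS_ENDPOINT" --username "$DB_USER" --dbname mydb \
  --jobs=4 --clean --if-exists \
  /tmp/mydb_dump
```

On the performance question: pg_dump takes only shared locks, but the long-running read load is real; pointing it at a read replica instead of the primary is one way to keep it off the production path.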
The other solution I'm considering is using AWS DMS to do a cross-account migration directly to the backup S3 bucket, and then using DMS again to restore the instance from S3 to RDS. Is this solution more effective? Do you find DMS pricey? What is your experience with this service?
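For the DMS variant, my rough understanding is that an S3 target endpoint writes CSV (or Parquet) files rather than a pg_dump archive, which affects how "restorable outside AWS" the result is. A sketch of the setup with the AWS CLI, where all ARNs, role names, and bucket names are placeholders I made up:

```shell
# Target endpoint: the S3 bucket in the backup account. The role
# passed in ServiceAccessRoleArn must grant DMS write access to it.
aws dms create-endpoint \
  --endpoint-identifier backup-s3-target \
  --endpoint-type target \
  --engine-name s3 \
  --s3-settings "ServiceAccessRoleArn=arn:aws:iam::BACKUP_ACCOUNT_ID:role/dms-s3-role,BucketName=backup-bucket,CompressionType=gzip"

# Nightly full-load task from the RDS source endpoint to S3.
# Source/replication-instance ARNs come from endpoints created
# separately; mappings.json selects the schemas/tables to copy.
aws dms create-replication-task \
  --replication-task-identifier nightly-backup \
  --source-endpoint-arn "$SOURCE_ENDPOINT_ARN" \
  --target-endpoint-arn "$TARGET_ENDPOINT_ARN" \
  --replication-instance-arn "$REPLICATION_INSTANCE_ARN" \
  --migration-type full-load \
  --table-mappings file://mappings.json
```

The restore direction would use S3 as a DMS source endpoint pointing back at an RDS target, but note that DMS copies table data only: sequences, indexes, and constraints have to be recreated from DDL you keep separately.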
For a more general answer: how are you handling this in your infrastructure? How would you? What would you say are the best practices for handling a cross-account RDS backup that can also be used outside AWS (avoiding vendor lock-in)?