As I understand it, there are generally two ways of ensuring atomicity of both domain events and state changes to aggregate roots in order to achieve at-least-once delivery of events: the outbox pattern and event sourcing.
I am currently working on a monolith that might later be broken into microservices, so I am ensuring that each bounded context is its own independent module. In a nutshell, the only difference between this architecture and microservices is that there is no need for a message broker, since events are passed between modules in memory.
However, there is a possibility that something goes wrong after a transaction has been committed but before its events have been processed, e.g. if a downstream transaction triggered by one of the events fails, or the system crashes.
The application has no need for event sourcing, so I am leaning towards the outbox pattern, but I am unsure whether this approach would scale well. The pattern seems to be quite popular, yet some articles raise questions about its scalability.
I am currently using Google Cloud Datastore, which does provide transactions spanning multiple entities (the Datastore equivalent of documents in other NoSQL stores). My current idea is to publish events within the same transaction by writing an additional entity that holds the collection of events produced by that aggregate-root transaction, indexed e.g. on the timestamp of the first event in the collection, and then to implement a polling service that processes these event collections.
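To make the idea concrete, here is a minimal sketch of that approach. The names (`save_aggregate_with_outbox`, `poll_outbox`) and the in-memory dict standing in for Datastore are my own assumptions, not the real google-cloud-datastore API; a real implementation would wrap both writes in `client.transaction()`.

```python
import time
import uuid

# In-memory stand-in for Datastore: two "kinds", aggregates and outbox entries.
datastore = {"aggregates": {}, "outbox": {}}

def save_aggregate_with_outbox(aggregate_id, new_state, events):
    """Write the aggregate state and its pending events atomically.

    In real Datastore both puts would happen inside one transaction;
    here the two dict writes stand in for that single atomic commit.
    """
    outbox_id = str(uuid.uuid4())
    datastore["aggregates"][aggregate_id] = new_state
    datastore["outbox"][outbox_id] = {
        "aggregate_id": aggregate_id,
        "first_event_ts": time.time(),  # the indexable key the poller orders by
        "events": list(events),
    }
    return outbox_id

def poll_outbox(publish):
    """Process outbox entries in timestamp order, deleting each entry
    only after its events were handed to `publish` (at-least-once:
    a crash between publish and delete causes a redelivery)."""
    pending = sorted(datastore["outbox"].items(),
                     key=lambda kv: kv[1]["first_event_ts"])
    for outbox_id, entry in pending:
        for event in entry["events"]:
            publish(event)
        del datastore["outbox"][outbox_id]

# Usage: commit state + events together, then let the poller drain the outbox.
save_aggregate_with_outbox("order-1", {"status": "placed"},
                           [{"type": "OrderPlaced", "order": "order-1"}])
received = []
poll_outbox(received.append)
```

Note that because the entry is deleted only after publishing, consumers must be idempotent, which is the usual price of at-least-once delivery.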
Another implementation of the outbox pattern that I have seen is to simply store the unpublished events within the aggregate root itself, but again I am wondering about scalability.
Does my approach seem sound? Are there other ways of reliably publishing domain events and ensuring consistency in a monolith built on NoSQL?