Is it a bad idea to cache randomness in the general case? My feeling is yes, but I’m having a hard time articulating why.
A programming language of your choice (e.g. Node.js) uses a native call to generate random bytes (e.g. for creating a UUID). The random bytes are very hard to predict, even with access to the operating system.
The native call has overhead, so to speed up execution a memory cache is introduced: instead of retrieving random bytes from the OS each time they are needed (i.e. every time a UUID is generated), a larger batch is retrieved only when the cache is empty.
The random bytes now sit in the memory of the running program, where in theory they can be read by inspecting that memory. In the general case (where the caching is done by a library that has no idea how the randomness will be used), is this a bad idea? If so, why?
Disclaimer: I'm really looking for a good argument for this GitHub issue: https://github.com/uuidjs/uuid/issues/527 (the package has ~36M weekly downloads).