I work on a moderately complex piece of scientific software. We provide our end users with many configuration options for each job, and allow users to specify configurations only partially, with the remaining values being set to appropriate / safe defaults.
Sometimes the derived defaults combine pathologically with the user-specified options: the resulting job is likely to run for a very long time or to exhaust available memory. However, these configurations are still well defined, and partial results may be useful. It is possible to detect when this is the case.
In such cases, would it be better to issue a warning and allow the job to continue, or to stop it with an error? The latter blocks the job until better values are selected or the derived values are manually set by the user (as an override).
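To make the two policies concrete, here is a minimal sketch of what the validation step might look like. Everything in it is hypothetical: the `grid_points` and `time_steps` options, the cost estimate, and the threshold are stand-ins for whatever heuristic actually detects the pathological combination; the `strict` flag switches between warn-and-continue and hard-error behavior.

```python
import warnings

class PathologicalConfigError(Exception):
    """Raised when a derived configuration is refused outright."""

def validate(config, strict=False):
    # Hypothetical check: a fine grid combined with many time steps is
    # still well defined, but likely to run very long or exhaust memory.
    estimated_cells = config["grid_points"] * config["time_steps"]
    if estimated_cells > 10**9:
        msg = (f"Derived configuration needs ~{estimated_cells:.0e} cells; "
               "the job may run very long or exhaust memory. "
               "Override 'grid_points' or 'time_steps' to proceed silently.")
        if strict:
            raise PathologicalConfigError(msg)  # block until the user overrides
        warnings.warn(msg)                      # warn, but let the job continue
```

One design option this suggests: rather than hard-coding either policy, expose the choice (e.g. a `strict` or `--werror`-style setting) so users who value partial results can opt into warnings, while automated pipelines can opt into hard errors.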
Are there general rules of thumb for when it is preferable to raise hard errors rather than warnings in recoverable situations?