I am looking for recommendations for algorithms (apparently this is the right SO site for that) that efficiently scan the high-dimensional parameter space of a cost function that is very expensive to evaluate.
- By “expensive to evaluate” I mean that one can only perform on the order of one evaluation per second on a modern computer.
- By “efficiently” I mean that the algorithm should learn to avoid regions where there are no solutions: the evaluation of the cost function may fail for certain sets of parameters.
- Also in the interest of efficiency, the algorithm should yield a set of samples that represents an (ideally unbiased) coverage of all solutions in the whole parameter space within a reasonable number of samples.
- The parameters are continuous (real-valued) and lie within predefined bounds.
- The cost function is “mostly well-behaved”.
I am not so much after importance sampling. I guess I am looking for an improved random sampling scheme that is adaptive in the sense that it learns which parts of the parameter space to avoid.