Parameter Adjustment Based on Performance Prediction: Towards an Instance-Aware Problem Solver

  • Frank Hutter,
  • Youssef Hamadi

MSR-TR-2005-125

Tuning an algorithm’s parameters for robust and high performance is a tedious and time-consuming task that often requires knowledge about both the domain and the algorithm of interest. Furthermore, the optimal parameter configuration to use may differ considerably across problem instances. In this report, we define and tackle the algorithm configuration problem, which is to automatically choose the optimal parameter configuration for a given algorithm on a per-instance basis. We employ an indirect approach that predicts algorithm runtime for the problem instance at hand and each (continuous) parameter configuration, and then simply chooses the configuration that minimizes the prediction. This approach is based on similar work by Leyton-Brown et al. [LBNS02, NLBD+04], who tackle the algorithm selection problem [Ric76] (given a problem instance, choose the best algorithm to solve it). While all previous studies on runtime prediction focused on tree search algorithms, we demonstrate that it is possible to fairly accurately predict the runtime of SAPS [HTH02], one of the best-performing stochastic local search algorithms for SAT. We also show that our approach automatically picks parameter configurations that speed up SAPS by a factor of more than two on average compared to its default parameter configuration. Finally, we introduce sequential Bayesian learning to the problem of runtime prediction, enabling an incremental learning approach and yielding very informative estimates of predictive uncertainty.
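The per-instance selection step described in the abstract (predict runtime for each candidate configuration, then take the minimizer) can be summarized in a few lines. The sketch below is not the report's implementation; all names (choose_configuration, the synthetic training data, and the use of scikit-learn's BayesianRidge as a stand-in for a Bayesian runtime model) are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge  # stand-in for a Bayesian runtime model

def choose_configuration(model, instance_features, candidate_configs):
    """Return the candidate parameter configuration with the lowest
    predicted (log) runtime on the given problem instance."""
    # One feature vector per candidate: instance features joined with parameters.
    X = np.array([np.concatenate([instance_features, cfg])
                  for cfg in candidate_configs])
    return candidate_configs[int(np.argmin(model.predict(X)))]

# --- toy demonstration with synthetic data (purely illustrative) ---
rng = np.random.default_rng(0)
n_train, n_inst_feat, n_params = 200, 5, 2
X_train = rng.random((n_train, n_inst_feat + n_params))
y_train = X_train @ rng.random(n_inst_feat + n_params) \
          + 0.1 * rng.standard_normal(n_train)          # pretend log runtimes

# Offline: fit the predictor on (instance features, parameters) -> log runtime.
model = BayesianRidge().fit(X_train, y_train)

# Online: for a new instance, evaluate a set of candidate parameter settings
# and run the algorithm with the one predicted to be fastest.
instance = rng.random(n_inst_feat)
candidates = [rng.random(n_params) for _ in range(10)]
print(choose_configuration(model, instance, candidates))
```

A Bayesian linear model is used here only because, like the sequential Bayesian learning mentioned in the abstract, it also provides an estimate of predictive uncertainty (e.g., `model.predict(X, return_std=True)` in scikit-learn); the report's actual model and features may differ.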