CMA-ES – a Stochastic Second-Order Method for Function-Value Free Numerical Optimization

We consider black-box optimization with few assumptions on the underlying objective function. Further, we consider sampling from a distribution to obtain new candidate solutions. Under mild assumptions, solving the original black-box optimization problem coincides with optimizing a parametrized family of distributions of our choice. Choosing the family of multivariate normal distributions on the continuous search domain, a natural gradient descent on this family leads to an instantiation of the so-called CMA-ES algorithm (covariance matrix adaptation evolution strategy). In this talk, the continuous black-box optimization problem will be introduced and the CMA-ES algorithm will be illustrated. The CMA-ES adapts a second-order model of the underlying objective function: on convex-quadratic functions, the resulting covariance matrix resembles the inverse Hessian matrix of the function. In contrast to quasi-Newton methods, this is accomplished derivative-free and even function-value free, since only the ranking of candidate solutions is used. The CMA-ES exhibits the same invariance properties as the famous Nelder-Mead simplex downhill method, is robust, works reliably not only in low dimensions, and is surprisingly efficient on convex as well as non-convex, highly ill-conditioned problems.
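To make the sampling-and-adaptation loop concrete, the following is a minimal Python sketch of a CMA-ES-style iteration: candidates are sampled from a multivariate normal distribution, selected by function-value rank only, and the mean, step-size, and covariance matrix are updated from the selected steps. This is a simplification, not the reference algorithm; it keeps only the rank-mu covariance update and a reduced cumulative step-size adaptation (the rank-one update and several safeguards of the full CMA-ES are omitted), and all names and parameter settings are illustrative.

```python
import numpy as np

def cma_es_sketch(f, x0, sigma=0.5, iterations=800, seed=1):
    """Simplified CMA-ES-style loop: rank-mu covariance update plus a
    reduced cumulative step-size adaptation. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    mean = np.asarray(x0, dtype=float)
    n = mean.size
    lam = 4 + int(3 * np.log(n))                  # offspring number (heuristic)
    mu = lam // 2                                 # parents used for recombination
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                                  # positive recombination weights
    mu_eff = 1.0 / np.sum(w**2)                   # variance-effective selection mass
    c_sigma = (mu_eff + 2) / (n + mu_eff + 5)     # step-size path learning rate
    d_sigma = 1 + c_sigma                         # step-size damping (simplified)
    c_mu = min(0.999, mu_eff / (n**2 + mu_eff))   # rank-mu learning rate (crude)
    chi_n = np.sqrt(n) * (1 - 1/(4*n) + 1/(21*n**2))  # approx. E||N(0, I)||
    C = np.eye(n)                                 # covariance matrix to adapt
    p_sigma = np.zeros(n)                         # evolution path for step-size
    for _ in range(iterations):
        d2, B = np.linalg.eigh(C)                 # C = B diag(d2) B^T
        d = np.sqrt(np.maximum(d2, 1e-20))
        Z = rng.standard_normal((lam, n))
        Y = (Z * d) @ B.T                         # y_k ~ N(0, C)
        X = mean + sigma * Y                      # candidate solutions
        order = np.argsort([f(x) for x in X])     # selection uses f-ranks only
        Ysel = Y[order[:mu]]
        y_w = w @ Ysel                            # weighted mean of selected steps
        mean = mean + sigma * y_w                 # move mean toward better samples
        # cumulative step-size adaptation, using C^{-1/2} y_w
        p_sigma = (1 - c_sigma) * p_sigma + np.sqrt(
            c_sigma * (2 - c_sigma) * mu_eff) * (B @ ((B.T @ y_w) / d))
        sigma *= np.exp((c_sigma / d_sigma) * (np.linalg.norm(p_sigma) / chi_n - 1))
        # rank-mu update: shift C toward the covariance of the selected steps
        C = (1 - c_mu) * C + c_mu * (Ysel.T * w) @ Ysel
    return mean

# Usage sketch: a 10-D convex-quadratic f(x) = x^T H x with condition number 1e6.
# As the abstract notes, the adapted C tends toward a multiple of H^{-1}.
H = np.diag(10.0 ** np.linspace(0, 6, 10))
f = lambda x: x @ H @ x
x_best = cma_es_sketch(f, np.ones(10))
print("f(x_best) =", f(x_best))
```

Because selection depends only on the ranking of the sampled candidates, the loop is invariant under any strictly monotone transformation of f, which is the sense in which the method is function-value free.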

Date:
Speakers: Nikolaus Hansen
Affiliation: INRIA