Combining Conjugate Direction Methods with Stochastic Approximation of Gradients

  • Nicol N. Schraudolph,
  • Thore Graepel

Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, AISTATS 2003

to appear

The method of conjugate directions provides a very effective way to optimize large, deterministic systems by gradient descent. In its standard form, however, it is not amenable to stochastic approximation of the gradient. Here we explore ideas from conjugate gradient in the stochastic (online) setting, using fast Hessian-gradient products to set up low-dimensional Krylov subspaces within individual mini-batches. In our benchmark experiments the resulting online learning algorithms converge orders of magnitude faster than ordinary stochastic gradient descent. Numerical experiments are carried out for both the linear, realisable case and the non-linear, non-realisable case.
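To make the core idea concrete, here is a minimal sketch of how a fast Hessian-gradient product can be used to build a low-dimensional Krylov subspace on a single mini-batch. This is an illustration only, not the authors' implementation: the JAX framework, the simple least-squares loss, and the function names (`hessian_vector_product`, `krylov_basis`) are assumptions made for the example.

```python
import jax
import jax.numpy as jnp


def loss(w, x, y):
    # Illustrative least-squares mini-batch loss; the paper's models differ.
    pred = x @ w
    return 0.5 * jnp.mean((pred - y) ** 2)


def hessian_vector_product(w, v, x, y):
    # Fast Hessian-vector product via forward-over-reverse differentiation:
    # differentiate the gradient function in direction v without ever
    # forming the full Hessian.
    grad_fn = lambda w_: jax.grad(loss)(w_, x, y)
    _, hv = jax.jvp(grad_fn, (w,), (v,))
    return hv


def krylov_basis(w, x, y, m=3):
    # Build an (at most) m-dimensional orthonormal Krylov basis
    # {g, Hg, H^2 g, ...} from the mini-batch gradient g and repeated
    # Hessian-vector products, orthonormalising by Gram-Schmidt.
    g = jax.grad(loss)(w, x, y)
    basis = [g / jnp.linalg.norm(g)]
    for _ in range(m - 1):
        v = hessian_vector_product(w, basis[-1], x, y)
        for b in basis:
            v = v - jnp.dot(b, v) * b
        norm = jnp.linalg.norm(v)
        if norm < 1e-10:
            break  # subspace has effectively closed
        basis.append(v / norm)
    return jnp.stack(basis)
```

A search direction could then be chosen within the span of this small basis for the current mini-batch, which is one plausible reading of how conjugate-direction ideas transfer to the stochastic setting; the paper should be consulted for the actual update rules used.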