Conjugate Directions for Stochastic Gradient Descent

Nicol N. Schraudolph and Thore Graepel

Abstract

The method of conjugate directions provides a very effective way to optimize large, deterministic systems by gradient descent. In its standard form, however, it is not amenable to stochastic approximation of the gradient. Here we explore ideas from conjugate gradient in the stochastic (online) setting, using fast Hessian-gradient products to set up low-dimensional Krylov subspaces within individual mini-batches. In our benchmark experiments the resulting online learning algorithms converge orders of magnitude faster than ordinary stochastic gradient descent. The experiments are restricted to the linear, realisable case.
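As an illustration of the core idea, the following is a minimal JAX sketch of one mini-batch step that builds a low-dimensional Krylov subspace from fast Hessian-gradient products and minimises the local quadratic model within it. It assumes a linear least-squares loss (the linear, realisable setting mentioned above); the names loss, hvp, and krylov_step are illustrative only, and this is a sketch of the general technique, not the authors' exact algorithm.

import jax
import jax.numpy as jnp

# Hypothetical mini-batch loss for the linear, realisable case: 0.5 * mean((X w - y)^2)
def loss(w, X, y):
    r = X @ w - y
    return 0.5 * jnp.mean(r ** 2)

def hvp(w, v, X, y):
    # Fast Hessian-vector product H v via forward-over-reverse differentiation.
    return jax.jvp(lambda w_: jax.grad(loss)(w_, X, y), (w,), (v,))[1]

def krylov_step(w, X, y, k=3, eps=1e-8):
    # One mini-batch step restricted to the k-dimensional Krylov subspace
    # span{g, Hg, ..., H^(k-1) g}, where g and H are the mini-batch gradient and Hessian.
    g = jax.grad(loss)(w, X, y)
    basis = [g]
    for _ in range(k - 1):
        basis.append(hvp(w, basis[-1], X, y))
    V = jnp.stack(basis, axis=1)                                   # basis matrix, shape (d, k)
    HV = jnp.stack([hvp(w, V[:, i], X, y) for i in range(k)], axis=1)
    A = V.T @ HV                                                   # projected Hessian, k x k
    b = V.T @ g                                                    # projected gradient
    a = jnp.linalg.solve(A + eps * jnp.eye(k), b)                  # minimise the quadratic model
    return w - V @ a                                               # step stays inside the subspace

In this sketch one would draw a fresh mini-batch (X, y) at every call and keep k small (two or three directions), so the extra cost per step over plain stochastic gradient descent is only a few Hessian-vector products.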

Details

Publication type: Inproceedings
Published in: Proceedings of the International Conference on Artificial Neural Networks (ICANN 2002)
Pages: 1351–1356
Number: 2415
Series: Lecture Notes in Computer Science
Publisher: Springer