NIPS: Oral Session 1 – Yurii Nesterov

Subgradient Methods for Huge-Scale Optimization Problems

We consider a new class of huge-scale problems: problems with sparse subgradients. The most important functions of this type are piecewise linear. For optimization problems with uniform sparsity of the corresponding linear operators, we suggest a very efficient implementation of subgradient iterations whose total cost depends only logarithmically on the dimension. This technique is based on a recursive update of the results of matrix/vector products and of the values of symmetric functions. It works well, for example, for matrices with few nonzero diagonals and for max-type functions.
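As a rough illustration of the recursive-update idea, the following minimal Python sketch (not the speaker's implementation; the names `MaxTree` and `apply_sparse_change` and the column-list data layout are assumptions) maintains y = Ax together with the max-type value max_i y_i under a change of a single coordinate of x. A point update in a complete binary tree costs O(log m), so a sparse change to x propagates at a cost proportional to the column's nonzeros times log m, independent of the full dimension.

```python
import numpy as np

class MaxTree:
    """Complete binary tree over m leaves: a point update and the
    global maximum both cost O(log m)."""
    def __init__(self, values):
        self.m = len(values)
        size = 1
        while size < self.m:
            size *= 2
        self.size = size
        self.node = np.full(2 * size, -np.inf)
        self.node[size:size + self.m] = values
        for k in range(size - 1, 0, -1):
            self.node[k] = max(self.node[2 * k], self.node[2 * k + 1])

    def update(self, i, value):
        """Change leaf i and refresh the O(log m) ancestors above it."""
        k = self.size + i
        self.node[k] = value
        k //= 2
        while k >= 1:
            self.node[k] = max(self.node[2 * k], self.node[2 * k + 1])
            k //= 2

    def argmax(self):
        """Descend from the root to the leaf holding the maximum."""
        k = 1
        while k < self.size:
            k = 2 * k if self.node[2 * k] >= self.node[2 * k + 1] else 2 * k + 1
        return k - self.size

def apply_sparse_change(A_cols, y, tree, j, delta):
    """After x[j] += delta, refresh only the affected entries of
    y = A x and the max-tree; A_cols[j] lists the (row, value)
    pairs of the nonzeros in column j."""
    for i, a_ij in A_cols[j]:
        y[i] += a_ij * delta
        tree.update(i, y[i])
```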

We show that the updating technique can be coupled efficiently only with the simplest subgradient methods. Similar results can be obtained for a new nonsmooth random variant of a coordinate descent scheme. We also present promising results from preliminary computational experiments.
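To show how such updates couple with the simplest subgradient scheme, here is a sketch (building on the `MaxTree` sketch above; the constant step size `h` and the row/column-list layout are illustrative assumptions, not the speaker's method) that minimizes f(x) = max_i(⟨a_i, x⟩ − b_i). Each iteration reads the active row off the tree root and steps along that sparse subgradient, refreshing only the products its nonzeros touch.

```python
def subgradient_method(A_rows, A_cols, b, x0, steps, h):
    """Plain subgradient method for f(x) = max_i (<a_i, x> - b_i).
    A_rows[i] lists the (column, value) nonzeros of row i; A_cols is
    the column-wise view used by apply_sparse_change."""
    x = x0.copy()
    # Initial products y = A x - b, computed once in full.
    y = np.array([sum(a_ij * x[j] for j, a_ij in row) - b_i
                  for row, b_i in zip(A_rows, b)])
    tree = MaxTree(y)
    for _ in range(steps):
        i_star = tree.argmax()          # row attaining the max: a subgradient
        for j, a_ij in A_rows[i_star]:  # step along the sparse active row
            delta = -h * a_ij
            x[j] += delta
            apply_sparse_change(A_cols, y, tree, j, delta)
    return x, tree.node[1]              # final point and objective value
```

Because only the rows intersecting the support of the active row are touched, an iteration costs far less than a full matrix/vector product when the matrix is uniformly sparse.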

Speakers: Yurii Nesterov
Affiliation: Catholic University of Louvain (UCL)