Scaling Up Stochastic Dual Coordinate Ascent

  • Kenneth Tran ,
  • Saghar Hosseini ,
  • Lin Xiao ,
  • Thomas Finley ,
  • Misha Bilenko

Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining

Published by ACM - Association for Computing Machinery


Stochastic Dual Coordinate Ascent (SDCA) has recently emerged as a state-of-the-art method for solving large-scale supervised learning problems formulated as minimization of convex loss functions. It performs iterative, random-coordinate updates to maximize the dual objective. Due to the sequential nature of the iterations, it is typically implemented as a single-threaded algorithm limited to in-memory datasets. In this paper, we introduce an asynchronous parallel version of the algorithm, analyze its convergence properties, and propose a solution for the primal-dual synchronization required to achieve convergence in practice. In addition, we describe a method for scaling the algorithm to out-of-memory datasets via multi-threaded deserialization of block-compressed data. This approach yields sufficient pseudo-randomness to provide the same convergence rate as random-order in-memory access. Empirical evaluation demonstrates the efficiency of the proposed methods and their ability to fully utilize computational resources and scale to out-of-memory datasets.
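To make the abstract's description concrete, the sketch below shows the basic single-threaded SDCA loop it refers to, using the standard closed-form dual-coordinate update for an L2-regularized linear SVM with hinge loss. This is a minimal illustration of the underlying algorithm, not the paper's asynchronous parallel implementation; the function name and hyperparameters are hypothetical.

```python
import numpy as np

def sdca_hinge(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal single-threaded SDCA sketch for an L2-regularized linear SVM
    (hinge loss). Illustrative only; the paper's contribution is the
    asynchronous parallel and out-of-memory version of this loop."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)   # one dual variable per training example
    w = np.zeros(d)       # primal weights, kept in sync via w = (1/(lam*n)) * sum_i alpha_i x_i
    for _ in range(epochs):
        for i in rng.permutation(n):  # random coordinate order
            xi, yi = X[i], y[i]
            # Closed-form maximizer of the dual objective in coordinate i,
            # projected so that alpha_i * y_i stays in [0, 1].
            proj = alpha[i] * yi + (1.0 - yi * (w @ xi)) * lam * n / (xi @ xi)
            delta = yi * max(0.0, min(1.0, proj)) - alpha[i]
            alpha[i] += delta
            w += delta * xi / (lam * n)  # maintain the primal-dual relation
    return w, alpha

# Tiny linearly separable example.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, alpha = sdca_hinge(X, y)
```

The sequential dependence is visible here: each coordinate update reads the current `w` and immediately writes it back, which is what makes naive parallelization hard and motivates the paper's asynchronous scheme with primal-dual synchronization.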