Optimal Distributed Online Prediction

Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao

Abstract

Online prediction methods are typically studied as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which examples arrive. In this paper, we present the distributed mini-batch (DMB) framework, a method of converting a serial gradient-based online algorithm into a distributed algorithm, and prove an asymptotically optimal regret bound for smooth convex loss functions and stochastic examples. Our analysis explicitly takes into account communication latencies between computing nodes in a network. We also present robust variants, which are resilient to failures and node heterogeneity in an asynchronous distributed environment. Our method can also be used for distributed stochastic optimization, attaining an asymptotically linear speedup. Finally, we empirically demonstrate the merits of our approach on large-scale online prediction problems.
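To make the idea concrete, below is a minimal sketch, not the paper's implementation, of one round of the distributed mini-batch pattern described in the abstract: several workers compute gradients of a smooth loss at a shared predictor over their slices of a mini-batch, the gradients are averaged, and a single serial gradient step is applied before the updated predictor is broadcast. The loss, step sizes, data stream, and function names are illustrative assumptions; communication latency and the asynchronous, failure-resilient variants are not modeled here.

```python
import numpy as np

# Hypothetical illustration of the distributed mini-batch (DMB) pattern:
# k workers share a common predictor w, each averages gradients over its
# slice of a mini-batch, the averaged gradient drives one serial update,
# and the new predictor is broadcast before the next round. The squared
# loss and step-size schedule below are placeholder choices.

def squared_loss_grad(w, x, y):
    """Gradient of the smooth loss 0.5 * (w.x - y)^2 at example (x, y)."""
    return (w @ x - y) * x

def dmb_round(w, worker_batches, eta):
    """One round: average per-worker gradients, then take one gradient step."""
    worker_grads = []
    for batch in worker_batches:          # in practice these run in parallel
        g_local = np.mean([squared_loss_grad(w, x, y) for x, y in batch], axis=0)
        worker_grads.append(g_local)
    g = np.mean(worker_grads, axis=0)     # gradient over the whole mini-batch
    return w - eta * g                    # any serial gradient-based update fits here

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, k, per_worker, rounds = 5, 4, 25, 200
    w_true = rng.normal(size=d)           # synthetic target for the demo
    w = np.zeros(d)
    for t in range(1, rounds + 1):
        batches = []
        for _ in range(k):
            X = rng.normal(size=(per_worker, d))
            y = X @ w_true + 0.1 * rng.normal(size=per_worker)
            batches.append(list(zip(X, y)))
        w = dmb_round(w, batches, eta=0.5 / np.sqrt(t))
    print("distance to target:", np.linalg.norm(w - w_true))
```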

Details

Publication type: Inproceedings
Published in: Proceedings of the 28th International Conference on Machine Learning (ICML)