Graphical Bandits

We consider a setting for nonstochastic multiarmed bandits in which actions are the vertices of a graph G, the edges of G denote similarities between actions, and the payoffs observed after each play are those of the actions in the neighborhood of the played action. This setting interpolates between the standard bandit problem (where G has no edges) and prediction with expert advice (where G is the complete graph). I will describe simple extensions of the Exp3 algorithm applicable to undirected and directed graphs. The corresponding regret bounds are shown to scale with the independence number of G, yielding the known expert and bandit bounds as special cases.
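
A rough sketch of this feedback model, not the exact algorithms from the talk, is given below. It runs an Exp3-style exponential-weights update in which playing an action also reveals the losses of its neighbors, and each observed loss is importance-weighted by the probability of being observed. The function name exp3_graph, the use of losses in [0, 1] rather than payoffs, and the convention that each vertex belongs to its own neighborhood are assumptions made for this sketch. With neighbors[i] = {i} the update reduces to standard Exp3 (bandit feedback); with neighbors[i] equal to the whole vertex set it reduces to exponential weights over experts (full information).

    import numpy as np

    def exp3_graph(losses, neighbors, eta, seed=0):
        """Sketch of an Exp3-style learner with graph (side-observation) feedback.

        losses:    (T, K) array of losses in [0, 1], one row per round.
        neighbors: list of sets; neighbors[i] contains i and every action
                   whose loss is revealed when i is played (undirected case).
        eta:       learning rate.
        Returns the sequence of played actions.
        """
        T, K = losses.shape
        weights = np.ones(K)
        rng = np.random.default_rng(seed)
        played = []

        for t in range(T):
            p = weights / weights.sum()
            i = rng.choice(K, p=p)          # play one action
            played.append(int(i))

            # q[j] = probability that action j's loss is observed this round,
            # i.e. the probability of playing any action whose neighborhood
            # contains j.
            q = np.array([sum(p[a] for a in range(K) if j in neighbors[a])
                          for j in range(K)])

            # Importance-weighted loss estimates for the observed actions only.
            est = np.zeros(K)
            for j in neighbors[i]:
                est[j] = losses[t, j] / q[j]

            # Exponential-weights update.
            weights *= np.exp(-eta * est)

        return played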

Speaker Details

Nicolò Cesa-Bianchi is a Professor of Computer Science at the Università degli Studi di Milano, Italy. He is one of the most influential researchers in online learning; in particular, he is a co-author of the seminal paper [1] as well as the books [2] and [3]. For his publications, see http://homes.di.unimi.it/~cesabian/papers.html.

[1] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.

[2] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.

[3] S. Bubeck and N. Cesa-Bianchi. Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012.

Date:
Speakers:
Nicolò Cesa-Bianchi
Affiliation:
Università degli Studi di Milano

Series: Microsoft Research Talks