Optimal and Adaptive Online Learning

Online learning is one of the most important and well-established learning models in machine learning. Generally speaking, the goal of online learning is to make a sequence of accurate predictions “on the fly” while interacting with the environment. Online learning has been extensively studied in recent years and has also become of great interest to practitioners due to its applicability to large-scale applications such as advertisement placement and recommendation systems.

In this talk, I will present novel, optimal, and adaptive online learning algorithms for three problems. The first is online boosting, a theory for boosting the accuracy of any existing online learning algorithm; the second is combining expert advice more efficiently and adaptively when making online predictions; the last part of the talk is about using data sketching techniques to obtain efficient online learning algorithms that exploit second-order information and perform robustly on ill-conditioned data.
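For readers unfamiliar with the experts setting mentioned above, the sketch below shows the classical Hedge (multiplicative weights) algorithm, the standard baseline for combining expert advice. This is only a minimal illustration of the setting, not the adaptive algorithms presented in the talk; the function name `hedge` and the learning-rate tuning shown in the usage example are illustrative assumptions.

```python
import numpy as np

def hedge(loss_rounds, eta):
    """Classical Hedge (multiplicative weights) for the experts problem.

    loss_rounds: iterable of per-round loss vectors, one entry per expert, each in [0, 1].
    eta: learning rate (a common tuning is sqrt(2 * ln(N) / T) for N experts and T rounds).
    Returns the learner's total expected loss.
    """
    weights = None
    total_loss = 0.0
    for losses in loss_rounds:
        losses = np.asarray(losses, dtype=float)
        if weights is None:
            weights = np.ones(len(losses))        # start with uniform weights over experts
        probs = weights / weights.sum()           # predict by following experts with these probabilities
        total_loss += probs @ losses              # expected loss suffered this round
        weights *= np.exp(-eta * losses)          # exponentially downweight poorly performing experts
    return total_loss

# Usage: 100 rounds, 5 experts, random losses in [0, 1]
rng = np.random.default_rng(0)
losses = rng.random((100, 5))
print(hedge(losses, eta=np.sqrt(2 * np.log(5) / 100)))
```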

Speaker Details

Haipeng Luo is currently a fifth-year graduate student working with Prof. Rob Schapire at Princeton. His main research interest is in theoretical and applied machine learning, with a focus on adaptive and robust online learning and its connections to boosting, optimization, stochastic learning, and game theory. He won the Wu Prize for Excellence and two best paper awards (ICML and NIPS) in 2015.

Date:
Speakers: Haipeng Luo
Affiliation: Princeton University