Optimizing Trade-offs for Scalable Machine Learning

Modern machine learning applications require large models, large amounts of data, and complex optimization. I will discuss scaling machine learning by decomposing learning problems into simpler sub-problems. These decompositions allow us to trade off accuracy, computational complexity, and potential for parallelization, where a small sacrifice in one can yield a large gain in another. Moreover, we can tailor the decomposition to our model and data in order to optimize these trade-offs.

I will present two examples. First, I will discuss parallel optimization for sparse regression. Our Shotgun algorithm parallelizes coordinate descent, a seemingly sequential method. Shotgun theoretically achieves near-linear speedups and empirically is one of the fastest methods for multicore sparse regression. Second, I will discuss parameter learning for Probabilistic Graphical Models. Using structured composite likelihood, we prove finite sample complexity bounds for general Markov and Conditional Random Fields, and our bounds indicate how to choose estimator structure in practice. In both examples, our analysis provides strong theoretical guarantees that guide our practical implementations.
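To make the two decompositions concrete, the sketches below illustrate the underlying ideas; they are not the speaker's implementations. The first is a minimal Python sketch of Shotgun-style parallel coordinate descent for the Lasso (the function name shotgun_lasso and the parameters P, n_iters, and seed are illustrative choices, not taken from the talk). Each round computes P coordinate updates from the same stale residual, as P parallel workers would, and then applies them together.

    import numpy as np

    def soft_threshold(z, t):
        # Soft-thresholding operator: argmin_x 0.5*(x - z)^2 + t*|x|
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def shotgun_lasso(A, y, lam, P=8, n_iters=200, seed=0):
        # Toy Shotgun-style parallel coordinate descent for the Lasso:
        #     min_x 0.5*||Ax - y||^2 + lam*||x||_1
        # Each round picks P coordinates at random and updates them from the
        # same (stale) residual, simulating P simultaneous parallel updates.
        rng = np.random.default_rng(seed)
        n, d = A.shape
        x = np.zeros(d)
        col_norms = (A ** 2).sum(axis=0) + 1e-12   # per-coordinate curvature
        residual = y - A @ x
        for _ in range(n_iters):
            coords = rng.choice(d, size=min(P, d), replace=False)
            # Compute all P updates from the residual as of the start of the
            # round, as parallel workers would; then apply them together.
            deltas = {}
            for j in coords:
                zj = x[j] + A[:, j] @ residual / col_norms[j]
                deltas[j] = soft_threshold(zj, lam / col_norms[j]) - x[j]
            for j, dj in deltas.items():
                x[j] += dj
                residual -= dj * A[:, j]
        return x

In the Shotgun analysis, the number of updates P that can be applied in parallel while preserving near-linear speedup is limited by how correlated the features are; beyond that point, simultaneous updates begin to conflict.

The second idea, structured composite likelihood, replaces the full log-likelihood of a Markov or Conditional Random Field with a sum of conditional log-likelihoods over chosen variable subsets ("components") A_1, ..., A_m, roughly:

    \ell_{\mathrm{CL}}(\theta) = \sum_{i=1}^{m} \log p_\theta\left( x_{A_i} \mid x_{V \setminus A_i} \right)

Pseudolikelihood is the special case in which each component is a single variable; the "structure" of the estimator is the choice of components, which is what the talk's sample complexity bounds are meant to guide.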

Speaker Details

Joseph Bradley is a Ph.D. candidate in Machine Learning at Carnegie Mellon University, advised by Carlos Guestrin. His thesis is on learning large-scale Probabilistic Graphical Models, focusing on methods which decompose problems to take advantage of parallel computation. Previously, he received a B.S.E. in Computer Science from Princeton University.

Date:
Speakers: Joseph Bradley
Affiliation: Carnegie Mellon University