Introduction to large-scale optimization – Part 1

These lectures will cover both the basics and cutting-edge topics in large-scale convex and nonconvex optimization (continuous problems only). Examples include stochastic convex optimization, variance-reduced stochastic gradient methods, coordinate descent methods, proximal methods, operator splitting techniques, and more. The lectures will also cover the relevant mathematical background, as well as some pointers to interesting directions for future research (time permitting).
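
As a concrete taste of one of these topics: proximal methods tackle composite objectives f(x) + g(x), where f is smooth and g (for example, an l1 penalty) admits a cheap proximal operator. The following sketch is illustrative only and not drawn from the lecture materials; it applies proximal gradient descent (ISTA) to a lasso problem in Python/NumPy, and the step size, regularization weight, and synthetic data are all assumptions.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1 (soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista(A, b, lam, num_iters=500):
        # Proximal gradient descent (ISTA) for the lasso:
        #     minimize 0.5 * ||A x - b||^2 + lam * ||x||_1
        # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant
        # of the gradient of the smooth least-squares term.
        L = np.linalg.norm(A, 2) ** 2
        step = 1.0 / L
        x = np.zeros(A.shape[1])
        for _ in range(num_iters):
            grad = A.T @ (A @ x - b)  # gradient of the smooth term
            x = soft_threshold(x - step * grad, step * lam)
        return x

    # Illustrative usage on synthetic data (all values assumed).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 20))
    x_true = np.zeros(20)
    x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = ista(A, b, lam=0.1)

The per-iteration split, a gradient step on the smooth part followed by the proximal operator of the nonsmooth part, is the pattern shared by the proximal and operator splitting methods mentioned above.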

Speaker Details

Suvrit Sra is a Research Scientist at the Max Planck Institute for Intelligent Systems (formerly Biological Cybernetics) in Tübingen, Germany. He obtained his M.S. and Ph.D. in Computer Science from the University of Texas at Austin in 2007, and a B.E. (Hons.) in Computer Science from BITS, Pilani (India) in 1999. His main research focus is large-scale optimization (convex, nonconvex, deterministic, stochastic, etc.), most notably for applications in machine learning, scientific computing, and computational statistics. He takes an avid interest in various flavors of analysis, especially convex, harmonic, and matrix analysis.

His research has won awards at several international venues, most recently the SIAM Outstanding Paper Prize (2011) for his work on metric nearness. He regularly organizes the Neural Information Processing Systems (NIPS) workshops on “Optimization for Machine Learning”.

Date:
Speakers: Suvrit Sra
Affiliation: Max Planck Institute for Intelligent Systems