The past two decades have seen machine learning (ML) transformed from an academic curiosity into a multi-billion-dollar industry and a centerpiece of our economic, social, scientific, and security infrastructure. Much work in machine learning has drawn on research in optimization, motivated by large-scale applications requiring analysis of massive high-dimensional data.
In this talk, I’ll argue that the growing importance of networked data environments, from the Internet to cloud computing, requires a fundamental rethinking of our basic analytic tools. My thesis will be that ML needs to shift from its current focus on optimization to equilibration: from modeling the world as uncertain but stationary and benign to modeling it as non-stationary, competitive, and potentially malicious.
Adapting to this new world will require developing new ML frameworks and algorithms. My talk will introduce one such framework — equilibration using variational inequalities and projected dynamical systems — which not only generalizes optimization but is better suited to the distributed, networked, cloud-oriented future that ML faces.
To explain this paradigm change, I’ll begin by summarizing the au courant optimization-based approach to ML using recent research in the Autonomous Learning Laboratory. I will then present an equilibration-based framework using variational inequalities and projected dynamical systems, which originated in mathematics for solving partial differential equations in physics, but has since been widely applied in its finite-dimensional formulation to network equilibrium problems in economics, transportation, and other areas. I’ll describe a range of algorithms for solving variational inequalities, showing how their scope allows ML to extend beyond optimization to finding game-theoretic equilibria, solving complementarity problems, and addressing many other problems.
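To give a flavor of the simplest of these algorithms, here is a minimal sketch of the basic projection method for a finite-dimensional variational inequality VI(F, K): find x* in K such that ⟨F(x*), x − x*⟩ ≥ 0 for all x in K. The specific choices below are illustrative assumptions, not from the talk: F is the affine monotone map F(x) = Ax + b with A symmetric positive definite, and K is the nonnegative orthant, whose Euclidean projection is a componentwise clip at zero.

```python
import numpy as np

def solve_vi(A, b, step=0.2, iters=1000):
    """Basic projection method: iterate x <- P_K(x - step * F(x)),
    where F(x) = A x + b and P_K clips to the nonnegative orthant.
    A fixed point of this map is a solution of VI(F, K)."""
    x = np.zeros(len(b))
    for _ in range(iters):
        x = np.maximum(0.0, x - step * (A @ x + b))  # project onto K
    return x

# Illustrative instance (my choice of constants): the unconstrained
# solution of A x = -b is (1/3, 1/3), which already lies in K, so the
# VI solution coincides with it.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-1.0, -1.0])
x_star = solve_vi(A, b)
```

When F is the gradient of a convex function, this iteration reduces to projected gradient descent, which is one sense in which the VI framework strictly generalizes optimization; for merely monotone (non-gradient) maps, variants such as the extragradient method are used instead.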