Large matrices beyond singular value decomposition

A number of data sets are naturally described in matrix form. Examples range from microarrays to collaborative filtering data to the set of pairwise distances of a cloud of points. In many of these examples, singular value decomposition (SVD) provides an efficient way to construct a low-rank approximation, thus achieving both dimensionality reduction and effective denoising. SVD is also an important tool in the design of approximate linear algebra algorithms for massive data sets.
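As a concrete illustration of this idea (not taken from the talk itself), the following minimal Python/NumPy sketch computes the best rank-k approximation of a matrix via truncated SVD; the matrix, noise level, and rank are illustrative placeholders.

import numpy as np

def low_rank_approx(M, k):
    """Best rank-k approximation of M (Eckart-Young) via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    # Keep only the k largest singular values and their singular vectors.
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Illustrative example: denoising a noisy rank-2 matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 50))  # true rank-2 signal
M_noisy = M + 0.1 * rng.standard_normal(M.shape)                  # add Gaussian noise
M_hat = low_rank_approx(M_noisy, 2)
# The truncated SVD is closer to the true signal than the noisy observation.
print(np.linalg.norm(M - M_hat) < np.linalg.norm(M - M_noisy))    # True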

It is a recent discovery that, for important classes of matrices, SVD is sub-optimal and can be significantly improved upon. There has been considerable progress on this topic over the last two years, partly spurred by interest in the Netflix challenge. I will overview this progress and describe recent results on several specific problems, as well as outstanding challenges.

Speaker Details

Andrea Montanari received a Laurea degree in Physics in 1997 and a Ph.D. in Theoretical Physics in 2001, both from Scuola Normale Superiore in Pisa, Italy. He has been a post-doctoral fellow at the Laboratoire de Physique Théorique de l’Ecole Normale Supérieure (LPTENS) in Paris, France, and at the Mathematical Sciences Research Institute in Berkeley, USA. Since 2002 he has been Chargé de Recherche (with the Centre National de la Recherche Scientifique, CNRS) at LPTENS. In September 2006 he joined Stanford University as a faculty member, and since 2010 he has been an Associate Professor in the Departments of Electrical Engineering and Statistics.
He was co-awarded the ACM SIGMETRICS best paper award in 2008. He received the CNRS bronze medal for theoretical physics in 2006 and the National Science Foundation CAREER award in 2008.

Speakers:
Andrea Montanari
Affiliation:
Stanford University