Speaker: Santosh Vempala
Host: Yael Kalai
Affiliation: Georgia Tech
Date recorded: 23 March 2011
Standard principal components are directions that optimize the second moments of a given set of points, and they have proven to be a powerful tool. Here we consider higher-order principal components, i.e., directions that optimize higher moments of a data set (or the spectral norm of higher-dimensional arrays). These appear to be much less structured: there can be exponentially many, they need not be pairwise orthogonal, and it is NP-hard to find global maxima for arbitrary inputs. We discuss applications to combinatorial optimization and learning: (a) finding a planted clique in a random graph, where higher-order maxima would be effective even for semi-random inputs, and (b) learning an unknown function of a low-dimensional subspace from labeled examples (a k-subspace junta, generalizing the well-known class of k-juntas), where *local* optima suffice and can be approximated efficiently for a wide class of input distributions. Most of the talk is joint work with Ying Xiao.
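To make the contrast concrete, here is a minimal numpy sketch (not from the talk; the function names and the tensor power-iteration recipe are illustrative assumptions). It computes the ordinary top principal component, which maximizes the second moment, and then a third-order analogue found by iterating the gradient map, which, as the abstract notes, can only be expected to reach a *local* maximum:

```python
import numpy as np

rng = np.random.default_rng(0)

def principal_component(X):
    # Standard PCA direction: the unit vector u maximizing sum_i (x_i . u)^2,
    # i.e. the top eigenvector of the second-moment matrix X^T X.
    vals, vecs = np.linalg.eigh(X.T @ X)
    return vecs[:, -1]

def third_order_direction(X, iters=200):
    # Higher-order analogue: locally maximize sum_i (x_i . u)^3 over unit u.
    # The gradient is proportional to sum_i (x_i . u)^2 * x_i, so repeatedly
    # applying and renormalizing it is a tensor power iteration. This finds
    # a local maximum only; the global problem is NP-hard in general.
    u = rng.standard_normal(X.shape[1])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        g = X.T @ ((X @ u) ** 2)  # gradient direction of the cubic objective
        norm = np.linalg.norm(g)
        if norm < 1e-12:          # stalled at a (near-)critical point
            break
        u = g / norm
    return u

# Toy data: 200 points in R^5.
X = rng.standard_normal((200, 5))
u2 = principal_component(X)    # second-moment optimum (unique up to sign)
u3 = third_order_direction(X)  # one of possibly many third-moment local optima
```

Restarting `third_order_direction` from different random initializations generally yields different local optima, illustrating the abstract's point that higher-order components need not be unique or pairwise orthogonal.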
©2011 Microsoft Corporation. All rights reserved.