
Speaker: Maryam Fazel
Affiliation: University of Washington
Host: Ofer Dekel
Duration: 00:25:39
Date recorded: 26 October 2012

The problem of deriving a structured model from a small number of linear observations by solving a convex optimization problem has been well-studied in recent years. Examples include the recovery of sparse or group-sparse vectors (which gave rise to the area of compressed sensing), low-rank matrices (arising in collaborative filtering and system identification), and sums of sparse and low-rank matrices (arising in PCA with sparse outliers and in graphical models).

In many applications in signal processing and machine learning, the model of interest is known to be structured in several ways at the same time, for example, a matrix that is simultaneously sparse and low-rank. One application in signal processing is the classic sparse phase retrieval problem, where the goal is to recover a sparse signal from phaseless (magnitude-only) measurements. In machine learning, the problem arises when combining several regularizers that each promote a particular desired structure.

In this work, we address two questions: what convex relaxation should we use for the recovery of a "simultaneously structured" model, and how many measurements are needed generically? Penalties (norms) that promote each individual structure are often known (e.g., the l1 norm for sparsity, the nuclear norm for matrix rank), so it is natural to minimize a combination of these norms. We show that, surprisingly, if the objective is any convex joint function of the individual norms, we can do no better than an algorithm that exploits only one of the several structures. We then specialize our result to the case of simultaneously sparse and low-rank matrices, and present numerical simulations that support the theoretical bounds on the required number of observations.
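As a minimal illustration of the combined-norm objective discussed in the abstract, the sketch below evaluates a weighted sum of the entrywise l1 norm (promoting sparsity) and the nuclear norm (promoting low rank) on a matrix. The weights `lam1` and `lam2` and the function name are hypothetical choices for this sketch, not part of the talk; a recovery algorithm would minimize this penalty subject to the linear measurement constraints.

```python
import numpy as np

def combined_penalty(X, lam1=1.0, lam2=1.0):
    """Hedged sketch of a combined structure-promoting penalty:
    lam1 * ||X||_1 (entrywise l1, promotes sparsity)
    + lam2 * ||X||_* (nuclear norm, promotes low rank).
    lam1 and lam2 are illustrative weights, not values from the talk."""
    l1 = np.abs(X).sum()                                 # entrywise l1 norm
    nuc = np.linalg.svd(X, compute_uv=False).sum()       # sum of singular values
    return lam1 * l1 + lam2 * nuc

# Example: a matrix that is simultaneously sparse and rank-1.
x = np.zeros(5)
x[0] = 1.0
X = np.outer(x, x)          # single nonzero entry, rank 1
print(combined_penalty(X))  # l1 norm = 1, nuclear norm = 1, total = 2.0
```

The talk's main result says that no convex combination of such norms (indeed, no convex joint function of them) can reduce the generic number of measurements below what the best single-structure penalty achieves.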
©2012 Microsoft Corporation. All rights reserved.
