
ICCV 2013 Tutorial 

Title: Don't Relax: Why Non-Convex Algorithms are Sometimes Needed for Sparse Estimation

Time: 9am-1pm, December 1, 2013

Presenter: David Wipf

Abstract: At the most basic level, sparse estimation involves searching for a vector with mostly zero-valued elements that nonetheless allows us to explain data described by an observational model of interest. It has emerged as an important concept in diverse fields including computational neuroscience, signal/image processing, computer vision, and machine learning. While classical convex, penalized regression algorithms are frequently employed for sparse estimation (e.g., the Lasso), non-convex Bayesian inference techniques have recently demonstrated significant improvements in certain settings, sometimes provably so by exploiting posterior information beyond the mode. The purpose of this tutorial is to describe various Bayesian methodologies, explain when and why they may be relevant for promoting sparsity, and clarify some persistent misunderstandings regarding how these models should be justified and interpreted. In this context, we will consider Bayesian-inspired methods for solving a variety of inter-related outlier detection, source localization, blind deconvolution, and general sparse linear inverse problems.
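As a concrete illustration of the basic setup described above, the minimal sketch below builds a sparse linear inverse problem y = Ax + noise and recovers x with a standard convex L1 (Lasso) solver from scikit-learn. The problem sizes, noise level, and regularization weight are illustrative assumptions chosen for this example, not values from the tutorial itself.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Sparse linear inverse problem: y = A @ x_true + noise,
# where x_true has only a few nonzero entries.
# Sizes below are illustrative assumptions.
n_obs, n_feat, n_nonzero = 50, 100, 5
A = rng.standard_normal((n_obs, n_feat))
x_true = np.zeros(n_feat)
support = rng.choice(n_feat, n_nonzero, replace=False)
x_true[support] = rng.standard_normal(n_nonzero)
y = A @ x_true + 0.01 * rng.standard_normal(n_obs)

# Classical convex relaxation: minimize a least-squares data fit
# plus alpha * ||x||_1 (the Lasso).
lasso = Lasso(alpha=0.01)
lasso.fit(A, y)
x_hat = lasso.coef_

print("true support:     ", sorted(support))
print("estimated support:", sorted(np.flatnonzero(np.abs(x_hat) > 1e-3)))
```

In well-conditioned regimes like this one, the convex L1 penalty typically recovers the correct support; the tutorial's focus is on the settings where it does not, and where non-convex alternatives help.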

This tutorial is suitable for anyone with a basic familiarity with signal processing and some limited exposure to Bayesian machine learning concepts. While most attendees will likely have already encountered sparse estimation theory, the underlying principles will be explained starting from an introductory level before proceeding to more advanced topics.

Tentative Outline:

  1. Introduction to sparse estimation
  2. Traditional convex L1-norm algorithms
  3. Fundamental limitations of convexity
  4. Non-convex Bayesian-inspired alternatives (see the sketch after this outline)
  5. Theoretical analysis
  6. Applications
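For outline item 4, one simple non-convex strategy is sketched below: iteratively reweighted least squares (IRLS) for an Lp penalty with p < 1, which penalizes large coefficients less aggressively than L1 and can drive more coefficients exactly toward zero. This is an assumed illustration of the general idea, not necessarily the specific algorithms presented in the tutorial; the function name and default parameters are hypothetical.

```python
import numpy as np

def irls_lp(A, y, p=0.5, lam=1e-3, eps=1e-6, n_iter=50):
    """IRLS sketch for the non-convex penalty sum_i (x_i^2 + eps)^(p/2),
    0 < p < 1 (an illustrative scheme, not the tutorial's exact method).
    Each iteration solves a weighted ridge problem whose weights adapt
    to the current sparsity pattern."""
    # Ordinary least-squares initialization.
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        # Weights w_i = (x_i^2 + eps)^(p/2 - 1) grow for small x_i,
        # shrinking those coefficients further on the next solve.
        w = (x**2 + eps) ** (p / 2 - 1)
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x
```

Because the effective penalty is non-convex, the iterations can converge to a local minimum; part of the tutorial's theoretical analysis concerns when such non-convex schemes nonetheless provably outperform their convex counterparts.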

TUTORIAL SLIDES