Learning Theory

Established: January 28, 2014

We work on questions motivated by machine learning, approached from theoretical and computational perspectives. Our goals are to understand mathematically why existing learning algorithms are effective and to design new learning algorithms. We combine expertise from diverse fields such as algorithms and complexity, statistics, and convex geometry.

We are interested in a broad range of problems; for example, we have studied the following.

  1. Can we rigorously prove the effectiveness of practical algorithms? For neural networks, we show that under suitable conditions, the classic gradient descent algorithm, combined with local perturbation, can effectively learn low-degree polynomials (see the first sketch after this list). For deep networks, we show that for certain distributions there is a natural algorithm that grows networks in a bottom-up fashion.
  2. Can we learn more effectively by leveraging the structure of the target functions? We have designed regression algorithms that learn sparse low-degree polynomials with complexity depending on the sparsity (see the second sketch after this list), nearly optimal high-dimensional linear regression for arbitrary design matrices by exploiting the geometry of the design, and regression-based algorithms for learning convex polytopes using new limit theorems in probability.
  3. Can we learn while preserving the privacy of our sample points? We show a close connection between differential privacy and robust statistics, and we show how to modify boosting to ensure differential privacy (see the third sketch after this list).
  4. What is the complexity of fundamental machine learning tasks? We attempt to shed light on the boundary between tractable and intractable problems using tools from computational complexity and cryptography.
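As an illustration of the first question, here is a minimal sketch of the general recipe rather than the specific algorithm analyzed in our work: gradient descent with a small random perturbation added to each update, training a one-hidden-layer network to fit a low-degree polynomial target. All names, hyperparameters, and the particular target are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative low-degree polynomial target: f(x) = x1*x2 + 0.5*x3^2.
def target(X):
    return X[:, 0] * X[:, 1] + 0.5 * X[:, 2] ** 2

d, hidden, n = 3, 32, 2000
X = rng.standard_normal((n, d))
y = target(X)

# One-hidden-layer network with tanh activations.
W1 = rng.standard_normal((d, hidden)) * 0.5
b1 = np.zeros(hidden)
w2 = rng.standard_normal(hidden) * 0.5

lr, noise_scale = 0.05, 1e-3
for step in range(2000):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    pred = H @ w2
    err = pred - y                      # squared-loss residual
    # Backpropagate the squared loss.
    g_w2 = H.T @ err / n
    g_H = np.outer(err, w2) * (1 - H ** 2)
    g_W1 = X.T @ g_H / n
    g_b1 = g_H.mean(axis=0)
    # Gradient step plus a small random perturbation of each parameter:
    # the "local perturbation" that helps the iterates leave flat regions.
    W1 -= lr * g_W1 + noise_scale * rng.standard_normal(W1.shape)
    b1 -= lr * g_b1 + noise_scale * rng.standard_normal(b1.shape)
    w2 -= lr * g_w2 + noise_scale * rng.standard_normal(w2.shape)

print("final mse:", np.mean((np.tanh(X @ W1 + b1) @ w2 - y) ** 2))
```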
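For the second question, one standard way to exploit sparsity, again a hedged sketch and not necessarily the algorithm from our papers, is to expand the input into all low-degree monomial features and fit with an L1 penalty, so that the fit concentrates on the few active monomials:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)

# Sparse degree-2 polynomial in 20 variables: only 3 monomials are active.
n, d = 500, 20
X = rng.standard_normal((n, d))
y = 2.0 * X[:, 0] * X[:, 3] - 1.5 * X[:, 7] + 0.5 * X[:, 2] ** 2

# Expand into all monomials of degree <= 2, then fit with an L1 penalty;
# the penalty drives irrelevant monomial coefficients to exactly zero.
poly = PolynomialFeatures(degree=2, include_bias=False)
Phi = poly.fit_transform(X)
model = Lasso(alpha=0.05).fit(Phi, y)

names = poly.get_feature_names_out([f"x{i}" for i in range(d)])
recovered = [(nm, c) for nm, c in zip(names, model.coef_) if abs(c) > 1e-2]
print(recovered)  # should be close to the 3 true monomials
```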
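For the third question, the basic primitive underlying differential privacy is simple to state. Below is a sketch of the standard (textbook) Laplace mechanism for a counting query; it is the basic building block, not the modified boosting algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(2)

def dp_count(data, predicate, epsilon):
    """Release the number of records satisfying `predicate` with
    epsilon-differential privacy via the Laplace mechanism.

    A counting query changes by at most 1 when one record is added
    or removed (sensitivity 1), so Laplace noise of scale 1/epsilon
    suffices for epsilon-differential privacy.
    """
    true_count = sum(1 for x in data if predicate(x))
    return true_count + rng.laplace(scale=1.0 / epsilon)

ages = [23, 35, 41, 29, 60, 33, 48]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))
```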

While our own investigation is just starting, machine learning at the Silicon Valley Center has deep roots. Much work has been done on applying machine learning in real systems, such as performance diagnosis in distributed systems and motion analysis in Kinect; on building distributed systems, such as DryadLINQ and Dandelion, that support large-scale machine learning; and on developing algorithms, such as locality-sensitive hashing and spectral partitioning, that are widely useful for machine learning tasks. Much work has also been done in the complementary area of differential privacy. By collaborating with colleagues, we hope to produce work that is both insightful and fruitful.