Microsoft Research India Theory Day: 3rd Annual Symposium

Microsoft Research India, in collaboration with the Indian Institute of Technology Madras (IITM), will host the Third Annual Microsoft Research India Theory Day on 2 January 2010 at the ICSR Auditorium on the IITM campus in Chennai, India.

The Theory Day is a day-long annual event featuring talks by world-renowned researchers in theoretical computer science (TCS) on their latest research. The talks are intended to be accessible to advanced post-graduate students while remaining valuable to established researchers. We hope the Theory Day will inspire and stimulate young researchers to make new advances in this exciting field.

This page provides information on the Theory Day program and key dates. Please note that past experience suggests many more students are interested in attending than we can accommodate, so we must select student attendees through a competitive process. While students will comprise the bulk of the attendees, we also expect participation from post-doctoral researchers, faculty members, and research staff, selected based on the applications we receive.

Venue

ICSR Auditorium, Indian Institute of Technology Madras (IITM) campus, Chennai, India

Speakers

The following are the speakers for the Theory Day. Talk abstracts are given at the bottom of this page.

Naveen Garg, Indian Institute of Technology, Delhi 
Navin Goyal, Microsoft Research India 
Piotr Indyk, Massachusetts Institute of Technology 
Satya Lokam, Microsoft Research India 
R. Ravi, Carnegie Mellon University 
Manfred K. Warmuth, University of California, Santa Cruz 


Agenda

0800 - 0900  Registration & Breakfast 
0900 - 0915  Welcome & Introductions 
0915 - 1015  Manfred K. Warmuth - The blessing and the curse of the multiplicative updates - connections between in vitro selection and the multiplicative updates of online learning
1015 - 1045  Tea & Coffee break 
1045 - 1145  Naveen Garg - Minimizing Average Flow-time
1145 - 1215  Satya Lokam - Matrix Rigidity and Complexity of Linear Transformations
1215 - 1330 Lunch 
1330 - 1430  R. Ravi - Iterative Methods in Combinatorial Optimization
1430 - 1500  Navin Goyal - Algorithms for the Lovász Local Lemma
1500 - 1530  Tea & Coffee break 
1530 - 1630  Piotr Indyk - Sparse Recovery Using Sparse Random Matrices
1630 - 1645  Closing 
1645 - 1730  High Tea 


Organizing Committee

Ravi Kannan, Microsoft Research India
Satya Lokam, Microsoft Research India
C. Pandu Rangan, Indian Institute of Technology Madras

Important Dates

Theory Day date: Saturday, January 2, 2010

How to attend the Theory Day event:

  • Chennai-based participants can register for the program by sending an email to msrtheoryday@endtoend.in. Please include your name, email ID, organization, and whether you are a student, faculty member, researcher, or industry professional.
  • The selection process for outstation attendees is complete, and we cannot accommodate any more attendees from outside Chennai.
  • Microsoft Research India will cover travel, lodging, and boarding for all selected attendees traveling from within India but outside Chennai.
  • There is no fee to attend the Theory Day.

For any enquiries, please email msrtheoryday@endtoend.in.

Previous Editions of Theory Day

First annual Microsoft Research India Theory Day - December 2007
Second annual Microsoft Research India Theory Day - December 2008

Talk Abstracts

Title: Minimizing Average Flow-time
Speaker: Naveen Garg, Indian Institute of Technology, Delhi

In this talk, I will consider the problem of scheduling jobs on multiple machines, in both the online and offline settings. I will attempt to identify the key ideas in recent work on this problem for different machine models.
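
For reference, and not part of the original abstract: in the standard model, job j arrives at release time r_j and completes at time C_j; the flow time of job j and the average flow-time objective are

```latex
F_j \;=\; C_j - r_j, \qquad \text{minimize} \;\; \frac{1}{n} \sum_{j=1}^{n} F_j .
```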

Title: Algorithms for the Lovász Local Lemma
Speaker: Navin Goyal, Microsoft Research India

The Lovász Local Lemma is a powerful tool in probability theory. It asserts that the probability that none of a set of bad events happens is nonzero, provided the probability of each event is small compared to the number of events that depend on it. The local lemma finds applications in a wide variety of problems: it is used to show the existence of objects such as satisfying assignments of certain Boolean formulas, or efficient packet-routing schemes in networks. However, the original proof of the local lemma only establishes existence and does not provide an efficient algorithm to construct such objects.

A steady line of work on making the local lemma algorithmic recently culminated in the work of Moser, who gave an efficient randomized algorithm for a general version of the local lemma. More recently, with Karthekeyan Chandrasekaran and Bernhard Haeupler, we found a deterministic algorithm for a fairly general version of the local lemma. In this talk, I will survey these developments and sketch some of the proofs.
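
As a concrete illustration of the randomized algorithm mentioned above, here is a minimal Python sketch of the resampling idea applied to k-SAT (the function name, demo formula, and round budget are ours, not from the talk): repeatedly pick a violated clause and re-randomize its variables.

```python
import random

def resample_ksat(clauses, n, max_rounds=100000):
    """Moser-style resampling for k-SAT.

    clauses: list of clauses; each clause is a list of nonzero ints in
             DIMACS style (literal v is satisfied when variable |v| has
             truth value (v > 0)).
    n:       number of variables, numbered 1..n.
    Returns a satisfying assignment (bools for variables 1..n), or None
    if none was found within the round budget.
    """
    assign = [random.random() < 0.5 for _ in range(n + 1)]  # index 0 unused

    def satisfied(clause):
        return any(assign[abs(l)] == (l > 0) for l in clause)

    for _ in range(max_rounds):
        violated = [c for c in clauses if not satisfied(c)]
        if not violated:
            return assign[1:]
        # Resample every variable of one violated clause uniformly at random.
        for l in random.choice(violated):
            assign[abs(l)] = random.random() < 0.5
    return None

# Tiny demo: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(resample_ksat([[1, 2], [-1, 3], [-2, -3]], n=3))
```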

Title: Sparse Recovery Using Sparse Random Matrices
Speaker: Piotr Indyk, Massachusetts Institute of Technology

In recent years, a new linear method for compressing high-dimensional data (e.g., images) has been discovered. For any high-dimensional vector x, its sketch is equal to Ax, where A is an m x n matrix (possibly chosen at random). Although typically the sketch length m is much smaller than the number of dimensions n, the sketch contains enough information to recover an approximation to x. At the same time, the linearity of the sketching method is very convenient for many applications, such as data-stream computing and compressed sensing.

The major sketching approaches can be classified as either combinatorial (using sparse sketching matrices) or geometric (using dense sketching matrices). They achieve different trade-offs, notably between the compression rate and the running time. Thus, it is desirable to understand the connections between them, with the goal of obtaining the "best of both worlds" solution.
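
To make the combinatorial (sparse-matrix) style concrete, here is a small Python sketch in the spirit of the Count-Min approach: each column of the implicit 0/1 matrix A has one nonzero per hash row, and a nonnegative coordinate is estimated as the minimum of its buckets. This illustrates sparse sketching in general, not the SSMP algorithm described in the talk; all names and parameters are ours.

```python
import random

def countmin_sketch(x, d=5, w=64, seed=1):
    """Sketch nonnegative x with a sparse 0/1 matrix: d hash rows of w
    buckets each, so the sketch length is m = d * w and every column of
    the implicit m x n matrix A has exactly d ones."""
    rng = random.Random(seed)
    n = len(x)
    hashes = [[rng.randrange(w) for _ in range(n)] for _ in range(d)]
    table = [[0.0] * w for _ in range(d)]
    for j, xj in enumerate(x):
        if xj:
            for r in range(d):
                table[r][hashes[r][j]] += xj
    return table, hashes

def countmin_estimate(table, hashes, j):
    """Point query: the minimum bucket upper-bounds x_j and, with high
    probability, closely approximates it when x is sparse and nonnegative."""
    return min(row[h[j]] for row, h in zip(table, hashes))

# Tiny demo: a 3-sparse vector in R^1000, sketched down to length 5 * 64 = 320.
x = [0.0] * 1000
x[10], x[500], x[999] = 7.0, 3.0, 5.0
table, hashes = countmin_sketch(x)
print([countmin_estimate(table, hashes, j) for j in (10, 500, 999)])
```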

Several recent results established such connections, indicating that the two approaches are just different manifestations of the same underlying phenomenon. This enabled the development of novel algorithms, including the first algorithms that provably achieve the (asymptotically) optimal compression rate and near-linear recovery time simultaneously.

In this talk we give an overview of the results in the area, as well as a more detailed look at some of them. In particular, we will describe a new algorithm called "Sequential Sparse Matching Pursuit" (SSMP). In addition to having the aforementioned theoretical guarantees, the algorithm works well on real data, with recovery quality often outperforming that of more complex algorithms, such as l_1 minimization.

Joint work with: Radu Berinde, Anna Gilbert, Howard Karloff, Milan Ruzic and Martin Strauss.

Title: Matrix Rigidity and Complexity of Linear Transformations
Speaker: Satya Lokam, Microsoft Research India

A matrix A is said to be (k, r)-rigid if at least k entries of A must be altered to reduce its rank to a value ≤ r. A counting/dimension argument shows that almost all n x n matrices over an infinite field are ((n-r)^2, r)-rigid. The challenge is to construct explicit matrices that are (n^(1+δ), εn)-rigid for positive constants ε and δ. Initial motivation for this goal comes from a result due to Valiant (1977), who introduced the notion of rigidity and proved that the linear transformation given by an (n^(1+δ), εn)-rigid matrix requires superlinear-size arithmetic circuits of logarithmic depth.
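
In the standard notation (spelled out here for reference, not taken from the abstract), the rigidity function of a matrix A is

```latex
R_A(r) \;=\; \min\bigl\{\, \#\{(i,j) : B_{ij} \neq A_{ij}\} \;:\; \operatorname{rank}(B) \le r \,\bigr\},
```

so A is (k, r)-rigid exactly when R_A(r) ≥ k, and the counting argument above says that R_A(r) = (n-r)^2 for almost all n x n matrices A.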

We will review some recent constructions of (Ω(n^2), εn)-rigid matrices with algebraic numbers as entries. While these matrices have a succinct mathematical description, e.g., entries are square roots of distinct primes, they are not explicit in the sense of being efficiently constructible by a Boolean model. The techniques used to prove rigidity lower bounds on these matrices also help us prove nearly-quadratic lower bounds on the arithmetic complexity of the corresponding linear transformations. These are stronger bounds than implied by Valiant's criterion.

Some of the results reported here are joint work with Abhinav Kumar, Vijay Patankar, and Jayalal Sarma.

Title: Iterative Methods in Combinatorial Optimization
Speaker: R. Ravi, Carnegie Mellon University

In this talk, I will describe a simple iterative method for proving a variety of results in combinatorial optimization. It is inspired by Jain's iterative rounding method for designing approximation algorithms for survivable network design problems, augmented with a relaxation idea used by Lau, Naor, Salavatipour, and Singh in designing an approximation algorithm for the degree-bounded version of the problem. Its application was further refined in recent work of Bansal, Khandekar, and Nagarajan on degree-bounded directed network design.

In this talk, I will describe its application to showing the integrality of Edmonds' characterization of the spanning tree polyhedron, and then extend the argument to give a simpler proof of the Lau-Singh approximation algorithm for its degree-constrained version. I will conclude by showing how Jain's original proof can be much simplified from the new perspective, by unifying its treatment with that of the STSP problem.
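
For reference (the standard statement, included for readers and not part of the original abstract): Edmonds' characterization says that the spanning tree polyhedron of a graph G = (V, E) is exactly the set of x ∈ R^E satisfying

```latex
x(E) = |V| - 1, \qquad x(E(S)) \le |S| - 1 \;\; \text{for all } \emptyset \neq S \subsetneq V, \qquad x \ge 0,
```

where E(S) is the set of edges with both endpoints in S and x(F) denotes the sum of x_e over e in F.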

This talk describes work of all the above authors interpreted in collaboration with Lau, Nagarajan and Singh.

Title: The blessing and the curse of the multiplicative updates - connections between in vitro selection and the multiplicative updates of online learning
Speaker: Manfred K. Warmuth, University of California, Santa Cruz

Multiplicative updates multiply the parameters by nonnegative factors. These updates are motivated by a Maximum Entropy Principle, and they are prevalent in evolutionary processes where the parameters are, for example, concentrations of species and the factors are survival rates. The simplest such update is Bayes' rule, and we give an in vitro selection algorithm for RNA strands that implements this rule in the test tube, where each RNA strand represents a different model. One liter of the RNA soup contains approximately 10^20 different strands, so this is a rather high-dimensional implementation of Bayes' rule.
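
As a toy illustration of the point above (ours, not the speaker's): Bayes' rule is a multiplicative update in which each model's weight is multiplied by its likelihood, the analogue of a survival rate, and then renormalized.

```python
def bayes_update(prior, likelihoods):
    """Bayes' rule as a multiplicative update: multiply each model's weight
    by a nonnegative factor (its likelihood) and renormalize."""
    posterior = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Three hypothetical models; the observed data favors the first one.
print(bayes_update([1/3, 1/3, 1/3], [0.9, 0.5, 0.1]))
```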

We investigate multiplicative updates for the purpose of learning online while processing a stream of examples. The "blessing" of these updates is that they learn very fast, because the good parameters grow exponentially. However, their "curse" is that they learn too fast and wipe out parameters too quickly. We describe a number of methods developed in the realm of online learning that ameliorate the curse of these updates. The methods make the algorithm robust against data that changes over time and prevent the currently good parameters from taking over. We also discuss how the curse is circumvented by nature. Some of nature's methods parallel the ones developed in machine learning, but nature also has some additional tricks.
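
One standard remedy from the online-learning literature, sketched here with our own parameter choices: after each multiplicative step, mix a small amount of the uniform distribution back in, so no weight is ever driven irrecoverably to zero and the algorithm can track a changing best model (the Fixed-Share idea of Herbster and Warmuth).

```python
import math

def mw_step_with_mixing(weights, losses, eta=0.5, alpha=0.01):
    """One online step: exponential multiplicative update on the losses,
    renormalize, then mix alpha of the uniform distribution back in so
    that every weight stays bounded away from zero."""
    w = [wi * math.exp(-eta * li) for wi, li in zip(weights, losses)]
    total = sum(w)
    w = [wi / total for wi in w]
    n = len(w)
    return [(1 - alpha) * wi + alpha / n for wi in w]

# Two experts: expert 0 is good early, expert 1 is good later; the mixing
# step lets the weights recover quickly after the switch.
w = [0.5, 0.5]
for t in range(20):
    losses = [0.0, 1.0] if t < 10 else [1.0, 0.0]
    w = mw_step_with_mixing(w, losses)
print(w)
```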

This will be a high-level talk. No background in online learning or biology is required.