Theory Colloquium

The Microsoft Research New England Theory Colloquium focuses on research in the foundational aspects of computing, communication, networks and society. While a wide variety of perspectives have been utilized successfully to improve our understanding of these fields, this colloquium series focuses on research with a mathematical flavor, and aims to bring speakers from a variety of fields to describe their work. The talks are intended to be accessible to young researchers in diverse fields.

The Theory Colloquium is now part of the Microsoft Research Colloquium.

 

Past Speakers

A Model of Focusing in Economic Choice

Adam Szeidl, Berkeley
Wednesday, June 27
4:00 PM – 5:30 PM
No recording available

Description

We present a generally applicable theory of focusing based on the hypothesis that a person focuses more on, and hence overweights, attributes in which her options differ more. Our model predicts that the decision maker is too prone to choose options with concentrated advantages relative to alternatives, but maximizes utility when the advantages and disadvantages of alternatives are equally concentrated. In intertemporal choice, the decision maker exhibits present bias and time inconsistency when, as in lifestyle choices and other widely invoked applications of hyperbolic discounting, the future costs of current misbehavior are distributed over many dates, and the effects of multiple decisions accumulate. But unlike in previous models, (1) present bias is lower when the costs of current misbehavior are less dispersed, helping to explain why individuals respond more to monetary incentives than to health concerns in harmful consumption; and (2) time inconsistency is lower when the ex-ante choice integrates fewer decisions with accumulating effects. In addition, the agent does not fully maximize welfare even when making decisions ex ante: (3) she commits to too much of an activity, such as exercise or work, that is beneficial overall; and (4) makes "future-biased" commitments when, as in preparing for a big event, the benefit of many periods' effort is concentrated in a single goal. URL: http://www.personal.ceu.hu/staff/Adam_Szeidl/papers/focusing.pdf
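As a purely illustrative sketch (not taken from the paper), the choice rule below captures the focusing idea: an attribute is weighted more heavily when the options in the choice set differ more on it. The linear weighting g(delta) = 1 + delta and the example numbers are assumptions for illustration only; the model only requires the weight to be increasing in the attribute's range.

```python
# Illustrative sketch of a focus-weighted choice rule: an attribute gets more
# weight when the options in the choice set differ more on it.
# The linear weighting g(delta) = 1 + delta is an assumed example; the theory
# only requires the weighting function to be increasing.

def focus_weights(options):
    """options: list of attribute vectors (one utility number per attribute)."""
    n_attrs = len(options[0])
    ranges = [max(o[k] for o in options) - min(o[k] for o in options)
              for k in range(n_attrs)]
    return [1.0 + r for r in ranges]          # g(delta), increasing in delta

def focused_choice(options):
    w = focus_weights(options)
    scores = [sum(wk * ok for wk, ok in zip(w, o)) for o in options]
    return max(range(len(options)), key=lambda i: scores[i])

# Option A: one concentrated advantage; option B: the same total utility, spread out.
A = [3.0, 0.0, 0.0]
B = [1.0, 1.0, 1.0]
print(focused_choice([A, B]))   # the focused agent picks A, the concentrated option
```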

Biography
Adam Szeidl is professor of economics at Central European University. His research has focused on the economics of social networks, household consumption, and international trade. Szeidl received his PhD from Harvard University in 2004. The same year he joined the faculty of the Economics Department at UC-Berkeley, where he stayed until 2012, most recently as tenured associate professor. Adam has published in leading economics journals such as the Quarterly Journal of Economics and the Journal of Economic Theory. He received a European Research Council Starting Grant in 2011, an Alfred P. Sloan Research Fellowship in 2010 and has been supported by the National Science Foundation since 2005. Szeidl is a member of the NBER and the CEPR.

Costs and Benefits of Dynamic Trading in a Lemons Market

Andrzej Skrzypacz, Stanford
Wednesday, June 20
4:00 PM – 5:30 PM
View recording here

Description

We study a dynamic market with asymmetric information that induces the lemons problem. We compare the efficiency of the market under different assumptions about the timing of trade. We show that there generally exist conditions under which efficiency can be improved by temporarily closing the market, as compared to continuous trading opportunities. Full paper and most current abstract available here: http://www.stanford.edu/~skrz/Dynamic_Lemons.pdf

Biography
Andrzej (Andy) Skrzypacz is the Theodore J. Kreps Professor of Economics at the Stanford Graduate School of Business and a Professor, by courtesy, in the Department of Economics. His research is in microeconomic theory, specializing in market design, economic dynamics and collusion. He is currently a co-editor of the American Economic Review and an associate editor at the Rand Journal of Economics. He received a Stanford GSB PhD Distinguished Service Award in 2005 and has advised over a dozen Ph.D. dissertations.

Quantum Hamiltonian Complexity 

Umesh Vazirani, Berkeley
Wednesday, June 13
4:00 PM – 5:30 PM
View recording here

Description

We consider three basic questions about quantum mechanics:
1. Do 'typical' quantum states that occur in Nature have succinct (polynomial-size) descriptions?
2. Can quantum systems at room temperature exhibit exponential complexity?
3. Is the scientific method sufficiently powerful to comprehend general quantum systems?
Each of these questions is best studied through a computational lens, as a question about computation. The resulting questions lie at the core of theoretical computer science. The first asks about the structure of solutions to the quantum analog of SAT. The second asks whether there is a quantum analog of the PCP theorem. And the third can be formulated as a question about interactive proof systems with BQP provers. In this talk I will describe recent progress on these issues.

Biography
Umesh Vazirani received his B.Tech. in computer science from MIT in 1981 and his Ph.D. in computer science from U.C. Berkeley in 1986. He is currently professor of computer science at U.C. Berkeley and director of BQIC, the Berkeley Center for Quantum Information and Computation. Prof. Vazirani is a theoretician with broad interests in novel models of computation. He has done seminal work in quantum computation and on the computational foundations of randomness. He is the author of two books: "An Introduction to Computational Learning Theory" (with Michael Kearns) and "Algorithms" (with Sanjoy Dasgupta and Christos Papadimitriou).

 

Lattice-Based Cryptography

Oded Regev, Tel Aviv University and CNRS, ENS, Paris
Wednesday, June 6
4:00 PM – 5:30 PM
View recording here

Description

We will give a survey of recent work on lattice-based cryptography, mainly focusing on the so-called Learning with Errors (LWE) problem. This problem has turned out to be an amazingly versatile basis for cryptographic constructions, with tens of applications, including the recent celebrated work on fully homomorphic encryption. In addition to applications, we will also mention very recent work providing a better understanding of the security of the problem. The talk does not require any prior knowledge in cryptography or in lattices.
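For readers unfamiliar with LWE, the short sketch below generates Learning with Errors samples in the standard form (a, b = <a, s> + e mod q). The toy parameters are for illustration only; real constructions use far larger dimensions and carefully chosen error distributions.

```python
import random

# Toy illustration of the Learning with Errors (LWE) problem: given samples
# (a, b = <a, s> + e mod q) with a small random error e, recover the secret s.
# Parameters here are far too small for security; they are illustrative only.

q, n = 97, 8                                   # modulus and dimension (toy values)
secret = [random.randrange(q) for _ in range(n)]

def lwe_sample(s, q, noise=2):
    a = [random.randrange(q) for _ in range(len(s))]
    e = random.randint(-noise, noise)          # small error drawn near zero
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

samples = [lwe_sample(secret, q) for _ in range(20)]
# An attacker sees only `samples`; without the error term e this would be plain
# linear algebra mod q, but the noise is what makes recovering s hard.
print(samples[0])
```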

 

Biography
Oded Regev is a professor at Tel Aviv University and a directeur de recherche at the École Normale Supérieure, Paris, under the CNRS. He received his Ph.D. in Computer Science from Tel Aviv University in 2001. He is the recipient of the Krill Prize for Excellence, awarded by the Wolf Foundation in 2005, as well as best paper awards at STOC '03 and Eurocrypt '06. He was awarded a European Research Council (ERC) Starting Grant in 2008. His main research areas include theoretical computer science, cryptography, quantum computation, and complexity theory. A main focus of his research is in the area of lattice-based cryptography, where he introduced several key concepts, including the "learning with errors" problem and the use of Gaussian measures.

Calibrated Incentive Contracts

Sylvain Chassang, Princeton
Wednesday, May 30
4:00 PM – 5:30 PM
No recording available

Description

This paper studies a dynamic agency problem which includes limited liability, moral hazard and adverse selection. The paper develops a robust approach to dynamic contracting based on calibrating the payoffs that would have been delivered by simple benchmark contracts that are attractive but infeasible, due to limited liability constraints. The resulting dynamic contracts are detail-free and satisfy robust performance bounds independently of the underlying process for returns, which need not be i.i.d. or even ergodic.

Biography
Sylvain Chassang is a Professor of Economics at Princeton University. He received his Ph.D. from MIT and is the recipient of a Sloan Fellowship. His interests include game theory and development economics.

Low-distortion Inference of Latent Similarities from a Multiplex Social Network

David Kempe, USC
Wednesday, May 23
4:00 PM – 5:30 PM
No recording available

Description

What can a social network tell us about the underlying latent "social structure," the way in which individuals are similar or dissimilar? Much of social network analysis is, implicitly or explicitly, predicated on the assumption that individuals tend to be more similar to their friends than to strangers. Having explicit access to similarity information instead of merely the noisy signal provided by the presence or absence of edges could improve analysis significantly. We study the following natural question: Given a social network - reflecting the underlying social distances between its nodes - how accurately can we reconstruct the social structure?

It is tempting to model similarities and dissimilarities as distances, so that the social structure is a metric space. However, observed social networks are usually multiplex, in the sense that different edges originate from similarity in one or more among a number of different categories, such as geographical proximity, professional interactions, kinship, or hobbies. Since close proximity in even one category makes the presence of edges much more likely, an observed social network is more accurately modeled as a union of separate networks. In general, it is a priori not known which network a given edge comes from. While generative models based on a single metric space have been analyzed previously, a union of several networks individually generated from metrics is structurally very different from (and more complex than) networks generated from just one metric.

We begin to address this reconstruction problem formally. The latent "social structure" consists of several metric spaces. Each metric space gives rise to a "distance-based random graph," in which edges are created according to a distribution that depends on the underlying metric space and makes long-range edges less likely than short ones. For a concrete model, we consider Kleinberg's small-world model and some variations thereof. The observed social network is the union of these graphs. All edges are unlabeled, in the sense that the existence of an edge does not reveal which random graph it comes from. Our main result is a near-linear time algorithm which reconstructs from this unlabeled union each of the individual metrics with provably low distortion.

(Joint work with Ittai Abraham, Shiri Chechik, and Aleksandrs Slivkins.)
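As a rough illustration of the setup (not the paper's exact model), the sketch below generates a multiplex network as the union of two distance-based random graphs, each built from its own latent metric, with edge probability decaying with distance in the spirit of Kleinberg's small-world model. The ring metric, the 1/d edge probability, and the parameters are assumptions for illustration.

```python
import random

# Sketch of a multiplex network: the union of two distance-based random graphs,
# each generated from its own latent metric (here, positions on a ring).
# The edge probability ~ 1/d is an illustrative choice in the spirit of
# Kleinberg's small-world model, not the exact model from the paper.

def ring_distance(i, j, n):
    d = abs(i - j)
    return min(d, n - d)

def distance_based_graph(positions, n_points, scale=1.0):
    edges = set()
    for i in range(n_points):
        for j in range(i + 1, n_points):
            d = ring_distance(positions[i], positions[j], n_points)
            if random.random() < min(1.0, scale / max(d, 1)):
                edges.add((i, j))
    return edges

n = 200
geo = list(range(n))                       # latent metric 1: geography
prof = random.sample(range(n), n)          # latent metric 2: professional proximity
observed = distance_based_graph(geo, n) | distance_based_graph(prof, n)
# `observed` is the unlabeled union; the reconstruction problem is to recover
# (a low-distortion version of) each latent metric from it.
print(len(observed), "edges in the observed multiplex network")
```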

Biography
David Kempe received his Ph.D. from Cornell University in 2003, and has been on the faculty in the computer science department at USC since the Fall of 2004, where he is currently an Associate Professor. During the Spring of 2012, he is visiting Microsoft Research, New England. His primary research interests are in computer science theory and the design and analysis of algorithms, with a particular emphasis on social networks, algorithms for feature selection, and game-theoretic and pricing questions. He is a recipient of the NSF CAREER award, the VSoE Junior Research Award, the ONR Young Investigator Award, and a Sloan Fellowship, in addition to several USC Mentoring awards.

How to Compute Blindfolded: New Developments in Fully Homomorphic Encryption

Vinod Vaikuntanathan, Toronto
Wednesday, April 25, 2012
4:00 PM – 5:30 PM
No recording available

Description

A fully homomorphic encryption scheme enables computation of arbitrary functions on encrypted data. Long regarded as cryptography's prized "holy grail," a concrete construction of such an encryption scheme remained elusive for over three decades. The last three years have witnessed a number of constructions of fully homomorphic encryption involving novel mathematical techniques based on integer lattices, as well as a number of exciting applications. I will give an overview of these developments and provide a glimpse of the research directions that lie ahead.

Biography
Vinod Vaikuntanathan is an assistant professor of Computer Science at the University of Toronto. In previous avatars, he was a Researcher at Microsoft Redmond and a Josef Raviv postdoctoral fellow at IBM T.J. Watson. He received his Ph.D. from MIT, where he was a recipient of a George M. Sprowls doctoral dissertation award.

Identity-Based Encryption: Past, Present, and Future

Dan Boneh, Stanford
Wednesday, May 2
4:00 PM – 5:30 PM
No recording available

Description

Identity-based encryption (IBE) is a public-key encryption system in which any string can function as a public key. IBE systems imply chosen-ciphertext-secure encryption, searching on encrypted data, secure signatures, and many other cryptographic primitives. Over the past decade three families of IBE constructions were developed: one family uses pairing-friendly elliptic curves, another uses arithmetic modulo composites, and a third uses integer lattices. This talk will survey some recent IBE constructions and highlight a few central open problems in this area.

Biography
Dan Boneh is a Professor of Computer Science and Electrical Engineering at Stanford University. He is a well-known researcher in the areas of applied cryptography and computer security.

Expressible Inspections

Eran Shmaya, Northwestern
Wednesday, April 18, 2012
4:00 PM – 5:30 PM
View recording here

Description

A decision maker needs predictions about the realization of a repeated experiment in each period. An expert provides a theory that, conditional on each finite history of outcomes, supplies a probabilistic prediction about the next outcome. However, there may be false experts without any knowledge of the data-generating process who deliver theories strategically. Hence, empirical tests for predictions are necessary. A test is manipulable if a false expert can pass the test with a high probability. Like contracts, tests have to be computable to be implemented. Considering only computable tests, we show that there is a test which passes true experts with a high probability yet is not manipulable by any computable strategy. In particular, the constructed test is both prequential and future-independent. On the other hand, any computable test is manipulable by a strategy that is computable relative to the halting problem. Our conclusion overturns earlier results that prequential or future-independent tests are manipulable, and shows that computability considerations have significant effects in these problems.

Biography
Eran Shmaya joined the Managerial Economics and Decision Sciences department at the Kellogg School of Management in 2008. Professor Shmaya graduated from Tel Aviv University in 2007. Professor Shmaya's research areas are game theory, probability and information theory. He is currently working on infinite games. He is also interested in applications of game theory in the natural sciences.

Illegitimi non carborundum

Ron Rivest, MIT
Wednesday, April 11, 2012
4:00 PM – 5:30 PM
No recording available

Description
Motivated by the increasing sophistication of recent Advanced Persistent Threats (APTs), we introduce the game of "stealthy takeovers" (aka "FlipIt"). FlipIt is a two-player game (between an attacker and a defender) where either player can move at any given time, but cannot determine the state of the game until he moves himself. This stealthy game models various security situations in which a resource can be "silently corrupted" by an adversary, but then "disinfected" later by the defender. Interesting features of this game include:

* Aggressive play by one player can motivate the other player to drop out (not play at all).
* Optimal play by the defender generally requires tolerance of some amount of corruption by the adversary.

We analyze several variants of the game, and show applications of these ideas in modeling various computer security scenarios, including password reset, cloud audit, and machine patching.
(The work discussed here is joint with Marten van Dijk, Ari Juels, and Alina Oprea of RSA Labs.)
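For intuition, here is a minimal simulation of a FlipIt-style dynamic: each player flips control at times of their choosing, pays a per-move cost, and scores the fraction of time in control. The periodic-defender / random-attacker strategies and the cost values are illustrative assumptions, not the strategies analyzed in the paper.

```python
import random

# Toy simulation of FlipIt ("stealthy takeovers"): either player may move at any
# time, a move silently grabs control of the resource, and neither player sees
# the current state without moving. Strategies and costs below are illustrative.

def simulate(horizon=1000.0, defender_period=10.0, attacker_rate=0.1,
             move_cost=(1.0, 1.0)):
    # Defender moves periodically; attacker moves at exponentially spaced random times.
    t, owner = 0.0, 0                  # resource starts under defender control
    control_time = [0.0, 0.0]
    moves = [0, 0]
    next_def = defender_period
    next_att = random.expovariate(attacker_rate)
    while t < horizon:
        nxt = min(next_def, next_att, horizon)
        control_time[owner] += nxt - t
        t = nxt
        if t >= horizon:
            break
        if next_def <= next_att:
            owner = 0; moves[0] += 1; next_def += defender_period
        else:
            owner = 1; moves[1] += 1; next_att = t + random.expovariate(attacker_rate)
    # Payoff: fraction of time in control minus the per-move cost rate.
    return [control_time[p] / horizon - move_cost[p] * moves[p] / horizon
            for p in (0, 1)]

print(simulate())   # [defender payoff, attacker payoff] for one run
```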

Biography
Professor Rivest is the Viterbi Professor of Computer Science in MIT's Department of Electrical Engineering and Computer Science, a member of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), a member of that lab's Theory of Computation Group and a leader of its Cryptography and Information Security Group. He received a B.A. in Mathematics from Yale University in 1969, and a Ph.D. in Computer Science from Stanford University in 1974. His research interests include cryptography, computer and network security, voting systems, and algorithms. Rivest is a co-inventor of the RSA public-key cryptosystem and has extensive experience in cryptographic design and cryptanalysis. He is also a founder of RSA Data Security and of Verisign. He is a member of the National Academy of Engineering and the National Academy of Sciences, and is a Fellow of the Association for Computing Machinery, the International Association for Cryptologic Research, and the American Academy of Arts and Sciences. He is also on the Advisory Board for the Electronic Privacy Information Center. Together with Adi Shamir and Len Adleman, he has received the 2002 ACM Turing Award and the 2009 NEC C&C Award.

Reputation for Quality

Simon Board, UCLA
Wednesday, April 4, 2012
4:00 PM – 5:30 PM
No recording available

Description
We propose a new model of firm reputation where product quality is persistent and depends stochastically on the firm’s past investments. Reputation is then modeled directly as the market belief about quality. We analyze how investment incentives depend on the firm’s reputation and derive implications for reputational dynamics.
Reputational incentives depend on the specification of market learning. When consumers learn about quality through perfect good news signals, incentives decrease in reputation and there is a unique work-shirk equilibrium with ergodic dynamics. When learning is through perfect bad news signals, incentives increase in reputation and there is a continuum of shirk-work equilibria with divergent dynamics. For a large class of imperfect Poisson learning processes and low investment costs, we show there exists a work-shirk equilibrium with path-dependent dynamics. We also derive conditions under which this equilibrium is essentially unique.
Papers:
http://www.econ.ucla.edu/sboard/papers/reputation.pdf
http://www.econ.ucla.edu/sboard/papers/reputation_exit.pdf

Biography
Simon Board is an assistant professor of Economics at UCLA. He received his Ph.D. at Stanford Business School, and has taught at the University of Toronto and UCLA. Simon's research interests include pricing with strategic customers, relational contracts and models of reputation. He teaches contract theory in the PhD program and e-commerce to undergraduates.

Local Computation Algorithms

Ronitt Rubinfeld, MIT
Wednesday, March 28, 2012
4:00 PM – 5:30 PM
No recording available

Description
Consider a setting in which the inputs to and outputs from a computational problem are so large that there is no time to read them in their entirety. However, if one is only interested in small parts of the output at any given time, is it really necessary to solve the entire computational problem? Is it even necessary to view the whole input? We propose a model of local computation algorithms which, for a given input, supports queries by a user to values of specified bits of a legal output. The goal is to design local computation algorithms in such a way that very little of the input needs to be seen in order to determine the value of any single bit of the output. In particular, the runtime and space requirements of the local computation algorithms should be sub-linear in the size of the input. Surprisingly, there are nontrivial computational problems where such efficiency is achievable. Local computation algorithms are intended to distill the common features of several concepts that have previously appeared in various algorithmic subfields, including local distributed computation, local algorithms, locally decodable codes, and local reconstruction. In this talk, we survey some of the recent problems, including maximal independent set and hypergraph coloring, for which polylogarithmic time and space local computation algorithms have been given.
Joint work with Noga Alon, Gil Tamir, Shai Vardi and Ning Xie.
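To convey the flavor of a local computation algorithm, the sketch below answers the query "is vertex v in the maximal independent set?" by simulating greedy MIS on a shared random vertex ordering, recursing only into neighbors of smaller rank, so each query touches only a local neighborhood of the input. This is a simplified illustration of the idea, not the algorithm from the paper, which bounds the exploration to get polylogarithmic time and space.

```python
import random
from functools import lru_cache

# Local-computation sketch for maximal independent set (MIS): answer the query
# "is v in the MIS?" by simulating greedy MIS over a fixed random ordering,
# recursing only into neighbors with smaller rank. Only a local portion of the
# graph is touched per query. Simplified illustration only; the actual local
# computation algorithms bound this exploration to stay polylogarithmic.

def make_mis_oracle(adj, seed=0):
    rng = random.Random(seed)
    rank = {v: rng.random() for v in adj}       # shared random ordering

    @lru_cache(maxsize=None)
    def in_mis(v):
        # v joins the greedy MIS iff no earlier-ranked neighbor already joined.
        return all(not in_mis(u) for u in adj[v] if rank[u] < rank[v])

    return in_mis

# Example: a 6-cycle, queried one vertex at a time.
cycle = {i: ((i - 1) % 6, (i + 1) % 6) for i in range(6)}
query = make_mis_oracle(cycle)
print([v for v in cycle if query(v)])   # a maximal independent set of the cycle
```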

Biography
Ronitt Rubinfeld received her Bachelor's degree from the University of Michigan and her PhD from the University of California, Berkeley in 1991. She then held postdoctoral researcher positions at DIMACS/Princeton University and at the Hebrew University in Jerusalem. In 1992, she joined the faculty of the Computer Science Department at Cornell University, where she was an ONR Young Investigator, a Sloan Research Fellow, the 1995 Cornell Association for Computer Science Undergraduates Faculty of the Year, and a recipient of the Cornell College of Engineering Teaching Award. From 1999 to 2003, Ronitt was a Senior Research Scientist at NEC Research Laboratories. Since 2004, she has been on the faculty at MIT, in the Department of Electrical Engineering and Computer Science. In 2008, Ronitt also joined the School of Computer Science at Tel Aviv University. Ronitt's research focuses on algorithms for massive data sets, including sublinear-time approximation algorithms, property testing algorithms and algorithms that estimate properties of probability distributions over large domains.

Aligning Incentives in Shared-Use Environments

Lisa Fleischer, Dartmouth
Wednesday, March 21, 2012
4:00 PM – 5:30 PM
No recording available

Description
We study the design of information technology in environments where multiple users make their own utility maximizing decisions. We propose models and algorithms to incentivize users to achieve system-wide objectives. This talk will define three distinct problems and discuss our approach and results for each of these problems (as time permits).
1) Ensuring efficient use of traffic networks:
We analyze competitive traffic routing models that incorporate travel time.
We show that a network owner can reduce available capacity so that a competitive equilibrium in the reduced network is no worse than a small constant times the optimal solution in the original network, under two natural measures of optimality: the time by which all traffic reaches the destination, and the average amount of time it takes traffic to reach the destination.
2) Purchasing the use of personal data to compute population statistics:
An analyst wants to estimate statistics of sensitive data. When his data are used, a user may suffer a cost --- a loss in expected utility over future events --- in which case he should be compensated. The analyst wishes to get an accurate estimate, while the users want to maximize their utility. Since a user's costs and sensitive data may be correlated, it is important to protect the privacy of both data and cost. While there is a natural tradeoff among privacy, accuracy, and cost, we show it is possible to design a (Bayesian) incentive-compatible mechanism that estimates statistics accurately without compromising the privacy of costs or data.
3) Encouraging customers to help sell a product by rewarding referrals:
To encourage potential customers to buy early and to give referrals to influential people, a multi-level mechanism rewards buyers for making both direct and indirect referrals, where an indirect referral is one linked to the buyer through a chain of direct referrals.
Rewarding indirect referrals can make the referral/reward system vulnerable to sybil attacks, in which profit maximizers create several replicas in order to maximize their rewards. We give a simple mechanism for which sybil attacks are not profitable.

Biography
Lisa Fleischer is an Associate Professor of Computer Science at Dartmouth. Her research focuses on algorithm design and optimization, in particular on problems in networks, scheduling, and game theory. Lisa received her PhD from Cornell University, and has held regular positions at IBM Research, Carnegie Mellon University, and Columbia University. She is a recipient of an NSF Career Award, the 2003 Mathematical Optimization Society's Fulkerson Prize, and the 1997 INFORMS Nicholson Prize. She has served on numerous program committees and is or has been an editor for the journals Mathematics of Operations Research, SIAM Journal on Discrete Math, Operations Research, and Operations Research Letters.

Learning to Behave by Reading

Regina Barzilay, MIT
Wednesday, March 14, 2012
4:00 PM – 5:30 PM
No recording available

Description
In this talk, I will address the problem of grounding linguistic analysis in control applications, such as game playing and robot navigation. We assume access to natural language documents that describe the desired behavior of a control algorithm (e.g., game strategy guides). Our goal is to demonstrate that knowledge automatically extracted from such documents can dramatically improve performance of the target application.
First, I will present a reinforcement learning algorithm for learning to map natural language instructions to executable actions. This technique has enabled automation of tasks that until now have required human participation --- for example, automatically configuring software by consulting how-to guides. Next, I will present a Monte-Carlo search algorithm for game playing that incorporates information from game strategy guides. In this framework, the task of text interpretation is formulated as a probabilistic model that is trained based on feedback from Monte-Carlo search. When applied to the Civilization strategy game, a language-empowered player outperforms its traditional counterpart by a significant margin.

Biography
Regina Barzilay is an associate professor in the Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory. Her research interests are in natural language processing and machine learning. She is a recipient of various awards including the NSF Career Award, Microsoft Faculty Fellowship, the MIT Technology Review TR-35 Award, and Best Paper Awards in top NLP conferences. She is a PC co-chair for EMNLP 2011 and associate editor of Journal of Artificial Intelligence Research (JAIR).

Overbooking 

Jeff Ely, Northwestern
Wednesday, December 14, 2011
4:00 PM – 5:30 PM
No recording available

Description
We consider optimal pricing policies for airlines when passengers are uncertain at the time of ticketing of their eventual realized willingness to pay for air travel. Auctions at the time of departure efficiently allocate space, and a profit-maximizing airline can capitalize on these gains by overbooking flights and repurchasing excess tickets from those passengers whose realized value is low. Nevertheless, profit maximization entails distortions away from the efficient allocation. In order to encourage early booking, passengers who purchase late are disadvantaged. In order to capture the information rents of passengers with high expected values, ticket repurchases at the time of departure are at a subsidized price, sometimes leading to unused capacity.
A joint paper with Daniel Garrett and Toomas Hinnosaar.

Biography
Jeff Ely is the Charles E. and Emma H. Morrison Professor of Economics at Northwestern University, co-editor of Theoretical Economics, and an accomplished latte artist.

Which Networks Are Least Susceptible to Cascading Failures? 

Jon Kleinberg, Cornell
Thursday, December 8, 2011
4:00 PM – 5:30 PM
View recording here

Description
The spread of a cascading failure through a network is an issue that comes up in many domains: in the contagious failures that spread among financial institutions during a financial crisis, through nodes of a power grid or communication network during a widespread outage, or through a human population during the outbreak of an epidemic disease. Here we study a natural model of threshold contagion: each node is assigned a numerical threshold drawn independently from an underlying distribution, and it will fail as soon as its number of failed neighbors reaches this threshold. Despite the simplicity of the formulation, it has been very challenging to analyze the failure processes that arise from arbitrary threshold distributions; even qualitative questions concerning which graphs are the most resilient to cascading failures in these models have been difficult to resolve.
We develop a set of new techniques for analyzing the failure probabilities of nodes in arbitrary graphs under this model, and we compare different graphs according to the maximum failure probability of any node in the graph when thresholds are drawn from a given distribution. We find that the space of threshold distributions has a surprisingly rich structure when we consider the risk that these thresholds induce on different graphs: small shifts in the distribution of the thresholds can favor graphs with a maximally clustered structure (i.e., cliques), those with a maximally branching structure (trees), or even intermediate hybrids.
This is joint work with Larry Blume, David Easley, Bobby Kleinberg, and Eva Tardos.
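For concreteness, here is a minimal simulation of the threshold-contagion model described above: each node draws a threshold from a given distribution and fails once that many of its neighbors have failed. The seeding rule, the example graphs, and the threshold distribution are illustrative assumptions, not taken from the paper.

```python
import random
from collections import deque

# Threshold contagion sketch: node v draws a threshold t_v from a distribution
# and fails as soon as t_v of its neighbors have failed. The seeding rule and
# graphs are illustrative; the talk compares graphs by the worst-case node
# failure probability under a given threshold distribution.

def cascade(adj, threshold_sampler, seed_prob=0.05, rng=random):
    thresholds = {v: threshold_sampler(rng) for v in adj}
    failed = {v for v in adj if thresholds[v] == 0 or rng.random() < seed_prob}
    failed_nbrs = {v: 0 for v in adj}
    queue = deque(failed)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in failed:
                continue
            failed_nbrs[v] += 1
            if failed_nbrs[v] >= thresholds[v]:
                failed.add(v)
                queue.append(v)
    return failed

# A clique versus a path on the same number of nodes, same threshold distribution.
n = 30
clique = {v: [u for u in range(n) if u != v] for v in range(n)}
path = {v: [u for u in (v - 1, v + 1) if 0 <= u < n] for v in range(n)}
sampler = lambda rng: rng.choice([1, 2, 3])
print(len(cascade(clique, sampler)), len(cascade(path, sampler)))
```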

Biography
Jon Kleinberg is the Tisch University Professor in the Computer Science Department at Cornell University. His research focuses on issues at the interface of networks and information, with an emphasis on the social and information networks that underpin the Web and other on-line media. He is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences, and serves on the Computer and Information Science and Engineering (CISE) Advisory Committee of the National Science Foundation, and the Computer Science and Telecommunications Board (CSTB) of the National Research Council. He is the recipient of MacArthur, Packard, and Sloan Foundation Fellowships, as well as awards including the Nevanlinna Prize from the International Mathematical Union and the ACM-Infosys Foundation Award in the Computing Sciences.

PCPs and Expander Graphs 

Irit Dinur, Weizmann
Wednesday, November 30, 2011
4:00 PM – 5:30 PM
View recording here

Description
A probabilistically checkable proof (PCP) is a special format for writing proofs that is very robust. In this format, a proof of a false theorem is guaranteed to have so many bugs that it can be checked by reading a constant number of random proof bits.
The celebrated PCP theorem says that every NP language has a robust "PCP" proof.
In the talk we will explain how to construct a PCP by taking any standard NP proof and then routing it through an expander graph (i.e., a graph that is very well-connected).
We will also describe a complementary result that shows how in some restricted sense, every construction of a PCP must be based on an expander.
No prior knowledge will be assumed.
Based in part on joint work with Tali Kaufman.

Biography
Irit Dinur received her Ph.D. in 2001 from Tel Aviv University. Following a postdoc at the Institute for Advanced Study, NEC, and at the Miller Institute in Berkeley, California, she joined the faculty of the Hebrew University in Jerusalem. In 2007 she became an associate professor at the Weizmann Institute. Irit is interested in theoretical computer science and in combinatorics. In particular she is interested in probabilistically checkable proofs, hardness of approximation, and robustness of computation.

Faces of Facebook: Privacy in the Age of Augmented Reality 

Alessandro Acquisti, CMU
Wednesday, November 30, 2011
4:00 PM – 5:30 PM
No recording available

Description
We investigate the feasibility of combining publicly available Web 2.0 data with off-the-shelf face recognition software for the purpose of large-scale, automated individual re-identification. Two experiments demonstrated the ability of identifying individuals online (on a dating site where individuals protect their identities by using pseudonyms) and offline (in a public space), based on photos made publicly available on a social network site. A third proof-of-concept experiment illustrated the ability of inferring individuals' personal or sensitive information (their interests and Social Security numbers) from their faces, by combining face recognition, data mining algorithms, and statistical re-identification techniques. The results highlight the implications of the inevitable convergence of face recognition technology and increasing online self-disclosures, and the emergence of "personally predictable" information. They raise questions about the future of privacy in an "augmented" reality world in which online and offline data will seamlessly blend.
Joint work: Alessandro Acquisti, Ralph Gross, and Fred Stutzman

Biography
Alessandro Acquisti is an Associate Professor at CMU’s Heinz College. Alessandro researches the economics of privacy. His manuscripts have spearheaded the application of behavioral economics to the analysis of privacy decision making, and the study of privacy risks and disclosure behavior in online social networks. Alessandro has been the recipient of the PET Award for Outstanding Research in Privacy Enhancing Technologies, the IBM Best Academic Privacy Faculty Award, and multiple best paper awards. Alessandro holds a PhD from UC Berkeley, and held visiting positions at the Universities of Rome, Paris, Freiburg, and Harvard.

Providing performance guarantees without assuming rationality of others 

Michal Feldman, Hebrew U of Jerusalem
Wednesday, November 9, 2011
4:00 PM – 5:30 PM
No recording available

Description

We study performance guarantees that can be provided to decision makers in a game, without making any assumptions on the rationality of the other players. The first setting is a symmetric two-person repeated game, where one player -- our decision maker -- never observes a single payoff (but observes the opponent's play), while her opponent has full knowledge of the game. Naturally, the decision maker should attempt to mimic the opponent's play. However, one has to be careful about how one mimics opponents who may know that they are being mimicked. As we show, a good copycat can reap tremendous rewards without ever observing a single payoff, while a poor copycat may perform worse than making random decisions. We then extend the model to a multi-player setting, in which only a single player is informed. We show that a "master" who controls the uninformed players can guarantee for each one of them (simultaneously) a similar performance to that of the informed player, even if the latter gets to choose the payoff matrix after the fact. Key to our analysis is the study of multi-player games, and the guarantees that a master player who plays on behalf of a set of players can offer them, without making any assumptions on the rationality of the other players.
Based on joint work with Yossi Azar, Uri Feige, Adam Kalai and Moshe Tennenholtz.

Biography
Michal Feldman received her Ph.D. in 2005 from the University of California at Berkeley. Following a postdoc at the Hebrew University and at Tel Aviv University, she joined the faculty of the School of Business Administration at the Hebrew University in 2007. Her research focuses on the intersection of game theory, computer science and microeconomics, a field often termed "Algorithmic Game Theory". She was recently awarded a Marie Curie grant, and is currently a visiting scholar at the Center for Research on Computation and Society (CRCS) at Harvard University.

Axioms and Decision Theory 

Bart Lipman, Boston University
Wednesday, November 2, 2011
4:00 PM – 5:30 PM
No recording available

Description

In this relatively informal talk, I will introduce axiomatic decision theory with the goal of answering the following questions:
1. What is decision theory and why should we care?
2. What is the purpose of an axiomatic characterization of a model of decision making?
I will illustrate by discussing some characterization theorems on decision making under uncertainty and decision making under temptation.

Biography
Bart Lipman is a professor of economics at Boston University. He was previously on the faculty at Carnegie Mellon, Queen’s University, University of Western Ontario, and University of Wisconsin and received his Ph.D. from University of Michigan in 1985. His primary research interests lie in decision theory, game theory, and mechanism design. Most recently, his primary work focus has been on decision-theoretic models of temptation and mechanism design with verification. He was a founding co-editor of the journal Theoretical Economics and is a Fellow of the Econometric Society. 

Tradeoffs between fundamental complexity measures of propositional proofs 

Eli Ben-Sasson, Technion
Wednesday, October 26, 2011
4:00 PM – 5:30 PM
View recording here

Description

What kind of theorems are easy to state yet hard to prove?
This question motivates the study of propositional proof complexity. In this introductory talk I will describe the three fundamental proof-complexity measures of proof length, width, and space. These measures correspond to different aspects of the "hardness" of proving a given theorem. Then I will discuss the surprising relationships between these three measures and conclude with accessible and intriguing open questions in this area.
Based on joint work with Jakob Nordstrom.

Biography
Eli Ben-Sasson is an associate professor of computer science at Technion, Israel. His research is within theoretical computer science with recent focus on the generation and verification of automated proofs, the analysis of locally testable error correcting codes and the study of randomness arising from algebraically simple sources.

On queues and numbers 

Eitan Bachmat, Ben-Gurion U
Thursday, October 20, 2011
4:00 PM – 5:30 PM
View recording here

Description

We will consider the problem of managing a mini-market with two checkout counters, one of them serving as an express lane. For many of the standard job size distributions, such as the exponential distribution, the problem of managing the mini-market is rather nasty. However, for some distributions, the management problem becomes very easy. We will provide some examples coming from the Taniyama-Shimura conjecture and explain the general connection between mini-market queues and number theory, coming from a very basic computation of Riemann. If time permits we will discuss the problem of positivity of L-functions that comes up naturally in this context and describe the relation of super-market queues with Lorentzian geometry. The talk is self-contained; in particular, no knowledge of managing a mini-market is necessary.

Simultaneous Selection Problem, and Optimal Team Selection (joint projects with Rakesh Vohra) 

Wojciech Olszewski, Northwestern
Wednesday, October 12, 2011
4:00 PM – 5:30 PM
No recording available

Description

We study two classes of discrete optimization problems. First, we generalize the problem of selecting the set of colleges to apply to, which has been studied by Chade and Smith. This generalization allows for different applications, e.g., the problem of simultaneous purchase of items that are to be consumed sequentially. We show that the steepest ascent algorithm selects an optimal set under quite general conditions, including the application to simultaneous purchase of items for sequential consumption. However, the simultaneous selection problem (SSP) is in general NP-hard: we show that, without any extra assumption on the objective function, (SSP) is NP-hard. We conjecture the existence of an algorithm which selects in polynomial time a set S that gives almost 2/3 of the optimum. This share cannot, however, be achieved by using steepest ascent. We show by an example that for any ɛ>0, steepest ascent may select a set S that gives less than an ɛ share of the optimum.
In the second project, we are concerned with the question of selecting an optimal team of agents from some grand set of available agents. Agents can produce individually, but can also affect one another's production. Applications include mechanics working together in a garage, and researchers writing joint papers. In a class of problems, including the former application, an adaptation of the Ford-Fulkerson algorithm selects an optimal team. We also provide some comparative statics results for this class of problems. We also have some partial results for another class of problems, including the latter application.
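For readers unfamiliar with steepest ascent in this context, the sketch below repeatedly adds the item whose addition most increases the objective, stopping when no addition helps. The college-application style objective (in the spirit of Chade and Smith: attend the best school that admits you, pay a per-application cost) and all the numbers are illustrative assumptions, not the paper's specification.

```python
# Steepest-ascent (greedy marginal-gain) sketch for a simultaneous selection
# problem, illustrated with a college-application style objective in the spirit
# of Chade and Smith. The objective and numbers are assumptions for illustration.

def portfolio_value(S, value, prob, cost):
    # Expected value of attending the best school in S that admits you,
    # minus application costs.
    ranked = sorted(S, key=lambda i: value[i], reverse=True)
    expected, p_all_reject = 0.0, 1.0
    for i in ranked:
        expected += p_all_reject * prob[i] * value[i]
        p_all_reject *= 1.0 - prob[i]
    return expected - sum(cost[i] for i in S)

def steepest_ascent(items, f):
    S = set()
    while True:
        gains = {i: f(S | {i}) - f(S) for i in items if i not in S}
        best = max(gains, key=gains.get, default=None)
        if best is None or gains[best] <= 0:
            return S
        S.add(best)

value = [10.0, 8.0, 5.0, 3.0]          # payoff of attending each school
prob = [0.2, 0.4, 0.7, 0.9]            # admission probabilities
cost = [0.5, 0.5, 0.5, 0.5]            # application costs
f = lambda S: portfolio_value(S, value, prob, cost)
print(sorted(steepest_ascent(range(4), f)))
```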

Biography
Wojciech Olszewski is a Professor of Economics at the Weinberg College of Arts and Sciences (WCAS) at Northwestern University. He is currently a Visiting Professor at Harvard. He received his Ph.D. from Princeton.

The mathematical challenge of large networks

László Lovász, Eötvös Loránd University
Wednesday, September 7, 2011
4:00 PM – 5:30 PM
View recording here

Description

It is becoming more and more clear that many of the most exciting structures of our world can be described as large networks. The internet is perhaps the foremost example, modeled by different networks (the physical internet, a network of devices; the world wide web, a network of webpages and hyperlinks). Various social networks, several of them created by the internet, are studied by sociologists, historians, epidemiologists, and economists. Huge networks arise in biology (from ecological networks to the brain), physics, and engineering.
These networks pose exciting and challenging problems for the mathematician: these networks are never completely known, and indeed often not even completely defined.
At any time, we can only have partial information about them, through sampling locally, or observing the behavior of some global process.
The approach to the study of such networks includes finding procedures (usually randomized) that produce networks with similar behavior; this relates to the theory of random graphs and randomly growing graphs, and the theory of "Property Testing" in computer science. The methods involve defining a "distance" between two large graphs, defining when a growing sequence of graphs is convergent, and assigning limit objects to such sequences.
The limit theory allows us to answer some basic questions both in theory and practice. For example, in extremal graph theory: Which inequalities between subgraph densities are valid? Can these be proved using just Cauchy-Schwarz? Is there always an extremal graph, and what is the possible structure of these?
The limit theory has been developed by Borgs, Chayes, Lovasz, Sos, Szegedy and Vesztergombi in the dense case, and by Aldous, Benjamini, Schramm, Elek and others in the sparse case.

Biography
I am a Professor in the Department of Computer Science of the Eötvös Loránd University in Budapest, Hungary. My research topics: Combinatorial optimization, algorithms, complexity, graph theory, random walks.

Geometry and Theoretical Computer Science

Avi Wigderson, Institute for Advanced Study
Wednesday, August 31, 2011
4:00 PM – 5:30 PM
View recording here

Description

Yes, there are many (and growing) connections between this very ancient branch of mathematics and that very new one!
In this talk I will describe some recent works on several different basic geometric problems which naturally arose from, and were solved by, intuitions and methods from theoretical computer science. These include periodic foams, Euclidean sections, and line-point incidences.
No special background is assumed.

Biography
Avi Wigderson is a Professor in the School of Mathematics at the Institute for Advanced Study, Princeton. He was with the Computer Science Institute, Hebrew University, Jerusalem from 1986 until 2003. He received his Ph.D. in Computer Science in 1983 from Princeton University. His main research interests are in complexity theory, algorithms, randomness, and cryptography. Prof. Wigderson has received the Nevanlinna Prize, the Yoram Ben-Porat Presidential Prize for Outstanding Researcher, the Conant Prize, and the Gödel Prize.

Concentration phenomena in abstract ergodic theory

Tim Austin, Clay Mathematics Institute
Wednesday, August 24, 2011
4:00 PM – 5:30 PM
No recording available

Description

One of the central problems of abstract ergodic theory is to classify probability-preserving systems as far as possible up to measurable isomorphism. Progress by Ornstein in the 1970's solved this problem for certain key examples called Bernoulli shifts, and led to proofs that many other naturally-occurring systems are isomorphic to these basic examples. At the heart of Ornstein's work is a necessary and sufficient condition for an abstract process to be isomorphic to a Bernoulli shift. It turns out that the key property of Bernoulli shifts that characterizes them is an instance of the concentration of measure phenomenon in product spaces. However, despite the central place this phenomenon has in Ornstein Theory and its later extensions, this connection with discrete probability seems to be relatively little known to either ergodic theorists or discrete mathematicians.
In this talk I will give a gentle overview (aimed at a broad audience, including CS theorists and others not in the area) of the classification problem and Ornstein's results, and will then introduce the most important open problem in this part of ergodic theory: Thouvenot's Weak Pinsker Conjecture. Similarly to Ornstein's Theorems, this is intimately connected with a fundamental question about the structure of the Hamming distance on subsets of high-dimensional product spaces, a positive answer to which would both imply the Weak Pinsker Conjecture and constitute a delicate generalization of concentration of measure.

Biography
Tim Austin received his Ph.D. in 2010 from the University of California, Los Angeles under the supervision of Terence Tao. His interests cover ergodic theory, metric geometry and geometric group theory. In his recent research he has developed new techniques for the analysis of certain nonconventional ergodic averages associated with the phenomenon of multiple recurrence, and shown how to construct examples of infinite discrete groups with various novel geometric properties. Tim received his B.A. from Trinity College, Cambridge University in 2005. While at UCLA, he held positions of research and teaching assistant; during the summers he has frequently held a visiting position at Microsoft Research. He is currently based at Brown University for the 2011 and 2012 academic years.

Opinion Dynamics and Influence in Social Networks

Asu Ozdaglar, MIT
Wednesday, August 17, 2011
4:00 PM – 5:30 PM
No recording available

Description

Almost all important social decisions are taken by individuals on the basis of their opinions, which are formed and updated as a result of their experiences, observations of others’ actions, and news, propaganda and indoctrination from media sources, political leaders and the state. In this talk, we present our recent work on opinion dynamics and influence in social networks. We study a stochastic gossip model of opinion dynamics in a society consisting of two types of agents: regular agents, who update their beliefs according to information that they receive from their social neighbors; and prominent agents with disproportionate impact on the opinions of the rest of the society.
We first consider the case when prominent agents obtain some information from others. We show that in this case all beliefs converge to a stochastic consensus. Our main results quantify the extent of their influence by providing bounds or exact results on the gap between the consensus value and the benchmark without prominent agents (where there is efficient information aggregation). We then consider the case when prominent agents are fully stubborn, i.e., they never update their opinions. In this case, opinion dynamics never lead to a consensus (among the regular agents). Instead, beliefs in the society almost surely fail to converge, and the belief of each regular agent converges in law to a non-degenerate random variable. The model in this case thus generates long-run disagreement and continuous opinion fluctuations. We provide explicit characterizations of the expected values and correlations of the limiting beliefs. We also present bounds on the dispersion of the expected values and variances of limiting beliefs as a function of the structure of the underlying social network and the location of the stubborn agents.
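To give a feel for the dynamics, here is a minimal gossip simulation of the fully stubborn case: at each step a random regular agent meets a random neighbor and averages beliefs with it, while stubborn agents never update, so regular beliefs keep fluctuating rather than reaching consensus. The ring network, meeting rule, and averaging weight are illustrative assumptions, not the exact model from the paper.

```python
import random

# Gossip opinion dynamics sketch: regular agents average their belief with a
# randomly met neighbor; stubborn agents never update. With stubborn agents
# holding opposed opinions, regular beliefs fluctuate forever instead of
# reaching consensus. Graph and parameters are illustrative.

def gossip(n=50, steps=20000, seed=1):
    rng = random.Random(seed)
    stubborn = {0: 0.0, n - 1: 1.0}               # two stubborn agents, opposed
    belief = [rng.random() for _ in range(n)]
    for v, b in stubborn.items():
        belief[v] = b
    neighbors = [[(v - 1) % n, (v + 1) % n] for v in range(n)]   # ring network
    for _ in range(steps):
        v = rng.randrange(n)
        if v in stubborn:
            continue                               # stubborn agents never update
        u = rng.choice(neighbors[v])
        belief[v] = 0.5 * (belief[v] + belief[u])  # average with the met neighbor
    return belief

b = gossip()
print(min(b), max(b))   # beliefs stay dispersed between the two stubborn opinions
```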

Biography
Asu Ozdaglar received the B.S. degree in electrical engineering from the Middle East Technical University, Ankara, Turkey, in 1996, and the S.M. and the Ph.D. degrees in electrical engineering and computer science from the Massachusetts Institute of Technology, Cambridge, in 1998 and 2003, respectively.
Since 2003, she has been a member of the faculty of the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, where she is currently the Class of 1943 Associate Professor. She is also a member of the Laboratory for Information and Decision Systems and the Operations Research Center. Her research interests include optimization theory, with emphasis on nonlinear programming and convex analysis, game theory, with applications in communication, social, and economic networks, and distributed optimization and control. She is the co-author of the book entitled “Convex Analysis and Optimization” (Athena Scientific, 2003).
Professor Ozdaglar is the recipient of a Microsoft fellowship, the MIT Graduate Student Council Teaching award, the NSF Career award, the 2008 Donald P. Eckman award of the American Automatic Control Council, and is a 2011 Kavli Fellow of the National Academy of Sciences. She served on the Board of Governors of the Control Systems Society in 2010 and is currently the chair of the working group “Game-Theoretic Methods in Networks” under the Technical Committee “Networks and Communication Systems”.

Injective tensor norms: hardness and reductions

Aram Harrow, U of Washington
Wednesday, August 10, 2011
4:00 PM – 5:30 PM
View recording here

Description

If a vector has one index and a matrix has two, then a tensor has k indices, where k could be 3 or more. In this talk, I'll consider the injective tensor norm, which for k=1 is the length of a vector and for k=2 is the largest singular value of a matrix. Applications of calculating this norm include finding planted cliques, simulating quantum systems and finding the distortion of certain norm embeddings.
I'll show that much of the difficulty of calculating the injective tensor norm is captured already when k=3, and I'll prove a hardness result in this case, even for finding an approximation with constant additive error. Previous hardness results applied only to the case of 1/poly(dim) accuracy. These results are based on joint work with Ashley Montanaro, and use quantum techniques, although the presentation will not assume familiarity with anything quantum.
I'll conclude by discussing algorithms, and a conjecture that would imply the existence of an algorithm whose complexity would match the above lower bound.
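As a numerical illustration of the object in question, the sketch below estimates the k=3 injective norm, max over unit x, y, z of <T, x (x) y (x) z>, by alternating maximization from random restarts. This is only a heuristic lower bound, consistent with the talk's point that the norm is hard even to approximate additively; the restart counts and numpy usage are illustrative choices.

```python
import numpy as np

# Heuristic sketch for the k=3 injective tensor norm:
#   ||T||_inj = max { sum_ijk T[i,j,k] x[i] y[j] z[k] : ||x|| = ||y|| = ||z|| = 1 }.
# Alternating maximization from random restarts gives a lower bound (a local
# maximum); it is not an exact or approximation algorithm for the norm.

def injective_norm_lower_bound(T, restarts=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    best = 0.0
    for _ in range(restarts):
        y = rng.standard_normal(T.shape[1]); y /= np.linalg.norm(y)
        z = rng.standard_normal(T.shape[2]); z /= np.linalg.norm(z)
        for _ in range(iters):
            # Fixing two vectors, the optimal third is the normalized contraction.
            x = np.einsum('ijk,j,k->i', T, y, z); x /= np.linalg.norm(x)
            y = np.einsum('ijk,i,k->j', T, x, z); y /= np.linalg.norm(y)
            z = np.einsum('ijk,i,j->k', T, x, y); z /= np.linalg.norm(z)
        best = max(best, float(np.einsum('ijk,i,j,k->', T, x, y, z)))
    return best

T = np.random.default_rng(1).standard_normal((5, 5, 5))
print(injective_norm_lower_bound(T))
# For k=2 the analogous quantity is the largest singular value of a matrix.
```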

Biography
Aram Harrow is a Research Assistant Professor in the CSE department of the University of Washington. He works on quantum algorithms, quantum information, and their connections to probability, convex geometry, complexity theory and other areas of math and computer science.
He graduated in 2005 from MIT with a PhD in physics, and then was a lecturer at the University of Bristol (in math and computer science) from 2005-2010 before moving to the University of Washington.

Learning Valuation Functions

Maria Florina Balcan, GA Tech
Wednesday, August 3, 2011
4:00 PM – 5:30 PM
View recording here

Description

A core element of microeconomics and game theory is that consumers have valuation functions over bundles of goods and that these valuation functions drive their purchases. In particular, the value given to a bundle need not be the sum of values on the individual items but rather can be a more complex function of how the items relate. Common valuation classes considered in the literature include OXS, submodular, and XOS valuations. Typically it is assumed that these valuations are known to the center or come from a known distribution. In this work we initiate the study of the approximate learnability of valuation classes in a distributional learning setting. We prove upper and lower bounds on the approximation guarantees achievable for learning over general data distributions by using only a polynomial number of examples. Our work combines central issues in economics with central issues in optimization (submodular functions and matroids) with central issues in learning (learnability of natural but complex classes of functions in a distributional setting).

Biography
Maria Florina Balcan is an Assistant Professor in the School of Computer Science at the Georgia Institute of Technology. Her main research interests are computational and statistical machine learning, computational aspects in economics and game theory, and algorithms. Other interests include mathematical models for signal processing and computer vision. She is the recipient of the CMU SCS Distinguished Dissertation Award, the 2011 Microsoft Faculty Research Fellowship, and an NSF CAREER Award.

The Laplacian Paradigm: Emerging Algorithms for Massive Graphs

Shang-Hua Teng, USC
Wednesday, July 27, 2011
4:00 PM – 5:30 PM
View recording here

Description

We describe an emerging paradigm for the design of efficient algorithms for massive graphs. This paradigm, which we will refer to as the Laplacian Paradigm, is built on a recent suite of nearly-linear time primitives in spectral graph theory developed by Spielman and Teng, especially their solver for linear systems Ax = b, where A is the Laplacian matrix of a weighted, undirected n-vertex graph and b is an n-place vector. In the Laplacian Paradigm for solving a problem, we reduce the optimization or computational problem to one or multiple linear algebraic problems that can be solved efficiently by applying the nearly-linear time Laplacian solver and the algorithms in this suite for local clustering, spectral sparsification, and low stretch spanning trees.
The Laplacian Paradigm has been used in a recent algorithm for computing an approximately maximum s-t flow in a capacitated, undirected graph. It has also been applied to obtain nearly-linear-time algorithms for applications in semi-supervised learning, image processing, web-spam detection, eigenvalue approximation, and for solving elliptic finite element systems.
Recently, almost all original primitives in this suite including the Laplacian solver itself have been significantly improved and practical implementation might become available in the near future. The goal of this presentation is to encourage more researchers to consider the use of the Laplacian Paradigm to develop faster algorithms for solving fundamental problems in combinatorial optimization, scientific computing, machine learning, network analysis, and other applications that involve massive graphs.
NOTE: Considering the recent colloquium talks by Dan Spielman and Jon Kelner on spectral graph sparsification, maximum flow computation, the improved Laplacian linear solver, and the application of the Laplacian Paradigm to machine learning, respectively, I will focus technically on the local clustering algorithms in the context of motivating the Laplacian Paradigm.
Joint work with Daniel Spielman (Yale) and Paul Christiano (MIT), Jonathan Kelner (MIT), and Aleksander Madry (MIT).
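For readers new to the paradigm, the sketch below builds the core object, the graph Laplacian L = D - A of a weighted undirected graph, and solves L x = b (here with an off-the-shelf dense least-squares solve rather than the nearly-linear-time Spielman-Teng solver), for example to compute voltages and an effective resistance. The example graph and the numpy calls are illustrative choices.

```python
import numpy as np

# Sketch of the core primitive in the Laplacian Paradigm: form the Laplacian
# L = D - A of a weighted undirected graph and solve L x = b. A dense
# least-squares solve is used here only for illustration; the paradigm's point
# is that such systems can be solved in nearly linear time.

def laplacian(n, weighted_edges):
    L = np.zeros((n, n))
    for u, v, w in weighted_edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    return L

# Example: voltages when one unit of current enters node 0 and leaves node 3.
# (b must sum to zero, since L is singular on the all-ones vector.)
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 1.0), (1, 3, 2.0)]
L = laplacian(4, edges)
b = np.array([1.0, 0.0, 0.0, -1.0])
x, *_ = np.linalg.lstsq(L, b, rcond=None)
print(x[0] - x[3])   # effective resistance between nodes 0 and 3
```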

Biography
Shang-Hua Teng is the Seeley G. Mudd Professor and the Chairman of the Computer Science Department at the USC Viterbi School of Engineering. He taught as an instructor in the Department of Mathematics of MIT and as a professor in the Computer Science Departments of Boston University, the University of Minnesota and UIUC. He has worked and consulted for Microsoft Research, Akamai, IBM Almaden Research Center, Intel Corporation, Xerox PARC, Cray Research/SGI, Thinking Machine Corporation, and NASA Ames Research Center. He received a dual B.S. degree in computer science and in electrical engineering from Shanghai Jiao Tong University in 1985, an M.S. degree in computer science from the University of Southern California (USC) in 1988, and a Ph.D. degree in computer science from Carnegie Mellon University (CMU) in 1991.
He is a recipient of the 2009 Fulkerson Prize (AMS-MPS) and the 2008 Gödel Prize (ACM-EATCS) for his joint work on Smoothed Analysis of Algorithms. He is also an ACM Fellow, Alfred P. Sloan Fellow, winner of Senior Xerox Award for Outstanding Faculty Research (UIUC), and has received NSF Faculty Early Career Development Award. He recently coauthored a paper which was chosen to be one of the two best papers for ACM STOC 2011. His research interests include spectral graph theory, computational game and economics theory, scientific computing, mathematical programming, and computational geometry. He has received more than ten US Patents for his work on compiler optimization and Internet technology.

Medium Access Using Queues

Devavrat Shah, MIT
Wednesday, July 20, 2011
4:00 PM – 5:30 PM
View recording here

Description

Medium access control (MAC) resolves contention among simultaneously transmitting wireless nodes so as to utilize the wireless medium efficiently while mitigating interference. To be implementable, the MAC algorithm is required to be totally distributed and simple. In this talk, we will present such a MAC algorithm that is optimal in terms of utilizing the medium, thus bringing a long-pursued quest (dating to the 1970s) to an end. In this MAC, each node utilizes its local queue-size and recent contention history to make decisions. We will also discuss related issues in stochastic networks. The talk will be self-contained. It is based on joint work with Jinwoo Shin and Prasad Tetali.
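As a rough illustration of the flavor of queue-based medium access (a generic toy simulation under simplifying assumptions; the aggressiveness function below is hypothetical and is not the algorithm analyzed in the talk), consider a slotted system in which each backlogged node attempts transmission with a probability that grows with its own queue length.

    import random

    def attempt_probability(queue_len):
        # Hypothetical aggressiveness function: longer queues attempt more often.
        # The MAC in the talk uses a carefully chosen function of the local
        # queue size; this particular choice is only for illustration.
        return queue_len / (queue_len + 1.0)

    def simulate(arrival_rates, slots=20000, seed=0):
        rng = random.Random(seed)
        queues = [0] * len(arrival_rates)
        served = [0] * len(arrival_rates)
        for _ in range(slots):
            for i, lam in enumerate(arrival_rates):   # packet arrivals
                if rng.random() < lam:
                    queues[i] += 1
            # Contention: a slot is useful only if exactly one backlogged
            # node attempts to transmit; otherwise it is idle or a collision.
            attempts = [i for i, q in enumerate(queues)
                        if q > 0 and rng.random() < attempt_probability(q)]
            if len(attempts) == 1:
                queues[attempts[0]] -= 1
                served[attempts[0]] += 1
        return queues, served

    print(simulate([0.1, 0.1, 0.1]))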

Biography
Devavrat Shah is currently the Jamieson Career Development Associate Professor in the Department of Electrical Engineering and Computer Science at MIT. He is a member of the Laboratory for Information and Decision Systems (LIDS) and the Operations Research Center (ORC).
His research focuses on the theory of large complex networks, including network algorithms, stochastic networks, network information theory and large-scale statistical inference. He was awarded the ACM SIGMETRICS Rising Star Award 2008 for his work on network scheduling algorithms.
He received the 2010 Erlang Prize from INFORMS which is given to a young researcher for outstanding contributions to applied probability.
He is currently an associate editor of Operations Research.

Fitting a Graph to Vector Data

Jonathan Kelner, MIT
Wednesday, July 13, 2011
4:00 PM – 5:30 PM
View recording here

Description

In this talk, I will set forth a general approach to many of the major problems in Machine Learning, including classification, regression and clustering, based on ideas from spectral graph theory. Applying this approach will yield a simple and clean methodology that gives very good solutions to these problems, outperforming the state-of-the-art on many of the standard data sets.
In recent years, a number of researchers have gained insight by fitting graphs to their data and then using these graphs to solve various problems. They have employed simply defined graphs that are easy to compute, associating a vertex of the graph with each data vector, and then connecting vertices whose vectors are sufficiently close, sometimes with weights depending on the distance. Not surprisingly, different results are obtained by the use of different graphs, and researchers have studied how to combine different graphs in a way that tends to give heavier weight to the better graphs.
I will examine what can be gained by choosing the graphs with more care. To this end, I will pose the question, "What is the right graph to fit to a set of vectors in R^n?"
I will then propose an answer based on the graph Laplacian and a discrete analogue of the harmonic energy of a manifold, and I will discuss how to surmount the algorithmic challenges that arise in computing the optimal graphs according to this measure.
These graphs have several interesting theoretical properties and relate to a variety of other concepts in the machine learning literature, including manifold learning and nonlinear dimensionality reduction. In addition, I will show how they can be viewed as a generalization of both k-means clustering and singular value decompositions.
Joint work with Daniel Spielman and Samuel Daitch.
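As a concrete point of comparison for the optimized graphs above, here is a minimal sketch of the simple distance-based construction described in the second paragraph (the Gaussian weighting and the radius and sigma parameters are illustrative choices, not taken from the talk).

    import numpy as np

    def distance_graph(X, radius, sigma=1.0):
        """Connect rows of X that lie within `radius`, weighted by distance."""
        n = len(X)
        W = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                d = np.linalg.norm(X[i] - X[j])
                if d <= radius:
                    w = np.exp(-d**2 / (2 * sigma**2))   # heavier weight when closer
                    W[i, j] = W[j, i] = w
        # Graph Laplacian L = D - W, the object used in downstream learning tasks.
        L = np.diag(W.sum(axis=1)) - W
        return W, L

    X = np.random.RandomState(0).randn(20, 3)
    W, L = distance_graph(X, radius=2.0)
    print(W.shape, np.allclose(L.sum(axis=1), 0))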

Biography
Jonathan Kelner is an Assistant Professor of Applied Mathematics in the MIT Department of Mathematics and a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). His research focuses on the application of techniques from pure mathematics to the solution of fundamental problems in algorithms and complexity theory.
He was an undergraduate at Harvard University, and he received his Ph.D. in Computer Science from MIT in 2006. Before joining the MIT faculty, he spent a year as a member of the Institute for Advanced Study. He has received a variety of awards for his work, including an NSF CAREER Award, an Alfred P. Sloan Research Fellowship, the NEC Award for Research in Computers and Communication, the Sprowls Doctoral Dissertation Award, the Best Student Paper Award at STOC 2004, the Best Paper Awards at SIGMETRICS 2011 and STOC 2011, and the Harold E. Edgerton Faculty Achievement Award.

On the Power of Adaptivity in Sparse Recovery

Piotr Indyk, MIT
Wednesday, July 6, 2011
4:00 PM – 5:30 PM
No recording available

Description

The goal of sparse recovery is to recover a sparse (with few non-zeros) approximation of data from the available information. We consider the problem of recovering the (approximately) best k-sparse approximation x* of an n-dimensional vector x from linear measurements of x. It is known that this task can be accomplished using m=O(k log (n/k)) non-adaptive measurements and that this bound is tight.
In this talk we show that if one is allowed to perform measurements that are adaptive, then the number of measurements can be considerably reduced compared to the non-adaptive case. We also discuss implications of our results to data stream computing and compressive sensing.
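To illustrate operationally what adaptive measurements are (a toy sketch for an exactly 1-sparse vector with no noise; it is not the scheme from the talk and makes no claim about measurement-optimality), each linear measurement below is chosen based on the outcomes of the previous ones, here by bisection.

    import numpy as np

    def measure(x, a):
        """One linear measurement <a, x>."""
        return float(a @ x)

    def locate_single_nonzero(x):
        """Adaptive bisection for an exactly 1-sparse vector (toy example)."""
        lo, hi = 0, len(x)            # search within the index range [lo, hi)
        while hi - lo > 1:
            mid = (lo + hi) // 2
            a = np.zeros(len(x))
            a[lo:mid] = 1.0           # indicator of the left half
            if measure(x, a) != 0.0:
                hi = mid              # the nonzero lies in the left half
            else:
                lo = mid              # otherwise it lies in the right half
        return lo

    x = np.zeros(64)
    x[37] = 3.5
    print(locate_single_nonzero(x))   # 37, using about log2(64) = 6 measurements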

Biography
Piotr Indyk is a Full Professor of Electrical Engineering and Computer Science at MIT (as of July 1). He joined MIT in 2000, after earning his PhD from Stanford University. Earlier, he received a Magister degree from Uniwersytet Warszawski in 1995. His research interests include high-dimensional computational geometry, sketching and streaming algorithms, sparse recovery and compressive sensing. He received the NSF CAREER Award, a Sloan Fellowship and a Packard Fellowship. He is an Associate Editor for the IEEE Transactions on Signal Processing.

Price Discrimination Through Communication

Rakesh Vohra, Kellogg School of Management, Northwestern
Wednesday, June 29, 2011
4:00 PM – 5:30 PM
No recording available

Description

Itai Sher and I study a seller's optimal mechanism for maximizing revenue when the buyer may present evidence relevant to the buyer's value, or when different types of buyer have a differential ability to communicate. We also study a dynamic bargaining protocol in which the buyer first makes a sequence of concessions in a cheap talk phase, and then, at a time determined by the seller, the buyer presents evidence to support his previous assertions, after which the seller makes a take-it-or-leave-it offer. Our main result is that the optimal mechanism can be implemented as a sequential equilibrium of our dynamic bargaining protocol. Unlike the optimal mechanism to which the seller can commit, the equilibrium of the bargaining protocol also provides incentives for the seller to behave as required. We thereby provide a natural procedure whereby the seller can optimally price discriminate on the basis of the buyer's evidence.

Biography
Rakesh Vohra was educated at University College London, the LSE and the University of Maryland. He is currently the John L. & Helen Kellogg Professor of Managerial Economics at the Kellogg School of Management as well as Director of the Center for Mathematical Studies in Economics and Management Science. His research interests are in pricing, auction theory and game theory.

On Optimal Multidimensional Mechanism Design

Constantinos Daskalakis, MIT
Wednesday, June 22, 2011
4:00 PM – 5:30 PM
View recording here

Description

In his seminal paper, Myerson [1981] provides a revenue-optimal auction for a seller who is looking to sell a single item to multiple bidders. His design has been generalized to various related settings, but despite much research effort there is no optimal auction known to date for the multiple-item multiple-bidder problem, known in the literature as the "optimal multidimensional mechanism design problem". We solve this problem when either the number of bidders or the number of items is a constant. In the first setting, we need that the values of each bidder for the items are i.i.d., but allow different distributions for each bidder. In the second setting, we allow the values of each bidder for the items to be arbitrarily correlated, but assume that the bidders are i.i.d. For all epsilon > 0, we obtain a computationally efficient additive epsilon-approximation when the value distributions are bounded, or a multiplicative (1-epsilon)-approximation when the value distributions are unbounded but satisfy the Monotone Hazard Rate condition. When there is a single bidder, we generalize these results to independent but not necessarily identically distributed value distributions, and to independent regular distributions. (This is joint work with Yang Cai and Matt Weinberg.)
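For background, a standard fact about the single-item case referenced above (not a result from this talk): with independent bidders whose regular value distributions have cdf F_i and density f_i, Myerson's optimal auction awards the item to the bidder with the highest nonnegative virtual value

    phi_i(v_i) = v_i - (1 - F_i(v_i)) / f_i(v_i)

(and to no one if all virtual values are negative), charging the winner the smallest bid with which she would still win; the expected revenue of any truthful mechanism equals its expected virtual surplus.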

Biography
Constantinos (or Costis) Daskalakis is an Assistant Professor of EECS at MIT. He completed his undergraduate studies in Greece, at the National Technical University of Athens, and obtained a PhD in Computer Science from UC Berkeley. After Berkeley he was a postdoctoral researcher at Microsoft Research New England, and since 2009 he has been on the faculty of MIT. His research interests lie in Algorithmic Game Theory and Applied Probability, in particular computational aspects of markets and the Internet, social networks, and computational problems in Biology. Costis has been honored with a 2007 Microsoft Graduate Research Fellowship, the 2008 Game Theory and Computer Science Prize from the Game Theory Society, the 2008 ACM Doctoral Dissertation Award, an NSF CAREER Award, a 2010 Sloan Foundation Fellowship in Computer Science, the 2011 SIAM Outstanding Paper Prize, and the MIT Ruth and Joel Spira Award for Distinguished Teaching.

Phase Transitions for Modified Erdos-Renyi Processes

Joel Spencer, Courant Institute, NYU
Wednesday, June 15, 2011
4:00 PM – 5:30 PM
View recording here

Description

A fundamental and very well studied region of the Erdos-Renyi process is the phase transition at m near n/2 edges in which a giant component suddenly appears. We review the behavior in the barely subcritical and barely supercritical regimes. We modify the process, particularly discussing a modification due to Tom Bohman and Alan Frieze in which isolated vertices are given preference.
While the position of the phase transition changes and many constants change, we show to a large extent (and conjecture the rest!) that the critical exponents remain the same and that the two processes belong to the same universality class. A key role is played by the susceptibility of the graph, a concept taken from theoretical physics with strong application to large finite graphs. The susceptibility, asymptotically, is shown to satisfy a differential equation from which its barely subcritical behavior may be deduced. We also discuss other natural processes which appear to have very different behavior.
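The susceptibility mentioned above can be taken to be S = (1/n) * sum over components C of |C|^2, the expected size of the component containing a random vertex. The following sketch (an illustration with arbitrary parameters, not code from the talk) tracks it with a union-find structure as random edges arrive one at a time, so its blow-up as the number of edges passes n/2 can be observed directly.

    import random

    def erdos_renyi_process(n, m, seed=0):
        rng = random.Random(seed)
        parent = list(range(n))
        size = [1] * n
        sum_sq = n                               # sum of squared component sizes

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]    # path halving
                v = parent[v]
            return v

        history = []
        edges = 0
        while edges < m:
            u, v = rng.randrange(n), rng.randrange(n)
            if u == v:
                continue
            edges += 1
            ru, rv = find(u), find(v)
            if ru != rv:
                sum_sq += 2 * size[ru] * size[rv]   # (a+b)^2 = a^2 + b^2 + 2ab
                if size[ru] < size[rv]:
                    ru, rv = rv, ru
                parent[rv] = ru
                size[ru] += size[rv]
            history.append(sum_sq / n)              # susceptibility after this edge
        return history

    n = 100000
    sus = erdos_renyi_process(n, int(0.6 * n))
    print(sus[int(0.45 * n) - 1], sus[int(0.55 * n) - 1])  # below vs. above n/2 edges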

Biography
Joel Spencer's research centers on Probabilistic Methods in Discrete Math and Computer Science. He is coauthor of "The Probabilistic Method" and cofounder of "Random Structures and Algorithms." His Erdos Number is one.

Stacking Bricks and Stoning Crows

Peter Winkler, Dartmouth
Wednesday, June 8, 2011
4:00 PM – 5:30 PM
No recording available

Description

How far can a stack of n bricks hang over the edge of a table? This 150-year-old problem is finally solved (asymptotically), and the answer is not what most people thought. We will present a construction and a curious result about random walk by an invisible object, upon which the upper bound relies.
[This work won the 2011 Robbins Prize for Mike Paterson (Warwick), Yuval Peres (MSR!), Mikkel Thorup (AT&T Labs), the speaker, and Uri Zwick (Tel Aviv)---all nominally computer theorists.]
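As background on the numbers involved (standard facts about this problem rather than material from the talk): the classical single-column harmonic stack achieves overhang

    (1/2) * (1 + 1/2 + ... + 1/n) = H_n / 2, roughly (1/2) ln n,

while the work cited above shows that an overhang of order n^{1/3} is achievable with more elaborate stacks and, via the random-walk argument mentioned, that order n^{1/3} is also asymptotically best possible.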

Biography
Peter Winkler is Professor of Mathematics and Computer Science, and Albert Bradley Third Century Professor in the Sciences, at Dartmouth College. A winner of the Mathematical Association of America's Lester R. Ford Award for mathematical exposition, he is the author of about 135 mathematical research papers and holds a dozen patents in computing, cryptology, holography, optical networking and marine navigation. His research is primarily in combinatorics, probability, and the theory of computing, with forays into statistical physics. Prof. Winkler has also authored two collections of mathematical puzzles, a portfolio of compositions for ragtime piano, and a book on uses of cryptography in the game of bridge.

Strong LP Formulations and Primal-Dual Approximation Algorithms

David Shmoys, Cornell
Wednesday, June 1, 2011
4:00 PM – 5:30 PM
View recording here

Description

The state of the art in the design and analysis of approximation algorithms for NP-hard discrete optimization problems has advanced significantly over the past two decades; furthermore, the most prevalent approach has been to rely on the strength of natural linear programming relaxations. Work in computational integer programming over the same time period has shown the power of adding combinatorially-motivated valid inequalities, but this has not often been employed in the design of approximation algorithms.
We will show how this approach can be applied in the design and analysis of primal-dual approximation algorithms. We will present several recent approximation results along these lines, starting with a (surprisingly simple) primal-dual analogue of an LP-rounding result of Carr, Fleischer, Leung, & Phillips for the minimum-cost covering knapsack problem, which finds a solution of cost at most twice the optimum.
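For reference, the strengthened relaxation alluded to above is built from the knapsack-cover inequalities of Carr et al.; in one standard formulation (stated here as background, and possibly differing in details from the talk), the covering knapsack problem asks to minimize sum_i c_i x_i subject to sum_i u_i x_i >= D with x in {0,1}^n, and for every subset A of items with u(A) < D one adds the valid inequality

    sum_{i not in A} min(u_i, D - u(A)) * x_i >= D - u(A).

The LP strengthened with all such inequalities is known to have integrality gap at most 2, which is the guarantee matched by the primal-dual algorithm described above.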
We will then consider a broad class of single-machine scheduling problems, where the cost of scheduling each job is a (job-specific) non-decreasing function of the time at which it completes, and the objective is to minimize the total cost of the schedule. Bansal and Pruhs recently gave the first constant approximation algorithm for this setting; we adapt the primal-dual approach to directly yield a 2-approximation algorithm for this class of problems.
This is joint work with Tim Carnes and Maurice Cheung.

Biography
David Shmoys is a Professor of Operations Research and Information Engineering as well as of Computer Science at Cornell University. He obtained his Ph.D. in Computer Science from the University of California at Berkeley in 1984, and held postdoctoral positions at MSRI in Berkeley and Harvard University, and a faculty position at MIT before joining the Cornell faculty. Shmoys's research has focused on the design and analysis of efficient algorithms for discrete optimization problems, and his work has highlighted the central role that linear programming plays in the design of approximation algorithms for NP-hard problems. He is a Fellow of the ACM, was an NSF Presidential Young Investigator, and has served on numerous editorial boards, including the SIAM Journal on Computing, the SIAM Journal on Discrete Mathematics (for which he was Editor-in-Chief), Mathematics of Operations Research, Operations Research, and Mathematical Programming.

The Complexity of Fixpoints and Other Existence Theorems

Christos Papadimitriou, Berkeley
Wednesday, May 18, 2011
4:00 PM – 5:30 PM
No recording available

Description

Factoring an integer and finding a Nash equilibrium of a game are two examples of hard problems that always have a solution, and are therefore unlikely to be NP-complete. There is a fascinating complexity theory of such problems, and many more interesting examples, each corresponding to an elegant, non-constructive existence proof, including the theorems by Chevalley, Smith, Minkowski, Brouwer, and Borsuk-Ulam. I will survey this area, identify the four elementary lemmata on which all these proofs are based, and describe some recent developments.

Biography
Christos Papadimitriou studied in Greece and at Princeton, and has taught at Harvard, MIT, Stanford, UCSD, and since 1996 at Berkeley, where he is the C. Lester Hogan Professor of Computer Science. His research is about algorithms and complexity, and their applications to a variety of fields such as databases, optimization, AI and robotics, networks, game theory and economics, and evolution. He has published several of the standard textbooks in Computer Science, and the novels "Turing" and "Logicomix." He is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences.

Computational Complexity in Differential Privacy

Salil P. Vadhan, Harvard
Wednesday, May 11, 2011
4:00 PM – 5:30 PM
No recording available

Description

An exciting recent line of work in theoretical computer science has developed a new notion of privacy for computing on databases with sensitive information, known as "differential privacy." It requires that no individual's data should have a significant influence on the distribution of the (randomized) output. This is a strong privacy notion that is robust to auxiliary information available to an adversary, yet it has been shown to require only a small cost in accuracy for many computations of interest.
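For completeness, the standard formal definition behind the phrase above: a randomized algorithm M is epsilon-differentially private if for every pair of databases D and D' differing in one individual's data, and every set S of possible outputs,

    Pr[M(D) in S] <= e^epsilon * Pr[M(D') in S].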
In this talk, we will see how computational resource constraints can affect the achievability of differential privacy. We will discuss both resource constraints on the data curator (making privacy harder to achieve) and on the adversary (making privacy easier to achieve). The technical portion of the talk will focus on the latter. Specifically, we show that a computational relaxation of differential privacy (where we only consider computationally bounded adversaries) allows for significantly more accurate 2-party protocols for estimating the Hamming distance between two binary vectors.
Based on joint work with Andrew McGregor, Ilya Mironov, Omer Reingold, Omkant Pandey, Toni Pitassi, and Kunal Talwar.

Biography
Salil Vadhan is the Vicky Joseph Professor of Computer Science and Applied Mathematics and Director of the Center for Research on Computation and Society in the Harvard University School of Engineering and Applied Sciences. He received his Ph.D. in Applied Mathematics from MIT in 1999 under the supervision of Shafi Goldwasser. He was an NSF Mathematical Sciences postdoctoral fellow at MIT and the Institute for Advanced Study before joining the Harvard faculty in 2001. He has also been a Fellow at the Radcliffe Institute for Advanced Study and a Miller Visiting Professor at UC Berkeley.
Vadhan's research is in computational complexity, cryptography, and randomness in computation. Specific areas of focus have included zero-knowledge proofs, pseudorandomness, and data privacy. His Ph.D. thesis on statistical zero-knowledge proofs received the ACM Doctoral Dissertation Award in 2000, and his paper on the zig-zag product for expander graphs was a co-recipient of the Gödel Prize in 2009. Vadhan has also received a Guggenheim Fellowship (2007-08), an ONR Young Investigator Award (2004-07), a Sloan Fellowship (2002-04), an NSF CAREER Award (2002-07), and a Phi Beta Kappa Award for Excellence in Teaching (2004).

Protecting Circuits from Leakage: The Computationally-Bounded and Noisy Cases

Leo Reyzin, Boston University
Wednesday, May 4, 2011
4:00 PM – 5:30 PM
View recording here

Description

Joint work with Sebastian Faust, Tal Rabin, Eran Tromer, and Vinod Vaikuntanathan
Computational devices leak side-channel information that may, and often does, reveal secret internal states. Such leakage is often not modeled properly by the analysis of security protocols, where the internals of computing devices are simply abstracted away. To make such analysis relevant, computational devices must be fully shielded from the outside world.
We present a general transformation that compiles any circuit into a new circuit with the same functionality that is, in addition, resilient against well-defined classes of leakage. Our construction requires a small, stateless, and computation-independent leak-proof component that draws random elements from a fixed distribution. In essence, we reduce the problem of shielding arbitrarily complex circuits to the problem of shielding a single, simple component.
Our approach is based on modeling the adversary as a powerful observer that inspects the device via a limited measurement apparatus. We allow the apparatus to access all the bits of the computation (except those inside the leak-proof component) and the amount of leaked information to grow unbounded over time. However, we assume that the apparatus is limited either in its computational ability (it lacks the ability to decode certain linear encodings and outputs a limited number of bits per iteration) or its precision (each observed bit is flipped with some probability). While our results apply in general to such leakage classes, in particular, we obtain security against:
-- Constant-depth circuit leakage, where the measurement apparatus can be computed by an AC0 circuit (composed of NOT gates and unbounded fan-in AND and OR gates), or an AC0[m] circuit (which is the same as AC0 except that it may also use mod m gates).
-- Noisy leakage, where the measurement apparatus reveals all the bits of the state of the circuit, perturbed by independent binomial noise. Namely, each bit of the computation is perturbed with probability p, and remains unchanged with probability 1-p.

Biography
Leo Reyzin is an Associate Professor of Computer Science at Boston University, conducting research on extending cryptographic techniques to provide protection in more hostile environments. He received his A.B. in computer science from Harvard, and S.M. and Ph.D. in cryptography at MIT. He is a recipient of the National Science Foundation's CAREER award and of Boston University's Neu Family Award for Excellence in Teaching.

All Pairs Shortest Path in Quadratic Time with High Probability

Benny Sudakov, UCLA
Wednesday, April 27, 2011
4:00 PM – 5:30 PM
View recording here

Description
The all-pairs shortest path problem is one of the most important and most studied algorithmic graph problems. For this problem, many researchers have developed algorithms that work well on random instances, most notably complete directed graphs on n vertices with random weights on their edges. The simplest such model, on which we focus in this talk, is the one in which all edge weights are drawn independently at random from the uniform distribution on [0,1].
We present an all-pairs shortest path algorithm in this setting whose running time is $O(n^2)$, in expectation and with high probability. This is clearly best possible and resolves a long standing open problem.
Joint work with Y. Peres, D. Sotnikov and U. Zwick.

Biography
Benny Sudakov received his Ph.D. from Tel Aviv University in 1999 under Noga Alon, held appointments at Princeton University and the Institute for Advanced Study, and is currently a professor of mathematics at UCLA. He is the recipient of an Alfred P. Sloan Foundation Fellowship and an NSF CAREER Award, and was an invited speaker at the 2010 International Congress of Mathematicians. His main interests are Combinatorics and its applications in Computer Science.

Random Sampling and Cuts in Graphs: Three Combinatorial Proofs

David Karger, MIT
Thursday, April 21, 2011 **Special Date & Time**
1:00 PM – 2:30 PM
No recording available

Description
I'll synthesize work showing that randomly sampling the edges of an undirected graph preserves cut values. I'll give three distinct proofs, based on random contraction, tree packing, and network reliability, that the values of cuts are tightly concentrated around their expectations when those expectations are sufficiently large. I'll give a combinatorial construction for sparsifying any n-vertex undirected graph down to O(n log n) edges with only small perturbations in cut value and show how it can be used in fast algorithms for approximate and exact maximum flow computations. While one can achieve slightly better sparsification using algebraic techniques, I believe these combinatorial methods offer useful insights and may yield more in the future. The talk is self-contained, requiring only elementary knowledge of combinatorics and probability.
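A minimal sketch of the sampling step described above (purely illustrative; the theory ties the sampling probability p to the minimum cut and to n, while here p is just a parameter): keep each edge independently with probability p and reweight the survivors by 1/p, so every cut keeps its value in expectation.

    import random

    def sample_edges(edges, p, seed=0):
        """Keep each weighted edge (u, v, w) with probability p, reweighted by 1/p."""
        rng = random.Random(seed)
        return [(u, v, w / p) for (u, v, w) in edges if rng.random() < p]

    def cut_value(edges, side):
        """Total weight of edges crossing the cut defined by the vertex set `side`."""
        return sum(w for (u, v, w) in edges if (u in side) != (v in side))

    # Toy example: a cycle on 8 vertices with unit edge weights.
    edges = [(i, (i + 1) % 8, 1.0) for i in range(8)]
    side = {0, 1, 2, 3}
    sampled = sample_edges(edges, p=0.5)
    print(cut_value(edges, side), cut_value(sampled, side))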

Towards Coding for Maximum Errors in Interactive Communication

Anup Rao, U Washington
Wednesday, April 13, 2011
4:00 PM – 5:30 PM
View recording here

Description
We show that it is possible to encode any communication protocol between two parties so that the protocol succeeds even if a (1/4 - epsilon) fraction of all symbols transmitted by the parties are corrupted adversarially, at a cost of increasing the communication in the protocol by a constant factor (the constant depends on epsilon). This encoding uses a constant-sized alphabet. This improves on an earlier result of Schulman, who showed how to recover when the fraction of errors is bounded by 1/240. We also show how to simulate an arbitrary protocol with a protocol using the binary alphabet, a constant factor increase in communication, and tolerance of a 1/8 - epsilon fraction of errors.
Joint work with Mark Braverman.

Biography
Anup Rao is an Assistant Professor at the University of Washington. He was a researcher at Princeton, a member of the Institute for Advanced Study, and a graduate student at the University of Texas at Austin under David Zuckerman.

The Theory of Crowdsourcing

Jason Hartline, Northwestern
Wednesday, April 6, 2011
4:00 PM – 5:30 PM

No recording available

Description
Crowdsourcing contests have been popularized by the Netflix challenge and websites like TopCoder and Taskcn. What is crowdsourcing? Imagine you are designing a new web service, you have it all coded up, but the site looks bad because you haven't got any graphic design skills. You could hire an artist to design your logo, or you could post the design task as a competition to crowdsourcing website Taskcn with a monetary reward of $100. Contestants on Taskcn would then compete to produce the best logo. You then select your favorite logo and award that contestant the $100 prize.
In this talk, I discuss the theory of crowdsourcing contests. First, I will show how to model crowdsourcing contests using auction theory. Second, I will discuss how to solve for contestant strategies. I.e., suppose you were entering such a programming contest on TopCoder, how much work should you do on your entry to optimize your gains from winning less the cost of doing the work? Finally, I will discuss inefficiency from the fact that the effort of losing contestants is wasted (e.g., every contestant has to do work to design a logo, but you only value your favorite logo). I will show that this wasted effort is at most half of the total amount of effort. A consequence is that crowdsourcing is approximately as efficient a means of procurement as conventional methods (e.g., auctions or negotiations).
Joint work with Shuchi Chawla and Balu Sivan

Biography
TBD

Slow to Anger and Fast to Forgive

Drew Fudenberg, Harvard
Wednesday, March 30, 2011
4:00 PM – 5:30 PM

No recording available

Description
We study the experimental play of the repeated prisoner’s dilemma when intended actions are implemented with noise. In treatments where cooperation is an equilibrium, subjects cooperate substantially more than in treatments without cooperative equilibria. In all settings there was considerable strategic diversity, indicating that subjects had not fully learned the distribution of play. Furthermore, cooperative strategies yielded higher payoffs than uncooperative strategies in the treatments with cooperative equilibria. In these treatments successful strategies were “lenient” in not retaliating for the first defection, and many were “forgiving” in trying to return to cooperation after inflicting a punishment. Joint work with Dave Rand and Anna Dreber.

Biography
Drew Fudenberg is the Frederick E. Abbe Professor of Economics at Harvard University, where he has been teaching since 1993. He received his AB in Applied Mathematics from Harvard College in 1978, and his PhD in economics from MIT in 1981. In addition to being a fellow of the American Academy of Arts and Sciences and of the Econometric Society, Fudenberg has been a Guggenheim Fellow and a Sloan Research Fellow. He was the editor of Econometrica from 1996 to 2000, helped found Theoretical Economics, and has served on the editorial board of many leading journals, as well as on the council and other committees of the Econometric Society. Fudenberg has written four books, including a leading game-theory text, and more than 80 scientific articles. His seminal work on learning in games highlights the crucial roles of informational feedback and the extensive details of the game. His work on repeated games shows how patient players can achieve cooperative payoffs in a noncooperative equilibrium, even in the presence of moral hazard, adverse selection and imperfect monitoring. He has made major contributions to many other areas of game theory, including reputation effects, evolutionary game theory, incomplete-information bargaining, oligopoly theory, and theoretical industrial organization.

Higher Order Principal Components: Complexity and Applications

Santosh S. Vempala, GA Tech
Wednesday, March 23, 2011
4:00 PM – 5:30 PM
View recording here

Description
Standard Principal Components are directions that optimize second moments of a given set of points and have proven to be a powerful tool. Here we consider higher order principal components, i.e., directions that optimize higher moments of a data set (or the spectral norm of higher-dimensional arrays). They appear to be much less structured --- there can be exponentially many, they need not be pairwise orthogonal, and it is NP-hard to find global maxima for arbitrary inputs.
We discuss applications to combinatorial optimization and learning: (a) finding a planted clique in a random graph, where higher-order maxima even for semi-random inputs would be effective and (b) learning an unknown function of a low-dimensional subspace from labeled examples (a k-subspace junta, generalizing the well-known class of k-juntas), where *local* optima suffice and can be approximated efficiently for a wide class of input distributions.
Most of the talk is joint work with Ying Xiao.
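To make the notion of a higher-order principal component concrete (a toy local-search sketch with arbitrary parameters, not the algorithms from the talk; it finds only a local optimum, which is exactly what application (b) above requires), one can run a power-iteration-style update for the empirical fourth moment of the projected data.

    import numpy as np

    def fourth_moment_direction(X, steps=200, seed=0):
        """Locally maximize the empirical 4th moment E[(x.u)^4] over unit vectors u."""
        rng = np.random.RandomState(seed)
        n, d = X.shape
        u = rng.randn(d)
        u /= np.linalg.norm(u)
        for _ in range(steps):
            proj = X @ u                  # projections of the data onto u
            u = (proj ** 3) @ X / n       # power-iteration step for the 4th-moment tensor
            u /= np.linalg.norm(u)        # project back onto the unit sphere
        return u, float(np.mean((X @ u) ** 4))

    X = np.random.RandomState(1).randn(2000, 5)
    X[:, 0] *= 2.0                        # plant a direction with a larger 4th moment
    u, m4 = fourth_moment_direction(X)
    print(np.round(u, 2), round(m4, 2))   # u should align with the planted direction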

Biography
Santosh Vempala is a Distinguished Professor of Computer Science and currently serves as the Director of the Algorithms, Randomness and Complexity Center (ARC) at Georgia Tech. His research interests are, ironically, in algorithms, randomness and complexity. He graduated from CMU in 1997, advised by Avrim Blum, and was at MIT until 2006 except for a year at UC Berkeley. He has written two books, "The Random Projection Method" (2004) and "Spectral Algorithms" (with Ravi Kannan, 2009), and is working on a third titled "How to be(at) Kalai." Vempala continues to get unreasonably excited when a phenomenon that appears complex from one perspective turns out to be simple from another. More information can be found on his webpage: http://www.cc.gatech.edu/~vempala

A Two Prover One Round Game with Strong Soundness

Subhash Khot, NYU
Wednesday, March 16, 2011
4:00 PM – 5:30 PM
No recording available

Description
We show that for any large integer q, it is NP-hard to distinguish whether a two prover one round game with q^6 answers has value close to 1 or at most O(1/q). The result is obtained by combining two new techniques: (i) An Inner PCP based on the "point versus subspace" test for linear functions. The test is analyzed Fourier analytically. (ii) The Outer/Inner PCP composition that relies on a certain "sub-code covering" property for Hadamard codes.
As an application, we show that unless NP has quasi-polynomial time deterministic algorithms, the Quadratic Programming Problem is inapproximable within factor (\log n)^{1/6 - o(1)}.
The talk should be self-contained.
Joint work with Muli Safra.

Statistical Physics, Interpolation Method and Scaling Limits in Sparse Random Graphs

David D. Gamarnik, MIT
Wednesday, March 2, 2011
4:00 PM – 5:30 PM
View recording here

Description
Statistical physics has provided powerful insights into the theory of combinatorial structures and algorithms. For example, the success of certain counting algorithms is well known to be linked with the phase transition properties of the underlying combinatorial model. Recently, however, a powerful new method, based on the interpolation idea, was developed in the statistical physics of spin glasses. Among other things, the interpolation method was used to prove the existence of the so-called free energy thermodynamic limits for several spin glass models. In this talk we will discuss applications of the interpolation method in the context of combinatorial optimization problems on sparse random graphs. Using this method we establish the existence of scaling limits for a variety of combinatorial problems, including Maximum Independent Set, MAX-CUT, Coloring, K-SAT, and related problems. For these models, we show that the optimal values, appropriately normalized, converge to a limit with high probability as the size of the underlying graph diverges to infinity. For the case of the Independent Set model, this resolves a problem which was proposed by David Aldous in 2000 as one of his six favorite open problems.
The talk will be completely self-contained. No background on the theory of random graphs or statistical physics is necessary.
Joint work with Mohsen Bayati (Stanford) and Prasad Tetali (Georgia Tech).

Biography
David Gamarnik is an associate professor of operations research at the Sloan School of Management of Massachusetts Institute of Technology.
He received a B.A. in mathematics from New York University in 1993 and a Ph.D. in Operations Research from MIT in 1998. He then worked in the Department of Mathematical Sciences at IBM Research before joining MIT in 2005.
His research interests include applied probability and stochastic processes, theory of random graphs and algorithms, combinatorial optimization, statistical learning theory and various applications. He is a recipient of the Erlang Prize from the INFORMS Applied Probability Society, IBM Faculty Partnership Award and several NSF sponsored grants.
He serves as an associate editor of the Annals of Applied Probability, Operations Research, Mathematics of Operations Research and Queueing Systems journals.

On the Fourier Spectrum of Symmetric Boolean Functions

Amir Shpilka, Technion
Wednesday, Feb 23, 2011
4:00 PM – 5:30 PM
View recording here

Description
It is well-known that any Boolean function f:{-1,+1}^n \to {-1,+1} can be written uniquely as a polynomial f(x) = \sum_{S subset [n]} f_S \prod_{i in S} x_i. The collection of coefficients (the f_S's) in this expression is referred to (with good reason) as the Fourier spectrum of f. The Fourier spectrum has played a central role in modern computer science by converting combinatorial and algorithmic questions about f into algebraic or analytic questions about the spectrum.
In this talk I will focus on a basic feature of the Fourier spectrum, namely the minimal Fourier degree, or the size of the smallest non-empty set S such that f_S is non-zero. For every symmetric function *except the parity function* we show that the minimal Fourier degree is at most O(Gamma(n)), where Gamma(m) < m^{0.525} is the largest gap between consecutive prime numbers in {1,...,m}. This improves the previous result of Kolountzakis et al. (Combinatorica '09), who showed that the minimal Fourier degree is at most k/log k.
As an application we obtain a new analysis of the PAC learning algorithm for symmetric juntas, under the uniform distribution, of Mossel et al. (STOC '03).
This is a joint work with Avishay Tal.
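A small worked example of the minimal Fourier degree (standard background, not from the talk): the majority function on three bits has the expansion

    MAJ(x_1, x_2, x_3) = (1/2)(x_1 + x_2 + x_3) - (1/2) x_1 x_2 x_3,

so its minimal Fourier degree is 1, whereas the excluded parity function x_1 x_2 ... x_n has all of its Fourier weight on the single set S = [n] and therefore minimal Fourier degree n.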

Biography
Amir Shpilka is an associate professor of computer science at the Technion, currently on leave at Microsoft Research NE. He received his Ph.D. in 2001 under the supervision of Avi Wigderson at the Hebrew University. He spent his postdoc years at Weizmann, Harvard and MIT. His research area is computational complexity and he is mostly interested in studying arithmetic circuits and in understanding properties of polynomials.

“We Will be Right With You”: Managing Customers' Expectations with Vague Promises and Cheap Talk

Gad Allon, Kellogg School of Management at Northwestern University
Tuesday, January 25, 2011
4:00 PM – 5:30 PM
View recording here

Description
Delay announcements informing customers about anticipated service delays are prevalent in service-oriented systems. How to use delay announcements to manage the service system in an efficient manner is a complex problem which depends on both the dynamics of the underlying queueing system and on the customer behavior. We examine this problem of information communication by considering a model in which both the firm and the customers act strategically: the firm in choosing its delay announcement while anticipating customer response, and the customers in interpreting these announcements and in making the decision about when to join the system and when to balk. We characterize the equilibrium language that emerges between the service provider and her customers. The analysis of the emerging equilibria provides new and interesting insights into customer-firm information sharing. We show that even though the information provided to customers is non-verifiable and non-credible, it improves the profits of the firm and the expected utility of the customers. Further, the information could be as simple as "High Congestion"/"Low Congestion" announcements, or could be as detailed as the true state of the system. We also show that firms may choose to shade some of the truth by using intentional vagueness to lure customers.

Biography
Gad Allon is an associate professor of managerial economics and decision science at the Kellogg School of Management at Northwestern University. He received his PhD in Management Science from Columbia Business School in New York and holds bachelor's and master's degrees from the Israel Institute of Technology. His research interests include operations management in general, and service operations and operations strategy in particular.

Discrete Differentiation and Local Rigidity of Cuts

James R Lee, U of Washington
Wednesday, Dec 8, 2010
4:00 PM – 5:30 PM
No recording available

Description
Global rigidity of near-isoperimetric cuts in the hypercube (via theorems of Kahn-Kalai-Linial and Friedgut) has played a fundamental role in complexity theory and approximation algorithms. Recently, local rigidity of cuts in graphs has become important in deepening our understanding of the Sparsest Cut problem. I will talk about this phenomenon, present some elementary examples (which nevertheless are able to resolve some interesting open problems), and then discuss how the theory is close to resolving the long-standing problem of determining the integrality gap of the Goemans-Linial SDP. The technical part of the talk will revolve around an example, constructed with A. Sidiropoulos, of a doubling metric space whose finite subsets are as far from L_1 as possible, in the sense of bi-Lipschitz distortion. The analysis will involve an interesting use of randomness to connect discrete, quantitative bounds to classical continuous studies in integral geometry.

Biography
James R. Lee is an Assistant Professor of Computer Science at the University of Washington. He is generally interested in algorithms and complexity theory, and in particular the interaction of these fields with analysis, geometry, and probability. After graduating from Berkeley in 2005, Lee spent a year at the Institute for Advanced Study before joining the faculty at UW, where he has received an NSF CAREER Award and a Sloan Research Fellowship.

Converting any Algorithm into an Incentive-Compatible Mechanism

Robert Kleinberg, Cornell
Wednesday, Dec 1, 2010
View recording here

Description
Does algorithm design become harder in the presence of incentive constraints? The theory of algorithmic mechanism design is largely founded on the presumption that the answer is affirmative, a presumption that has been rigorously confirmed under various interpretations of the question. This is unfortunate, since it would be very convenient if there existed generic procedures to convert any algorithm into an incentive-compatible mechanism with little or no computational overhead.
In this talk I will identify two broad settings in which such generic procedures exist. First, I will present a reduction that removes the computational overhead of computing payments in truthful mechanisms: it transforms any monotone allocation function into a randomized, truthful-in-expectation mechanism that evaluates the allocation function only once. Second, I will present a polynomial-time reduction transforming an arbitrary algorithm into a Bayesian incentive-compatible mechanism, given a suitable amount of information about the type distributions of agents.
The first result is joint work with Moshe Babaioff and Alex Slivkins; the second result is joint work with Jason Hartline and Azarakhsh Malekian.

Biography
Robert Kleinberg is an Assistant Professor of Computer Science at Cornell University. His research studies the design and analysis of algorithms, and their applications to electronic commerce, networking, information retrieval, and other areas. Prior to receiving his doctorate from MIT in 2005, Kleinberg spent three years at Akamai Technologies, where he assisted in designing the world's largest Internet Content Delivery Network. He is the recipient of a Microsoft Research New Faculty Fellowship, an Alfred P. Sloan Foundation Fellowship, and an NSF CAREER Award.

The Many Entropies of One-Way Functions

Omer Reingold, Microsoft & Weizmann
Wednesday, Nov 17, 2010
View recording here

Description
One-way functions are the most basic, unstructured form of cryptographic hardness. Yet in a sequence of celebrated results (mostly in the eighties and early nineties), one-way functions were shown to imply a rich collection of cryptographic schemes and protocols (such as digital signatures and secret-key encryption). At the basis of this beautiful mathematical structure are a few constructions of basic primitives: pseudorandom generators [Hastad-Impagliazzo-Levin-Luby ‘91], universal one-way hash functions [Naor-Yung ‘89, Rompel ‘90], and more recently statistically hiding commitments and statistical zero-knowledge arguments [Haitner-Nguyen-Ong-Reingold-Vadhan ‘06 & ‘07]. In all three cases, turning raw hardness into a much more exploitable cryptographic object requires some very elaborate constructions and proofs. In this talk we will try to hint at how one-way functions naturally contain a basic form of each of these objects. The talk will be influenced by a recent line of results, simplifying and improving all of these constructions. The crux of each new construction is defining the “right” notion of computational entropy and recovering this form of entropy from one-way functions. Based on several works with (subsets of) Iftach Haitner, Thomas Holenstein, Salil Vadhan and Hoeteck Wee.

Biography
Omer Reingold is a Principal Researcher at Microsoft Research Silicon Valley and a Computer Science Professor at the Weizmann Institute of Science. His research is in the Foundations of Computer Science, mainly in Computational Complexity and Foundations of Cryptography. He received the 2005 Grace Murray Hopper Award and co-won the 2009 Gödel Prize.

Rank/Sparsity Minimization And Latent Variable Graphical Model Selection

Pablo A. Parrilo, MIT
Wednesday, Nov 10, 2010
4:00 PM – 5:30 PM
No recording available

Description
Suppose we have a Gaussian graphical model with sample observations of only a subset of the variables. Can we separate the extra correlations induced due to marginalization over the unobserved, hidden variables from the structure among the observed variables? In other words, is it still possible to consistently perform model selection despite the latent variables? As we shall see, the key issue here is to decompose the concentration matrix of the observed variables into a sparse matrix (representing graphical model structure among the observed variables) and a low-rank matrix (representing the effects of marginalization over the hidden variables). This estimator is given by a tractable convex program, and it consistently estimates model structure in the high-dimensional regime in which the number of observed/hidden variables grow with the number of samples of the observed variables. In our analysis the algebraic varieties of sparse matrices and low-rank matrices play an important role. Joint work with Venkat Chandrasekaran and Alan Willsky (MIT).
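Written out, the tractable convex program referred to above has the following regularized maximum-likelihood form (this is my reading of the sparse-plus-low-rank setup; the exact formulation in the talk may differ): with Sigma_hat the sample covariance of the observed variables, solve

    minimize over (S, L):   -log det(S - L) + trace((S - L) Sigma_hat) + lambda_n * (gamma * ||S||_1 + trace(L))
    subject to:             S - L positive definite,  L positive semidefinite,

so that S estimates the sparse concentration matrix among the observed variables and L captures the low-rank effect of marginalizing over the hidden ones.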

Biography
Pablo A. Parrilo received an Electronics Engineering undergraduate degree from the University of Buenos Aires, and a Ph.D. in Control and Dynamical Systems from the California Institute of Technology. He has held short-term visiting appointments at the University of California at Santa Barbara (Physics), Lund Institute of Technology (Automatic Control), and UC Berkeley (Mathematics). From October 2001 through September 2004, he was Assistant Professor of Analysis and Control Systems at the Automatic Control Laboratory of the Swiss Federal Institute of Technology (ETH Zurich). He is currently a Professor at the Department of Electrical Engineering and Computer Science of the Massachusetts Institute of Technology, where he is also affiliated with the Laboratory for Information and Decision Systems (LIDS) and the Operations Research Center (ORC). Prof. Parrilo is the recipient of the 2005 Donald P. Eckman Award of the American Automatic Control Council, as well as the 2005 SIAM Activity Group on Control and Systems Theory (SIAG/CST) Prize. He was also a finalist for the Tucker Prize of the Mathematical Programming Society for the years 2000-2003. He is currently on the Board of Directors of the Foundations of Computational Mathematics (FoCM) society, and a member of the Editorial Board of the MOS/SIAM Book Series on Optimization. His research interests include optimization and game theory methods for engineering applications, control and identification of uncertain complex systems, robustness analysis and synthesis, and the development and application of computational tools based on convex optimization and algorithmic algebra to practically relevant engineering problems.

Analytical Tools for Natural Algorithms

Bernard Chazelle, Princeton
Wednesday, Nov 3, 2010
View recording here

Description
I will discuss the merits of an algorithmic approach to the analysis of complex self-organizing systems. Specifically, I will show how a new analytical tool, the "total s-energy," grants us a unique perspective on multiagent dynamics. I will mention applications of this and other techniques to classical agreement systems, from bird flocking to firefly synchronization to opinion dynamics.

Biography
Bernard Chazelle is the Eugene Higgins Professor of Computer Science at Princeton University, where he has been on the faculty since 1986. He has held research and faculty positions at Carnegie-Mellon University, Brown University, Ecole Polytechnique, Ecole normale superieure, the University of Paris, and INRIA. He did extensive consulting for Xerox PARC, DEC SRC, and NEC Research, where he presided over the Board of Fellows for several years. He received his Ph.D. in computer science from Yale University in 1980. He is the author of the book "The Discrepancy Method." He is a fellow of the American Academy of Arts and Sciences, the European Academy of Sciences, and the World Innovation Foundation. He is an ACM Fellow and a former Guggenheim fellow.

Conservative Rationalizability And The Second Knowledge Mechanism

Silvio Micali, MIT
Wednesday, Oct 27, 2010
4:00 PM – 5:30 PM
No recording available

Description
Assuming a Bayesian setting is the traditional way to model uncertainty in settings of incomplete information. This assumption, however, is very strong, and may not hold in many real applications. Accordingly, we put forward:
(1) a set-theoretic way to model the knowledge that a player might have about his opponents, together with
(2) a new class of mechanisms, not equilibrium-based, that are capable of leveraging such more conservative knowledge in a robust way.
In auctions of a single good, we show that one such mechanism can perfectly guarantee a revenue benchmark (always in between the second highest and the highest valuation) that no classical mechanism can even approximate in any robust way.
Joint work with Jing Chen.

Biography
Silvio Micali received his Laurea in Mathematics from the University of Rome in 1978, and his Ph.D. in Computer Science from the University of California at Berkeley in 1983. He joined MIT's faculty in 1983, where he is the co-founder and co-leader of the Information and Computer Security Group. Silvio’s long-time research interests are cryptography, zero knowledge, pseudo-random generation, and secure protocols. His latest scientific interest is cryptographic game theory. Silvio is the recipient of the Gödel Prize (in theoretical computer science) and the RSA Prize (in cryptography). He is a member of the National Academy of Sciences, the National Academy of Engineering, and the American Academy of Arts and Sciences.

Phase Transitions and Computation

Allan Sly, Microsoft Research
Wednesday, Oct 6, 2010
4:00 PM – 5:30 PM
View recording here

Description
The last decade has seen a growing number of connections between statistical physics phase transitions and the theory of computation. Techniques from spin glasses have transformed the understanding of random constraint satisfaction problems while phase transitions play the central role in the efficiency of a wide class of MCMC algorithms. I will survey recent developments in these areas and describe new results on the complexity of counting independent sets.

Biography
Allan Sly is a postdoc in the theory group of Microsoft Research, Redmond. He received his PhD in statistics at UC Berkeley in 2009 under the supervision of Elchanan Mossel. Allan's research interests lie in probability theory and its applications in theoretical computer science, statistical physics, statistics and computational biology.

Rank Bounds for Design Matrices with Applications to Combinatorial Geometry and Locally Correctable Codes

Zeev Dvir, Princeton
Wednesday, Sept 29, 2010
4:00 PM – 5:30 PM

Description
A (q,k,t)-design matrix is an m by n matrix whose pattern of zeros/non-zeros satisfies the following design-like condition: each row has at most q non-zeros, each column has at least k non-zeros, and the supports of every two columns intersect in at most t rows. Our main theorem is a lower bound of n - (qtn/2k)^2 on the rank of such matrices over fields of characteristic zero (or sufficiently large finite characteristic). This result is motivated by questions regarding matrix rigidity and locally correctable codes. Using this theorem we derive the following three applications:
(1) Our first application is to linear 2-query locally correctable codes. These are error correcting codes in which each codeword coordinate can be recovered, probabilistically, by reading at most two other code positions. Over finite fields of small characteristic constructions of such codes with exponential encoding length are known. We show, using our rank theorem, that these codes do not exist over fields of large characteristic or characteristic zero regardless of the encoding length. This theorem comes as an extension of the following result in combinatorial geometry:
(2) We prove a quantitative analog of the Sylvester-Gallai theorem: let v_1,...,v_m be a set of points in C^d such that for every i in [m] there exist at least delta * m values of j in [m] such that the line through v_i,v_j contains a third point in the set. We show that in this case the dimension of the set is at most O(1/delta^2). The only case that was studied before is when delta is very close to one.
(3) Another variant of the Sylvester-Gallai theorem that we strengthen is the Motzkin-Rabin theorem. In this variant the points are each colored either red or blue. The assumption is that for every blue (red) point there is a delta-fraction of the blue (red) points such that the line connecting them contains a point of the opposite color. Our theorem gives a bound of O(1/delta^4) on the dimension of the set of points in this case.
Joint work with Boaz Barak, Avi Wigderson and Amir Yehudayoff

Biography
Zeev Dvir is currently a postdoc in the Department of Computer Science at Princeton University. He received his Ph.D. from the Weizmann Institute in Israel in 2008 under the guidance of Ran Raz and Amir Shpilka. His main field of research is computational complexity. In particular, he is interested in circuit complexity, de-randomization and coding theory.

The Computational Complexity of Linear Optics

Scott Aaronson, MIT
Wednesday, Sept 22, 2010
4:00 PM – 5:30 PM
No recording available

Description
I'll discuss a linear-optics experiment that might be feasible with current technology, and argue that, if the experiment succeeded, it would provide evidence that at least some nontrivial quantum computation is possible in nature. The resources that we consider are not known or believed to be universal for quantum computation; nevertheless, we show that they would allow the solution of certain sampling and search problems that seem to be intractable for classical computers. Joint work with Alex Arkhipov.

Biography
Scott Aaronson is an Associate Professor of Electrical Engineering and Computer Science at MIT. He did his undergraduate work at Cornell and his graduate work at UC Berkeley, and was a postdoc at the Institute for Advanced Study in Princeton, as well as the Institute for Quantum Computing at the University of Waterloo. His research interests center around computational complexity theory, and especially the fundamental capabilities and limitations of quantum computers -- as he puts it, "what we can't do with computers we don't have." He is known for creating the Complexity Zoo (www.complexityzoo.com), as well as writing a popular blog (www.scottaaronson.com/blog).

Price of Anarchy in Adword Auctions

Éva Tardos, Cornell
Wednesday, Sept 15, 2010
4:00 PM – 5:30 PM

Description
In this talk we consider the quality of equilibria in the Generalized Second Price Auction, a simple model of auctions widely used in the newly developing markets for search advertising. It is known that in the full information model, the Generalized Second Price Auction has socially optimal Nash equilibria (i.e., the Price of Stability is 1), but not all equilibria are optimal. Even worse, in the Bayesian setting socially optimal Nash equilibria may not exist. In this talk we will show that under some mild assumptions, the price of anarchy of this game is small both in the full information and in the Bayesian settings. The results are joint work with Renato Paes Leme.
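For completeness, the mechanism itself in its simplest form (standard background, not a result of the talk): bidders submit per-click bids, the i-th highest bidder is assigned the i-th slot, and she pays the (i+1)-st highest bid each time her ad is clicked. With click-through rates alpha_1 >= alpha_2 >= ... for the slots, a bidder with value v per click placed in slot i obtains utility

    alpha_i * (v - b_(i+1)),

where b_(i+1) denotes the next-highest bid.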

Biography
Eva Tardos is a Jacob Gould Schurman Professor of Computer Science at Cornell University, and was department chair 2006-2010. She received her PhD from Eotvos University, Budapest. Her research interest is algorithms and algorithmic game theory, the subarea of theory concerned with designing systems and algorithms for selfish users. Her research focuses on algorithms and games on networks. She has been elected to the National Academy of Engineering and the American Academy of Arts and Sciences, and is the recipient of numerous fellowships and awards, including the Packard Fellowship, the Fulkerson Prize and the Dantzig Prize. She is an editor of several journals, including the Journal of the ACM and Combinatorica.

In Search of a Thin Tree: New Approximation Algorithms for the Asymmetric Traveling Salesman Problem

Amin Saberi, Stanford
Wednesday, Sept 8, 2010
4:00 PM – 5:30 PM
No recording available

Description
I will talk about new approximation algorithms for the Asymmetric Traveling Salesman Problem (ATSP) when the costs satisfy the triangle inequality. Our approach is based on constructing a 'thin' spanning tree from the solution of a classical linear programming relaxation of the problem and augmenting the tree to an Eulerian graph. The existence of such trees was conjectured by Goddyn in the context of nowhere-zero flows. We obtain an O(log n / log log n) approximation algorithm by proving a weaker form of his conjecture. The algorithm uses a new rounding method that is based on sampling from a maximum entropy distribution. I will also talk about a special case where the underlying undirected graph of the LP relaxation of the problem has bounded genus. This is the case, for example, when the distance functions are shortest paths in a city with few bridges and underpasses. We give a constant factor approximation algorithm in that case. The focus of the talk will be on the underlying techniques, especially the rounding by sampling method (developed jointly with A. Asadpour), and open questions. The first result is joint work with A. Asadpour, M. Goemans, A. Madry and S. Oveis Gharan, and the second result is joint work with S. Oveis Gharan.

Biography
Amin Saberi received his B.Sc. in Computer Science from Sharif University of Technology in 2000, and his Ph.D. in Computer Science from Georgia Institute of Technology in 2004. After a short postdoc at Microsoft Research, he joined Stanford University in January 2005. His research interests include algorithms, and algorithmic aspects of games, markets, and information networks. He has received an NSF CAREER Award, a Sloan Fellowship, and several best paper awards. In his spare time, he rediscovers the virtues of idleness.

Cascades in Networks and Aggregate Volatility

Daron Acemoglu, MIT
Wednesday, Aug 25, 2010
4:00 PM – 5:30 PM
No recording available

Description
We provide a general framework for the study of cascade effects created by interconnections between sectors, firms or financial institutions. Focusing on a multi-sector economy linked through a supply network, we show how structural properties of the supply network determine both whether aggregate volatility disappears in large economies (i.e., whether the law of large numbers holds) and when it does, the rate at which this happens. Our main results characterize the relationship between first-order interconnections (captured by the weighted degree sequence in the graph induced by the supply network) and aggregate volatility, and more importantly, the relationship between higher-order interconnections and aggregate volatility. These higher-order interconnections capture the cascade effects, whereby low productivity or the failure of a set of suppliers propagates through the rest of the economy as their downstream sectors/firms also suffer and transmit the negative shock to their downstream sectors/firms. We also link the probabilities of tail events (large negative deviations of aggregate output from its mean) to sector-specific volatility and to the structural properties of the supply network. Authors: Daron Acemoglu (MIT Economics), Asuman Ozdaglar (MIT EECS) and Alireza Tahbaz-Salehi (MIT EECS)
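
As a rough guide to the objects involved (a stylized summary we add for orientation, not a quote from the paper): to first order, aggregate output in such models is a weighted sum of sector-level shocks, $y = \sum_i v_i \epsilon_i$, where the influence vector $v$ is determined by the supply network. The standard deviation of $y$ then scales with $\|v\|_2$. In a balanced economy with $v_i \approx 1/n$ this decays like $1/\sqrt{n}$, so the law of large numbers holds at the usual rate, whereas a network with a few dominant suppliers can make $\|v\|_2$ decay much more slowly, so that aggregate volatility persists even in large economies.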

Biography
Daron Acemoglu is the Charles P. Kindleberger Professor of Applied Economics in the Department of Economics at the Massachusetts Institute of Technology and a member of the Economic Growth program of the Canadian Institute for Advanced Research. He is also affiliated with the National Bureau of Economic Research, the Center for Economic Performance, the Centre for Economic Policy Research, and Microsoft Research. He is an elected fellow of the American Academy of Arts and Sciences, the Econometric Society, the European Economic Association, and the Society of Labor Economists. He received a BA in economics from the University of York in 1989, an M.Sc. in mathematical economics and econometrics from the London School of Economics in 1990, and a Ph.D. in economics from the London School of Economics in 1992. Since 1993 he has held the academic positions of Lecturer at the London School of Economics, and of Assistant Professor, Pentti Kouri Associate Professor, and Professor of Economics at MIT. He has received numerous awards and fellowships, including the award for the best paper published in the Economic Journal in 1996 for his paper "Consumer Confidence and Rational Expectations: Are Agents' Beliefs Consistent With the Theory?", the inaugural T. W. Schultz Prize from the University of Chicago in 2004, the inaugural Sherwin Rosen Award for outstanding contribution to labor economics in 2004, the Distinguished Science Award from the Turkish Sciences Association in 2006, and the John von Neumann Award from Rajk College, Budapest, in 2007. He was also awarded the John Bates Clark Medal in 2005, given every two years by the American Economic Association to the best economist in the United States under the age of 40, and holds an Honorary Doctorate from the University of Utrecht. His work has been published in leading scholarly journals, including the American Economic Review, Econometrica, the Journal of Political Economy, the Quarterly Journal of Economics, the Review of Economic Studies, and Mathematics of Operations Research. Daron Acemoglu's research covers a wide range of areas within economics, including political economy, economic development and growth, human capital theory, growth theory, innovation, search theory, network economics and learning. He is also a co-editor of Econometrica and of the NBER Macroeconomics Annual.

Perfect Matchings in Regular Bipartite Graphs

Sanjeev Khanna, Penn
Wednesday, Aug 18, 2010
4:00 PM – 5:30 PM
No recording available

Description
The problem of finding a perfect matching in a regular bipartite graph is a classical one, with applications to edge-colorings, routing, and scheduling, and it is closely related to the Birkhoff-von Neumann decomposition of doubly stochastic matrices. The first non-trivial algorithm for this problem dates back to König's work in 1916, which gives an $O(mn)$ time algorithm; here $n$ and $m$ denote the number of vertices and edges in the graph, respectively. A sequence of improvements over the years has culminated in a linear-time (i.e., $O(m)$ time) algorithm by Cole, Ost, and Schirra. This talk considers the question of whether a perfect matching can be computed in sub-linear time in a regular bipartite graph. We show that using randomization, one can obtain surprisingly efficient sub-linear time algorithms for this problem. In contrast, we show that no deterministic algorithm can beat the $O(m)$ running time bound. This is based on joint work with Ashish Goel and Michael Kapralov at Stanford University.
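
One standard way to see the connection to Birkhoff-von Neumann mentioned above (stated here for orientation, not as part of the talk): if $A$ is the biadjacency matrix of a $d$-regular bipartite graph, then $A/d$ is doubly stochastic, so the Birkhoff-von Neumann theorem gives $A/d = \sum_k \lambda_k P_k$ for permutation matrices $P_k$ and weights $\lambda_k \ge 0$ summing to 1. Every $P_k$ with $\lambda_k > 0$ is supported on edges of the graph and hence encodes a perfect matching, so computing one perfect matching amounts to extracting a single permutation matrix from this decomposition.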

Biography
Sanjeev Khanna is a Professor of Computer and Information Science and a Rosenbluth Faculty Fellow at the University of Pennsylvania. He received a Ph.D. in Computer Science from Stanford University (1996), and undergraduate degrees in Computer Science and Economics from Birla Institute of Technology, India (1990). From 1996 to 1999, he was a member of the Mathematical Sciences Research Center at Bell Labs. He joined the University of Pennsylvania in 1999. Sanjeev's research interests are in algorithms and complexity with a focus on approximation algorithms and hardness of approximation. He is a recipient of an Arthur Samuel dissertation award, a Sloan Fellowship, and a Guggenheim Fellowship.

Optimal Auctions with Budget Constraints

Rakesh Vohra, Kellogg School of Management, Northwestern
Wednesday, Aug 11, 2010
4:00 PM – 5:30 PM
View recording here

Description
We consider an environment where potential buyers of an indivisible good have liquidity constraints, in that they cannot pay more than their 'budget' regardless of their valuation. A buyer's valuation for the good as well as her budget are her private information. We derive constrained-efficient and revenue-maximizing auctions for this setting. In general, the optimal auction requires 'pooling' both at the top and in the middle, despite the maintained assumption of a monotone hazard rate. Further, the auctioneer will never find it desirable to offer lump-sum subsidies to bidders with low budgets. On a technical note, our analysis is based on the 'reduced form' representation of auctions, which enables one to exploit a polymatroid representation of auctions. This polymatroid representation is useful in other applications, which, time permitting, will be outlined.
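
For orientation (our gloss on which formulation is meant, not a statement from the abstract): in the symmetric case with $n$ i.i.d. bidders whose values are drawn from a distribution $F$, a reduced-form (interim) allocation rule $q$ is implementable by some auction exactly when Border's condition holds, $n \int_{\hat v}^{\bar v} q(v)\,dF(v) \le 1 - F(\hat v)^n$ for every threshold $\hat v$: the probability that the good goes to a bidder with value above $\hat v$ cannot exceed the probability that such a bidder exists. Constraints of this type define a polymatroid, which is the kind of structure the abstract alludes to.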

Biography
Rakesh Vohra was educated at University College London, the LSE and the University of Maryland. He is currently the John L. & Helen Kellogg Professor of Managerial Economics at the Kellogg School of Management as well as Director of the Center for Mathematical Studies in Economics and Management Science. His research interests are in pricing, auction theory and game theory.

Invariance Principles in Theoretical Computer Science

Ryan O'Donnell, CMU
Wednesday, Aug 4, 2010
4:00 PM – 5:30 PM
View recording here

Description
In this talk I will give proofs of some "invariance principles" in probability: the Central Limit Theorem, the Berry–Esseen Theorem, and multidimensional and higher-degree versions thereof. I will discuss these proofs from a computer science perspective, and show some applications to fields such as property testing, derandomization, learning, and inapproximability.
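
For reference, one standard statement of the second result (the i.i.d. Berry–Esseen theorem, included here for context): if $X_1,\dots,X_n$ are i.i.d. with mean 0, variance $\sigma^2$, and $E|X_1|^3 = \rho < \infty$, and $S_n = (X_1 + \cdots + X_n)/(\sigma\sqrt{n})$, then $\sup_x |\Pr[S_n \le x] - \Phi(x)| \le C\rho/(\sigma^3\sqrt{n})$ for an absolute constant $C$, where $\Phi$ is the standard Gaussian CDF. Roughly speaking, the higher-degree versions mentioned above replace the linear sum by low-degree polynomials of the $X_i$.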

Biography
Ryan O'Donnell received a B. Sc. from the University of Toronto in 1999 and a Ph.D. from the MIT Mathematics Department in 2003. His Ph.D. advisor was Madhu Sudan. Following this he was a postdoc at IAS for a year in Avi Wigderson’s group, and a postdoc at Microsoft Research for two years in Jennifer Chayes’s group. Since 2006 he has been an assistant professor in the Computer Science Department at Carnegie Mellon University. Ryan’s research interests include Analysis of Boolean Functions, Hardness of Approximation, Learning Theory, and Probability.

Spectral Sparsification of Graphs and Approximations of Matrices

Dan Spielman, Yale
Wednesday, July 28, 2010
4:00 PM – 5:30 PM
View recording here

Description
We introduce a notion of what it means for one graph to be a good spectral approximation of another. This motivates the problem of spectral sparsification: finding a sparse graph that is a good spectral approximation of a given graph. It turns out that Ramanujan expanders are the best sparse spectral approximations of complete graphs. We show that every graph may be approximated almost as well by a sparse graph as the complete graphs are by Ramanujan graphs. We also present an efficient randomized algorithm for constructing sparse approximations that uses only a logarithmic factor more edges than optimal. Our algorithms follow from the solution of a problem in linear algebra: given a rank-$n$ symmetric matrix $A$ that is expressed as a sum of rank-1 symmetric matrices, we show that $A$ can be well approximated by a weighted sum of only $O(n)$ of those rank-1 matrices. This is joint work with Joshua Batson, Nikhil Srivastava and Shang-Hua Teng.
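
One common way to make the approximation notion precise (a standard definition, added here for context): a graph $H$ is a $(1\pm\epsilon)$ spectral approximation of $G$ if $(1-\epsilon)\,x^T L_H x \le x^T L_G x \le (1+\epsilon)\,x^T L_H x$ for every vector $x$, where $L_G$ and $L_H$ are the graph Laplacians. Spectral sparsification asks for such an $H$ with few edges; under this definition, a Ramanujan expander with suitably scaled edge weights is a near-optimal sparse approximation of the complete graph, which is the benchmark the abstract refers to.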

Biography
Daniel Alan Spielman received his B.A. in Mathematics and Computer Science from Yale in 1992, and his Ph.D. in Applied Mathematics from M.I.T. in 1995. He spent a year as an NSF Mathematical Sciences Postdoctoral Fellow in the Computer Science Department at U.C. Berkeley, and then taught in the Applied Mathematics Department at M.I.T. until 2005. Since 2006, he has been a Professor of Applied Mathematics and Computer Science at Yale University. The awards he has received include the 1995 ACM Doctoral Dissertation Award, the 2002 IEEE Information Theory Paper Award, the 2008 Gödel Prize and the 2009 Fulkerson Prize. His main research interests include the design and analysis of algorithms, graph theory, machine learning, error-correcting codes and combinatorial scientific computing.

Past Speakers: Video Archive


Past Speakers: Roster

Spring/Summer of 2012

Fall of 2011

Spring/Summer of 2011

Fall of 2010