Microsoft Research Colloquium

The Microsoft Research Colloquium at Microsoft Research New England focuses on research in the foundational aspects of computer science, mathematics, economics, anthropology, and sociology. With an interdisciplinary flavor, this colloquium series features some of the foremost researchers in their fields talking about their research, breakthroughs, and advances.

The agenda typically consists of approximately 50 minutes of prepared presentation and brief Q&A, followed immediately by a reception to meet the speaker and ask more detailed questions. We welcome members of the local academic community to attend.

Upcoming Speakers

Please note: after December 3rd, the Microsoft Research Colloquium Series will be on hiatus until February 25th, 2015.

Elizabeth Pontikes, University of Chicago
Wednesday, February 25

Stephanie Dick, Harvard
Wednesday, March 4

Jessica Silbey, Suffolk
Wednesday, March 11

James Evans, University of Chicago
Wednesday, April 8

Past Speakers

Optimal Design for Social Learning

Johannes Horner, Yale
Wednesday, December 3

Watch the video

Description

We study the design of a recommender system for organizing social learning about a product. The optimal design trades off fully transparent social learning in order to improve incentives for early experimentation, by selectively over-recommending the product in the early phase of its release. Under the optimal scheme, experimentation occurs faster than under full transparency but slower than under the first-best optimum; the rate of experimentation increases over an initial phase and lasts until the posterior becomes sufficiently bad, at which point the recommendation stops along with experimentation on the product. Fully transparent recommendation may become optimal if the (socially benevolent) designer does not observe the agents’ costs or if the agents choose the timing of receiving a recommendation.

Biography
Johannes Hörner is Professor of Economics in the Department of Economics and the Cowles Foundation for Research in Economics at Yale University. He received his Ph.D. in economics from the University of Pennsylvania in 2000 and previously held a position at the Kellogg School of Management, Northwestern University (2000–2008).
His academic interests range from game theory to the theory of industrial organization. His research has focused on repeated games, dynamic games, and auctions.

Back to top

Physics-inspired algorithms and phase transitions in community detection

Cristopher Moore, Santa Fe Institute
Tuesday, November 18

Watch the video

Description
Detecting communities, and labeling nodes, is a ubiquitous problem in the study of networks. Recently, we developed scalable Belief Propagation algorithms that update probability distributions of node labels until they reach a fixed point. In addition to being of practical use, these algorithms can be studied analytically, revealing phase transitions in the ability of any algorithm to solve this problem. Specifically, there is a detectability transition in the stochastic block model, below which no algorithm can label nodes better than chance. This transition was subsequently established rigorously by Mossel, Neeman, and Sly, and Massoulie.
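For concreteness, the threshold from this line of work (quoted here as background, in the notation standard in the cited papers) can be stated for two equal-size groups with within- and between-group connection rates $c_{\rm in}$ and $c_{\rm out}$ and average degree $c$: detection is possible if and only if

$$|c_{\rm in} - c_{\rm out}| > 2\sqrt{c},$$

and below this threshold the sparse graph carries no information that any algorithm can use to label nodes better than chance.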
I'll explain this transition, and give an accessible introduction to Belief Propagation and the analogy with free energy and the cavity method of statistical physics. We'll see that the consensus of many good solutions is a better labeling than the "best" solution, something that is true for many real-world optimization problems. While many algorithms overfit, and find "communities" even in random graphs where none exist, our method lets us focus on statistically significant communities. In physical terms, we focus on the free energy rather than the ground state energy.
I'll then turn to spectral methods. It's popular to classify nodes according to the first few eigenvectors of the adjacency matrix or the graph Laplacian. However, in the sparse case these operators get confused by localized eigenvectors, focusing on high-degree nodes or dangling trees rather than large-scale communities. As a result, they fail significantly above the detectability transition. I will describe a new spectral algorithm based on the non-backtracking matrix, which avoids these localized eigenvectors: it appears to be optimal in the sense that it succeeds all the way down to the transition. Making this rigorous will require us to prove an interesting conjecture in the theory of random matrices and random graphs.
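As background, the non-backtracking matrix has a standard definition (not specific to this talk's new results): $B$ is indexed by directed edges of the graph, with

$$B_{(u \to v),(w \to x)} = \begin{cases} 1 & \text{if } v = w \text{ and } x \neq u, \\ 0 & \text{otherwise,} \end{cases}$$

so that walks counted by powers of $B$ never immediately retrace the edge they just traversed, which is what suppresses the localized eigenvectors mentioned above.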
This is joint work with Aurelien Decelle, Florent Krzakala, Elchanan Mossel, Joe Neeman, Mark Newman, Allan Sly, Lenka Zdeborova, and Pan Zhang.

Biography
Cristopher Moore is a Professor at the Santa Fe Institute. He received his B.A. in Physics, Mathematics, and Integrated Science from Northwestern University, and his Ph.D. in Physics from Cornell. In 2000, he joined the University of New Mexico faculty, with joint appointments in Computer Science, and Physics and Astronomy. In 2012, Moore left the University of New Mexico and became full-time resident faculty at the Santa Fe Institute. He has published over 120 papers at the boundary between physics and computer science, ranging from quantum computing, to phase transitions in NP-complete problems, to the theory of social networks and efficient algorithms for analyzing their structure. With Stephan Mertens, he is the author of The Nature of Computation, published by Oxford University Press.

Back to top

Mapping single cells: A geometric approach

Dana Pe'er, Columbia
Wednesday, November 5

Watch the video

Description
High-dimensional single-cell technologies are on the rise, rapidly increasing in accuracy and throughput. These offer computational biology both a challenge and an opportunity. One of the big challenges with this data type is to understand regions of density in this multi-dimensional space, given millions of noisy measurements. Underlying many of our approaches is mapping this high-dimensional geometry into a nearest-neighbor graph and characterizing single-cell behavior using this graph structure. We will discuss a number of approaches: (1) an algorithm that harnesses the nearest-neighbor graph to order cells according to their developmental maturity, and its use to identify novel progenitor B-cell sub-populations; (2) using reweighted density estimation to characterize cellular signal processing in T-cell activation; (3) new clustering and dimensionality reduction approaches to map heterogeneity between cells, with an application to characterizing tumor heterogeneity in Acute Myeloid Leukemia.
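As a rough illustration of the nearest-neighbor graph construction described above, here is a generic Python sketch (assuming scikit-learn is available; this is not the lab's actual pipeline):

import numpy as np
from sklearn.neighbors import kneighbors_graph

# Toy stand-in for single-cell measurements: 1,000 "cells", 30 markers each.
cells = np.random.rand(1000, 30)

# Connect each cell to its k nearest neighbors in marker space. The sparse
# adjacency matrix approximates the data's high-dimensional geometry and can
# be handed to graph algorithms (ordering by maturity, clustering, etc.).
knn = kneighbors_graph(cells, n_neighbors=15, mode="distance")
print(knn.shape, knn.nnz)  # (1000, 1000), ~15,000 nonzero entries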

Biography
Dana Pe’er is an associate professor in the Departments of Biological Sciences and Computer Science at Columbia University. Her lab endeavors to understand the organization, function, and evolution of molecular networks, particularly how variation in DNA sequence alters regulatory networks and leads to the vivid phenotypic diversity of life. Her team develops computational methods that integrate diverse high-throughput data to provide a holistic, systems-level view of molecular networks. She is particularly interested in exploring how systems biology can be used to personalize care for people with cancer. By developing models that can predict how individual tumors will respond to certain drugs and drug combinations, her goal is to develop ways to determine the best drug regime for each patient. Her interest is not only in understanding which molecular components go wrong in cancer cells, but also in using this information to improve cancer therapeutics.
Dr. Pe’er is the recipient of the 2014 Overton Prize, and has been recognized with the Burroughs Wellcome Fund Career Award, an NIH Directors New Innovator Award, an NSF CAREER Award, and a Stand Up To Cancer Innovative Research Grant. She was also named a Packard Fellow in Science and Engineering.

Back to top

Barack Obama and the politics of social media for national policy-making

James Katz, Boston University
Wednesday, October 15

Description
Social media help people do almost everything, ranging from meeting new friends and finding new restaurants to overthrowing dictatorships. This includes political campaigning; one need look no further than Barack Obama’s successful presidential campaigns to see how these communication technologies can alter the way politics is conducted. Yet social media have not had much impact on the setting of national policy as part of regular administrative routines. This is the case despite the fact that, since his election in 2008, President Obama has on several occasions proclaimed that he wanted his administration to draw on social media to make the federal government run better. While there have been some modifications to governmental procedures due to the introduction of social media, the Obama administration’s practices have fallen far short of its leader’s audacious vision. Despite voluminous attention to social media in other spheres of activity, there has been little to point to in terms of successfully drawing on the public to help set national policies. What might account for this? I try to answer this question in my talk by exploring the attempts by the Obama White House to use social media tools and the consequences arising from such attempts. I also suggest some potential reasons behind the particular uses and outcomes that have emerged in terms of presidential-level social media outreach. As part of my conclusion, I outline possible future directions.

Biography
James E. Katz, Ph.D., is the Feld Family Professor of Emerging Media at Boston University’s College of Communication where he directs its Center for Mobile Communication Studies and Division of Emerging Media. His research on the internet, social media and mobile communication has been internationally recognized, and he is frequently invited to address high-level industry, governmental and academic groups on his research findings. His latest book, with Barris and Jain, is The Social Media President: Barack Obama and the Politics of Citizen Engagement on which this talk is based.

Back to top

Cooperation on Social Networks

Nageeb Ali, UCSD
Wednesday, October 1 

Watch the video

Description
At most places, and at most times, cooperation takes place in the absence of legal or contractual enforcement. What motivates players to cooperate? A growing literature in the social sciences emphasizes the importance of future interactions and of social mechanisms by which defectors are punished both by their victims and by third parties. This perspective has, in recent years, influenced our understanding of contractual and lending relationships in developing economies, reputations in market platforms such as eBay, and even indirect reciprocity in theoretical biology. In this talk, I will describe how the nature and strength of these incentives vary with the social network, how a player may cooperate so as to preserve his reputation in a social network, and what guarantees that a victim of defection truthfully reveals to others that someone else has violated the social norm. We will see that dividing society into cliques and allowing a modicum of forgiveness can facilitate cooperation. We will also see that a common assumption in much of the literature on cooperation, that victims always reveal when someone else has defected, may be less innocuous than it seems.
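For readers who want the textbook benchmark behind "the importance of future interactions" (standard background, not a result of this talk): in a repeated prisoner's dilemma with temptation payoff $T$, cooperation payoff $R$, punishment payoff $P$, and discount factor $\delta$, grim-trigger cooperation is an equilibrium exactly when

$$\delta \ \ge\ \frac{T - R}{T - P}.$$

The talk asks how network structure changes the nature and strength of such incentives.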

Biography
S. Nageeb Ali is an assistant professor of economics at UCSD. He studies game-theoretic models of cooperation, social learning, political economy, and behavioral economics. He received his Ph.D. from Stanford University in 2007, and is a frequent Microsoft visitor.

Back to top

The origins of common sense: Modeling human intelligence with probabilistic programs and program induction

Joshua Tenenbaum, MIT
Wednesday, September 17 

Watch the video

Description
Our work seeks to understand the roots of human thinking by looking at the core cognitive capacities and learning mechanisms of young children and infants. We build computational models of these capacities with the twin goals of explaining human thought in more principled, rigorous "reverse engineering" terms, and engineering more human-like AI and machine learning systems. This talk will focus on two ways in which the intelligence of very young children goes beyond the conventional paradigms in machine learning: (1) scene understanding, where we detect not only objects and their locations, but also what is happening, what will happen next, and who is doing what to whom and why, in terms of our intuitive theories of physics (forces, masses) and psychology (beliefs, desires, ...); (2) learning concepts from examples, where just a single example is often sufficient to grasp a new concept and generalize in richer ways than machine learning systems can typically do even with hundreds or thousands of examples. I will show how we are beginning to capture these reasoning and learning abilities in computational terms using techniques based on probabilistic programs and program induction, embedded in a broadly Bayesian framework for inference under uncertainty.
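As a minimal illustration of what learning from a single example with a probabilistic program can look like, here is a toy Python sketch (an interval-concept model with importance sampling, invented for illustration; it is not the speaker's actual models):

import random

# Generative model: a hidden numeric "concept" is an interval [lo, hi].
def sample_concept():
    lo = random.uniform(0, 100)
    hi = random.uniform(lo, 100)
    return lo, hi

# Likelihood of one example under a concept: examples are drawn uniformly
# from the interval, so tighter intervals explain the example better
# (the "size principle" behind sharp one-shot generalization).
def likelihood(x, concept):
    lo, hi = concept
    width = hi - lo
    return 1.0 / width if (lo <= x <= hi and width > 0) else 0.0

# Importance sampling: weight prior draws by the likelihood of one example,
# then ask how probable it is that a new value falls under the concept.
def generalization(x_seen, x_new, n=20000):
    num = den = 0.0
    for _ in range(n):
        c = sample_concept()
        w = likelihood(x_seen, c)
        den += w
        if c[0] <= x_new <= c[1]:
            num += w
    return num / den

# One observed example (60) supports nearby values far more than distant ones.
print(generalization(60, 62), generalization(60, 90))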

Biography
Josh Tenenbaum studies learning, reasoning and perception in humans and machines, with the twin goals of understanding human intelligence in computational terms and bringing computers closer to human capacities. His current work focuses on building probabilistic models to explain how people come to be able to learn new concepts from very sparse data, how we learn to learn, and the nature and origins of people's intuitive theories about the physical and social worlds. He is Professor of Computational Cognitive Science in the Department of Brain and Cognitive Sciences at MIT, and is a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). He received his Ph.D. from MIT in 1999, and was a member of the Stanford University faculty in Psychology and (by courtesy) Computer Science from 1999 to 2002. His papers have received awards at numerous conferences, including CVPR (the IEEE Computer Vision and Pattern Recognition conference), ICDL (the International Conference on Learning and Development), NIPS, UAI, IJCAI and the Annual Conference of the Cognitive Science Society. He is the recipient of early career awards from the Society for Mathematical Psychology (2005), the Society of Experimental Psychologists, and the American Psychological Association (2008), and the Troland Research Award from the National Academy of Sciences (2011).

Back to top

Robust Probabilistic Inference

Yishay Mansour, MSR Israel
Wednesday, August 27 

Watch the video

Description
Probabilistic inference is the task of deducing the probability of various outcomes given a set of observations. This is a basic task in both statistics and machine learning.
Robust probabilistic inference is an extension of probabilistic inference in which some of the observations are adversarially corrupted. Examples where such a model may be relevant include spam detection, where spammers adversarially try to fool the spam detectors, and failure detection and correction, where the failure can be modeled as a “worst case” failure. The framework can also be used to model selection among a few alternative models that could plausibly generate the data.
Technically, we model robust probabilistic inference as a zero-sum game between an adversary, who can select a modification rule, and a predictor, who wants to accurately predict the state of nature. Our main result is an efficient, near-optimal algorithm for the robust probabilistic inference problem. More specifically, given black-box access to Bayesian inference in the classic (adversary-free) setting, our near-optimal policy runs in time polynomial in the number of observations and the number of possible modification rules.
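In symbols, one way to write the game sketched above (my paraphrase of the setup, not necessarily the paper's exact notation): with $s$ the state of nature, $o$ the observations it generates, $\rho$ the adversary's modification rule, and $\sigma$ the predictor's policy, the predictor seeks

$$\max_{\sigma}\ \min_{\rho}\ \mathbb{E}\big[\, u\big(\sigma(\rho(o)),\, s\big) \,\big],$$

where $u$ scores the prediction against the true state; the result above says this max-min policy can be computed in polynomial time given black-box access to standard Bayesian inference.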
This is a joint work with Aviad Rubinstein and Moshe Tennenholtz.

Biography
Prof. Yishay Mansour received his PhD from MIT in 1990, after which he was a postdoctoral fellow at Harvard and a Research Staff Member at the IBM T. J. Watson Research Center. Since 1992 he has been at Tel-Aviv University, where he is currently a Professor of Computer Science; he served as head of the School of Computer Science during 2000-2002. Prof. Mansour has held visiting positions with Bell Labs, AT&T Research Labs, IBM Research, and Google Research. He has published over 50 journal papers and over 100 proceedings papers in various areas of computer science, with special emphasis on communication networks, machine learning, and algorithmic game theory. Prof. Mansour is currently an associate editor of a number of distinguished journals and has served on numerous conference program committees. He was the program chair of COLT (1998) and served on the COLT steering committee. He has supervised over a dozen graduate students in various areas, including communication networks, machine learning, algorithmic game theory, and theory of computing.

Back to top

Nerves and Synapses - A General Preview

Michal Linial, Hebrew University
Wednesday, August 13

Watch the video

Description
My talk is a brief preview of neuroscience (pre-101..). I will share with you some of the brain’s mysteries and will illustrate the capacity of neurons to rewire and thus to learn (and forget). To do so, we will discuss (briefly) how neurons convey information, the principles underlying neuronal communication, and the fundamental rules of electrical and chemical messengers. We will discuss the uniformity and the variability of neurons that are involved in high brain functions (mathematics?) and of those that make sure that we quickly remove our finger from a hot plate. I will mention the capacity of the human brain vis-à-vis that of our cousins, the chimps, and other nervous systems. Is our brain really so different? (Probably so.) What makes us human? (I have no clue..) Why are we all fascinated by the brain? (Easy to demonstrate.) I will introduce you to synapses and describe classical and novel approaches to understanding the brain (or at least describing it better). Importantly, I will emphasize how essential it is to study the brain at different levels of resolution and by applying an interdisciplinary approach. I promise to pose more questions than answers...

Biography
Michal Linial is a Professor of Biochemistry at The Hebrew University, Jerusalem, Israel, and Director of the Israel Institute for Advanced Studies. She has published over 180 scientific papers, including book chapters and numerous reviews, on diverse topics in molecular biology, cell biology, bioinformatics, and neuroscience, and on the integration of tools to improve knowledge extraction. She heads a laboratory that is both experimental and computational, and she is the founder and leader of the first established educational program in Israel for Computer Science and Life Science (since 1999) for undergraduate and graduate studies. Her expertise in the synapse led her to the study of protein families and protein-protein interactions, with a global view of protein networks and their regulation. Molecular biology, cell biology, and biochemical methods are applied in all research initiated in her laboratory, and she and her group are developing new computational and technological tools for large-scale cell biology research. They apply mass-spectrometry-based and genomics (DNA chip) approaches to studying changes in neuronal development and to disease-oriented research. Solid informatics approaches are used for large database storage and the constant updating of several systems for classification, validation, and functional prediction. She and her students have been active participants in NIH structural genomics initiatives, including the task of target selection. Her group has created several global classification systems that are used by the biomedical and biology communities, most notably ProtoNet, EVEREST, PANDORA, miRror-Suite, and ClanTox, all of which are provided as open resources for investigators.

Back to top

Explore or Exploit? Reflections on an Ancient Dilemma in the Age of the Web

Robert Kleinberg, Cornell
Wednesday, August 6

Watch the video

Description
Learning and decision-making problems often boil down to a balancing act between exploring new possibilities and exploiting the best known one. For more than fifty years, the multi-armed bandit problem has been the predominant theoretical model for investigating these issues. The emergence of the Web as a platform for sequential experimentation at a massive scale is leading to shifts in our understanding of this fundamental problem as we confront new challenges and opportunities. I will present two recent pieces of work addressing these challenges. The first concerns the misalignment of incentives in systems, such as online product reviews and citizen science platforms, that depend on a large population of users to explore a space of options. The second concerns situations in which the learner's actions consume one or more limited-supply resources, as when a ticket seller experiments with prices for an event with limited seating.
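As background for readers new to the model, here is a minimal Python sketch of the classical multi-armed bandit setting and the explore/exploit tension it captures (a generic UCB1-style strategy, not the new settings discussed in the talk):

import math, random

# Each arm pays out 1 with an unknown probability; the learner must balance
# exploring uncertain arms against exploiting the best-looking one.
true_rates = [0.3, 0.5, 0.7]
counts = [0] * len(true_rates)
rewards = [0.0] * len(true_rates)

def ucb1(t):
    # Play each arm once, then pick the arm maximizing mean + exploration bonus.
    for a in range(len(true_rates)):
        if counts[a] == 0:
            return a
    return max(range(len(true_rates)),
               key=lambda a: rewards[a] / counts[a]
                             + math.sqrt(2 * math.log(t) / counts[a]))

for t in range(1, 10001):
    arm = ucb1(t)
    r = 1.0 if random.random() < true_rates[arm] else 0.0
    counts[arm] += 1
    rewards[arm] += r

print(counts)  # the best arm (rate 0.7) accumulates most of the pulls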

Biography
Robert Kleinberg is an Associate Professor of Computer Science at Cornell University. His research studies the design and analysis of algorithms, and their relations to economics, learning theory, and networks. Prior to receiving his doctorate from MIT in 2005, Kleinberg spent three years at Akamai Technologies, where he assisted in designing the world's largest Internet Content Delivery Network. He is the recipient of a Microsoft Research New Faculty Fellowship, an Alfred P. Sloan Foundation Fellowship, and an NSF CAREER Award.

Back to top

Visual Nearest Neighbor Search     

Shai Avidan, Tel-Aviv University
Wednesday, July 30
4:00 PM – 5:00 PM

Watch the video

Description
Template matching finds the best match in an image to a given template, and is used in a variety of computer vision applications. I will discuss several extensions to template matching: first, dealing with the case where we have millions of templates that must be matched at once; second, dealing with RGBD images, where depth information is available; and finally, presenting a fast algorithm for template matching under 2D affine transformations with global approximation guarantees.
Joint work with Simon Korman, Yaron Eshet, Eyal Ofek, Gilad Tsur and Daniel Reichman.
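For readers unfamiliar with the baseline operation, here is a minimal Python sketch of template matching by sum of squared differences (the textbook version, not the talk's large-scale or affine-invariant algorithms):

import numpy as np

def match_template(image, template):
    # Slide the template over the image and score each placement by the
    # sum of squared differences; lower is better.
    H, W = image.shape
    h, w = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            score = np.sum((image[y:y+h, x:x+w] - template) ** 2)
            if score < best:
                best, best_pos = score, (y, x)
    return best_pos

image = np.random.rand(64, 64)
template = image[20:30, 40:50].copy()   # plant a known patch
print(match_template(image, template))  # -> (20, 40)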

Biography
Shai Avidan is an Associate Professor in the School of Electrical Engineering at Tel-Aviv University, Israel. He earned his PhD at the Hebrew University, Jerusalem, Israel, in 1999. Later, he was a Postdoctoral Researcher at Microsoft Research, a Project Leader at MobilEye, a startup company developing camera-based driver-assistance systems, a Research Scientist at Mitsubishi Electric Research Labs (MERL), and a Senior Researcher at Adobe. He has published extensively in the fields of object tracking in video and 3-D object modeling from images. Recently, he has been working on Computational Photography. Dr. Avidan is an Associate Editor of PAMI and has served on the program committees of multiple conferences and workshops in the fields of Computer Vision and Computer Graphics.

Back to top

A Grand Gender Convergence: Its Last Chapter    

Claudia Goldin, Harvard
Wednesday, July 23
4:00 PM – 5:00 PM

Watch the video

Description
The converging roles of men and women are among the grandest advances in society and the economy in the last century. These aspects of the grand gender convergence are figurative chapters in a history of gender roles. But what must the “last” chapter contain for there to be equality in the labor market? The answer may come as a surprise. The solution does not (necessarily) have to involve government intervention and it need not make men more responsible in the home (although that wouldn’t hurt). But it must involve changes in the labor market, in particular how jobs are structured and remunerated to enhance temporal flexibility. The gender gap in pay would be considerably reduced and might vanish altogether if firms did not have an incentive to disproportionately reward individuals who labored long hours and worked particular hours. Such change has taken off in various sectors, such as technology, science and health, but is less apparent in the corporate, financial and legal worlds. 

Biography
Claudia Goldin is the Henry Lee Professor of Economics at Harvard University and director of the NBER’s Development of the American Economy program. Goldin is an economic historian and a labor economist. Her research has covered a wide array of topics, such as slavery, emancipation, the post-bellum south, women in the economy, the economic impact of war, immigration, New Deal policies, inequality, technological change, and education. Most of her research interprets the present through the lens of the past and explores the origins of current issues of concern. In the past several years her work has concerned the rise of mass education in the United States and its impact on economic growth and wage inequality. More recently she has focused her attention on college women’s achievement of career and family.
She is the author and editor of several books, among them Understanding the Gender Gap: An Economic History of American Women (Oxford 1990), The Regulated Economy: A Historical Approach to Political Economy (with G. Libecap; University of Chicago Press 1994), The Defining Moment: The Great Depression and the American Economy in the Twentieth Century (with M. Bordo and E. White; University of Chicago Press 1998), and Corruption and Reform: Lessons from America’s Economic History (with E. Glaeser; Chicago 2006). Her most recent book is The Race between Education and Technology (with L. Katz; The Belknap Press, 2008), winner of the 2008 R.R. Hawkins Award for the most outstanding scholarly work in all disciplines of the arts and sciences.
Goldin is best known for her historical work on women in the U.S. economy. Her most recent papers in that area have concerned the history of women’s quest for career and family, coeducation in higher education, the impact of the “pill” on women’s career and marriage decisions, women’s surnames after marriage as a social indicator, and the reasons why women are now the majority of undergraduates. She has recently embarked on a wide ranging project on the family and career transitions of male and female graduates of selective universities from the late 1960s to the present.
Goldin is the current president of the American Economic Association. In 2007 Goldin was elected a member of the National Academy of Sciences and was the Gilman Fellow of the American Academy of Political and Social Science. She is a fellow of the American Academy of Arts and Sciences, the Society of Labor Economists (SOLE), the Econometric Society, and the Cliometric Society. In 2009 SOLE awarded Goldin the Mincer Prize for life-time contributions to the field of labor economics. Goldin completed her term as the President of the Economic History Association in 2000. In 1991 she was elected Vice President of the American Economic Association. From 1984 to 1988 she was editor of the Journal of Economic History and is currently an associate editor of the Quarterly Journal of Economics and a member of various editorial boards. She is the recipient of various teaching awards. Goldin received her B.A. from Cornell University and her Ph.D. from the University of Chicago.

Back to top

Why Not Be Evil? The Costs and Benefits of Corporate Social Responsibility    

Siva Vaidhyanathan, University of Virginia
Wednesday, July 9
4:00 PM – 5:00 PM

Description
Corporate Social Responsibility (CSR) and its Silicon Valley cousin, Social Entrepreneurship, have a rich but recent history. This talk will briefly explore the roots of these schools of thought and practice and examine their rise through business-school curricula and scholarship in the late 20th Century. Why did they come about when they came about? What are their effects on the world? Do they affect consumer behavior and investor behavior? And to what ends? Most seriously, does the identification of a company with particular values or social goals have the effect of depoliticizing an otherwise democratic republic? 

Biography
Siva Vaidhyanathan is the Robertson Professor of Media Studies at the University of Virginia and the author, most recently, of The Googlization of Everything -- and Why We Should Worry (University of California Press, 2011).

Back to top

Rethinking Machine Learning In The 21St Century: From Optimization To Equilibration    

Sridhar Mahadevan, UMASS Amherst
Wednesday, June 11
4:00 PM – 5:00 PM

Watch the video

Description
The past two decades have seen machine learning (ML) transformed from an academic curiosity into a multi-billion dollar industry and a centerpiece of our economic, social, scientific, and security infrastructure. Much work in machine learning has drawn on research in optimization, motivated by large-scale applications requiring analysis of massive high-dimensional data. In this talk, I’ll argue that the growing importance of networked data environments, from the Internet to cloud computing, requires a fundamental rethinking of our basic analytic tools. My thesis will be that ML needs to shift from its current focus on optimization to equilibration, from modeling the world as uncertain but stationary and benign to one where the world is non-stationary, competitive, and potentially malicious. Adapting to this new world will require developing new ML frameworks and algorithms. My talk will introduce one such framework, equilibration using variational inequalities and projected dynamical systems, which not only generalizes optimization but is better suited to the distributed, networked, cloud-oriented future that ML faces. To explain this paradigm change, I’ll begin by summarizing the au courant optimization-based approach to ML using recent research in the Autonomous Learning Laboratory. I will then present an equilibration-based framework using variational inequalities and projected dynamical systems, which originated in mathematics for solving partial differential equations in physics, but has since been widely applied in its finite-dimensional formulation to network equilibrium problems in economics, transportation, and other areas. I’ll describe a range of algorithms for solving variational inequalities, showing that their scope allows ML to extend beyond optimization to finding game-theoretic equilibria, solving complementarity problems, and many other areas.
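For readers unfamiliar with the term, the finite-dimensional variational inequality is standard background (not new material from the talk): given a convex set $K \subseteq \mathbb{R}^n$ and a map $F : K \to \mathbb{R}^n$, the problem VI$(F, K)$ is to find $x^* \in K$ such that

$$\langle F(x^*),\, x - x^* \rangle \ \ge\ 0 \quad \text{for all } x \in K.$$

Minimizing a differentiable convex function $f$ over $K$ is the special case $F = \nabla f$, which is one precise sense in which equilibration generalizes optimization.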

Biography
Professor Sridhar Mahadevan directs the Graduate Program at the School of Computer Science at the University of Massachusetts, Amherst. He is a co-director of the Autonomous Learning Laboratory, one of the oldest academic research centers for machine learning in the US, which has graduated more than 30 doctoral students in its three-decade history and counts 3 AAAI Fellows among its alumni. The lab currently includes 14 PhD students, who work in a variety of areas in machine learning, including equilibration algorithms, optimization, reinforcement learning, and unsupervised learning.

Back to top

Do Neighborhoods Matter for Disadvantaged Families? Long-Term Evidence from the Moving to Opportunity Experiment    

Larry Katz, Harvard
Wednesday, May 21
4:00 PM – 5:00 PM

Watch the video

Description
We examine long-term neighborhood effects on low-income families using data from the Moving to Opportunity (MTO) randomized housing-mobility experiment, which offered some public-housing families, but not others, the chance to move to less-disadvantaged neighborhoods. MTO succeeded in moving families to lower-poverty and safer residential neighborhoods, but MTO moves did not substantially improve the quality of schools attended by the children. We show that 10-15 years after baseline, MTO improves adult physical and mental health, has no detectable effect on economic outcomes, youth schooling, or youth physical health, and has mixed results by gender on other youth outcomes, with girls doing better on some measures and boys doing worse. Despite this somewhat mixed pattern of impacts on traditional behavioral outcomes, MTO moves substantially improve adult subjective well-being. And when opportunities to move with housing vouchers lead to better schools for the children, such moves do have long-run positive impacts on youth education and reduce youth risky behaviors.

Biography
Lawrence F. Katz is the Elisabeth Allison Professor of Economics at Harvard University and a Research Associate of the National Bureau of Economic Research. His research focuses on issues in labor economics and the economics of social problems. He is the author (with Claudia Goldin) of The Race between Education and Technology (Harvard University Press, 2008), a history of U.S. economic inequality and the roles of technological change and the pace of educational advance in affecting the wage structure.
Katz also has been studying the impacts of neighborhood poverty on low-income families as the principal investigator of the long-term evaluation of the Moving to Opportunity program, a randomized housing mobility experiment. And Katz is working with Claudia Goldin on a major project studying the historical evolution of career and family choices and outcomes for U.S. college men and women. His past research has explored a wide range of topics including U.S. and comparative wage inequality trends, educational wage differentials and the labor market returns to education, the impact of globalization and technological change on the labor market, the economics of immigration, unemployment and unemployment insurance, regional labor markets, the evaluation of labor market programs, the problems of low-income neighborhoods, and the social and economic consequences of the birth control pill.
Professor Katz has been editor of the Quarterly Journal of Economics since 1991 and served as the Chief Economist of the U.S. Department of Labor for 1993 and 1994. He is the co-Scientific Director of J-PAL North America, current President of the Society of Labor Economists, and has been elected a fellow of the National Academy of Sciences, American Academy of Arts and Sciences, the Econometric Society, and the Society of Labor Economists. Katz serves on the Panel of Economic Advisers of the Congressional Budget Office as well as on the Boards of the Russell Sage Foundation and the Manpower Demonstration Research Corporation. He graduated from the University of California at Berkeley in 1981 and earned his Ph.D. in Economics from the Massachusetts Institute of Technology in 1985.

Back to top

Principled Approaches for Learning Latent Variable Models    

Anima Anandkumar, UC Irvine
Wednesday, May 14
4:00 PM – 5:00 PM

Watch the video

Description
In any learning task, it is natural to incorporate latent or hidden variables which are not directly observed. For instance, in a social network, we can observe interactions among the actors but not their hidden interests and intents; in gene networks, we can measure gene expression levels but not the detailed regulatory mechanisms; and so on. I will present a broad framework for unsupervised learning of latent variable models, addressing both statistical and computational concerns. We show that higher order relationships among observed variables have a low rank representation under natural statistical constraints such as conditional-independence relationships. We also present efficient computational methods for finding these low rank representations. These findings have implications in a number of settings, such as finding hidden communities in networks, discovering topics in text documents, and learning about gene regulation in computational biology. I will also present principled approaches for learning overcomplete models, where the latent dimensionality can be much larger than the observed dimensionality, under natural sparsity constraints. This has implications in a number of applications such as sparse coding and feature learning.
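A representative instance of the "low rank representation" referred to above (a standard example from this literature, offered as background): in a simple mixture model with $k$ hidden components, mixing weights $w_i$, and component mean vectors $\mu_i$, the third-order moment tensor has the low-rank form

$$M_3 \ =\ \sum_{i=1}^{k} w_i\, \mu_i \otimes \mu_i \otimes \mu_i,$$

so learning the hidden parameters reduces to decomposing an observable tensor into its rank-one components.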

Biography
Anima Anandkumar has been a faculty member in the EECS Department at U.C. Irvine since August 2010. Her research interests are in the area of large-scale machine learning and high-dimensional statistics. She received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She was a postdoctoral researcher in the Stochastic Systems Group at MIT between 2009 and 2010. She is the recipient of the Alfred P. Sloan Fellowship, a Microsoft Faculty Fellowship, an ARO Young Investigator Award, an NSF CAREER Award, the IBM Fran Allen PhD Fellowship, a thesis award from the ACM SIGMETRICS society, and paper awards from the ACM SIGMETRICS and IEEE Signal Processing societies.

Back to top

Economies of Visibility: Girl Empowerment Organizations and the Market for Empowerment    

Sarah Banet-Weiser, USC Annenberg’s School of Communication
Wednesday, April 30
4:00 PM – 5:00 PM

Watch the video

Description
In the past two decades, the invocation of “girl power” as an increasingly normative discourse to describe young girls and women in their everyday practices has been met with both excitement and challenge. However, while many have theorized how the “girl” in girl power is a racially and class specific girl, one that has economic and cultural privilege to access power, the “power” in girl power still needs rigorous theorization. In this talk, I examine what the “power” of girl power means in the current moment, arguing that for the most part, this form of power is legible within an economy of media visibility, where media incessantly look at and invite us to look at girls. More specifically, I examine the construction of a market within the contemporary economy of visibility: the market for empowerment. Looking at girl empowerment organizations, I analyze this market in both a US and international development context, and argue that it works to consolidate a specific kind of empowerment that is personal and individual.

Biography
Sarah Banet-Weiser is Professor of Communication at the Annenberg School of Communication and Journalism and in the Department of American Studies and Ethnicity at the University of Southern California. She is the author of The Most Beautiful Girl in the World: Beauty Pageants and National Identity (1999), Kids Rule! Nickelodeon and Consumer Citizenship (2007), and Authentic™: The Politics of Ambivalence in a Brand Culture (winner of the Outstanding Book Award at the International Communication Association). She is the co-editor of Cable Visions: Television Beyond Broadcasting and Commodity Activism: Cultural Resistance in Neoliberal Times. She edited the NYU press book series Critical Cultural Communication until 2012, and is currently the editor of American Quarterly.

Back to top

Those Of You Who Need a Little More Time    

Jonathan Sterne, McGill
Wednesday, April 16
4:00 PM – 5:00 PM

Watch the video

Description
This talk examines the lesser-known work and legacy of Dennis Gabor. Gabor was a physicist famous for inventing holography. But he also applied quantum theory to sound, and in so doing offered an important corrective to prevailing interpretations of wave theories of sound derived from Joseph Fourier’s work. To prove his point, Gabor built a device called the “kinematic frequency compressor,” which could time-stretch audio without shifting its pitch (and vice versa), a feat previously considered impossible in the analog domain. After considering the machine, I trace its technical and cultural descendants in advertising, cinema, and avant-garde music, and today in the world’s most popular audio software, Ableton Live.
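For technically inclined readers, the "quantum" correction has a compact form, often called the Gabor limit (standard background, not a claim from the talk): the effective duration $\Delta t$ and bandwidth $\Delta f$ of any acoustical "quantum" obey

$$\Delta t \cdot \Delta f \ \ge\ \frac{1}{4\pi},$$

so sound cannot be resolved arbitrarily finely in time and frequency at once, contrary to a naive reading of Fourier analysis.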

Biography

Jonathan Sterne teaches in the Department of Art History and Communication Studies and the History and Philosophy of Science Program at McGill University. He is author of MP3: The Meaning of a Format (Duke 2012), The Audible Past: Cultural Origins of Sound Reproduction (Duke, 2003); and numerous articles on media, technologies and the politics of culture. He is also editor of The Sound Studies Reader (Routledge, 2012). His new projects consider instruments and instrumentalities; histories of signal processing; and the intersections of disability, technology and perception. Visit his website at http://sterneworks.org.

Back to top

Deceptive Products    

Botond Koszegi, Central European University
Wednesday, April 2, 2014
4:00 PM – 5:00 PM

Watch the video

Description
A literature in behavioral economics documents that in a number of retail markets, some consumers misunderstand key fees or other central product features, and many argue that this leads firms to offer contracts and products that take advantage of such naive consumers. This talk will give an overview of some theoretical research on the market for deceptive products. Questions might include (i) what kinds of contracts will be offered in the presence of naive consumers; (ii) how naive and sophisticated consumers affect each other in the market; (iii) how firms attempt to discriminate between naive and sophisticated consumers, and how this affects economic welfare; (iv) whether and when firms have an incentive to “come clean” regarding their products; and (v) what kinds of products will be sold in a deceptive way.
Based on joint work with Paul Heidhues and Takeshi Murooka

Biography
Botond Koszegi has been Professor of Economics at Central European University in Budapest, Hungary, since August 1, 2012. He was previously Professor of Economics at the University of California at Berkeley, and has held visiting positions at the Massachusetts Institute of Technology, Cambridge, MA, and CEU. He earned his BA in mathematics from Harvard University in 1996 and his Ph.D. in economics from the Massachusetts Institute of Technology in 2000. His research interests are primarily in the theoretical foundations of behavioral economics. He has produced research on self-control problems and the consumption of harmful products, self-image and anticipatory utility, reference-dependent preferences and loss aversion, and focusing and attention. Recently, he has been studying how firms respond to consumers’ psychological tendencies, especially in the pricing of products and the design of credit and other financial contracts.

Back to top

Mechanism Design for Data Science    

Jason Hartline, Northwestern
Wednesday, March 19, 2014
4:00 PM – 5:00 PM

Watch the video

Description
The promise of data science is that system data can be analyzed and the resulting understanding can be used to improve the system (i.e., to obtain good outcomes). For this promise to be realized, the necessary understanding must be inferable from the data. Whether or not this understanding is inferable often depends on the system itself. Therefore, the system needs to be designed both to obtain good outcomes and to admit good inference. This talk will explore this issue in a mechanism design context, where the designer would like to use past bid data to adapt an auction mechanism to optimize revenue. Data analysis is necessary for revenue optimization in auctions, but revenue optimization is at odds with good inference. The revenue-optimal auction for selling an item is typically parameterized by a reserve price, and the appropriate reserve price depends on how much the bidders are willing to pay. This willingness to pay could potentially be learned by inference, but a reserve price precludes learning anything about the willingness-to-pay of bidders who are not willing to pay the reserve price. The auctioneer could never learn that lowering the reserve price would give a higher revenue (even if it would). To address this impossibility, the auctioneer could sacrifice revenue-optimality in the initial auction to obtain better inference properties, so that the auction's parameters can be adapted to changing preferences in the future. In this talk, I will develop a theory of optimal auction design subject to good inference.
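As a toy numerical illustration of the censoring problem described above, here is a Python sketch under deliberately simplified assumptions of my own (a single bidder with uniform values, so the auction reduces to a posted reserve price; all numbers are invented):

import numpy as np

rng = np.random.default_rng(0)
values = rng.uniform(0, 1, 100000)   # bidders' true willingness to pay

def revenue(reserve, v):
    # Single bidder, posted reserve: sale at the reserve price iff v >= reserve.
    return reserve * np.mean(v >= reserve)

# Run auctions with a high reserve: only values above it are ever revealed.
current_reserve = 0.7
observed = values[values >= current_reserve]
print("fraction of auctions with an observed bid:",
      round(observed.size / values.size, 3))

# The true revenue curve peaks near r = 0.5, but a seller who only ever saw
# bids >= 0.7 has no data below 0.7 and cannot discover this from past
# observations alone.
for r in (0.4, 0.5, 0.6, 0.7, 0.8):
    print(r, round(revenue(r, values), 3))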

Biography
Prof. Hartline is on sabbatical in the Economics and Computer Science departments at Harvard for the 2014 calendar year (January 2014-December 2014).
Prof. Hartline's current research interests lie in the intersection of the fields of theoretical computer science, game theory, and economics. With the Internet developing as the single most important arena for resource sharing among parties with diverse and selfish interests, traditional algorithmic and distributed systems approaches are insufficient. Instead, in protocols for the Internet, game-theoretic and economic issues must be considered. A fundamental research endeavor in this new field is the design and analysis of auction mechanisms and pricing algorithms.
Dr. Hartline joined the EECS department (and MEDS, by courtesy) in January of 2008. He was a researcher at Microsoft Research, Silicon Valley from 2004 to 2007, where his research covered foundational topics in algorithmic mechanism design and applications to auctions for sponsored search. He was an active researcher in the San Francisco Bay Area algorithmic game theory community and was a founding organizer of the Bay Algorithmic Game Theory Symposium. In 2003, he held a postdoctoral research fellowship at the Aladdin Center at Carnegie Mellon University. He received his Ph.D. in Computer Science from the University of Washington in 2003 under advisor Anna Karlin, and B.S. degrees in Computer Science and Electrical Engineering from Cornell University in 1997.

Back to top

An Experiment in Hiring Discrimination Via Online Social Networks    

Alessandro Acquisti, CMU
Wednesday, Feb 26, 2014
4:00 PM – 5:00 PM

Watch the video

Description
Surveys of U.S. employers suggest that numerous firms seek information about job applicants online. However, little is known about how this information gathering influences employers' hiring behavior. We present results from two complementary randomized experiments (a field experiment and an online experiment) on the impact of online information on U.S. firms' hiring behavior. We manipulate candidates' personal information that is protected under either federal laws or some state laws, and that may be risky for employers to inquire about during interviews, but which may be inferred from applicants' online social media profiles. In the field experiment, we test responses of over 4,000 U.S. employers to a Muslim candidate relative to a Christian candidate, and to a gay candidate relative to a straight candidate. We supplement the field experiment with a randomized, survey-based online experiment with over 1,000 subjects (including subjects with previous human resources experience) testing the effects of the manipulated online information on hypothetical hiring decisions and perceptions of employability. The results of the field experiment suggest that a minority of U.S. firms likely searched online for the candidates' information. Hence, the overall effect of the experimental manipulations on interview invitations is small and not statistically significant. However, in the field experiment, we find evidence of discrimination linked to political party affiliation. Following the Gallup Organization's segmentation of U.S. states by political ideology, we use results from the 2012 presidential election and find evidence of discrimination against the Muslim candidate compared to the Christian candidate among employers in more Romney-leaning states and counties. These results are robust to controlling for firm characteristics, state fixed effects, and a host of county-level variables. We find no evidence of discrimination against the gay candidate relative to the straight candidate. Results from the online experiment are consistent with those from the field experiment: we find more evidence of bias among subjects who self-report a more politically conservative party affiliation.

Biography
Alessandro Acquisti is a professor of information technology and public policy at the Heinz College, Carnegie Mellon University (CMU) and the co-director of the CMU Center for Behavioral and Decision Research. He has held visiting positions at the Universities of Rome, Paris, and Freiburg (visiting professor); Harvard University (visiting scholar); University of Chicago (visiting fellow); Microsoft Research (visiting researcher); and Google (visiting scientist). Alessandro investigates economic, policy, and technological issues surrounding privacy. His studies have spearheaded the application of behavioral economics to the analysis of privacy and information security decision making, and the analysis of privacy risks and disclosure behavior in online social networks. Alessandro has been the recipient of the PET Award for Outstanding Research in Privacy Enhancing Technologies, the IBM Best Academic Privacy Faculty Award, multiple Best Paper awards, and the Heinz College School of Information's Teaching Excellence Award. He has testified before the U.S. Senate and House committees on issues related to privacy policy and consumer behavior, and was a TED Global 2013 speaker. Alessandro's findings have been featured in national and international media outlets, including the Economist, the New York Times, the Wall Street Journal, the Washington Post, the Financial Times, Wired.com, NPR, CNN, and CBS 60 Minutes. His 2009 study on the predictability of Social Security numbers was featured in the "Year in Ideas" issue of the New York Times Magazine. Alessandro holds a PhD from UC Berkeley, and Master's degrees from UC Berkeley, the London School of Economics, and Trinity College Dublin. He has been a member of the National Academies' Committee on public response to alerts and warnings using social media.

Back to top

Economic Models as Analogies    

Larry Samuelson, Yale
Wednesday, Dec 18
4:00 PM – 5:00 PM

Watch the video

Description
People often wonder why economists analyze models whose assumptions are known to be false, while economists feel that they learn a great deal from such exercises. We suggest that part of the knowledge generated by academic economists is case-based rather than rule-based. That is, instead of offering general rules or theories that should be contrasted with data, economists often analyze models that are “theoretical cases”, which help understand economic problems by drawing analogies between the model and the problem. According to this view, economic models, empirical data, experimental results and other sources of knowledge are all on equal footing, that is, they all provide cases to which a given problem can be compared. We offer complexity arguments that explain why case-based reasoning may sometimes be the method of choice and why economists prefer simple cases.
Joint work with Itzhak Gilboa, Andrew Postlewaite, and David Schmeidler

Biography
Samuelson is a Fellow of the Econometric Society and a Fellow of the American Academy of Arts and Sciences. He has been a Co-editor of Econometrica and is currently a Co-editor of the American Economic Review. His research spans microeconomic theory and game theory.

Back to top

Tools for Large Scale Public Engagement in Research   

Krzysztof Gajos, Harvard
Wednesday, Dec 4
4:00 PM – 5:00 PM

Watch the video

Description
Non-scientists have long contributed to research: by gathering observations on plant and animal behavior, by gazing at the sky through private amateur telescopes, or by participating in psychology experiments. The Internet has created entirely new opportunities for enabling public participation in research, both in terms of the scale of public participation and the kinds of activities that non-professional scientists can perform in support of scientific inquiry. Yet inclusion of the broader publics in one's research program remains an exception rather than a norm, presumably because of concerns related to technical infrastructure, recruitment, and reliability of contributions.
I will highlight two strands of research in my group that contribute toward wider involvement of broader publics in research.
In the first strand, we have focused specifically on methods for studying human motor performance on computer input tasks. We have developed and validated mechanisms for collecting lab-quality data in three settings: (1) unobtrusively in situ, from observations of a user's natural interactions with a computer; (2) on Amazon Mechanical Turk; and (3) with unpaid online volunteers through our Lab in the Wild platform. Our recent study with 500,000 participants allowed us to replicate several past results and also to conduct new analyses that were not possible before. For example, we provided fine-grained estimates of when in life basic abilities (such as cognitive processing speed, fine motor control, and gross motor control) peak.
In the second strand, we focused on developing procedures to enable non-experts to perform expert-level analytical tasks accurately and at scale. Specifically, we have developed PlateMate, a system for crowdsourcing nutritional analysis from food photographs. In an ongoing project, we are studying the behavioral and nutritional factors impacting preterm birth. A key technical enabler of this project is a mechanism, based on our PlateMate system, for scalable nutritional analysis, which will make it possible to track the nutritional intake of 400 pregnant women for several months each. 

Biography
Krzysztof Z. Gajos is an associate professor of computer science at the Harvard School of Engineering and Applied Sciences. Krzysztof is primarily interested in intelligent interactive systems, an area that spans human-computer interaction, artificial intelligence, and applied machine learning. Krzysztof received his B.Sc. and M.Eng. degrees in Computer Science from MIT. Subsequently he was a research scientist at the MIT Artificial Intelligence Laboratory, where he managed The Intelligent Room Project. In 2008, he received his Ph.D. in Computer Science from the University of Washington in Seattle. Before coming to Harvard in September of 2009, he spent a year as a post-doctoral researcher in the Adaptive Systems and Interaction group at Microsoft Research.
URL: http://www.eecs.harvard.edu/~kgajos/

Back to top

Understanding Audition Via Sound Synthesis   

Josh McDermott, MIT
Wednesday, Nov 20
4:00 PM – 5:00 PM

Watch the video

Description
Humans infer many important things about the world from the sound pressure waveforms that enter the ears. In doing so we solve a number of difficult and intriguing computational problems. We recognize sound sources despite large variability in the waveforms they produce, extract behaviorally relevant attributes that are not explicit in the input to the ear, and do so even when sound sources are embedded in dense mixtures with other sounds. This talk will describe recent progress in understanding these remarkable auditory abilities. The work stems from the premise that a theory of the perception of some property should enable the synthesis of signals that appear to have that property. Sound synthesis can thus be used to test theories of perception and to explore representations of sound. I will describe several examples of this approach. 
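A caricature of the synthesis-as-test premise can be written in a few lines of Python. This generic spectral-matching sketch imposes just one measured property, the magnitude spectrum, on noise; it is far simpler than the statistics-matching models used in this research:

import numpy as np

def synthesize_from_spectrum(target, rng=np.random.default_rng(0)):
    # Measure one property of the target sound: its magnitude spectrum.
    mags = np.abs(np.fft.rfft(target))
    # Impose that property on noise: keep the magnitudes, randomize phases.
    phases = rng.uniform(0, 2 * np.pi, mags.shape)
    return np.fft.irfft(mags * np.exp(1j * phases), n=len(target))

# If listeners judge the synthetic signal to "have the property" of the
# target, the measured representation captures what perception uses;
# if not, the theory of the representation is missing something.
sr = 16000
t = np.arange(sr) / sr
target = np.sin(2 * np.pi * 440 * t) * np.exp(-3 * t)  # decaying tone
print(synthesize_from_spectrum(target)[:5])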

Biography
Josh McDermott is a perceptual scientist studying sound, hearing, and music in the Department of Brain and Cognitive Sciences at MIT. His research addresses human and machine audition using tools from experimental psychology, engineering, and neuroscience. He is particularly interested in using the gap between human and machine competence to both better understand biological hearing and design better algorithms for analyzing sound.
McDermott obtained a BA in Brain and Cognitive Science from Harvard, an MPhil in Computational Neuroscience from University College London, a PhD in Brain and Cognitive Science from MIT, and postdoctoral training in psychoacoustics at the University of Minnesota and in computational neuroscience at NYU. He is the recipient of a Marshall Scholarship, a National Defense Science and Engineering fellowship, and a James S. McDonnell Foundation Scholar Award. He is currently an Assistant Professor in the Department of Brain and Cognitive Sciences at MIT.

Back to top

Graphical approaches to Biological Problems   

Ernest Fraenkel, MIT
Wednesday, Nov 6
4:00 PM – 5:00 PM

Watch the video

Description
Biology has been transformed by new technologies that provide detailed descriptions of the molecular changes that occur in diseases. However, it is difficult to use these data to reveal new therapeutic insights for several reasons. Despite their power, each of these methods still only captures a small fraction of the cellular response. Moreover, when different assays are applied to the same problem, they provide apparently conflicting answers. I will show that network modeling reveals the underlying consistency of the data by identifying small, functionally coherent pathways linking the disparate observations. We have used these methods to analyze how oncogenic mutations alter signaling and transcription and to prioritize experiments aimed at discovering therapeutic targets.
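As a toy sketch of the idea of linking disparate observations through a small subnetwork, here is a generic Steiner-tree illustration in Python on a made-up interaction graph (using networkx; the talk's network models are considerably richer):

import networkx as nx
from networkx.algorithms.approximation import steiner_tree

# Made-up protein interaction graph; weights reflect interaction confidence
# (lower weight = more trusted edge).
G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1), ("B", "C", 1), ("C", "D", 1),
    ("A", "E", 2), ("E", "D", 2), ("B", "F", 1), ("F", "G", 3),
])

# "Hits" from different assays (e.g., one from proteomics, one from
# expression data) that look unrelated in isolation.
hits = ["A", "D", "G"]

# Approximate minimum-weight subnetwork connecting all of the hits: a small,
# functionally coherent pathway candidate linking the observations.
subnet = steiner_tree(G, hits, weight="weight")
print(sorted(subnet.edges()))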

Biography
Ernest Fraenkel was first introduced to computational biology in high school when the field did not yet have a name. His early experiences with Professor Cyrus Levinthal of Columbia University taught him that biological insights often come from unexpected disciplines. After graduating summa cum laude from Harvard College in Chemistry and Physics he obtained his Ph.D. at MIT in the department of Biology and did post-doctoral work at Harvard. As the field of Systems Biology began to emerge, he established a research group in this area at the Whitehead Institute and then moved to the Department of Biological Engineering at the Massachusetts Institute of Technology. His research group takes a multi-disciplinary approach involving tightly connected computational and experimental methods to uncover the molecular pathways that are altered in cancer, neurodegenerative diseases, and diabetes.

Back to top

Social Norms and the Impact of Laws   

Matt Jackson, Stanford
Wednesday, Sept 18 
4:00 PM – 5:00 PM

Watch the video

Description
We examine the impact of laws in a model of social norms. Agents each choose a level of behavior (e.g., a speed of driving, an amount of corruption, etc.). Agents choose behaviors not only based on their personal preferences but also based on a preference to match or conform to the behaviors of other agents with whom they interact. A law caps the level of behavior, and a law-abiding agent may whistle-blow on an agent who is breaking the law, correcting the behavior of the latter and making him or her pay a fine. The impact of a law is endogenous to the social norm (equilibrium of behavior), and as such laws can have nonmonotone effects: a strict law may be broken more frequently than a lax one. Moreover, law-breakers may choose more extreme behavior as a law becomes stricter. Historical behavior can influence the impact of a law: exactly the same law can have drastically different impacts in two different societies depending on past social norms.
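
A toy rendering of the dynamics (all functional forms here are my own stand-ins, not the paper's model):

    import random

    def simulate_norms(n=200, law=0.6, fine=0.5, conform=0.7, steps=60, seed=0):
        """Best-response-style dynamics for behavior with conformity and a law.

        Each agent's target mixes a personal ideal with the population
        average (weight `conform`); choosing behavior above the `law` cap
        risks a whistle-blowing fine that grows with the share of
        law-abiders, so agents comply when the expected fine exceeds the
        gain from deviating.
        """
        rng = random.Random(seed)
        ideals = [rng.random() for _ in range(n)]
        x = ideals[:]
        for _ in range(steps):
            avg = sum(x) / n
            abiders = sum(1 for v in x if v <= law) / n
            for i in range(n):
                target = (1 - conform) * ideals[i] + conform * avg
                if target > law and target - law < fine * abiders:
                    target = law            # complying is cheaper than the fine
                x[i] = target
        return sum(1 for v in x if v > law) / n   # share of law-breakers

    for cap in (0.8, 0.6, 0.4):                   # stricter and stricter laws
        print(cap, simulate_norms(law=cap))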

Biography
Matthew O. Jackson is the Eberle Professor of Economics at Stanford University and an external faculty member of the Santa Fe Institute and a fellow of CIFAR. Jackson's research interests include game theory, microeconomic theory, and the study of social and economic networks, including diffusion, learning, and network formation. He was at Northwestern and Caltech before joining Stanford, and has a PhD from Stanford and BA from Princeton. Jackson is a Fellow of the Econometric Society and the American Academy of Arts and Sciences, and former Guggenheim Fellow.

Back to top

Crowdsourcing Audio Production Interfaces   

Bryan Pardo, Northwestern
Wednesday, Sept 11
4:00 PM – 5:00 PM
Watch the video

Description
Potential users of audio production software, such as audio equalizers, may be discouraged by the complexity of the interface and a lack of clear affordances in typical interfaces. We seek to simplify interfaces for tasks such as audio production (e.g. mastering a music album with ProTools), audio tools (e.g. equalizers) and related consumer devices (e.g. hearing aids). Our approach combines an evaluative paradigm ("I like this sound better than that sound") with descriptive language (e.g. "Make the violin sound 'warmer.'"). To achieve this goal, a system must be able to tell whether the stated goal is appropriate for the selected tool (e.g. making the violin "warmer" with a panning tool does not make sense). If the goal is appropriate for the tool, it must know what actions need to be taken (e.g. add some reverberation). Further, the tool should not impose a vocabulary on users, but rather understand the vocabulary users prefer. In this talk, Bryan Pardo describes iQ, an equalizer that uses an evaluative control paradigm, and SocialEQ, a web-based project to crowdsource a vocabulary of actionable audio descriptors.
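
A rough sketch of the evaluative idea (my toy, assuming numpy; not the actual iQ or SocialEQ code): probe the listener with random equalization curves, collect ratings of how well each matches a word like "warm," and estimate the word's curve as the rating-weighted average of the probes. A tool whose probes show no correlation with the ratings would be flagged as inappropriate for the stated goal.

    import numpy as np

    def learn_descriptor_curve(n_bands=8, n_probes=60, seed=0):
        """Estimate an EQ gain curve for a descriptor from user ratings.

        Each probe is a random gain curve over n_bands frequency bands;
        the (simulated) user rates how well it matches the descriptor,
        and the zero-meaned, rating-weighted average of the probes
        approximates the descriptor's curve.
        """
        rng = np.random.default_rng(seed)
        probes = rng.normal(0.0, 3.0, size=(n_probes, n_bands))   # gains in dB

        # Simulated user: 'warm' = boost the low bands, cut the high bands.
        hidden_warm = np.linspace(3.0, -3.0, n_bands)
        ratings = probes @ hidden_warm + rng.normal(0, 1.0, n_probes)

        weights = ratings - ratings.mean()     # disliked probes push the estimate away
        return weights @ probes / np.abs(weights).sum()

    print(learn_descriptor_curve())  # recovers the low-boost/high-cut shape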

Biography
Bryan Pardo, head of the Northwestern University Interactive Audio Lab, is an associate professor in the Northwestern University Department of Electrical Engineering and Computer Science. Prof. Pardo received a M. Mus. in Jazz Studies in 2001 and a Ph.D. in Computer Science in 2005, both from the University of Michigan. He has authored over 70 peer-reviewed publications. He has developed speech analysis software for the Speech and Hearing department of the Ohio State University, statistical software for SPSS and worked as a machine learning researcher for General Dynamics. While finishing his doctorate, he taught in the Music Department of Madonna University. When he's not programming, writing or teaching, he performs throughout the United States on saxophone and clarinet at venues such as Albion College, the Chicago Cultural Center, the Detroit Concert of Colors, Bloomington Indiana's Lotus Festival and Tucson's Rialto Theatre.

Back to top

Seeing the invisible; Predicting the unexpected   

Michal Irani, Weizmann
Wednesday, September 4
4:00 PM – 5:00 PM

Watch the video

Description
In this talk I will show how complex visual inference tasks can be performed, with no prior examples, by exploiting internal redundancy within visual data. Comparing and integrating local pieces of visual information gives rise to complex notions of visual similarity and to a general "Inference by Composition" approach. This makes it possible to estimate the likelihood of visual data never seen before, and to make inferences about complex static and dynamic visual information without any prior examples. I will demonstrate the power of this approach on several example problems (as time permits); a toy sketch of the underlying patch-comparison idea follows the list:

1. Detecting complex objects and actions.
2. Predicting missing visual information.
3. Inferring the "likelihood" of "never-before-seen" visual data.
4. Detecting the "irregular" and "unexpected".
5. Spatial super-resolution (from a single image) and temporal super-resolution (from a single video).
6. Generating visual summaries (of images and videos).
7. Segmenting complex visual data.
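
To make problem 4 concrete, here is a hedged toy version of the redundancy principle (mine, not the actual inference-by-composition machinery): patches that cannot be composed from, i.e. closely matched by, patches elsewhere in the same image receive a high irregularity score.

    import numpy as np

    def irregularity_map(img, patch=5):
        """Score each patch by its squared distance to the nearest other patch.

        Regions that can be 'composed' from data elsewhere in the image
        score low; rare structures score high. Brute force, for tiny images.
        """
        h, w = img.shape
        coords = [(i, j) for i in range(h - patch + 1)
                         for j in range(w - patch + 1)]
        vecs = np.array([img[i:i + patch, j:j + patch].ravel()
                         for i, j in coords])
        scores = np.empty(len(vecs))
        for k, v in enumerate(vecs):
            d = np.sum((vecs - v) ** 2, axis=1)
            d[k] = np.inf                  # ignore the trivial self-match
            scores[k] = d.min()            # high score = unexpected patch
        return coords, scores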

Biography
Michal Irani is a Professor at the Weizmann Institute of Science, in the Department of Computer Science and Applied Mathematics. She received a B.Sc. degree in Mathematics and Computer Science from the Hebrew University of Jerusalem in 1985, and M.Sc. and Ph.D. degrees in Computer Science from the same institution in 1989 and 1994, respectively. From 1993 to 1996, she was a member of the technical staff of the Vision Technologies Laboratory at the David Sarnoff Research Center (Princeton, New Jersey, USA). She joined the Weizmann Institute in 1997. Michal's research interests center around computer vision, image processing, and video information analysis. Michal's prizes and honors include the David Sarnoff Research Center Technical Achievement Award (1994), the Yigal Allon three-year Fellowship for Outstanding Young Scientists (1998), and the Morris L. Levinson Prize in Mathematics (2003). At the European Conference on Computer Vision, she received awards for Best Paper in 2000 and in 2002, and was awarded an Honorable Mention for the Marr Prize at the IEEE International Conference on Computer Vision in 2001 and in 2005.

Back to top

Differential Privacy: Theoretical and Practical Challenges  

Salil Vadhan, Harvard
Wednesday, August 14
4:00 PM – 5:00 PM

Watch the video

Description
Differential Privacy is a framework for enabling the analysis of privacy-sensitive datasets while ensuring that individual-specific information is not revealed. The concept was developed in a body of work in theoretical computer science starting about a decade ago, largely coming from Microsoft Research. It is now flourishing as an area of theory research, with deep connections to many other topics in theoretical computer science. At the same time, its potential for addressing pressing privacy problems in a variety of domains has attracted the interest of scholars from many other areas, including statistics, databases, medical informatics, law, social science, computer security and programming languages.
In this talk, I will give a general introduction to differential privacy, and discuss some of the theoretical and practical challenges for future work in this area. I will also describe a large, multidisciplinary research project at Harvard, called "Privacy Tools for Sharing Research Data," in which we are working on some of these challenges as well as others associated with the collection, analysis, and sharing of personal data for research in social science and other fields.
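
The canonical example of a differentially private release (standard in the literature, not specific to this talk or the Harvard project) is the Laplace mechanism: perturb the query answer with noise scaled to the query's sensitivity divided by epsilon.

    import numpy as np

    def laplace_mechanism(data, query, sensitivity, epsilon, seed=None):
        """Release query(data) with epsilon-differential privacy.

        sensitivity = the most query(data) can change when one person's
        record is added or removed; smaller epsilon = stronger privacy
        and noisier answers.
        """
        rng = np.random.default_rng(seed)
        return query(data) + rng.laplace(scale=sensitivity / epsilon)

    ages = [34, 45, 27, 61, 39]
    # A counting query has sensitivity 1: one person changes the count by 1.
    noisy_count = laplace_mechanism(ages, lambda d: sum(a > 40 for a in d),
                                    sensitivity=1, epsilon=0.5)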

Biography
Salil Vadhan is the Vicky Joseph Professor of Computer Science and Applied Mathematics at the School of Engineering & Applied Sciences at Harvard University. He is a member of the Theory of Computation research group. His research areas include computational complexity, cryptography, randomness in computation, and data privacy.

Back to top

Technologies of Choice? – ICTs, development and the capabilities approach  

Dorothea Kleine, University of London
Wednesday, July 31
4:00 PM – 5:00 PM
Watch the video

Description
ICT for development (ICT4D) scholars claim that the internet, radio and mobile phones can support development. Yet the dominant paradigm of development as economic growth is too limiting to understand the full potential of these technologies. One key rival to such econocentric understandings is Amartya Sen’s capabilities approach to development – focusing on a pluralistic understanding of people’s values and the lives they want to lead. In her book, Technologies of Choice? (MIT Press 2013), Dorothea Kleine translates Sen’s approach into policy analysis and ethnographic work on technology adaptation. She shows how technologies are not neutral, but imbued with values that may or may not coincide with the values of users. The case study analyses Chile’s pioneering ICT policies in the areas of public access, digital literacy, and online procurement, and the sobering reality of one of the country's most marginalised communities, where these policies play out. The book shows how both neoliberal and egalitarian ideologies are written into technologies as they permeate the everyday lives and livelihoods of women and men in the town. Technologies of Choice? examines the relationship between ICTs, choice, and development. It argues for a people-centred view of development that has individual and collective choice at its heart.

Biography
Dorothea Kleine is Senior Lecturer in Human Geography and Director of the interdisciplinary ICT4D Centre at Royal Holloway, University of London (www.ict4dc.org). In 2013 the Centre was named among the top 10 global think tanks in science and technology (U of Penn survey of experts) and has a highly recognized PhD and Masters program in ICT for development. Dorothea’s work focuses on the relationship between notions of “development”, choice and individual agency, sustainability, gender and technology. She has published widely on these subjects, and has worked as an advisor to UNICEF, UNEP, EUAid, DFID, GIZ and to NGOs. The Centre runs various collaborative research projects with international agencies and private sector partners.

Back to top

The Cryptographic Lens  

Shafi Goldwasser, MIT
Wednesday, July 17
4:00 PM – 5:00 PM

Watch the video

Description
Going beyond the basic challenge of private communication, cryptography has over the last 35 years become the general study of the correctness and privacy of computation in the presence of a computationally bounded adversary, and as such has changed how we think of proofs, reductions, randomness, secrets, and information. In this talk I will discuss some beautiful developments in the theory of computing through this cryptographic lens, and the role cryptography can play in the next successful shift from local to global computation.

Biography
Goldwasser is the RSA Professor of Electrical Engineering and Computer Science at MIT and a professor of computer science and applied mathematics at the Weizmann Institute of Science. She received a BS (1979) in applied mathematics from CMU and a PhD (1984) in computer science from UC Berkeley. She is the 2012 recipient of the ACM Turing Award.

Back to top

Does the Classic Microfinance Model Discourage Entrepreneurship Among the Poor? Experimental Evidence from India  

Erica Field, Duke
Wednesday, July 10
4:00 PM – 5:00 PM
Watch the video

Description
Do the repayment requirements of the classic microfinance contract inhibit investment in high-return but illiquid business opportunities among the poor? Using a field experiment, we compare the classic contract, which requires that repayment begin immediately after loan disbursement, to a contract that includes a two-month grace period. The provision of a grace period increased short-run business investment and long-run profits, but also raised default rates. The results thus indicate that debt contracts requiring early repayment discourage illiquid risky investment and thereby limit the potential impact of microfinance on microenterprise growth and household poverty.

Biography
Erica M. Field joined the Duke faculty as an associate professor in 2011. She is also a faculty research fellow at the National Bureau of Economic Research. Professor Field received her Ph.D. and M.A. in economics from Princeton University in 2003 and her B.A. in economics and Latin American studies from Vassar College in 1996. Since receiving her doctorate, she has worked at Princeton, Stanford, and most recently Harvard, where she was a professor for six years before coming to Duke.

Back to top

Machine Learning for Complex Social Processes  

Hanna Wallach, UMass Amherst
Wednesday, July 3
4:00 PM – 5:00 PM

Watch the video

Description
From the activities of the US Patent Office or the National Institutes of Health to communications between scientists or political legislators, complex social processes---groups of people interacting with each other in order to achieve specific and sometimes contradictory goals---underlie almost all human endeavor. In order to draw thorough, data-driven conclusions about complex social processes, researchers and decision-makers need new quantitative tools for exploring, explaining, and making predictions using massive collections of interaction data. In this talk, I will discuss the development of machine learning methods for modeling interaction data. I will concentrate on exploratory analysis of communication networks --- specifically, discovery and visualization of topic-specific subnetworks in email data sets. I will present a new Bayesian latent variable model of network structure and content and explain how this model can be used to analyze intra-governmental email networks.

Biography
In fall 2010, Hanna Wallach started as an assistant professor in the Department of Computer Science at the University of Massachusetts Amherst. She is one of five core faculty members involved in UMass's new Computational Social Science Initiative. Prior to this, Hanna was a senior postdoctoral research associate, also at UMass, where she developed statistical machine learning techniques for analyzing complex data regarding communication and collaboration within scientific and technological innovation communities.
Hanna's Ph.D. work, undertaken at the University of Cambridge, introduced new methods for statistically modeling text using structured topic models—models that automatically infer semantic information from unstructured text and information about document structure, ranging from sentence structure to inter-document relationships. Hanna holds an M.Sc. from the University of Edinburgh, where she specialized in neural computing and learning from data, and was awarded the University of Edinburgh's 2001/2002 prize for Best M.Sc. Student in Cognitive Science. Hanna received her B.A. from the University of Cambridge Computer Laboratory in 2001. Her undergraduate project, "Visual Representation of Computer-Aided Design Constraints," won the award for the best computer science student in the 2001 U.K. Science Engineering and Technology Awards.
In addition to her many papers on statistical machine learning techniques for analyzing structured and unstructured data, Hanna's tutorial on conditional random fields is extremely widely cited and used in machine learning courses around the world. Her recent work (with Ryan Prescott Adams and Zoubin Ghahramani) on infinite belief networks won the best paper award at AISTATS 2010.
As well as her research, Hanna works to promote and support women's involvement in computing. In 2006, she co-founded an annual workshop for women in machine learning, in order to give female faculty, research scientists, postdoctoral researchers, and graduate students an opportunity to meet, exchange research ideas, and build mentoring and networking relationships. In her not-so-spare time, Hanna is a member of Pioneer Valley Roller Derby, where she is better known as Logistic Aggression.

Back to top

Crowd Computing  

Rob Miller, MIT
Wednesday, June 19
4:00 PM – 5:00 PM
Watch the video

Description
Crowd computing harnesses the power of people on the web to do tasks that are hard for individual users or computers to do alone. Like cloud computing, crowd computing offers elastic, on-demand human resources that can drive new applications and new ways of thinking about technology. This talk will describe several prototype systems we have built, including:
- Soylent, a Word plugin that crowdsources text editing tasks;
- VizWiz, an app that helps blind people see using a crowd’s eyes;
- Adrenaline, a camera shutter driven by crowd perception;
- Caesar, a system for code reviewing by a crowd of programmers.
Crowd computing raises new challenges at the intersection of computer systems and human-computer interaction, including minimizing latency, improving quality of work, and providing the right incentives to the crowd. The talk will discuss the design space and the techniques we have developed to address some of these problems. We are now in a position where "Wizard of Oz" is no longer just a prototyping technique -- thanks to crowd computing, Wizard of Oz systems can be useful and deployable.
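
A generic sketch of one quality-of-work technique in this space (redundant assignment with plurality voting -- an illustration, not Soylent's actual Find-Fix-Verify pipeline):

    from collections import Counter

    def plurality_answer(answers, min_agreement=0.5):
        """Aggregate redundant crowd answers for one microtask.

        Returns the plurality answer once it holds more than min_agreement
        of the votes; otherwise None, signaling that the task should be
        reposted to more workers.
        """
        if not answers:
            return None
        winner, votes = Counter(answers).most_common(1)[0]
        return winner if votes / len(answers) > min_agreement else None

    # Three workers answer the same VizWiz-style question about a photo.
    print(plurality_answer(["cereal box", "cereal box", "soup can"]))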

Biography
Rob Miller is an associate professor of computer science at MIT, and associate director of the Computer Science and Artificial Intelligence Laboratory (CSAIL). He earned bachelor's and master's degrees in computer science from MIT (1995) and a PhD from Carnegie Mellon University (2002). He has won an ACM Distinguished Dissertation honorable mention, an NSF CAREER award, and six best paper awards at UIST and USENIX. He has been program co-chair for UIST 2010, general chair for UIST 2012, and associate editor of ACM TOCHI. He has won two department awards for teaching, and was named a MacVicar Faculty Fellow for outstanding contributions to MIT undergraduate education. His research interests lie at the intersection of programming and human computer interaction: making programming easier for end-users (web end-user programming), making it more productive for professionals (HCI for software developers), and making people part of the programming system itself (crowd computing and human computation).

Back to top

Random Sampling, Random Structures and Phase Transitions

Dana Randall, Georgia Tech
Wednesday, June 5
4:00 PM – 5:00 PM
Watch the video

Description
Sampling algorithms using Markov chains arise in many areas of computation, engineering, and science. The idea is to perform a random walk among the elements in a large state space so that samples chosen from the stationary distribution are useful for the application. In order to get reliable results efficiently, we require the chain to be rapidly mixing, or quickly converging to equilibrium. Often there is a parameter of the system (typically related to temperature or fugacity) so that at low values many natural chains converge rapidly while at high values they converge slowly, requiring exponential time. This dichotomy is often related to phase transitions in the underlying models. In this talk we will explain this phenomenon, giving examples from the natural and social sciences, including magnetization, lattice gases, colloids, and models of segregation.
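
As a concrete instance (my sketch, not code from the talk), here is Glauber dynamics for the hard-core lattice gas: independent sets S on a grid, weighted proportionally to lam**|S|. At small fugacity lam the chain mixes rapidly; past the transition it lingers near one of the two "checkerboard" phases.

    import random

    def neighbors(v, n):
        i, j = v
        return [(i + di, j + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < n and 0 <= j + dj < n]

    def sample_hardcore(n=20, lam=0.5, sweeps=500, seed=0):
        """Single-site heat-bath chain for independent sets on an n x n grid.

        Pick a site uniformly; if a neighbor is occupied the site must be
        vacant, otherwise occupy it with probability lam / (1 + lam).
        """
        rng = random.Random(seed)
        occupied = set()
        sites = [(i, j) for i in range(n) for j in range(n)]
        for _ in range(sweeps * n * n):
            v = rng.choice(sites)
            if any(u in occupied for u in neighbors(v, n)):
                occupied.discard(v)
            elif rng.random() < lam / (1.0 + lam):
                occupied.add(v)
            else:
                occupied.discard(v)
        return occupied

    print(len(sample_hardcore()))   # typical occupancy at fugacity 0.5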

Biography
Dana Randall is the ADVANCE Professor of Computing and an Adjunct Professor of Mathematics at the Georgia Institute of Technology. Her research in randomized algorithms focuses on the design and analysis of efficient algorithms for sampling and approximate counting, using techniques from computing, discrete mathematics and statistical physics. Dr. Randall received her A.B. in Mathematics from Harvard and her Ph.D. in Computer Science from U.C. Berkeley and held postdoctoral positions at the Institute for Advanced Study and Princeton. She is a Fellow of the American Mathematical Society, a National Associate of the National Academies, and the recipient of a Sloan Fellowship and an NSF Career Award.

Back to top

Time Incentives in Public Procurement: Evidence from California and Minnesota

Greg Lewis, Harvard
Wednesday, May 22
4:00 PM – 5:00 PM
Watch the video

Description
Most procurement contracts incentivize timely delivery, either through the auction mechanism or the contract terms. We evaluate both of these approaches in the context of highway procurement, using data from California and Minnesota. We show that firms respond strongly to incentives: for example, in California, when contractors compete for contracts on the basis of both price and delivery date, contracts are completed 30-40% faster. We simulate counterfactual outcomes under different incentive schemes, and discuss the practical implications of our research for the design of procurement contracts.
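
The "price plus delivery date" design studied in California is commonly known as A+B bidding: the owner scores each bid as its dollar price (A) plus its promised days (B) valued at a published daily road-user cost, and awards the contract to the lowest score. A minimal sketch of the scoring rule (parameter names are mine):

    def ab_score(price, days, daily_user_cost):
        """Score an A+B bid: contract price plus completion time valued
        at the road-user cost per day; the lowest score wins."""
        return price + days * daily_user_cost

    bids = {"firm1": (1_000_000, 120), "firm2": (1_050_000, 80)}
    daily_user_cost = 5_000   # owner's value of one day of road closure
    winner = min(bids, key=lambda f: ab_score(*bids[f], daily_user_cost))
    # firm2 wins: 1,050,000 + 80*5,000 = 1,450,000 beats 1,600,000.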

Biography
Greg Lewis is an associate professor of economics at Harvard University, and a faculty research fellow at the National Bureau of Economic Research. His main research interests lie in industrial organization and market design, with a particular focus on auction theory and estimation. Recently, his time has been spent developing dynamic models of auction markets, suggesting methods for price discrimination in online display advertising, examining learning by firms in the British electricity market, and analyzing how contract terms interact with moral hazard in highway procurement. He received his bachelor’s degree in economics and statistics from the University of the Witwatersrand in South Africa, and his MA and PhD both from the University of Michigan.

Back to top

Sum-Product Networks: Powerful Models with Tractable Inference

Pedro Domingos, U Washington
Wednesday, May 8
4:00 PM – 5:00 PM
Watch the video

Description
Big data makes it possible in principle to learn very rich probabilistic models, but inference in them is prohibitively expensive. Since inference is typically a subroutine of learning, in practice learning such models is very hard. Sum-product networks (SPNs) are a new model class that squares this circle by providing maximum flexibility while guaranteeing tractability. In contrast to Bayesian networks and Markov random fields, SPNs can remain tractable even in the absence of conditional independence. SPNs are defined recursively: an SPN is either a univariate distribution, a product of SPNs over disjoint variables, or a weighted sum of SPNs over the same variables. It's easy to show that the partition function, all marginals and all conditional MAP states of an SPN can be computed in time linear in its size. SPNs have most tractable distributions as special cases, including hierarchical mixture models, thin junction trees, and nonrecursive probabilistic context-free grammars. I will present generative and discriminative algorithms for learning SPN weights, and an algorithm for learning SPN structure. SPNs have achieved impressive results in a wide variety of domains, including object recognition, image completion, collaborative filtering, and click prediction. Our algorithms can easily learn SPNs with many layers of latent variables, making them arguably the most powerful type of deep learning to date. (Joint work with Rob Gens and Hoifung Poon.)
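
The recursive definition translates directly into code. A minimal evaluation sketch (mine, not the authors' implementation); note that the same single bottom-up pass that computes a joint probability also computes any marginal, by letting leaves for unobserved variables report 1:

    class Leaf:
        """Univariate distribution, here a Bernoulli over one variable."""
        def __init__(self, var, p):
            self.var, self.p = var, p
        def value(self, evidence):
            if self.var not in evidence:          # marginalize the variable out
                return 1.0
            return self.p if evidence[self.var] else 1.0 - self.p

    class Product:
        """Product node: children have disjoint variable scopes."""
        def __init__(self, children):
            self.children = children
        def value(self, evidence):
            result = 1.0
            for child in self.children:
                result *= child.value(evidence)
            return result

    class Sum:
        """Sum node: weighted mixture of children over the same scope."""
        def __init__(self, weighted_children):    # [(weight, child), ...]
            self.weighted_children = weighted_children
        def value(self, evidence):
            return sum(w * c.value(evidence) for w, c in self.weighted_children)

    # A mixture of two product distributions over x1 and x2.
    spn = Sum([(0.3, Product([Leaf("x1", 0.9), Leaf("x2", 0.2)])),
               (0.7, Product([Leaf("x1", 0.1), Leaf("x2", 0.6)]))])
    joint = spn.value({"x1": True, "x2": False})   # P(x1=1, x2=0)
    marginal = spn.value({"x1": True})             # P(x1=1), same linear pass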

Biography
Pedro Domingos received an undergraduate degree (1988) and M.S. in Electrical Engineering and Computer Science (1992) from IST, in Lisbon. He received an M.S. (1994) and Ph.D. (1997) in Information and Computer Science from the University of California at Irvine. He spent two years as an assistant professor at IST, before joining the faculty of the University of Washington in 1999. He is the author or co-author of over 200 technical publications in machine learning, data mining, and other areas. He is a member of the editorial board of the Machine Learning journal, co-founder of the International Machine Learning Society, and past associate editor of JAIR. He was program co-chair of KDD-2003 and SRL-2009, and served on the program committees of AAAI, ICML, IJCAI, KDD, NIPS, SIGMOD, UAI, WWW, and others. He is an AAAI Fellow, and has received a Sloan Fellowship, an NSF CAREER Award, a Fulbright Scholarship, an IBM Faculty Award, several best paper awards, and other distinctions.

Back to top

Compressed Sensing and Natural Image Statistics

Yair Weiss, Hebrew U
Wednesday, April 24
4:00 PM – 5:00 PM
Watch the video

Description
Compressed sensing (CS) refers to a branch of applied mathematics based on the surprising result that signals which are exactly “k-sparse” (i.e. representable by at most k nonzero coefficients in some basis) can be exactly reconstructed from a small number of random measurements. Since natural images tend to be sparse in the wavelet basis, one of the motivating examples of CS has always been to reconstruct high resolution images from a small number of random measurements. Unfortunately, there are significant deviations between the way natural images behave and the assumptions of the dramatic theorems, and in fact random projections perform quite poorly when applied to real images. I will describe an alternative theory, which we call “Informative Sensing”, that seeks a small number of projections that are maximally informative given a known distribution over signals. I will show experimental results demonstrating that the informative projections indeed outperform random projections, but that the savings relative to more standard imaging methods are altogether rather modest.
Joint work with Hyun Sung Chang and Bill Freeman.
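
A toy version of the standard CS pipeline the talk starts from (generic machinery, not the paper's informative projections): take random Gaussian measurements of a k-sparse vector and recover it with orthogonal matching pursuit.

    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily add the column of A most
        correlated with the residual, then least-squares refit on the
        selected support."""
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_s
        x = np.zeros(A.shape[1])
        x[support] = x_s
        return x

    rng = np.random.default_rng(0)
    n, m, k = 200, 40, 3               # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
    A = rng.normal(size=(m, n)) / np.sqrt(m)    # random sensing matrix
    x_hat = omp(A, A @ x_true, k)               # near-exact recovery, w.h.p.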

Biography
Yair Weiss is a Professor of Computer Science and Engineering at the Hebrew University of Jerusalem. He is currently on sabbatical at Microsoft Research New England.

Back to top

The Disruptive Power Of Three-Dimensional Printing

Deven Desai, Thomas Jefferson School of Law
Thursday, May 2 *note the alternate date*
4:00 PM – 5:00 PM
Watch the video

Description
The Industrial Revolution was founded on economies of scale, but the next transformation in manufacturing may come from individual households. An additive (or 3D) printer is a desktop machine that can make customized physical objects from software and simple raw materials. This device promises to dramatically reduce the cost of making and distributing tangible goods, but it could also sharply increase patent infringement. Indeed, 3D printers present a challenge to patent law that is analogous to the disruption of copyright by MP3 files. This talk explores the implications of 3D printing for patents.

Biography
Deven Desai is a law professor at the Thomas Jefferson School of Law and recently completed serving as Academic Research Counsel at Google, Inc. As a law professor, he teaches trademark, intellectual property theory, business associations, and information privacy law. He is a graduate of the University of California, Berkeley and Yale Law School. He has also spent a year as a Visiting Fellow at Princeton University’s Center for Information Technology Policy. Professor Desai’s scholarship examines how business interests and economic theories shape privacy and intellectual property law, asking where those arguments explain productivity and where they fail to capture society’s interest in the free flow of information and development. His articles include Speech Citizenry and the Market: A Corporate Public Figure Doctrine 98 Minnesota Law Review __ (2013) (forthcoming); Bounded by Brands: An Information Network Approach to Brands, U.C. Davis Law Review (2013) (forthcoming); Beyond Location: Data Security in the 21st Century, Communications of the ACM (January, 2013); Response: An Information Approach to Trademarks, 100 Georgetown Law Journal 2119 (2012); From Trademarks to Brands, 46 Florida Law Review 981 (2012); The Life and Death of Copyright, 2011 Wisconsin Law Review 219 (2011); Brands, Competition, and the Law, 2010 Brigham Young Law Review 1425 (2010) (with Spencer Waller); Privacy? Property?: Reflections on the Implications of a Post-Human World 18 Kansas J. of Law & Public Policy (2009); Property, Persona, and Preservation, 81 Temple Law Review 67 (2008); and Confronting the Genericism Conundrum, 28 Cardozo Law Review 789 (2007) (Sandra L. Rierson, co-author).

Back to top

How Entrepreneurs Came to Own Innovation: The Rhetoric of Economic Risk in High-Tech

Gina Neff, U of Washington/Princeton
Wednesday, April 10, 2013
4:00 PM – 5:00 PM
Watch the video

Description
How did innovation come to be synonymous with entrepreneurship? How did creativity become equated with risk? Perhaps more importantly, how did these concepts lead to advice such as that given by New York Times columnist Thomas Friedman: “Need a Job? Then Invent One?”
This talk will present research on the first wave of employees at dot-com start-ups of the 1990s and 2000s who exhibited entrepreneurial behavior in their jobs--investing time, energy, and other personal resources--when they themselves were employees and not entrepreneurs. I argue that this “venture labor” is part of a longer and broader social shift of economic risk onto individuals, and that understanding it is of paramount importance for encouraging innovation and, even more important, for creating sustainable work environments in high-tech sectors today.

Biography
Gina Neff is an associate professor of communication at the University of Washington. She studies the contemporary economics of media production and the impact of new technologies on communication, focusing on both high-tech and media industries. Her book Venture Labor: Work and the Burden of Risk in Innovative Industries (MIT 2012) examines the risk and uncertainties borne by New York City’s new media pioneers during the first dot-com boom. She co-directs the Project on Communication Technology and Organizational Practices, a research group studying the roles of communication technology in the work around building design and construction. Her research has been funded by the National Science Foundation, and she is currently at work on a three-year project funded by Intel studying the impact of social media and consumer health technologies on the organization of primary care.
She holds a Ph.D. in sociology from Columbia University, where she remains an external faculty affiliate of the Center on Organizational Innovation. She is currently a fellow at Princeton’s Center for Information Technology Policy and visiting scholar at NYU’s Media, Culture and Communication department. She has held appointments at UC San Diego, UCLA, and Stanford University. In addition to academic outlets, her research and writing have been featured in The New York Times, Christian Science Monitor, Fortune, The American Prospect, and The Nation.

Back to top

So You Think Quantum Computing Is Bunk?

Scott Aaronson, MIT
Wednesday, April 10, 2013
4:00 PM – 5:00 PM
Watch the video

Description
In this talk, I'll take an unusual tack in explaining quantum computing to a broad audience. I'll start by assuming, for the sake of argument, that scalable quantum computing is "too crazy to work": i.e., that it must be impossible for some fundamental physical reason. I'll then investigate the sorts of radical additions or changes to current physics that we seem forced to contemplate in order to justify such an assumption. I'll point out the many cases where such changes seem ruled out by existing experiments, or by no-go theorems such as the Bell Inequality. I'll also mention two recent no-go theorems for so-called "epistemic" hidden-variable theories: one due to Pusey, Barrett, and Rudolph, the other to Bouland, Chua, Lowther, and myself. Finally, I'll discuss my 2004 notion of a "Sure/Shor separator," as well as the BosonSampling proposal [A.-Arkhipov 2011] and its recent experimental realizations---which suggest one possible route to falsifying the Extended Church-Turing Thesis more directly than by building a universal quantum computer.
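
For readers wondering where BosonSampling's hardness comes from: the device's output amplitudes are permanents of submatrices of a unitary, and the permanent -- unlike the determinant -- has no known polynomial-time algorithm. Ryser's inclusion-exclusion formula, essentially the best exact method known, is already exponential (a sketch of mine):

    from itertools import combinations

    def permanent(M):
        """Ryser's formula: O(2**n * n**2) for an n x n matrix, versus
        O(n**3) for the determinant. This gap is what a classical
        simulation of BosonSampling would have to overcome."""
        n = len(M)
        total = 0.0
        for r in range(1, n + 1):
            for cols in combinations(range(n), r):
                prod = 1.0
                for row in M:
                    prod *= sum(row[c] for c in cols)
                total += (-1) ** (n - r) * prod
        return total

    assert permanent([[1, 2], [3, 4]]) == 1 * 4 + 2 * 3   # = 10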

Biography
Scott Aaronson is the TIBCO Career Development Associate Professor of Electrical Engineering and Computer Science at MIT. His research focuses on the capabilities and limits of quantum computers, and computational complexity theory more generally. His book, "Quantum Computing Since Democritus," was recently published by Cambridge University Press; he's also written about quantum computing for Scientific American and the New York Times. He's received the National Science Foundation's Alan T. Waterman Award, as well as MIT's Junior Bose Award for Excellence in Teaching.

Back to top

Platforms, practices, politics: Towards an open history of social media

Jean Burgess, Queensland U of Technology
Wednesday, March 27, 2013
4:00 PM – 5:00 PM
Watch the video

Description
Social media has been with us as a mainstream phenomenon for barely a decade now. That period has seen multiple, distinct paradigm shifts in the business models, uses, and discourses surrounding social media, as well as in approaches to conducting research on and through particular social media platforms. In this paper I draw on recent attempts within media, communication and cultural studies to go beyond static, single-platform snapshots and to develop more synthesized, general accounts of how social media has evolved since the early 2000s. I show how we might identify patterns of change across platforms and over time, and discuss the practical and conceptual challenges of opening up these short but dynamic histories of the proprietary web.

Biography
Jean Burgess is an Associate Professor of Digital Media Studies and Deputy Director of the ARC Centre of Excellence for Creative Industries & Innovation (CCI) at Queensland University of Technology, Australia. Her research focuses on the uses, politics and methodological implications of social and mobile media platforms.

Back to top

The Virtual Lab

Duncan Watts, Microsoft Research New York City
Wednesday, December 5, 2012
4:00 PM – 5:00 PM
Watch the video

Description
Crowdsourcing sites like Amazon's Mechanical Turk are increasingly being used by researchers to construct "virtual labs" in which they can conduct behavioral experiments. In this talk, I describe some recent experiments that showcase the advantages of virtual over traditional physical labs, as well as some of the limitations. I then discuss how this relatively new experimental capability may unfold in the near future, along with some implications for social and behavioral science.

Biography
Duncan Watts is a principal researcher at Microsoft Research and a founding member of the MSR-NYC lab. From 2000 to 2007, he was a professor of Sociology at Columbia University, and then, prior to joining Microsoft, a principal research scientist at Yahoo! Research, where he directed the Human Social Dynamics group. He has also served on the external faculty of the Santa Fe Institute and is currently a visiting fellow at Columbia University and at Nuffield College, Oxford.
His research on social networks and collective dynamics has appeared in a wide range of journals, from Nature, Science, and Physical Review Letters to the American Journal of Sociology and the Harvard Business Review. He is also the author of three books, including Six Degrees: The Science of a Connected Age (W.W. Norton, 2003) and Everything is Obvious*: Once You Know The Answer (Crown Business, 2011).
He holds a B.Sc. in Physics from the Australian Defence Force Academy, from which he also received his officer’s commission in the Royal Australian Navy, and a Ph.D. in Theoretical and Applied Mechanics from Cornell University.

Back to top

The Applicant Auction for Top-Level Domains: Using an auction to efficiently resolve conflicts among applicants

Peter Cramton, University of Maryland
Wednesday, November 28, 2012
4:00 PM – 5:00 PM
Watch the video

Description
The prospect of using auctions to resolve conflicts among parties competing for the same top-level internet domains is described. In such an auction the winner’s payment is divided among the losers, whereas if the conflict is not resolved then ICANN will conduct an auction and retain the winner’s payment. For first-price and second-price sealed-bid auctions, we characterize equilibrium bidding strategies and provide examples, assuming bidders’ valuations are independent and either symmetrically or asymmetrically distributed. The qualitative properties of the equilibria reveal novel features; for example, in a second-price auction a bidder might bid more than her valuation in order to drive up the winner’s payment. Even so, examples indicate that in symmetric cases a bidder’s expected profit is the same in the two auction formats. We then test in the experimental lab two auction formats that extend the setting from a single domain to the actual setting with many domains. The first format is a sequential first-price sealed-bid auction; the second is a simultaneous ascending clock auction. The framing and subjects were chosen to closely match the actual setting. Subjects were PhD students at the University of Maryland in Economics, Computer Science, and Computer Engineering, with training in game theory and auction theory. Each subject played the role of an actual company (e.g., Google) and bid for domains (e.g., .book) consistent with the company’s applications. Subjects were given instructions explaining the auction and the equilibrium theory for the single-item case in relevant examples. Both formats achieved auction efficiencies of 98% in the lab. This high level of efficiency is especially remarkable in the case with asymmetric distributions—the format performed better than the simple single-item equilibrium despite the presence of budget constraints in the lab. This experiment, together with previous results on the robustness of ascending auctions in general and simultaneous ascending clock auctions in particular, suggests that the simultaneous ascending clock auction will perform best in this setting.
See “Applicant Auctions for Internet Top-Level Domains: Resolving Conflicts Efficiently” (with Ulrich Gall, Pacharasut Sujarittanonta, and Robert Wilson), Working Paper, University of Maryland, 11 November 2012. [Presentation]
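
A minimal sketch of the settlement mechanics for a single contested domain (illustrative only, not the experimental software; the paper also considers other sharing rules): in the second-price variant, the highest bidder wins, pays the second-highest bid, and that payment is split among the losing applicants -- which is precisely what makes losing here gentler than losing an ICANN auction.

    def applicant_auction(bids):
        """Second-price sealed-bid applicant auction for one domain.

        bids: dict applicant -> bid. Returns the winner and each
        applicant's payoff; the winner's payment (the second-highest
        bid) is divided equally among the losers.
        """
        ranked = sorted(bids, key=bids.get, reverse=True)
        winner, price = ranked[0], bids[ranked[1]]
        share = price / (len(ranked) - 1)
        payoffs = {a: share for a in ranked[1:]}   # losers split the payment
        payoffs[winner] = -price
        return winner, payoffs

    winner, payoffs = applicant_auction({"A": 9.0, "B": 7.0, "C": 4.0})
    # A wins and pays 7.0; B and C each receive 3.5.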

Biography
Peter Cramton is Professor of Economics at the University of Maryland. Since 1983, he has conducted research on auction theory and practice. This research appears in the leading economics journals. The main focus is the design of auctions for many related items. Applications include spectrum auctions, electricity auctions, and treasury auctions. On the practical side, he is Chairman of Market Design Inc., an economics consultancy founded in 1995, focusing on the design of auction markets. He also is Founder and Chairman of Cramton Associates LLC, which since 1993 has provided expert advice on auctions and market design. Since 2001, he has played a lead role in the design and implementation of electricity auctions in France and Belgium, gas auctions in Germany, and the world’s first auction for greenhouse gas emissions held in the UK in 2002. He has advised numerous governments on market design and has advised dozens of bidders in high-stake auction markets. Since 1997, he has advised ISO New England on electricity market design and was a lead designer of New England’s forward capacity auction. He led the design of electricity and gas markets in Colombia, including the Firm Energy Market, the Forward Energy Market, and the Long-term Gas Market. Since June 2006, he played a leading role in the design and development of Ofcom’s spectrum auctions in the UK. He has advised the UK, the US, and Australia on greenhouse gas auction design. He led the development of the FAA’s airport slot auctions for the New York City airports. He received his B.S. in Engineering from Cornell University and his Ph.D. in Business from Stanford University.

Back to top

Miku: Virtual Idol as Media Platform

Ian Condry, MIT
Wednesday, November 14, 2012
4:00 PM – 5:00 PM
Watch the video

Description
Miku Hatsune is Japan's number one virtual idol. Her songs are sold online, she is one of the most requested karaoke downloads, she promotes Toyota in TV commercials, she performs concerts with live bands -- and she doesn't exist. Miku is a voice in music synthesizer software, and her community of users has created something new in the world of popular culture: a crowd-sourced celebrity. Based on fieldwork in Japan and the US, this talk will explore the dynamics of the social in media and the value of collaborative creativity.

Biography
Ian Condry is a cultural anthropologist and associate professor of Comparative Media Studies at MIT. His forthcoming book The Soul of Anime: Collaborative Creativity and Japan's Media Success Story (January 2013, Duke University Press) focuses on Japan's anime creators including participant-observation in studios, fan conventions and toy companies. His first book, Hip-Hop Japan: Rap and the Paths of Cultural Globalization (2006) is based on fieldwork in Tokyo nightclubs and recording studios. More info: http://iancondry.com

Back to top 

Music intelligence & the "Taste Profile" - What computers think of you and your music taste

Brian Whitman, The Echo Nest
Wednesday, October 31, 2012
4:00 PM – 5:00 PM
Watch the video

Description
Over 200 million people now trust an algorithm they’ve never met to listen to and discover music. But music needs a bit more care than collaborative filtering or automated editorial approaches can give, and before we let Facebook automatically make mixtapes for our crushes, we should step back and see what the potential of music analysis is and how we can give it more respect.
For the past 10 years I’ve been working on automatic music analysis, first academically and now as the co-founder and CTO of the Echo Nest, a company you’ve never heard of but one that powers most of the music discovery experiences you have on the internet today, from Spotify to Clear Channel to MTV. I’ll show how the interaction between listeners and music is being modeled today, where it is amazing and where it falls flat, and how connections are being made between your music taste and your identity.

Biography
Brian is recognized as a leading scientist in the area of music and text retrieval and natural language processing. He received his doctorate from MIT's Media Lab in 2005 and co-founded The Echo Nest to provide music recommendation, search, playlisting, fingerprinting and personalization technology based on his research to much of the online music industry. As the CTO of the Echo Nest, Brian leads new product development and focuses on future taste profile and music analytic products.

Back to top

A Model of Moral-Hazard Credit Cycles

Roger Myerson, The University of Chicago
*Special Date & Time*
Thursday, October 18, 2012

1:30 PM – 2:30 PM
No recording available

Description
This paper considers a simple model of credit cycles driven by moral hazard in financial intermediation. Investment advisors or bankers must earn moral-hazard rents, but the cost of these rents can be efficiently spread over a banker's entire career, by promising large back-loaded rewards if the banker achieves a record of consistently successful investments. The dynamic interactions among different generations of bankers can create equilibrium credit cycles with repeated booms and recessions. We find conditions when taxing workers to subsidize bankers can increase investment and employment enough to make the workers better off.
The paper is at http://home.uchicago.edu/~rmyerson/research/index.html

Biography
Roger Myerson is the Glen A. Lloyd Distinguished Service Professor of Economics at the University of Chicago. He has made seminal contributions to the fields of economics and political science. In game theory, he introduced refinements of Nash's equilibrium concept, and he developed techniques to characterize the effects of communication when individuals have different information. His analysis of incentive constraints in economic communication introduced some of the fundamental ideas in mechanism design theory, including the revelation principle and the revenue-equivalence theorem in auctions and bargaining. Professor Myerson has also applied game-theoretic tools to political science, analyzing how political incentives can be affected by different electoral systems and constitutional structures.
Myerson is the author of Game Theory: Analysis of Conflict (1991) and Probability Models for Economic Decisions (2005). He also has published numerous articles in Econometrica, the Journal of Economic Theory, Games and Decisions, and the International Journal of Game Theory, for which he served as an editorial board member for 10 years.
Professor Myerson has a PhD from Harvard University and taught for 25 years in the Kellogg School of Management at Northwestern University before coming to the University of Chicago in 2001. He is a member of the American Academy of Arts and Sciences and of the National Academy of Sciences. In 2007, he was awarded the 2007 Nobel Memorial Prize in Economic Sciences in recognition of his contributions to mechanism design theory.

Back to top

Real applications of non-real numbers

Alex Lubotzky, Hebrew U of Jerusalem
Wednesday, October 10, 2012
4:00 PM – 5:00 PM
Watch the video

Description
The system of real numbers is defined mathematically as a "completion" of the rational numbers. But this is not the only way to do it! In fact there are infinitely many other completions: the so-called "p-adic numbers". These numbers were defined for purely mathematical reasons and have been a subject of research for a century. But in the last three decades they have found 'real world' applications in computer science, the construction of networks, algorithms, etc. We will try to tell the story in a way which hopefully will make sense also to non-mathematicians.
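
A quick feel for the objects (standard definitions, my sketch): write a nonzero rational as x = p**v * a/b with a and b coprime to p; then |x|_p = p**(-v), so numbers highly divisible by p are "small." Completing the rationals under this metric, instead of the usual absolute value, yields the p-adic numbers.

    from fractions import Fraction

    def padic_valuation(x, p):
        """Largest v with p**v dividing x, extended to rationals."""
        x = Fraction(x)
        if x == 0:
            return float("inf")
        v, num, den = 0, x.numerator, x.denominator
        while num % p == 0:
            num //= p; v += 1
        while den % p == 0:
            den //= p; v -= 1
        return v

    def padic_abs(x, p):
        """|x|_p = p**(-v): high divisibility by p means p-adically small."""
        v = padic_valuation(x, p)
        return 0.0 if v == float("inf") else float(p) ** (-v)

    assert padic_abs(50, 5) == 1 / 25          # 50 = 2 * 5**2 is 5-adically small
    assert padic_abs(Fraction(1, 5), 5) == 5   # 1/5 is 5-adically large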

Biography
Alex Lubotzky is the Weil Professor of Mathematics at the Hebrew University of Jerusalem and an adjunct professor of mathematics at Yale University. He received his PhD from Bar-Ilan University in 1980. Following army service, he joined the Hebrew University in 1983. His main area of research is group theory, which he likes to combine with other areas like geometry, number theory, combinatorics and computer science. One of his best-known works is the construction of Ramanujan graphs (which are optimal expanders), jointly with Phillips and Sarnak. This opened a world of connections between graph theory and representation theory. Lubotzky is an Honorary Foreign Member of the American Academy of Arts and Sciences, and in 2006 he received an honorary degree from the University of Chicago for his contributions to modern mathematics.

Back to top

Diffusion of Microfinance

Matt Jackson, Stanford
Wednesday, October 3, 2012
4:00 PM – 5:00 PM
Watch the video

Description

We examine how participation in a microfinance program diffuses through social networks, using detailed demographic, social network, and participation data from 43 villages in South India. We exploit exogenous variation in the importance (in a network sense) of the people who were first informed about the program, the "injection points." Microfinance participation is significantly higher when the injection points have higher eigenvector centrality. We also estimate structural models of diffusion that allow us to (i) determine the relative roles of basic information transmission versus other forms of peer influence, and (ii) distinguish information passing by participants and non-participants. We find that participants are significantly more likely to pass information on to friends and acquaintances than informed non-participants. However, information passing by non-participants is still substantial and significant, accounting for roughly one-third of informedness and participation. We also find that, once we have properly conditioned on an individual being informed, her decision to participate is not significantly affected by the participation of her acquaintances.
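
Eigenvector centrality -- the injection-point statistic highlighted above -- is the leading eigenvector of the network's adjacency matrix, computable by power iteration (a standard sketch, not the paper's code):

    import numpy as np

    def eigenvector_centrality(A, iters=200):
        """Power iteration on a symmetric nonnegative adjacency matrix A:
        repeatedly push each node's score to its neighbors and renormalize;
        converges to the leading eigenvector for connected, non-bipartite
        graphs."""
        x = np.ones(A.shape[0])
        for _ in range(iters):
            x = A @ x
            x /= np.linalg.norm(x)
        return x

    # Toy village: node 0 is the well-connected "injection point."
    A = np.array([[0, 1, 1, 1],
                  [1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [1, 0, 0, 0]], dtype=float)
    print(eigenvector_centrality(A))   # node 0 scores highest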

Biography
Matthew O. Jackson is the Eberle Professor of Economics at Stanford University and an external faculty member of the Santa Fe Institute and a fellow of CIFAR. Jackson's research interests include game theory, microeconomic theory, and the study of social and economic networks, including diffusion, learning, and network formation. He was at Northwestern and Caltech before joining Stanford, and has a PhD from Stanford and BA from Princeton. Jackson is a Fellow of the Econometric Society and the American Academy of Arts and Sciences, and former Guggenheim Fellow.

Back to top

Duolingo: Learn a Language for Free While Helping to Translate the Web

Luis von Ahn, Carnegie Mellon University
Wednesday, September 19, 2012
4:00 PM – 5:00 PM
Watch the video

Description
I want to translate the Web into every major language: every webpage, every video, and, yes, even Justin Bieber's tweets. With its content split up into hundreds of languages -- and with over 50% of it in English -- most of the Web is inaccessible to most people in the world. This problem is pressing, now more than ever, with millions of people from China, Russia, Latin America and other quickly developing regions entering the Web. In this talk, I introduce my new project, called Duolingo, which aims at breaking this language barrier, and thus making the Web truly "world wide."
We have all seen how systems such as Google Translate are improving every day at translating the gist of things written in other languages. Unfortunately, they are not yet accurate enough for my purpose: Even when what they spit out is intelligible, it's so badly written that I can't read more than a few lines before getting a headache.
With Duolingo, our goal is to encourage people, like you and me, to translate the Web into their native languages.

Biography
Luis von Ahn is the A. Nico Habermann Associate Professor of Computer Science at Carnegie Mellon University. He is working to develop a new area of computer science that he calls Human Computation, which aims to build systems that combine the intelligence of humans and computers to solve large-scale problems that neither can solve alone. An example of his work is reCAPTCHA, in which over one billion people -- 15% of humanity -- have helped digitize books and newspapers. Among his many honors are a MacArthur Fellowship, a Packard Fellowship, a Sloan Research Fellowship, a Microsoft New Faculty Fellowship, the ACM Grace Hopper Award, and CMU's Herbert A. Simon Award for Teaching Excellence and Alan J. Perlis Teaching Award. He has been named one of the "50 Best Brains in Science" by Discover Magazine, one of the 50 most influential people in technology by silicon.com, and one of the "Brilliant 10 Scientists" by Popular Science Magazine.

Back to top

How users evaluate things and each other in social media

Jure Leskovec, Stanford University
Wednesday, September 5, 2012
4:00 PM – 5:00 PM
Watch the video

Description
In a variety of domains, mechanisms for evaluation allow one user to say whether he or she trusts another user, or likes the content they produced, or wants to confer special levels of authority or responsibility on them. We investigate a number of fundamental ways in which user and item characteristics affect evaluations in online settings. For example, evaluations are not unidimensional but include multiple aspects that together contribute to a user’s overall rating. We investigate methods for modeling attitudes and attributes from online reviews that help us better understand users’ individual preferences. We also examine how to create a composite description of evaluations that accurately reflects some type of cumulative opinion of a community. Natural applications of these investigations include predicting evaluation outcomes based on user characteristics, and estimating the chance of a favorable overall evaluation from a group knowing only the attributes of the group's members, but not their expressed opinions.
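
A hedged toy for the multi-aspect point (mine, assuming numpy): given per-aspect scores, ordinary least squares recovers how much each aspect contributes to the overall rating.

    import numpy as np

    # Rows are reviews; columns are aspect scores (e.g. food, service, value).
    aspects = np.array([[5, 4, 3],
                        [2, 3, 4],
                        [4, 4, 5],
                        [1, 2, 2]], dtype=float)
    overall = np.array([4.5, 2.8, 4.4, 1.5])     # overall star ratings

    X = np.column_stack([aspects, np.ones(len(aspects))])  # add an intercept
    w, *_ = np.linalg.lstsq(X, overall, rcond=None)
    predicted = X @ w    # overall ratings reconstructed from aspects alone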

Biography
Jure Leskovec is an assistant professor of Computer Science at Stanford University, where he is a member of the Info Lab and the AI Lab. His research focuses on mining large social and information networks. Problems he investigates are motivated by large-scale data, the Web and on-line media. This research has won several awards, including best paper awards at KDD (2005, 2007, 2010), WSDM (2011), ICDM (2011) and ASCE J. of Water Resources Planning and Management (2009), the ACM KDD dissertation award (2009), a Microsoft Research Faculty Fellowship (2011), an Alfred P. Sloan Fellowship (2012) and an NSF Early Career Development (CAREER) Award (2011). He received his bachelor's degree in computer science from the University of Ljubljana, Slovenia, his Ph.D. in machine learning from Carnegie Mellon University, and postdoctoral training from Cornell University. You can follow him on Twitter @jure

Back to top

Playing “Hide and Seek” - The hidden genome

Michal Linial, The Hebrew University of Jerusalem, Israel
Wednesday, August 30, 2012
4:00 PM – 5:00 PM
Watch the video

Description
The overwhelming increase in sequencing methodology has resulted in the accumulation of millions of DNA sequences. These sequences are collected from thousands of genomes that (ideally) sample the ‘tree of life’. I will briefly discuss the ‘minimal set of instructions’ by which a linear sequence is transformed into a functional protein. What happens when the statistical noise is too high and classical procedures for predicting protein sequences fail? I will focus on the challenge of identifying short proteins that remain buried in the genomic data. For illustration, I will take you on a ‘treasure hunt’ for short proteins.
Many short proteins share fuzzy features that are common to most animal venoms. I will discuss the limitations of classical tools based on string comparison or pattern finding for identifying short proteins. For this task, statistical machine learning methods have proven useful in identifying hidden bioactive sequences in several genomes. Evidently, such sequences are attractive candidates for novel therapies. The test case of short proteins illustrates the importance of a cycle that starts with a biological hypothesis, proceeds through a computational formulation, and ends with experimental validation. Finally, I will discuss our genomes with respect to our ‘partners’ (viruses, bacteria). Once the interaction of these genomes is considered, the source of the dynamic nature of human evolution becomes evident. Related publications:
• Rappoport N, Karsenty S, Stern A, Linial N, Linial M. (2012) Nucl. Acids Res. 40:D313-D320
• Rappoport N, Linial M. (2012) PLoS Comput Biol. 8:e1002364.
• Naamati G, Askenazi M, Linial M. (2010) Bioinformatics 26:i482-i488.
• Naamati G, Askenazi M, Linial M (2009) Nucl. Acids Res. 37:W363-368.
• Kaplan N, Morpurgo N, Linial M. (2007) J Mol Biol. 369:553-566.
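
In the spirit of the statistical route described above (a toy stand-in, assuming numpy; not ClanTox or any actual pipeline): represent each short sequence by its 2-mer composition, a fingerprint that survives where string alignment fails, and score queries against class centroids built from known examples.

    import numpy as np
    from itertools import product

    AMINO = "ACDEFGHIKLMNPQRSTVWY"
    KMERS = {"".join(k): i for i, k in enumerate(product(AMINO, repeat=2))}

    def kmer_vector(seq):
        """Normalized 2-mer count vector for a protein sequence."""
        v = np.zeros(len(KMERS))
        for i in range(len(seq) - 1):
            v[KMERS[seq[i:i + 2]]] += 1
        return v / max(1, len(seq) - 1)

    def nearest_centroid(query, classes):
        """classes: dict label -> list of example sequences; returns the
        label whose centroid is closest to the query's fingerprint."""
        centroids = {label: np.mean([kmer_vector(s) for s in seqs], axis=0)
                     for label, seqs in classes.items()}
        q = kmer_vector(query)
        return min(centroids, key=lambda l: np.linalg.norm(centroids[l] - q))

    classes = {"toxin-like": ["CCKWWCC", "CKWKCC"],   # hypothetical examples
               "other": ["MKLVAG", "MAGLKV"]}
    print(nearest_centroid("CKWWC", classes))         # expected: toxin-like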

Biography
Michal Linial is a Professor of Biochemistry at The Hebrew University, Jerusalem, Israel, and Director of the SCCB, the Sudarsky Center for Computational Biology. She has published over 180 scientific papers, book chapters, and reviews on diverse topics in molecular biology, cellular biology, bioinformatics, and neuroscience, and on the integration of tools to improve knowledge extraction. She heads a combined experimental and computational laboratory, and she is the founder and leader of the first established educational program in Israel for Computer Science and Life Science (from 1999) for undergraduate and graduate studies.
Her expertise in the synapse led to the study of protein families and protein-protein interactions, with a global view of protein networks and their regulation. Molecular biology, cell biology and biochemical methods are applied in all research initiated in her laboratory, and she and her group develop new computational and technological tools for large-scale cell biological research. She and her colleagues apply mass-spectrometry-based and genomics (DNA chip) approaches to study changes in neuronal development and in disease-oriented research. Solid informatics approaches underpin large database storage and the constant updating of several systems for classification, validation and functional prediction. She has been an active participant in NIH structural genomics initiatives, including target selection for the Structural Genomics effort. She and her colleagues have created several global classification systems that are used by the biomedical and biology communities, most notably ProtoNet, EVEREST, ProTarget, PANDORA and ClanTox; all of these web systems are provided as open resources for investigators.

Back to top

Dynamic Games with Asymmetric Information: A Framework for Empirical Work

Ariel Pakes, Harvard University
Wednesday, August 29, 2012
4:00 PM – 5:00 PM
Watch the video

Description
We develop a framework for the analysis of dynamic games that can be applied to firms competing in a market whose characteristics evolve over time as a probabilistic function of the actions of the firms competing in that market. Firms choose their actions to maximize their perceptions of the discounted value of the returns that will accrue to them as a result of those actions. These returns depend on both their own states and their competitors' states. The firms know their own states, but only observe imprecise signals of the states of their competitors. Our goal is to provide a framework capable of analyzing the impact of policy or environmental changes in such a setting.

Bayesian perfect Nash equilibria for environments rich enough to adequately approximate behavior have computational and informational demands that both (i) make them impossible for applied researchers to use, and (ii) make them unlikely to be the best approximation to agents' actual behavior. So we introduce an alternative notion of equilibrium that is less demanding of both agents and researchers, while still implying that agents "optimize" in a meaningful sense of that word. We show that: (i) there is an artificial intelligence algorithm that makes it relatively easy to compute (at least some of) the resultant equilibria, and (ii) it is relatively easy to use the properties of those equilibria to estimate any unknown parameters of the game.

We use the analysis of a deregulated electric utility market as an example. Two firms each own several generators and bid "supply functions" into the market in every period (a quantity supplied as an increasing function of price). An independent system operator (ISO) sums the supply curves horizontally and intersects the result with demand to determine the period's price and the quantities to be produced by each firm. A firm's cost of supplying electricity on each of its generators is increasing in the current quantity produced and stochastically increasing in the quantities produced since the last time the firm did maintenance on that generator. Firms do not know the current costs of their competitors' generators, but realize that the returns they earn from the bid on each of their generators increase the less is supplied by other generators (their own, as well as their competitors'). This provides incentives for firms to simply shut down some generators without doing maintenance, and to implicitly coordinate shutdowns across firms. Consumers pay the price through the resultant increase in the price of electricity. Joint work with Chaim Fershtman.
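
To make the ISO's clearing step concrete, here is a minimal sketch under assumed linear functional forms (the model's actual bids are richer, nondecreasing supply functions; the names and parameters here are ours, for illustration only): each firm bids a supply function q_i(p) = a_i + b_i*p, demand is d(p) = d0 - d1*p, and the ISO solves for the price at which aggregate supply equals demand.

    def clear_market(bids, d0, d1):
        """ISO clearing: sum linear supply bids q_i(p) = a_i + b_i * p
        horizontally and intersect with linear demand d(p) = d0 - d1 * p.
        Returns the period's price and each firm's quantity."""
        A = sum(a for a, b in bids)       # aggregate supply intercept
        B = sum(b for a, b in bids)       # aggregate supply slope
        p_star = (d0 - A) / (B + d1)      # solve A + B*p = d0 - d1*p
        return p_star, [a + b * p_star for a, b in bids]

    # Two firms; the second bids a flatter (less price-responsive) supply curve.
    price, quantities = clear_market(bids=[(0.0, 2.0), (5.0, 0.5)], d0=40.0, d1=1.0)
    print(price, quantities)   # 10.0 [20.0, 10.0]; supply of 30 equals demand 40 - 10

Withholding capacity shifts the aggregate supply curve left and raises p_star, which is exactly the incentive to shut down generators that the description above identifies.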

Biography
Ariel Pakes is the Steven McArthur Heller Professor of Economics in the Department of Economics at Harvard University, where he teaches courses in Industrial Organization and in Econometrics. Before coming to Harvard in 1999, he was the Charles and Dorothea Dilley Professor of Economics at Yale University (1997-99). He has held other tenured positions at Yale (1988-97), the University of Wisconsin (1986-88), and the University of Jerusalem (1985-86). Pakes received his doctorate from Harvard University in 1980 and stayed at Harvard as a Lecturer until he took up a position in Jerusalem in 1981. Pakes received the award for the best graduate student advisor at Yale in 1996 and was elected a fellow of the American Academy of Arts and Sciences in 2002. He received the Frisch Medal of the Econometric Society in 1986, was elected a fellow of that society in 1988, and gave the Fisher-Schultz lecture at the World Congress of that society in 2005. He was the Distinguished Fellow of the Industrial Organization Society in 2007. He has been on the editorial boards of the RAND Journal of Economics, Econometrica, Economics Letters, and the Journal of Economic Dynamics and Control. He is also a research associate of the NBER, has been a member of the AEA Committee on Government Statistics, chair of the AEA Census Advisory Panel, and co-editor of a Proceedings of the National Academy of Sciences issue on "Science, Technology and the Economy". Professor Pakes' research has been in Industrial Organization (I.O.), the economics of technological change, and econometric theory. He and his co-authors have focused on developing techniques that allow us to analyze market responses to policy and environmental changes. These include: econometric work on how to estimate demand and cost systems and then use the estimated parameters to analyze equilibrium responses in different institutional settings; empirical work that uses these techniques to analyze market outcomes in different industries; and theoretical work developing frameworks for the applied analysis of dynamic oligopolies (with and without collusive possibilities, and with and without asymmetric information).

Back to top

Gender, Competitiveness and Career Choices

Muriel Niederle, Stanford University
Wednesday, August 22, 2012
4:00 PM – 5:00 PM
Watch the video

Description
Gender differences in competitiveness are often discussed as a potential explanation for gender differences in labor market outcomes. We correlate an incentivized measure of competitiveness with the first important career choice of secondary school students in the Netherlands. At the age of 15, these students must pick one of four study profiles, which vary in how prestigious they are. While boys and girls have very similar levels of academic ability, boys are substantially more likely than girls to choose the more prestigious profiles. We find that 25% of this gender difference can be attributed to gender differences in competitiveness. This lends support to the extrapolation of laboratory findings on competitiveness to labor market settings. Joint work with Hessel Oosterbeek and Thomas Buser.
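
For readers unfamiliar with how an "X% of the gap is attributed to Z" figure is typically produced, here is a generic sketch of one standard approach: compare the gender coefficient in outcome regressions with and without the competitiveness measure. The data, variable names, and effect sizes below are simulated for illustration; this is not the paper's actual estimation procedure.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000
    male = rng.integers(0, 2, n).astype(float)
    # Simulated mediator: boys more competitive on average (illustrative assumption).
    competitive = 0.5 * male + rng.normal(size=n)
    # Simulated outcome: profile prestige depends on gender and competitiveness.
    prestige = 0.3 * male + 0.4 * competitive + rng.normal(size=n)

    def ols_coefs(y, columns):
        """Least-squares coefficients of y on an intercept plus the given columns."""
        X = np.column_stack([np.ones(len(y))] + list(columns))
        return np.linalg.lstsq(X, y, rcond=None)[0]

    gap_raw = ols_coefs(prestige, [male])[1]                 # gender gap, mediator omitted
    gap_ctrl = ols_coefs(prestige, [male, competitive])[1]   # gender gap, mediator included
    print(f"share attributed to competitiveness: {1 - gap_ctrl / gap_raw:.0%}")

With the simulated effect sizes above the share comes out near 40%; the paper's 25% figure is, of course, estimated from real choice data.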

Biography
Muriel is a Professor of Economics at Stanford. In her own words: I am an experimental economist, and as such have some experiments that fall outside my main areas of gender and market design. Most recently, I got interested in k-level models. The first strand of literature I am working on can be broadly thought of as market design. While that includes studying markets that have been designed (such as the National Residency Matching Market), I am also interested in redesigning markets, or in adding features such as signaling to help markets like the economics job market work better. Most recently, I have been working with the San Francisco Unified School District to help redesign their school choice system. In market design, I have used theory and experiments, as well as data collected by others. My second strand of work is on gender differences. So far, I have only experimental papers in that area, showing that women may not be as competitive as men, especially when they have to compete against men.

Back to top

Wireless Spectrum Sharing: Opportunities for Interdisciplinary Research

Anant Sahai, University of California, Berkeley
Wednesday, August 8, 2012
Watch the video

Description
Under the current static system of frequency assignment, a great deal of spectrum remains underused. This seeming waste represents an opportunity for frequency-agile cognitive radios to improve performance. Understanding this opportunity forces us to take a closer look at the whole question of "regulatory overhead." Until recently, cognitive radios represented the "medical marijuana" of wireless research: rhetoric on both sides characterized by distrust, wishful thinking, and vested interests, while the underlying "technology" in question was still very much illegal. Regulatory changes were required before research in this area could truly impact practice. Recent steps taken by the FCC in the TV whitespaces demonstrate that the government is serious about change, and just last month, the President's Council of Advisors on Science and Technology (PCAST) released a report that advocated expanding this approach beyond the TV bands. The problem, however, is that while we have a rough sense of what we want to achieve at a high level, as a community we do not yet know what this regulatory change should entail at the detailed level, or, more troubling, even how we would recognize the right answer if we saw it. The full scope of the problem weaves together information theory, signal processing, economics, and law in a nontrivial way (and probably also cryptography and social networks).

In this talk, I will give an introduction to the opportunity in the context of the TV whitespaces. I'll use some simulations based on real FCC data and realistic propagation models to give a quantitative sense of the tradeoffs involved, and then show idealized models that enable a conceptual understanding of the "overhead" in the context of spectrum sensing. I will then elucidate what "light-handed regulation" could mean in the cognitive radio context, giving a simple criminal-law-inspired model to reveal something about the overhead and tradeoffs involved. I'll close with some interesting future research directions.

Biography
Anant Sahai (BS '94 UC Berkeley, MS '96 MIT, PhD '01 MIT) is an Associate Professor in the Department of Electrical Engineering and Computer Sciences at the University of California at Berkeley, where he joined the faculty in 2002. He is a member of the Berkeley Wireless Research Center (BWRC) and the Wireless Foundations Center (WiFo). In 2001, he spent a year at the wireless startup Enuvis developing adaptive signal processing algorithms for extremely sensitive GPS receivers implemented using software defined radio. Prior to that, he was a graduate student at the Laboratory for Information and Decision Systems (LIDS) at the Massachusetts Institute of Technology (MIT). His research interests are in wireless communication, decentralized control, and information theory. He is particularly interested in delay, feedback, and complexity from an information-theoretic perspective and in cognitive radio from a regulatory perspective.

Back to top

The Wonders of the Probabilistic Method

Nati Linial, Hebrew University
Wednesday, August 8, 2012
Watch the video

Description
I will try to explain some key principles in modern mathematics that combine ideas from combinatorics and probability. In particular, I will emphasize the surprising role that probability theory plays in the study of combinatorics: how it allows us to investigate complicated graphs and networks without having to reveal all the specific details of individual large graphs or networks. This talk is intended for a general audience. The necessary mathematical background is at the level of a good high-school education.
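
To give one concrete taste of the probabilistic method (a standard textbook example of our choosing, not necessarily one from the talk): in a uniformly random red/blue coloring of the edges of the complete graph K_n, the expected number of monochromatic k-cliques is C(n, k) * 2^(1 - C(k, 2)). If this expectation is below 1, some coloring has no monochromatic k-clique at all, so the Ramsey number satisfies R(k, k) > n; probability proves a statement that contains no randomness.

    from math import comb

    def erdos_bound_holds(n, k):
        """True if the expected number of monochromatic k-cliques in a random
        2-coloring of K_n, C(n, k) * 2**(1 - C(k, 2)), is below 1."""
        return comb(n, k) * 2 ** (1 - comb(k, 2)) < 1

    for k in (3, 4, 5):
        n = k
        while erdos_bound_holds(n + 1, k):
            n += 1
        print(f"R({k},{k}) > {n}")   # prints R(3,3) > 3, R(4,4) > 6, R(5,5) > 11
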

Biography
Nati Linial is a professor of computer science at the Hebrew University of Jerusalem. In his own words: I got my undergraduate education in mathematics at the Technion. I did my PhD at the Hebrew University with a thesis in graph theory. Following a postdoctoral period in the UCLA mathematics department, I joined the faculty of the Hebrew University. My main areas of interest are combinatorics, theoretical computer science, and bioinformatics. I have had about 30 graduate students so far (currently I have seven PhD students and one MSc student). I am married to Michal, a life-science professor at the Hebrew University. We have three children who are, respectively, an artist, a poet, and a budding physicist. I like long-distance running, reading, and classical music.

Back to top

Lines, Shading, and the Perception of 3D Shape

Ted Adelson, Massachusetts Institute of Technology
Wednesday, August 1, 2012
Watch the video

Description
Humans can easily see 3D shape from single 2D images, exploiting multiple kinds of information. There are several subfields (in both human vision and computer vision) devoted to the study of particular cues to 3D shape, such as shading, texture, and contours. However, the resulting algorithms remain specialized and fragile (in contrast with the flexibility and robustness of human vision). Recent work in graphics and psychophysics has demonstrated the importance of local orientation structure in conveying 3D shape. This information is fairly stable and reliable across rendering conditions. We have developed an exemplar-based system (which we call Shape Collage) that learns to associate image patches with corresponding 3D shape patches.
We train it with synthetic images of “blobby” objects rendered in various ways, including solid texture, Phong shading, and line drawings. Given a new image, it finds the best candidate scene patches and assembles them into a coherent interpretation of the object shape. Our system is the first that can retrieve the shape of naturalistic objects from line drawings. The same system, without modification, works for shape-from-texture and can also do shape-from-shading without requiring Lambertian surfaces. Thus disparate types of image information can be processed by a single mechanism to extract 3D shape.
(Collaborative work with Forrester Cole, Phillip Isola, Fredo Durand, and William Freeman.)
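
Below is a minimal sketch of the exemplar lookup at the core of such a system, under our own simplifications: patches are compared as flattened grayscale vectors with a brute-force L2 nearest-neighbor search, and the assembly step that stitches candidates into a coherent shape is omitted. The real system's features and inference are considerably richer.

    import numpy as np

    def nearest_shape_patches(query_patches, exemplar_patches, exemplar_shapes):
        """For each query image patch, return the 3D shape patch paired with the
        nearest exemplar image patch (squared L2 distance on flattened pixels)."""
        Q = np.stack([p.ravel() for p in query_patches])      # (m, d)
        E = np.stack([p.ravel() for p in exemplar_patches])   # (n, d)
        d2 = ((Q[:, None, :] - E[None, :, :]) ** 2).sum(-1)   # (m, n) distances
        return [exemplar_shapes[i] for i in d2.argmin(axis=1)]

    # Tiny usage example with random 8x8 patches (purely illustrative).
    rng = np.random.default_rng(1)
    exemplar_imgs = [rng.random((8, 8)) for _ in range(100)]
    exemplar_shapes = [rng.random((8, 8)) for _ in range(100)]  # stand-ins for depth/normal maps
    queries = [rng.random((8, 8)) for _ in range(5)]
    print(len(nearest_shape_patches(queries, exemplar_imgs, exemplar_shapes)))  # 5

Because the lookup is purely local, the same mechanism works whether the patches come from line drawings, texture, or shading; only the training exemplars change.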

Back to top

Random Graph Models of Kidney Exchange

Al Roth, Harvard University
Wednesday, July 25, 2012
Watch the video

Description
Kidney exchange involves creating a non-monetary marketplace through which patients with incompatible donors can take part in exchanges, so that each patient in the exchange receives a transplant from a compatible donor. I'll recount the brief history of kidney exchange and explain some of the technical problems that have been overcome, and some that remain.
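
To make the marketplace concrete, here is a minimal sketch (our own simplification, not the clearing algorithms used in practice) of the combinatorial core: each incompatible patient-donor pair is a node, a directed edge i -> j means pair i's donor is compatible with pair j's patient, and a short directed cycle is a feasible exchange. Fielded programs optimize over cycles and chains with integer programming; this sketch just enumerates 2- and 3-way cycles.

    def short_cycles(compat):
        """Enumerate 2- and 3-way exchange cycles in a compatibility digraph.
        compat[i] is the set of pairs whose patient can receive from pair i's donor.
        Each cycle is reported once, starting from its lowest-numbered node."""
        cycles = []
        for i in sorted(compat):
            for j in compat[i]:
                if j <= i:
                    continue
                if i in compat[j]:                            # 2-way: i -> j -> i
                    cycles.append((i, j))
                for k in compat[j]:
                    if k > i and k != j and i in compat[k]:   # 3-way: i -> j -> k -> i
                        cycles.append((i, j, k))
        return cycles

    # Three hypothetical pairs: 0 and 1 can swap, and 0 -> 1 -> 2 -> 0 is a 3-way exchange.
    compat = {0: {1}, 1: {0, 2}, 2: {0}}
    print(short_cycles(compat))   # [(0, 1), (0, 1, 2)]

Cycles are kept short because all transplants in a cycle must be performed simultaneously; this constraint is one source of the technical problems the talk discusses.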

Back to top


Barack Obama and the politics of social media for national policy-making

James Katz, Boston University
Wednesday, October 15

Description
Social media help people do almost everything, from meeting new friends and finding new restaurants to overthrowing dictatorships. This includes political campaigning; one need look no further than Barack Obama's successful presidential campaigns to see how these communication technologies can alter the way politics is conducted. Yet social media have not had much impact on the setting of national policy as part of regular administrative routines. This is the case despite the fact that, since his election in 2008, President Obama has on several occasions proclaimed that he wanted his administration to draw on social media to make the federal government run better. While there have been some modifications to governmental procedures due to the introduction of social media, the Obama administration's practices have fallen far short of its leader's audacious vision. Despite voluminous attention to social media in other spheres of activity, there is little to point to in terms of successfully drawing on the public to help set national policies. What might account for this? I try to answer this question in my talk by exploring the attempts by the Obama White House to use social media tools and the consequences arising from those attempts. I also suggest some potential reasons behind the particular uses and outcomes that have emerged in presidential-level social media outreach. As part of my conclusion, I outline possible future directions.

Biography
James E. Katz, Ph.D., is the Feld Family Professor of Emerging Media at Boston University’s College of Communication where he directs its Center for Mobile Communication Studies and Division of Emerging Media. His research on the internet, social media and mobile communication has been internationally recognized, and he is frequently invited to address high-level industry, governmental and academic groups on his research findings. His latest book, with Barris and Jain, is The Social Media President: Barack Obama and the Politics of Citizen Engagement on which this talk is based.

Back to top

Where

Microsoft Research New England
First Floor Conference Center
One Memorial Drive, Cambridge, MA


Arrival Guidance

Upon arrival, be prepared to show a picture ID and to sign the Building Visitor Log at the Lobby Floor Security Desk. Tell the staff the name of the event you are attending and ask them to direct you to the appropriate floor. The talks are typically held in the First Floor Conference Center, but the location occasionally changes.

*Hospitality Notice: Microsoft Research may provide hospitality at this event. Because different universities and legal jurisdictions have differing rules, we rely on you to know whether acceptance of this invitation would be inconsistent with those rules. Accordingly, by accepting our invitation, you confirm that doing so is compliant with your institution's policies.

If you have any questions or concerns, please send us an email.


Past Speaker Series at the MSR Lab

For past Microsoft Research colloquium series presentations, please visit: