Speakers Ben Taskar, Scott Yih, and Raj Rao
Host Ofer Dekel
Affiliations UW CSE, Microsoft Corporation
Date recorded 18 October 2013
2:30 Ben Taskar (UW CSE), "Probabilistic Models of Diversity: Determinantal Point Processes"
3:00 Scott Yih (MSR), "Multi-Relational Latent Semantic Analysis"
3:30 Raj Rao (UW CSE), "Opportunities and Challenges for Machine Learning in Brain-Computer Interfacing"
Probabilistic Models of Diversity: Determinantal Point Processes in Machine Learning, Ben Taskar (UW CSE)
Many real-world problems involve negative interactions; we might want search results to be diverse, sentences in a summary to cover distinct aspects of the subject, or objects in an image to occupy different regions of space. However, traditional structured probabilistic models tend to deal poorly with these kinds of situations; Markov random fields, for example, become intractable even to approximate. Determinantal point processes (DPPs), which arise in random matrix theory and quantum physics, behave in a complementary fashion: while they cannot encode positive interactions, they define expressive models of negative correlations that come with surprising and exact algorithms for many types of inference, including conditioning, marginalization, and sampling. I'll present our recent work on a novel factorization and dual representation of DPPs that enables efficient and exact inference for exponentially-sized structured sets. We develop an exact inference algorithm for DPPs conditioned on subset size and derive efficient parameter estimation for DPPs from several types of observations, as well as approximation algorithms for large-scale non-linear DPPs. I'll illustrate the advantages of DPPs on several natural language and computer vision tasks: document summarization, image search, and multi-person pose estimation in images. Joint work with Alex Kulesza, Jennifer Gillenwater, Raja Affandi and Emily Fox.
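The core computation the abstract describes can be sketched in a few lines: a DPP assigns each subset a probability proportional to the determinant of the kernel submatrix it indexes, and the normalizer is det(L + I). The feature vectors and kernel below are illustrative assumptions for a toy three-item ground set, not material from the talk.

```python
import numpy as np
from itertools import combinations

# Toy ground set: three items described by feature vectors.
# Items 0 and 1 are nearly identical; item 2 is very different.
feats = np.array([[1.00, 0.00],
                  [0.99, 0.14],
                  [0.00, 1.00]])
L = feats @ feats.T  # similarity kernel of the DPP

def dpp_prob(L, subset):
    """Unnormalized DPP probability of a subset: the determinant
    of the principal submatrix of L that the subset indexes."""
    if not subset:
        return 1.0  # determinant of the empty matrix
    return float(np.linalg.det(L[np.ix_(subset, subset)]))

# Normalizer identity: summing over all 2^n subsets gives det(L + I).
Z = float(np.linalg.det(L + np.eye(len(L))))
total = sum(dpp_prob(L, list(s))
            for r in range(len(L) + 1)
            for s in combinations(range(len(L)), r))

p_similar = dpp_prob(L, [0, 1]) / Z  # two near-duplicate items
p_diverse = dpp_prob(L, [0, 2]) / Z  # two diverse items
# Near-duplicate rows make the submatrix nearly singular, so its
# determinant collapses toward zero: the diverse pair is far likelier.
```

This determinant structure is what makes the exact inference mentioned in the abstract possible: conditioning and marginalization also reduce to determinant manipulations of the kernel.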
Multi-Relational Latent Semantic Analysis, Scott Yih (MSR)
We present Multi-Relational Latent Semantic Analysis (MRLSA), which generalizes Latent Semantic Analysis (LSA). MRLSA provides an elegant approach to combining multiple relations between words by constructing a 3-way tensor. As in LSA, a low-rank approximation of the tensor is derived using a tensor decomposition. Each word in the vocabulary is thus represented by a vector in the latent semantic space, and each relation is captured by a latent square matrix. The degree to which two words hold a specific relation can then be measured through simple linear algebraic operations. We demonstrate that by integrating multiple relations from both homogeneous and heterogeneous information sources, MRLSA achieves state-of-the-art performance on existing benchmark datasets for two relations, antonymy and is-a.
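The recipe the abstract outlines can be sketched on a toy example: stack one word-by-word matrix per relation into a 3-way tensor, derive a shared low-rank word space, and score a relation between two words with a bilinear form. The vocabulary, the relation data, the latent dimension, and the SVD-of-the-unfolding decomposition below are illustrative assumptions, not the paper's exact factorization.

```python
import numpy as np

# Toy vocabulary and two relations: synonymy and antonymy.
vocab = ["hot", "cold", "warm", "cool"]
n = len(vocab)

syn = np.zeros((n, n))
ant = np.zeros((n, n))
for i, j in [(0, 2), (1, 3)]:          # hot~warm, cold~cool
    syn[i, j] = syn[j, i] = 1.0
for i, j in [(0, 1), (2, 3), (0, 3)]:  # hot/cold, warm/cool, hot/cool
    ant[i, j] = ant[j, i] = 1.0
np.fill_diagonal(syn, 1.0)             # a word is its own synonym

T = np.stack([syn, ant])               # 3-way tensor: relation x word x word
SYN, ANT = 0, 1

# Shared low-rank word space from the mode-1 unfolding (one row per word).
unfold = T.transpose(1, 0, 2).reshape(n, -1)
U, _, _ = np.linalg.svd(unfold, full_matrices=False)
k = 2                                  # latent dimension (toy choice)
W = U[:, :k]                           # one latent vector per word

# One latent square matrix per relation, so slice_r ~= W @ R[r] @ W.T
R = [W.T @ T[r] @ W for r in range(T.shape[0])]

def score(rel, w1, w2):
    """Degree to which w1 and w2 stand in the given relation."""
    i, j = vocab.index(w1), vocab.index(w2)
    return float(W[i] @ R[rel] @ W[j])
```

With this layout, `score(ANT, "hot", "cold")` comes out well above `score(ANT, "hot", "warm")`, mirroring the abstract's point that relation strength reduces to simple linear algebra in the shared latent space.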
Opportunities and Challenges for Machine Learning in Brain-Computer Interfacing, Raj Rao (UW CSE)
The field of brain-computer interfacing has seen rapid advances in recent years, with applications ranging from cochlear implants for the deaf to brain-controlled prosthetic arms for the paralyzed. This talk will provide a brief overview of the various types of brain-computer interfaces (BCIs) and the techniques they use for mapping brain signals to control outputs. I will then highlight some opportunities as well as challenges for machine learning in facilitating the transition of BCIs from the laboratory to the real world.
©2013 Microsoft Corporation. All rights reserved.