Sham M. Kakade
Principal Research Scientist
The fourth annual New England Machine Learning Day will be held May 18th, 2015. The event will bring together local academics and researchers in machine learning and its applications. As in previous years, there will be a lively poster session during lunch.
The goal of this June 2015 workshop is to foster communication among the communities working broadly in data science, with a particular focus on stimulating interaction between statisticians, computer scientists, and domain experts, so that important scientific problems involving big and complex data can be attacked ambitiously.
I am a principal research scientist at Microsoft Research, New England, a lab in Cambridge, MA. Previously, I was an associate professor in the Department of Statistics at the Wharton School, University of Pennsylvania (2010-2012), and before that an assistant professor at the Toyota Technological Institute at Chicago. Before this, I did a postdoc in the Computer and Information Science department at the University of Pennsylvania under the supervision of Michael Kearns. I completed my PhD at the Gatsby Unit, where my advisor was Peter Dayan. Before Gatsby, I was an undergraduate at Caltech, where I did my BS in physics.
The focus of my work is on designing (and implementing) both statistically and computationally efficient algorithms for machine learning, statistics, and artificial intelligence.
Recently, I have been focusing on three areas:
1) designing effective algorithms for estimating probabilistic models with latent structure (such as HMMs, LDA, and mixtures of Gaussians);
2) efficient optimization algorithms in statistical settings (i.e., how fast can we optimize when we are interested in statistical accuracy rather than numerical accuracy?);
3) understanding what is responsible for the recent (and remarkable) successes in deep learning (this last question is largely an empirical exploration, in both computer vision and speech applications).
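As a generic illustration of the first area (a textbook sketch, not code from my own work): the classical way to estimate a latent variable model such as a mixture of Gaussians is the EM algorithm, which alternates between inferring the hidden component assignments and re-estimating the parameters. The synthetic data and initial values below are arbitrary choices for the example.

```python
import numpy as np

# Minimal EM for a 1-D mixture of two Gaussians (illustrative only).
# EM can get stuck in local optima; part of the appeal of spectral/tensor
# methods is that they come with global estimation guarantees.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])

mu = np.array([-1.0, 1.0])    # initial component means
sigma = np.array([1.0, 1.0])  # initial component std devs
w = np.array([0.5, 0.5])      # initial mixing weights

for _ in range(100):
    # E-step: posterior responsibility of each component for each point
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    resp = w * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from responsibility-weighted data
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    w = nk / len(x)
```

On this well-separated data, the estimated means converge close to the true values of -2 and 3.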
More broadly, I am currently interested in probability theory, algebraic and tensor methods, signal processing/information theory, and numerous domain specific settings (with a recent focus on natural language processing and computer vision).
My previous body of work has addressed problems in unsupervised (and representational) learning, concentration of measure, reinforcement learning, statistical learning theory, optimization, algorithmic game theory, and economics. As a graduate student, I focused on reinforcement learning and computational neuroscience. My thesis was on sample complexity issues in reinforcement learning.
Former Interns (in reverse chronological order)
Email: skakade [at] microsoft [dot] com