Videos
MobileFusion: Create 3D scans with your mobile phone
00:02:16 · 24 August 2015

MobileFusion is a research project that turns ordinary mobile phones into 3D scanners without any additional hardware. The resulting 3D scans are detailed enough for 3D printing or use in augmented-reality games.

Stochastic Methods for Complex Performance Measures: A Tale of Two Families
Harikrishna Narasimhan
00:20:03 · 26 June 2015
Ordered Stick-breaking Prior for Sequential MCMC Inference of Bayesian Non-Parametric Models
Mrinal Das
00:17:10 · 26 June 2015
X1-Locally Non-linear Embeddings for Extreme Multi-label Learning
Kush Bhatia
00:22:39 · 25 June 2015
Extreme Multi-Label Classification
Yashoteja Prabhu
00:23:28 · 25 June 2015
An Introduction to Concentration Inequalities and Statistical Learning Theory
Purushottam Kar
01:30:30 · 25 June 2015

The aim of this tutorial is to introduce tools and techniques that are used to analyze machine learning algorithms in statistical settings. Our focus will be on learning problems such as classification, regression, and ranking. We will look at concentration inequalities and other commonly used techniques such as uniform convergence and symmetrization, and use them to prove learning theoretic guarantees for algorithms in these settings.

The talk will be largely self-contained. However, it would help if the audience could brush up on basic probability and statistics concepts such as random variables, events, probabilities of events, and Boole's inequality. There are several good online resources for these, and I do not wish to recommend one over another; a few nice ones are listed below:

  1. https://www.khanacademy.org/math/probability
  2. http://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/
  3. https://en.wikipedia.org/wiki/Boole%27s_inequality
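To give a taste of the tools the tutorial covers, the following small simulation (my illustration, not material from the talk) checks Hoeffding's inequality empirically: for n i.i.d. samples bounded in [0, 1], the probability that the sample mean deviates from its expectation by at least t is at most 2·exp(−2nt²).

```python
import numpy as np

# Hoeffding's inequality for [0, 1]-valued samples:
# P(|mean - E[X]| >= t) <= 2 * exp(-2 * n * t**2).
rng = np.random.default_rng(0)
n, t, p = 100, 0.1, 0.5
trials = 10_000

# Each trial: the mean of n Bernoulli(p) samples.
means = rng.binomial(n, p, size=trials) / n
empirical = np.mean(np.abs(means - p) >= t)
hoeffding_bound = 2 * np.exp(-2 * n * t**2)

print(f"empirical deviation probability: {empirical:.4f}")
print(f"Hoeffding bound:                 {hoeffding_bound:.4f}")
```

The bound (about 0.27 here) is loose but, crucially for learning theory, it decays exponentially in n and holds without knowing the distribution.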
Non-Convex Robust PCA - Part 2
Praneeth Netrapalli
00:47:25 · 25 June 2015

In this lecture, we will illustrate a novel technique due to Erdős et al. (2011) which can be used to obtain bounds on eigenvector perturbation in the ℓ∞ norm; standard techniques give optimal bounds only for perturbation in the ℓ2 norm. We will then use this technique to propose and analyze a new non-convex algorithm for robust PCA, where the task is to recover a low-rank matrix from sparse corruptions of unknown value and support. In the deterministic error setting, our method achieves exact recovery under the same conditions required by existing methods (which are based on convex optimization), but is much faster.
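The flavor of such non-convex methods can be sketched as follows. This is an illustrative simplification, not the algorithm from the talk (which uses carefully chosen stagewise thresholds with provable guarantees): alternate a truncated SVD for the low-rank part with hard thresholding of the residual for the sparse part.

```python
import numpy as np

def robust_pca(M, rank, n_iter=30, decay=0.9):
    """Alternating sketch: truncated SVD estimates the low-rank part,
    hard thresholding of the residual estimates the sparse corruptions."""
    S = np.zeros_like(M)
    thresh = np.abs(M).max()        # admit only the largest spikes at first
    for _ in range(n_iter):
        # Low-rank step: best rank-r approximation of M - S.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse step: keep residual entries too large to be explained by L.
        R = M - L
        S = np.where(np.abs(R) > thresh, R, 0.0)
        thresh *= decay             # gradually lower the bar
    return L, S

# Synthetic check: rank-2 matrix plus sparse +/-10 corruptions.
rng = np.random.default_rng(1)
L_true = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
idx = rng.random((50, 50)) < 0.05
S_true = np.zeros((50, 50))
S_true[idx] = 10.0 * rng.choice([-1.0, 1.0], size=idx.sum())

L_hat, S_hat = robust_pca(L_true + S_true, rank=2)
rel_err = np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true)
print("relative low-rank recovery error:", rel_err)
```

Each iteration costs only a truncated SVD, which is the source of the speed advantage over convex relaxations that repeatedly solve nuclear-norm problems.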

Online Learning and Bandits - Part 2
Aditya Gopalan
01:25:42 · 25 June 2015

The ability to make continual, accurate decisions based on evolving data is key in many of today's data-driven intelligent systems. This tutorial-style talk presents an introduction to the modern study of sequential learning and decision making under uncertainty. The broad objective is to cover modeling frameworks for online prediction and learning, explore algorithms for decision making, and gain an understanding of their performance. Specifically, we will look at multi-armed bandits – models of decision making that capture the explore-vs-exploit tradeoff in learning, regret minimization, non-stochastic or adversarial online learning, and online convex optimization. Time permitting, we will discuss new directions and frontiers in the area of sequential decision making.
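As a concrete instance of the explore-vs-exploit tradeoff, here is a minimal UCB1 implementation on Bernoulli-reward arms (my sketch for illustration, not code from the talk): pull the arm maximizing its empirical mean plus an exploration bonus that shrinks as the arm is sampled more.

```python
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """UCB1 on Bernoulli arms; returns pull counts and cumulative pseudo-regret."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k      # times each arm was pulled
    sums = [0.0] * k      # total reward per arm
    regret = 0.0
    best = max(arm_means)
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1   # play each arm once to initialize
        else:
            # Empirical mean + exploration bonus sqrt(2 ln t / n_i).
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        regret += best - arm_means[arm]
    return counts, regret

counts, regret = ucb1([0.3, 0.5, 0.7], horizon=5000)
print("pulls per arm:", counts, " regret:", round(regret, 1))
```

The suboptimal arms are pulled only O(log T / Δ²) times (Δ being the gap to the best arm), which is what yields logarithmic regret.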

Non-Convex Robust PCA
Praneeth Netrapalli
00:50:19 · 24 June 2015

In this lecture, we will illustrate a novel technique due to Erdős et al. (2011) which can be used to obtain bounds on eigenvector perturbation in the ℓ∞ norm; standard techniques give optimal bounds only for perturbation in the ℓ2 norm. We will then use this technique to propose and analyze a new non-convex algorithm for robust PCA, where the task is to recover a low-rank matrix from sparse corruptions of unknown value and support. In the deterministic error setting, our method achieves exact recovery under the same conditions required by existing methods (which are based on convex optimization), but is much faster.

Online Learning and Bandits - Part 1
Aditya Gopalan
01:36:45 · 24 June 2015

The ability to make continual, accurate decisions based on evolving data is key in many of today's data-driven intelligent systems. This tutorial-style talk presents an introduction to the modern study of sequential learning and decision making under uncertainty. The broad objective is to cover modeling frameworks for online prediction and learning, explore algorithms for decision making, and gain an understanding of their performance. Specifically, we will look at multi-armed bandits – models of decision making that capture the explore-vs-exploit tradeoff in learning, regret minimization, non-stochastic or adversarial online learning, and online convex optimization. Time permitting, we will discuss new directions and frontiers in the area of sequential decision making.
