The Curse of Dimensionality for Local Learning

We present a series of arguments supporting the claim that a large class of modern learning algorithms based on local kernels is highly sensitive to the curse of dimensionality. These algorithms include, in particular, local manifold learning algorithms such as Isomap and LLE, and support vector classifiers with Gaussian or other local kernels. The results show that these algorithms are local in the sense that crucial properties of the learned function at a point X depend on the neighbors of X in the training set. This makes them highly sensitive to the curse of dimensionality, which has been well studied for classical non-parametric statistical learning algorithms. There is a large class of data distributions for which non-local solutions can be expressed compactly and potentially learned from few examples, but which require a large number of local bases, and therefore a large number of training examples, when a local learning algorithm is used. We present evidence that non-local learning is possible without strong prior knowledge.
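As a minimal sketch of the locality the abstract refers to, the following Python example uses Nadaraya-Watson kernel regression with a Gaussian kernel, a prototypical local learner. The estimator, the bandwidth sigma, and the toy data are illustrative assumptions, not material from the talk; the point is only that the prediction at a query point is shaped almost entirely by training examples within a few bandwidths of it, and that in higher dimensions fewer training points fall that close.

    import numpy as np

    def gaussian_kernel(x, xi, sigma=0.5):
        # Gaussian (RBF) kernel: the weight decays quickly with distance.
        return np.exp(-np.sum((x - xi) ** 2) / (2 * sigma ** 2))

    def local_predict(x, X_train, y_train, sigma=0.5):
        # Nadaraya-Watson kernel regression: the prediction at x is a
        # weighted average of training targets, with weights that vanish
        # for examples far from x, so only the neighbors of x effectively
        # shape the learned function there.
        weights = np.array([gaussian_kernel(x, xi, sigma) for xi in X_train])
        if weights.sum() == 0:
            return 0.0  # no training example is close enough to influence x
        return float(weights @ y_train / weights.sum())

    # Toy illustration (assumed setup): with a fixed number of random training
    # points, the count of examples carrying non-negligible weight at a query
    # point shrinks as the dimension grows, so a local learner needs many more
    # examples to cover the space.
    rng = np.random.default_rng(0)
    for d in (2, 10, 50):
        X_train = rng.uniform(-1, 1, size=(200, d))
        y_train = np.sin(X_train[:, 0])  # target depends on one coordinate only
        x_query = rng.uniform(-1, 1, size=d)
        w = np.array([gaussian_kernel(x_query, xi) for xi in X_train])
        print(d, "effective neighbors:", int((w > 1e-3).sum()),
              "prediction:", round(local_predict(x_query, X_train, y_train), 3))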

Speaker Details

Yoshua Bengio received a PhD in Computer Science in 1991 from McGill University, Canada. After two post-doctoral years, one at M.I.T. and one at AT&T Bell Laboratories, he became a professor in the Department of Computer Science and Operations Research at Université de Montréal. He is the author of a book and more than 100 publications, the most cited being in the areas of recurrent neural networks, probabilistic learning algorithms, and pattern recognition. Since 2000 he has held a Canada Research Chair in Statistical Learning Algorithms. His current interests include fundamental questions on the geometry of generalization in high-dimensional spaces, manifold learning, biologically inspired learning algorithms, and challenging applications of statistical machine learning.

Date:
Speakers: Yoshua Bengio
Affiliation: Université de Montréal