Evaluating Predictive Uncertainty Challenge

  • J. Quiñonero Candela,
  • C. E. Rasmussen,
  • F. Sinz,
  • O. Bousquet,
  • B. Schölkopf

in Machine Learning Challenges - Evaluating Predictive Uncertainty, Textual Entailment and Object Recognition Systems

Published by Springer | 2006 | Vol. 3944

This chapter presents the PASCAL Evaluating Predictive Uncertainty Challenge, introduces the contributed chapters by the participants who obtained outstanding results, and discusses some of the lessons to be learnt. The Challenge was set up to evaluate the ability of machine learning algorithms to provide good “probabilistic predictions”, rather than just the usual “point predictions” with no measure of uncertainty, in regression and classification problems. Participants competed on a number of regression and classification tasks, and were evaluated both by traditional losses that take only point predictions into account and by losses we proposed that assess the quality of the probabilistic predictions.
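To illustrate the distinction the abstract draws, the sketch below contrasts a point-prediction loss (squared error) with a probabilistic loss on a Gaussian predictive distribution (negative log predictive density, one common choice for this kind of evaluation). The function names and the specific test values are illustrative assumptions, not taken from the challenge itself.

```python
import math

def squared_error(y_true, y_pred_mean):
    # Point-prediction loss: uses only the predicted mean and
    # ignores any reported uncertainty.
    return (y_true - y_pred_mean) ** 2

def nlpd_gaussian(y_true, y_pred_mean, y_pred_var):
    # Negative log predictive density under a Gaussian predictive
    # distribution: penalizes both inaccurate means and badly
    # calibrated variances (over- or under-confidence).
    return (0.5 * math.log(2 * math.pi * y_pred_var)
            + 0.5 * (y_true - y_pred_mean) ** 2 / y_pred_var)

# Two predictors with the same mean but different reported confidence:
y, mean = 1.0, 0.0
print(squared_error(y, mean))       # identical point loss for both
print(nlpd_gaussian(y, mean, 0.1))  # overconfident variance: larger loss
print(nlpd_gaussian(y, mean, 1.0))  # better-calibrated variance: smaller loss
```

The point of the example: under squared error the two predictors are indistinguishable, while the probabilistic loss rewards the one whose stated uncertainty matches its actual error.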