Sensible Priors for Sparse Bayesian Learning

Joaquin Quiñonero Candela, Edward Snelson, and Oliver Williams

Abstract

Sparse Bayesian learning suffers from impractical, overconfident predictions where the uncertainty tends to be maximal around the observations. We propose an alternative treatment that breaks the rigidity of the implied prior through decorrelation, and consequently gives reasonable and intuitive error bars. The attractive computational efficiency is retained; learning leads to sparse solutions. An interesting by-product is the ability to model non-stationarity and input-dependent noise.
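The sketch below is not the report's proposed method; it is a minimal, illustrative implementation of standard sparse Bayesian learning (a relevance vector machine for regression in the style of Tipping, 2001), showing the kind of predictive distribution whose error bars the abstract refers to: weights are pruned to give a sparse solution, but the predictive variance collapses back to the noise level away from the data. All function names, hyperparameters, and the toy data are assumptions made for illustration.

```python
# Illustrative sketch of standard sparse Bayesian learning (RVM regression),
# NOT the decorrelated prior proposed in this report. Names, hyperparameters,
# and data are assumptions chosen for the example.
import numpy as np

def rbf_design(X, centres, length_scale=0.5):
    """RBF basis functions centred on the training inputs."""
    d2 = (X[:, None] - centres[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

def fit_rvm(Phi, t, n_iter=200):
    """Type-II ML fixed-point updates for the ARD precisions alpha and noise precision beta."""
    N, M = Phi.shape
    alpha = np.ones(M)            # one precision per basis-function weight
    beta = 1.0 / np.var(t)        # noise precision
    for _ in range(n_iter):
        # Posterior over weights given the current hyperparameters
        Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
        mu = beta * Sigma @ Phi.T @ t
        # MacKay-style re-estimation; large alpha_i prunes weight i towards zero
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = np.clip(gamma / (mu**2 + 1e-12), 1e-6, 1e6)
        beta = (N - gamma.sum()) / (np.sum((t - Phi @ mu) ** 2) + 1e-12)
    return mu, Sigma, beta

# Toy 1-D data: most weights are pruned (sparsity), yet the predictive
# standard deviation shrinks to the noise floor far from the observations.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 30)
t = np.sin(X) + 0.1 * rng.standard_normal(30)
Phi = rbf_design(X, X)
mu, Sigma, beta = fit_rvm(Phi, t)

X_star = np.linspace(-8, 8, 9)
Phi_star = rbf_design(X_star, X)
pred_mean = Phi_star @ mu
pred_var = 1.0 / beta + np.sum(Phi_star @ Sigma * Phi_star, axis=1)
print("predictive std devs:", np.sqrt(pred_var).round(3))
```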

Details

Publication type: TechReport
Number: MSR-TR-2007-121
Pages: 13
Institution: Microsoft Research