Sensible Priors for Sparse Bayesian Learning

Sparse Bayesian learning suffers from impractically overconfident predictions, with uncertainty that tends to be maximal around the observations. We propose an alternative treatment that breaks the rigidity of the implied prior through decorrelation and consequently yields reasonable, intuitive error bars. The attractive computational efficiency is retained, and learning still leads to sparse solutions. An interesting by-product is the ability to model non-stationarity and input-dependent noise.
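
To make the criticised behaviour concrete, the following is a minimal sketch (not the report's proposed method) of the predictive distribution in a standard sparse Bayesian linear model with localized basis functions; the toy data, basis centres, and fixed hyperparameters are illustrative assumptions, where in practice the precisions would be learned by evidence maximisation.

    # Sketch of a standard RVM-style sparse Bayesian linear model, the baseline
    # whose error bars the report argues are unreasonable. All data and
    # hyperparameter settings below are assumed for illustration only.
    import numpy as np

    def rbf_features(X, centers, lengthscale=1.0):
        """Gaussian basis functions centred on a subset of the inputs."""
        d2 = (X[:, None] - centers[None, :]) ** 2
        return np.exp(-0.5 * d2 / lengthscale**2)

    # Toy 1-D data (assumed).
    rng = np.random.default_rng(0)
    X = np.sort(rng.uniform(-3, 3, 30))
    y = np.sin(X) + 0.1 * rng.standard_normal(30)

    centers = X[::5]                # stand-ins for the retained "relevance vectors"
    Phi = rbf_features(X, centers)  # N x M design matrix
    alpha = np.ones(len(centers))   # per-weight precisions (fixed here; learned in practice)
    sigma2 = 0.1**2                 # noise variance

    # Gaussian posterior over weights:
    # Sigma = (diag(alpha) + Phi^T Phi / sigma2)^-1,  mu = Sigma Phi^T y / sigma2
    Sigma = np.linalg.inv(np.diag(alpha) + Phi.T @ Phi / sigma2)
    mu = Sigma @ Phi.T @ y / sigma2

    # Predictive mean and variance on a grid extending well beyond the data.
    Xs = np.linspace(-6, 6, 200)
    Phis = rbf_features(Xs, centers)
    pred_mean = Phis @ mu
    pred_var = sigma2 + np.sum((Phis @ Sigma) * Phis, axis=1)

    # pred_var is largest near the retained basis centres and collapses to the
    # noise level sigma2 elsewhere, so the error bars do not grow away from the
    # data: the overconfident behaviour the decorrelated prior is meant to fix.
    print(pred_var[:3], pred_var[100:103])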

PDF: tr-2007-121.pdf

Details

Type: TechReport
Number: MSR-TR-2007-121
Pages: 13
Institution: Microsoft Research