Speaker: Geoff Webb
Host: Xiaodong He
Affiliation: Monash University
Date recorded: 6 December 2013
I present our work on highly scalable, out-of-core techniques for learning well-calibrated Bayesian network classifiers. Our techniques are based on a novel hybrid generative and discriminative learning paradigm. These algorithms:
- provide straightforward mechanisms for managing the bias-variance trade-off,
- have training time that is linear with respect to training set size,
- require as few as one and at most four passes through the training data,
- allow for incremental learning,
- are embarrassingly parallelisable,
- support anytime classification,
- provide direct, well-calibrated prediction of class probabilities,
- can learn using arbitrary loss functions,
- support direct handling of missing values, and
- exhibit robustness to noise in the training data.
Despite their computational efficiency, the new algorithms deliver classification accuracy that is competitive with state-of-the-art discriminative learning techniques.
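To make the generative side of these properties concrete, the sketch below shows a minimal count-based Bayesian classifier (plain naive Bayes, not the speaker's actual algorithms) with single-pass linear training, incremental updates, mergeable counts for parallel training over data shards, missing-value handling by skipping absent attributes, and Laplace-smoothed probability estimates. All class and method names here are illustrative assumptions, not part of the presented work.

```python
from collections import defaultdict

class IncrementalNB:
    """Illustrative single-pass, incremental naive Bayes learner.

    A sketch only: it shows why count-based generative training is
    linear in training-set size, incremental, and embarrassingly
    parallel (counts from shards simply add). The speaker's hybrid
    generative/discriminative algorithms are more sophisticated.
    """

    def __init__(self):
        self.n = 0                              # instances seen
        self.class_counts = defaultdict(int)    # y -> count
        self.joint_counts = defaultdict(int)    # (y, attr index, value) -> count

    def update(self, x, y):
        """Incorporate one instance: O(#attributes), hence linear overall."""
        self.n += 1
        self.class_counts[y] += 1
        for i, v in enumerate(x):
            if v is None:                       # missing value: skip the count
                continue
            self.joint_counts[(y, i, v)] += 1

    def merge(self, other):
        """Combine counts learned on another data shard (parallel training)."""
        self.n += other.n
        for y, c in other.class_counts.items():
            self.class_counts[y] += c
        for k, c in other.joint_counts.items():
            self.joint_counts[k] += c

    def predict_proba(self, x):
        """Laplace-smoothed class probabilities, normalised to sum to 1."""
        classes = list(self.class_counts)
        scores = {}
        for y in classes:
            p = (self.class_counts[y] + 1) / (self.n + len(classes))
            for i, v in enumerate(x):
                if v is None:                   # missing at test time too
                    continue
                p *= (self.joint_counts[(y, i, v)] + 1) / (self.class_counts[y] + 2)
            scores[y] = p
        z = sum(scores.values())
        return {y: s / z for y, s in scores.items()}
```

Because `merge` just sums counts, shards can be trained independently and combined, which is the sense in which such generative learners are embarrassingly parallelisable.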
©2013 Microsoft Corporation. All rights reserved.