Improved Information Gain Estimates for Decision Tree Induction

Sebastian Nowozin

Abstract

Ensembles of classification and regression trees remain popular machine learning methods because they define flexible non-parametric models that predict well and are computationally efficient during both training and testing. During the induction of a decision tree one aims to find predicates that are maximally informative about the prediction target. To select good predicates, most approaches estimate an information-theoretic scoring function, the information gain, for both classification and regression problems.
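For context (this equation is not part of the abstract, but it is the standard splitting score the abstract refers to): for a sample set $S$ reaching a node and a predicate $f$ that routes each sample to a child $v$, the information gain is

\[
\mathrm{IG}(S, f) = H(S) - \sum_{v} \frac{|S_v|}{|S|}\, H(S_v),
\]

where $H$ is the Shannon entropy of the class labels in classification, or the differential entropy of the continuous target in regression, and each $H(\cdot)$ must in practice be estimated from the finite sample at the node.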

We point out that the common estimation procedures are biased and show that, by replacing them with improved estimators of the discrete and the differential entropy, we can obtain better decision trees. In effect, our modifications yield improved predictive performance and are simple to implement in any decision tree code.
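The abstract does not name the improved estimators. As a minimal sketch of the underlying idea (an illustrative assumption, not necessarily the paper's method), the snippet below contrasts the naive plug-in (maximum-likelihood) entropy estimate with the classic Miller-Madow bias correction inside an information-gain computation; all function names are hypothetical.

```python
import numpy as np

def plugin_entropy(labels):
    """Naive plug-in entropy estimate in nats; assumes non-empty input.
    Biased downward for finite samples."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def miller_madow_entropy(labels):
    """Plug-in estimate plus the Miller-Madow first-order bias
    correction (K - 1) / (2 N), where K is the number of observed
    classes and N is the sample size. One classic bias-corrected
    estimator; the paper's own choice may differ."""
    _, counts = np.unique(labels, return_counts=True)
    return plugin_entropy(labels) + (len(counts) - 1) / (2.0 * counts.sum())

def information_gain(labels, mask, entropy=plugin_entropy):
    """Estimated information gain of a binary split given by `mask`."""
    n = len(labels)
    left, right = labels[mask], labels[~mask]
    if len(left) == 0 or len(right) == 0:
        return 0.0  # degenerate split carries no information
    return (entropy(labels)
            - len(left) / n * entropy(left)
            - len(right) / n * entropy(right))

# Demo: labels independent of the split, so the true gain is ~0.
# The plug-in score still reports a positive gain; the corrected
# estimator tempers it.
rng = np.random.default_rng(0)
labels = rng.integers(0, 4, size=30)   # 4 classes, 30 samples
mask = rng.random(30) < 0.5            # arbitrary candidate split
print("plug-in gain:       ", information_gain(labels, mask, plugin_entropy))
print("bias-corrected gain:", information_gain(labels, mask, miller_madow_entropy))
```

Because each candidate split is scored on the few samples reaching a node, the plug-in estimator's downward bias on small child sets inflates the apparent gain of fine-grained splits; a bias-corrected estimator reduces this effect, in the spirit of the modification the abstract describes.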

Details

Publication type: Inproceedings
Published in: ICML 2012
URL: http://arxiv.org/abs/1206.4620