Solutions to exercises in the book

Decision Forests in Computer Vision and Medical Image Analysis

A. Criminisi and J. Shotton

Springer 2013

 

Chapter 4: Classification Forests

  • Exercise 4.1
sw clas exp1_n2.txt /d 2 /t 8
sw clas exp1_n2.txt /d 2 /t 200 /split linear

 

Using many trees and linear splits reduces artifacts.
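
The same effect can be sketched outside the sw demo. Below is a minimal, hedged example using scikit-learn's RandomForestClassifier as a stand-in (synthetic two-class data in place of exp1_n2.txt; axis-aligned splits only, so there is no /split linear equivalent):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Two hypothetical Gaussian blobs standing in for the exp1_n2.txt data.
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(+1, 0.5, (100, 2))])
y = np.repeat([0, 1], 100)

# Evaluate the class posterior over a regular grid, as the demo visualisation does.
xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
grid = np.c_[xx.ravel(), yy.ravel()]

for n_trees in (8, 200):                                  # mirrors /t 8 vs /t 200
    forest = RandomForestClassifier(n_estimators=n_trees, max_depth=2,
                                    random_state=0).fit(X, y)
    posterior = forest.predict_proba(grid)[:, 1]
    # Few trees give a blocky, near-binary posterior map; averaging many
    # axis-aligned trees yields a smoother map with fewer artifacts.
    print(n_trees, "trees: posterior std over the grid =", round(posterior.std(), 3))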

 

  • Exercise 4.2
sw clas exp1_n4.txt /d 3 /t 200 /padx 2 /pady 2
sw clas exp1_n4.txt /d 3 /t 200 /padx 2 /pady 2 /split linear

 

 

The quality of the uncertainty away from training data is affected by the type of split function (weak learner).
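
One way to inspect the uncertainty away from the training data is to evaluate the posterior entropy on a padded test grid, which is what the /padx and /pady flags widen in the demo. A hedged scikit-learn sketch on synthetic data (axis-aligned splits only, so the linear-split comparison itself is not reproduced here):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.4, (100, 2)), rng.normal(+1, 0.4, (100, 2))])
y = np.repeat([0, 1], 100)
forest = RandomForestClassifier(n_estimators=200, max_depth=3,
                                random_state=0).fit(X, y)

pad = 2.0                                   # widen the test window, as /padx and /pady do
xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - pad, X[:, 0].max() + pad, 200),
                     np.linspace(X[:, 1].min() - pad, X[:, 1].max() + pad, 200))
p = forest.predict_proba(np.c_[xx.ravel(), yy.ravel()])
entropy = -(p * np.log2(np.clip(p, 1e-12, 1.0))).sum(axis=1)
# High entropy = low confidence; a good forest should become uncertain far
# from the training data rather than extrapolating confidently.
print("mean posterior entropy on the padded grid:", round(entropy.mean(), 3))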

 

  • Exercise 4.3
sw clas exp5_n2.txt /d 6 /t 200
sw clas exp5_n2.txt /d 6 /t 200 /split linear

 

Using linear splits may produce better separating surfaces.

 

  • Exercise 4.4
sw clas exp5_n4.txt /d 8 /t 200 /split linear
sw clas exp5_n4.txt /d 6 /t 200 /split linear

 

 

Reducing the tree depth may cause underfitting and lower confidence.
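
A hedged sketch of the depth effect, using scikit-learn's RandomForestClassifier as a stand-in (/d roughly maps to max_depth here; synthetic overlapping blobs stand in for exp5_n4.txt):

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
# Four overlapping blobs standing in for the 4-class exp5_n4.txt data.
centres = np.array([[-2, -2], [-2, 2], [2, -2], [2, 2]])
X = np.vstack([rng.normal(c, 1.5, (100, 2)) for c in centres])
y = np.repeat(np.arange(4), 100)

xx, yy = np.meshgrid(np.linspace(-5, 5, 150), np.linspace(-5, 5, 150))
grid = np.c_[xx.ravel(), yy.ravel()]
for depth in (8, 6):                                      # mirrors /d 8 vs /d 6
    forest = RandomForestClassifier(n_estimators=200, max_depth=depth,
                                    random_state=0).fit(X, y)
    conf = forest.predict_proba(grid).max(axis=1).mean()
    print(f"depth {depth}: mean max-posterior over the grid = {conf:.3f}")
# Shallower trees stop splitting while their leaf class histograms are still
# mixed, so the reported confidence drops (possible underfitting).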

 

  • Exercise 4.5
sw clas exp5_n4.txt /d 8 /t 400 /f 500 /split linear
sw clas exp5_n4.txt /d 8 /t 400 /f 3 /split linear

 

 

Increasing randomness may reduce overall prediction confidence.
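
In the demo, /f is the number of candidate split functions tried per node (fewer candidates means more randomness). A loosely analogous knob in scikit-learn is max_features; the sketch below is only an analogy, not the same parameter:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 1.5, (100, 2))
               for c in [[-2, -2], [-2, 2], [2, -2], [2, 2]]])
y = np.repeat(np.arange(4), 100)

xx, yy = np.meshgrid(np.linspace(-5, 5, 150), np.linspace(-5, 5, 150))
grid = np.c_[xx.ravel(), yy.ravel()]
for max_features in (2, 1):                     # more vs fewer split candidates per node
    forest = RandomForestClassifier(n_estimators=400, max_depth=8,
                                    max_features=max_features,
                                    random_state=0).fit(X, y)
    conf = forest.predict_proba(grid).max(axis=1).mean()
    print(f"max_features={max_features}: mean max-posterior = {conf:.3f}")
# More randomness decorrelates the trees, so their averaged posteriors are
# flatter and the overall prediction confidence drops.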

 

Chapter 5: Regression Forests

  • Exercise 5.1
sw regression exp2.txt /d 2 /t 100
sw regression exp2.txt /d 6 /t 100

 

 

Large tree depth may lead to overfitting. 
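
A hedged regression sketch of the same depth effect, using scikit-learn's RandomForestRegressor as a stand-in (constant leaf predictors, unlike the book's polynomial leaf models; synthetic noisy 1-D data):

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(-1, 1, 80))
y = np.sin(3 * x) + rng.normal(0, 0.2, x.size)    # noisy 1-D toy data

grid = np.linspace(-1, 1, 400)[:, None]
for depth in (2, 6):                              # mirrors /d 2 vs /d 6
    forest = RandomForestRegressor(n_estimators=100, max_depth=depth,
                                   random_state=0).fit(x[:, None], y)
    pred = forest.predict(grid)
    # Depth 2 gives a coarse but smooth fit; depth 6 starts chasing the noise
    # in this small training set (overfitting).
    print(f"depth {depth}: total variation of the fit = {np.abs(np.diff(pred)).sum():.2f}")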

 

  • Exercise 5.2
sw regression exp3.txt /d 2 /t 400
sw regression exp4.txt /d 2 /t 400

 

 

Larger training noise yields larger prediction uncertainty (wider pink region).
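
The width of the pink band can be mimicked by the spread of the individual trees' predictions. A hedged scikit-learn sketch on synthetic data at two noise levels:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(-1, 1, 120))
grid = np.linspace(-1, 1, 200)[:, None]

for noise in (0.05, 0.3):                         # small vs large training noise
    y = np.sin(3 * x) + rng.normal(0, noise, x.size)
    forest = RandomForestRegressor(n_estimators=400, max_depth=2,
                                   random_state=0).fit(x[:, None], y)
    per_tree = np.stack([t.predict(grid) for t in forest.estimators_])
    print(f"noise {noise}: mean across-tree std = {per_tree.std(axis=0).mean():.3f}")
# Noisier training data spreads the individual trees' fits apart, i.e. a wider band.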

 

 

  • Exercise 5.3
sw regression exp7.txt /d 4 /t 200
sw regression exp8.txt /d 4 /t 200

 

 

sw regression exp9.txt /d 4 /t 200
sw regression exp10.txt /d 4 /t 200

 

 

Non-linear curve fitting on diverse examples. Note the relatively smooth interpolation and extrapolation behaviour.

 

  • Exercise 5.4
sw regression exp11.txt /d 4 /t 200
sw regression exp11.txt /d 6 /t 200

 

 

Single-function regression does not capture the inherently ambiguous central region, but it does return a correspondingly high uncertainty there.
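
A hedged sketch of the same behaviour on a synthetic bimodal set standing in for exp11.txt; the across-tree spread flags the ambiguous middle as uncertain even though the mean prediction just averages the two branches:

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
x = rng.uniform(-1, 1, 300)
branch = rng.integers(0, 2, x.size)               # two possible outputs in the middle
y = np.where(np.abs(x) < 0.3, np.where(branch == 1, 1.0, -1.0), np.sign(x))
y = y + rng.normal(0, 0.05, x.size)

forest = RandomForestRegressor(n_estimators=200, max_depth=4,
                               random_state=0).fit(x[:, None], y)
grid = np.linspace(-1, 1, 200)[:, None]
per_tree = np.stack([t.predict(grid) for t in forest.estimators_])
std = per_tree.std(axis=0)
centre = np.abs(grid.ravel()) < 0.3
print("across-tree std, centre vs outside:",
      round(std[centre].mean(), 3), "vs", round(std[~centre].mean(), 3))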

 

Chapter 6: Density Forests

  • Exercise 6.1
sw density exp1.txt /d 2 /t 3
sw density exp1.txt /d 4 /t 3

 

 

Trees that are too deep may cause overfitting.
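
A minimal, simplified density-tree sketch in the spirit of Chapter 6: each leaf stores a Gaussian fitted to the points that reach it, the tree density is the leaf-weighted mixture, and the forest averages several trees. Splits here are random axis-aligned cuts, with no information-gain optimisation and no partition-function correction, purely to show the depth effect:

import numpy as np
from scipy.stats import multivariate_normal

def build_leaves(X, depth, rng):
    """Recursively cut X with random axis-aligned splits; return the leaf subsets."""
    if depth == 0 or len(X) < 8:
        return [X]
    axis = rng.integers(X.shape[1])
    thr = rng.uniform(X[:, axis].min(), X[:, axis].max())
    left, right = X[X[:, axis] <= thr], X[X[:, axis] > thr]
    if len(left) < 4 or len(right) < 4:           # keep leaves large enough for a Gaussian fit
        return [X]
    return build_leaves(left, depth - 1, rng) + build_leaves(right, depth - 1, rng)

def tree_density(X, depth, query, rng):
    dens = np.zeros(len(query))
    for leaf in build_leaves(X, depth, rng):
        w = len(leaf) / len(X)                    # leaf weight = fraction of training points
        cov = np.cov(leaf.T) + 1e-3 * np.eye(2)   # regularised Gaussian fit at the leaf
        dens += w * multivariate_normal.pdf(query, leaf.mean(axis=0), cov)
    return dens

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(-1, 0.4, (200, 2)), rng.normal(+1, 0.4, (200, 2))])
query = rng.uniform(-3, 3, (2000, 2))

for depth in (2, 4):                              # mirrors /d 2 vs /d 4
    forest = np.mean([tree_density(X, depth, query, rng) for _ in range(20)], axis=0)
    # Deeper trees fit many small Gaussians, so the estimate becomes spikier
    # around individual training points (overfitting).
    print(f"depth {depth}: density peak/mean ratio = {forest.max() / forest.mean():.1f}")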

 

  • Exercise 6.2
sw density exp3.txt /d 4 /t 300
sw density exp3.txt /d 6 /t 300

 

 

Trees that are too deep may cause overfitting.

 

  • Exercise 6.3
sw density exp7.txt /d 3 /t 300
sw density exp7.txt /d 5 /t 300

 

 

Trees that are too deep may cause overfitting.

 

  • Exercise 6.4
sw density exp4.txt /d 3 /t 300
sw density exp4.txt /d 5 /t 300

 

 

Trees that are too deep may cause overfitting. Some of the visible streaky artifacts are due to the use of axis-aligned weak learners.

 

Chapter 8: Semi-supervised Classification Forests

  • Exercise 8.1
sw ssclas exp1.txt /d 5 /t 100
sw ssclas exp1.txt /d 5 /t 1

 

 

Note the larger uncertainty in the central region (left image). A single tree is always over-confident. 
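
scikit-learn has no transductive forest like the one in the book, so the hedged sketch below uses SelfTrainingClassifier around a RandomForestClassifier as a rough semi-supervised stand-in (unlabelled points are marked with -1; synthetic two-cluster data):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(-2, 0.5, (200, 2)), rng.normal(+2, 0.5, (200, 2))])
y = np.full(400, -1)                     # mostly unlabelled
y[:4], y[-4:] = 0, 1                     # a handful of labelled points per class

for n_trees in (100, 1):                 # mirrors /t 100 vs /t 1
    model = SelfTrainingClassifier(
        RandomForestClassifier(n_estimators=n_trees, max_depth=5, random_state=0)
    ).fit(X, y)
    centre = np.array([[0.0, 0.0]])      # an ambiguous point between the two clusters
    print(f"{n_trees} trees: posterior at the centre =",
          np.round(model.predict_proba(centre)[0], 2))
# A single tree outputs a hard 0/1 posterior everywhere (over-confident);
# averaging many trees leaves genuine uncertainty in the central region.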

 

  • Exercise 8.2
sw ssclas exp1.txt /d 5 /t 200 /split linear
sw ssclas exp4.txt /d 5 /t 200 /split linear

 

 

Adding further supervised data in the central region increases the prediction confidence.

 

  • Exercise 8.3
sw ssclas exp3.txt /d 5 /t 200 /split linear
sw ssclas exp3.txt /d 6 /t 200 /split linear

 

 

Confidence decreases with training noise and increases with tree depth.

 

  • Exercise 8.4
sw ssclas exp5.txt /d 5 /t 200 /split linear
sw ssclas exp5.txt /d 5 /t 1 /split linear

 

 

Single trees are over-confident. A forest of many randomly trained trees produces smooth uncertainty in the transition regions.

 

  • Exercise 8.5
sw ssclas exp9.txt /d 10 /t 200 /a 2 /split linear
sw ssclas exp10.txt /d 10 /t 200 /a 2 /split linear

 

 

Increasing the amount of supervision in regions of low confidence increases the prediction accuracy and the overall confidence.
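
A hedged sketch of the idea behind this exercise, written as an uncertainty-sampling loop: repeatedly label the points where the current forest is least confident and retrain. scikit-learn stand-in, not the book's transductive forest:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(-2, 0.7, (200, 2)), rng.normal(+2, 0.7, (200, 2))])
y_true = np.repeat([0, 1], 200)
labelled = np.zeros(400, dtype=bool)
labelled[[0, 1, 200, 201]] = True                 # start with a few labels per class

for step in range(3):
    forest = RandomForestClassifier(n_estimators=200, max_depth=10,
                                    random_state=0).fit(X[labelled], y_true[labelled])
    conf = forest.predict_proba(X).max(axis=1)
    print(f"step {step}: mean confidence = {conf.mean():.3f}")
    # Request labels where the current forest is least confident (lowest max-posterior).
    ask = np.argsort(np.where(labelled, np.inf, conf))[:10]
    labelled[ask] = True
# Concentrating new supervision in low-confidence regions raises both the
# prediction accuracy and the overall confidence.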