Large Margin Rank Boundaries for Ordinal Regression

Ralf Herbrich, Thore Graepel, and Klaus Obermayer

January 2000

In contrast to the standard machine learning tasks of classification and metric regression, we investigate the problem of predicting variables of ordinal scale, a setting referred to as ordinal regression. This problem arises frequently in the social sciences and in information retrieval, where human preferences play a major role. Whilst approaches proposed in statistics rely on a probability model of a latent (unobserved) variable, we present a distribution-independent risk formulation of ordinal regression which allows us to derive a uniform convergence bound. Applying this bound, we present a large margin algorithm that is based on a mapping from objects to scalar utility values, thus classifying pairs of objects. We give experimental results for an information retrieval task which show that our algorithm outperforms more naive approaches to ordinal regression, such as Support Vector Classification and Support Vector Regression, in the case of more than two ranks.
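The core reduction described in the abstract, mapping objects to a scalar utility and turning rank comparisons into a large margin classification of object pairs, can be sketched in a few lines. The following is a minimal illustration, not the chapter's implementation: it assumes a linear utility U(x) = w·x, uses scikit-learn's LinearSVC as the pair classifier, and the helper make_pairs and the toy data are invented for the example.

```python
# Minimal sketch of pairwise large margin ordinal regression,
# assuming a linear utility U(x) = <w, x>. Helper names and data
# are illustrative, not taken from the chapter.
import numpy as np
from sklearn.svm import LinearSVC

def make_pairs(X, y):
    """Turn ranked examples into difference vectors labelled by rank order."""
    diffs, labels = [], []
    n = len(y)
    for i in range(n):
        for j in range(i + 1, n):
            if y[i] == y[j]:
                continue  # equal ranks express no preference
            diffs.append(X[i] - X[j])
            labels.append(1 if y[i] > y[j] else -1)
    return np.array(diffs), np.array(labels)

# Toy data: 2-D objects with ordinal labels 0 < 1 < 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = (X @ np.array([1.0, 0.5]) > 0).astype(int) + (X[:, 0] > 1).astype(int)

Xp, yp = make_pairs(X, y)
# No intercept: a difference vector x_i - x_j should be scored by w alone.
svm = LinearSVC(C=1.0, fit_intercept=False).fit(Xp, yp)
utility = X @ svm.coef_.ravel()   # learned scalar utility per object
print(np.argsort(-utility)[:5])   # indices of the top-ranked objects
```

Once the utility is learned, ranks can be read off by thresholding it, which is where the rank boundaries of the title come in; the sketch above only shows the pair classification step.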

PostScript file

In: Advances in Large Margin Classifiers
Publisher: MIT Press
Type: Chapter
Chapter: 7
Pages: 115–132

Copyright 2000 MIT Press. All rights reserved.
