Large Margin Rank Boundaries for Ordinal Regression

Ralf Herbrich, Thore Graepel, and Klaus Obermayer

Abstract

In contrast to the standard machine learning tasks of classification and metric regression, we investigate the problem of predicting variables of ordinal scale, a setting referred to as ordinal regression. This problem arises frequently in the social sciences and in information retrieval, where human preferences play a major role. Whilst approaches proposed in statistics rely on a probability model of a latent (unobserved) variable, we present a distribution-independent risk formulation of ordinal regression which allows us to derive a uniform convergence bound. Applying this bound, we present a large margin algorithm that is based on a mapping from objects to scalar utility values and thus classifies pairs of objects. We give experimental results for an information retrieval task which show that our algorithm outperforms more naive approaches to ordinal regression, such as Support Vector Classification and Support Vector Regression, in the case of more than two ranks.
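The core idea of the abstract, learning a scalar utility function by classifying pairs of objects with a large margin classifier, can be illustrated with a minimal sketch. The sketch below assumes a linear utility u(x) = w·x and uses scikit-learn's LinearSVC as a stand-in large margin learner on pairwise difference vectors; the toy data and variable names are hypothetical, and the rank boundaries on the utility axis that the paper derives are omitted for brevity.

```
import numpy as np
from sklearn.svm import LinearSVC

# Toy data (hypothetical): feature vectors with ordinal rank labels.
X = np.array([[0.1, 0.3], [0.4, 0.2], [0.8, 0.7], [0.9, 0.9]])
ranks = np.array([1, 1, 2, 3])

# Pairwise transform: for each pair with different ranks, the difference
# x_i - x_j is a positive example if rank_i > rank_j, else negative.
diffs, labels = [], []
for i in range(len(X)):
    for j in range(len(X)):
        if ranks[i] != ranks[j]:
            diffs.append(X[i] - X[j])
            labels.append(1 if ranks[i] > ranks[j] else -1)

# A large margin classifier on pair differences yields a weight vector w
# defining the utility u(x) = w . x; sorting objects by u(x) ranks them.
clf = LinearSVC(C=1.0)
clf.fit(np.array(diffs), np.array(labels))
utilities = X @ clf.coef_.ravel()
print(utilities)  # higher utility corresponds to higher rank
```

Classifying differences rather than objects is what turns ordinal regression into a binary large margin problem: a margin on the pair (x_i, x_j) is a margin between the utility values u(x_i) and u(x_j).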

Details

Publication type: Inbook
Published in: Advances in Large Margin Classifiers
Pages: 115–132
Chapter: 7
Publisher: MIT Press