Simon Corston-Oliver, Michael Gamon, and Chris Brockett
We present a machine learning approach to evaluating the well-formedness of output of a machine translation system, using classifiers that learn to distinguish human reference translations from machine translations. This approach can be used to evaluate an MT system, tracking improvements over time; to aid in the kind of failure analysis that can help guide system development; and to select among alternative output strings. The method presented is fully automated and independent of source language, target language and domain.
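The core idea of the abstract can be illustrated with a small sketch. Everything below is an assumption for illustration only: the toy sentences, the three surface features (token count, mean token length, function-word rate), and the hand-rolled nearest-centroid classifier all stand in for the paper's actual feature set and learning algorithm, which differ.

```python
# Illustrative sketch only: a nearest-centroid classifier over three crude
# surface features stands in for the idea of learning to separate human
# reference translations from MT output. Features and data are invented.

FUNCTION_WORDS = {"the", "a", "an", "of", "to", "in", "and", "is"}

def features(sentence):
    """Token count, mean token length, and function-word rate --
    rough proxies for target-language fluency."""
    toks = sentence.lower().split()
    return (
        len(toks),
        sum(len(t) for t in toks) / len(toks),
        sum(t in FUNCTION_WORDS for t in toks) / len(toks),
    )

def centroid(rows):
    """Component-wise mean of a list of feature tuples."""
    return tuple(sum(r[i] for r in rows) / len(rows) for i in range(len(rows[0])))

def train(human, machine):
    """Summarize each class by its centroid in feature space."""
    return {"human": centroid([features(s) for s in human]),
            "machine": centroid([features(s) for s in machine])}

def classify(model, sentence):
    """Label a sentence by its nearest class centroid (squared Euclidean)."""
    f = features(sentence)
    return min(model, key=lambda c: sum((a - b) ** 2 for a, b in zip(f, model[c])))

# Toy training data (invented): fluent references vs. disfluent MT output.
human = ["the committee approved the budget in a short meeting",
         "she walked to the station and caught an early train"]
machine = ["committee approve budget short meeting meeting",
           "she walk station catch early train train train"]
model = train(human, machine)
```

A new sentence can then be scored by which class it falls closer to; in the paper's setting, the classifier's judgment serves as an automatic proxy for well-formedness.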
Publisher: Association for Computational Linguistics
© 2001 Association for Computational Linguistics. All rights reserved.