Why Initialization Matters for IBM Model 1: Multiple Optima and Non-Strict Convexity

Kristina Toutanova and Michel Galley

Abstract

Contrary to popular belief, we show that the optimal parameters for IBM Model 1 are not unique. We demonstrate that, for a large class of words, IBM Model 1 is indifferent among a continuum of ways to allocate probability mass to their translations. We study the magnitude of the variance in optimal model parameters using a linear programming approach as well as multiple random trials, and demonstrate that it results in variance in test set log-likelihood and alignment error rate.
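The phenomenon the abstract describes can be reproduced with a few lines of code. The sketch below (not the authors' implementation; the corpus, function names, and the omission of the NULL word are illustrative choices) trains IBM Model 1 with EM from two random initializations on a toy corpus in which "the" and "house" always co-occur, so the model is indifferent about how to split probability mass between them. The two runs reach different translation tables with the same data log-likelihood.

```python
import math
import random
from collections import defaultdict

def train_model1(pairs, iterations=50, seed=0):
    """EM training of IBM Model 1 lexical probabilities t(f | e).

    `pairs` is a list of (source_words, target_words) sentence pairs.
    The seed controls the random initialization; the NULL source word
    is omitted for brevity.
    """
    rng = random.Random(seed)
    e_vocab = sorted({e for es, _ in pairs for e in es})
    f_vocab = sorted({f for _, fs in pairs for f in fs})
    # Random initialization, normalized so that t(. | e) sums to 1.
    t = {}
    for e in e_vocab:
        w = {f: rng.random() for f in f_vocab}
        z = sum(w.values())
        for f in f_vocab:
            t[(f, e)] = w[f] / z
    for _ in range(iterations):
        count = defaultdict(float)  # expected counts c(f, e)
        total = defaultdict(float)  # expected counts c(e)
        for es, fs in pairs:        # E-step: collect expected counts
            for f in fs:
                norm = sum(t[(f, e)] for e in es)
                for e in es:
                    c = t[(f, e)] / norm
                    count[(f, e)] += c
                    total[e] += c
        for (f, e) in t:            # M-step: renormalize
            t[(f, e)] = count[(f, e)] / total[e]
    return t

def log_likelihood(pairs, t):
    """Model 1 data log-likelihood, dropping the constant length terms."""
    ll = 0.0
    for es, fs in pairs:
        for f in fs:
            ll += math.log(sum(t[(f, e)] for e in es) / len(es))
    return ll

# One-pair toy corpus: "the"/"house" always co-occur, so any split of
# the mass for "la" and "maison" across them is equally good.
corpus = [(["the", "house"], ["la", "maison"])]
t1 = train_model1(corpus, seed=1)
t2 = train_model1(corpus, seed=2)
```

On this corpus both runs converge to the same (optimal) log-likelihood, yet `t1[("la", "the")]` and `t2[("la", "the")]` differ substantially: each initialization settles on a different point of the continuum of optima.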

Details

Publication type: Proceedings
Published in: Proc. of the Annual Meeting of the Association for Computational Linguistics: Human Language Technologies