Why Initialization Matters for IBM Model 1: Multiple Optima and Non-Strict Convexity

Contrary to popular belief, we show that the optimal parameters for IBM Model 1 are not unique. We demonstrate that, for a large class of words, IBM Model 1 is indifferent among a continuum of ways to allocate probability mass to their translations. We study the magnitude of the variance in optimal model parameters using a linear programming approach as well as multiple random trials, and demonstrate that it results in variance in test set log-likelihood and alignment error rate.
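The non-strict convexity described above can be seen in a toy example (our own illustration, not taken from the paper): with a one-sentence corpus whose source words always co-occur, Model 1 is indifferent to how translation mass is split between them. The corpus, parameter names `p` and `q`, and the omission of the NULL word are assumptions made for this sketch.

```python
# Toy corpus: source sentence "a b" aligned to target sentence "x y".
# IBM Model 1 likelihood (NULL word omitted for simplicity):
#   P(e | f) = prod over target words e_j of (1/l) * sum over source words f_i of t(e_j | f_i)
# Parameterize the translation table as t(x|a)=p, t(y|a)=1-p, t(x|b)=q, t(y|b)=1-q.

def likelihood(p, q):
    """Model 1 likelihood of the pair ("a b", "x y") under the table above."""
    return ((p + q) / 2.0) * (((1 - p) + (1 - q)) / 2.0)

# Every allocation with p + q = 1 attains the same maximal likelihood, 0.25,
# so the optimum is a continuum of translation tables, not a single point:
for p in (0.0, 0.3, 0.5, 0.9, 1.0):
    q = 1.0 - p
    print(p, q, likelihood(p, q))  # 0.25 in every case

# Any deviation from p + q = 1 strictly lowers the likelihood:
print(likelihood(0.5, 0.6))  # 0.2475 < 0.25
```

Which point on this continuum EM converges to depends on where it starts, which is the sense in which initialization matters.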


In Proc. of the Annual Meeting of the Association for Computational Linguistics: Human Language Technologies

Details

Type: Proceedings