Fast large-scale mixture modeling with component-specific data partitions

Bo Thiesson and Chong Wang

Abstract

Remarkably easy implementation and guaranteed convergence have made the EM algorithm one of the most widely used algorithms for mixture modeling. On the downside, the E-step is linear in both the sample size and the number of mixture components, making it impractical for large-scale data. Based on the variational EM framework, we propose a fast alternative that uses component-specific data partitions to obtain a sub-linear E-step in sample size, while still maintaining provable convergence. Our approach builds on previous work, but is significantly faster and scales much better in the number of mixture components. We demonstrate this speedup with experiments on large-scale synthetic and real data.
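To illustrate the complexity argument in the abstract, the sketch below contrasts a standard E-step, whose cost is O(nK) in the sample size n and number of components K, with a block-based E-step in the spirit of variational EM, where points grouped into a block share one responsibility vector, reducing the cost to O(BK) for B blocks. This is a minimal illustrative sketch of the general idea only, not the authors' algorithm; in particular, the paper's method additionally gives each component its own data partition, which this toy version does not.

```python
import numpy as np

def estep_full(X, means, variances, weights):
    """Standard EM E-step for a 1-D Gaussian mixture.

    Cost is O(n * K): one responsibility per (point, component) pair.
    """
    log_r = (np.log(weights)[None, :]
             - 0.5 * np.log(2 * np.pi * variances)[None, :]
             - 0.5 * (X[:, None] - means[None, :]) ** 2 / variances[None, :])
    log_r -= log_r.max(axis=1, keepdims=True)   # stabilize before exp
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)

def estep_blocked(block_means, means, variances, weights):
    """Variational-style E-step over B data blocks instead of n points.

    Every point in a block shares the responsibility vector computed at
    the block mean, so the cost drops to O(B * K), sub-linear in n
    whenever B << n.  In the M-step, each block's statistics would be
    weighted by its size.
    """
    log_r = (np.log(weights)[None, :]
             - 0.5 * np.log(2 * np.pi * variances)[None, :]
             - 0.5 * (block_means[:, None] - means[None, :]) ** 2
                   / variances[None, :])
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)  # shape (B, K)

# Toy data: two well-separated 1-D clusters.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])
means = np.array([-1.0, 1.0])
variances = np.array([1.0, 1.0])
weights = np.array([0.5, 0.5])

r_full = estep_full(X, means, variances, weights)          # 1000 x 2
# Partition the 1000 points into 10 blocks, each summarized by its mean.
blocks = np.array_split(np.sort(X), 10)
block_means = np.array([b.mean() for b in blocks])
r_blk = estep_blocked(block_means, means, variances, weights)  # 10 x 2
```

The blocked version evaluates 10 x 2 Gaussian densities instead of 1000 x 2; the paper's contribution is doing this within the variational EM framework with per-component partitions, so that convergence guarantees are retained.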


Details

Publication type: Inproceedings
Published in: NIPS-2010: Advances in Neural Information Processing Systems 23
Publisher: MIT Press