Fast large-scale mixture modeling with component-specific data partitions

Remarkably easy implementation and guaranteed convergence have made the EM algorithm one of the most used algorithms for mixture modeling. On the downside, the E-step is linear in both the sample size and the number of mixture components, making it impractical for large-scale data. Based on the variational EM framework, we propose a fast alternative that uses component-specific data partitions to obtain a sub-linear E-step in sample size, while still maintaining provable convergence. Our approach builds on previous work, but is significantly faster and scales much better in the number of mixture components. We demonstrate this speedup in experiments on large-scale synthetic and real data.
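For intuition only, the sketch below illustrates the kind of partitioned E-step the abstract describes: a 1-D Gaussian mixture in which each component evaluates responsibilities on block-level sufficient statistics of its own data partition, so the per-iteration cost grows with the number of blocks rather than the number of data points. This is a minimal, hypothetical NumPy example; the Gaussian model, the fixed random partitions, and all function and parameter names are assumptions for illustration, not the paper's implementation.

# Minimal illustration (not the paper's method): each component k uses its
# own coarse partition of the data; within a block, all points share the
# responsibility computed at the block mean, so the E-step cost scales with
# the number of blocks per component rather than the sample size.
import numpy as np

def log_normal(x, mu, var):
    # Elementwise log density of N(mu, var), broadcasting over arrays.
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def partitioned_em(x, n_components=2, blocks_per_comp=32, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n = x.size
    # Component-specific partitions: here, different random groupings of the
    # sorted data into contiguous blocks, one partition per component.
    order = np.argsort(x)
    partitions = []
    for _ in range(n_components):
        cuts = np.sort(rng.choice(np.arange(1, n), size=blocks_per_comp - 1, replace=False))
        partitions.append(np.split(order, cuts))
    # Precomputed block sufficient statistics (count, sum, sum of squares).
    stats = [np.array([[b.size, x[b].sum(), (x[b] ** 2).sum()] for b in part])
             for part in partitions]

    pi = np.full(n_components, 1.0 / n_components)
    mu = rng.choice(x, n_components, replace=False)
    var = np.full(n_components, x.var())

    for _ in range(n_iter):
        Nk = np.zeros(n_components)
        Sk = np.zeros(n_components)
        Qk = np.zeros(n_components)
        for k in range(n_components):
            # E-step over blocks: one responsibility per (component, block).
            cnt, s, sq = stats[k].T
            xbar = s / cnt                      # block means stand in for the points
            logp = np.log(pi)[None, :] + log_normal(xbar[:, None], mu[None, :], var[None, :])
            r = np.exp(logp[:, k] - np.logaddexp.reduce(logp, axis=1))
            Nk[k] = (r * cnt).sum()
            Sk[k] = (r * s).sum()
            Qk[k] = (r * sq).sum()
        # M-step from the responsibility-weighted block statistics.
        pi = Nk / Nk.sum()
        mu = Sk / Nk
        var = np.maximum(Qk / Nk - mu ** 2, 1e-6)
    return pi, mu, var

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.concatenate([rng.normal(-2, 0.5, 5000), rng.normal(3, 1.0, 5000)])
    print(partitioned_em(data, n_components=2))

A block count much smaller than the sample size is what makes this E-step sub-linear in the number of data points; how the component-specific partitions are actually chosen is addressed in the paper itself, not in this sketch.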

Main paper and supplement:

em.main.pdf (PDF)
em.supplement.pdf (PDF)

In NIPS-2010: Advances in Neural Information Processing Systems 23

Publisher  MIT Press

Details

Type  Inproceedings