Scalable stacking and learning for building deep architectures

Li Deng, Dong Yu, and John Platt

Abstract

Deep Neural Networks (DNNs) have shown remarkable success in pattern recognition tasks. However, parallelizing DNN training across computers has been difficult. We present the Deep Stacking Network (DSN), which overcomes the problem of parallelizing learning algorithms for deep architectures. The DSN provides a method of stacking simple processing modules to build deep architectures, with a convex learning problem in each module. Additional fine tuning further improves the DSN, while introducing minor non-convexity. Full learning in the DSN is batch-mode, making it amenable to parallel training over many machines and thus scalable to potentially huge training datasets. Experimental results on both the MNIST (image) and TIMIT (speech) classification tasks demonstrate that the DSN learning algorithm developed in this work is not only parallelizable in implementation but also attains higher classification accuracy than the DNN.
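The core idea described in the abstract — stacking modules whose upper-layer weights are learned by a convex (closed-form least-squares) fit, with each module's input formed by concatenating the raw input with the previous module's output — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the hidden weights `W` are left as fixed random projections here (the paper also learns them in fine tuning), and the function names, hidden-layer size, and ridge regularizer are all hypothetical choices for the sketch.

```python
import numpy as np

def train_dsn_module(X, Y, n_hidden, rng, reg=1e-3):
    """One DSN module: random sigmoid hidden layer, convex upper layer.

    With W held fixed, solving for the upper-layer weights U is a
    ridge-regularized least-squares problem with a closed-form solution.
    """
    W = rng.standard_normal((X.shape[1], n_hidden)) * 0.1  # hidden weights (fixed here)
    H = 1.0 / (1.0 + np.exp(-X @ W))                       # sigmoid hidden activations
    # Convex step: U = argmin ||H U - Y||^2 + reg ||U||^2
    U = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, U

def stack_dsn(X, Y, n_modules=3, n_hidden=64, seed=0):
    """Stack modules; each sees the raw input plus the previous output."""
    rng = np.random.default_rng(seed)
    inp, modules = X, []
    for _ in range(n_modules):
        W, U = train_dsn_module(inp, Y, n_hidden, rng)
        pred = (1.0 / (1.0 + np.exp(-inp @ W))) @ U
        modules.append((W, U))
        # The DSN stacking rule: concatenate raw input with this
        # module's prediction to form the next module's input.
        inp = np.hstack([X, pred])
    return modules, pred
```

Because each module's learning reduces to batch linear algebra on the full training set, the per-module fit can be distributed across machines (e.g., by accumulating `H.T @ H` and `H.T @ Y` over data shards), which is the scalability property the abstract emphasizes.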

Details

Publication type: Inproceedings
Published in: ICASSP 2012
Publisher: IEEE SPS