An Introduction to Computational Networks and the Computational Network Toolkit

  • Dong Yu
  • Adam Eversole
  • Mike Seltzer
  • Kaisheng Yao
  • Oleksii Kuchaiev
  • Yu Zhang
  • Frank Seide
  • Zhiheng Huang
  • Brian Guenter
  • Huaming Wang
  • Jasha Droppo
  • Geoffrey Zweig
  • Chris Rossbach
  • Jie Gao
  • Andreas Stolcke
  • Jon Currey
  • Malcolm Slaney
  • Guoguo Chen
  • Amit Agarwal
  • Chris Basoglu
  • Marko Padmilac
  • Alexey Kamenev
  • Vladimir Ivanov
  • Scott Cypher
  • Hari Parthasarathi
  • Baolin Peng
  • Xuedong Huang

MSR-TR-2014-112

We introduce the computational network (CN), a unified framework for describing arbitrary learning machines, such as deep neural networks (DNNs), convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM) networks, logistic regression, and maximum entropy models, that can be described as a series of computational steps. A CN is a directed graph in which each leaf node represents an input value or a parameter and each non-leaf node represents a matrix operation applied to its children. We describe algorithms for carrying out forward computation and gradient calculation in a CN and introduce the most popular computation node types used in typical CNs.
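To make the graph structure concrete, here is a minimal, self-contained Python sketch (our illustration, not CNTK's actual C++ implementation): leaf nodes hold input values or learnable parameters, non-leaf nodes apply matrix operations to their children, forward computation visits nodes in topological order, and gradient calculation walks that order in reverse.

```python
import numpy as np

class Node:
    """A CN node; leaf nodes have no children."""
    def __init__(self, children=()):
        self.children = list(children)
        self.value = None      # set during forward computation
        self.gradient = None   # accumulated during gradient calculation

class Input(Node):
    """Leaf node holding an input value."""
    def __init__(self, value):
        super().__init__()
        self.value = np.asarray(value, dtype=float)

class Parameter(Input):
    """Leaf node holding a learnable parameter."""

class Times(Node):
    """Non-leaf node: matrix product of its two children."""
    def forward(self):
        a, b = self.children
        self.value = a.value @ b.value
    def backward(self):
        a, b = self.children
        a.gradient += self.gradient @ b.value.T
        b.gradient += a.value.T @ self.gradient

class Sigmoid(Node):
    """Non-leaf node: element-wise sigmoid of its child."""
    def forward(self):
        (a,) = self.children
        self.value = 1.0 / (1.0 + np.exp(-a.value))
    def backward(self):
        (a,) = self.children
        a.gradient += self.gradient * self.value * (1.0 - self.value)

def topological_order(root):
    """Children before parents, so each forward() sees ready inputs."""
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for child in node.children:
                visit(child)
            order.append(node)
    visit(root)
    return order

def forward(root):
    order = topological_order(root)
    for node in order:
        if node.children:          # leaves already hold their values
            node.forward()
    return order

def backward(order):
    """Reverse-order sweep: backpropagation through the CN."""
    for node in order:
        node.gradient = np.zeros_like(node.value)
    order[-1].gradient = np.ones_like(order[-1].value)
    for node in reversed(order):
        if node.children:
            node.backward()

# Tiny CN computing sigmoid(W x); gradients are w.r.t. the sum of outputs.
x = Input(np.ones((3, 1)))
W = Parameter(np.eye(2, 3))
y = Sigmoid([Times([W, x])])
order = forward(y)
backward(order)
print(y.value)      # forward result
print(W.gradient)   # gradient at the parameter leaf
```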

We further introduce the Computational Network Toolkit (CNTK), an implementation of CNs that runs on both GPUs and CPUs. We describe the architecture and key components of CNTK, the command-line options for using it, and the network definition and model editing languages, and we provide sample setups for acoustic modeling, language modeling, and spoken language understanding. We also describe the Argon speech recognition decoder as an example of integrating with CNTK.
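For a flavor of the network definition language covered in the report, the sketch below (reconstructed for illustration; exact node-type and special-node names may differ from the toolkit's current syntax) describes a one-hidden-layer classifier as a CN:

```
# Leaf nodes: input values and learnable parameters.
SDim=784
HDim=256
LDim=10
features=Input(SDim)
labels=Input(LDim)
W0=Parameter(HDim, SDim)
B0=Parameter(HDim)
W1=Parameter(LDim, HDim)
B1=Parameter(LDim)

# Non-leaf nodes: matrix operations on their children.
Times1=Times(W0, features)
Plus1=Plus(Times1, B0)
Sigmoid1=Sigmoid(Plus1)
Times2=Times(W1, Sigmoid1)
Plus2=Plus(Times2, B1)
CE=CrossEntropyWithSoftmax(labels, Plus2)
Err=ErrorPrediction(labels, Plus2)

# Special node groups telling the toolkit how to use the graph.
FeatureNodes=(features)
LabelNodes=(labels)
CriteriaNodes=(CE)
EvalNodes=(Err)
OutputNodes=(Plus2)
```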