Deep Beamforming Networks for Multi-Channel Speech Recognition

  • Xiong Xiao,
  • Shinji Watanabe,
  • Hakan Erdogan,
  • Liang Lu,
  • John Hershey,
  • Mike Seltzer,
  • Guoguo Chen,
  • Yu Zhang,
  • Michael Mandel,
  • Dong Yu

Published by IEEE - Institute of Electrical and Electronics Engineers

Despite the significant progress in speech recognition enabled by deep neural networks, performance remains poor in some scenarios. In this work, we focus on far-field speech recognition, which remains challenging due to high levels of noise and reverberation in the captured speech signals. We propose to represent the stages of acoustic processing, including beamforming, feature extraction, and acoustic modeling, as three components of a single unified computational network. The parameters of a frequency-domain beamformer are first estimated by a network based on features derived from the microphone channels. These filter coefficients are then applied to the array signals to form an enhanced signal. Conventional features are then extracted from this signal and passed to a second network that performs acoustic modeling for classification. The parameters of both the beamforming and acoustic modeling networks are trained jointly using back-propagation with a common cross-entropy objective function. In experiments on the AMI meeting corpus, we observed improvements when each sub-network was pre-trained with a network-specific objective function before joint training of both networks. The proposed method obtained a 3.2% absolute word error rate reduction compared to a conventional pipeline of independent processing stages.
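The abstract describes a pipeline in which one network predicts frequency-domain filter-and-sum beamforming weights, the enhanced signal feeds an acoustic model, and both networks are optimized jointly by back-propagating a single cross-entropy loss. Below is a minimal PyTorch sketch of that idea; the network shapes, the log-magnitude stand-in for feature extraction, and every name (`BeamformingNet`, `filter_and_sum`, the feature and senone dimensions) are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class BeamformingNet(nn.Module):
    """Predicts one complex filter-and-sum weight per channel and
    frequency bin from features derived from the microphone array."""
    def __init__(self, feat_dim, n_channels, n_bins, hidden=256):
        super().__init__()
        self.n_channels, self.n_bins = n_channels, n_bins
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(),
            # Real and imaginary parts of each weight.
            nn.Linear(hidden, 2 * n_channels * n_bins),
        )

    def forward(self, feats):                          # feats: (B, feat_dim)
        w = self.net(feats).view(-1, self.n_channels, self.n_bins, 2)
        return torch.view_as_complex(w.contiguous())   # (B, C, F) complex


def filter_and_sum(stft, weights):
    """Apply the predicted filters in the frequency domain.
    stft: (B, C, T, F) complex; weights: (B, C, F) complex."""
    return torch.einsum('bctf,bcf->btf', stft, weights)  # (B, T, F)


# Hypothetical joint training step: all dimensions and the stand-in
# acoustic model below are illustrative, not the paper's setup.
B, C, T, F, n_senones = 2, 8, 100, 257, 1000
bf_net = BeamformingNet(feat_dim=64, n_channels=C, n_bins=F)
am_net = nn.Sequential(nn.Linear(F, 512), nn.ReLU(),
                       nn.Linear(512, n_senones))

opt = torch.optim.SGD(list(bf_net.parameters()) + list(am_net.parameters()),
                      lr=1e-3)

stft = torch.randn(B, C, T, F, dtype=torch.complex64)  # multi-channel STFT
feats = torch.randn(B, 64)                   # array features for the BF net
labels = torch.randint(0, n_senones, (B, T)) # frame-level targets

weights = bf_net(feats)                      # (B, C, F)
enhanced = filter_and_sum(stft, weights)     # (B, T, F) enhanced spectrum
logmag = torch.log1p(enhanced.abs())         # crude stand-in feature extraction
logits = am_net(logmag)                      # (B, T, n_senones)

loss = nn.functional.cross_entropy(logits.reshape(-1, n_senones),
                                   labels.reshape(-1))
loss.backward()   # gradients flow through both the AM and the beamformer
opt.step()
```

Because the filter-and-sum step is differentiable, the cross-entropy gradient reaches the beamforming network's parameters, which is what allows the two stages to be optimized jointly as the abstract describes.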