Environment Normalization for Robust Speech Recognition using Direct Cepstral Comparisons

  • Alex Acero

Proc. of the International Conference on Acoustics, Speech, and Signal Processing

Published by Institute of Electrical and Electronics Engineers, Inc.

In this paper we describe and evaluate a series of new algorithms that compensate for the effects of unknown acoustical environments or changes in environment. The algorithms use compensation vectors that are added to the cepstral representation of speech input to a speech recognition system. While these vectors are computed from direct frame-by-frame comparisons of cepstra of speech simultaneously recorded in the training environment and in various prototype testing environments, the compensation algorithms do not assume that the acoustical characteristics of the actual testing environment are known. The specific compensation vector applied in a given frame depends on either physical attributes such as SNR or presumed phonetic identity. The compensation algorithms are evaluated using the 1992 ARPA 5000-word WSJ CSR corpus. The best system combines phoneme-based and SNR-based cepstral compensation with cepstral mean normalization, and provides a 66.8% reduction in error rate over baseline processing when tested using a standard suite of unknown microphones.
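
To make the general idea concrete, the sketch below shows one plausible form of SNR-conditioned cepstral compensation learned from stereo (simultaneously recorded) data, combined with cepstral mean normalization. It is a minimal illustration under assumed interfaces, not the paper's actual algorithms; all function names, the binning scheme, and the array layouts are assumptions introduced here for clarity.

```python
import numpy as np

def estimate_snr_compensation(clean_cep, degraded_cep, frame_snr_db, n_bins=20):
    """Estimate one compensation vector per SNR bin from frame-aligned
    cepstra of the same speech recorded in the training environment
    (clean_cep) and in a prototype testing environment (degraded_cep).
    Both arrays are (num_frames, cepstral_order); frame_snr_db holds the
    per-frame SNR estimate of the degraded channel."""
    edges = np.linspace(frame_snr_db.min(), frame_snr_db.max(), n_bins + 1)
    comp = np.zeros((n_bins, clean_cep.shape[1]))
    for b in range(n_bins):
        in_bin = (frame_snr_db >= edges[b]) & (frame_snr_db < edges[b + 1])
        if in_bin.any():
            # Average cepstral difference for frames falling in this SNR bin
            comp[b] = (clean_cep[in_bin] - degraded_cep[in_bin]).mean(axis=0)
    return edges, comp

def apply_snr_compensation(test_cep, test_snr_db, edges, comp):
    """Add the SNR-dependent compensation vector to each test frame,
    without any knowledge of the actual testing environment."""
    bins = np.clip(np.digitize(test_snr_db, edges) - 1, 0, comp.shape[0] - 1)
    return test_cep + comp[bins]

def cepstral_mean_normalization(cep):
    """Subtract the per-utterance cepstral mean (CMN)."""
    return cep - cep.mean(axis=0, keepdims=True)
```

A phoneme-based variant would index the compensation table by presumed phonetic identity (e.g., from a first-pass decode) instead of by SNR bin; the paper's best-performing system combines both kinds of compensation with CMN as above.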