Speaker adaptation through speaker specific compensation

This paper describes a new speaker adaptation strategy that we term speaker specific compensation. The basic idea is to transform the speech of a speaker in a way that renders it recognizable by a speaker-dependent classifier built for another speaker. The compensating filter is learnt as a cepstral vector from labeled speech samples of the speaker. Using ideas from classifier combination, we present a new speaker-independent speech recognition system that uses a few speaker-dependent classifiers along with a bank of cepstral compensating vectors learnt for a large number of speakers. Each speaker-dependent classifier is trained on the given speech samples of only one speaker and is never retrained or adapted thereafter. We present results that illustrate the effectiveness of this speaker specific compensation idea.
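
As a rough sketch of the compensation step (an illustrative assumption, not the paper's exact procedure), the Python fragment below treats the compensating filter as an additive bias in the cepstral domain, estimated from labeled samples as the difference between the reference speaker's and the target speaker's mean cepstra; the nearest-mean scoring and all function names are hypothetical stand-ins for the paper's speaker-dependent classifier.

import numpy as np

def estimate_compensation_vector(target_cepstra, reference_cepstra):
    # Assumed model: the compensating filter is an additive cepstral bias,
    # approximated by the gap between the two speakers' mean cepstra.
    # Both inputs are arrays of shape (num_frames, num_cepstral_dims).
    return reference_cepstra.mean(axis=0) - target_cepstra.mean(axis=0)

def compensate(cepstra, compensation_vector):
    # Shift the target speaker's cepstra toward the reference speaker.
    return cepstra + compensation_vector

def recognize(compensated_cepstra, reference_models):
    # reference_models: dict mapping class label -> mean cepstral vector of
    # that class for the reference speaker (a simple stand-in for the
    # paper's speaker-dependent classifier).
    utterance_mean = compensated_cepstra.mean(axis=0)
    scores = {label: -np.linalg.norm(utterance_mean - class_mean)
              for label, class_mean in reference_models.items()}
    return max(scores, key=scores.get)

In the speaker-independent system described in the abstract, one such compensation vector would be stored per speaker in the bank; how compensated utterances are combined across the fixed speaker-dependent classifiers follows the paper's classifier-combination scheme, which this sketch does not attempt to reproduce.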

PDF: speakercompensation-spcom04.pdf

In Proceedings of IEEE SPCOM 2004, International Conference on Signal Processing and Communications, Bangalore, India

Publisher: Institute of Electrical and Electronics Engineers, Inc.
© 2007 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.

Details

Type: Inproceedings
Pages: 81–85