Convergence of DLLR rapid speaker adaptation algorithms

Discounted Likelihood Linear Regression (DLLR) is a speaker adaptation technique for cases where there is insufficient data for MLLR adaptation. Here, we provide an alternative derivation of DLLR using a censored EM formulation that postulates additional adaptation data which is hidden. This derivation shows that DLLR, if allowed to converge, provides maximum likelihood solutions. Thus, the robustness of DLLR to small amounts of data is obtained by slowing the convergence of the algorithm and allowing it to terminate before overtraining occurs. We then show that discounting the observed adaptation data by postulating additional hidden data can also be extended to MAP estimation of MLLR-type adaptation transformations.
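
To make the discounting idea concrete, here is a minimal sketch in common MLLR notation; the symbols below ($W$, $\xi_s$, $w_i$, $G_i$, $k_i$, $\tilde G_i$, $\tau$) are our shorthand under standard conventions, not necessarily the paper's exact formulation. MLLR adapts each Gaussian mean as $\hat\mu_s = W \xi_s$, where $\xi_s$ is the extended mean vector, and for diagonal covariances each row $w_i$ of $W$ solves the normal equations $w_i G_i = k_i$, with

$$
G_i = \sum_s \frac{c_s}{\sigma_{s,i}^2}\,\xi_s \xi_s^\top,
\qquad
k_i = \sum_s \frac{1}{\sigma_{s,i}^2}\Bigl(\sum_t \gamma_t(s)\, o_{t,i}\Bigr)\xi_s^\top,
\qquad
c_s = \sum_t \gamma_t(s),
$$

accumulated from the observed adaptation data. Postulating $\tau$ additional hidden frames whose expected statistics are generated by the current transform $W^{(k)}$ adds $\tau \tilde G_i$ to the outer-product statistics and $\tau\, w_i^{(k)} \tilde G_i$ to the cross statistics, giving a damped update of roughly the form

$$
w_i^{(k+1)} = \bigl(k_i + \tau\, w_i^{(k)} \tilde G_i\bigr)\bigl(G_i + \tau \tilde G_i\bigr)^{-1}.
$$

At any fixed point $w_i^{(k+1)} = w_i^{(k)} = w_i$, the hidden-data terms cancel and the equation collapses to $w_i G_i = k_i$, the ordinary maximum likelihood MLLR solution. The discount weight $\tau$ therefore changes the step size rather than the limit, which is why early termination, rather than the criterion itself, supplies the robustness to sparse data; replacing the synthetic hidden-data statistics with statistics derived from a prior gives, in the same spirit, the MAP-style extension mentioned above.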

gunawardana01__conver_dllr_rapid_adapt_algor.pdf (PDF file)
gunawardana01__conver_dllr_rapid_adapt_algor.ps (PostScript file)

In ISCA-ITR Workshop on Adaptation Methods for Speech Recognition

Publisher: International Speech Communication Association
© 2004 ISCA. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the ISCA and/or the author.

Details

Type: Inproceedings