Convergence of DLLR rapid speaker adaptation algorithms

Asela Gunawardana and William Byrne

Abstract

Discounted Likelihood Linear Regression (DLLR) is a speaker adaptation technique for cases where there is insufficient data for MLLR adaptation. Here, we provide an alternative derivation of DLLR using a censored-EM formulation that postulates additional, hidden adaptation data. This derivation shows that DLLR, if allowed to converge, provides maximum likelihood solutions. The robustness of DLLR to small amounts of data therefore comes from slowing the convergence of the algorithm and allowing it to terminate before overtraining occurs. We then show that discounting the observed adaptation data by postulating additional hidden data can also be extended to MAP estimation of MLLR-type adaptation transformations.
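
For intuition, the discounting idea can be illustrated with a sketch in standard MLLR notation. This is an illustrative reconstruction, not necessarily the paper's exact formulation; the occupation posteriors \gamma_s(t), extended mean vectors \xi_s, variances \sigma_{s,i}^2, and discount count \tau below are assumptions of the sketch. In MLLR, the i-th row w_i of the mean transform solves

  w_i = k_i G_i^{-1}, \qquad
  G_i = \sum_{s,t} \gamma_s(t)\, \sigma_{s,i}^{-2}\, \xi_s \xi_s^{\top}, \qquad
  k_i = \sum_{s,t} \gamma_s(t)\, \sigma_{s,i}^{-2}\, o_i(t)\, \xi_s^{\top}.

Postulating \tau frames' worth of hidden adaptation data whose statistics are synthesized under the current estimate W^{(k)} (and which, for simplicity in this sketch, share the observed data's outer-product statistics G_i) discounts the observed data and gives the update

  w_i^{(k+1)} = \bigl(k_i + \tau\, w_i^{(k)} G_i\bigr)\bigl((1+\tau)\, G_i\bigr)^{-1}
              = \tfrac{1}{1+\tau}\, k_i G_i^{-1} + \tfrac{\tau}{1+\tau}\, w_i^{(k)}.

Each iteration therefore moves only part of the way from the current transform toward the ML solution k_i G_i^{-1}. The fixed point of the iteration is the ML estimate, so running to convergence recovers MLLR, while stopping early keeps the transform closer to its initialization, which is the source of the robustness described above.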

Details

Publication type: Inproceedings
Published in: ISCA-ITR Workshop on Adaptation Methods for Speech Recognition
Publisher: International Speech Communication Association