Soft Margin Estimation of Hidden Markov Model Parameters

Proc. Interspeech | Best Student Paper

We propose a new discriminative learning framework, called soft margin estimation (SME), for estimating the parameters of continuous density hidden Markov models. The proposed method makes direct use of the soft margin idea from support vector machines to improve generalization capability, and of decision feedback learning from minimum classification error training to enhance model separation in classifier design. We incorporate frame selection, utterance selection, and discriminative separation into a single unified objective function that can be optimized with the well-known generalized probabilistic descent algorithm. We demonstrate the advantages of SME over other state-of-the-art techniques in both theory and practice. Tested on a connected digit recognition task, the proposed SME approach achieves a string accuracy of 99.33%. To our knowledge, this is the best result ever reported on the TIDIGITS database.
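As an illustration of the kind of unified objective described above (the notation here is a reconstruction for exposition, not quoted from the paper), a hinge-style SME criterion over N training utterances might be written as

L_{\mathrm{SME}}(\Lambda) \;=\; \frac{\lambda}{\rho} \;+\; \frac{1}{N}\sum_{i=1}^{N} \max\bigl(0,\; \rho - d(O_i,\Lambda)\bigr),

where \Lambda denotes the HMM parameters, \rho is the soft margin, \lambda balances margin maximization against the empirical hinge loss, and d(O_i,\Lambda) is a separation measure for utterance O_i (for instance, a frame-selected, averaged log-likelihood difference between the correct and the best competing string hypotheses). Only utterances whose separation falls inside the margin contribute to the loss, which reflects the utterance selection step, and the smoothed form of this objective can be minimized with generalized probabilistic descent.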