N-Gram Based Filler Model for Robust Grammar Authoring

Dong Yu, Yun-Cheng Ju, Ye-Yi Wang, and Alex Acero

Abstract

We propose a technique for rapid speech application development that generates robust semantic context-free grammars (CFGs) given rigid CFGs as input. Users' speech does not always conform to rigid CFGs, so robust grammars improve the caller's experience. Our system takes a simple CFG and generates a hybrid n-gram/CFG written in the W3C SRGS format, which can therefore run in many standard automatic speech recognition engines. The hybrid network leverages an application-independent word n-gram that can be shared across different applications. In addition, our tool allows developers to provide a few example sentences to adapt the n-gram for improved accuracy. Our experiments show that the robust CFG loses no accuracy on test utterances covered by the rigid CFG, but offers large improvements when the user's sentence is not covered by the rigid CFG. It also rejects utterances that contain no slot at all much more reliably. With a few example sentences for adaptation, our robust CFG achieves recognition accuracy close to that of a class-based n-gram LM customized for the application.
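To make the hybrid idea concrete, the sketch below shows one way a rigid slot grammar could be wrapped with filler rules in SRGS so that carrier phrases outside the original CFG are tolerated. This is a hypothetical illustration, not the authors' tool: the helper name, the slot and filler word lists, and the use of a simple word-loop filler (rather than the paper's weighted n-gram filler model) are all assumptions for demonstration.

```python
# Hypothetical sketch: wrap a rigid slot rule with optional filler rules
# so the resulting SRGS grammar accepts carrier phrases ("um, I want to
# fly to Seattle please") that the rigid CFG alone would reject.
# A real system would weight the filler loop with an n-gram model.

def make_robust_srgs(slot_rule_name: str, slot_items: list[str],
                     filler_words: list[str]) -> str:
    """Emit a W3C SRGS (XML) grammar whose root rule allows zero or
    more filler words before and after the semantic slot."""
    items = "\n".join(f"      <item>{w}</item>" for w in slot_items)
    fillers = "\n".join(f"      <item>{w}</item>" for w in filler_words)
    return f"""<grammar xmlns="http://www.w3.org/2001/06/grammar"
         version="1.0" root="robust" xml:lang="en-US">
  <rule id="robust" scope="public">
    <item repeat="0-"><ruleref uri="#filler"/></item>
    <ruleref uri="#{slot_rule_name}"/>
    <item repeat="0-"><ruleref uri="#filler"/></item>
  </rule>
  <rule id="{slot_rule_name}">
    <one-of>
{items}
    </one-of>
  </rule>
  <rule id="filler">
    <one-of>
{fillers}
    </one-of>
  </rule>
</grammar>"""

# Example: a city slot surrounded by common carrier words.
grammar = make_robust_srgs("city", ["Seattle", "Toulouse"],
                           ["um", "please", "I", "want", "to", "fly"])
```

Because the filler loop is application independent, the same `filler` rule (and, in the paper's setting, the shared word n-gram behind it) can be reused across applications while only the slot rule changes.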

Details

Publication type: Inproceedings
Published in: International Conference on Acoustics, Speech, and Signal Processing
Pages: I565–I568
Address: Toulouse, France
Publisher: Institute of Electrical and Electronics Engineers, Inc.