Semantic Confidence Calibration for Spoken Dialog Applications

  • Dong Yu,
  • Li Deng

Published by IEEE

The success of spoken dialog applications depends strongly on the quality of the semantic confidence measure that determines the selection of the dialog strategy. However, the semantic confidence measure obtained from typical automatic speech recognition engines is not optimized for specific semantic slots and applications. We present our recent work on calibrating semantic confidence scores with a novel maximum entropy model with distribution constraints, using only the raw semantic confidence score and the associated raw word confidence scores as inputs. We illustrate how features can be constructed from the raw confidence scores of a variable number of words, and how the quality of the semantic confidence measure can be further improved by adding a separate calibration stage for the word confidence measure. We demonstrate the effectiveness of our approach on two semantic slots of practical significance. For the ZIP-code semantic slot, the new measure achieves relative reductions of 10.6% in mean square error (MSE), 19.3% in normalized negative log-likelihood (NNLL), and 38.5% in equal error rate (EER). The corresponding reductions for the date-time semantic slot are 37.8%, 38.7%, and 23.1%, respectively.
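
To make the setup concrete, the sketch below shows a minimal confidence calibrator in the spirit of the abstract: a fixed-length feature vector is built from the raw semantic confidence and a variable-length list of word confidence scores, and a maximum entropy (logistic regression) model maps it to a calibrated probability of slot correctness. This is an illustrative stand-in, not the paper's method: the order-statistic features and the plain logistic regression model are assumptions; the paper's model additionally imposes distribution constraints, which are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def make_features(sem_conf, word_confs):
    """Collapse a variable-length list of word confidence scores, plus the
    raw semantic confidence, into a fixed-length feature vector.
    These order-statistic features are illustrative, not the paper's."""
    w = np.asarray(word_confs, dtype=float)
    return np.array([sem_conf, w.min(), w.mean(), w.max(), float(len(w))])


# Toy training data: (raw semantic confidence, word confidences, correct?).
# In practice these would come from recognizer output with labeled slots.
train = [
    (0.92, [0.95, 0.90, 0.88], 1),
    (0.40, [0.35, 0.50], 0),
    (0.75, [0.80, 0.60, 0.70, 0.85], 1),
    (0.55, [0.20, 0.90], 0),
]
X = np.stack([make_features(s, w) for s, w, _ in train])
y = np.array([label for _, _, label in train])

# A plain maximum entropy (logistic regression) calibrator: the calibrated
# semantic confidence is P(slot value is correct | features).
calibrator = LogisticRegression().fit(X, y)

# Calibrated semantic confidence for a new recognition hypothesis.
p_correct = calibrator.predict_proba(
    make_features(0.68, [0.70, 0.55, 0.80]).reshape(1, -1)
)[0, 1]
print(f"calibrated semantic confidence: {p_correct:.3f}")
```

The two-stage scheme described in the abstract would fit a calibrator of the same form to the word confidence scores first, then feed the calibrated word scores (rather than the raw ones) into the semantic-slot features above.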