Semi-Supervised Semantic Tagging for Conversational Understanding Using Markov Topic Regression

  • Asli Celikyilmaz,
  • Dilek Hakkani-Tür,
  • Gokhan Tur,
  • Ruhi Sarikaya

Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics

Published by the Association for Computational Linguistics

Finding concepts in natural language utterances is a challenging task, especially given the scarcity of labeled data for learning semantic ambiguity. Furthermore, data mismatch issues, which arise when the expected test (target) data does not exactly match the training data, aggravate this scarcity problem. To deal with these issues, we describe an efficient semi-supervised learning (SSL) approach which has two components: (i) Markov Topic Regression is a new probabilistic model to cluster words into semantic tags (concepts). It can efficiently handle semantic ambiguity by extending standard topic models with two new features. First, it encodes word n-gram features from labeled source and unlabeled target data. Second, by going beyond a bag-of-words approach, it takes into account the inherent sequential nature of utterances to learn semantic classes based on context. (ii) Retrospective Learner is a new learning technique that adapts to the unlabeled target data. Our new SSL approach improves semantic tagging performance by 4% absolute over the baseline models, and also compares favorably on semi-supervised syntactic tagging.
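
To make the setup concrete, the sketch below is a minimal, hypothetical illustration of semi-supervised semantic tagging with contextual (neighboring-word) features and a simple self-training pass over unlabeled target data. It is not the authors' Markov Topic Regression or Retrospective Learner; it only illustrates the labeled-source / unlabeled-target setting and the use of sequential context rather than a pure bag of words. The tag names, utterances, threshold, and reliance on scikit-learn are all assumptions for illustration.

```python
# Toy illustration of the SSL tagging setup (NOT the paper's model):
# context features + a crude self-training step on unlabeled target data.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression


def context_features(tokens, i):
    """Current word plus left/right neighbors: the sequential context
    a bag-of-words model would ignore."""
    return {
        "w": tokens[i],
        "w-1": tokens[i - 1] if i > 0 else "<s>",
        "w+1": tokens[i + 1] if i < len(tokens) - 1 else "</s>",
    }


# Tiny labeled source data with hypothetical semantic tags.
labeled = [
    (["show", "flights", "to", "boston"], ["O", "O", "O", "city"]),
    (["book", "a", "table", "in", "seattle"], ["O", "O", "O", "O", "city"]),
]
# Unlabeled target data from a mismatched domain.
unlabeled = [["find", "movies", "playing", "in", "portland"]]

vec = DictVectorizer()
X = [context_features(t, i) for t, _ in labeled for i in range(len(t))]
y = [tag for _, tags in labeled for tag in tags]
clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X), y)

# Self-training: tag target utterances, keep confident predictions,
# add them to the training set, and retrain.
for toks in unlabeled:
    feats = [context_features(toks, i) for i in range(len(toks))]
    probs = clf.predict_proba(vec.transform(feats))
    preds = clf.predict(vec.transform(feats))
    for f, p, tag in zip(feats, probs, preds):
        if p.max() > 0.5:  # arbitrary confidence threshold
            X.append(f)
            y.append(tag)

clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X), y)
```

The confidence-thresholded self-training loop stands in, very loosely, for adapting a tagger to unlabeled target data; the paper's actual contribution replaces both the word-clustering model and the adaptation step with Markov Topic Regression and the Retrospective Learner, respectively.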