The Markov Assumption in Spoken Dialogue Management

  • Tim Paek,
  • Max Chickering

Proceedings of the 6th SIGDIAL Workshop on Discourse and Dialogue

The goal of dialogue management in a spoken dialogue system is to take actions based on observations and inferred beliefs. To ensure that the actions optimize the performance or robustness of the system, researchers have turned to reinforcement learning methods to learn policies for action selection. To derive an optimal policy from data, the dynamics of the system are often represented as a Markov Decision Process (MDP), which assumes that the state of the dialogue depends only on the previous state and action. In this paper, we investigate whether constraining the state space by the Markov assumption, especially when the structure of the state space may be unknown, truly affords the highest reward. In a simulation experiment conducted in the context of a dialogue system for interacting with a speech-enabled web browser, models under the Markov assumption did not perform as well as an alternative model that attempts to classify the total reward with accumulating features. We discuss the implications of the study as well as its limitations.
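
For reference, the Markov assumption under scrutiny states that in an MDP the distribution over the next dialogue state depends only on the current state and action, not on the earlier history (the notation below is standard, not taken from the paper):

P(s_{t+1} \mid s_t, a_t, s_{t-1}, a_{t-1}, \ldots, s_0, a_0) = P(s_{t+1} \mid s_t, a_t)

Below is a minimal sketch of the two model families the abstract contrasts, assuming tabular states, Q-learning as the MDP-based learner, and logistic regression as the total-reward classifier. All names and implementation choices here are illustrative stand-ins; the abstract does not specify the models used in the experiment.

```python
# Hypothetical sketch; not the paper's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression

# 1) MDP view: the next state depends only on (state, action), so a
#    tabular Q-learning backup discards all history beyond the current state.
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """One Q-learning update on table Q[n_states, n_actions]."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

# 2) Alternative view: predict the total (dialogue-level) reward from
#    features accumulated across all turns, with no Markov constraint.
def fit_reward_classifier(accumulated_features, total_rewards):
    """accumulated_features: one row per dialogue, aggregating features
    over the whole session; total_rewards: discretized reward outcomes."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(accumulated_features, total_rewards)
    return clf
```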