Ivan Tashev, Michael L. Seltzer, and Yun-Cheng Ju
22 September 2009
In a hands-busy and eyes-busy activity such as driving, spoken language technology is an important component of the multimodal human-machine interface (HMI) of an in-car infotainment system. Adding speech to the HMI introduces two distinct challenges: accurately acquiring the user’s speech in a noisy car environment, and creating a spoken dialog system that does not require the driver’s full attention.
In Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 2009)
Publisher: Association for Computing Machinery, Inc.
Copyright © 2009 by the Association for Computing Machinery, Inc. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior specific permission and/or a fee. Request permissions from Publications Dept., ACM Inc., fax +1 (212) 869-0481, or email@example.com. The definitive version of this paper can be found in ACM's Digital Library at http://www.acm.org/dl/.