Research in speech recognition, language modeling, language understanding, spoken language systems and dialog systems
Our main goal is to build applications that make computers available everywhere, and to work with our product-side partners to make this vision a reality. We are interested not only in creating state-of-the-art spoken language components, but also in how these disparate components can come together with other modes of human-computer interaction to form a unified, consistent computing environment. We are pursuing several projects to help us reach our vision of a fully speech-enabled computer.
The Speech & Dialog Group is managed by Geoffrey Zweig.
- Acoustic Modeling: How do we model phones and acoustic variations?
- Dialog and Conversational Systems: How do we model the interaction between systems and users?
- Language Modeling using Recurrent Neural Networks (RNNs)
- Language Understanding: Don't just recognize the words a user spoke, but understand what they mean.
- Noise Robustness: How do we make the system work when background noise is present?
- Voice Search: Users can search for information, such as a local business, from their phones.
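To illustrate the recurrent-neural-network language modeling area listed above, here is a minimal sketch of an Elman-style RNN scoring a sentence word by word. This is an illustration only, not the group's actual system: the toy vocabulary, layer sizes, and random (untrained) weights are all assumptions made for the example; a real RNN language model would be trained on large text corpora.

```python
import math
import random

# Toy vocabulary and sizes -- assumptions for illustration only.
VOCAB = ["<s>", "call", "mom", "</s>"]
V, H = len(VOCAB), 8  # vocabulary size, hidden-layer size

random.seed(0)

def mat(rows, cols):
    """Small random weight matrix (untrained; a real model learns these)."""
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

W_xh = mat(V, H)  # input (one-hot word) -> hidden
W_hh = mat(H, H)  # hidden -> hidden (the recurrence)
W_hy = mat(H, V)  # hidden -> output vocabulary scores

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def step(word_id, h):
    """One RNN time step: consume a word, return (new hidden state,
    probability distribution over the next word)."""
    h_new = [
        math.tanh(W_xh[word_id][j] + sum(W_hh[i][j] * h[i] for i in range(H)))
        for j in range(H)
    ]
    y = softmax([sum(W_hy[i][k] * h_new[i] for i in range(H)) for k in range(V)])
    return h_new, y

# Score a sentence: sum of log-probabilities of each word given its history.
h = [0.0] * H
logprob = 0.0
prev = VOCAB.index("<s>")
for w in ["call", "mom", "</s>"]:
    h, y = step(prev, h)
    logprob += math.log(y[VOCAB.index(w)])
    prev = VOCAB.index(w)
print(round(logprob, 3))
```

The key idea the recurrence captures is that the hidden state summarizes the entire word history, so the model is not limited to a fixed n-gram window.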
In the past, the group has worked on other projects, including:
- Automatic Grammar Induction: How do we create grammars to ease the development of spoken language systems?
- (MiPad) Multimodal Interactive Pad. Our first multimodal prototype.
- SALT (Speech Application Language Tags): A markup language for the multimodal web
- Intent Understanding. Not just recognizing the words the user says, but understanding what they mean.
- Multimodal Conversational User Interface
- Personalized Language Model for improved accuracy
- (Whisper) Speech Recognition. Our earlier dictation-oriented speech recognition project, a general-purpose speech recognizer.
- (Whistler) Speech Synthesis (Text-to-Speech). We have produced a speech synthesizer so that your computer can talk to you.
- (WhisperID) Speaker Identification. Who is doing the talking?
- Speech Application Programming Interface (SAPI) Development Toolkit. Lets developers build applications that use the Whisper speech recognizer.