Speaker: Na Yang
Affiliation: Microsoft Intern
Host: Arjmand Samuel
Date recorded: 2 August 2011
This talk will present an emotion sensor for Windows phones named ‘Listen-n-Feel’, which listens to the phone user’s speech and tells whether the user is happy or sad, based on audio signal features. This phone application can be widely used in social networks, integrated into character-playing games, or used to monitor patients with mental health conditions and in other health care settings. Recorded audio data is processed in the cloud, where signal features are extracted in both the time domain and the frequency domain. A machine learning method is applied to predict emotions from statistics of the speech signal features, with training data derived from a prosody database. The emotion detection application will also be demoed during the presentation.
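The abstract does not specify which signal features Listen-n-Feel extracts, but the sketch below illustrates the general idea with three features commonly used in speech emotion recognition: RMS energy and zero-crossing rate (time domain) and spectral centroid (frequency domain). The feature choices, function names, and the synthetic test tone are all illustrative assumptions, not the talk's actual pipeline.

```python
import math

def time_domain_features(samples):
    """RMS energy and zero-crossing rate: two common time-domain features.
    (Illustrative; the talk does not name its exact feature set.)"""
    n = len(samples)
    rms = math.sqrt(sum(s * s for s in samples) / n)
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)
    zcr = crossings / (n - 1)
    return rms, zcr

def spectral_centroid(samples, sample_rate):
    """Magnitude-weighted mean frequency via a naive DFT: a simple
    frequency-domain feature (real systems would use an FFT library)."""
    n = len(samples)
    num = den = 0.0
    for k in range(1, n // 2):  # positive-frequency bins, skipping DC
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = math.hypot(re, im)
        num += (k * sample_rate / n) * mag
        den += mag
    return num / den if den else 0.0

# Synthetic stand-in for recorded speech: a 440 Hz tone at 8 kHz for 0.05 s.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * i / sr) for i in range(400)]
rms, zcr = time_domain_features(tone)
centroid = spectral_centroid(tone, sr)
```

Per-frame feature statistics like these (means, variances over short windows) would then feed a trained classifier that outputs a happy/sad label, as described in the abstract.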
©2011 Microsoft Corporation. All rights reserved.