Listen-n-Feel: An Emotion Sensor on the Phone Using Speech Processing and Cloud Computing

This talk will present an emotion sensor for Windows phones named ‘Listen-n-Feel’, which listens to the phone user’s speech and tells whether the user is happy or sad, based on audio signal features. This phone application could be widely used in social networks, integrated into character-playing games, or used to monitor patients with mental illness and in other health care settings. Recorded audio data is processed on the cloud, and signal features are extracted in both the time domain and the frequency domain. A machine learning method is applied to predict emotions from statistics of the speech signal features, with the training data derived from a prosody database. The emotion detection application will also be demoed during the presentation.
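To make the described pipeline concrete, below is a minimal illustrative sketch of this kind of system, not the speaker’s actual implementation. The specific features (short-time energy, zero-crossing rate, spectral centroid), the SVM classifier, the utterance-level mean/std statistics, and the synthetic training data are all assumptions chosen for illustration; the talk’s system may use different features, statistics, and models.

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(signal, sr=16000, frame_len=400, hop=160):
    """Per-frame time- and frequency-domain features, summarized by
    utterance-level statistics (mean and standard deviation)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, hop)]
    feats = []
    for f in frames:
        # Time-domain features: short-time energy and zero-crossing rate
        energy = float(np.sum(f ** 2))
        zcr = float(np.mean(np.abs(np.diff(np.sign(f)))))
        # Frequency-domain feature: spectral centroid
        spectrum = np.abs(np.fft.rfft(f))
        freqs = np.fft.rfftfreq(frame_len, 1.0 / sr)
        centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9))
        feats.append((energy, zcr, centroid))
    feats = np.array(feats)
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

# Synthetic stand-in for labelled utterances from a prosody database
rng = np.random.default_rng(0)
train_utterances = [rng.normal(size=16000) * a for a in (0.2, 0.3, 1.0, 1.2)]
train_labels = ["sad", "sad", "happy", "happy"]

# Train a binary happy/sad classifier on the feature statistics
X = np.array([extract_features(u) for u in train_utterances])
clf = SVC().fit(X, train_labels)

# Classify a new (here, synthetic) utterance
test = rng.normal(size=16000) * 1.1
print(clf.predict([extract_features(test)]))
```

In a deployment matching the abstract, the phone would record and upload the audio, and the feature extraction and classification above would run on the cloud.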

Speaker Details

Na Yang is a Ph.D. student at the University of Rochester, supervised by Professor Wendi Heinzelman in the Wireless Communications and Networking Group. Her interests lie in the areas of mobile computing, signal processing, and multimedia wireless sensor networks. She has worked on topics ranging from energy-efficient wireless image transmission and optimized camera and motion sensor placement to audio signal processing, and has published at top-tier conferences such as the IEEE Global Communications Conference (GLOBECOM) and the IEEE International Conference on Communications (ICC).

Date:
Speakers:
Na Yang
Affiliation:
Microsoft Intern