Speaker: Youngmoo Kim
Affiliation: Drexel University
Host: Sumit Basu
Date recorded: 30 July 2008
Music is perhaps the richest medium for the expression of human emotions, but the computational analysis and synthesis of emotion through music is still in its infancy. This presentation will describe several research projects that explore the relationship between musical expression and emotion under the common vision of making music more interactive and accessible for both musicians and non-musicians.
The problem of detecting and labeling emotions within music is not only computationally difficult, but often lacks well-defined answers. The absence of clear "ground truth" makes it difficult to train systems that rely on quantified labels for supervised machine learning. Recently, there have been many initiatives to use online collaborative games to collect label data, and several such games have been proposed to gather labels spanning an excerpt of music. We have developed a new game, MoodSwings, that differs in recording dynamic (per-second) labels of players' mood ratings, in keeping with the unique time-varying quality of musical mood. Furthermore, we believe these types of activities can be designed to simultaneously educate users, particularly K-12 students, about aspects of music information and acoustics. Our lab has created interactive activities illustrating two sound and acoustics concepts: musical instrument timbre and the "cocktail party problem" (isolating sound sources within a mixture). These activities also serve as instruments for collecting perceptual data on these problems across a range of parameter variation that would be difficult to achieve for large subject populations using traditional psychoacoustic evaluation.
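To illustrate the kind of time-series label data such a game yields, here is a minimal sketch, assuming hypothetical data structures (this is not the MoodSwings implementation), of aggregating per-second mood ratings from multiple players into a consensus trajectory:

```python
from collections import defaultdict
from statistics import mean

def aggregate_mood_labels(ratings):
    """Average per-second (valence, arousal) ratings across players.

    `ratings` maps a player id to a list of (second, valence, arousal)
    tuples, sampled once per second while the excerpt plays.
    Returns a list of (second, mean_valence, mean_arousal), sorted by time.
    """
    by_second = defaultdict(list)
    for player_ratings in ratings.values():
        for second, valence, arousal in player_ratings:
            by_second[second].append((valence, arousal))
    return [
        (sec, mean(v for v, _ in pts), mean(a for _, a in pts))
        for sec, pts in sorted(by_second.items())
    ]

# Two hypothetical players rating a 3-second excerpt:
ratings = {
    "p1": [(0, 0.2, 0.8), (1, 0.4, 0.6), (2, 0.6, 0.4)],
    "p2": [(0, 0.4, 0.6), (1, 0.4, 0.4), (2, 0.8, 0.2)],
}
trajectory = aggregate_mood_labels(ratings)
```

Because each rating carries a timestamp, the aggregated output is itself a time series, which is what distinguishes this style of label collection from games that assign a single label to a whole excerpt.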
At the same time, interest in music "performance" games (e.g., Guitar
©2008 Microsoft Corporation. All rights reserved.