Music-specific audio content analysis

Advances in storage technology and audio compression have made it possible to store large collections of music on personal computers and devices. To develop effective tools for browsing and searching these collections, it is necessary to go beyond treating audio as a monolithic block of samples and to create algorithms that in some sense “understand” the musical content. Although initial work in Music Information Retrieval (MIR) mainly borrowed ideas and features from speech analysis, a recent trend has been to develop music-specific feature extraction and analysis algorithms. In this talk, I will present some of my work in this area. I will also argue that in some ways music is a better domain for studying perception than the more traditional areas of image, video, and speech analysis.
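The abstract does not name specific algorithms, but a minimal sketch may help illustrate the kind of low-level feature such content-analysis systems build on. The spectral centroid below is a standard timbral descriptor (the magnitude-weighted mean frequency of a frame); the function name, frame size, and test signal are illustrative choices, not details from the talk.

```python
import numpy as np

def spectral_centroid(frame, sample_rate):
    """Magnitude-weighted mean frequency of one audio frame, in Hz."""
    spectrum = np.abs(np.fft.rfft(frame))          # magnitude spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0:
        return 0.0                                 # silent frame
    return float((freqs * spectrum).sum() / total)

# Illustrative check: a sinusoid aligned to an FFT bin, so all of its
# energy sits in one bin and the centroid equals that bin's frequency.
sr, n = 22050, 2048
f0 = 40 * sr / n                                   # ~430.66 Hz, bin-aligned
t = np.arange(n) / sr
tone = np.sin(2 * np.pi * f0 * t)
print(spectral_centroid(tone, sr))                 # close to f0
```

A brighter sound (more high-frequency energy) yields a higher centroid, which is one reason features like this are useful for timbre-based classification.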

Speaker Details

George Tzanetakis is currently an assistant professor of Computer Science at the University of Victoria in Canada. He received his PhD in Computer Science from Princeton University in May 2002 and was a postdoctoral fellow at Carnegie Mellon University in 2003. He was also the main designer of the Moodlogic audio fingerprinting algorithm. His research spans all stages of audio content analysis, including feature extraction, segmentation, and classification. He is also an active musician and has studied saxophone performance, music theory, and composition. More information can be found at: http://www.cs.uvic.ca/~gtzan.

Date:
Speakers:
George Tzanetakis
Affiliation:
University of Victoria
Jeff Running