Murat Akbacak

Murat Akbacak is a senior scientist at Microsoft. Prior to joining the speech group at Microsoft, he was with the Speech Technology and Research (STAR) Laboratory at SRI International from 2006 to 2012. He earned his Ph.D. and M.Sc. degrees in Electrical Engineering, in 2009 and 2003 respectively, at the University of Colorado at Boulder, where he proposed the Environmental Sniffing framework, a novel acoustic condition tracking approach, and applied it to robust ASR and audio indexing/search in noisy and acoustically heterogeneous collections. During his Ph.D., he also proposed a novel approach that employs multilingual hybrid keyword representations for audio search in limited-resource languages. At SRI, he worked on several US government-funded projects, including DARPA GALE, TRANSTAC, and RATS, and IARPA ALADDIN and BEST. He led the keyword spotting effort under the RATS project and the multimedia speech and audio processing effort under the ALADDIN project. His areas of expertise include multilingual and robust speech recognition and retrieval, language modeling, conversational understanding, acoustic condition tracking, acoustic event detection in multimedia recordings, and dialect and language recognition, with his greatest impact being the introduction of robust and adaptive speech retrieval methods. He has authored over thirty peer-reviewed papers in these areas and has been a regular reviewer for all major speech processing conferences and journals since 2006. During his Ph.D., he founded the ISCA student branch, which has been very successful in engaging graduate students in ISCA activities. He was a local organizer of the Spoken Language Technology (SLT) Workshop in Berkeley in 2010 and is co-chairing the same workshop taking place in South Lake Tahoe in 2014. He has been serving as an Associate Editor for the IEEE Signal Processing Society since 2014.
He has been an Adjunct Professor in the Electrical and Computer Science department at UT Dallas and a Ph.D. co-advisor at Stanford University since 2011.



Book chapters:

  • M. Akbacak, J.H.L. Hansen, “DSP in Mobile and Vehicle Systems”, Springer Publishing, 2006.

  • J.H.L. Hansen, X. Zhang, M. Akbacak, U. Yapanel, B. Pellom, W. Ward, “DSP in Mobile and Vehicle Systems”, Kluwer Publishing, 2004.


Journal articles:
  • M. Akbacak, J.H.L. Hansen, “Spoken Proper Name Retrieval for Limited Resource Languages Using Multilingual Hybrid Representations”, IEEE Trans. Audio, Speech and Language Processing, vol. 18, no. 6, pp. 1486-1495, Aug. 2010.

  • M. Akbacak, J.H.L. Hansen, “Environmental Sniffing: Noise Knowledge Estimation for Robust Speech Systems”, IEEE Trans. Audio, Speech and Language Processing, vol. 15, no. 2, pp. 465-477, Feb. 2007.


Conference papers:
  • M. Akbacak, D. Hakkani-Tur, G. Tur, “Rapidly Building Domain-Specific Entity-Centric Language Models Using Semantic Web Knowledge Sources”, Interspeech Conference, 2014.

  • P. Schulam, M. Akbacak, “Diagnostic Techniques for Spoken Keyword Discovery”, Interspeech Conference, 2014.

  • S. Pancoast, M. Akbacak, “Softening Quantization in Bag-of-Audio-Words”, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2014.

  • D. Castan, M. Akbacak, “Indexing Multimedia Documents with Acoustic Concept Recognition Lattices”, Interspeech Conference, 2013.

  • D. Castan, M. Akbacak, “Segmental-GMM Approach based on Acoustic Concept Segmentation”, Speech, Language, and Audio in Multimedia (SLAM) Workshop, Interspeech Conference, 2013.

  • M. Akbacak, L. Burget, W. Wang, J. van Hout, “Rich System Combination for Keyword Spotting in Noisy and Acoustically Heterogeneous Audio Streams”, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2013.

  • S. Pancoast, M. Akbacak, “N-gram Extension for Bag-of-Audio-Words”, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2013.

  • J. van Hout, M. Akbacak, D. Castan, E. Yeh, M. Sanchez, “Extracting Spoken and Acoustic Concepts for Multimedia Event Detection”, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2013.

  • M. Akbacak, R. C. Bolles, J. B. Burns, M. Eliot, A. Heller, J. A. Herson, D. C. Koelma, X. Li, M. Mazloom, G. K. Myers, R. Nallapati, S. Pancoast, R. Revatia, A. W. M. Smeulders, P. Sharma, C. G. M. Snoek, C. Sun, R. Trichet, K. E. A. van de Sande, and E. Yeh, “The TRECVID SESAME MED system,” in Proc. TRECVID Workshop, Gaithersburg, USA, 2012.

  • S. Pancoast, M. Akbacak, M. Sanchez, “Supervised Acoustic Concept Extraction for Multimedia Event Detection,” ACM Multimedia Workshop on Audio and Multimedia Methods for Large-Scale Video Analysis (AMVA), 2012.

  • S. Pancoast, M. Akbacak, “Bag-of-Audio-Words Approach for Multimedia Event Classification,” Interspeech Conference, 2012.

  • P. Karanasou, L. Burget, D. Vergyri, M. Akbacak, A. Mandal, “Discriminatively trained phoneme confusion model for keyword spotting,” Interspeech Conference, 2012.

  • M. Akbacak, D. Vergyri, A. Stolcke, N. Scheffer, and A. Mandal, “Effective Arabic Dialect Classification Using Diverse Phonotactic Models,” in Proceedings of Interspeech, 2011.

  • A. Mandal, D. Vergyri, M. Akbacak, C. Richey, and A. Kathol, “Acoustic data sharing for Afghan and Persian languages,” in Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2011.

  • J. Zheng, A. Mandal, X. Lei, M. Frandsen, N. F. Ayan, D. Vergyri, W. Wang, M. Akbacak, and K. Precoda, “Implementing SRI's Pashto speech-to-speech translation system on a smartphone,” in 2010 IEEE Workshop on Spoken Language Technology, Berkeley, CA, pp. 121-126, 2010.

  • A. Stolcke, M. Akbacak, L. Ferrer, S. Kajarekar, C. Richey, N. Scheffer, and E. Shriberg, “Improving language recognition with multilingual phone recognition and speaker adaptation transforms,” in Proc. Odyssey Speaker and Language Recognition Workshop, Brno, Czech Republic, pp. 256-262, June 2010.

  • Y. Uchida, S. Sakazawa, M. Agrawal, M. Akbacak, “KDDI Labs and SRI International at TRECVID 2010: Content-Based Copy Detection”, in NIST TRECVID 2010 Evaluation Workshop, November 2010, Gaithersburg, MD.

  • M. Akbacak, H. Franco, M. Frandsen, S. Hasan, H. Jameel, A. Kathol, S. Khadivi, X. Lei, A. Mandal, S. Mansour, K. Precoda, C. Richey, D. Vergyri, W. Wang, M. Yang, and J. Zheng, “Recent advances in SRI's IraqComm(tm) Iraqi Arabic-English speech-to-speech translation system,” in Proc. IEEE ICASSP, Taipei, pp. 4809-4813, April 2009.

  • M. Akbacak, D. Vergyri, and A. Stolcke, “Open-vocabulary spoken term detection using graphone-based hybrid recognition systems,” in Proc. IEEE ICASSP, Las Vegas, pp. 5240-5243, March 2008.

  • E. Shriberg, L. Ferrer, S. Kajarekar, N. Scheffer, A. Stolcke, M. Akbacak, “Detecting Nonnative Speech Using Speaker Recognition Approaches”, Odyssey Speaker and Language Recognition Workshop 2008, Stellenbosch, South Africa.

  • D. Vergyri, I. Shafran, A. Stolcke, R. R. Gadde, M. Akbacak, B. Roark, W. Wang, “The SRI/OGI 2006 Spoken Term Detection System”, Interspeech/Eurospeech 2007, Antwerp, Belgium.

  • W. Kim, M. Akbacak, J. H. L. Hansen, “Advances in SpeechFind: CRSS-UTD Spoken Document Retrieval System,” ACM SIGIR 2007 Workshop on Searching Spontaneous Conversational Speech, pp. 23-28, Amsterdam, Netherlands, July 2007.

  • M. Akbacak, J.H.L. Hansen, “A Dynamic Fusion Method for Robust Multilingual Spoken Document Retrieval Systems Having Tiered Resources”, Interspeech/ICSLP 2006, Pittsburgh, PA, USA, September 2006.

  • M. Akbacak, J.H.L. Hansen, “Spoken Proper Name Retrieval in Audio Streams for Limited Resource Languages via Lattice Based Search Using Hybrid Representations”, ICASSP 2006, Toulouse, France, May 2006.

  • M. Akbacak, Y. Gao, L. Gu, H.-K. J. Kuo, “Rapid Transition to New Spoken Dialogue Domains: Language Model Training Using Knowledge from Previous Domain Applications and Web Text Resources”, EUROSPEECH 2005, Lisbon, Portugal, September 2005.

  • M. Akbacak, J.H.L. Hansen, “General Issues in Environmental Noise Tracking for Robust In-Vehicle Speech Applications: Supervised vs. Unsupervised Acoustic Noise Analysis”, 2005 Workshop on DSP for In-Vehicle and Mobile Systems, Sesimbra, Portugal, September 2005.

  • J.H.L. Hansen, M. Akbacak, U. Yapanel, X. Zhang, B. Pellom, W. Ward, “Robust Speech Processing for In-Vehicle Voice Navigation Systems”, 18th International Congress on Acoustics (ICA 2004), Kyoto, Japan, April 2004.

  • M. Akbacak, J.H.L. Hansen, “Environmental Sniffing: Robust Digit Recognition in an In-Vehicle Environment”, pp. 2177-2180, EUROSPEECH 2003, Geneva, Switzerland, September 2003.

  • M. Akbacak, J.H.L. Hansen, “Environmental Sniffing: Noise Knowledge Estimation for Robust Speech Systems”, vol. 2, pp. 113-116, ICASSP 2003.

  • J.H.L. Hansen, X. Zhang, M. Akbacak, U. Yapanel, B. Pellom, W. Ward, “CU-Move: Advances in In-Vehicle Speech Systems for Route Navigation”, paper 6.5 (pp. 1-6), IEEE Workshop on DSP in Mobile and Vehicular Systems, Nagoya, Japan, April 2003.

  • J.H.L. Hansen, B. Zhou, M. Akbacak, R. Sarikaya, B.L. Pellom, “Audio Stream Phrase Recognition for a National Gallery of the Spoken Word: One Small Step”, ICSLP 2000, pp. 1089-1092.



Microsoft Silicon Valley
Sunnyvale, CA 94089