Publication Details
Application of speaker- and language identification state-of-the-art techniques for emotion recognition
Kockmann Marcel (DCGM FIT BUT)
Burget Lukáš, doc. Ing., Ph.D. (DCGM FIT BUT)
Černocký Jan, prof. Dr. Ing. (DCGM FIT BUT)
- http://www.fit.vutbr.cz/research/groups/speech/publi/2011/kockman_article_speech%20communication_53_elsevier2011.pdf PDF
Emotion recognition; Gaussian mixture models; Maximum-mutual-information; Intersession variability compensation; Score-level fusion
The authors show that feature extraction and statistical modeling methods commonly used in speaker and language recognition can also be applied successfully to emotion recognition.
This article describes our efforts to transfer feature extraction and statistical modeling techniques from the fields of speaker and language identification to the related field of emotion recognition. We give detailed insight into our acoustic and prosodic feature extraction and show how to apply Gaussian Mixture Modeling techniques on top of it. We focus on different flavors of Gaussian Mixture Models (GMMs), including more sophisticated approaches such as discriminative training with the Maximum-Mutual-Information (MMI) criterion and InterSession Variability (ISV) compensation. Both techniques show superior performance in language and speaker identification. Furthermore, we combine multiple system outputs by score-level fusion to exploit the complementary information in diverse systems. Our proposal is evaluated in several experiments on the FAU Aibo Emotion Corpus containing non-acted spontaneous emotional speech. Within the Interspeech 2009 Emotion Challenge, we achieved the best results for the 5-class task of the Open Performance Sub-Challenge, with an unweighted average recall of 41.7%. Additional experiments on the acted Berlin Database of Emotional Speech show the capability of intersession variability compensation for emotion recognition.
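The score-level fusion mentioned in the abstract can be illustrated with a minimal sketch: each subsystem (e.g. an acoustic GMM and a prosodic GMM) produces a vector of per-class log-likelihood scores, a weighted linear combination of the vectors is formed, and a softmax converts the fused scores into class posteriors. The weights and the score values below are illustrative assumptions, not the trained fusion parameters from the paper; the five class names follow the 5-class task of the Interspeech 2009 Emotion Challenge.

```python
import math

def fuse_scores(system_scores, weights):
    """Linearly combine per-class score vectors from several subsystems."""
    n_classes = len(system_scores[0])
    fused = [0.0] * n_classes
    for w, scores in zip(weights, system_scores):
        for k in range(n_classes):
            fused[k] += w * scores[k]
    return fused

def softmax(scores):
    """Turn fused scores into class posteriors (numerically stable)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Illustrative per-class log-likelihoods for the 5 FAU Aibo classes
# (Anger, Emphatic, Neutral, Positive, Rest) -- made-up numbers.
acoustic = [-10.2, -9.8, -9.5, -11.0, -10.7]   # hypothetical acoustic GMM scores
prosodic = [-8.9, -9.1, -9.6, -9.4, -9.0]      # hypothetical prosodic GMM scores

posterior = softmax(fuse_scores([acoustic, prosodic], [0.6, 0.4]))
predicted = max(range(len(posterior)), key=posterior.__getitem__)
```

In practice the fusion weights would be trained on a held-out set (the paper uses logistic-regression-style calibration and fusion); here they are fixed constants purely to show the mechanics.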
@ARTICLE{FITPUB9676, author = "Marcel Kockmann and Luk\'{a}\v{s} Burget and Jan \v{C}ernock\'{y}", title = "Application of speaker- and language identification state-of-the-art techniques for emotion recognition", journal = "Speech Communication", volume = 53, number = 9, pages = "1172--1185", year = 2011, publisher = "Elsevier Science", ISSN = "0167-6393", doi = "10.1016/j.specom.2011.01.007", language = "english", url = "https://www.fit.vut.cz/research/publication/9676" }