Publication Details
Non-parametric Speaker Turn Segmentation of Meeting Data
Burget Lukáš, doc. Ing., Ph.D. (DCGM FIT BUT)
Černocký Jan, prof. Dr. Ing. (DCGM FIT BUT)
speech processing, feature extraction, speaker detection, meeting data
This paper describes non-parametric speaker turn segmentation of meeting data.
An extension of the conventional speaker segmentation framework is presented for a scenario in which a number of microphones record the activity of speakers present at a meeting (one microphone per speaker). Although each microphone can receive speech from both the participant wearing the microphone (local speech) and other participants (cross-talk), the recorded audio can be broadly classified into three categories: local speech, cross-talk, and silence. This paper proposes a technique which takes into account cross-correlations, the values of their maxima, and energy differences as features to identify and segment speaker turns. In particular, we have used classical cross-correlation functions, time smoothing, and in part temporal constraints to sharpen and disambiguate timing differences between microphone channels that may be dominated by noise and reverberation. Experimental results show that the proposed technique can be successfully used for speaker segmentation of data collected from a number of different setups.
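To make the feature extraction concrete, the sketch below (not the authors' implementation; frame length, hop, lag limit, and normalization are illustrative assumptions) computes, per frame, the kind of quantities the abstract names: the cross-correlation between two personal-microphone channels, the value and lag of its maximum, and the log-energy difference between the channels. The time smoothing and temporal constraints mentioned in the abstract are omitted here.

import numpy as np

def frame_features(ch_a, ch_b, frame_len=400, hop=160, max_lag=50):
    # ch_a, ch_b: time-aligned 1-D sample arrays from two personal microphones
    # (parameter values are assumptions, e.g. 25 ms frames / 10 ms hop at 16 kHz).
    feats = []
    n_frames = 1 + (len(ch_a) - frame_len) // hop
    for i in range(n_frames):
        a = ch_a[i * hop: i * hop + frame_len]
        b = ch_b[i * hop: i * hop + frame_len]
        # full cross-correlation, restricted to lags within +/- max_lag samples
        xcorr = np.correlate(a, b, mode="full")
        center = frame_len - 1                       # index of zero lag
        window = xcorr[center - max_lag: center + max_lag + 1]
        peak = int(np.argmax(np.abs(window)))
        lag = peak - max_lag                         # timing difference between channels
        # normalized value of the cross-correlation maximum
        peak_val = window[peak] / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        # log-energy difference between the two channels
        e_diff = np.log(np.sum(a ** 2) + 1e-12) - np.log(np.sum(b ** 2) + 1e-12)
        feats.append((lag, peak_val, e_diff))
    return np.array(feats)

A positive energy difference together with a near-zero lag would then be indicative of local speech on channel A, while a large lag and negative energy difference would suggest cross-talk, which is the intuition behind using these features for segmenting speaker turns.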
@INPROCEEDINGS{FITPUB7978,
  author    = "Petr Motl\'{i}\v{c}ek and Luk\'{a}\v{s} Burget and Jan \v{C}ernock\'{y}",
  title     = "Non-parametric Speaker Turn Segmentation of Meeting Data",
  pages     = "657--660",
  booktitle = "Interspeech'2005 - Eurospeech - 9th European Conference on Speech Communication and Technology",
  journal   = "European Speech Communication",
  volume    = 2005,
  number    = 9,
  year      = 2005,
  location  = "Lisbon, PT",
  publisher = "International Speech Communication Association",
  ISSN      = "1018-4074",
  language  = "english",
  url       = "https://www.fit.vut.cz/research/publication/7978"
}