Publication Details
Speaker embeddings by modeling channel-wise correlations
Stafylakis Themos
Rohdin Johan A., Dr. (DCGM FIT BUT)
Burget Lukáš, doc. Ing., Ph.D. (DCGM FIT BUT)
speaker recognition, style-transfer, deep learning
Speaker embeddings extracted with deep 2D convolutional neural networks are typically modeled as projections of first- and second-order statistics of channel-frequency pairs onto a linear layer, using either average or attentive pooling along the time axis. In this paper we examine an alternative pooling method, where pairwise correlations between channels for given frequencies are used as statistics. The method is inspired by style-transfer methods in computer vision, where the style of an image, modeled by the matrix of channel-wise correlations, is transferred to another image, in order to produce a new image having the style of the first and the content of the second. By drawing analogies between image style and speaker characteristics, and between image content and phonetic sequence, we explore the use of such channel-wise correlation features to train a ResNet architecture in an end-to-end fashion. Our experiments on VoxCeleb demonstrate the effectiveness of the proposed pooling method in speaker recognition.
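The pooling idea described in the abstract can be illustrated with a minimal PyTorch-style sketch. This is not the authors' implementation; the module name, the (batch, channels, frequency, time) tensor layout, the upper-triangular flattening, and the final linear projection are assumptions made only to show how per-frequency channel-wise correlations could replace mean/std pooling.

    import torch
    import torch.nn as nn

    class ChannelCorrelationPooling(nn.Module):
        """Sketch: for each frequency bin, pairwise correlations between
        CNN channels (computed along time) serve as utterance-level
        statistics instead of average or attentive pooling."""

        def __init__(self, channels, freq_bins, emb_dim):
            super().__init__()
            # Keep only the upper triangle of each symmetric C x C matrix.
            corr_feats = freq_bins * channels * (channels + 1) // 2
            self.proj = nn.Linear(corr_feats, emb_dim)

        def forward(self, x):
            # x: (batch, channels, freq, time) feature map from the 2D CNN
            b, c, f, t = x.shape
            x = x - x.mean(dim=-1, keepdim=True)          # zero mean along time
            x = x / (x.std(dim=-1, keepdim=True) + 1e-8)  # unit variance along time
            # Per-frequency channel correlation matrices: (batch, freq, C, C)
            corr = torch.einsum('bcft,bdft->bfcd', x, x) / t
            iu = torch.triu_indices(c, c)
            corr = corr[:, :, iu[0], iu[1]]               # upper triangle only
            return self.proj(corr.flatten(start_dim=1))   # speaker embedding

In a ResNet-based extractor, such a module would sit where statistics pooling normally does, between the last convolutional block and the embedding layer, and the whole network would be trained end-to-end as described in the paper.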
@INPROCEEDINGS{FITPUB12596,
   author = "Themos Stafylakis and A. Johan Rohdin and Luk\'{a}\v{s} Burget",
   title = "Speaker embeddings by modeling channel-wise correlations",
   pages = "501--505",
   booktitle = "Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH",
   journal = "Proceedings of Interspeech - on-line",
   volume = 2021,
   number = 8,
   year = 2021,
   location = "Brno, CZ",
   publisher = "International Speech Communication Association",
   ISSN = "1990-9772",
   doi = "10.21437/Interspeech.2021-1442",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/12596"
}