Publication Details
MultiSV: Dataset for Far-Field Multi-Channel Speaker Verification
Mošner Ladislav (DCGM FIT BUT)
Plchot Oldřich, Ing., Ph.D. (DCGM FIT BUT)
Burget Lukáš, doc. Ing., Ph.D. (DCGM FIT BUT)
Černocký Jan, prof. Dr. Ing. (DCGM FIT BUT)
Multi-channel, speaker verification, MultiSV, dataset, beamforming
Motivated by the unconsolidated data situation and the lack of a standard benchmark in the field, we complement our previous efforts and present a comprehensive corpus designed for training and evaluating text-independent multi-channel speaker verification systems. It can also be readily used for experiments with dereverberation, denoising, and speech enhancement. We tackle the ever-present lack of multi-channel training data by simulating multi-channel recordings on top of clean parts of the VoxCeleb corpus. The development and evaluation trials are based on a retransmitted Voices Obscured in Complex Environmental Settings (VOiCES) corpus, which we modified to provide multi-channel trials. We publish full recipes that create the MultiSV dataset from public sources, and we report results with two of our multi-channel speaker verification systems whose neural-network-based beamforming relies either on predicting ideal binary masks or on the more recent Conv-TasNet.
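The simulation idea mentioned in the abstract (creating far-field multi-channel training data from clean single-channel speech) can be sketched roughly as follows: convolve the clean utterance with one room impulse response per microphone and mix in noise at a target SNR. This is only an illustrative sketch, not the actual MultiSV recipe; the function name, its signature, and the noise handling are assumptions, and the real pipeline uses simulated room impulse responses rather than the toy RIRs shown in the usage example.

```python
import numpy as np

def simulate_multichannel(clean, rirs, noise=None, snr_db=15.0, rng=None):
    """Sketch of far-field multi-channel data simulation (hypothetical helper):
    convolve a clean utterance with one RIR per microphone, then add noise
    scaled to a requested SNR. `clean` is a 1-D float array, `rirs` a list of
    1-D impulse responses (one per channel)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    # Reverberate: one convolution per channel, truncated to the clean length.
    chans = [np.convolve(clean, rir)[: len(clean)] for rir in rirs]
    multi = np.stack(chans)  # shape: (n_channels, n_samples)
    if noise is None:
        # Placeholder noise; a real recipe would draw from a noise corpus.
        noise = rng.standard_normal(multi.shape)
    # Scale the noise so that 10*log10(signal_power / noise_power) == snr_db.
    sig_pow = np.mean(multi ** 2)
    noise_pow = np.mean(noise ** 2)
    scale = np.sqrt(sig_pow / (noise_pow * 10 ** (snr_db / 10)))
    return multi + scale * noise
```

In a realistic setup the RIRs would come from a room simulator and encode microphone-array geometry, which is what gives the beamforming front-end its spatial cues.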
@INPROCEEDINGS{FITPUB12785,
  author    = "Ladislav Mo\v{s}ner and Old\v{r}ich Plchot and Luk\'{a}\v{s} Burget and Jan \v{C}ernock\'{y}",
  title     = "{MultiSV}: Dataset for Far-Field Multi-Channel Speaker Verification",
  pages     = "7977--7981",
  booktitle = "ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings",
  year      = 2022,
  location  = "Singapore, SG",
  publisher = "IEEE Signal Processing Society",
  ISBN      = "978-1-6654-0540-9",
  doi       = "10.1109/ICASSP43922.2022.9746833",
  language  = "english",
  url       = "https://www.fit.vut.cz/research/publication/12785"
}