Publication Details
Robust Speaker Recognition Over Varying Channels
Brümmer Niko (Agnitio)
Reynolds Douglas (MIT)
Kenny Patrick (CRIM)
Pelecanos Jason (IBM Watson)
Vogt Robbie (QUT)
Castaldo Fabio (POLITO)
Dehak Najim (CRIM)
Dehak Reda (EPITA)
Glembek Ondřej, Ing., Ph.D. (DCGM FIT BUT)
Karam Zahi (MIT)
Noecker John Jr. (DUQ)
Na Hye Young (GMU)
Costin Ciprian C. (UAIC)
Hubeika Valiantsina, Ing. (DCGM FIT BUT)
Kajarekar Sachin, Ph.D. (SRI)
Scheffer Nicolas (SRI)
Černocký Jan, prof. Dr. Ing. (DCGM FIT BUT)
speaker recognition
This report covers the project Robust Speaker Recognition Over Varying Channels.
Speaker recognition is nowadays relatively mature, with a basic scheme in which a speaker model is trained using target-speaker speech and speech from a large number of non-target speakers. However, the speech from non-target speakers is typically used only to estimate the general speech distribution (e.g. the UBM); it is not used to find the "directions" important for discriminating between speakers. This scheme is reliable when the training and test data come from the same channel, but all current speaker recognition systems are prone to errors when the channel changes (for example, from IP telephony to a mobile phone). In speaker recognition, "channel" variability can also include the linguistic content of the message, emotions, etc.; none of these factors should influence a speaker recognition system. Several techniques, such as feature mapping, eigen-channel adaptation, and NAP (nuisance attribute projection), have been devised in recent years to overcome channel variability. These techniques make use of the large amount of data from many speakers to find, and ignore, directions with high within-speaker variability. However, they still do not use the data to directly search for the directions important for discriminating between speakers.
In an attempt to overcome the above-mentioned problem, the research will concentrate on utilizing the large amount of training data currently available to the research community to derive the information that helps discriminate among speakers and to discard the information that does not. We propose direct identification of the directions in model parameter space that are most important for discrimination between speakers. According to our experience from speech and language recognition, the use of discriminative training should significantly improve the performance of an acoustic SID system. We also expect that discriminative training will make explicit modeling of channel variability unnecessary.
The research will build on an excellent baseline: the STBU system for the NIST 2006 SRE evaluation (NIST rules prohibit us from disclosing the exact position of the system in the evaluation).
The data used during the workshop will include NIST SRE data (telephone), but we will not ignore requests from the security/defense community: the investigated techniques will also be evaluated on other data sources (meetings, web radio, etc.) as well as on cross-channel conditions.
@TECHREPORT{FITPUB8893,
  author    = "Luk\'{a}\v{s} Burget and Niko Br{\"{u}}mmer and Douglas Reynolds and Patrick Kenny and Jason Pelecanos and Robbie Vogt and Fabio Castaldo and Najim Dehak and Reda Dehak and Ond\v{r}ej Glembek and Zahi Karam and Jr. John Noecker and Young Hye Na and C. Ciprian Costin and Valiantsina Hubeika and Sachin Kajarekar and Nicolas Scheffer and Jan \v{C}ernock\'{y}",
  title     = "Robust Speaker Recognition Over Varying Channels",
  pages     = 81,
  year      = 2008,
  location  = "Baltimore, US",
  publisher = "Johns Hopkins University",
  language  = "english",
  url       = "https://www.fit.vut.cz/research/publication/8893"
}