Publication Details
Neural Target Speech Extraction: An overview
Žmolíková Kateřina (DCGM FIT BUT)
Delcroix Marc (NTT)
Ochiai Tsubasa (NTT)
Černocký Jan, prof. Dr. Ing. (DCGM FIT BUT)
Kinoshita Keisuke (NTT)
Yu Dong (Tencent AI Lab)
neural, speech, extraction
Humans can listen to a target speaker even in challenging acoustic conditions with noise, reverberation, and interfering speakers. This phenomenon is known as the cocktail party effect. For decades, researchers have worked to approach the listening ability of humans. One critical issue is handling interfering speakers, because the target and nontarget speech signals share similar characteristics, which complicates their discrimination. Target speech/speaker extraction (TSE) isolates the speech signal of a target speaker from a mixture of several speakers, with or without noise and reverberation, using clues that identify the speaker in the mixture. Such clues might be a spatial clue indicating the direction of the target speaker, a video of the speaker's lips, or a prerecorded enrollment utterance from which the speaker's voice characteristics can be derived. TSE is an emerging field of research that has received increased attention in recent years because it offers a practical approach to the cocktail party problem and involves such aspects of signal processing as audio, visual, and array processing, as well as deep learning. This article focuses on recent neural-based approaches and presents an in-depth overview of TSE. We guide readers through the different major approaches, emphasizing the similarities among frameworks and discussing potential future directions.
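The abstract mentions conditioning extraction on clues such as a prerecorded enrollment utterance. Below is a minimal, illustrative PyTorch sketch of how such enrollment-based conditioning is commonly realized in neural TSE systems: a speaker encoder turns the enrollment utterance into an embedding, which is then concatenated to each frame of the mixture inside a mask-estimation network. This is not the architecture described in the article; all module names, layer sizes, and spectrogram dimensions are assumptions made for the example.

# Minimal illustrative sketch of enrollment-conditioned neural TSE
# (assumed names and dimensions; not the article's exact architecture).
import torch
import torch.nn as nn


class SpeakerEncoder(nn.Module):
    """Maps an enrollment magnitude spectrogram to a fixed-size speaker embedding."""

    def __init__(self, n_freq=257, emb_dim=128):
        super().__init__()
        self.rnn = nn.LSTM(n_freq, 256, batch_first=True)
        self.proj = nn.Linear(256, emb_dim)

    def forward(self, enroll_spec):            # (batch, time, n_freq)
        out, _ = self.rnn(enroll_spec)
        return self.proj(out.mean(dim=1))      # average over time -> (batch, emb_dim)


class ExtractionNet(nn.Module):
    """Estimates a target-speaker mask, conditioned on the speaker embedding."""

    def __init__(self, n_freq=257, emb_dim=128):
        super().__init__()
        self.rnn = nn.LSTM(n_freq + emb_dim, 512, batch_first=True)
        self.mask = nn.Sequential(nn.Linear(512, n_freq), nn.Sigmoid())

    def forward(self, mix_spec, spk_emb):      # mix_spec: (batch, time, n_freq)
        # Concatenate the speaker embedding to every time frame of the mixture.
        emb = spk_emb.unsqueeze(1).expand(-1, mix_spec.size(1), -1)
        out, _ = self.rnn(torch.cat([mix_spec, emb], dim=-1))
        return self.mask(out) * mix_spec       # masked magnitude spectrogram


if __name__ == "__main__":
    spk_enc, tse = SpeakerEncoder(), ExtractionNet()
    enroll = torch.randn(1, 300, 257).abs()    # dummy enrollment spectrogram
    mixture = torch.randn(1, 500, 257).abs()   # dummy two-speaker mixture spectrogram
    target_est = tse(mixture, spk_enc(enroll))
    print(target_est.shape)                    # torch.Size([1, 500, 257])

Other clue types mentioned in the abstract (spatial direction, lip video) would replace or complement the speaker encoder with a corresponding clue encoder, while the conditioning mechanism stays structurally similar.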
@ARTICLE{FITPUB13059,
  author   = "Kate\v{r}ina \v{Z}mol\'{i}kov\'{a} and Marc Delcroix and Tsubasa Ochiai and Jan \v{C}ernock\'{y} and Keisuke Kinoshita and Dong Yu",
  title    = "Neural Target Speech Extraction: An overview",
  pages    = "8--29",
  journal  = "IEEE Signal Processing Magazine",
  volume   = 40,
  number   = 3,
  year     = 2023,
  ISSN     = "1558-0792",
  doi      = "10.1109/MSP.2023.3240008",
  language = "english",
  url      = "https://www.fit.vut.cz/research/publication/13059"
}