Publication Details
Eat: Enhanced ASR-TTS for Self-Supervised Speech Recognition
Baskar Murali K. (DCGM FIT BUT)
Burget Lukáš, doc. Ing., Ph.D. (DCGM FIT BUT)
Watanabe Shinji, Dr. (JHU)
Astudillo Ramon (IBM Watson)
Černocký Jan, prof. Dr. Ing. (DCGM FIT BUT)
cycle-consistency, self-supervision, sequence-to-sequence, speech recognition
Self-supervised ASR-TTS models suffer in out-of-domain data conditions. Here we propose an enhanced ASR-TTS (EAT) model that incorporates two main features: 1) the ASR→TTS direction is equipped with a language model reward to penalize the ASR hypotheses before forwarding them to TTS; 2) in the TTS→ASR direction, a hyper-parameter is introduced to scale the attention context from synthesized speech before sending it to ASR, in order to handle out-of-domain data. Training strategies and the effectiveness of the EAT model are explored under out-of-domain data conditions. The results show that EAT significantly reduces the performance gap between supervised and self-supervised training, by 2.6% and 2.7% absolute on Librispeech and BABEL, respectively.
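Both EAT mechanisms reduce to simple tensor operations. The following is a minimal PyTorch sketch, assuming toy stand-ins for the ESPnet-based modules used in the paper; the names (asr_to_tts_loss, scale_attention_context, gamma, lm_scores) are illustrative, not the authors' actual API, and the softmax over LM scores is one plausible way to turn log-probabilities into a reward, not necessarily the paper's exact formulation.

import torch

def asr_to_tts_loss(tts_losses: torch.Tensor, lm_scores: torch.Tensor) -> torch.Tensor:
    """ASR->TTS: weight the TTS reconstruction loss of each N-best ASR
    hypothesis by a language-model reward, so implausible hypotheses are
    penalized before they drive the TTS reconstruction (hedged sketch)."""
    rewards = torch.softmax(lm_scores, dim=0)  # LM log-probs -> normalized weights
    return (rewards * tts_losses).sum()

def scale_attention_context(context: torch.Tensor, gamma: float = 0.5) -> torch.Tensor:
    """TTS->ASR: scale the attention context computed from synthesized
    speech by a hyper-parameter gamma before the ASR decoder consumes it,
    softening the mismatch on out-of-domain data (gamma value is illustrative)."""
    return gamma * context

if __name__ == "__main__":
    # Four ASR hypotheses: per-hypothesis TTS losses and LM log-probabilities.
    tts_losses = torch.tensor([2.1, 1.7, 3.0, 2.4])
    lm_scores = torch.tensor([-12.0, -9.5, -20.0, -14.0])
    print(asr_to_tts_loss(tts_losses, lm_scores))   # LM-reward-weighted TTS loss

    context = torch.randn(1, 50, 256)  # (batch, frames, encoder dim)
    print(scale_attention_context(context).shape)   # context down-weighted by gamma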
@INPROCEEDINGS{FITPUB12524,
  author    = "K. Murali Baskar and Luk\'{a}\v{s} Burget and Shinji Watanabe and Ramon Astudillo and Jan \v{C}ernock\'{y}",
  title     = "Eat: Enhanced ASR-TTS for Self-Supervised Speech Recognition",
  pages     = "6753--6757",
  booktitle = "ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
  year      = 2021,
  location  = "Toronto, Ontario, CA",
  publisher = "IEEE Signal Processing Society",
  ISBN      = "978-1-7281-7605-5",
  doi       = "10.1109/ICASSP39728.2021.9413375",
  language  = "english",
  url       = "https://www.fit.vut.cz/research/publication/12524"
}