Publication Details
Unsupervised Language Model Adaptation for Speech Recognition with no Extra Resources
Beneš Karel (FIT)
Irie Kazuki (RWTH)
Beck Eugen (RWTH)
Schlüter Ralf, Dr., AD (RWTH)
Ney Hermann (RWTH)
speech recognition
Classically, automatic speech recognition (ASR) systems are decomposed into acoustic models and language models (LMs). LMs exploit linguistic structure on a purely textual level and usually contribute strongly to an ASR system's performance. They are estimated on large amounts of textual data covering the target domain. However, most utterances cover more specific topics, which e.g. influence the vocabulary used. It is therefore desirable to adjust the LM to the topic of the utterance. Previous work achieves this by crawling extra data from the web or by using significant amounts of previous speech data to train topic-specific LMs. We propose a way of adapting the LM directly on the target utterance to be recognized. The corresponding adaptation needs to be done in an unsupervised or automatically supervised way based on the speech input. To deal robustly with the resulting recognition errors, we employ topic encodings from the recently proposed Subspace Multinomial Model. This model also avoids any need for explicit topic labelling during training or recognition, making the proposed method straightforward to use. We demonstrate the performance of the method on the LibriSpeech corpus, which consists of read fiction books, and we discuss its behaviour qualitatively.
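To make the topic-encoding step concrete, the following is a minimal sketch (not the authors' code) of how a Subspace Multinomial Model produces an utterance embedding from a first-pass hypothesis: the SMM models the word distribution of a document d as softmax(m + T w_d), and w_d is fit by gradient ascent on the multinomial log-likelihood, with no topic labels required. All names and values here (V, K, m, T, n_steps, lr, the toy data) are illustrative assumptions, not taken from the paper.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def smm_embedding(counts, m, T, n_steps=100, lr=0.1):
        """Fit the low-dimensional document vector w of an SMM.

        counts: bag-of-words counts from the first-pass ASR hypothesis (size V).
        m:      shared bias vector (size V), assumed already trained.
        T:      subspace matrix (V x K), assumed already trained.
        """
        w = np.zeros(T.shape[1])
        n = max(counts.sum(), 1.0)
        for _ in range(n_steps):
            probs = softmax(m + T @ w)
            # gradient of sum_v counts_v * log probs_v with respect to w
            grad = T.T @ (counts - counts.sum() * probs)
            w += lr * grad / n  # normalize by document length for stability
        return w

    # Toy usage with random "trained" parameters: V=1000 vocabulary, K=8 subspace.
    rng = np.random.default_rng(0)
    V, K = 1000, 8
    m = rng.normal(size=V) * 0.01
    T = rng.normal(size=(V, K)) * 0.01
    first_pass_counts = rng.multinomial(200, softmax(rng.normal(size=V)))
    w = smm_embedding(first_pass_counts, m, T)
    print(w)  # utterance-level topic encoding, used to condition the adapted LM

In the setting the abstract describes, such an embedding would be extracted from the (possibly erroneous) first-pass transcript of the target utterance and fed to the LM as an additional conditioning input before rescoring; since w lives in a low-dimensional subspace, it degrades gracefully when the hypothesis contains recognition errors.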
@INPROCEEDINGS{FITPUB12102,
   author = "Karel Bene\v{s} and Kazuki Irie and Eugen Beck and Ralf Schl{\"{u}}ter and Hermann Ney",
   title = "Unsupervised Language Model Adaptation for Speech Recognition with no Extra Resources",
   pages = "954--957",
   booktitle = "Proceedings of DAGA 2019",
   year = 2019,
   location = "Rostock, DE",
   publisher = "DEGA Head office, Deutsche Gesellschaft f{\"{u}}r Akustik",
   ISBN = "978-3-939296-14-0",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/12102"
}