Publication Details
Recurrent neural network based language model
Mikolov Tomáš (DCGM FIT BUT)
Karafiát Martin, Ing., Ph.D. (DCGM FIT BUT)
Burget Lukáš, doc. Ing., Ph.D. (DCGM FIT BUT)
Černocký Jan, prof. Dr. Ing. (DCGM FIT BUT)
Khudanpur Sanjeev (JHU)
language modeling, recurrent neural networks, speech recognition
This paper presents a new application to speech recognition: the recurrent neural network based language model (RNN LM).
A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around a 50% reduction of perplexity by using a mixture of several RNN LMs, compared to a state-of-the-art backoff language model. Speech recognition experiments show around an 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except for their high computational (training) complexity.
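The abstract describes the model only in outline. As a minimal sketch, the simple recurrent ("Elman") architecture the paper builds on can be written as below: the hidden state combines the current word with the previous hidden state through a sigmoid, and a softmax over the vocabulary predicts the next word. All sizes, weight initializations, and the toy sequence are illustrative assumptions, not values from the paper; the perplexity computation at the end mirrors the metric reported in the abstract.

import numpy as np

rng = np.random.default_rng(0)
V, H = 10, 5  # vocabulary size, hidden-layer size (toy values, not from the paper)
U = rng.normal(scale=0.1, size=(H, V))     # input-to-hidden weights
W = rng.normal(scale=0.1, size=(H, H))     # hidden-to-hidden (recurrent) weights
Vout = rng.normal(scale=0.1, size=(V, H))  # hidden-to-output weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(word_id, s_prev):
    """One forward step: returns P(next word | history) and the new hidden state.

    Feeding the one-hot word and the previous state through separate U and W
    matrices is equivalent to the paper's concatenated input layer.
    """
    x = np.zeros(V)
    x[word_id] = 1.0                 # one-hot encoding of the current word w(t)
    s = sigmoid(U @ x + W @ s_prev)  # s(t) = f(U w(t) + W s(t-1))
    y = softmax(Vout @ s)            # y(t) = g(V s(t)), distribution over next word
    return y, s

# Perplexity of a toy word-id sequence under the (untrained) model:
seq = [1, 4, 2, 7, 3]
s = np.zeros(H)
log_prob = 0.0
for cur, nxt in zip(seq[:-1], seq[1:]):
    y, s = step(cur, s)
    log_prob += np.log(y[nxt])
ppl = np.exp(-log_prob / (len(seq) - 1))
print(f"perplexity: {ppl:.2f}")

Training (backpropagation through time) and the mixture of several RNN LMs reported in the abstract are omitted; an untrained model like this one scores near the uniform-distribution perplexity of V.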
@INPROCEEDINGS{FITPUB9362,
  author    = "Tom\'{a}\v{s} Mikolov and Martin Karafi\'{a}t and Luk\'{a}\v{s} Burget and Jan \v{C}ernock\'{y} and Sanjeev Khudanpur",
  title     = "Recurrent neural network based language model",
  pages     = "1045--1048",
  booktitle = "Proceedings of the 11th Annual Conference of the International Speech Communication Association (INTERSPEECH 2010)",
  journal   = "Proceedings of Interspeech - on-line",
  volume    = 2010,
  number    = 9,
  year      = 2010,
  location  = "Makuhari, Chiba, JP",
  publisher = "International Speech Communication Association",
  ISBN      = "978-1-61782-123-3",
  ISSN      = "1990-9772",
  language  = "english",
  url       = "https://www.fit.vut.cz/research/publication/9362"
}