Publication Details
Strategies for Training Large Scale Neural Network Language Models
Mikolov Tomáš (DCGM FIT BUT)
Deoras Anoop (JHU)
Povey Daniel (JHU)
Burget Lukáš, doc. Ing., Ph.D. (DCGM FIT BUT)
Černocký Jan, prof. Dr. Ing. (DCGM FIT BUT)
recurrent neural network, language model, speech recognition, maximum entropy
Techniques for effective training of recurrent neural network-based language models are described, and new state-of-the-art results on a standard speech recognition task are reported.
We describe how to effectively train neural network-based language models on large data sets. Faster convergence during training and better overall performance are observed when the training data are sorted by their relevance. We introduce a hash-based implementation of a maximum entropy model that can be trained as part of the neural network model. This leads to a significant reduction in computational complexity. We achieved around a 10% relative reduction in word error rate on an English Broadcast News speech recognition task, compared to a large 4-gram model trained on 400M tokens.
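To make the hash-based maximum entropy idea from the abstract concrete: instead of storing an explicit dictionary of n-gram features, each (history, predicted word) pair is hashed into a fixed-size weight table, so memory stays bounded regardless of how many distinct n-grams occur in the training data. The sketch below is only an illustration under assumptions, not the authors' implementation (in the paper the maximum entropy weights are trained jointly with the recurrent neural network; here only a standalone hashed maximum entropy component is shown, and the class name, table size, feature order, and learning rate are invented for the example).

# Illustrative sketch (assumptions, not the paper's code) of a maximum entropy
# language model whose n-gram features are hashed into a fixed-size weight table.
import numpy as np

class HashedMaxEnt:
    def __init__(self, vocab_size, hash_size=2**20, order=3, lr=0.1):
        self.vocab_size = vocab_size
        self.hash_size = hash_size      # size of the shared weight table (bounds memory)
        self.order = order              # use up to (order-1) previous words as history
        self.lr = lr
        self.weights = np.zeros(hash_size)

    def _feature_indices(self, history, word):
        # Hash each (history n-gram, predicted word) pair into the shared table.
        indices = []
        for n in range(self.order):
            key = (tuple(history[-n:]) if n else (), word)
            indices.append(hash(key) % self.hash_size)
        return indices

    def logits(self, history):
        # Unnormalized log-probabilities for every word in the vocabulary.
        scores = np.empty(self.vocab_size)
        for w in range(self.vocab_size):
            scores[w] = sum(self.weights[i] for i in self._feature_indices(history, w))
        return scores

    def train_step(self, history, target):
        # One SGD step on the maximum entropy (softmax) log-likelihood.
        scores = self.logits(history)
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        for w in range(self.vocab_size):
            grad = (1.0 if w == target else 0.0) - probs[w]
            for i in self._feature_indices(history, w):
                self.weights[i] += self.lr * grad

Under the same assumptions, usage would look like model = HashedMaxEnt(vocab_size=10000) followed by model.train_step(history=[4, 7], target=42). Hash collisions trade a small amount of modeling accuracy for a fixed memory footprint, which is the source of the complexity reduction the abstract refers to.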
@INPROCEEDINGS{FITPUB9775,
   author = "Tom\'{a}\v{s} Mikolov and Anoop Deoras and Daniel Povey and Luk\'{a}\v{s} Burget and Jan \v{C}ernock\'{y}",
   title = "Strategies for Training Large Scale Neural Network Language Models",
   pages = "196--201",
   booktitle = "Proceedings of ASRU 2011",
   year = 2011,
   location = "Hilton Waikoloa Village, Big Island, Hawaii, US",
   publisher = "IEEE Signal Processing Society",
   ISBN = "978-1-4673-0366-8",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/9775"
}