Publication Details
Parameter-Efficient Tuning With Adaptive Bottlenecks For Automatic Speech Recognition
Vanderreydt Geoffroy (IDLab - imec)
Prasad Amrutha (DCGM FIT BUT)
Khalil Driss (IDIAP)
Madikeri Srikanth (IDIAP)
Demuynck Kris (IDLab - imec)
Motlíček Petr, doc. Ing., Ph.D. (DCGM FIT BUT)
Keywords: ASR, XLSR, Adapters, ATC
Transfer learning from large multilingual pretrained models, such as XLSR, has become the new paradigm for Automatic Speech Recognition (ASR). Given their ever-increasing size, fine-tuning all of their weights is impractical when the computing budget is limited. Adapters are lightweight trainable modules inserted between the layers of a pretrained model, which is otherwise kept frozen. They provide a parameter-efficient fine-tuning method but still require a large bottleneck size to match full fine-tuning performance. In this paper, we propose ABSADAPTER, a method that further reduces the parameter budget at equal task performance. Specifically, ABSADAPTER uses an Adaptive Bottleneck Scheduler to redistribute the adapter weights to the layers that need adaptation the most. By training only 8% of the XLSR model's weights, ABSADAPTER achieves performance close to that of standard fine-tuning on a domain-shifted Air-Traffic Communication (ATC) ASR task.
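For intuition, below is a minimal PyTorch sketch of a residual bottleneck adapter with a non-uniform, per-layer bottleneck size. The BottleneckAdapter class and the specific bottleneck sizes are hypothetical illustrations of the general idea; the abstract does not specify the ABSADAPTER implementation or how its scheduler allocates the budget.

    # Minimal sketch, assuming standard bottleneck adapters (down-project,
    # nonlinearity, up-project, residual) attached to frozen layers.
    # The per-layer bottleneck sizes are an assumed, hypothetical allocation,
    # not the output of the paper's Adaptive Bottleneck Scheduler.
    import torch
    import torch.nn as nn

    class BottleneckAdapter(nn.Module):
        """Residual adapter inserted after a frozen transformer layer."""
        def __init__(self, d_model: int, bottleneck: int):
            super().__init__()
            self.down = nn.Linear(d_model, bottleneck)
            self.up = nn.Linear(bottleneck, d_model)
            self.act = nn.GELU()
            # Zero-init the up-projection so training starts from the
            # pretrained model's behavior (adapter output is initially 0).
            nn.init.zeros_(self.up.weight)
            nn.init.zeros_(self.up.bias)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x + self.up(self.act(self.down(x)))

    d_model = 1024  # hidden size of the large wav2vec 2.0 / XLSR encoders
    # Hypothetical non-uniform budget: more capacity on the layers assumed
    # to need the most adaptation, under a fixed total parameter budget.
    bottlenecks = [32, 32, 64, 128, 256, 128, 64, 32]
    adapters = nn.ModuleList(BottleneckAdapter(d_model, b) for b in bottlenecks)

    x = torch.randn(2, 50, d_model)  # (batch, frames, features)
    for adapter in adapters:
        x = adapter(x)               # frozen encoder layer output would feed in here
    print(x.shape)                   # torch.Size([2, 50, 1024])

With a fixed total budget, varying the bottleneck per layer (rather than using one uniform size everywhere) is what lets capacity be concentrated where it helps most, which is the redistribution the abstract attributes to the Adaptive Bottleneck Scheduler.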
@INPROCEEDINGS{FITPUB13160,
  author    = "Geoffroy Vanderreydt and Amrutha Prasad and Driss Khalil and Srikanth Madikeri and Kris Demuynck and Petr Motl\'{i}\v{c}ek",
  title     = "Parameter-Efficient Tuning With Adaptive Bottlenecks For Automatic Speech Recognition",
  pages     = "1--7",
  booktitle = "Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)",
  year      = 2023,
  location  = "Taipei, TW",
  publisher = "IEEE Signal Processing Society",
  ISBN      = "979-8-3503-0689-7",
  doi       = "10.1109/ASRU57964.2023.10389769",
  language  = "english",
  url       = "https://www.fit.vut.cz/research/publication/13160"
}