Publication Details
Speech and Language Recognition with Low-rank Adaptation of Pretrained Models
PRASAD, A.
MADIKERI, S.
KHALIL, D.
MOTLÍČEK, P., doc. Ing., Ph.D. (DCGM)
SCHUEPBACH, C.
parameter reduction, language identification, speech recognition, wav2vec2.0
Finetuning large pretrained models demands considerable computational resources, posing practical constraints. The majority of the parameters in these models are used by fully connected layers. In this work, we show that applying a semi-orthogonal constraint to the fully connected layers, followed by full finetuning, reduces the number of model parameters significantly without sacrificing efficacy in downstream tasks. Specifically, we consider the wav2vec2.0 XLS-R and Whisper models for Automatic Speech Recognition and Language Recognition. Our results show that we can reduce the model size by approximately 24% during both training and inference, with a 0.7% absolute drop in performance for XLS-R and no drop in performance for
Whisper for ASR. In combination with parameter-efficient training using low-rank adapters, the resource requirements for training can be further reduced by up to 90%.
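
For illustration only (not code from the paper): a minimal PyTorch sketch of the two ideas described in the abstract, a fully connected layer factorized through a low-rank bottleneck whose projection is pushed toward semi-orthogonality, and a LoRA-style low-rank adapter added to a frozen pretrained linear layer. The layer sizes, rank, scaling factor alpha, and update constant nu are illustrative assumptions, not values from the paper.

import torch
import torch.nn as nn


class FactorizedLinear(nn.Module):
    """y = M(N(x)): N projects into a low-rank bottleneck, M expands back."""

    def __init__(self, in_dim: int, out_dim: int, bottleneck: int):
        super().__init__()
        self.N = nn.Linear(in_dim, bottleneck, bias=False)  # constrained factor
        self.M = nn.Linear(bottleneck, out_dim, bias=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.M(self.N(x))

    @torch.no_grad()
    def semi_orthogonal_step(self, nu: float = 0.125) -> None:
        # One update of W <- W - nu * (W W^T - I) W, which nudges the
        # bottleneck projection toward W W^T = I (a semi-orthogonal matrix).
        W = self.N.weight                                   # (bottleneck, in_dim)
        P = W @ W.t() - torch.eye(W.size(0), device=W.device, dtype=W.dtype)
        W.sub_(nu * (P @ W))


class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():                    # freeze pretrained weights
            p.requires_grad_(False)
        self.A = nn.Linear(base.in_features, rank, bias=False)
        self.B = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.B.weight)                       # adapter starts as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.B(self.A(x))


if __name__ == "__main__":
    x = torch.randn(4, 1024)
    fc = FactorizedLinear(1024, 4096, bottleneck=256)
    fc.semi_orthogonal_step()        # during training, call this periodically
    lora = LoRALinear(nn.Linear(1024, 1024), rank=8)
    print(fc(x).shape, lora(x).shape)

With such a factorization, a 1024 x 4096 layer (about 4.2M weights) becomes 1024 x 256 plus 256 x 4096 (about 1.3M weights); the overall reduction reported in the abstract (approximately 24%) depends on which layers are factorized and the bottleneck size chosen. The LoRA branch adds only rank x (in_dim + out_dim) trainable weights on top of the frozen base, which is the source of the reduced training resource requirements.
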
@inproceedings{BUT193370,
author="PRASAD, A. and MADIKERI, S. and KHALIL, D. and MOTLÍČEK, P. and SCHUEPBACH, C.",
title="Speech and Language Recognition with Low-rank Adaptation of Pretrained Models",
booktitle="Proceedings of Interspeech",
year="2024",
journal="Proceedings of Interspeech",
volume="2024",
number="9",
pages="2825--2829",
publisher="International Speech Communication Association",
address="Kos Island",
doi="10.21437/Interspeech.2024-2187",
issn="1990-9772",
url="https://www.isca-archive.org/interspeech_2024/prasad24_interspeech.html"
}