Publication Details

Self-supervised Pre-training of Text Recognizers

KIŠŠ Martin and HRADIŠ Michal. Self-supervised Pre-training of Text Recognizers. In: Barney Smith, E.H., Liwicki, M., Peng, L. (eds) Document Analysis and Recognition - ICDAR 2024. Lecture Notes in Computer Science, vol. 14807. Athens: Springer Nature Switzerland AG, 2024, pp. 218-235. ISBN 978-3-031-70545-8. Available from: https://link.springer.com/chapter/10.1007/978-3-031-70546-5_13
Czech title
Self-supervised předtrénování rozpoznávačů textu
Type
conference paper
Language
English
Authors
Kišš Martin, Ing. (DCGM FIT BUT)
Hradiš Michal, Ing., Ph.D. (DCGM FIT BUT)
URL
https://link.springer.com/chapter/10.1007/978-3-031-70546-5_13
Keywords

Self-supervised learning, Text Recognition, Pre-training, OCR, HTR

Abstract

In this paper, we investigate self-supervised pre-training methods for document text recognition. Nowadays, large unlabeled datasets can be collected for many research tasks, including text recognition, but annotating them is costly; methods that utilize unlabeled data are therefore actively researched. We study self-supervised pre-training methods based on masked label prediction using three different approaches: Feature Quantization, VQ-VAE, and Post-Quantized AE. We also investigate joint-embedding approaches with VICReg and NT-Xent objectives, for which we propose an image shifting technique that prevents a model collapse in which the model relies solely on positional encoding while completely ignoring the input image. We perform our experiments on historical handwritten (Bentham) and historical printed datasets, mainly to investigate the benefits of the self-supervised pre-training techniques with different amounts of annotated target-domain data. We use transfer learning as a strong baseline. The evaluation shows that self-supervised pre-training on data from the target domain is very effective, but it struggles to outperform transfer learning from closely related domains. This paper is among the first studies exploring self-supervised pre-training in document text recognition, and we believe it will become a cornerstone for future research in this area. We made our implementation of the investigated methods publicly available at https://github.com/DCGM/pero-pretraining.
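The record gives no implementation detail beyond the abstract, but a minimal, non-authoritative sketch can illustrate how the joint-embedding branch might be set up: two horizontally shifted crops of a text-line image (in the spirit of the paper's image shifting technique) scored with the standard NT-Xent objective. PyTorch, the function names, and the shift range are assumptions of this sketch, not details taken from the paper; the authors' actual code is at https://github.com/DCGM/pero-pretraining.

import random

import torch
import torch.nn.functional as F


def shifted_views(line_image, max_shift=32):
    """Return two horizontally shifted crops of a text-line image tensor
    of shape (C, H, W). Because the two views start at different absolute
    positions, the encoder cannot match them via positional encoding
    alone. The shift range here is an assumption of this sketch."""
    _, _, w = line_image.shape
    width = w - max_shift
    s1 = random.randint(0, max_shift)
    s2 = random.randint(0, max_shift)
    return line_image[:, :, s1:s1 + width], line_image[:, :, s2:s2 + width]


def nt_xent_loss(z1, z2, temperature=0.1):
    """SimCLR-style NT-Xent loss over embeddings z1, z2 of shape (N, D),
    where z1[i] and z2[i] come from the two views of the same line."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D)
    sim = z @ z.t() / temperature                       # scaled cosine sims
    sim.fill_diagonal_(float("-inf"))                   # exclude self-pairs
    # The positive for row i is its counterpart in the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

In a training loop, both crops would pass through the same encoder, and the resulting embeddings would be fed to nt_xent_loss; since the crops start at different offsets, positional encoding alone cannot make the two embeddings agree.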

Published
2024
Pages
218-235
Proceedings
Barney Smith, E.H., Liwicki, M., Peng, L. (eds) Document Analysis and Recognition - ICDAR 2024
Series
Lecture Notes in Computer Science
Volume
14807
Conference
International Conference on Document Analysis and Recognition, Athens, Greece
ISBN
978-3-031-70545-8
Publisher
Springer Nature Switzerland AG
Place
Athens, GR
DOI
10.1007/978-3-031-70546-5_13
BibTeX
@INPROCEEDINGS{FITPUB13208,
   author = "Martin Ki\v{s}\v{s} and Michal Hradi\v{s}",
   title = "Self-supervised Pre-training of Text Recognizers",
   pages = "218--235",
   booktitle = "Barney Smith, E.H., Liwicki, M., Peng, L. (eds) Document Analysis and Recognition - ICDAR 2024",
   series = "Lecture Notes in Computer Science",
   volume = 14807,
   year = 2024,
   location = "Athens, GR",
   publisher = "Springer Nature Switzerland AG",
   ISBN = "978-3-031-70545-8",
   doi = "10.1007/978-3-031-70546-5_13",
   language = "english",
   url = "https://www.fit.vut.cz/research/publication/13208"
}