Publication Details
TG2: text-guided transformer GAN for restoring document readability and perceived quality
Generative adversarial networks, Attention neural networks, Textual document restoration, Text inpainting
Most image enhancement methods aimed at restoring digitized textual documents are limited to cases where the text information is still preserved in the input image, which is often not the case. In this work, we propose a novel generative document restoration method that allows conditioning the restoration on a guiding signal in the form of a target text transcription and that does not need paired high- and low-quality images for training. We introduce a neural network architecture with an implicit text-to-image alignment module. We demonstrate good results on inpainting, debinarization, and deblurring tasks, and we show that the trained models can be used to manually alter text in document images. A user study shows that human observers confuse the outputs of the proposed enhancement method with reference high-quality images in as many as 30% of cases.
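The abstract describes conditioning the generator on a target transcription through an implicit text-to-image alignment module. The snippet below is a minimal, hypothetical sketch of one common way such alignment can be realized: image patch features attend to character embeddings of the transcription via cross-attention. All names, shapes, and hyperparameters are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class TextImageCrossAttention(nn.Module):
    """Sketch: image patch features attend to transcription tokens so each
    image region can implicitly align with the characters it should depict."""
    def __init__(self, dim=256, num_heads=4, vocab_size=128):
        super().__init__()
        self.char_embed = nn.Embedding(vocab_size, dim)          # text token embeddings
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, image_tokens, text_ids):
        # image_tokens: (B, N_patches, dim); text_ids: (B, N_chars)
        text_tokens = self.char_embed(text_ids)                  # (B, N_chars, dim)
        attended, _ = self.attn(query=image_tokens,
                                key=text_tokens,
                                value=text_tokens)               # implicit alignment step
        return self.norm(image_tokens + attended)                # residual update

# Toy usage: a 4x4 grid of patch features conditioned on a 12-character string.
block = TextImageCrossAttention()
feats = torch.randn(1, 16, 256)
chars = torch.randint(0, 128, (1, 12))
out = block(feats, chars)   # (1, 16, 256)

In a GAN generator such a block would typically sit between encoder and decoder stages, so the reconstructed glyphs follow the guiding transcription rather than the degraded pixels alone.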
@ARTICLE{FITPUB12333,
  author    = "Old\v{r}ich Kodym and Michal Hradi\v{s}",
  title     = "TG2: text-guided transformer GAN for restoring document readability and perceived quality",
  journal   = "International Journal on Document Analysis and Recognition (IJDAR)",
  volume    = 2021,
  number    = 1,
  pages     = "1--14",
  year      = 2021,
  publisher = "Springer Verlag",
  ISSN      = "1433-2825",
  doi       = "10.1007/s10032-021-00387-z",
  language  = "english",
  url       = "https://www.fit.vut.cz/research/publication/12333"
}