CharFormer: A Glyph Fusion based Attentive Framework for High-precision Character Image Denoising
Author: Hao Xu, Jilin University
Conference: ACM International Conference on Multimedia (ACM MM 2022)
Figure 1: Character image denoising examples. (a) Character images with different degradation. (b) Correct denoising results. (c) Denoising results with incorrect glyphs, where the incorrect parts of glyphs are highlighted in red boxes.
ABSTRACT
- Current methods focus only on pixel-level information and ignore critical character features such as the glyph, resulting in glyph damage during the denoising process.
- We introduce a novel generic framework based on glyph fusion and attention mechanisms for precisely recovering character images without changing their inherent glyphs.
- CharFormer introduces a parallel target task that captures additional information and injects it into the image-denoising backbone, maintaining the consistency of character glyphs during character image denoising.
- Moreover, we utilize attention-based networks for global-local feature interaction, which helps handle blind denoising and enhances denoising performance.
PROPOSED CHARFORMER
Figure 2: The overall architecture of the proposed CharFormer, where different blocks and layers are distinguished by colours.
Framework Architecture
1. Input Projector.
- The input projector applies a 3 × 3 convolution layer with LeakyReLU, aiming to extract shallow features from the input character image (a minimal sketch follows).
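A minimal PyTorch sketch of this projector; the channel counts (1-channel character image, 64-channel embedding) and the LeakyReLU slope are assumptions, not values from the paper:

```python
import torch.nn as nn

class InputProjector(nn.Module):
    """3x3 conv + LeakyReLU that lifts the input image into shallow features."""
    def __init__(self, in_ch=1, embed_ch=64):  # channel sizes are assumptions
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_ch, embed_ch, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )

    def forward(self, x):    # x: (B, in_ch, H, W)
        return self.proj(x)  # (B, embed_ch, H, W)
```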
2. Deep Feature Extractor.
Figure 3: Details of CFB connections for demonstrating how each component of CFB works and the feature transformation procedure.
- The deep feature extractor adopts a U-shaped encoder-decoder structure composed of $N$ CharFormer blocks (CFBs).
- Note that each CFB organizes two components in parallel, RSAB and GSNB, to learn denoising information and glyph features, respectively, as Figure 3 shows; a schematic sketch follows.
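A schematic sketch of one CFB, composing the RSAB, GSNB, and fused-attention modules sketched later in these notes; the exact fusion wiring is an assumption read off Figure 3 at a high level:

```python
import torch.nn as nn

class CharFormerBlock(nn.Module):
    """Two parallel streams: RSAB (denoising) and GSNB (glyph structure)."""
    def __init__(self, ch):
        super().__init__()
        self.rsab = RSAB(ch)            # denoising branch (sketched below)
        self.gsnb = GSNB(ch)            # glyph branch (sketched below)
        self.fuse = FusedAttention(ch)  # fused attention (sketched below)

    def forward(self, f_denoise, f_glyph):
        f_denoise = self.rsab(f_denoise)
        f_glyph = self.gsnb(f_glyph)
        # inject glyph features into the denoising stream via fused attention
        f_denoise = self.fuse(f_denoise + f_glyph)
        return f_denoise, f_glyph
```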
3. Output Projector.
- The output feature map of the deep feature extractor consists of two parts due to the specific structure of the CFBs; $F_{REC}$ is fed into the output projector for clean character reconstruction.
- For pixel-level image reconstruction, we introduce the pixel loss for the reconstructed character image $I_R$ as:
- The perceptual loss $L_p(\cdot)$ considers feature-level comparison and the global discrepancy.
- Based on a VGG16 model $VGG(\cdot)$ pretrained on the ImageNet dataset, we define the perceptual loss for $I_R$ as:
(cf. Johnson et al., "Perceptual Losses for Real-Time Style Transfer and Super-Resolution", ECCV 2016)
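The formulas themselves are not reproduced in these notes; the sketch below uses L1 distances and VGG16 relu3_3 features as one common realization (the norm, layer choice, and normalization are assumptions). VGG16 expects 3-channel input, so grayscale characters would need their channel repeated:

```python
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    """L_p: distance between pretrained VGG16 feature maps of I_R and I_GT."""
    def __init__(self, layer_idx=16):  # features[:16] ends at relu3_3
        super().__init__()
        feats = vgg16(pretrained=True).features[:layer_idx].eval()
        for p in feats.parameters():
            p.requires_grad = False    # VGG16 stays frozen
        self.feats = feats
        self.criterion = nn.L1Loss()

    def forward(self, pred, target):
        return self.criterion(self.feats(pred), self.feats(target))

pixel_loss = nn.L1Loss()            # L_pix(I_R, I_GT), assumed L1
perceptual_loss = PerceptualLoss()  # L_p(I_R, I_GT)
```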
4. Additional Feature Corrector.
- By inputting the ground truth of the noisy character image $I_{GT}$, we obtain the skeletonized binary image $I_{GT_S} \in \mathbb{R}^{H \times W}$ (see the skeletonization sketch after this list).
- We utilize the same loss functions in this module as in the output projector; thus, we have:
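A sketch of producing the skeleton target $I_{GT_S}$ from a clean ground-truth image, using scikit-image's skeletonize; the binarization threshold and stroke polarity are assumptions:

```python
import numpy as np
from skimage.morphology import skeletonize

def glyph_skeleton(i_gt: np.ndarray) -> np.ndarray:
    """i_gt: (H, W) grayscale in [0, 1] -> binary skeleton in {0, 1}."""
    strokes = i_gt < 0.5  # assume dark glyph on a light background
    return skeletonize(strokes).astype(np.float32)
```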
CharFormer Block
1. Residual Self-Attention Block.
- We aim to take advantage of the self-attention mechanism to capture long-range spatial dependencies, and apply a regular convolution layer to improve the translational equivariance of the network (a rough sketch follows).
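A rough sketch of such a block: multi-head self-attention over all spatial positions plus a residual 3 × 3 convolution. This is an illustrative reading of the description (full, non-windowed attention for simplicity), not the authors' exact block:

```python
import torch.nn as nn

class RSAB(nn.Module):
    """Residual self-attention block: attention residual + conv residual."""
    def __init__(self, ch, heads=4):  # head count is an assumption
        super().__init__()
        self.norm = nn.LayerNorm(ch)
        self.attn = nn.MultiheadAttention(ch, heads, batch_first=True)
        self.conv = nn.Conv2d(ch, ch, kernel_size=3, padding=1)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        t = self.norm(x.flatten(2).transpose(1, 2))    # (B, H*W, C)
        t, _ = self.attn(t, t, t)                      # long-range dependencies
        x = x + t.transpose(1, 2).reshape(b, c, h, w)  # attention residual
        return x + self.conv(x)                        # translational equivariance
```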
2. Glyph Structural Network Block.
- GSNB is designed to extract the glyph information that is injected into the backbone RSABs; it stacks 3 × 3 convolution layers in a residual block (sketched below).
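A sketch under that reading; the stack depth and activation are assumptions:

```python
import torch.nn as nn

class GSNB(nn.Module):
    """Glyph structural network block: stacked 3x3 convs in a residual block."""
    def __init__(self, ch, depth=2):  # depth is an assumption
        super().__init__()
        layers = []
        for _ in range(depth):
            layers += [nn.Conv2d(ch, ch, kernel_size=3, padding=1),
                       nn.LeakyReLU(0.2, inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.body(x)  # residual glyph features
```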
3. Fused Attention.
- Inspired by CBAM (Convolutional Block Attention Module): given an input feature $F_i$, the output $FA(F_i)$ of the fused attention layer is computed from the channel attention $M_c$ and the spatial attention $M_s$ (a CBAM-style sketch follows).
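A CBAM-style sketch, applying channel attention $M_c$ and then spatial attention $M_s$ multiplicatively; the sequential channel-then-spatial ordering follows CBAM and is assumed here rather than confirmed by these notes:

```python
import torch
import torch.nn as nn

class FusedAttention(nn.Module):
    """CBAM-style fused attention: F' = M_c(F) * F, then M_s(F') * F'."""
    def __init__(self, ch, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(  # shared MLP for the channel attention M_c
            nn.Conv2d(ch, ch // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)  # M_s

    def forward(self, f):  # f: (B, C, H, W)
        avg = self.mlp(f.mean(dim=(2, 3), keepdim=True))  # avg-pooled descriptor
        mx = self.mlp(f.amax(dim=(2, 3), keepdim=True))   # max-pooled descriptor
        f = f * torch.sigmoid(avg + mx)                   # channel attention M_c
        s = torch.cat([f.mean(1, keepdim=True),
                       f.amax(1, keepdim=True)], dim=1)
        return f * torch.sigmoid(self.spatial(s))         # spatial attention M_s
```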
4. Overall Loss Function
Finally, we define the overall loss function for CharFormer as:
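The equation is not reproduced above; a plausible form, summing the output-projector losses and the corrector losses with assumed weights $\lambda_i$ (writing $I_S$ for the predicted skeleton, a hypothetical notation), is:

$$\mathcal{L} = \mathcal{L}_{pix}(I_R) + \lambda_1 \mathcal{L}_{p}(I_R) + \lambda_2 \left( \mathcal{L}_{pix}(I_S) + \lambda_1 \mathcal{L}_{p}(I_S) \right)$$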
EXPERIMENTS
- Thus, we obtain Dataset1, which involves noisy raw images with uneven backgrounds.
- We also generate Dataset2 by adding mixed Gaussian and speckle noise (noise variance $\sigma = 5$) to the ground truth of document-level images.
- Dataset3 simulates a blind-denoising scenario by randomly adding mixed Gaussian and speckle noise (noise variance $\sigma \in [10, 50]$) to the ground truth of the character-level images, where the ground-truth images are manually annotated by five philologists.
- We define the noisy raw character images as Dataset4 to provide cases with complex noise (a noise-synthesis sketch follows).
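A sketch of the synthetic degradation used for Dataset2/Dataset3, mixing additive Gaussian and multiplicative speckle noise; interpreting the paper's "noise variance $\sigma$" as a standard-deviation scale on the 0-255 intensity range is an assumption:

```python
import numpy as np

def add_mixed_noise(img: np.ndarray, sigma: float, rng=None) -> np.ndarray:
    """img: (H, W) uint8 clean image -> uint8 image with mixed noise."""
    rng = rng or np.random.default_rng()
    gaussian = rng.normal(0.0, sigma, img.shape)               # additive
    speckle = img * rng.normal(0.0, sigma / 255.0, img.shape)  # multiplicative
    return np.clip(img + gaussian + speckle, 0, 255).astype(np.uint8)

# Dataset2-style fixed level vs. Dataset3-style blind setting:
# noisy = add_mixed_noise(clean, sigma=5)
# noisy = add_mixed_noise(clean, sigma=np.random.uniform(10, 50))
```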
Qualitative Comparisons
Validation of Glyph Information Extraction.
The first two images show the noisy character image and its ground truth, and the last image shows the character skeleton; we can see that the glyphs are extracted properly.