Article

S2A: Scale-Attention-Aware Networks for Video Super-Resolution

1 College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
2 Tsinghua Shenzhen International Graduate School, Tsinghua University, Beijing 100084, China
* Author to whom correspondence should be addressed.
Academic Editor: Amelia Carolina Sparavigna
Received: 13 September 2021 / Revised: 17 October 2021 / Accepted: 19 October 2021 / Published: 25 October 2021
(This article belongs to the Special Issue Advances in Image Fusion)
Convolutional Neural Networks (CNNs) have been widely used in video super-resolution (VSR). Most existing VSR methods focus on how to exploit information from multiple frames while neglecting correlations among intermediate features, which limits the representational power of the models. To address this problem, we propose a novel Scale-Attention-Aware (S2A) network that applies different attention to streams of different temporal lengths, and further explores both spatial and channel attention on separate streams with a newly proposed Criss-Cross Channel Attention Module (C3AM). Experiments on public VSR datasets demonstrate the superiority of our method over other state-of-the-art methods in both quantitative and qualitative terms.
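The channel-attention idea underlying the module described above can be sketched as follows. This is a minimal squeeze-and-excitation-style illustration in NumPy, not the paper's exact C3AM (which additionally involves criss-cross spatial attention); the function name, reduction ratio `r`, and randomly initialized weights `w1`/`w2` are assumptions for the sketch.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Illustrative channel attention: squeeze, excite, rescale.

    feat: (C, H, W) feature map
    w1:   (C // r, C) bottleneck projection
    w2:   (C, C // r) expansion projection
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    s = feat.mean(axis=(1, 2))
    # Excitation: bottleneck MLP with ReLU, then sigmoid gating
    z = np.maximum(w1 @ s, 0.0)
    a = 1.0 / (1.0 + np.exp(-(w2 @ z)))  # per-channel weights in (0, 1)
    # Rescale each channel of the input feature map by its attention weight
    return feat * a[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

In a VSR network such as the one described, a gating of this kind would be applied per stream so that features aggregated over different temporal lengths are reweighted independently before fusion.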
Keywords: scale-and-attention-aware; criss-cross channel attention; video super-resolution
MDPI and ACS Style

Guo, T.; Dai, T.; Liu, L.; Zhu, Z.; Xia, S.-T. S2A: Scale-Attention-Aware Networks for Video Super-Resolution. Entropy 2021, 23, 1398. https://doi.org/10.3390/e23111398

AMA Style

Guo T, Dai T, Liu L, Zhu Z, Xia S-T. S2A: Scale-Attention-Aware Networks for Video Super-Resolution. Entropy. 2021; 23(11):1398. https://doi.org/10.3390/e23111398

Chicago/Turabian Style

Guo, Taian, Tao Dai, Ling Liu, Zexuan Zhu, and Shu-Tao Xia. 2021. "S2A: Scale-Attention-Aware Networks for Video Super-Resolution" Entropy 23, no. 11: 1398. https://doi.org/10.3390/e23111398

