
Multi-Task Deep Learning for Image Fusion and Segmentation

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: closed (31 January 2022) | Viewed by 9419

Special Issue Editors

Dr. Timothy J. Doster
Pacific Northwest National Laboratory, 1100 Dexter Avenue North, Suite 500, Seattle, WA 98109, USA
Interests: few-shot learning; remote sensing/hyperspectral imaging; natural language processing; AI safety and security; equivariant architectures and applications of harmonic analysis to problems in machine learning
Dr. Brian Alan Johnson
Natural Resources and Ecosystem Services, Institute for Global Environmental Strategies, Hayama 240-0115, Japan
Interests: geographic information systems (GIS); remote sensing; spatial modeling; data mining for urban and environmental analysis and planning; mapping urban land cover (green space, impervious surfaces, etc.); monitoring forest health using fine-resolution satellite imagery
Dr. Lei Ma
Department of Geographic Information Science, Nanjing University, Nanjing 210046, China
Interests: remote sensing image information extraction; object-based image analysis (OBIA); machine learning or data mining with applications in remote sensing and geospatial analysis

Special Issue Information

Dear Colleagues,

Typically, a deep learning model is trained to perform a single task with high accuracy, for example, classifying images. Multi-task deep learning is a machine learning technique in which a deep model is trained to perform several tasks (e.g., classify an image, segment out the object, and predict the depth) with different metrics and a collection of shared representations. By training the model across several related tasks, the model develops features that are less prone to overfitting on the training data and therefore generalize better. This technique has shown great success in image and textual analysis. In this Special Issue, we consider the applicability of this technique to problems arising in remote sensing, such as scene segmentation, image fusion, image registration, object detection, super-resolution, and anomaly detection.
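The shared-representation idea described above is often realized by "hard parameter sharing": one backbone feeds several task-specific heads, and the per-task losses are summed. The following minimal numpy sketch (toy data, randomly initialized weights, a single shared layer, and two illustrative heads; no training loop) is an assumption-laden illustration of that structure, not any specific published architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 32 samples of 16-dim "image features", with labels for two tasks.
X = rng.normal(size=(32, 16))
y_cls = rng.integers(0, 3, size=32)   # task 1: 3-way classification
y_reg = rng.normal(size=(32, 1))      # task 2: scalar regression (e.g., depth)

# Hard parameter sharing: one shared layer, one head per task.
W_shared = rng.normal(scale=0.1, size=(16, 8))
W_cls = rng.normal(scale=0.1, size=(8, 3))
W_reg = rng.normal(scale=0.1, size=(8, 1))

def forward(X):
    h = np.maximum(X @ W_shared, 0.0)       # shared ReLU features
    return h, h @ W_cls, h @ W_reg          # features, class logits, regression

def multitask_loss(logits, preds):
    # Cross-entropy for the classification head plus MSE for the
    # regression head; gradients of this sum would update W_shared
    # from both tasks at once.
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_p[np.arange(len(y_cls)), y_cls].mean()
    mse = ((preds - y_reg) ** 2).mean()
    return ce + mse

h, logits, preds = forward(X)
loss = multitask_loss(logits, preds)
print(float(loss))
```

Because both task losses backpropagate through `W_shared`, the shared features are pulled toward representations useful for every task, which is the regularization effect the paragraph above attributes to multi-task learning.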

Dr. Timothy J. Doster
Dr. Brian Alan Johnson
Dr. Lei Ma
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multi-task learning
  • Image fusion
  • Pansharpening
  • Segmentation
  • Image registration
  • Parameter sharing
  • Remote sensing
  • Convolutional neural networks
  • Domain adaptation

Published Papers (2 papers)


Research


26 pages, 6129 KiB  
Article
MSDRN: Pansharpening of Multispectral Images via Multi-Scale Deep Residual Network
by Wenqing Wang, Zhiqiang Zhou, Han Liu and Guo Xie
Remote Sens. 2021, 13(6), 1200; https://doi.org/10.3390/rs13061200 - 21 Mar 2021
Cited by 21 | Viewed by 2634
Abstract
Pansharpening, a classic and active image fusion topic, aims to produce a high-resolution multispectral (HRMS) image with the same spectral resolution as the multispectral (MS) image and the same spatial resolution as the panchromatic (PAN) image, and it has been widely researched. Prior works have introduced various pansharpening methods based on convolutional neural networks (CNNs) with different architectures. However, these methods do not consider the different scale information of the source images, which may lead to the loss of high-frequency details in the fused image. This paper proposes a pansharpening method for MS images via a multi-scale deep residual network (MSDRN). The proposed method constructs a multi-level network to make better use of the scale information of the source images. Moreover, residual learning is introduced into the network to further improve feature extraction and simplify the learning process. A series of experiments were conducted on the QuickBird and GeoEye-1 datasets. The experimental results demonstrate that MSDRN achieves superior or competitive fusion performance relative to state-of-the-art methods in both visual and quantitative evaluation.
(This article belongs to the Special Issue Multi-Task Deep Learning for Image Fusion and Segmentation)
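The residual formulation mentioned in the abstract can be illustrated compactly: the fused image is the upsampled MS image plus an injected high-frequency residual. The numpy sketch below does not reproduce MSDRN (whose residual is predicted by a learned multi-scale CNN); as a stand-in assumption, it uses the PAN image's high-pass component as a hand-crafted residual, with nearest-neighbour upsampling and a box filter as placeholder operators:

```python
import numpy as np

def upsample(ms, scale):
    # Nearest-neighbour upsampling of an (H, W, B) multispectral image.
    return ms.repeat(scale, axis=0).repeat(scale, axis=1)

def box_blur(img, k=3):
    # Simple box filter used here as a low-pass surrogate.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def pansharpen_residual(ms, pan, scale=4):
    # Residual formulation: fused = upsampled MS + high-frequency residual.
    # In MSDRN the residual is predicted by a network; here the PAN
    # high-pass component serves as a hand-crafted surrogate.
    up = upsample(ms, scale).astype(float)
    residual = pan - box_blur(pan)      # high-frequency detail of PAN
    return up + residual[..., None]     # inject the detail into every band

rng_ms, rng_pan = np.random.default_rng(1), np.random.default_rng(2)
ms = rng_ms.uniform(size=(16, 16, 4))    # low-res MS image, 4 bands
pan = rng_pan.uniform(size=(64, 64))     # high-res PAN image
fused = pansharpen_residual(ms, pan)
print(fused.shape)  # (64, 64, 4)
```

Learning only the residual, rather than the full HRMS image, is what the abstract credits with simplifying the learning process: the network output can stay small because the upsampled MS image already carries the low-frequency content.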

Review


31 pages, 3836 KiB  
Review
Multi-Exposure Image Fusion Techniques: A Comprehensive Review
by Fang Xu, Jinghong Liu, Yueming Song, Hui Sun and Xuan Wang
Remote Sens. 2022, 14(3), 771; https://doi.org/10.3390/rs14030771 - 7 Feb 2022
Cited by 24 | Viewed by 5703
Abstract
Multi-exposure image fusion (MEF) is emerging as a research hotspot in image processing and computer vision; it integrates images captured at multiple exposure levels into a single, well-exposed image of high quality. It is an economical and effective way to improve the dynamic range of an imaging system and has broad application prospects. In recent years, with the further development of image representation theories such as multi-scale analysis and deep learning, significant progress has been achieved in this field. This paper comprehensively surveys the current research status of MEF methods. The relevant theories and key technologies for constructing MEF models are analyzed and categorized, and the representative MEF methods in each category are introduced and summarized. Then, based on multi-exposure image sequences in static and dynamic scenes, we present a comparative study of 18 representative MEF approaches using nine commonly used objective fusion metrics. Finally, the key issues of current MEF research are discussed, and development trends for future research are put forward.
(This article belongs to the Special Issue Multi-Task Deep Learning for Image Fusion and Segmentation)
