Special Issue "Multi-Task Deep Learning for Image Fusion and Segmentation"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing Image Processing".

Deadline for manuscript submissions: 31 January 2022.

Special Issue Editors

Dr. Timothy J. Doster
Guest Editor
Pacific Northwest National Laboratory, 1100 Dexter Avenue North, Suite 500, Seattle, WA 98109 USA
Interests: few-shot learning; remote sensing/hyperspectral imaging; natural language processing; AI safety and security; equivariant architectures and applications of harmonic analysis to problems in machine learning
Dr. Brian Alan Johnson
Guest Editor
Natural Resources and Ecosystem Services, Institute for Global Environmental Strategies, Kanagawa 240-0115, Japan
Interests: Geographic Information Systems (GIS), remote sensing, spatial modeling, and data mining for urban and environmental analysis and planning; mapping urban land cover (green space, impervious surfaces, etc.) and monitoring forest health using fine resolution satellite imagery
Dr. Lei Ma
Guest Editor
Department of Geographic Information Science, Nanjing University, Nanjing, 210046, China
Interests: remote sensing image information extraction; object-based image analysis (OBIA); machine learning or data mining with applications in remote sensing and geospatial analysis

Special Issue Information

Dear Colleagues,

Typically, a deep learning model is trained to perform a single task with high accuracy, for example, classifying images. Multi-task deep learning is a machine learning technique in which a deep model is trained to perform several tasks (e.g., classifying an image, segmenting out objects, and predicting depth), each with its own metric, using a collection of shared representations. By training across several related tasks, the model develops features that are less prone to overfitting the training data, and it thus generalizes better. This technique has shown great success in image and text analysis. In this Special Issue, we consider the applicability of this technique to problems arising in remote sensing, such as scene segmentation, image fusion, image registration, object detection, super-resolution, and anomaly detection.

Dr. Timothy J. Doster
Dr. Brian Alan Johnson
Dr. Lei Ma
Guest Editors
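
As a concrete illustration of the shared-representation idea described above, the sketch below builds one shared encoder with two task-specific heads and sums the per-task losses. It is a minimal example in PyTorch; the layer sizes and the choice of classification and segmentation as the two tasks are illustrative assumptions, not a template prescribed by this Special Issue.

    # Minimal multi-task model: one shared encoder, two task-specific heads.
    # All sizes and tasks here are illustrative assumptions.
    import torch
    import torch.nn as nn

    class MultiTaskNet(nn.Module):
        def __init__(self, num_classes: int = 10):
            super().__init__()
            # Shared representation: every task head reads these features.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            )
            # Task 1: image classification (global pooling + linear layer).
            self.cls_head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
            )
            # Task 2: per-pixel segmentation (1x1 convolution over shared features).
            self.seg_head = nn.Conv2d(64, num_classes, kernel_size=1)

        def forward(self, x):
            feats = self.encoder(x)
            return self.cls_head(feats), self.seg_head(feats)

    model = MultiTaskNet()
    x = torch.randn(2, 3, 64, 64)
    cls_target = torch.randint(0, 10, (2,))
    seg_target = torch.randint(0, 10, (2, 64, 64))
    cls_out, seg_out = model(x)
    # Each task keeps its own loss; both gradients update the shared encoder.
    loss = nn.functional.cross_entropy(cls_out, cls_target) \
         + nn.functional.cross_entropy(seg_out, seg_target)
    loss.backward()

Because both losses backpropagate into the same encoder, the shared features are pulled toward representations useful for every task, which is the source of the regularization effect noted above.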

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Multi-task learning
  • Image fusion
  • Pansharpening
  • Segmentation
  • Image registration
  • Parameter sharing
  • Remote sensing
  • Convolutional neural networks
  • Domain adaptation

Published Papers (1 paper)


Research

Article
MSDRN: Pansharpening of Multispectral Images via Multi-Scale Deep Residual Network
Remote Sens. 2021, 13(6), 1200; https://doi.org/10.3390/rs13061200 - 21 Mar 2021
Abstract
Pansharpening, a classic and active image fusion topic, aims to produce a high-resolution multispectral (HRMS) image with the same spectral resolution as the multispectral (MS) image and the same spatial resolution as the panchromatic (PAN) image, and it has been well researched. Prior works have introduced various pansharpening methods based on convolutional neural networks (CNNs) with different architectures. However, these methods do not consider the different scale information of the source images, which may lead to the loss of high-frequency detail in the fused image. This paper proposes a pansharpening method for MS images via a multi-scale deep residual network (MSDRN). The proposed method constructs a multi-level network to make better use of the scale information of the source images. Moreover, residual learning is introduced into the network to further improve feature extraction and simplify the learning process. A series of experiments was conducted on the QuickBird and GeoEye-1 datasets. The experimental results demonstrate that MSDRN achieves fusion performance superior or competitive to that of state-of-the-art methods in both visual and quantitative evaluation.
(This article belongs to the Special Issue Multi-Task Deep Learning for Image Fusion and Segmentation)
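
For readers new to the combination of multi-scale processing and residual learning that the abstract describes, the sketch below shows a hypothetical two-scale residual fusion step for pansharpening in PyTorch. The layer layout, number of scales, and channel counts are assumptions made for illustration; they do not reproduce the actual MSDRN architecture, which should be taken from the paper itself.

    # Hypothetical two-scale residual pansharpening sketch (not the
    # authors' MSDRN; sizes and layout are illustrative assumptions).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ResBlock(nn.Module):
        def __init__(self, ch: int):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1),
            )

        def forward(self, x):
            # Residual learning: the block predicts a correction to its input.
            return x + self.body(x)

    class PansharpenSketch(nn.Module):
        def __init__(self, ms_bands: int = 4, ch: int = 32):
            super().__init__()
            self.head = nn.Conv2d(ms_bands + 1, ch, 3, padding=1)  # MS bands + PAN
            self.coarse = ResBlock(ch)  # features at half resolution
            self.fine = ResBlock(ch)    # features at full resolution
            self.tail = nn.Conv2d(ch, ms_bands, 3, padding=1)

        def forward(self, ms_up, pan):
            # ms_up: the MS image upsampled (e.g., bicubically) to PAN resolution.
            x = self.head(torch.cat([ms_up, pan], dim=1))
            # Coarse scale: process a downsampled copy, then upsample back.
            c = F.interpolate(self.coarse(F.avg_pool2d(x, 2)), scale_factor=2,
                              mode="bilinear", align_corners=False)
            x = self.fine(x + c)
            # Global residual: predict high-frequency detail to add to ms_up.
            return ms_up + self.tail(x)

    model = PansharpenSketch()
    hrms = model(torch.randn(1, 4, 128, 128), torch.randn(1, 1, 128, 128))

The global residual connection means the network only has to learn the missing high-frequency detail rather than the whole HRMS image, which is one common motivation for residual learning in fusion tasks.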