
Advanced Trustworthy and Privacy Preserved Image Processing and Pattern Recognition Methods for Biomedical and Clinical Applications

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (15 May 2022) | Viewed by 14936

Special Issue Editors

Guest Editor
National Heart and Lung Institute, Imperial College London, South Kensington, London SW7 2AZ, UK
Interests: medical image analysis; multimodal information fusion; data synthesis; data harmonisation

Guest Editor
Department of Mathematics and Computer Science, University of Barcelona, 08007 Barcelona, Spain
Interests: medical image analysis; segmentation; data synthesis; federated learning; data preprocessing

Guest Editor
School of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, UK
Interests: medical image analysis; data science and decisions; deep and federated learning; explainable AI

Guest Editor
Department of Mathematics and Computer Science, University of Barcelona, 08007 Barcelona, Spain
Interests: medical image analysis; machine learning; predictive modelling; data management

Special Issue Information

Dear Colleagues,

In complex clinical decision-making settings, several collaborative learning frameworks have recently been investigated to enhance the effectiveness of machine and deep learning algorithms. For example, current federated learning techniques enable privacy-preserving model training across different sites with distributed data storage, model compression improves the applicability of a trained model to edge devices with different computing power, and the performance and robustness of independently trained models can be improved using state-of-the-art knowledge distillation methods. The development of explainable AI and transfer learning is also making significant strides toward improving the transparency and transferability of current machine and deep learning models. The goal of this Special Issue is to compile the most recent and advanced machine and deep learning research findings that aid collaborative decision making in biomedical and clinical applications. Any effort to bridge the gap between various sensors, imaging devices, data, models, and machine and deep learning approaches across sites is welcome, but the relationship between the machine and deep learning methods and the downstream diagnostic objectives should be clearly stated.
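For orientation, the sketch below illustrates the core idea behind the federated learning setting mentioned above: each site trains on its own data, and only model weights (never patient data) are sent to a server for aggregation. It is a minimal, generic federated averaging (FedAvg) step written in plain PyTorch under our own simplifying assumptions, not a prescription for any particular framework or submission.

```python
# Minimal sketch of federated averaging (FedAvg). Only the server-side aggregation
# step is shown; local client training is assumed to happen elsewhere. Illustrative
# only -- not tied to any specific federated learning framework.
import copy
import torch

def federated_average(global_model, client_models, client_sizes):
    """Average client model weights in proportion to their local dataset sizes."""
    total = float(sum(client_sizes))
    avg_state = copy.deepcopy(global_model.state_dict())
    for key, value in avg_state.items():
        if value.is_floating_point():  # skip integer buffers such as BatchNorm counters
            avg_state[key] = sum(
                model.state_dict()[key] * (size / total)
                for model, size in zip(client_models, client_sizes)
            )
    global_model.load_state_dict(avg_state)
    return global_model
```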

We welcome articles on topics including, but not limited to, the following:

  • Federated learning methods for biomedical imaging and post-processing;
  • Transfer learning and domain adaptation;
  • Information fusion, joint prediction using multi-modality imaging and other types of data;
  • Object detection, recognition, classification, segmentation, registration, reconstruction, and enhancement of biomedical imaging data;
  • Cross-modality image synthesis;
  • Transparency, explainability, privacy preservation, and interpretability of deep learning models.

Dr. Guang Yang
Dr. Oliver Diaz
Dr. Giorgos Papanastasiou
Dr. Karim Lekadir
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI
  • machine learning
  • deep learning
  • federated learning
  • segmentation
  • detection
  • image synthesis
  • reconstruction
  • transfer learning
  • XAI

Published Papers (4 papers)


Research


20 pages, 3234 KiB  
Article
Unsupervised Image Registration towards Enhancing Performance and Explainability in Cardiac and Brain Image Analysis
by Chengjia Wang, Guang Yang and Giorgos Papanastasiou
Sensors 2022, 22(6), 2125; https://doi.org/10.3390/s22062125 - 09 Mar 2022
Cited by 3 | Viewed by 2501
Abstract
Magnetic Resonance Imaging (MRI) typically recruits multiple sequences (defined here as “modalities”). As each modality is designed to offer different anatomical and functional clinical information, there are evident disparities in the imaging content across modalities. Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging, for example before imaging biomarkers are derived and clinically evaluated across different MRI modalities, time phases and slices. Although commonly needed in real clinical scenarios, affine and non-rigid image registration has not been extensively investigated using a single unsupervised model architecture. In our work, we present an unsupervised deep learning registration methodology that can accurately model affine and non-rigid transformations simultaneously. Moreover, inverse-consistency is a fundamental inter-modality registration property that is not considered in deep learning registration algorithms. To address inverse consistency, our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations, and involves two factorised transformation networks (one per encoder-decoder channel) and an inverse-consistency loss to learn topology-preserving anatomical transformations. Overall, our model (named “FIRE”) shows improved performance against the reference standard baseline method (i.e., Symmetric Normalization implemented using the ANTs toolbox) on multi-modality brain 2D and 3D MRI and intra-modality cardiac 4D MRI data experiments. We focus on explaining model-data components to enhance model explainability in medical image registration. In computational time experiments, we show that the FIRE model operates in a memory-saving mode, as it can inherently learn topology-preserving image registration directly in the training phase. We therefore demonstrate an efficient and versatile registration technique that can have merit in multi-modal image registration in the clinical setting.
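As a rough illustration of the inverse-consistency idea this abstract refers to, the PyTorch sketch below warps an image with a forward displacement field and then with the backward field, and penalises the round-trip deviation from the original image. It is a generic consistency loss written under our own assumptions about field conventions, not the FIRE implementation.

```python
# Generic round-trip inverse-consistency loss for 2D registration (illustrative only).
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp an image (N, C, H, W) with a dense displacement field (N, 2, H, W) given in pixels (dx, dy)."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(image.device)   # (2, H, W), x then y
    coords = grid.unsqueeze(0) + flow                               # displaced pixel coordinates
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0                   # normalise to [-1, 1]
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)         # (N, H, W, 2)
    return F.grid_sample(image, sample_grid, align_corners=True)

def inverse_consistency_loss(image_a, flow_ab, flow_ba):
    """Penalise the difference between image_a and image_a warped A->B and then B->A."""
    round_trip = warp(warp(image_a, flow_ab), flow_ba)
    return torch.mean((round_trip - image_a) ** 2)
```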

12 pages, 1044 KiB  
Article
Two-Stage Segmentation Framework Based on Distance Transformation
by Xiaoyang Huang, Zhi Lin, Yudi Jiao, Moon-Tong Chan, Shaohui Huang and Liansheng Wang
Sensors 2022, 22(1), 250; https://doi.org/10.3390/s22010250 - 30 Dec 2021
Cited by 5 | Viewed by 2102
Abstract
With the rise of deep learning, using deep networks to segment lesions and assist in diagnosis has become an effective means of supporting clinical medical analysis. However, the partial volume effect of organ tissues leads to unclear and blurred ROI edges in medical images, making it challenging to achieve high-accuracy segmentation of lesions or organs. In this paper, we assume that the distance map obtained by performing a distance transformation on the ROI edge can be used as a weight map to make the network pay more attention to learning the ROI edge region. To this end, we design a novel framework that flexibly embeds the distance map into a two-stage network to improve left atrium MRI segmentation performance. Furthermore, a series of distance map generation methods are proposed and studied to reasonably explore how to express the weights that assist network learning. We conduct thorough experiments to verify the effectiveness of the proposed segmentation framework, and the experimental results demonstrate that our hypothesis is feasible.
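As a generic illustration of the distance-transform weighting idea described above (not the authors' exact generation scheme), the sketch below turns a binary ROI mask into a weight map that emphasises pixels near the ROI boundary, which could then multiply a pixel-wise segmentation loss.

```python
# Illustrative edge-emphasising weight map from a binary mask via a Euclidean distance transform.
import numpy as np
from scipy import ndimage

def edge_distance_weight_map(mask, sigma=5.0):
    """Weight map that is largest at the ROI boundary and decays with distance to it.
    Generic sketch; the decay shape and sigma are arbitrary choices, not the paper's scheme."""
    mask = mask.astype(bool)
    dist_inside = ndimage.distance_transform_edt(mask)    # distance of foreground pixels to background
    dist_outside = ndimage.distance_transform_edt(~mask)  # distance of background pixels to foreground
    dist_to_edge = np.where(mask, dist_inside, dist_outside)
    return np.exp(-(dist_to_edge ** 2) / (2.0 * sigma ** 2))

# Example use: weight a per-pixel loss map
# weighted_loss = np.mean(edge_distance_weight_map(gt_mask) * per_pixel_loss)
```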

17 pages, 2738 KiB  
Article
DCNet: Densely Connected Deep Convolutional Encoder–Decoder Network for Nasopharyngeal Carcinoma Segmentation
by Yang Li, Guanghui Han and Xiujian Liu
Sensors 2021, 21(23), 7877; https://doi.org/10.3390/s21237877 - 26 Nov 2021
Cited by 8 | Viewed by 1848
Abstract
Nasopharyngeal carcinoma segmentation in magnetic resonance imaging (MRI) is vital to radiotherapy. Exact dose delivery hinges on an accurate delineation of the gross tumor volume (GTV). However, the large-scale variation in tumor volume is intractable, and the performance of current models is mostly unsatisfactory, with indistinguishable and blurred boundaries in the segmentation results for tiny tumor volumes. To address this problem, we propose a densely connected deep convolutional network consisting of an encoder network and a corresponding decoder network, which extracts high-level semantic features from different levels and concurrently uses low-level spatial features to obtain fine-grained segmentation masks. A skip-connection architecture is involved and modified to propagate spatial information to the decoder network. Preliminary experiments are conducted on 30 patients. Experimental results show our model outperforms all baseline models, with an improvement of 4.17%. An ablation study is performed, and the effectiveness of the novel loss function is validated.
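To make the encoder-decoder-with-skip-connections pattern concrete, here is a deliberately tiny PyTorch sketch of the general idea (two resolution levels, one skip connection carrying low-level spatial features to the decoder). It is illustrative only and does not reproduce DCNet's densely connected architecture or its loss function.

```python
# Tiny encoder-decoder with a single skip connection (illustrative, not DCNet).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TinyEncoderDecoder(nn.Module):
    def __init__(self, in_ch=1, num_classes=2):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)            # 32 (skip) + 32 (upsampled) channels in
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        s1 = self.enc1(x)                          # low-level spatial features
        bottleneck = self.enc2(self.pool(s1))      # high-level semantic features
        up = self.up(bottleneck)
        fused = torch.cat([up, s1], dim=1)         # skip connection to the decoder
        return self.head(self.dec1(fused))
```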

Review


33 pages, 1536 KiB  
Review
Multiple Sclerosis Diagnosis Using Machine Learning and Deep Learning: Challenges and Opportunities
by Nida Aslam, Irfan Ullah Khan, Asma Bashamakh, Fatima A. Alghool, Menna Aboulnour, Noorah M. Alsuwayan, Rawa’a K. Alturaif, Samiha Brahimi, Sumayh S. Aljameel and Kholoud Al Ghamdi
Sensors 2022, 22(20), 7856; https://doi.org/10.3390/s22207856 - 16 Oct 2022
Cited by 17 | Viewed by 6870
Abstract
Multiple Sclerosis (MS) is a disease that impacts the central nervous system (CNS) and can lead to brain, spinal cord, and optic nerve problems. A total of 2.8 million people are estimated to suffer from MS, and, globally, a new case of MS is reported every five minutes. In this review, we discuss the approaches proposed for diagnosing MS using machine learning (ML) published between 2011 and 2022. Numerous models have been developed using different types of data, including magnetic resonance imaging (MRI) and clinical data. We identified the methods that achieved the best results in diagnosing MS; the most commonly implemented approaches are support vector machines (SVM), random forests (RF), and convolutional neural networks (CNN). Moreover, we discussed the challenges and opportunities in MS diagnosis so that researchers and practitioners can enhance their approaches and improve the automated diagnosis of MS. The challenges faced by automated MS diagnosis include difficulty distinguishing the disease from other diseases with similar symptoms, protecting the confidentiality of patients’ data, achieving reliable ML models that are also easily understood by non-experts, and the difficulty of collecting large, reliable datasets. We also discussed several opportunities in the field, such as the implementation of secure platforms, employing better AI solutions, developing better disease prognosis systems, combining more than one data type for better MS prediction, using OCT data for diagnosis, utilizing larger, multi-center datasets to improve the reliability of the developed models, and commercialization.
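For orientation only, the snippet below shows the general shape of the classical baselines (SVM, random forest) that the review surveys, evaluated with cross-validation on purely synthetic placeholder features. It does not use real MS data and is not drawn from any of the reviewed studies.

```python
# Hypothetical SVM / random-forest baselines on synthetic placeholder features (not real MS data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))       # placeholder imaging/clinical feature matrix
y = rng.integers(0, 2, size=200)     # placeholder MS / non-MS labels

for name, model in [
    ("SVM", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
    ("Random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```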
