Artificial Intelligence for Biomedical Sensing, Analysis and Treatment

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (30 November 2021) | Viewed by 10991

Special Issue Editors


Dr. Hien Van Nguyen
Guest Editor
Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77004, USA
Interests: machine learning; artificial intelligence; computer vision; medical image analysis

Dr. Thi Hoang Ngan Le
Guest Editor
Department of Computer Science & Computer Engineering, University of Arkansas, Fayetteville, AR 72701, USA
Interests: image understanding; video understanding; computer vision; robotics; machine learning; deep learning; reinforcement learning; biomedical imaging; single-cell RNA

Special Issue Information

Dear colleagues,

Deep learning and artificial intelligence (AI) have emerged as indispensable tools for extracting relevant information from medical and healthcare data of unprecedented quantity and complexity. AI has helped answer critical clinical questions at the sub-cellular, tissue, organ, and behavioral levels. The journal promotes cutting-edge research on innovation in biomedical imaging, biomedicine, and healthcare, including original ideas, improvements to existing systems, and applied information technology. We are looking for novelty in the methodological and/or theoretical content and/or practical applications of submitted papers. Methodological submissions should solve concrete scientific problems in specific domains, whereas theoretical papers should mainly target fundamental, general, and formal novelty in AI, machine learning, deep learning, and computer vision.

This Special Issue provides a forum for a broad and diverse audience to discuss recent advances, challenges, and opportunities at the nexus of AI, biomedicine, and healthcare. All submissions should address real-world medical problems, considered and discussed in appropriate depth from both the technical and medical points of view. We strongly encourage submissions that include a clinical assessment of the usefulness and potential impact of the proposed methods.

We invite authors to submit high-quality papers whose topics include, but are not limited to, the following categories:

  • Medical image analysis
  • Healthcare informatics
  • Digital pathology
  • Biological cell analysis
  • Computational medicine
  • Drug discovery
  • Biomarker discovery
  • Disease fingerprints
  • Computational genetics
  • AI/machine learning in medicine, medically oriented human biology, and healthcare
  • AI-based modeling and management of healthcare pathways and clinical guidelines
  • AI-based clinical decision making
  • AI in medical and healthcare education
  • Natural language processing in medicine and healthcare
  • Knowledge-based and agent-based systems
  • Automated reasoning and meta-reasoning in medicine

Dr. Hien Van Nguyen
Dr. Thi Hoang Ngan Le
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • deep learning
  • computer vision
  • image processing
  • medical imaging
  • biomedical sensing
  • medical image analysis
  • medicine
  • computational medicine
  • health informatics

Published Papers (3 papers)


Research

16 pages, 4057 KiB  
Article
Automated Breast Cancer Detection Models Based on Transfer Learning
by Madallah Alruwaili and Walaa Gouda
Sensors 2022, 22(3), 876; https://0-doi-org.brum.beds.ac.uk/10.3390/s22030876 - 24 Jan 2022
Cited by 35 | Viewed by 3936
Abstract
Breast cancer is among the leading causes of mortality for women worldwide, so early detection and diagnosis techniques are essential to women's well-being. In mammography, attention has turned to deep learning (DL) models, which radiologists have used to augment their workflow and compensate for the limitations of human observers. Transfer learning is used to distinguish malignant from benign breast cancer by fine-tuning multiple pre-trained models. In this study, we introduce a framework based on the principle of transfer learning. In addition, a mixture of augmentation strategies, including several rotation combinations, scaling, and shifting, was used to increase the number of mammographic images, prevent overfitting, and produce stable results. The proposed system was evaluated on the Mammographic Image Analysis Society (MIAS) dataset and achieved an accuracy of 89.5% using ResNet50 (residual network-50) and 70% using the NasNet-Mobile network. The proposed system demonstrates that pre-trained classification networks are significantly more effective and efficient, making them more suitable for medical imaging, particularly with small training datasets. Full article
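The pipeline described in this abstract can be illustrated with a minimal sketch (not the authors' code): fine-tuning an ImageNet-pretrained ResNet50 for benign/malignant classification with rotation, scaling, and shifting augmentation. The dataset path, folder layout, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Augmentation: rotations, scaling, and shifting to enlarge the training set.
train_tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),   # mammograms are grayscale
    transforms.RandomAffine(degrees=90, translate=(0.1, 0.1), scale=(0.9, 1.1)),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Hypothetical ImageFolder layout: mias/train/benign, mias/train/malignant.
train_ds = datasets.ImageFolder("mias/train", transform=train_tf)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

# Transfer learning: start from ImageNet weights, replace the classifier head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)      # benign vs. malignant

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):                            # illustrative epoch count
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```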

15 pages, 2275 KiB  
Article
Multi-Task Model for Esophageal Lesion Analysis Using Endoscopic Images: Classification with Image Retrieval and Segmentation with Attention
by Xiaoyuan Yu, Suigu Tang, Chak Fong Cheang, Hon Ho Yu and I Cheong Choi
Sensors 2022, 22(1), 283; https://0-doi-org.brum.beds.ac.uk/10.3390/s22010283 - 31 Dec 2021
Cited by 11 | Viewed by 2336
Abstract
The automatic analysis of endoscopic images to assist endoscopists in accurately identifying the types and locations of esophageal lesions remains a challenge. In this paper, we propose a novel multi-task deep learning model for automatic diagnosis that does not simply replace the endoscopist in decision making: endoscopists are expected to correct false predictions from the diagnosis system when additional supporting information is provided. To help endoscopists improve their accuracy in identifying lesion types, an image retrieval module is added to the classification task to provide an additional confidence level for the predicted types of esophageal lesions. In addition, a mutual attention module is added to the segmentation task to improve its performance in locating esophageal lesions. The proposed model is evaluated and compared with other deep learning models on a dataset of 1003 endoscopic images, comprising 290 esophageal cancer, 473 esophagitis, and 240 normal images. The experimental results show the promising performance of our model, with a high accuracy of 96.76% for classification and a Dice coefficient of 82.47% for segmentation. The proposed multi-task deep learning model can therefore be an effective tool to help endoscopists judge esophageal lesions. Full article
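The multi-task idea in this abstract — one shared encoder feeding both a lesion-type classifier and an attention-gated segmentation head — can be sketched as below. This is an assumption-laden toy architecture, not the paper's network; the layer sizes, the simple spatial attention, and the omitted image retrieval module are all illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskLesionNet(nn.Module):
    def __init__(self, num_classes=3):                 # cancer, esophagitis, normal
        super().__init__()
        self.encoder = nn.Sequential(                   # shared feature extractor
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(                # lesion-type prediction
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes)
        )
        self.attention = nn.Conv2d(64, 1, 1)            # crude spatial attention map
        self.seg_head = nn.Sequential(                  # lesion-location mask
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, x):
        feats = self.encoder(x)
        logits = self.classifier(feats)                 # classification branch
        attn = torch.sigmoid(self.attention(feats))
        mask = self.seg_head(feats * attn)              # attention-weighted features
        mask = F.interpolate(mask, size=x.shape[-2:],
                             mode="bilinear", align_corners=False)
        return logits, mask

# Both heads share one backbone, so a joint (weighted) classification +
# segmentation loss trains the encoder on both tasks at once.
model = MultiTaskLesionNet()
logits, mask = model(torch.randn(2, 3, 256, 256))
```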

18 pages, 5297 KiB  
Article
Real-Time PPG Signal Conditioning with Long Short-Term Memory (LSTM) Network for Wearable Devices
by Marek Wójcikowski
Sensors 2022, 22(1), 164; https://0-doi-org.brum.beds.ac.uk/10.3390/s22010164 - 27 Dec 2021
Cited by 9 | Viewed by 3552
Abstract
This paper presents an algorithm for real-time detection of the heart rate measured on a person’s wrist using a wearable device with a photoplethysmographic (PPG) sensor and an accelerometer. The proposed algorithm consists of an appropriately trained Long Short-Term Memory (LSTM) network and the Time-Domain Heart Rate (TDHR) algorithm for peak detection in the PPG waveform. The LSTM network uses the accelerometer signals to restore, in the time domain, the shape of a PPG input signal distorted by body movements. Multiple variants of the LSTM network were evaluated, taking their complexity and computational cost into consideration. Adding the LSTM network increases the computational effort, but the overall algorithm performs considerably better, outperforming other algorithms from the literature. Full article
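The signal-conditioning step described here can be illustrated with a minimal sketch (illustrative only, not the author's network): an LSTM that maps a window of raw PPG plus 3-axis accelerometer samples to a motion-corrected PPG waveform, which would then be passed to a peak detector such as TDHR. Layer sizes, window length, and sampling rate are assumptions.

```python
import torch
import torch.nn as nn

class PPGConditioner(nn.Module):
    def __init__(self, hidden_size=64):
        super().__init__()
        # Input per time step: 1 PPG channel + 3 accelerometer channels.
        self.lstm = nn.LSTM(input_size=4, hidden_size=hidden_size,
                            num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)    # one cleaned PPG sample per step

    def forward(self, ppg, accel):
        # ppg: (batch, T, 1), accel: (batch, T, 3)
        x = torch.cat([ppg, accel], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)                      # (batch, T, 1) cleaned PPG

model = PPGConditioner()
ppg = torch.randn(8, 250, 1)                    # e.g., a 10 s window at 25 Hz
accel = torch.randn(8, 250, 3)
clean_ppg = model(ppg, accel)                   # feed to a peak detector (e.g., TDHR)
```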
