
Biomedical Image and Signals for Treatment Monitoring

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (20 May 2022) | Viewed by 22916

Special Issue Editors


Guest Editor
Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Science and Technology, 30-059 Krakow, Poland
Interests: object identification; biomedical measurements; statistical analysis

Guest Editor
Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Science and Technology, 30-059 Krakow, Poland
Interests: augmented reality; MR/AR/VR/XR; medical image analysis; image segmentation; image registration

Guest Editor
School of Electrical and Computer Engineering, Technical University of Crete, Chania 731 00, Greece
Interests: digital image and signal processing; biomedical applications

Guest Editor
Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, AGH University of Science and Technology, 30-059 Krakow, Poland
Interests: signal processing; speech recognition; voice analysis; voice pathology detection; classification; regression; decision making

Special Issue Information

Dear Colleagues,

Biomedical applications attract great interest from academic and industrial researchers worldwide, driven by the substantial benefits of their implementation: enhanced sensitivity, accuracy and safety in diagnostic, treatment and therapeutic processes. Accurate patient monitoring, based on the acquisition of various biomedical signals and the processing of the acquired biomedical images, is now in a position to pave the way toward assessing a patient's current state of health and adapting patient-specific, personalized medicine, thereby supporting the diagnostic process, treatment monitoring and the combination of different strategies.

In this Special Issue of Sensors, we expect contributions from a broad community of scientists working on diverse applications of image and signal processing in medicine and biology. We encourage interdisciplinary teams to contribute and publish their achievements and breakthrough solutions for biomedical research, diagnostics and therapeutic approaches.

Prof. Dr. Janusz Gajda
Prof. Dr. Andrzej Skalski
Prof. Dr. Michalis Zervakis
Dr. Daria Hemmerling
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • biomedical image processing
  • biomedical signals
  • augmented/mixed reality
  • personalized medicine
  • diagnostic process support
  • biomedical engineering

Published Papers (6 papers)


Research

19 pages, 3019 KiB  
Article
Two-Stage Classification Model for the Prediction of Heart Disease Using IoMT and Artificial Intelligence
by S. Manimurugan, Saad Almutairi, Majed Mohammed Aborokbah, C. Narmatha, Subramaniam Ganesan, Naveen Chilamkurti, Riyadh A. Alzaheb and Hani Almoamari
Sensors 2022, 22(2), 476; https://0-doi-org.brum.beds.ac.uk/10.3390/s22020476 - 09 Jan 2022
Cited by 34 | Viewed by 3637
Abstract
Internet of Things (IoT) technology has recently been applied in healthcare systems as an Internet of Medical Things (IoMT) to collect sensor information for the diagnosis and prognosis of heart disease. The main objective of the proposed research is to classify data and predict heart disease using medical data and medical images. The proposed model is a medical data classification and prediction model that operates in two stages. If the result from the first stage is efficient in predicting heart disease, there is no need for stage two. In the first stage, data gathered from medical sensors affixed to the patient’s body were classified; then, in stage two, echocardiogram image classification was performed for heart disease prediction. A hybrid linear discriminant analysis with the modified ant lion optimization (HLDA-MALO) technique was used for sensor data classification, while a hybrid Faster R-CNN with SE-ResNet-101 model was used for echocardiogram image classification. Both classification methods were carried out, and the classification findings were consolidated and validated to predict heart disease. The HLDA-MALO method obtained 96.85% accuracy in detecting normal sensor data, and 98.31% accuracy in detecting abnormal sensor data. The proposed hybrid Faster R-CNN with SE-ResNeXt-101 transfer learning model performed better in classifying echocardiogram images, with 98.06% precision, 98.95% recall, 96.32% specificity, a 99.02% F-score, and maximum accuracy of 99.15%.
(This article belongs to the Special Issue Biomedical Image and Signals for Treatment Monitoring)
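The two-stage gating described in the abstract can be sketched as follows. The classifiers, the confidence score and the threshold below are hypothetical stand-ins for the HLDA-MALO and Faster R-CNN components, which the abstract does not specify at this level of detail.

```python
# Sketch of the two-stage decision flow: sensor-data classification first,
# echocardiogram image classification only when stage one is not conclusive.

def two_stage_predict(sensor_features, image, stage1_clf, stage2_clf,
                      confidence_threshold=0.95):
    """Return (label, stage_used) for one patient record."""
    label, confidence = stage1_clf(sensor_features)
    if confidence >= confidence_threshold:
        return label, 1                      # stage one suffices
    return stage2_clf(image)[0], 2           # fall through to imaging

# Toy classifiers standing in for HLDA-MALO and the image model:
stage1 = lambda x: ("abnormal", 0.97) if sum(x) > 1.0 else ("normal", 0.60)
stage2 = lambda img: ("normal", 0.99)

print(two_stage_predict([0.8, 0.5], None, stage1, stage2))  # ('abnormal', 1)
print(two_stage_predict([0.1, 0.2], None, stage1, stage2))  # ('normal', 2)
```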

15 pages, 1371 KiB  
Article
Performance Analysis of Deep-Neural-Network-Based Automatic Diagnosis of Diabetic Retinopathy
by Hassan Tariq, Muhammad Rashid, Asfa Javed, Eeman Zafar, Saud S. Alotaibi and Muhammad Yousuf Irfan Zia
Sensors 2022, 22(1), 205; https://0-doi-org.brum.beds.ac.uk/10.3390/s22010205 - 29 Dec 2021
Cited by 29 | Viewed by 2987
Abstract
Diabetic retinopathy (DR) is a human eye disease that affects people who are suffering from diabetes. It causes damage to their eyes, including vision loss. It is treatable; however, it takes a long time to diagnose and may require many eye exams. Early detection of DR may prevent or delay the vision loss. Therefore, a robust, automatic and computer-based diagnosis of DR is essential. Currently, deep neural networks are being utilized in numerous medical areas to diagnose various diseases. Consequently, deep transfer learning is utilized in this article. We employ five convolutional-neural-network-based designs (AlexNet, GoogleNet, Inception V4, Inception ResNet V2 and ResNeXt-50). A collection of DR pictures is created. Subsequently, the created collections are labeled with an appropriate treatment approach. This automates the diagnosis and assists patients through subsequent therapies. Furthermore, in order to identify the severity of DR retina pictures, we use our own dataset to train deep convolutional neural networks (CNNs). Experimental results reveal that the pre-trained model Se-ResNeXt-50 obtains the best classification accuracy of 97.53% for our dataset out of all pre-trained models. Moreover, we perform five different experiments on each CNN architecture. As a result, a minimum accuracy of 84.01% is achieved for a five-degree classification.
(This article belongs to the Special Issue Biomedical Image and Signals for Treatment Monitoring)
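The last step of such a five-degree classification pipeline can be sketched as turning the network's five-way softmax output into a severity grade. The grade names below follow the commonly used five-level DR scale; the abstract does not state the paper's exact label set, so treat them as assumptions.

```python
import numpy as np

# Map a CNN's raw five-class logits to a DR severity grade plus a confidence.
GRADES = ["no DR", "mild", "moderate", "severe", "proliferative"]

def grade_from_logits(logits):
    probs = np.exp(logits - np.max(logits))   # numerically stable softmax
    probs /= probs.sum()
    return GRADES[int(np.argmax(probs))], float(probs.max())

label, confidence = grade_from_logits(np.array([0.1, 0.2, 2.5, 0.3, 0.1]))
print(label)  # moderate
```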

20 pages, 3092 KiB  
Article
Advances in Thermal Image Analysis for the Detection of Pregnancy in Horses Using Infrared Thermography
by Małgorzata Domino, Marta Borowska, Natalia Kozłowska, Łukasz Zdrojkowski, Tomasz Jasiński, Graham Smyth and Małgorzata Maśko
Sensors 2022, 22(1), 191; https://0-doi-org.brum.beds.ac.uk/10.3390/s22010191 - 28 Dec 2021
Cited by 15 | Viewed by 2436
Abstract
Infrared thermography (IRT) was applied as a potentially useful tool in the detection of pregnancy in equids, especially native or wildlife. IRT measures heat emission from the body surface, which increases with the progression of pregnancy as blood flow and metabolic activity in the uterine and fetal tissues increase. Conventional IRT imaging is promising; however, with specific limitations considered, this study aimed to develop novel digital processing methods for thermal images of pregnant mares to detect pregnancy earlier with higher accuracy. In the current study, 40 mares were divided into non-pregnant and pregnant groups and imaged using IRT. Thermal images were transformed into four color models (RGB, YUV, YIQ, HSB) and 10 color components were separated. From each color component, features of image texture were obtained using Histogram Statistics and Grey-Level Run-Length Matrix algorithms. The most informative color/feature combinations were selected for further investigation, and the accuracy of pregnancy detection was calculated. The image texture features in the RGB and YIQ color models reflecting increased heterogeneity of image texture seem to be applicable as potential indicators of pregnancy. Their application in IRT-based pregnancy detection in mares allows for earlier recognition of pregnant mares with higher accuracy than the conventional IRT imaging technique.
(This article belongs to the Special Issue Biomedical Image and Signals for Treatment Monitoring)
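Two of the processing steps named in the abstract, conversion to the YIQ color model and first-order (histogram) statistics per color component, can be sketched in a few lines. The conversion matrix is the standard NTSC RGB-to-YIQ transform; GLRLM features are omitted for brevity, and the random image is a stand-in for a real thermogram.

```python
import numpy as np

# Standard NTSC RGB -> YIQ transform (rows: Y, I, Q).
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def rgb_to_yiq(image):
    """image: (H, W, 3) float array in [0, 1] -> (H, W, 3) YIQ array."""
    return image @ RGB_TO_YIQ.T

def histogram_stats(channel):
    """First-order texture features of one color component."""
    x = channel.ravel().astype(float)
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma
    return {"mean": mu, "variance": sigma**2,
            "skewness": (z**3).mean(), "kurtosis": (z**4).mean()}

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))          # stand-in for a thermal image
yiq = rgb_to_yiq(img)
stats = histogram_stats(yiq[..., 0])   # luminance (Y) component
```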

19 pages, 5359 KiB  
Article
Experimental Study on Wound Area Measurement with Mobile Devices
by Filipe Ferreira, Ivan Miguel Pires, Vasco Ponciano, Mónica Costa, María Vanessa Villasana, Nuno M. Garcia, Eftim Zdravevski, Petre Lameski, Ivan Chorbev, Martin Mihajlov and Vladimir Trajkovik
Sensors 2021, 21(17), 5762; https://0-doi-org.brum.beds.ac.uk/10.3390/s21175762 - 26 Aug 2021
Cited by 9 | Viewed by 4467
Abstract
Healthcare treatments might benefit from advances in artificial intelligence and technological equipment such as smartphones and smartwatches. The presence of cameras in these devices with increasingly robust and precise pattern recognition techniques can facilitate the estimation of the wound area and other telemedicine measurements. Currently, telemedicine is vital to the maintenance of the quality of the treatments remotely. This study proposes a method for measuring the wound area with mobile devices. The proposed approach relies on a multi-step process consisting of image capture, conversion to grayscale, blurring, application of a threshold with segmentation, identification of the wound part, dilation and erosion of the detected wound section, identification of accurate data related to the image, and measurement of the wound area. The proposed method was implemented with the OpenCV framework. Thus, it is a solution for healthcare systems by which to investigate and treat people with skin-related diseases. The proof-of-concept was performed with a static dataset of camera images on a desktop computer. After we validated the approach’s feasibility, we implemented the method in a mobile application that allows for communication between patients, caregivers, and healthcare professionals.
(This article belongs to the Special Issue Biomedical Image and Signals for Treatment Monitoring)
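The multi-step process listed in the abstract (grayscale, blur, threshold segmentation, dilation/erosion, area measurement) can be re-created in plain NumPy; the authors implemented it with OpenCV. The kernel sizes, the threshold value and the pixel-to-cm² scale below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def to_gray(rgb):
    """RGB (H, W, 3) -> luminance (H, W)."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def box_blur(img, k=3):
    """Simple k x k mean filter with edge padding."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            out += padded[p + dy : p + dy + img.shape[0],
                          p + dx : p + dx + img.shape[1]]
    return out / (k * k)

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    p = k // 2
    padded = np.pad(mask, p, mode="constant")
    out = np.zeros_like(mask)
    for dy in range(-p, p + 1):
        for dx in range(-p, p + 1):
            out |= padded[p + dy : p + dy + mask.shape[0],
                          p + dx : p + dx + mask.shape[1]]
    return out

def erode(mask, k=3):
    """Binary erosion, expressed as dilation of the complement."""
    return ~dilate(~mask, k)

def wound_area(rgb, threshold=0.5, cm2_per_pixel=0.01):
    gray = box_blur(to_gray(rgb))
    mask = gray > threshold            # segmentation by threshold
    mask = erode(dilate(mask))         # morphological closing of small holes
    return mask.sum() * cm2_per_pixel  # pixel count -> physical area
```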

18 pages, 2093 KiB  
Article
Gender and Age Estimation Methods Based on Speech Using Deep Neural Networks
by Damian Kwasny and Daria Hemmerling
Sensors 2021, 21(14), 4785; https://0-doi-org.brum.beds.ac.uk/10.3390/s21144785 - 13 Jul 2021
Cited by 32 | Viewed by 5652
Abstract
The speech signal contains a vast spectrum of information about the speaker, such as gender, age, accent, or health state. In this paper, we explored different approaches to an automatic speaker gender classification and age estimation system using speech signals. We applied various Deep Neural Network-based embedder architectures such as x-vector and d-vector to age estimation and gender classification tasks. Furthermore, we have applied a transfer learning-based training scheme with pre-training the embedder network for a speaker recognition task using the Vox-Celeb1 dataset and then fine-tuning it for the joint age estimation and gender classification task. The best performing system achieves new state-of-the-art results on the age estimation task using the popular TIMIT dataset with a mean absolute error (MAE) of 5.12 years for male and 5.29 years for female speakers and a root-mean square error (RMSE) of 7.24 and 8.12 years for male and female speakers, respectively, and an overall gender recognition accuracy of 99.60%.
(This article belongs to the Special Issue Biomedical Image and Signals for Treatment Monitoring)
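The joint setup described in the abstract, a shared speaker embedding feeding one regression head for age and one classification head for gender, can be sketched as follows. The embedding dimension and the random weights are stand-ins; the paper's trained embedders (x-vector/d-vector) would supply the real embedding.

```python
import numpy as np

rng = np.random.default_rng(42)
EMB = 512  # typical x-vector embedding dimension (assumption)

# Random stand-in weights for the two task heads, not trained values.
W_age = rng.normal(scale=0.01, size=EMB)
W_gender = rng.normal(scale=0.01, size=(EMB, 2))

def joint_heads(embedding):
    """embedding: (EMB,) -> (predicted_age, gender_probabilities)."""
    age = float(embedding @ W_age)          # linear regression head
    logits = embedding @ W_gender
    probs = np.exp(logits - logits.max())   # softmax over {male, female}
    probs /= probs.sum()
    return age, probs

emb = rng.normal(size=EMB)                  # stand-in speaker embedding
age, probs = joint_heads(emb)
```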

14 pages, 2661 KiB  
Article
Semi-Supervised Deep Learning-Based Image Registration Method with Volume Penalty for Real-Time Breast Tumor Bed Localization
by Marek Wodzinski, Izabela Ciepiela, Tomasz Kuszewski, Piotr Kedzierawski and Andrzej Skalski
Sensors 2021, 21(12), 4085; https://0-doi-org.brum.beds.ac.uk/10.3390/s21124085 - 14 Jun 2021
Cited by 15 | Viewed by 2515
Abstract
Breast-conserving surgery requires supportive radiotherapy to prevent cancer recurrence. However, the task of localizing the tumor bed to be irradiated is not trivial. The automatic image registration could significantly aid the tumor bed localization and lower the radiation dose delivered to the surrounding healthy tissues. This study proposes a novel image registration method dedicated to breast tumor bed localization addressing the problem of missing data due to tumor resection that may be applied to real-time radiotherapy planning. We propose a deep learning-based nonrigid image registration method based on a modified U-Net architecture. The algorithm works simultaneously on several image resolutions to handle large deformations. Moreover, we propose a dedicated volume penalty that introduces the medical knowledge about tumor resection into the registration process. The proposed method may be useful for improving real-time radiation therapy planning after the tumor resection and, thus, lower the surrounding healthy tissues’ irradiation. The data used in this study consist of 30 computed tomography scans acquired in patients with diagnosed breast cancer, before and after tumor surgery. The method is evaluated using the target registration error between manually annotated landmarks, the ratio of tumor volume, and the subjective visual assessment. We compare the proposed method to several other approaches and show that both the multilevel approach and the volume regularization improve the registration results. The mean target registration error is below 6.5 mm, and the relative volume ratio is close to zero. The registration time below 1 s enables the real-time processing. These results show improvements compared to the classical, iterative methods or other learning-based approaches that do not introduce the knowledge about tumor resection into the registration process.
In future research, we plan to propose a method dedicated to automatic localization of missing regions that may be used to automatically segment tumors in the source image and scars in the target image.
(This article belongs to the Special Issue Biomedical Image and Signals for Treatment Monitoring)
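The two quantitative metrics named in the abstract, target registration error between annotated landmarks and the relative tumor volume ratio, can be sketched as below. These follow the common conventions for these metrics; the paper's exact definitions may differ in detail.

```python
import numpy as np

def target_registration_error(landmarks_fixed, landmarks_warped):
    """Mean Euclidean distance between paired landmarks, in the same units
    as the input coordinates (e.g. mm)."""
    diff = np.asarray(landmarks_fixed) - np.asarray(landmarks_warped)
    return np.linalg.norm(diff, axis=1).mean()

def relative_volume_ratio(tumor_mask_warped, tumor_mask_original):
    """Warped tumor volume over original volume. The volume penalty drives
    this toward zero, since the tumor is absent in the post-operative scan."""
    return tumor_mask_warped.sum() / tumor_mask_original.sum()

fixed = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
warped = np.array([[3.0, 4.0, 0.0], [10.0, 0.0, 0.0]])
print(target_registration_error(fixed, warped))  # 2.5

pre_op = np.ones((4, 4, 4), dtype=bool)    # toy pre-operative tumor mask
post_op = np.zeros((4, 4, 4), dtype=bool)  # tumor fully collapsed by warp
print(relative_volume_ratio(post_op, pre_op))  # 0.0
```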
