
Physiological Sound Acquisition and Processing (Volume II)

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (20 February 2024) | Viewed by 9552

Special Issue Editor


Prof. Dr. Rui Pedro Paiva
Guest Editor
Department of Informatics Engineering, University of Coimbra, Coimbra, Portugal
Interests: music information retrieval; music emotion recognition; medical informatics; feature engineering; applied machine learning

Special Issue Information

Dear Colleagues,

Auscultation is a long-established clinical practice for listening to the internal sounds of the body, particularly heart, respiratory, and abdominal sounds. Despite its multiple benefits, however, conventional auscultation has some associated drawbacks: it must be performed by an expert, especially when the aim is to detect abnormal sounds; it is somewhat subjective, and thus carries inherent inter-listener variability; it is conditioned by the limits of human audition and training; and it does not allow for continuous monitoring. Automated computer-aided analysis of physiological sounds could potentially overcome these limitations, as current work on the detection and classification of heart, respiratory, and bowel sounds suggests.

This Special Issue will publish high-quality work on the automated analysis of clinically relevant physiological sounds. Both original research and review articles are welcome, on topics including but not limited to the following:

  • Sound segmentation and classification algorithms;
  • Audio signal processing, feature engineering, and machine learning/deep learning approaches (a brief illustrative sketch follows this list);
  • Sizeable, high-quality, public datasets and big data;
  • New sensors and acquisition systems (e.g., wearable, portable, and p-health systems) targeting continuous and remote monitoring;
  • Single- and multi-modal approaches and information fusion in multi-channel and multi-sensor settings;
  • Impact of acquisition settings (e.g., clinical or non-clinical environments, heterogeneous sound acquisition equipment, robustness in noisy environments);
  • Personalization and population stratification;
  • Applications of physiological sound processing in healthcare, clinical studies, and decision-support systems.
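
To make the feature-engineering topic above concrete, the following is a minimal sketch of a typical pipeline for a physiological sound recording, using the librosa library. The file name, sampling rate, and parameter values are illustrative assumptions, not requirements of the Special Issue.

```python
# Minimal feature-engineering sketch for a physiological sound recording.
# Assumes librosa is installed; file name and parameters are illustrative.
import librosa
import numpy as np

# Load a (hypothetical) heart-sound recording, resampled to 4 kHz:
# most heart-sound energy lies below ~1 kHz, so 4 kHz is a common choice.
y, sr = librosa.load("heart_sound.wav", sr=4000, mono=True)

# Short-time Fourier transform magnitude spectrogram.
S = np.abs(librosa.stft(y, n_fft=256, hop_length=64))

# Mel-frequency cepstral coefficients, a standard audio feature set.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Simple per-recording summary features for a classical classifier.
features = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
print(features.shape)  # (26,)
```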

Prof. Dr. Rui Pedro Paiva
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • physiological sound processing
  • heart sound
  • respiratory sound
  • abdominal sound
  • audio feature engineering
  • machine learning

Published Papers (3 papers)


Research

17 pages, 3905 KiB  
Article
SonicGuard Sensor—A Multichannel Acoustic Sensor for Long-Term Monitoring of Abdominal Sounds Examined through a Qualification Study
by Zahra Mansour, Verena Uslar, Dirk Weyhe, Danilo Hollosi and Nils Strodthoff
Sensors 2024, 24(6), 1843; https://doi.org/10.3390/s24061843 - 13 Mar 2024
Viewed by 663
Abstract
Auscultation is a fundamental diagnostic technique that provides valuable information about different parts of the body. With the increasing prevalence of digital stethoscopes and telehealth applications, there is a growing trend towards digitizing the capture of bodily sounds, thereby enabling subsequent analysis using machine learning algorithms. This study introduces the SonicGuard sensor, a multichannel acoustic sensor designed for long-term recordings of bodily sounds. We conducted a series of qualification tests, with a specific focus on bowel sounds, ranging from controlled experimental environments to phantom measurements and real patient recordings. These tests demonstrate the effectiveness of the proposed sensor setup. The results show that the SonicGuard sensor is comparable to commercially available digital stethoscopes, which are considered the gold standard in the field. This development opens up possibilities for collecting and analyzing bodily sound datasets using machine learning techniques in the future.
(This article belongs to the Special Issue Physiological Sound Acquisition and Processing (Volume II))
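
As an illustration of the kind of acquisition pipeline such a sensor feeds, below is a minimal sketch of generic multichannel sound capture in Python with the sounddevice and soundfile libraries. It is not the SonicGuard implementation; the channel count, sampling rate, and segment length are assumptions.

```python
# Generic multichannel bodily-sound acquisition sketch (not the SonicGuard
# firmware): records a fixed-length clip from a multichannel audio interface
# and stores it for later offline analysis. Channel count, sample rate, and
# segment length are assumptions about the recording setup.
import sounddevice as sd
import soundfile as sf

CHANNELS = 4          # hypothetical number of acoustic channels
SAMPLE_RATE = 8000    # Hz; bowel-sound energy lies well below 4 kHz
DURATION = 60         # seconds per stored segment

# Blocking capture of one segment; long-term monitoring would loop this.
audio = sd.rec(int(DURATION * SAMPLE_RATE), samplerate=SAMPLE_RATE,
               channels=CHANNELS, dtype="float32")
sd.wait()

# Persist as a multichannel WAV for subsequent machine-learning pipelines.
sf.write("abdominal_segment.wav", audio, SAMPLE_RATE)
```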

18 pages, 4674 KiB  
Article
A Wearable Multi-Sensor Array Enables the Recording of Heart Sounds in Homecare
by Noemi Giordano, Samanta Rosati, Gabriella Balestra and Marco Knaflitz
Sensors 2023, 23(13), 6241; https://doi.org/10.3390/s23136241 - 7 Jul 2023
Cited by 3 | Viewed by 1531
Abstract
The home monitoring of patients affected by chronic heart failure (CHF) is of key importance in preventing acute episodes. Nevertheless, no wearable technological solution exists to date. A possibility could be offered by Cardiac Time Intervals extracted from simultaneous recordings of electrocardiographic (ECG) and phonocardiographic (PCG) signals. However, recording a good-quality PCG signal requires accurate positioning of the stethoscope over the chest, which is infeasible for a naïve user such as the patient. In this work, we propose a solution based on multi-source PCG. We designed a flexible multi-sensor array to enable the recording of heart sounds by inexperienced users. The multi-sensor array is based on a flexible Printed Circuit Board mounting 48 microphones with high spatial resolution, three electrodes to record an ECG, and a Magneto-Inertial Measurement Unit. We validated its usability on a sample population of 42 inexperienced volunteers and found that all subjects could record signals of good to excellent quality. Moreover, we found that the multi-sensor array is suitable for use on a wide population of at-risk patients regardless of their body characteristics. Based on the promising findings of this study, we believe that the described device could enable the home monitoring of CHF patients in the near future.
(This article belongs to the Special Issue Physiological Sound Acquisition and Processing (Volume II))
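
One way a dense microphone array can relax placement requirements is automatic channel selection. The sketch below scores each channel by the fraction of its power in a typical heart-sound band and picks the best one; this quality index and all parameters are illustrative assumptions, not the authors' method.

```python
# Sketch of one plausible use of a multi-sensor PCG array: automatically
# picking the best microphone channel so that sensor placement by an
# inexperienced user matters less. The quality index (spectral energy in
# the main heart-sound band) is an illustrative assumption.
import numpy as np
from scipy.signal import butter, sosfiltfilt, welch

def band_energy_ratio(x, fs, band=(25.0, 150.0)):
    """Fraction of signal power inside the typical S1/S2 frequency band."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 2048))
    in_band = (f >= band[0]) & (f <= band[1])
    return pxx[in_band].sum() / pxx.sum()

def select_best_channel(channels, fs):
    """channels: array of shape (n_channels, n_samples). Returns best index."""
    # Band-pass to the range where heart sounds live before scoring.
    sos = butter(4, [20, 400], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, channels, axis=1)
    scores = [band_energy_ratio(ch, fs) for ch in filtered]
    return int(np.argmax(scores))

# Example with synthetic noise standing in for a 48-channel recording.
fs = 2000
rng = np.random.default_rng(0)
data = rng.standard_normal((48, 10 * fs))
print(select_best_channel(data, fs))
```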

13 pages, 1779 KiB  
Article
Automated Lung Sound Classification Using a Hybrid CNN-LSTM Network and Focal Loss Function
by Georgios Petmezas, Grigorios-Aris Cheimariotis, Leandros Stefanopoulos, Bruno Rocha, Rui Pedro Paiva, Aggelos K. Katsaggelos and Nicos Maglaveras
Sensors 2022, 22(3), 1232; https://doi.org/10.3390/s22031232 - 6 Feb 2022
Cited by 62 | Viewed by 6748
Abstract
Respiratory diseases constitute one of the leading causes of death worldwide and directly affect the patient’s quality of life. Early diagnosis and patient monitoring, which conventionally include lung auscultation, are essential for the efficient management of respiratory diseases. Manual lung sound interpretation is a subjective and time-consuming process that requires high medical expertise. The capabilities of deep learning could be exploited to design robust lung sound classification models. In this paper, we propose a novel hybrid neural model that implements the focal loss (FL) function to deal with training data imbalance. Features initially extracted from short-time Fourier transform (STFT) spectrograms via a convolutional neural network (CNN) are given as input to a long short-term memory (LSTM) network that captures the temporal dependencies in the data and classifies four types of lung sounds: normal, crackles, wheezes, and both crackles and wheezes. The model was trained and tested on the ICBHI 2017 Respiratory Sound Database and achieved state-of-the-art results using three different data splitting strategies: sensitivity 47.37%, specificity 82.46%, score 64.92%, and accuracy 73.69% for the official 60/40 split; sensitivity 52.78%, specificity 84.26%, score 68.52%, and accuracy 76.39% using interpatient 10-fold cross-validation; and sensitivity 60.29% and accuracy 74.57% using leave-one-out cross-validation.
(This article belongs to the Special Issue Physiological Sound Acquisition and Processing (Volume II))
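
For readers unfamiliar with the recipe, below is a minimal PyTorch sketch of the general architecture the abstract describes: CNN features computed over STFT spectrograms, fed to an LSTM, trained with a focal loss. Layer sizes and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Minimal PyTorch sketch of the general CNN-LSTM-with-focal-loss recipe.
# Layer sizes and hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """Focal loss for class-imbalanced classification (Lin et al., 2017)."""
    def __init__(self, gamma=2.0):
        super().__init__()
        self.gamma = gamma

    def forward(self, logits, targets):
        ce = F.cross_entropy(logits, targets, reduction="none")
        pt = torch.exp(-ce)                 # probability of the true class
        return ((1 - pt) ** self.gamma * ce).mean()

class CnnLstm(nn.Module):
    """CNN front end over STFT spectrograms, LSTM over the time axis."""
    def __init__(self, n_classes=4):      # normal/crackles/wheezes/both
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d((2, 1)),
        )
        self.lstm = nn.LSTM(input_size=32 * 32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, spec):                  # spec: (batch, 1, freq=128, time)
        h = self.cnn(spec)                    # (batch, 32, 32, time)
        h = h.permute(0, 3, 1, 2).flatten(2)  # (batch, time, 32*32)
        out, _ = self.lstm(h)
        return self.head(out[:, -1])          # classify from last time step

model, criterion = CnnLstm(), FocalLoss()
logits = model(torch.randn(8, 1, 128, 100))
loss = criterion(logits, torch.randint(0, 4, (8,)))
```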
