
Multi-Sensor Fusion in Body Sensor Networks

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (30 September 2019) | Viewed by 7784

Special Issue Editors


Guest Editor
Department of Informatics, Electronics, Modelling and Systems, University of Calabria, Via P. Bucci, 41C, Arcavacata di Rende, 87036 Rende, Italy
Interests: high-level programming methodologies and frameworks for body sensor networks; collaborative and cloud-assisted body sensor networks; pattern recognition and knowledge discovery algorithms on physiological signals; human activity recognition; ECG analysis; emotion recognition; interoperability on the Internet-of-Things

Guest Editor
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Nanshan, Shenzhen 518055, China
Interests: body sensor networks; wearable sensing; data fusion; medical big data

Guest Editor
School of Electrical Engineering and Computer Science, Washington State University, 355 Spokane Street, Pullman, WA 99164-2752, USA
Interests: pervasive computing; machine learning; algorithm design; Internet-of-Things; mobile health

Guest Editor
The BioRobotics Institute, Scuola Superiore Sant’Anna, Piazza Martiri della Libertà 33, 56124 Pisa, Italy
Interests: wearable sensors; machine learning; activity recognition; inertial sensors; movement analysis; gait parameters estimation; automatic early detection of gait alterations; sports bioengineering; mobile health

Special Issue Information

Dear Colleagues,

Multi-sensor data fusion comprises methodologies, algorithms and techniques to capture, from multiple sources, a unified picture of the observed phenomenon. In the context of body sensor networks (BSNs), the objective of sensor data fusion is the integration of multiple, heterogeneous, noisy and error-affected signals to obtain more accurate and comprehensive information on a subject’s health and psycho-physiological status.
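
As a minimal illustration of this idea (a hedged sketch, not material from the Special Issue), the Python snippet below fuses redundant, noisy estimates of the same quantity by inverse-variance weighting; the combined estimate has lower variance than any single source. All sensor names and noise figures are hypothetical.

    # Inverse-variance fusion of redundant scalar estimates: each source is
    # weighted by the reciprocal of its (assumed known) noise variance.
    import numpy as np

    def inverse_variance_fusion(estimates, variances):
        """Return the fused estimate and its variance."""
        estimates = np.asarray(estimates, dtype=float)
        weights = 1.0 / np.asarray(variances, dtype=float)
        fused = np.sum(weights * estimates) / np.sum(weights)
        fused_variance = 1.0 / np.sum(weights)
        return fused, fused_variance

    # Hypothetical example: heart rate (bpm) estimated from ECG, PPG, and a chest strap.
    hr_estimates = [72.4, 75.1, 70.8]   # noisy per-sensor estimates
    hr_variances = [1.0, 4.0, 2.5]      # assumed per-sensor noise variances
    fused_hr, fused_var = inverse_variance_fusion(hr_estimates, hr_variances)
    print(f"fused HR: {fused_hr:.1f} bpm (variance {fused_var:.2f})")

Note that the fused variance (here about 0.61) is smaller than that of the best single sensor, which is the statistical payoff of fusing redundant sources.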

Since their appearance, BSNs have been considered a potentially disruptive shift in our health and social lifestyle. The key aspect of BSNs is the use of unobtrusive, wireless wearable sensing units attached to the human body that allow continuous, real-time physiological monitoring in mobility, enabling diverse applications such as (i) prevention, early detection, and monitoring of diseases and other medical conditions; (ii) elderly assistance at home; (iii) sport and training; (iv) physical activity and gesture detection; and (v) emotion recognition.

However, despite the increasing diffusion of smart wearable sensing devices, the design and implementation of effective (and power-efficient) BSN applications remain challenging. Physiological signals are acquired, processed, and streamed by resource-constrained devices whose limited processing capabilities, energy availability, and storage capacity altogether hinder signal processing, pattern recognition, and machine learning performance. Multi-sensor data fusion applied to redundant or complementary signals is seen as an effective solution to infer accurate information from such corrupted, noisy, or error-affected signals. Nevertheless, the ongoing evolution of BSNs into multi-device, multi-modal sensing systems makes data fusion a complex task, one that has only recently begun to be approached with systematic and reusable methods and technical solutions.
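
Fusion can also be applied at the decision level rather than the signal level. The sketch below (synthetic data, illustrative dimensions; not a method from the Special Issue) trains one lightweight classifier per modality and combines per-window predictions by majority vote, so a single corrupted stream cannot dominate the outcome.

    # Decision-level (late) fusion by majority voting over per-modality classifiers.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_windows, n_train = 300, 200
    labels = rng.integers(0, 2, size=n_windows)

    # Three hypothetical modalities, each weakly informative and independently noisy.
    modalities = [labels[:, None] + rng.normal(scale=s, size=(n_windows, 4))
                  for s in (1.0, 1.5, 2.0)]

    votes = []
    for X in modalities:
        clf = LogisticRegression(max_iter=1000).fit(X[:n_train], labels[:n_train])
        votes.append(clf.predict(X[n_train:]))

    fused = (np.mean(votes, axis=0) >= 0.5).astype(int)  # majority vote
    print("fused accuracy:", float(np.mean(fused == labels[n_train:])))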

This Special Issue aims to report recent research results on methodologies, algorithms, and techniques for “Multi-Sensor Fusion in Body Sensor Networks”.

Dr. Raffaele Gravina
Prof. Ye Li
Prof. Hassan Ghasemzadeh
Dr. Andrea Mannini
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • distributed multi-sensor fusion algorithms 
  • collaborative multi-sensor fusion
  • multi-level algorithms for multi-sensor fusion in BSNs
  • multi-modal fusion for cognitive services
  • power-efficient sensor fusion in BSNs
  • multi-sensor fusion for early disease detection in BSNs
  • multi-sensor fusion for human activity recognition applications
  • multi-sensor fusion for emotion recognition applications

Published Papers (2 papers)


Research

21 pages, 1113 KiB  
Article
Exploring Deep Physiological Models for Nociceptive Pain Recognition
by Patrick Thiam, Peter Bellmann, Hans A. Kestler and Friedhelm Schwenker
Sensors 2019, 19(20), 4503; https://doi.org/10.3390/s19204503 - 17 Oct 2019
Cited by 49 | Viewed by 4368
Abstract
Standard feature engineering involves manually designing measurable descriptors based on some expert knowledge in the domain of application, followed by the selection of the best performing set of designed features for the subsequent optimisation of an inference model. Several studies have shown that this whole manual process can be efficiently replaced by deep learning approaches which are characterised by the integration of feature engineering, feature selection and inference model optimisation into a single learning process. In the following work, deep learning architectures are designed for the assessment of measurable physiological channels in order to perform an accurate classification of different levels of artificially induced nociceptive pain. In contrast to previous works, which rely on carefully designed sets of hand-crafted features, the current work aims at building competitive pain intensity inference models through autonomous feature learning, based on deep neural networks. The assessment of the designed deep learning architectures is based on the BioVid Heat Pain Database (Part A) and experimental validation demonstrates that the proposed uni-modal architecture for the electrodermal activity (EDA) and the deep fusion approaches significantly outperform previous methods reported in the literature, with respective average performances of 84.57% and 84.40% for the binary classification experiment consisting of the discrimination between the baseline and the pain tolerance level (T0 vs. T4) in a Leave-One-Subject-Out (LOSO) cross-validation evaluation setting. Moreover, the experimental results clearly show the relevance of the proposed approaches, which also offer more flexibility in the case of transfer learning due to the modular nature of deep neural networks.
(This article belongs to the Special Issue Multi-Sensor Fusion in Body Sensor Networks)
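
As a hedged illustration of the representation-level deep fusion the abstract describes (this is not the authors' architecture; channel names, layer sizes, and window lengths are assumptions), a minimal PyTorch sketch might encode each physiological channel with its own small 1-D CNN and concatenate the learned embeddings before classification:

    # One encoder per physiological channel; embeddings are fused by concatenation.
    import torch
    import torch.nn as nn

    class ChannelEncoder(nn.Module):
        """Small 1-D CNN mapping a raw signal window to a fixed-size embedding."""
        def __init__(self, embedding_dim=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
                nn.MaxPool1d(4),
                nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # -> (batch, 32, 1)
            )
            self.proj = nn.Linear(32, embedding_dim)

        def forward(self, x):  # x: (batch, 1, samples)
            return self.proj(self.net(x).squeeze(-1))

    class DeepFusionClassifier(nn.Module):
        """Concatenates per-channel embeddings and classifies the fused vector."""
        def __init__(self, n_channels=3, embedding_dim=32, n_classes=2):
            super().__init__()
            self.encoders = nn.ModuleList(
                ChannelEncoder(embedding_dim) for _ in range(n_channels))
            self.head = nn.Linear(n_channels * embedding_dim, n_classes)

        def forward(self, channels):  # list of (batch, 1, samples) tensors
            fused = torch.cat([enc(x) for enc, x in zip(self.encoders, channels)],
                              dim=1)
            return self.head(fused)

    # Hypothetical input: 3 channels (e.g., EDA, ECG, EMG), batch of 8, 512 samples.
    model = DeepFusionClassifier()
    windows = [torch.randn(8, 1, 512) for _ in range(3)]
    logits = model(windows)  # (8, 2), e.g., baseline (T0) vs. tolerance (T4)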

22 pages, 3171 KiB  
Article
Fusing Object Information and Inertial Data for Activity Recognition
by Alexander Diete and Heiner Stuckenschmidt
Sensors 2019, 19(19), 4119; https://doi.org/10.3390/s19194119 - 23 Sep 2019
Cited by 7 | Viewed by 2727
Abstract
In the field of pervasive computing, wearable devices have been widely used for recognizing human activities. One important area in this research is the recognition of activities of daily living where especially inertial sensors and interaction sensors (like RFID tags with scanners) are popular choices as data sources. Using interaction sensors, however, has one drawback: they may not differentiate between proper interaction and simple touching of an object. A positive signal from an interaction sensor is not necessarily caused by a performed activity e.g., when an object is only touched but no interaction occurred afterwards. There are, however, many scenarios like medicine intake that rely heavily on correctly recognized activities. In our work, we aim to address this limitation and present a multimodal egocentric-based activity recognition approach. Our solution relies on object detection that recognizes activity-critical objects in a frame. As it is infeasible to always expect a high quality camera view, we enrich the vision features with inertial sensor data that monitors the users’ arm movement. This way we try to overcome the drawbacks of each respective sensor. We present our results of combining inertial and video features to recognize human activities on different types of scenarios where we achieve an F1-measure of up to 79.6%.
(This article belongs to the Special Issue Multi-Sensor Fusion in Body Sensor Networks)
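
A hedged sketch of the fusion idea this abstract describes (not the paper's actual pipeline; all feature names, dimensions, and data are synthetic assumptions): per-window object-detection confidences from the egocentric camera are concatenated with inertial features, and a single classifier is trained on the joint vector.

    # Early fusion of object-detection evidence and IMU features for activity recognition.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(1)
    n_windows, n_train = 300, 200

    object_scores = rng.random((n_windows, 5))      # e.g., confidences for cup, pillbox, ...
    imu_features = rng.normal(size=(n_windows, 9))  # e.g., arm accel/gyro statistics
    labels = rng.integers(0, 2, size=n_windows)     # e.g., "interaction" vs. "touch only"

    joint = np.hstack([object_scores, imu_features])  # one fused vector per window

    clf = LogisticRegression(max_iter=1000).fit(joint[:n_train], labels[:n_train])
    print("F1:", f1_score(labels[n_train:], clf.predict(joint[n_train:])))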
