
Multi-Sensor Systems for Object Tracking

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (10 March 2022) | Viewed by 25894

Special Issue Editor


Dr. Tomasz Hachaj
Guest Editor
Department of Applied Computer Science, AGH University of Science and Technology, 30-059 Kraków, Poland
Interests: pattern recognition; signal processing; computer vision

Special Issue Information

Dear Colleagues,

Homogeneous and heterogeneous multisensor systems are among the most popular and affordable solutions for object tracking. Sensor-based object tracking can be applied not only to individuals (motion capture, wearable sensors) and autonomous vehicles (self-driving cars and robots) but also to monitoring personnel and traffic flow in flats, buildings, or even whole cities. Depending on the application, these sensors may be vision-based sensors, inertial measurement units (IMUs), LiDARs, and many others.

This Special Issue aims to present the latest advances in multisensor systems for object tracking. We welcome contributions in all fields of sensor-based object tracking, including new systems, signal-processing algorithms, and new applications. Topics include, but are not limited to:

  • Simultaneous localization and mapping (SLAM);
  • Motion capture;
  • Autonomous vehicles;
  • Ubiquitous sensors;
  • Wearable sensors;
  • Computer vision;
  • Inertial measurement units (IMU);
  • LiDAR.

Dr. Tomasz Hachaj
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Object tracking
  • Simultaneous localization and mapping (SLAM)
  • Motion capture
  • Autonomous vehicles
  • Ubiquitous sensors
  • Wearable sensors
  • Computer vision
  • Inertial measurement units (IMU)
  • LiDAR

Published Papers (7 papers)


Research

16 pages, 1803 KiB  
Article
Kinematic Analysis of 360° Turning in Stroke Survivors Using Wearable Motion Sensors
by Masoud Abdollahi, Pranav Madhav Kuber, Michael Shiraishi, Rahul Soangra and Ehsan Rashedi
Sensors 2022, 22(1), 385; https://0-doi-org.brum.beds.ac.uk/10.3390/s22010385 - 05 Jan 2022
Cited by 11 | Viewed by 2681
Abstract
Background: A stroke often leaves survivors with an impaired neuromusculoskeletal system, subjecting them to an increased risk of injury (e.g., due to falls) even during activities of daily living. The risk of injury to such individuals can be related to alterations in their movement. Using inertial sensors to record digital biomarkers during turning could reveal the relevant turning alterations. Objectives: In this study, movement alterations in stroke survivors (SS) were studied and compared to those of healthy individuals (HI) across the entire turning task, which was chosen because it requires the synergistic engagement of multiple bodily systems. Methods: The motion of 28 participants (14 SS, 14 HI) during turning was captured using a set of four inertial measurement units (IMUs) placed on the sternum, sacrum, and both shanks. The motion signals were segmented using temporal and spatial segmentation of the data from the leading and trailing shanks. Several kinematic parameters were extracted for each participant, including the range of motion and angular velocity of the four body segments, turning time, the number of cycles involved in the turning task, and the portion of the stance phase while turning. Results: Temporal processing of the data showed that, compared to HI, SS required more cycles to complete the turn and exhibited longer turn duration and stance phase, as well as greater range of motion in flexion-extension and lateral bending of the sternum and sacrum (p < 0.035). However, HI exhibited larger flexion-extension angular velocity in all four segments. Spatial processing, in agreement with the temporal method, showed no between-group difference in the flexion-extension range of motion of either shank (p > 0.08), but it revealed that the angular velocity of the leading and trailing shanks in the direction of the turn was larger in HI (p < 0.01). Conclusions: The changes in the upper/lower body segments of SS could be adequately identified and quantified by IMU sensors. The identified kinematic changes in SS, such as the lower flexion-extension angular velocity of the four body segments and the larger lateral-bending range of motion of the sternum and sacrum compared to HI during turning, could be due to a lack of proper core stability and the effect of turning on the vestibular system. This research could facilitate the development of targeted, efficient rehabilitation programs focusing on the affected aspects of the turning movement for the stroke community.
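As a rough illustration of the kind of kinematic features described above, the sketch below computes a range of motion and a peak angular velocity from a single segment's orientation trace. The sampling rate, input format, and function names are illustrative assumptions, not the authors' pipeline.

```python
# A minimal sketch, assuming a 1-D flexion-extension angle trace
# sampled at 100 Hz; not the authors' actual feature-extraction code.
import numpy as np

FS = 100.0  # assumed sampling frequency (Hz)

def turning_metrics(angle_deg: np.ndarray) -> dict:
    """Range of motion and peak angular velocity for one body segment."""
    rom = float(angle_deg.max() - angle_deg.min())
    # np.gradient gives per-sample differences; multiply by FS for deg/s.
    angular_velocity = np.gradient(angle_deg) * FS
    return {"rom_deg": rom,
            "peak_angular_velocity_dps": float(np.abs(angular_velocity).max())}

# Synthetic 3 s sternum trace standing in for a segmented turning trial.
t = np.linspace(0.0, 3.0, int(3 * FS))
print(turning_metrics(30.0 * np.sin(2 * np.pi * 0.4 * t)))
```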

20 pages, 4774 KiB  
Article
A Safety Warning Algorithm Based on Axis Aligned Bounding Box Method to Prevent Onsite Accidents of Mobile Construction Machineries
by Cynthia Changxin Wang, Mudan Wang, Jun Sun and Mohammad Mojtahedi
Sensors 2021, 21(21), 7075; https://0-doi-org.brum.beds.ac.uk/10.3390/s21217075 - 25 Oct 2021
Cited by 8 | Viewed by 2190
Abstract
Mobile construction machinery is accident-prone on a dynamic construction site, because the site environment is constantly changing and continuous safety monitoring by humans is impossible. These accidents usually take the form of machinery overturning or collapsing into risk areas such as foundation pits, slopes, or soft-soil areas. Preventing mobile construction machinery from entering risk areas is therefore key, yet practical safety-management techniques to achieve this are currently lacking. Using a wireless sensor device to collect the location of mobile construction machinery, this research develops a safety warning algorithm that prevents machinery from moving into risk areas and reduces onsite overturning and collapsing accidents. A modified axis-aligned bounding box (AABB) method is proposed according to the movement patterns of mobile construction machinery, and the warning algorithm is developed based on onsite safety-management regulations. The algorithm was validated in a real-case simulation in which machinery enters the warning zone. The simulation results showed that the overall algorithm, combining location-sensing technology with the modified bounding-box method, could detect risk and issue warnings in a timely manner. The algorithm can be implemented for the safety monitoring of mobile construction machinery in daily onsite management.
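To make the bounding-box idea concrete, here is a minimal sketch of an AABB warning check: the machine's box is tested against each risk area and against an enlarged buffer around it. The SAFE/WARNING/ALARM scheme, the margin value, and all names are illustrative assumptions rather than the authors' modified algorithm.

```python
# A minimal sketch of a two-level AABB warning check, under assumed
# zone geometry; not the paper's modified algorithm.
from dataclasses import dataclass

@dataclass
class AABB:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def inflate(self, margin: float) -> "AABB":
        """Enlarge the box on all sides to form a warning buffer."""
        return AABB(self.xmin - margin, self.ymin - margin,
                    self.xmax + margin, self.ymax + margin)

    def overlaps(self, other: "AABB") -> bool:
        """Standard AABB intersection test."""
        return (self.xmin <= other.xmax and self.xmax >= other.xmin and
                self.ymin <= other.ymax and self.ymax >= other.ymin)

def warning_level(machine: AABB, risk_area: AABB, margin: float = 5.0) -> str:
    if machine.overlaps(risk_area):
        return "ALARM"       # machine box already intersects the risk area
    if machine.overlaps(risk_area.inflate(margin)):
        return "WARNING"     # machine inside the buffered warning zone
    return "SAFE"

pit = AABB(20.0, 20.0, 40.0, 35.0)        # e.g., a foundation pit
excavator = AABB(14.0, 18.0, 18.0, 22.0)  # box from the location sensor
print(warning_level(excavator, pit))      # -> "WARNING"
```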

24 pages, 5400 KiB  
Article
CNN-Based Classifier as an Offline Trigger for the CREDO Experiment
by Marcin Piekarczyk, Olaf Bar, Łukasz Bibrzycki, Michał Niedźwiecki, Krzysztof Rzecki, Sławomir Stuglik, Thomas Andersen, Nikolay M. Budnev, David E. Alvarez-Castillo, Kévin Almeida Cheminant, Dariusz Góra, Alok C. Gupta, Bohdan Hnatyk, Piotr Homola, Robert Kamiński, Marcin Kasztelan, Marek Knap, Péter Kovács, Bartosz Łozowski, Justyna Miszczyk, Alona Mozgova, Vahab Nazari, Maciej Pawlik, Matías Rosas, Oleksandr Sushchov, Katarzyna Smelcerz, Karel Smolek, Jarosław Stasielak, Tadeusz Wibig, Krzysztof W. Woźniak and Jilberto Zamora-Saa
Sensors 2021, 21(14), 4804; https://0-doi-org.brum.beds.ac.uk/10.3390/s21144804 - 14 Jul 2021
Cited by 12 | Viewed by 4138
Abstract
Gamification is known to enhance users' participation in education and research projects that follow the citizen-science paradigm. The Cosmic Ray Extremely Distributed Observatory (CREDO) experiment is designed for the large-scale study of various forms of radiation that continuously reach the Earth from space, collectively known as cosmic rays. The CREDO Detector app relies on a network of involved users and now works worldwide across phones and other CMOS-sensor-equipped devices. To broaden the user base and activate current users, CREDO makes extensive use of gamification solutions such as the periodical Particle Hunters Competition. However, an adverse effect of gamification is that the number of artefacts, i.e., signals unrelated to cosmic-ray detection or openly related to cheating, increases substantially. To tag the artefacts appearing in the CREDO database, we propose a method based on machine learning. The approach involves training a convolutional neural network (CNN) to recognise the morphological difference between signals and artefacts, yielding a CNN-based trigger that mimics the signal-versus-artefact assignments of human annotators as closely as possible. To enhance the method, the input image is adaptively thresholded and then transformed using Daubechies wavelets; in this exploratory study, the wavelet transforms amplify distinctive image features. As a result, we obtain a very good recognition ratio of almost 99% for both signals and artefacts. The proposed solution eliminates the need for manual supervision of the competition process.
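A minimal sketch of the preprocessing stage described above, assuming a grayscale frame, a local-mean adaptive threshold, and a single-level 'db2' decomposition via PyWavelets; the CREDO pipeline's exact threshold rule and wavelet configuration may differ.

```python
# A minimal sketch, under assumed preprocessing choices; not the
# CREDO team's exact pipeline.
import numpy as np
import pywt  # PyWavelets

def preprocess(frame: np.ndarray, window: int = 8) -> np.ndarray:
    """Adaptive-threshold a grayscale frame, then apply a 2-D DWT."""
    # Simple adaptive threshold: compare each pixel to its local block mean.
    h, w = frame.shape
    local_mean = np.zeros_like(frame, dtype=float)
    for i in range(0, h, window):
        for j in range(0, w, window):
            block = frame[i:i + window, j:j + window]
            local_mean[i:i + window, j:j + window] = block.mean()
    binary = (frame > local_mean).astype(float)

    # One level of the Daubechies-2 wavelet decomposition.
    approx, (horiz, vert, diag) = pywt.dwt2(binary, "db2")
    # Stack the four subbands as CNN input channels.
    return np.stack([approx, horiz, vert, diag], axis=0)

frame = (np.random.rand(60, 60) * 255).astype(np.uint8)
print(preprocess(frame).shape)  # (4, 31, 31) for a 60x60 input with db2
```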

19 pages, 1432 KiB  
Article
SLAM-OR: Simultaneous Localization, Mapping and Object Recognition Using Video Sensors Data in Open Environments from the Sparse Points Cloud
by Patryk Mazurek and Tomasz Hachaj
Sensors 2021, 21(14), 4734; https://0-doi-org.brum.beds.ac.uk/10.3390/s21144734 - 11 Jul 2021
Cited by 10 | Viewed by 4647
Abstract
In this paper, we propose a novel approach that enables simultaneous localization and mapping (SLAM) and object recognition using visual sensor data in open environments and that is capable of working on sparse point clouds. In the proposed algorithm, ORB-SLAM uses the current and previous monocular video frames to determine the observer's position and a cloud of points representing objects in the environment, while a deep neural network uses the current frame to detect and recognize objects (OR). In the next step, the sparse point cloud returned by the SLAM algorithm is compared with the area recognized by the OR network. Because each point in the 3D map has a counterpart in the current frame, the points matching the area recognized by the OR algorithm can be selected. A clustering algorithm then determines areas in which points are densely distributed in order to detect the spatial positions of the objects found by OR, and a heuristic based on principal component analysis (PCA) estimates the bounding boxes of the detected objects. The image-processing pipeline that uses sparse point clouds generated by SLAM to locate objects recognized by the deep neural network, together with the PCA-based heuristic, constitutes the main novelty of our solution. In contrast to state-of-the-art approaches, our algorithm does not require additional computations such as the generation of dense point clouds for object positioning, which greatly simplifies the task. We evaluated our approach on a large benchmark dataset using various state-of-the-art OR architectures (YOLO, MobileNet, RetinaNet) and clustering algorithms (DBSCAN and OPTICS), obtaining promising results. Both our source code and evaluation datasets are available for download, so our results can be easily reproduced.
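The back end of this pipeline can be sketched compactly: cluster the map points that survive the OR-mask filtering with DBSCAN, then fit a PCA-aligned box to the largest cluster. Parameter values and array layouts below are illustrative assumptions, not the SLAM-OR source code.

```python
# A minimal sketch of DBSCAN clustering plus a PCA-aligned bounding
# box, under assumed parameters; not the SLAM-OR implementation.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

def object_box(points_3d: np.ndarray):
    """points_3d: (N, 3) map points already filtered by the OR mask.

    Returns the centroid and PCA-aligned box extents of the largest
    dense cluster.
    """
    labels = DBSCAN(eps=0.5, min_samples=5).fit(points_3d).labels_
    valid = labels[labels >= 0]          # label -1 marks noise
    if valid.size == 0:
        raise ValueError("no dense cluster found")
    biggest = np.bincount(valid).argmax()
    cluster = points_3d[labels == biggest]

    # PCA gives the principal axes; the box extents are the spans of
    # the cluster projected onto those axes.
    pca = PCA(n_components=3).fit(cluster)
    projected = pca.transform(cluster)
    extents = projected.max(axis=0) - projected.min(axis=0)
    return cluster.mean(axis=0), extents

pts = np.random.randn(200, 3) * 0.2 + np.array([2.0, 0.0, 5.0])
center, size = object_box(pts)
print(center.round(2), size.round(2))
```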

19 pages, 8116 KiB  
Article
Design and Implementation of a Position, Speed and Orientation Fuzzy Controller Using a Motion Capture System to Operate a Wheelchair Prototype
by Mauro Callejas-Cuervo, Aura Ximena González-Cely and Teodiano Bastos-Filho
Sensors 2021, 21(13), 4344; https://0-doi-org.brum.beds.ac.uk/10.3390/s21134344 - 25 Jun 2021
Cited by 9 | Viewed by 2919
Abstract
This paper presents the design and implementation of an electronic system that uses head movements to operate a prototype simulating the future movements of a wheelchair. The controller collects head-movement data through a motion capture system based on MEMS sensors. The research was divided into four stages: first, instrumentation of the system in hardware and software; second, mathematical modeling using the theory of dynamic systems; third, automatic control of position, speed, and orientation at constant and variable speed; and finally, system verification using both an electronic-controller test protocol and user experience. The system includes a graphical interface through which the user interacts with it, executing all the controllers in real time. On the System Usability Scale (SUS), a score of 78 out of 100 points was obtained from the 10 users who validated the system, a rating of "very good". Users accepted the system, recommending that safety be improved by replacing the ultrasonic range modules with laser sensors to enhance obstacle detection.
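As a toy illustration of the fuzzy-control idea, the sketch below maps a head-pitch angle to a speed command using triangular membership functions and centroid defuzzification. The set boundaries and the single rule base are invented for illustration; the paper's position, speed, and orientation controllers are more elaborate.

```python
# A minimal fuzzy-rule sketch, with invented membership functions;
# not the authors' controller.
import numpy as np

def tri(x: np.ndarray, a: float, b: float, c: float) -> np.ndarray:
    """Triangular membership function with corners a < b < c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_speed(pitch_deg: float) -> float:
    """Map head pitch (degrees) to a speed command in [0, 1]."""
    speeds = np.linspace(0.0, 1.0, 101)  # candidate output speeds
    # Rule strengths from the input's membership in each pitch set.
    stop = tri(np.array([pitch_deg]), -30.0, -15.0, 5.0)[0]
    slow = tri(np.array([pitch_deg]), -5.0, 10.0, 25.0)[0]
    fast = tri(np.array([pitch_deg]), 15.0, 30.0, 45.0)[0]
    # Clip each output set by its rule strength, then aggregate (max).
    agg = np.maximum.reduce([
        np.minimum(stop, tri(speeds, -0.3, 0.0, 0.3)),
        np.minimum(slow, tri(speeds, 0.2, 0.5, 0.8)),
        np.minimum(fast, tri(speeds, 0.7, 1.0, 1.3)),
    ])
    if agg.sum() == 0.0:
        return 0.0
    return float((speeds * agg).sum() / agg.sum())  # centroid defuzzification

print(round(fuzzy_speed(20.0), 2))  # moderate forward head tilt
```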

15 pages, 5099 KiB  
Article
An Open-Source Platform for Human Pose Estimation and Tracking Using a Heterogeneous Multi-Sensor System
by Ashok Kumar Patil, Adithya Balasubramanyam, Jae Yeong Ryu, Bharatesh Chakravarthi and Young Ho Chai
Sensors 2021, 21(7), 2340; https://0-doi-org.brum.beds.ac.uk/10.3390/s21072340 - 27 Mar 2021
Cited by 15 | Viewed by 5393
Abstract
Real-time human pose estimation and tracking from multi-sensor systems is essential for many applications. Combining multiple heterogeneous sensors increases the opportunities to improve human motion tracking; with a single sensor type, e.g., inertial sensors, pose estimation accuracy degrades over longer periods due to sensor drift. This paper proposes a human motion tracking system that uses lidar and inertial sensors to estimate 3D human pose in real time. The tracking comprises human detection and the estimation of height, skeletal parameters, position, and orientation by fusing lidar and inertial sensor data; finally, the estimated data are reconstructed on a virtual 3D avatar. The proposed system was developed using open-source platform APIs. Experimental results verified the proposed position tracking accuracy in real time and were in good agreement with current multi-sensor systems.
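One simple way to exploit the complementary nature of the two modalities, in the spirit of the fusion described above, is an observer that integrates inertial data and corrects it with drift-free lidar fixes. The 1-D state and gain values below are illustrative assumptions, not the paper's fusion method.

```python
# A minimal sketch, assuming a 1-D position tracked from biased
# accelerometer data and corrected by noisy lidar fixes; not the
# paper's actual fusion algorithm.
import numpy as np

def fuse(imu_accel, lidar_pos, dt=0.01, k_p=0.1, k_v=0.05):
    """Integrate acceleration, pulling position and velocity toward lidar."""
    pos, vel = float(lidar_pos[0]), 0.0
    fused = np.empty_like(lidar_pos)
    for k in range(len(imu_accel)):
        vel += imu_accel[k] * dt       # inertial prediction
        pos += vel * dt
        err = lidar_pos[k] - pos       # drift-free correction signal
        pos += k_p * err
        vel += k_v * err               # also reins in velocity drift
        fused[k] = pos
    return fused

t = np.arange(0.0, 10.0, 0.01)
true_pos = np.sin(t)
accel = -np.sin(t) + 0.3               # true acceleration plus a sensor bias
lidar = true_pos + 0.05 * np.random.randn(t.size)
est = fuse(accel, lidar)
print(round(float(np.abs(est - true_pos).mean()), 3))  # drift stays bounded
```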

16 pages, 3597 KiB  
Article
Relation3DMOT: Exploiting Deep Affinity for 3D Multi-Object Tracking from View Aggregation
by Can Chen, Luca Zanotti Fragonara and Antonios Tsourdos
Sensors 2021, 21(6), 2113; https://0-doi-org.brum.beds.ac.uk/10.3390/s21062113 - 17 Mar 2021
Cited by 2 | Viewed by 2380
Abstract
Autonomous systems need to localize and track surrounding objects in 3D space for safe motion planning, so 3D multi-object tracking (MOT) plays a vital role in autonomous navigation. Most MOT methods use a tracking-by-detection pipeline, which comprises object detection and data association. However, many approaches detect objects in 2D RGB sequences for tracking, which is unreliable for localizing objects in 3D space. Furthermore, it remains challenging to learn discriminative features for temporally consistent detection across frames, and the affinity matrix is typically learned from independent object features without considering the feature interactions between objects detected in different frames. To address these problems, we first employ a joint feature extractor to fuse the appearance and motion features captured from 2D RGB images and 3D point clouds, and then propose a novel convolutional operation, named RelationConv, to better exploit the correlation between each pair of objects in adjacent frames and to learn a deep affinity matrix for data association. Finally, an extensive evaluation shows that the proposed model achieves state-of-the-art performance on the KITTI tracking benchmark.
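Downstream of a learned affinity matrix, data association is typically a one-to-one assignment problem; the sketch below solves it with the Hungarian algorithm from SciPy. The affinity values and the 0.5 acceptance threshold are illustrative; in the paper the matrix is learned with RelationConv rather than hand-crafted.

```python
# A minimal sketch of affinity-based data association, with an
# invented affinity matrix; not the paper's learned model.
import numpy as np
from scipy.optimize import linear_sum_assignment

# affinity[i, j]: similarity of track i (frame t-1) to detection j (frame t)
affinity = np.array([
    [0.92, 0.10, 0.05],
    [0.08, 0.85, 0.30],
    [0.03, 0.20, 0.15],
])

rows, cols = linear_sum_assignment(affinity, maximize=True)
matches = [(i, j) for i, j in zip(rows, cols) if affinity[i, j] >= 0.5]
unmatched = set(range(affinity.shape[0])) - {i for i, _ in matches}

print("matched:", matches)        # [(0, 0), (1, 1)]
print("lost tracks:", unmatched)  # {2} -> candidate for track termination
```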
