
Kinect Sensor and Its Application

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: 15 September 2024 | Viewed by 37664

Special Issue Editor


Dr. Gregorij Kurillo
Guest Editor
Department of Orthopaedic Surgery, University of California San Francisco, San Francisco, CA 94143, USA
Interests: 3D computer vision; biomechanics; human motion analysis; image processing; RGB-D cameras; virtual reality

Special Issue Information

Dear Colleagues,

The release of the Microsoft Kinect sensor in 2010 revolutionized active 3D sensing. Although originally intended for the gaming community, the Kinect early on found its place in research and commercial development. Its relatively high accuracy, ease of use, AI-enabled body and facial tracking, multi-microphone sound capture, and affordability have sparked novel applications in rehabilitation, telemedicine, surveillance, 3D scanning, and many other areas. The Kinect has thus become synonymous with real-time 3D sensing and has helped to solve research problems that 2D vision alone could not. Interaction via the Kinect created a new paradigm for human–computer interaction, especially in the area of mixed reality. To date, three generations of the Kinect camera have been released: Kinect for Xbox 360 (v1), Kinect for Xbox One (v2), and most recently the Azure Kinect DK.

This Special Issue seeks submissions of original research papers describing novel applications of Kinect sensors, focusing on their sensing properties, 3D measurements, multi-modal data fusion, point cloud segmentation, object recognition, human–computer interaction (HCI), and user experience (UX) in areas ranging from biomechanics to mixed reality. Submitted papers should present previously unpublished work that demonstrates novel research contributions relevant to the topics of Sensors.

Dr. Gregorij Kurillo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 3D measurement
  • computer vision
  • depth sensor
  • data fusion
  • human–machine interaction
  • Microsoft Kinect
  • mixed reality

Published Papers (16 papers)


Research


16 pages, 5308 KiB  
Article
Efficient Model-Based Anthropometry under Clothing Using Low-Cost Depth Sensors
by Byoung-Keon D. Park, Hayoung Jung, Sheila M. Ebert, Brian D. Corner and Matthew P. Reed
Sensors 2024, 24(5), 1350; https://doi.org/10.3390/s24051350 - 20 Feb 2024
Viewed by 556
Abstract
Measuring human body dimensions is critical for many engineering and product design domains. Nonetheless, acquiring body dimension data for populations using typical anthropometric methods poses challenges due to the time-consuming nature of manual methods. The measurement process for three-dimensional (3D) whole-body scanning can be much faster, but 3D scanning typically requires subjects to change into tight-fitting clothing, which increases time and cost and introduces privacy concerns. To address these and other issues in current anthropometry techniques, a measurement system was developed based on portable, low-cost depth cameras. Point-cloud data from the sensors are fit using a model-based method, Inscribed Fitting, which finds the most likely body shape in the statistical body shape space and provides accurate estimates of body characteristics. To evaluate the system, 144 young adults were measured manually and with two levels of military ensembles using the system. The results showed that the prediction accuracy for the clothed scans remained at a similar level to the accuracy for the minimally clad scans. This approach will enable rapid measurement of clothed populations with reduced time compared to manual and typical scan-based methods.
(This article belongs to the Special Issue Kinect Sensor and Its Application)

17 pages, 3388 KiB  
Article
Assessment of ADHD Subtypes Using Motion Tracking Recognition Based on Stroop Color–Word Tests
by Chao Li, David Delgado-Gómez, Aaron Sujar, Ping Wang, Marina Martin-Moratinos, Marcos Bella-Fernández, Antonio Eduardo Masó-Besga, Inmaculada Peñuelas-Calvo, Juan Ardoy-Cuadros, Paula Hernández-Liebo and Hilario Blasco-Fontecilla
Sensors 2024, 24(2), 323; https://doi.org/10.3390/s24020323 - 05 Jan 2024
Viewed by 852
Abstract
Attention-Deficit/Hyperactivity Disorder (ADHD) is a neurodevelopmental disorder known for its significant heterogeneity and varied symptom presentation. Classification of the subtypes as predominantly inattentive (ADHD–I), combined (ADHD–C), and hyperactive–impulsive (ADHD–H) relies primarily on clinical observations, which can be subjective. To address the need for more objective diagnostic methods, this pilot study implemented a Microsoft Kinect-based Stroop Color–Word Test (KSWCT) to investigate potential differences in executive function and motor control between subtypes in a group of children and adolescents with ADHD. A series of linear mixed models was used to analyze performance accuracy, reaction times, and extraneous movements during the tests. Our findings suggest that age plays a critical role, with older subjects showing improved KSWCT performance; however, no significant divergence in activity level between the subtypes (ADHD–I and ADHD–H/C) was established. Patients with ADHD–H/C showed tendencies toward deficits in motor planning and executive control, exhibited by shorter reaction times for incorrect responses and more difficulty suppressing erroneous responses. This study provides preliminary evidence of distinct executive characteristics among ADHD subtypes, advances our understanding of the heterogeneity of the disorder, and lays the foundation for the development of refined and objective diagnostic tools for ADHD.
(This article belongs to the Special Issue Kinect Sensor and Its Application)

28 pages, 16322 KiB  
Article
How the Processing Mode Influences Azure Kinect Body Tracking Results
by Linda Büker, Vincent Quinten, Michel Hackbarth, Sandra Hellmers, Rebecca Diekmann and Andreas Hein
Sensors 2023, 23(2), 878; https://doi.org/10.3390/s23020878 - 12 Jan 2023
Cited by 5 | Viewed by 2418
Abstract
The Azure Kinect DK is an RGB-D camera popular in research and studies with humans. For good scientific practice, it is relevant that the Azure Kinect yields consistent and reproducible results. We noticed that the yielded results were inconsistent. Therefore, we examined 100 body tracking runs per processing mode provided by the Azure Kinect Body Tracking SDK on two different computers using a prerecorded video. We compared those runs with respect to spatiotemporal progression (spatial distribution of joint positions per processing mode and run), derived parameters (bone length), and differences between the computers. We found a previously undocumented converging behavior of joint positions at the start of the body tracking. Euclidean distances of joint positions varied by clinically relevant amounts of up to 87 mm between runs for CUDA and TensorRT; CPU and DirectML showed no differences on the same computer. Additionally, we found noticeable differences between the two computers. Therefore, we recommend choosing the processing mode carefully, reporting the processing mode, and performing all analyses on the same computer to ensure reproducible results when using the Azure Kinect and its body tracking in research. Consequently, results from previous studies with the Azure Kinect should be reevaluated, and until then, their findings should be interpreted with caution.
(This article belongs to the Special Issue Kinect Sensor and Its Application)
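The run-to-run comparison described in this abstract reduces to vector arithmetic on the tracked joints. As an illustrative sketch only (not the authors' code; the array shapes, joint count, and joint indices are assumptions), per-joint Euclidean distances between two runs and a derived bone length could be computed as:

```python
import numpy as np

# Hypothetical data: two body-tracking runs of the same prerecorded video,
# each shaped (n_frames, n_joints, 3) with joint positions in millimetres.
rng = np.random.default_rng(0)
run_a = rng.normal(size=(100, 32, 3)) * 100.0
run_b = run_a + rng.normal(scale=5.0, size=run_a.shape)  # simulated run-to-run jitter

# Per-frame, per-joint Euclidean distance between the runs (mm).
dist = np.linalg.norm(run_a - run_b, axis=-1)
print("max inter-run joint distance (mm):", dist.max())

# A derived parameter such as bone length: the distance between two joint
# indices (5 and 6 here are placeholders, e.g. elbow and wrist).
bone_a = np.linalg.norm(run_a[:, 5] - run_a[:, 6], axis=-1)
bone_b = np.linalg.norm(run_b[:, 5] - run_b[:, 6], axis=-1)
print("mean bone-length difference (mm):", np.abs(bone_a - bone_b).mean())
```

Comparing such distance distributions per processing mode (CPU, CUDA, TensorRT, DirectML) is the kind of analysis the paper reports.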

22 pages, 12804 KiB  
Article
Validation of Angle Estimation Based on Body Tracking Data from RGB-D and RGB Cameras for Biomechanical Assessment
by Thiago Buarque de Gusmão Lafayette, Victor Hugo de Lima Kunst, Pedro Vanderlei de Sousa Melo, Paulo de Oliveira Guedes, João Marcelo Xavier Natário Teixeira, Cínthia Rodrigues de Vasconcelos, Veronica Teichrieb and Alana Elza Fontes da Gama
Sensors 2023, 23(1), 3; https://doi.org/10.3390/s23010003 - 20 Dec 2022
Cited by 7 | Viewed by 2822
Abstract
Motion analysis is an area with several applications in health, sports, and entertainment. The high cost of state-of-the-art equipment in the health field makes it unfeasible to apply this technique in clinical routines. In this vein, RGB-D and RGB devices with joint tracking tools are tested as portable and low-cost solutions to enable computational motion analysis. The recent release of Google MediaPipe, a joint inference tracking technique that uses conventional RGB cameras, can be considered a milestone due to its ability to estimate depth coordinates in planar images. In light of this, this work aims to evaluate the measurement of angular variation from RGB-D and RGB sensor data against the Qualisys Tracking Manager gold standard. A total of 60 recordings were performed for each upper- and lower-limb movement in two different position configurations with respect to the sensors. Google MediaPipe obtained results close to those of the Kinect V2 sensor in terms of absolute error, RMS, and correlation with the gold standard, while presenting lower dispersion values and error metrics. In comparison with equipment commonly used in physical evaluations, MediaPipe's error fell within the error range of short- and long-arm goniometers.
(This article belongs to the Special Issue Kinect Sensor and Its Application)
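The angular variation being validated here comes down to the angle at a joint formed by two body segments, computed from three tracked 3D points. A minimal sketch (illustrative only; the joint names and coordinates below are made up, and this is not the paper's pipeline):

```python
import numpy as np

def joint_angle_deg(a, b, c):
    """Angle at joint b (degrees) between segments b->a and b->c."""
    u = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    v = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_ang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

# Hypothetical shoulder-elbow-wrist positions (metres): a right angle at the elbow.
shoulder, elbow, wrist = [0.0, 0.3, 0.0], [0.0, 0.0, 0.0], [0.3, 0.0, 0.0]
print(joint_angle_deg(shoulder, elbow, wrist))  # 90.0
```

The same function applies whether the points come from an RGB-D skeleton (Kinect) or an RGB pose estimator (MediaPipe), which is what makes a direct comparison against a goniometer or marker-based gold standard possible.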

15 pages, 2851 KiB  
Article
Spatio-Temporal Calibration of Multiple Kinect Cameras Using 3D Human Pose
by Nadav Eichler, Hagit Hel-Or and Ilan Shimshoni
Sensors 2022, 22(22), 8900; https://doi.org/10.3390/s22228900 - 17 Nov 2022
Cited by 4 | Viewed by 2993
Abstract
RGB and depth cameras are extensively used for the 3D tracking of human pose and motion. Typically, these cameras calculate a set of 3D points representing the human body as a skeletal structure. The tracking capabilities of a single camera are often affected by noise and inaccuracies due to occluded body parts. Multiple-camera setups offer a solution to maximize coverage of the captured human body and to minimize occlusions. According to best practices, fusing information across multiple cameras typically requires spatio-temporal calibration. First, the cameras must synchronize their internal clocks, which is typically performed by physically connecting the cameras to each other using an external device or cable. Second, the pose of each camera relative to the other cameras must be calculated (extrinsic calibration). State-of-the-art methods use specialized calibration sessions and devices such as a checkerboard to perform calibration. In this paper, we introduce an approach to the spatio-temporal calibration of multiple cameras which is designed to run on-the-fly without specialized devices or equipment, requiring only the motion of the human body in the scene. As an example, the system is implemented and evaluated using the Microsoft Azure Kinect. The study shows that the accuracy and robustness of this approach are on par with state-of-the-art practices.
(This article belongs to the Special Issue Kinect Sensor and Its Application)
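The extrinsic part of such a calibration can be illustrated with the classic Kabsch/Procrustes solution: given the same skeleton joints expressed in two cameras' coordinate frames, recover the rigid transform between them. This is a hedged sketch on synthetic data, not the paper's method, which additionally handles temporal synchronization and runs on-the-fly:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) such that Q ~= P @ R.T + t, for (n, 3) arrays."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Synthetic "skeleton": 32 joints seen by camera A, and the same joints in
# camera B's frame, related by a known rotation about z plus a translation.
rng = np.random.default_rng(1)
P = rng.normal(size=(32, 3))
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.3])

R, t = kabsch(P, Q)
print("recovered transform matches:", np.allclose(P @ R.T + t, Q))
```

Using tracked joints as the correspondences is what removes the need for a checkerboard: the moving body itself supplies matched 3D points in every camera's frame.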

18 pages, 13035 KiB  
Article
Reflectance Measurement Method Based on Sensor Fusion of Frame-Based Hyperspectral Imager and Time-of-Flight Depth Camera
by Samuli Rahkonen, Leevi Lind, Anna-Maria Raita-Hakola, Sampsa Kiiskinen and Ilkka Pölönen
Sensors 2022, 22(22), 8668; https://doi.org/10.3390/s22228668 - 10 Nov 2022
Cited by 2 | Viewed by 2092
Abstract
Hyperspectral imaging and distance data have previously been used in aerial, forestry, agricultural, and medical imaging applications. Extracting meaningful information from a combination of different imaging modalities is difficult, as image sensor fusion requires knowing the optical properties of the sensors, selecting the right optics, and finding the sensors' mutual reference frame through calibration. In this research we demonstrate a method for fusing data from a Fabry–Perot interferometer hyperspectral camera and a Kinect V2 time-of-flight depth camera. We created an experimental application to demonstrate utilizing the depth-augmented hyperspectral data to measure emission-angle-dependent reflectance from a multi-view inferred point cloud. We determined the intrinsic and extrinsic camera parameters through calibration, used global and local registration algorithms to combine point clouds from different viewpoints, created a dense point cloud, and determined the angle-dependent reflectances from it. The method could successfully combine the 3D point cloud data and hyperspectral data from different viewpoints of a reference colorchecker board. The point cloud registrations attained a fitness of 0.29–0.36 for inlier point correspondences, and the RMSE was approx. 2, which indicates a fairly reliable registration result. The RMSE of the measured reflectances between the front view and side views of the targets varied between 0.01 and 0.05 on average, and the spectral angle varied between 1.5 and 3.2 degrees. The results suggest that changing the emission angle has a very small effect on the surface reflectance intensity and spectrum shapes, which was expected with the used colorchecker.
(This article belongs to the Special Issue Kinect Sensor and Its Application)
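The spectral-angle figure quoted above measures shape similarity between two spectra independently of overall intensity: each spectrum is treated as a vector and the angle between the vectors is reported. An illustrative sketch (the spectra below are fabricated, not the paper's data):

```python
import numpy as np

def spectral_angle_deg(s1, s2):
    """Spectral angle mapper: angle (degrees) between two spectra viewed as vectors."""
    cos_ang = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

wavelengths = np.linspace(400, 1000, 120)              # hypothetical band centres (nm)
front = np.exp(-((wavelengths - 650.0) / 120.0) ** 2)  # made-up front-view reflectance
side = 0.95 * front + 0.002                            # scaled + slightly offset side view

# Pure intensity scaling leaves the angle at zero; a shape change does not.
print("angle, front vs. scaled copy:", spectral_angle_deg(front, 0.95 * front))
print("angle, front vs. side view:", spectral_angle_deg(front, side))
```

This intensity invariance is why the paper can report small spectral angles (1.5–3.2 degrees) between views even when the measured reflectance intensity differs slightly with emission angle.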

26 pages, 3894 KiB  
Article
Assessment Tasks and Virtual Exergames for Remote Monitoring of Parkinson’s Disease: An Integrated Approach Based on Azure Kinect
by Gianluca Amprimo, Giulia Masi, Lorenzo Priano, Corrado Azzaro, Federica Galli, Giuseppe Pettiti, Alessandro Mauro and Claudia Ferraris
Sensors 2022, 22(21), 8173; https://doi.org/10.3390/s22218173 - 25 Oct 2022
Cited by 9 | Viewed by 2046
Abstract
Motor impairments are among the most relevant, evident, and disabling symptoms of Parkinson’s disease that adversely affect quality of life, resulting in limited autonomy, independence, and safety. Recent studies have demonstrated the benefits of physiotherapy and rehabilitation programs specifically targeted to the needs of Parkinsonian patients in supporting drug treatments and improving motor control and coordination. However, due to the expected increase in patients in the coming years, traditional rehabilitation pathways in healthcare facilities could become unsustainable. Consequently, new strategies are needed, in which technologies play a key role in enabling more frequent, comprehensive, and out-of-hospital follow-up. The paper proposes a vision-based solution using the new Azure Kinect DK sensor to implement an integrated approach for remote assessment, monitoring, and rehabilitation of Parkinsonian patients, exploiting non-invasive 3D tracking of body movements to objectively and automatically characterize both standard evaluative motor tasks and virtual exergames. An experimental test involving 20 parkinsonian subjects and 15 healthy controls was organized. Preliminary results show the system’s ability to quantify specific and statistically significant (p < 0.05) features of motor performance, easily monitor changes as the disease progresses over time, and at the same time permit the use of exergames in virtual reality both for training and as a support for motor condition assessment (for example, detecting an average reduction in arm swing asymmetry of about 14% after arm training). The main innovation relies precisely on the integration of evaluative and rehabilitative aspects, which could be used as a closed loop to design new protocols for remote management of patients tailored to their actual conditions.
(This article belongs to the Special Issue Kinect Sensor and Its Application)

22 pages, 29561 KiB  
Article
HoloKinect: Holographic 3D Video Conferencing
by Stephen Siemonsma and Tyler Bell
Sensors 2022, 22(21), 8118; https://doi.org/10.3390/s22218118 - 23 Oct 2022
Cited by 3 | Viewed by 2313
Abstract
Recent world events have caused a dramatic rise in the use of video conferencing solutions such as Zoom and FaceTime. Although 3D capture and display technologies are becoming common in consumer products (e.g., Apple iPhone TrueDepth sensors, Microsoft Kinect devices, and Meta Quest VR headsets), 3D telecommunication has not yet seen any appreciable adoption. Researchers have made great progress in developing advanced 3D telepresence systems, but often with burdensome hardware and network requirements. In this work, we present HoloKinect, an open-source, user-friendly, and GPU-accelerated platform for enabling live, two-way 3D video conferencing on commodity hardware and a standard broadband internet connection. A Microsoft Azure Kinect serves as the capture device and a Looking Glass Portrait multiscopically displays the final reconstructed 3D mesh for a hologram-like effect. HoloKinect packs color and depth information into a single video stream, leveraging multiwavelength depth (MWD) encoding to store depth maps in standard RGB video frames. The video stream is compressed with highly optimized and hardware-accelerated video codecs such as H.264. A search of the depth and video encoding parameter space was performed to analyze the quantitative and qualitative losses resulting from HoloKinect’s lossy compression scheme. Visual results were acceptable at all tested bitrates (3–30 Mbps), while the best results were achieved with higher video bitrates and full 4:4:4 chroma sampling. RMSE values of the recovered depth measurements were low across all settings permutations.
(This article belongs to the Special Issue Kinect Sensor and Its Application)
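The idea of packing a depth map into a standard RGB video frame can be sketched with a simple sinusoidal encoding: two channels carry a wrapped, high-precision phase, and a third carries coarse depth used to unwrap it. This is only a toy scheme in the spirit of the multiwavelength depth (MWD) encoding described above, not the HoloKinect implementation; the fringe period `P` and the channel layout are arbitrary assumptions:

```python
import numpy as np

P = 0.1  # fringe period as a fraction of the normalized depth range (assumption)

def encode(depth):
    """Pack normalized depth in [0, 1] into three float channels:
    two sinusoidal (fine) channels plus one linear (coarse) channel."""
    phase = 2.0 * np.pi * depth / P
    return np.stack([0.5 + 0.5 * np.sin(phase),
                     0.5 + 0.5 * np.cos(phase),
                     depth], axis=-1)

def decode(img):
    """Recover depth: wrapped phase from the fine channels, fringe order from the coarse one."""
    s, c, coarse = img[..., 0] * 2 - 1, img[..., 1] * 2 - 1, img[..., 2]
    wrapped = (np.arctan2(s, c) % (2.0 * np.pi)) / (2.0 * np.pi)  # fractional fringe
    order = np.round(coarse / P - wrapped)                        # integer fringe count
    return (order + wrapped) * P

depth = np.random.default_rng(2).uniform(0.0, 1.0, size=(64, 64))
err = np.abs(decode(encode(depth)) - depth)
print("max round-trip error:", err.max())
```

In a real pipeline the three channels would be quantized to 8 bits and passed through a lossy codec such as H.264; the sinusoidal channels then preserve depth precision far better than storing depth in a single quantized channel would.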

12 pages, 2925 KiB  
Communication
A Simple Method to Optimally Select Upper-Limb Joint Angle Trajectories from Two Kinect Sensors during the Twisting Task for Posture Analysis
by Pin-Ling Liu, Chien-Chi Chang, Li Li and Xu Xu
Sensors 2022, 22(19), 7662; https://doi.org/10.3390/s22197662 - 09 Oct 2022
Cited by 3 | Viewed by 1502
Abstract
A trunk-twisting posture is strongly associated with physical discomfort. Measurement of joint kinematics to assess physical exposure to injuries is important. However, using a single Kinect sensor to track the upper-limb joint angle trajectories during twisting tasks in the workplace is challenging due to sensor view occlusions. This study provides and validates a simple method to optimally select the upper-limb joint angle data from two Kinect sensors at different viewing angles during the twisting task, so the errors of trajectory estimation can be improved. Twelve healthy participants performed a rightward twisting task. The tracking errors of the upper-limb joint angle trajectories of two Kinect sensors during the twisting task were estimated based on concurrent data collected using a conventional motion tracking system. The error values were applied to generate the error trendlines of two Kinect sensors using third-order polynomial regressions. The intersections between two error trendlines were used to define the optimal data selection points for data integration. The finding indicates that integrating the outputs from two Kinect sensor datasets using the proposed method can be more robust than using a single sensor for upper-limb joint angle trajectory estimations during the twisting task.
(This article belongs to the Special Issue Kinect Sensor and Its Application)
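The selection rule described above (fit a third-order error trendline per sensor, then switch sensors at the trendlines' intersection) can be sketched as follows; the error curves here are synthetic stand-ins, not the study's data:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Synthetic per-angle tracking errors for two Kinect viewpoints over a
# rightward twist parameterized by trunk rotation angle (degrees).
angle = np.linspace(0.0, 90.0, 50)
err_front = 2.0 + 0.002 * (angle - 10.0) ** 2 + 1e-5 * angle ** 3  # front view degrades
err_side = 8.0 - 0.07 * angle + 0.0008 * angle ** 2                # side view improves

# Third-order polynomial trendlines, as in the described method.
trend_front = Polynomial.fit(angle, err_front, 3)
trend_side = Polynomial.fit(angle, err_side, 3)

# Intersections of the trendlines define the optimal switch-over points
# for choosing which sensor's joint-angle data to keep.
roots = (trend_front - trend_side).roots()
real = roots[np.isclose(roots.imag, 0.0)].real
switch = real[(real >= angle.min()) & (real <= angle.max())]
print("switch-over angles (deg):", switch)
```

Below the switch-over angle the front sensor's trajectory would be kept; above it, the side sensor's, which is the data-integration step the abstract refers to.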

16 pages, 960 KiB  
Article
Agrast-6: Abridged VGG-Based Reflected Lightweight Architecture for Binary Segmentation of Depth Images Captured by Kinect
by Karolis Ryselis, Tomas Blažauskas, Robertas Damaševičius and Rytis Maskeliūnas
Sensors 2022, 22(17), 6354; https://doi.org/10.3390/s22176354 - 24 Aug 2022
Cited by 1 | Viewed by 1628
Abstract
Binary object segmentation is a sub-area of semantic segmentation that could be used for a variety of applications. Semantic segmentation models could be applied to solve binary segmentation problems by introducing only two classes, but the models to solve this problem are more complex than actually required. This leads to very long training times, since there are usually tens of millions of parameters to learn in this category of convolutional neural networks (CNNs). This article introduces a novel abridged VGG-16 and SegNet-inspired reflected architecture adapted for binary segmentation tasks. The architecture has 27 times fewer parameters than SegNet but yields 86% segmentation cross-intersection accuracy and 93% binary accuracy. The proposed architecture is evaluated on a large dataset of depth images collected using the Kinect device, achieving an accuracy of 99.25% in human body shape segmentation and 87% in gender recognition tasks.
(This article belongs to the Special Issue Kinect Sensor and Its Application)

11 pages, 863 KiB  
Article
Automatic Personality Assessment through Movement Analysis
by David Delgado-Gómez, Antonio Eduardo Masó-Besga, David Aguado, Victor J. Rubio, Aaron Sujar and Sofia Bayona
Sensors 2022, 22(10), 3949; https://doi.org/10.3390/s22103949 - 23 May 2022
Cited by 2 | Viewed by 2415
Abstract
Obtaining accurate and objective assessments of an individual’s personality is vital in many areas, including education, medicine, sports, and management. Currently, most personality assessments are conducted using scales and questionnaires. Unfortunately, both scales and questionnaires present various drawbacks: their limitations include a lack of veracity in the answers, limits on the number of times they can be administered, and cultural biases. To solve these problems, several articles published in recent years have proposed using the movements that participants make during their evaluation as personality predictors. In this work, a multiple linear regression model was developed to assess an examinee’s personality based on their movements. Movements were captured with the low-cost Microsoft Kinect camera, which facilitates the system’s acceptance and implementation. To evaluate the performance of the proposed system, a pilot study was conducted to assess the personality traits defined by the Big-Five Personality Model. The traits that best fit the model were Extroversion and Conscientiousness. In addition, several patterns that characterize the five personality traits were identified. These results show that it is feasible to assess an individual’s personality through his or her movements and open up pathways for further research.
(This article belongs to the Special Issue Kinect Sensor and Its Application)

22 pages, 4368 KiB  
Article
3D Kinect Camera Scheme with Time-Series Deep-Learning Algorithms for Classification and Prediction of Lung Tumor Motility
by Utumporn Puangragsa, Jiraporn Setakornnukul, Pittaya Dankulchai and Pattarapong Phasukkit
Sensors 2022, 22(8), 2918; https://doi.org/10.3390/s22082918 - 11 Apr 2022
Cited by 4 | Viewed by 2258
Abstract
This paper proposes a time-series deep-learning 3D Kinect camera scheme to classify the respiratory phases with a lung tumor and predict the lung tumor displacement. Specifically, the proposed scheme is driven by two time-series deep-learning algorithmic models: the respiratory-phase classification model and the regression-based prediction model. To assess the performance of the proposed scheme, the classification and prediction models were tested with four categories of datasets: patient-based datasets with regular and irregular breathing patterns; and pseudopatient-based datasets with regular and irregular breathing patterns. In this study, ‘pseudopatients’ refer to a dynamic thorax phantom with a lung tumor programmed with varying breathing patterns and breaths per minute. The total accuracy of the respiratory-phase classification model was 100%, 100%, 100%, and 92.44% for the four dataset categories, with a corresponding mean squared error (MSE), mean absolute error (MAE), and coefficient of determination (R2) of 1.2–1.6%, 0.65–0.8%, and 0.97–0.98, respectively. The results demonstrate that the time-series deep-learning classification and regression-based prediction models can classify the respiratory phases and predict the lung tumor displacement with high accuracy. Essentially, the novelty of this research lies in the use of a low-cost 3D Kinect camera with time-series deep-learning algorithms in the medical field to efficiently classify the respiratory phase and predict the lung tumor displacement.
(This article belongs to the Special Issue Kinect Sensor and Its Application)

13 pages, 878 KiB  
Article
Kinect v2-Assisted Semi-Automated Method to Assess Upper Limb Motor Performance in Children
by Celia Francisco-Martínez, José A. Padilla-Medina, Juan Prado-Olivarez, Francisco J. Pérez-Pinal, Alejandro I. Barranco-Gutiérrez and Juan J. Martínez-Nolasco
Sensors 2022, 22(6), 2258; https://doi.org/10.3390/s22062258 - 15 Mar 2022
Cited by 9 | Viewed by 2631
Abstract
The interruption of rehabilitation activities caused by the COVID-19 lockdown has had significant negative health consequences for the population with physical disabilities. Thus, measuring the range of motion (ROM) using remotely taken photographs, which are then sent to specialists for formal assessment, has been recommended. Currently, low-cost Kinect motion capture sensors with a natural user interface are the most feasible implementations for upper limb motion analysis. An active range of motion (AROM) measuring system based on a Kinect v2 sensor for upper limb motion analysis using Fugl-Meyer Assessment (FMA) scoring is described in this paper. Two test groups of children, each having eighteen participants, were analyzed in the experimental stage, where the upper limbs’ AROM and motor performance were assessed using the FMA. Participants in the control group (mean age of 7.83 ± 2.54 years) had no cognitive impairment or upper limb musculoskeletal problems. The study test group comprised children aged 8.28 ± 2.32 years with spastic hemiparesis. A total of 30 samples of elbow flexion and 30 samples of shoulder abduction of both limbs for each participant were analyzed using the Kinect v2 sensor at 30 Hz. In both upper limbs, no significant differences (p < 0.05) in the measured angles and FMA assessments were observed between those obtained using the described Kinect v2-based system and those obtained directly using a universal goniometer. The measurement error achieved by the proposed system was less than ±1° compared to the specialist’s measurements. According to the obtained results, the developed measuring system is a good alternative and an effective tool for FMA assessment of AROM and motor performance of upper limbs, while avoiding direct contact, in both healthy children and children with spastic hemiparesis.
(This article belongs to the Special Issue Kinect Sensor and Its Application)
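The core measurement in a system like the one above is the angle at a joint formed by three tracked 3D joint positions (e.g., shoulder–elbow–wrist for elbow flexion). This is not the authors' implementation, only a minimal sketch of that computation; the function name and coordinates are illustrative.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at joint b formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist for elbow flexion."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos_t = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip guards against floating-point values just outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

# A fully extended arm gives ~180 deg at the elbow
print(joint_angle([0, 0, 0], [0, -0.3, 0], [0, -0.6, 0]))  # → 180.0
```

Per-frame angles from the 30 Hz joint stream would then be aggregated (e.g., maximum over a repetition) to obtain the AROM value compared against the goniometer.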

16 pages, 9571 KiB  
Article
Colored Point Cloud Registration by Depth Filtering
by Ouk Choi and Wonjun Hwang
Sensors 2021, 21(21), 7023; https://0-doi-org.brum.beds.ac.uk/10.3390/s21217023 - 23 Oct 2021
Cited by 5 | Viewed by 2179
Abstract
In the last stage of colored point cloud registration, depth measurement errors hinder the achievement of accurate and visually plausible alignments. Recently, an algorithm was proposed that extends the Iterative Closest Point (ICP) algorithm to refine the measured depth values instead of the pose between point clouds. However, that algorithm suffers from numerical instability, so a postprocessing step is needed to restrict erroneous output depth values. In this paper, we present a new algorithm with improved numerical stability. Unlike the previous algorithm, which relies heavily on point-to-plane distances, our algorithm constructs a cost function based on an adaptive combination of two different projected distances to prevent numerical instability. We address the problem of registering a source point cloud to the union of the source and reference point clouds. This extension allows all source points to be processed in a unified filtering framework, regardless of whether they have corresponding points in the reference point cloud, and it further improves the numerical stability of using point-to-plane distances. Experiments show that the proposed algorithm improves registration accuracy and produces high-quality alignments of colored point clouds. Full article
(This article belongs to the Special Issue Kinect Sensor and Its Application)
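The abstract's key idea of blending two distance measures in one cost function can be illustrated generically. The sketch below combines the standard point-to-point and point-to-plane ICP residuals with a weight; the paper's actual projected distances and adaptation scheme differ, and all names here are hypothetical.

```python
import numpy as np

def combined_residual(p, q, n, w):
    """Blend point-to-point and point-to-plane distances for a source
    point p and its correspondence q with unit surface normal n.
    w in [0, 1] shifts the cost between the two terms; an adaptive
    scheme would choose w per point (e.g., from local geometry)."""
    d = p - q
    point_to_point = np.linalg.norm(d)          # full Euclidean distance
    point_to_plane = abs(np.dot(d, n))          # distance projected onto n
    return w * point_to_plane + (1.0 - w) * point_to_point
```

Point-to-plane terms converge faster on smooth surfaces but become ill-conditioned when normals are noisy; mixing in a point-to-point term is one generic way to keep the minimization stable.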

13 pages, 2323 KiB  
Article
Proof-of-Concept of a Sensor-Based Evaluation Method for Better Sensitivity of Upper-Extremity Motor Function Assessment
by Seung-Hee Lee, Ye-Ji Hwang, Hwang-Jae Lee, Yun-Hee Kim, Matjaž Ogrinc, Etienne Burdet and Jong-Hyun Kim
Sensors 2021, 21(17), 5926; https://0-doi-org.brum.beds.ac.uk/10.3390/s21175926 - 03 Sep 2021
Cited by 7 | Viewed by 2132
Abstract
In rehabilitation, the Fugl–Meyer assessment (FMA) is a standard clinical instrument for assessing the upper-extremity motor function of stroke patients, but its limited sensitivity prevents it from measuring fine changes in motor function (both recovery and deterioration). This paper introduces a sensor-based automated FMA system that addresses this limitation with a continuous rating algorithm. The system consists of a depth sensor (Kinect V2) and an algorithm that rates the continuous FM scale based on fuzzy inference. Starting from a binary-logic classification method derived from the linguistic scoring guideline of the FMA, we designed fuzzy input/output variables, fuzzy rules, membership functions, and a defuzzification method for several representative FMA tests. A pilot trial with nine stroke patients tested the feasibility of the proposed approach. The continuous FM scale produced by the algorithm correlated highly with clinician-rated scores, and the results demonstrate the possibility of more sensitive upper-extremity motor function assessment. Full article
(This article belongs to the Special Issue Kinect Sensor and Its Application)
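The general mechanism described above — membership functions, rules, and defuzzification turning a discrete 0/1/2 FMA item score into a continuous value — can be sketched generically. This is not the paper's algorithm: the single input, triangular memberships, and weighted-average defuzzification below are illustrative assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_fm_score(rom_ratio):
    """Map a normalized movement ratio (0 = no movement, 1 = full) to a
    continuous FM-like item score on [0, 2] via three fuzzy rules and
    weighted-average defuzzification."""
    # Rule activations: "no/partial/full movement"
    mu = [tri(rom_ratio, -0.5, 0.0, 0.5),
          tri(rom_ratio, 0.0, 0.5, 1.0),
          tri(rom_ratio, 0.5, 1.0, 1.5)]
    outputs = [0.0, 1.0, 2.0]  # the ordinal FMA item scores
    total = sum(mu)
    return sum(m * o for m, o in zip(mu, outputs)) / total if total else 0.0

print(fuzzy_fm_score(0.75))  # between "partial" (1) and "full" (2) → 1.5
```

Because adjacent memberships overlap, scores vary smoothly between the ordinal levels — the source of the extra sensitivity relative to discrete 0/1/2 rating.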

Other

37 pages, 2832 KiB  
Systematic Review
The Reliability of the Microsoft Kinect and Ambulatory Sensor-Based Motion Tracking Devices to Measure Shoulder Range-of-Motion: A Systematic Review and Meta-Analysis
by Peter Beshara, David B. Anderson, Matthew Pelletier and William R. Walsh
Sensors 2021, 21(24), 8186; https://0-doi-org.brum.beds.ac.uk/10.3390/s21248186 - 08 Dec 2021
Cited by 12 | Viewed by 3888
Abstract
Advancements in motion sensing technology can potentially allow clinicians to make more accurate range-of-motion (ROM) measurements and more informed decisions regarding patient management. The aim of this study was to systematically review and appraise the literature on the reliability of the Kinect, inertial sensors, smartphone applications, and digital inclinometers/goniometers for measuring shoulder ROM. Eleven databases were screened (MEDLINE, EMBASE, EMCARE, CINAHL, SPORTSDiscus, Compendex, IEEE Xplore, Web of Science, ProQuest Science and Technology, Scopus, and PubMed). The methodological quality of the studies was assessed using the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN) checklist. Reliability was assessed using intra-class correlation coefficients (ICCs) and the criteria of Swinkels et al. (2005). Thirty-two studies were included. A total of 24 studies scored "adequate" and 2 scored "very good" for the reliability standards. Only one study scored "very good", and just over half of the studies (18/32) scored "adequate", for the measurement error standards. Good intra-rater reliability (ICC > 0.85) and inter-rater reliability (ICC > 0.80) were demonstrated with the Kinect, smartphone applications, and digital inclinometers. Overall, the Kinect and ambulatory sensor-based human motion tracking devices demonstrate moderate-to-good intra- and inter-rater reliability for measuring shoulder ROM. Future reliability studies should focus on improved study designs with larger sample sizes and the recommended time intervals between repeated measurements. Full article
(This article belongs to the Special Issue Kinect Sensor and Its Application)
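The ICC thresholds quoted above depend on which ICC form a study computes. As one common example — not necessarily the form used by every reviewed study — ICC(2,1) (two-way random effects, absolute agreement, single rater) can be computed from the two-way ANOVA mean squares of an n-subjects × k-raters score matrix:

```python
import numpy as np

def icc2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    x is an (n subjects x k raters) array of scores."""
    x = np.asarray(x, float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * np.sum((x.mean(axis=1) - grand) ** 2)  # between subjects
    ss_cols = n * np.sum((x.mean(axis=0) - grand) ** 2)  # between raters
    ss_err = np.sum((x - grand) ** 2) - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

print(icc2_1([[1, 1], [2, 2], [3, 3]]))  # perfect agreement → 1.0
```

Because ICC(2,1) measures absolute agreement, a constant offset between raters lowers it even when their rankings agree perfectly — one reason reported intra- and inter-rater ICCs for the same device can differ.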
