Medical Robotics

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (20 June 2022) | Viewed by 41104

Special Issue Editors


Dr. Tamás Haidegger
Guest Editor
Antal Bejczy Center for Intelligent Robotics, Obuda University, Budapest, Hungary
Interests: surgical robotics; medical robot autonomy; robot safety and standardization

Dr. Axel Krieger
Guest Editor
Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD, USA
Interests: novel tools; image guidance; robot control techniques for medical robotics

Special Issue Information

Dear Colleagues,

Medical and surgical robotics have a stunning 40-year history, and now, as innovations in the field rapidly accelerate, there has been an unprecedented rise in applications and systems. Surgical robots are entering new clinical domains that require highly sophisticated manipulation skills and decision making, while rehabilitation and assistive robots are expanding into ever more areas of care. The newest generation of medical robots not only functions as an agile extension of the human eyes and hands, but also becomes a skillful and smart partner to its human counterpart. The goal of this Special Issue is to engage the medical/surgical robotics community more broadly, present the latest developments, and define the roadmap for future enhancements to these platforms. Beyond intelligent sensor technologies, smart mechatronics, and data science tools, we also welcome articles on technical developments targeting ergonomics and usability, as well as reports striving to establish a safe and reliable environment for Medical Robots 4.0. Automated medical treatment promises a better quality of life for many, and excellence in science, engineering, and design should go hand in hand in addressing technical, safety, and performance-related questions. Your submissions are warmly welcomed.

Dr. Tamás Haidegger
Dr. Axel Krieger
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Intelligent medical robotic systems and equipment
  • Medical imaging and image-based robotic intervention
  • Haptics and physical interaction in medical robotics
  • Autonomous sensing, manipulation, control, and optimization for medical robots
  • Computationally augmented and virtual environments for medical robotics
  • Intuitive and advanced medical instrumentation
  • Computer-integrated interventional systems
  • Biomechanical, bio-inspired, and image-guided surgical systems
  • Surgical data science and computer-aided medical procedures

Published Papers (11 papers)


Research


15 pages, 5425 KiB  
Article
Sensor-Based Automated Detection of Electrosurgical Cautery States
by Josh Ehrlich, Amoon Jamzad, Mark Asselin, Jessica Robin Rodgers, Martin Kaufmann, Tamas Haidegger, John Rudan, Parvin Mousavi, Gabor Fichtinger and Tamas Ungi
Sensors 2022, 22(15), 5808; https://doi.org/10.3390/s22155808 - 03 Aug 2022
Cited by 1 | Viewed by 3594
Abstract
In computer-assisted surgery, it is typically required to detect when the tool comes into contact with the patient. In activated electrosurgery, this is known as the energy event. By continuously tracking the electrosurgical tools’ location using a navigation system, energy events can help determine the locations of sensor-classified tissues. Our objective was to detect the energy event and determine the settings of electrosurgical cautery—robustly and automatically, based on sensor data. This study aims to demonstrate the feasibility of using the cautery state to detect surgical incisions without disrupting the surgical workflow. We detected current changes in the wires of the cautery device and grounding pad using non-invasive current sensors and an oscilloscope. Open-source software was implemented to apply machine learning to the sensor data to detect energy events and cautery settings. Our methods classified each cautery state with an average accuracy of 95.56% across different tissue types and energy level parameters altered by surgeons during an operation. Our results demonstrate the feasibility of automatically identifying energy events during surgical incisions, which could be an important safety feature in robotic and computer-integrated surgery. This study provides a key step towards locating tissue classifications during breast cancer operations and reducing the rate of positive margins. Full article
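To make the approach above concrete, the following is a minimal, purely illustrative sketch of such a pipeline — windowing a current-sensor trace, extracting simple features, and training an off-the-shelf classifier. Feature choices, labels, and parameters are assumptions for illustration, not the authors' open-source implementation.

```python
# Minimal illustrative sketch: classify cautery states ("idle", "cut", "coag")
# from windowed current-sensor samples. Feature choices and labels are
# hypothetical; the paper's open-source pipeline may differ substantially.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(signal, fs=10_000, win_s=0.05):
    """Split a 1-D current trace into windows and compute simple features."""
    win = int(fs * win_s)
    n = len(signal) // win
    feats = []
    for i in range(n):
        w = signal[i * win:(i + 1) * win]
        spectrum = np.abs(np.fft.rfft(w))
        feats.append([
            np.sqrt(np.mean(w ** 2)),        # RMS amplitude
            np.max(np.abs(w)),               # peak amplitude
            np.argmax(spectrum) * fs / win,  # dominant frequency (Hz)
        ])
    return np.asarray(feats)

# X: windows x features, y: per-window state labels (0 = idle, 1 = cut, 2 = coag)
rng = np.random.default_rng(0)
X = window_features(rng.standard_normal(10_000 * 5))
y = rng.integers(0, 3, size=len(X))          # placeholder labels for the sketch

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # per-window state accuracy
```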

13 pages, 2124 KiB  
Article
Bridging 3D Slicer and ROS2 for Image-Guided Robotic Interventions
by Laura Connolly, Anton Deguet, Simon Leonard, Junichi Tokuda, Tamas Ungi, Axel Krieger, Peter Kazanzides, Parvin Mousavi, Gabor Fichtinger and Russell H. Taylor
Sensors 2022, 22(14), 5336; https://doi.org/10.3390/s22145336 - 17 Jul 2022
Cited by 3 | Viewed by 3654
Abstract
Developing image-guided robotic systems requires access to flexible, open-source software. For image guidance, the open-source medical imaging platform 3D Slicer is one of the most adopted tools that can be used for research and prototyping. Similarly, for robotics, the open-source middleware suite robot operating system (ROS) is the standard development framework. In the past, there have been several “ad hoc” attempts made to bridge both tools; however, they are all reliant on middleware and custom interfaces. Additionally, none of these attempts have been successful in bridging access to the full suite of tools provided by ROS or 3D Slicer. Therefore, in this paper, we present the SlicerROS2 module, which was designed for the direct use of ROS2 packages and libraries within 3D Slicer. The module was developed to enable real-time visualization of robots, accommodate different robot configurations, and facilitate data transfer in both directions (between ROS and Slicer). We demonstrate the system on multiple robots with different configurations, evaluate the system performance and discuss an image-guided robotic intervention that can be prototyped with this module. This module can serve as a starting point for clinical system development that reduces the need for custom interfaces and time-intensive platform setup. Full article
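For context, the snippet below is a plain ROS 2 (rclpy) sketch of the kind of data exchange such a bridge performs — subscribing to a robot's joint states so they can drive a visualization. It deliberately uses only standard ROS 2 calls and is not the SlicerROS2 module's API.

```python
# Minimal rclpy sketch of the kind of data exchange such a bridge performs:
# subscribe to a robot's joint states on the ROS 2 side so they can drive a
# visualization. This is plain ROS 2 code, not the SlicerROS2 module's API.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState

class JointStateListener(Node):
    def __init__(self):
        super().__init__('slicer_bridge_sketch')
        self.create_subscription(JointState, '/joint_states', self.on_joints, 10)

    def on_joints(self, msg: JointState):
        # In a real bridge, these values would update the robot model
        # rendered in the 3D Slicer scene.
        self.get_logger().info(f'joints: {dict(zip(msg.name, msg.position))}')

def main():
    rclpy.init()
    rclpy.spin(JointStateListener())

if __name__ == '__main__':
    main()
```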

16 pages, 3001 KiB  
Article
Gauze Detection and Segmentation in Minimally Invasive Surgery Video Using Convolutional Neural Networks
by Guillermo Sánchez-Brizuela, Francisco-Javier Santos-Criado, Daniel Sanz-Gobernado, Eusebio de la Fuente-López, Juan-Carlos Fraile, Javier Pérez-Turiel and Ana Cisnal
Sensors 2022, 22(14), 5180; https://doi.org/10.3390/s22145180 - 11 Jul 2022
Cited by 6 | Viewed by 2198
Abstract
Medical instrument detection in laparoscopic video has been carried out to increase the autonomy of surgical robots, evaluate skills or index recordings. However, it has not been extended to surgical gauzes. Gauzes can provide valuable information for numerous tasks in the operating room, but the lack of an annotated dataset has hampered research on them. In this article, we present a segmentation dataset with 4003 hand-labelled frames from laparoscopic video. To demonstrate the dataset's potential, we analyzed several baselines: detection using YOLOv3, coarse segmentation, and segmentation with a U-Net. Our results show that YOLOv3 can be executed in real time but provides a modest recall. Coarse segmentation presents satisfactory results but lacks inference speed. Finally, the U-Net baseline achieves a good speed-quality compromise, running above 30 FPS while obtaining an IoU of 0.85. The accuracy reached by the U-Net and its execution speed demonstrate that precise and real-time gauze segmentation can be achieved by training convolutional neural networks on the proposed dataset. Full article
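As a side note, the Intersection-over-Union figure quoted above is straightforward to compute from binary masks; a minimal sketch with synthetic masks:

```python
# Illustrative computation of the Intersection-over-Union (IoU) metric used to
# evaluate the gauze segmentation baselines (masks here are synthetic examples).
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """IoU between two binary masks of the same shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float((intersection + eps) / (union + eps))

# Toy example: two overlapping rectangular "gauze" masks on a 512x512 frame.
pred = np.zeros((512, 512), dtype=bool);   pred[100:300, 100:300] = True
target = np.zeros((512, 512), dtype=bool); target[150:350, 150:350] = True
print(f"IoU = {iou(pred, target):.3f}")
```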

20 pages, 3041 KiB  
Article
s-CAM: An Untethered Insertable Laparoscopic Surgical Camera Robot with Non-Contact Actuation
by Ning Li, Hui Liu, Reza Yazdanpanah Abdolmalaki, Gregory J. Mancini and Jindong Tan
Sensors 2022, 22(9), 3405; https://doi.org/10.3390/s22093405 - 29 Apr 2022
Cited by 2 | Viewed by 3467
Abstract
Fully insertable robotic imaging devices represent a promising future for minimally invasive laparoscopic vision. Emerging research efforts in this field have resulted in several proof-of-concept prototypes. One common drawback of these designs derives from their clumsy tethering wires, which not only cause operational interference but also reduce camera mobility. In this paper, a tetherless insertable surgical camera (s-CAM) robot with non-contact transabdominal actuation is presented for single-incision laparoscopic vision. Wireless video transmission and control communication using onboard power help eliminate cumbersome tethering wires. Furthermore, magnet-based camera actuation removes the intrinsic physical constraints of mechanical driving mechanisms, thereby improving camera mobility and reducing operational interference. In addition, a custom Bluetooth Low Energy (BLE) application profile and a real-time operating system (RTOS) based multitask programming framework are also proposed to facilitate embedded software design for insertable medical devices. Initial ex vivo test results of the s-CAM design have demonstrated the technical feasibility of a tetherless insertable laparoscopic camera. Effective imaging is confirmed at illumination as low as 500 lx. Wireless laparoscopic vision is accessible within a distance of more than 10 m. Transabdominal BLE communication is stable at over −52 dBm and shows its potential for wireless control of insertable medical devices. RTOS-based software event response is bounded within 1 ms while CPU usage stays at 3–5%. The device is able to work for 50 min on its onboard power. Regarding mobility, the robot can translate against the interior abdominal wall to reach all abdominal quadrants, tilt between −180° and +180°, and pan in the range of 0–360°. The s-CAM has brought robotic laparoscopic imaging one step further toward less invasiveness and more dexterity. Full article

20 pages, 6663 KiB  
Article
Performance and Capability Assessment in Surgical Subtask Automation
by Tamás D. Nagy and Tamás Haidegger
Sensors 2022, 22(7), 2501; https://doi.org/10.3390/s22072501 - 24 Mar 2022
Cited by 17 | Viewed by 2716
Abstract
Robot-Assisted Minimally Invasive Surgery (RAMIS) has reshaped standard clinical practice during the past two decades. Many believe that the next big step in the advancement of RAMIS will be partial autonomy, which may reduce the fatigue and the cognitive load on the surgeon by performing the monotonous, time-consuming subtasks of the surgical procedure autonomously. Although serious research efforts are devoted to this area worldwide, standard evaluation methods, metrics, and benchmarking techniques have yet to be established. This article aims to fill the void in the research domain of surgical subtask automation by proposing standard methodologies for performance evaluation. For that purpose, a novel characterization model is presented for surgical automation. The current metrics for performance evaluation and comparison are overviewed and analyzed, and a workflow model is presented that can help researchers identify and apply their choice of metrics. Existing systems and setups that serve or could serve as benchmarks are also introduced, and the need for standard benchmarks in the field is articulated. Finally, the matter of Human–Machine Interface (HMI) quality, robustness, and the related legal and ethical issues are presented. Full article
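As an illustration of the kind of quantitative metrics such evaluations rely on, the sketch below computes a few common time-and-motion measures (completion time, path length, economy of motion) from a synthetic tool-tip trajectory; the metric names follow common usage in the skill-assessment literature rather than the article's specific model.

```python
# Sketch of common time-and-motion metrics for a surgical subtask, computed
# from a tool-tip trajectory sampled at a fixed rate. The trajectory here is
# synthetic; metric names follow common usage in the skill-assessment literature.
import numpy as np

def motion_metrics(positions: np.ndarray, dt: float) -> dict:
    """positions: (N, 3) tool-tip samples in metres, dt: sample period in s."""
    steps = np.diff(positions, axis=0)                 # per-sample displacement
    path_length = float(np.linalg.norm(steps, axis=1).sum())
    completion_time = (len(positions) - 1) * dt
    straight_line = float(np.linalg.norm(positions[-1] - positions[0]))
    return {
        "completion_time_s": completion_time,
        "path_length_m": path_length,
        # Economy of motion: how close the path is to the straight-line distance.
        "economy_of_motion": straight_line / path_length if path_length else 0.0,
    }

t = np.linspace(0, 2 * np.pi, 200)
trajectory = np.c_[0.05 * np.cos(t), 0.05 * np.sin(t), 0.001 * t]  # synthetic arc
print(motion_metrics(trajectory, dt=0.01))
```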

13 pages, 712 KiB  
Article
A Case Study of Upper Limb Robotic-Assisted Therapy Using the Track-Hold Device
by Marco Righi, Massimo Magrini, Cristina Dolciotti and Davide Moroni
Sensors 2022, 22(3), 1009; https://doi.org/10.3390/s22031009 - 28 Jan 2022
Cited by 3 | Viewed by 1648
Abstract
The Track-Hold System (THS) project, developed in a healthcare facility and therefore in a controlled and protected healthcare environment, contributes to the more general and broad context of Robotic-Assisted Therapy (RAT). RAT represents an advanced and innovative rehabilitation method, both motor and cognitive, and uses active, passive, and facilitating robotic devices. RAT devices can be equipped with sensors to detect and track voluntary and involuntary movements. They can work in synergy with multimedia protocols developed ad hoc to achieve the highest possible level of functional re-education. The THS is based on a passive robotic arm capable of recording and facilitating the movements of the upper limbs. An operational interface completes the device for its use in the clinical setting. In the form of a case study, the researchers conducted the experiment in the former Tabarracci hospital (Viareggio, Italy). The case study develops a motor and cognitive rehabilitation protocol. The chosen subjects suffered from post-stroke outcomes affecting the right upper limb, including strength deficits, tremors, incoordination, and motor apraxia. During the first stage of the enrolment, the researchers worked with seven patients. The researchers completed the pilot with four patients because three of them had a stroke recurrence. The collaboration with the remaining four patients permitted the generation of an enlarged case report to collect preliminary data. The preliminary clinical results of the Track-Hold System Project demonstrated good compliance by patients with robotic-assisted rehabilitation; in particular, patients underwent a gradual path of functional recovery of the upper limb using the implemented interface. Full article

21 pages, 961 KiB  
Article
Handlebar Robotic System for Bimanual Motor Control and Learning Research
by Lucas R. L. Cardoso, Leonardo M. Pedro and Arturo Forner-Cordero
Sensors 2021, 21(18), 5991; https://doi.org/10.3390/s21185991 - 07 Sep 2021
Cited by 1 | Viewed by 2043
Abstract
Robotic devices can be used for motor control and learning research. In this work, we present the construction, modeling and experimental validation of a bimanual robotic device. We tested some hypotheses that may help to better understand the motor learning processes involved in the interlimb coordination function. The system emulates a bicycle handlebar with rotational motion, thus requiring bilateral upper limb control and a coordinated sequence of joint sub-movements. The robotic handlebar is compact and portable and can register, at a fast rate, position and forces independently for each arm, including prehension forces. An impedance control system was implemented to promote a safer environment for human interaction, and the system can generate force fields suitable for implementing motor learning paradigms. The novelty of the system is the decoupling of prehension and manipulation forces of each hand, thus paving the way for the investigation of hand dominance function in a bimanual task. Experiments were conducted with ten healthy subjects; kinematic and dynamic variables were measured during a rotational set of movements. Statistical analyses showed that movement velocity decreased with practice along with an increase in reaction time, suggesting an increase in task planning time. Prehension force decreased with practice. However, an unexpected result was that the dominant hand did not lead the bimanual task, but helped to correct the movement, suggesting different roles for each hand during a cooperative bimanual task. Full article
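Impedance control of the kind mentioned above is commonly realized as a virtual spring–damper pulling the handle toward a reference; the minimal sketch below simulates such a law with assumed gains and inertia, and is not the authors' controller.

```python
# Minimal impedance-control sketch: the commanded torque is that of a virtual
# spring-damper pulling the handlebar toward a reference angle. Gains and the
# simulated plant are illustrative assumptions, not the authors' controller.
K, B = 2.0, 0.1        # virtual stiffness (N*m/rad) and damping (N*m*s/rad)
I = 0.05               # handlebar inertia (kg*m^2), assumed
dt, theta_ref = 0.001, 0.0

theta, omega = 0.5, 0.0          # initial angle (rad) and velocity (rad/s)
for _ in range(2000):            # 2 s of simulation
    tau = K * (theta_ref - theta) - B * omega   # impedance (spring-damper) torque
    omega += (tau / I) * dt                     # integrate handlebar dynamics
    theta += omega * dt
print(f"angle after 2 s: {theta:.4f} rad")
```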

24 pages, 3023 KiB  
Article
Endoscopic Image-Based Skill Assessment in Robot-Assisted Minimally Invasive Surgery
by Gábor Lajkó, Renáta Nagyné Elek and Tamás Haidegger
Sensors 2021, 21(16), 5412; https://doi.org/10.3390/s21165412 - 10 Aug 2021
Cited by 15 | Viewed by 3232
Abstract
Objective skill assessment-based personal performance feedback is a vital part of surgical training. Either kinematic—acquired through surgical robotic systems, mounted sensors on tooltips or wearable sensors—or visual input data can be employed to perform objective algorithm-driven skill assessment. Kinematic data have been successfully linked with the expertise of surgeons performing Robot-Assisted Minimally Invasive Surgery (RAMIS) procedures, but for traditional, manual Minimally Invasive Surgery (MIS), they are not readily available as a method. 3D visual features-based evaluation methods tend to outperform 2D methods, but their utility is limited and not suited to MIS training, therefore our proposed solution relies on 2D features. The application of additional sensors potentially enhances the performance of either approach. This paper introduces a general 2D image-based solution that enables the creation and application of surgical skill assessment in any training environment. The 2D features were processed using the feature extraction techniques of a previously published benchmark to assess the attainable accuracy. We relied on the JHU–ISI Gesture and Skill Assessment Working Set dataset—co-developed by the Johns Hopkins University and Intuitive Surgical Inc. Using this well-established set gives us the opportunity to comparatively evaluate different feature extraction techniques. The algorithm reached up to 95.74% accuracy in individual trials. The highest mean accuracy—averaged over five cross-validation trials—for the surgical subtask of Knot-Tying was 83.54%, for Needle-Passing 84.23% and for Suturing 81.58%. The proposed method measured well against the state of the art in 2D visual-based skill assessment, with more than 80% accuracy for all three surgical subtasks available in JIGSAWS (Knot-Tying, Suturing and Needle-Passing). By introducing new visual features—such as image-based orientation and image-based collision detection—or, from the evaluation side, utilising other Support Vector Machine kernel methods, tuning the hyperparameters or using other classification methods (e.g., the boosted trees algorithm) instead, classification accuracy can be further improved. We showed the potential use of optical flow as an input for RAMIS skill assessment, highlighting the maximum accuracy achievable with these data by evaluating it with an established skill assessment benchmark, by evaluating its methods independently. The highest performing method, the Residual Neural Network, reached means of 81.89%, 84.23% and 83.54% accuracy for the skills of Suturing, Needle-Passing and Knot-Tying, respectively. Full article
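To illustrate the optical-flow idea in the abstract, the sketch below turns short grayscale clips into simple flow statistics and feeds them to an SVM. Frames and labels are synthetic, and the feature set and classifier settings are illustrative assumptions rather than the benchmark used in the paper.

```python
# Sketch of turning endoscopic video into simple optical-flow statistics that a
# classifier could map to a skill level. Frames and labels are synthetic; the
# feature set and SVM settings are illustrative, not the paper's benchmark.
import cv2
import numpy as np
from sklearn.svm import SVC

def flow_features(frames):
    """frames: list of grayscale images; returns per-clip flow statistics."""
    mags = []
    for prev, curr in zip(frames, frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).mean())   # mean motion magnitude
    mags = np.asarray(mags)
    return np.array([mags.mean(), mags.std(), mags.max()])

rng = np.random.default_rng(0)
clips = [[rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
         for _ in range(20)]
X = np.stack([flow_features(c) for c in clips])
y = rng.integers(0, 2, size=len(clips))      # placeholder novice/expert labels
print(SVC(kernel="rbf").fit(X, y).score(X, y))
```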

22 pages, 5039 KiB  
Article
Local Style Preservation in Improved GAN-Driven Synthetic Image Generation for Endoscopic Tool Segmentation
by Yun-Hsuan Su, Wenfan Jiang, Digesh Chitrakar, Kevin Huang, Haonan Peng and Blake Hannaford
Sensors 2021, 21(15), 5163; https://doi.org/10.3390/s21155163 - 30 Jul 2021
Cited by 12 | Viewed by 3757
Abstract
Accurate semantic image segmentation from medical imaging can enable intelligent vision-based assistance in robot-assisted minimally invasive surgery. The human body and surgical procedures are highly dynamic. While machine-vision presents a promising approach, sufficiently large training image sets for robust performance are either costly or unavailable. This work examines three novel generative adversarial network (GAN) methods of providing usable synthetic tool images using only surgical background images and a few real tool images. The best of these three novel approaches generates realistic tool textures while preserving local background content by incorporating both a style preservation and a content loss component into the proposed multi-level loss function. The approach is quantitatively evaluated, and results suggest that the synthetically generated training tool images enhance UNet tool segmentation performance. More specifically, with a random set of 100 cadaver and live endoscopic images from the University of Washington Sinus Dataset, the UNet trained with synthetically generated images using the presented method resulted in 35.7% and 30.6% improvement over using purely real images in mean Dice coefficient and Intersection over Union scores, respectively. This study is promising towards the use of more widely available and routine screening endoscopy to preoperatively generate synthetic training tool images for intraoperative UNet tool segmentation. Full article
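The pairing of a content term with a style-preservation term is reminiscent of neural style transfer; the generic PyTorch sketch below shows how such terms are commonly formed from feature maps (this is an illustration of the two ingredients, not the paper's multi-level loss).

```python
# Generic PyTorch sketch of a content loss plus a Gram-matrix style loss over
# feature maps, the two ingredients the paper combines (with GAN terms) in its
# multi-level objective. Weights and feature maps are assumptions.
import torch
import torch.nn.functional as F

def gram_matrix(feat: torch.Tensor) -> torch.Tensor:
    """feat: (B, C, H, W) feature maps -> (B, C, C) Gram matrices."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_content_loss(gen_feat, real_tool_feat, background_feat,
                       w_style=1.0, w_content=1.0):
    style = F.mse_loss(gram_matrix(gen_feat), gram_matrix(real_tool_feat))
    content = F.mse_loss(gen_feat, background_feat)   # preserve local background
    return w_style * style + w_content * content

# Toy feature maps standing in for an encoder's activations.
g = torch.randn(2, 64, 32, 32, requires_grad=True)
loss = style_content_loss(g, torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
loss.backward()
print(float(loss))
```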

Review


23 pages, 1000 KiB  
Review
Non-Technical Skill Assessment and Mental Load Evaluation in Robot-Assisted Minimally Invasive Surgery
by Renáta Nagyné Elek and Tamás Haidegger
Sensors 2021, 21(8), 2666; https://doi.org/10.3390/s21082666 - 10 Apr 2021
Cited by 23 | Viewed by 7655
Abstract
BACKGROUND: Sensor technologies and data collection practices are changing and improving quality metrics across various domains. Surgical skill assessment in Robot-Assisted Minimally Invasive Surgery (RAMIS) is essential for training and quality assurance. The mental workload on the surgeon (such as time criticality, task complexity, distractions) and non-technical surgical skills (including situational awareness, decision making, stress resilience, communication, leadership) may directly influence the clinical outcome of the surgery. METHODS: A literature search in PubMed, Scopus and PsycNet databases was conducted for relevant scientific publications. The standard PRISMA method was followed to filter the search results, including non-technical skill assessment and mental/cognitive load and workload estimation in RAMIS. Publications related to traditional manual Minimally Invasive Surgery were excluded, and also the usability studies on the surgical tools were not assessed. RESULTS: 50 relevant publications were identified for non-technical skill assessment and mental load and workload estimation in the domain of RAMIS. The identified assessment techniques ranged from self-rating questionnaires and expert ratings to autonomous techniques, citing their most important benefits and disadvantages. CONCLUSIONS: Despite the systematic research, only a limited number of articles was found, indicating that non-technical skill and mental load assessment in RAMIS is not a well-studied area. Workload assessment and soft skill measurement do not constitute part of the regular clinical training and practice yet. Meanwhile, the importance of the research domain is clear based on the publicly available surgical error statistics. Questionnaires and expert-rating techniques are widely employed in traditional surgical skill assessment; nevertheless, recent technological development in sensors and Internet of Things-type devices show that skill assessment approaches in RAMIS can be much more profound employing automated solutions. Measurements and especially big data type analysis may introduce more objectivity and transparency to this critical domain as well. SIGNIFICANCE: Non-technical skill assessment and mental load evaluation in Robot-Assisted Minimally Invasive Surgery is not a well-studied area yet; while the importance of this domain from the clinical outcome’s point of view is clearly indicated by the available surgical error statistics. Full article

Other


13 pages, 629 KiB  
Systematic Review
Effectiveness of Video Games as Physical Treatment in Patients with Cystic Fibrosis: Systematic Review
by Remedios López-Liria, Daniel Checa-Mayordomo, Francisco Antonio Vega-Ramírez, Amelia Victoria García-Luengo, María Ángeles Valverde-Martínez and Patricia Rocamora-Pérez
Sensors 2022, 22(5), 1902; https://doi.org/10.3390/s22051902 - 28 Feb 2022
Cited by 3 | Viewed by 3038
Abstract
Physical training at home by making individuals play active video games is a new therapeutic strategy to improve the condition of patients with cystic fibrosis (CF). We reviewed studies on the use of video games and their benefits in the treatment of CF. We conducted a systematic review with data from six databases (PubMed, Medline, Scopus, Web of Science, PEDro, and Cochrane library plus) since 2010, according to PRISMA standards. The descriptors were: “Cystic Fibrosis”, “Video Game”, “Gaming Console”, “Pulmonary Rehabilitation”, “Physiotherapy”, and “Physical Therapy”. Nine articles with 320 participants met the inclusion criteria and the study objective. Patients who played active video games showed a high intensity of exercise and higher ventilatory and aerobic capacity compared to the values of these parameters in tests such as the cardiopulmonary stress test or the six-minute walk test. Adequate values of metabolic demand in these patients were recorded after playing certain video games. A high level of treatment adherence and satisfaction was observed in both children and adults. Although the quality of the included studies was moderate, the evidence to confirm these results was insufficient. More robust studies are needed, including those on evaluation and health economics, to determine the effectiveness of the treatment. Full article
