
Augmented Reality Head-Mounted Displays and Smart Sensors for Image-Guided Surgery and Surgical Simulation

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Biomedical Sensors".

Deadline for manuscript submissions: closed (20 September 2023) | Viewed by 12354

Special Issue Editor


Dr. Fabrizio Cutolo
Guest Editor
Department of Information Engineering, University of Pisa, 56122 Pisa, Italy
Interests: augmented reality; computer vision; surgical navigation; surgical simulation; optical tracking; electromagnetic tracking; multimodal tracking; VR/AR/MR technology in medicine; visual perception; human-machine interfaces

Special Issue Information

Dear Colleagues,

The goal of visual augmented reality (AR) is to seamlessly enrich the visual perception of the physical world with computer-generated elements that appear to spatially coexist with it. In particular, AR technology has proven to be a key asset in the development of new image-guided surgery paradigms aimed at improving the accuracy, efficiency, and reproducibility of the surgical act. AR allows the ubiquitous enrichment of the surgical scene with contextually blended virtual navigation aids (i.e., in situ visualization). In this context, AR head-mounted displays (HMDs) are deemed the most ergonomic and efficient output medium for guiding complex manipulative procedures, owing to their ability to retain the surgeon’s natural egocentric perception of the augmented workspace.

To ensure locational coherence between the real and the virtual elements, the joint exploitation of smart sensors and computer vision solutions for anatomy and/or tool tracking based on artificial intelligence (AI) is spurring the development of more efficient and less invasive surgical navigation solutions. Measured visual or multimodal data can be processed automatically, in a data-driven way, to significantly reduce the human factor and the hardware dependency of anatomy and tool tracking. In addition, new “smart” algorithms can also help interpret the surgical scene, thus guiding the surgical workflow more intuitively.

The purpose of this Special Issue is to connect researchers in the fields of wearable AR and AI-driven image-guided surgery, allowing them to share their ideas and discuss recent advances in these two fields of research.

Dr. Fabrizio Cutolo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • augmented reality head-mounted displays for surgical navigation
  • augmented reality head-mounted displays for surgical simulation
  • computer vision solutions for image-guided surgery
  • markerless tracking of patient anatomy or surgical tools
  • vision and multimodal understanding for surgical assistance
  • wearable-friendly visual or multimodal human–computer interaction
  • deep learning for computer vision and multimodal understanding

Published Papers (6 papers)


Research

15 pages, 411 KiB  
Article
On-Device Execution of Deep Learning Models on HoloLens2 for Real-Time Augmented Reality Medical Applications
by Silvia Zaccardi, Taylor Frantz, David Beckwée, Eva Swinnen and Bart Jansen
Sensors 2023, 23(21), 8698; https://doi.org/10.3390/s23218698 - 25 Oct 2023
Viewed by 1306
Abstract
The integration of Deep Learning (DL) models with the HoloLens2 Augmented Reality (AR) headset has enormous potential for real-time AR medical applications. Currently, most applications execute the models on an external server that communicates with the headset via Wi-Fi. This client-server architecture introduces undesirable delays and lacks reliability for real-time applications. However, due to HoloLens2’s limited computation capabilities, running a DL model directly on the device and achieving real-time performance is not trivial. Therefore, this study has two primary objectives: (i) to systematically evaluate two popular frameworks for executing DL models on HoloLens2—Unity Barracuda and Windows Machine Learning (WinML)—using inference time as the primary evaluation metric; and (ii) to provide benchmark values for state-of-the-art DL models that can be integrated into different medical applications (e.g., Yolo and Unet models). In this study, we executed DL models of various complexities and analyzed inference times ranging from a few milliseconds to seconds. Our results show that Unity Barracuda is significantly faster than WinML (p-value < 0.005). With our findings, we sought to provide practical guidance and reference values for future studies aiming to develop single, portable AR systems for real-time medical assistance.
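
Barracuda and WinML are C#/UWP frameworks that run on the headset itself, so the measurement cannot be reproduced off-device; still, the benchmarking protocol the abstract describes (warm-up runs, repeated timed inferences, a paired significance test) can be sketched generically. Below is a minimal, hypothetical Python stand-in using onnxruntime, comparing two execution providers in place of the two HoloLens2 frameworks; the model path, input shape, and run counts are illustrative assumptions, not the authors' setup.

```python
# Hypothetical stand-in for the benchmarking protocol: it does NOT run on
# HoloLens2 (Barracuda/WinML are C#/UWP frameworks); it only illustrates
# timing repeated inferences after warm-up.
import time

import numpy as np
import onnxruntime as ort
from scipy.stats import wilcoxon

def benchmark(model_path, input_shape, provider, n_warmup=10, n_runs=100):
    """Return per-run inference times in milliseconds."""
    session = ort.InferenceSession(model_path, providers=[provider])
    name = session.get_inputs()[0].name
    x = np.random.rand(*input_shape).astype(np.float32)
    for _ in range(n_warmup):                 # discard cold-start effects
        session.run(None, {name: x})
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        session.run(None, {name: x})
        times.append((time.perf_counter() - t0) * 1e3)
    return np.array(times)

# "model.onnx" and the input shape are placeholders (e.g., a Yolo variant).
t_a = benchmark("model.onnx", (1, 3, 416, 416), "CPUExecutionProvider")
t_b = benchmark("model.onnx", (1, 3, 416, 416), "DmlExecutionProvider")
print(f"A: {t_a.mean():.1f} ms  B: {t_b.mean():.1f} ms")
print("paired test p-value:", wilcoxon(t_a, t_b).pvalue)
```

The abstract does not state which statistical test produced the reported p-value; the Wilcoxon signed-rank test here is just one common choice for paired timing data.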

10 pages, 7533 KiB  
Article
Head-Mounted Projector for Manual Precision Tasks: Performance Assessment
by Virginia Mamone, Vincenzo Ferrari, Renzo D’Amato, Sara Condino, Nadia Cattari and Fabrizio Cutolo
Sensors 2023, 23(7), 3494; https://doi.org/10.3390/s23073494 - 27 Mar 2023
Cited by 2 | Viewed by 1275
Abstract
The growing interest in augmented reality applications has led to an in-depth look at the performance of head-mounted displays and to their testing in numerous domains. Other devices for augmenting the real world with virtual information are presented less frequently, and reports usually focus on describing the device rather than analyzing its performance. This is the case for projected augmented reality, which, compared to head-worn AR displays, offers the advantage of being simultaneously accessible by multiple users whilst preserving user awareness of the environment and the feeling of immersion. This work provides a general evaluation of a custom-made head-mounted projector for aiding precision manual tasks, through an experimental protocol designed to investigate spatial and temporal registration and their combination. The results of the tests show that the accuracy (0.6 ± 0.1 mm spatial registration error) and motion-to-photon latency (113 ± 12 ms) make the proposed solution suitable for guiding precision tasks.
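
As a reading aid, the two reported metrics can be summarized in a few lines: spatial registration error as the distance between each projected target and its ground-truth position, and motion-to-photon latency as the delay between tracked motion and the corresponding image update. The sketch below uses made-up numbers, not the authors' measurements.

```python
# Minimal sketch of summarizing the paper's two metrics; all arrays are
# illustrative placeholders, not the study's data.
import numpy as np

projected = np.array([[10.2, 5.1], [20.0, 4.8], [15.3, 9.9]])   # mm, projected targets
reference = np.array([[10.0, 5.0], [19.5, 5.0], [15.0, 10.0]])  # mm, ground truth

err = np.linalg.norm(projected - reference, axis=1)
print(f"spatial registration error: {err.mean():.1f} ± {err.std(ddof=1):.1f} mm")

t_motion = np.array([0.000, 0.500, 1.000])   # s, onset of tracked motion
t_photon = np.array([0.110, 0.615, 1.112])   # s, projected image updated
lat_ms = (t_photon - t_motion) * 1e3
print(f"motion-to-photon latency: {lat_ms.mean():.0f} ± {lat_ms.std(ddof=1):.0f} ms")
```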

12 pages, 1634 KiB  
Article
Magic Leap 1 versus Microsoft HoloLens 2 for the Visualization of 3D Content Obtained from Radiological Images
by Giulia Zari, Sara Condino, Fabrizio Cutolo and Vincenzo Ferrari
Sensors 2023, 23(6), 3040; https://doi.org/10.3390/s23063040 - 11 Mar 2023
Cited by 6 | Viewed by 2709
Abstract
The adoption of extended reality solutions is growing rapidly in the healthcare world. Augmented reality (AR) and virtual reality (VR) interfaces can bring advantages to various medical-health sectors; it is thus not surprising that the medical mixed reality (MR) market is among the fastest-growing ones. The present study reports on a comparison between two of the most popular MR head-mounted displays, Magic Leap 1 and Microsoft HoloLens 2, for the visualization of 3D content obtained from radiological images. We evaluate the functionalities and performance of both devices through a user study in which surgeons and residents assessed the visualization of 3D computer-generated anatomical models. The digital content is obtained through a dedicated medical imaging suite (the Verima imaging suite) developed by the Italian start-up Witapp s.r.l. According to our performance analysis in terms of frame rate, there are no significant differences between the two devices. The surgical staff expressed a clear preference for Magic Leap 1, particularly for its better visualization quality and ease of interaction with the 3D virtual content. Nonetheless, even though the questionnaire results were slightly more positive for Magic Leap 1, the spatial understanding of the 3D anatomical model, in terms of depth relations and spatial arrangement, was positively evaluated for both devices.
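
The 3D content here comes from a commercial suite whose internals are not described; purely as a generic illustration of the pipeline such tools implement (radiological volume to surface mesh for HMD rendering), the following sketch extracts an iso-surface from a DICOM series with SimpleITK and scikit-image. The directory path and iso-value are placeholders.

```python
# Generic radiological-volume-to-mesh sketch; NOT the Verima suite's
# implementation. Path and iso-value are placeholders.
import SimpleITK as sitk
from skimage import measure

reader = sitk.ImageSeriesReader()
reader.SetFileNames(reader.GetGDCMSeriesFileNames("path/to/dicom_series"))
volume = sitk.GetArrayFromImage(reader.Execute())  # (slices, rows, cols)

# Iso-value depends on the target anatomy (e.g., ~300 HU for bone in CT).
verts, faces, normals, _ = measure.marching_cubes(volume, level=300)
print(f"mesh: {len(verts)} vertices, {len(faces)} faces")
# The mesh can then be exported (e.g., OBJ/glTF) for rendering on either HMD.
```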

16 pages, 4091 KiB  
Article
Augmenting Image-Guided Procedures through In Situ Visualization of 3D Ultrasound via a Head-Mounted Display
by Felix von Haxthausen, Christoph Rüger, Malte Maria Sieren, Roman Kloeckner and Floris Ernst
Sensors 2023, 23(4), 2168; https://doi.org/10.3390/s23042168 - 15 Feb 2023
Cited by 5 | Viewed by 2017
Abstract
Medical ultrasound (US) is a commonly used modality for image-guided procedures. Recent research systems providing an in situ visualization of 2D US images via an augmented reality (AR) head-mounted display (HMD) were shown to be advantageous over conventional imaging through reduced task completion times and improved accuracy. In this work, we continue in the direction of recent developments by describing the first AR HMD application visualizing real-time volumetric (3D) US in situ for guiding vascular punctures. We evaluated the application on a technical level as well as in a mixed-methods user study with a qualitative prestudy and a quantitative main study simulating a vascular puncture. Participants completed the puncture task significantly faster when using the 3D US AR mode compared to 2D US AR, with a 28.4% decrease in time. However, no significant differences were observed regarding the success rate of vascular puncture (2D US AR—50% vs. 3D US AR—72%). On the technical side, the system offers a low latency of 49.90 ± 12.92 ms and a satisfactory frame rate of 60 Hz. Our work shows the feasibility of a system that visualizes real-time 3D US data via an AR HMD, and our experiments further show that this may offer additional benefits in US-guided tasks (i.e., reduced task completion time) over 2D US images viewed in AR, by offering a vividly volumetric visualization.
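
The abstract reports a significant completion-time difference but not the tests used; as an illustration only, the sketch below runs two plausible analyses on placeholder numbers: a two-sample test on completion times and Fisher's exact test on success counts.

```python
# Illustrative analysis on placeholder data; the study's actual tests and
# raw measurements are not given in the abstract.
import numpy as np
from scipy import stats

t_2d = np.array([41.0, 55.2, 48.7, 60.1, 52.3])   # s, 2D US AR completion times
t_3d = np.array([30.5, 38.9, 35.0, 44.2, 37.1])   # s, 3D US AR completion times
print(f"time reduction: {100 * (1 - t_3d.mean() / t_2d.mean()):.1f}%")
print("completion-time p-value:", stats.ttest_ind(t_2d, t_3d).pvalue)

# Success/failure counts per mode (rows: 2D US AR, 3D US AR).
table = [[9, 9],
         [13, 5]]
print("success-rate p-value:", stats.fisher_exact(table)[1])
```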

14 pages, 708 KiB  
Article
Force-Sensorless Identification and Classification of Tissue Biomechanical Parameters for Robot-Assisted Palpation
by Alejandro Gutierrez-Giles, Miguel A. Padilla-Castañeda, Luis Alvarez-Icaza and Enoch Gutierrez-Herrera
Sensors 2022, 22(22), 8670; https://doi.org/10.3390/s22228670 - 10 Nov 2022
Viewed by 1521
Abstract
The implementation of robotic systems for minimally invasive surgery and medical procedures has been an active topic of research in recent years. One of the most common procedures is the palpation of soft tissues to identify their mechanical characteristics, in particular the tissue's stiffness or, equivalently, its elasticity coefficient. However, this identification usually relies on a force or tactile sensor mounted at the tip of the robot, as well as on measuring the robot's velocity. For some applications, it would be desirable to identify the biomechanical characteristics of soft tissues without the need for force/tactile or velocity sensors. An estimate of such quantities can be obtained with a model-based state observer whose only inputs are the robot joint positions and the commanded joint torques. The estimated velocities and forces can then be employed for closed-loop force control, force reflection, and mechanical parameter estimation. In this work, a closed-loop force control based on the estimated contact forces is proposed to avoid tissue damage. The information from the estimated forces and velocities is then used in a least-squares estimator of the mechanical parameters. Moreover, the estimated biomechanical parameters are fed to a Bayesian classifier to further help the physician make a diagnosis. We found that a combination of the parameters of both linear and nonlinear viscoelastic models provides better classification results: 0% misclassifications, against 50% when using only a linear model and 3.12% when using only a nonlinear model, for the case in which the samples have very similar mechanical properties.
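
The abstract names the estimation machinery without formulas. As a minimal sketch of the least-squares step under an assumed linear (Kelvin-Voigt-type) viscoelastic model f = k*x + b*v, the code below fits stiffness k and damping b from force and velocity signals; everything here is synthetic, standing in for the observer's outputs.

```python
# Least-squares fit of a linear viscoelastic model f = k*x + b*v, as one
# concrete instance of the parameter-estimation step; signals are synthetic
# placeholders, not the paper's observer outputs.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 400)
x = 0.005 * np.sin(2 * np.pi * t)      # m, indentation depth
v = np.gradient(x, t)                  # m/s, estimated velocity
f = 300.0 * x + 2.0 * v + 0.01 * rng.standard_normal(t.size)  # N, estimated force

A = np.column_stack([x, v])            # regressor matrix [x  v]
(k_hat, b_hat), *_ = np.linalg.lstsq(A, f, rcond=None)
print(f"stiffness k ≈ {k_hat:.1f} N/m, damping b ≈ {b_hat:.2f} N·s/m")
```

The fitted (k, b) pairs per tissue sample are then exactly the kind of feature vector on which a Bayesian classifier can be trained.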

12 pages, 3225 KiB  
Article
Performance and Usability Evaluation of an Extended Reality Platform to Monitor Patient’s Health during Surgical Procedures
by Pasquale Arpaia, Egidio De Benedetto, Lucio De Paolis, Giovanni D’Errico, Nicola Donato and Luigi Duraccio
Sensors 2022, 22(10), 3908; https://doi.org/10.3390/s22103908 - 21 May 2022
Cited by 9 | Viewed by 1969
Abstract
An extended-reality (XR) platform for the real-time monitoring of patients' health during surgical procedures is proposed. The system provides real-time access to a comprehensive set of patient information, which is made promptly available to the surgical team in the operating room (OR). In particular, the XR platform supports the medical staff by automatically acquiring the patient's vitals from the operating-room instrumentation and displaying them in real time directly on an XR headset. Furthermore, information from the patient's clinical record is shown upon request. Finally, the platform also allows the video stream coming directly from the endoscope to be displayed in XR. The innovative aspect of the proposed XR-based monitoring platform lies in the comprehensiveness of the available information, its modularity and flexibility (in terms of adaptation to different data sources), its ease of use, and, most importantly, its reliable communication, all critical requirements for the healthcare field. To validate the proposed system, experimental tests were conducted using instrumentation typically available in the operating room (i.e., a respiratory ventilator, a patient monitor for intensive care, and an endoscope). The overall results showed (i) a data-communication accuracy greater than 99%, (ii) an average time response below ms, and (iii) satisfying feedback from the System Usability Scale (SUS) questionnaires filled out by the physicians after intensive use.
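
The paper's actual communication protocol is not disclosed in the abstract; purely as a hypothetical sketch of how a "communication accuracy" figure can be measured, the code below streams sequence-numbered vitals as JSON over UDP and counts intact deliveries. The transport, field names, and message rate are all assumptions.

```python
# Hypothetical reliability check, not the authors' protocol: stream
# sequence-numbered vital-sign messages and report the fraction delivered.
import json
import socket

def send_vitals(host="127.0.0.1", port=9000, n=1000):
    """Send n JSON-encoded vital-sign samples over UDP (placeholder values)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in range(n):
        msg = {"seq": seq, "hr": 72, "spo2": 98, "resp_rate": 14}
        sock.sendto(json.dumps(msg).encode(), (host, port))

def receive_and_score(port=9000, n=1000, timeout=2.0):
    """Receive samples and report the fraction that arrived intact."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.settimeout(timeout)
    seen = set()
    try:
        while len(seen) < n:
            data, _ = sock.recvfrom(4096)
            seen.add(json.loads(data)["seq"])
    except socket.timeout:
        pass
    print(f"communication accuracy: {100 * len(seen) / n:.1f}%")

# Run receive_and_score() in one process before send_vitals() in another.
```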
