The Application of Imaging Technology in Medical Intervention and Surgery

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Medical Imaging".

Deadline for manuscript submissions: closed (28 February 2022) | Viewed by 56284

Special Issue Editors


Prof. Dr. Terry Peters
Guest Editor
Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
Interests: image-guided interventions; surgical navigation; visualization; augmented reality; image registration; instrument tracking

Dr. Elvis C.S. Chen
Guest Editor
Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
Interests: image-guided interventions; surgical navigation; visualization; medical simulation; calibration and registration

Special Issue Information

Dear Colleagues,

Imaging has played an important role in intervention and surgery since Roentgen made the first X-ray plate in 1895. Whether used to diagnose a disease or to plan a treatment, imaging has assumed an increasingly dominant role in diagnosis as well as in procedure planning and guidance. With the advent of computed tomography, magnetic resonance imaging, dynamic radiography, ultrasound, fluorescence imaging, positron emission tomography, and single photon emission computed tomography, the clinician has a wide array of tools to examine both the structure and function of the human body. With the increasing desire to perform interventional procedures in a non-invasive fashion, medical imaging is also playing an ever more important role in the guidance of procedures, which promises to render many surgical operations dramatically less invasive and more precise.

This Special Issue of Journal of Imaging aims to feature reports of recent advances in medical imaging technology specifically designed to facilitate surgical procedures, novel applications of conventional imaging to assist image-guided procedures, and innovative visualization technologies that can enhance the interface between the surgeon and the patient.

Prof. Dr. Terry Peters
Dr. Elvis C.S. Chen
Guest Editors

*This Special Issue is endorsed by the MICCAI Society*


Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • pre-operative imaging
  • intra-operative imaging
  • image fusion
  • image display
  • computed tomography
  • magnetic resonance imaging
  • ultrasound
  • endoscopy
  • laparoscopy
  • microscopy
  • visualization
  • virtual reality
  • augmented reality
  • mixed reality
  • 3D imaging
  • dynamic imaging
  • multi-spectral imaging
  • image-based tracking
  • computer vision
  • head-mounted displays

Published Papers (17 papers)


Research


10 pages, 8447 KiB  
Article
Path Tracing vs. Volume Rendering Technique in Post-Surgical Assessment of Bone Flap in Oncologic Head and Neck Reconstructive Surgery: A Preliminary Study
by Nicolò Cardobi, Riccardo Nocini, Gabriele Molteni, Vittorio Favero, Andrea Fior, Daniele Marchioni, Stefania Montemezzi and Mirko D’Onofrio
J. Imaging 2023, 9(2), 24; https://doi.org/10.3390/jimaging9020024 - 20 Jan 2023
Cited by 1 | Viewed by 1238
Abstract
This study aims to compare a relatively novel three-dimensional rendering called Path Tracing (PT) to the Volume Rendering technique (VR) in the post-surgical assessment of head and neck oncologic surgery followed by bone flap reconstruction. This retrospective study included 39 oncologic patients who underwent head and neck surgery with free bone flap reconstructions. All exams were acquired using a 64-slice multi-detector CT (MDCT). PT and VR images were created on a dedicated workstation. Five readers with different levels of expertise in bone flap reconstructive surgery (two radiologists, one head and neck surgeon, and two otorhinolaryngologists) independently reviewed the images. Every observer evaluated the images according to a 5-point Likert scale. The parameters assessed were image quality, anatomical accuracy, bone flap evaluation, and metal artefact. Mean and median values for all the parameters across the observers were calculated. The scores of both reconstruction methods were compared using a Wilcoxon matched-pairs signed-rank test. Inter-reader agreement was calculated using Spearman's rank correlation coefficient. PT was considered significantly superior to VR 3D reconstructions by all readers (p < 0.05). Inter-reader agreement was moderate to strong across four out of five readers. The agreement was stronger with PT images than with VR images. In conclusion, PT reconstructions are significantly better than VR ones. Although they did not modify patient outcomes, they may improve the post-surgical evaluation of free bone flap reconstructions following major head and neck surgery.
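
For readers who want to reproduce this kind of analysis, the following is a minimal Python sketch of the paired Likert-score comparison and the rank-based agreement measure described above, using hypothetical ratings rather than the study's data:

```python
# Sketch: paired comparison of two rendering techniques rated on a
# 5-point Likert scale (hypothetical data, not the study's ratings).
import numpy as np
from scipy.stats import wilcoxon, spearmanr

rng = np.random.default_rng(0)
scores_pt = rng.integers(3, 6, size=39)  # Path Tracing ratings, one reader
scores_vr = rng.integers(2, 5, size=39)  # Volume Rendering ratings, same cases

# Wilcoxon matched-pairs signed-rank test (paired, ordinal data).
stat, p = wilcoxon(scores_pt, scores_vr)
print(f"Wilcoxon statistic={stat:.1f}, p={p:.4f}")

# Inter-reader agreement between two readers via Spearman's rank correlation.
reader_a = rng.integers(1, 6, size=39)
reader_b = np.clip(reader_a + rng.integers(-1, 2, size=39), 1, 5)
rho, p_rho = spearmanr(reader_a, reader_b)
print(f"Spearman rho={rho:.2f}, p={p_rho:.4f}")
```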

17 pages, 16025 KiB  
Article
X23D—Intraoperative 3D Lumbar Spine Shape Reconstruction Based on Sparse Multi-View X-ray Data
by Sascha Jecklin, Carla Jancik, Mazda Farshad, Philipp Fürnstahl and Hooman Esfandiari
J. Imaging 2022, 8(10), 271; https://doi.org/10.3390/jimaging8100271 - 2 Oct 2022
Cited by 6 | Viewed by 3432
Abstract
Visual assessment based on intraoperative 2D X-rays remains the predominant aid for intraoperative decision-making, surgical guidance, and error prevention. However, correctly assessing the 3D shape of complex anatomies, such as the spine, based on planar fluoroscopic images remains a challenge even for experienced surgeons. This work proposes a novel deep learning-based method to intraoperatively estimate the 3D shape of patients' lumbar vertebrae directly from sparse, multi-view X-ray data. High-quality and accurate 3D reconstructions were achieved with a learned multi-view stereo machine approach capable of incorporating the X-ray calibration parameters into the neural network. This strategy allowed a priori knowledge of the spinal shape to be acquired while preserving patient specificity and achieving a higher accuracy than the state of the art. Our method was trained and evaluated on 17,420 fluoroscopy images that were digitally reconstructed from the public CTSpine1K dataset. On unseen data, we achieved an 88% average F1 score and a 71% surface score. Furthermore, by utilizing the calibration parameters of the input X-rays, our method outperformed a state-of-the-art counterpart by 22% in terms of surface score. This increase in accuracy opens new possibilities for surgical navigation and intraoperative decision-making based solely on intraoperative data, especially in surgical applications where the acquisition of 3D image data is not part of the standard clinical workflow.
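
The reported F1 score is an overlap measure between predicted and ground-truth shapes. Below is a minimal sketch of how such a score can be computed on binary occupancy volumes; the arrays are synthetic stand-ins, not X23D outputs:

```python
# Sketch: F1 score between a predicted and a ground-truth binary
# occupancy volume (equivalent to the Dice coefficient on binary masks).
import numpy as np

def f1_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """F1 overlap between two boolean volumes."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

rng = np.random.default_rng(1)
gt = rng.random((64, 64, 64)) > 0.7
pred = gt ^ (rng.random(gt.shape) > 0.95)   # ground truth with some noise
print(f"F1 = {f1_score(pred, gt):.2f}")
```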

18 pages, 1548 KiB  
Article
Multimodal Registration for Image-Guided EBUS Bronchoscopy
by Xiaonan Zang, Wennan Zhao, Jennifer Toth, Rebecca Bascom and William Higgins
J. Imaging 2022, 8(7), 189; https://doi.org/10.3390/jimaging8070189 - 8 Jul 2022
Cited by 1 | Viewed by 2247
Abstract
The state-of-the-art procedure for examining the lymph nodes in a lung cancer patient involves using an endobronchial ultrasound (EBUS) bronchoscope. The EBUS bronchoscope integrates two modalities into one device: (1) videobronchoscopy, which gives video images of the airway walls; and (2) convex-probe EBUS, which gives 2D fan-shaped views of extraluminal structures situated outside the airways. During the procedure, the physician first employs videobronchoscopy to navigate the device through the airways. Next, upon reaching a given node's approximate vicinity, the physician probes the airway walls using EBUS to localize the node. Because lymph nodes lie beyond the airways, EBUS is essential for confirming a node's location. Unfortunately, it is well documented that EBUS is difficult to use. In addition, while new image-guided bronchoscopy systems provide effective guidance for videobronchoscopic navigation, they offer no assistance for guiding EBUS localization. We propose a method for registering a patient's chest CT scan to live surgical EBUS views, thereby facilitating accurate image-guided EBUS bronchoscopy. The method entails an optimization process that registers CT-based virtual EBUS views to live EBUS probe views. Results using lung cancer patient data show that the method correctly registered 28/28 (100%) lymph nodes scanned by EBUS, with a mean registration time of 3.4 s. The mean position and direction errors of registered sites were 2.2 mm and 11.8°, respectively. Sensitivity studies also show the method's robustness to parameter variations. Lastly, we demonstrate the method's use in an image-guided system designed for guiding both phases of EBUS bronchoscopy.
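
The registration step described above amounts to optimizing a pose against an image-similarity measure. Here is a toy sketch of that loop, in which render_virtual_view is a hypothetical stand-in for the paper's CT-based virtual EBUS renderer, and normalized cross-correlation stands in for its similarity metric:

```python
# Sketch: registration as pose optimization against an image similarity.
# render_virtual_view() is a hypothetical stand-in for a CT-based renderer.
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import shift as nd_shift

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return (a * b).mean()

def render_virtual_view(pose, reference):
    # Placeholder renderer: translate a reference image by an in-plane pose.
    return nd_shift(reference, shift=pose, order=1)

rng = np.random.default_rng(2)
reference = rng.random((128, 128))
target_pose = np.array([4.0, -2.0])                   # "true" probe offset
live_view = render_virtual_view(target_pose, reference)

# Maximize NCC (minimize its negative) over the 2-DOF pose.
res = minimize(lambda p: -ncc(render_virtual_view(p, reference), live_view),
               x0=np.zeros(2), method="Nelder-Mead")
print("recovered pose:", res.x)   # should approach [4, -2]
```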

29 pages, 23343 KiB  
Article
Multi-Stage Platform for (Semi-)Automatic Planning in Reconstructive Orthopedic Surgery
by Florian Kordon, Andreas Maier, Benedict Swartman, Maxim Privalov, Jan Siad El Barbari and Holger Kunze
J. Imaging 2022, 8(4), 108; https://doi.org/10.3390/jimaging8040108 - 12 Apr 2022
Cited by 3 | Viewed by 3000
Abstract
Intricate lesions of the musculoskeletal system require reconstructive orthopedic surgery to restore the correct biomechanics. Careful pre-operative planning of the surgical steps on 2D image data is an essential tool to increase the precision and safety of these operations. However, the plan's effectiveness in the intra-operative workflow is challenged by unpredictable patient and device positioning and complex registration protocols. Here, we develop and analyze a multi-stage algorithm that combines deep learning-based anatomical feature detection and geometric post-processing to enable accurate pre- and intra-operative surgery planning on 2D X-ray images. The algorithm allows granular control over each element of the planning geometry, enabling real-time adjustments directly in the operating room (OR). In an evaluation of the method on three ligament reconstruction tasks on the knee joint, we found high spatial precision in drilling point localization (ε < 2.9 mm) and low angulation errors for k-wire instrumentation (ε < 0.75°) on 38 diagnostic radiographs. Comparable precision was demonstrated in 15 complex intra-operative trauma cases with strong implant overlap and multi-anatomy exposure. Furthermore, we found that the diverse feature detection tasks can be efficiently solved with a multi-task network topology, improving precision over the single-task case. Our platform will help overcome the limitations of current clinical practice and foster surgical plan generation and adjustment directly in the OR, ultimately motivating the development of novel 2D planning guidelines.
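
The angulation error reported above is simply the angle between a planned and a detected instrument direction. A minimal sketch with hypothetical vectors:

```python
# Sketch: angulation error between a planned and a detected k-wire
# direction (the vectors below are illustrative, not study data).
import numpy as np

def angulation_error_deg(u: np.ndarray, v: np.ndarray) -> float:
    """Angle in degrees between two direction vectors."""
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

planned = np.array([0.0, 0.96, 0.28])
detected = np.array([0.01, 0.95, 0.30])
print(f"angulation error: {angulation_error_deg(planned, detected):.2f} deg")
```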

16 pages, 21314 KiB  
Article
Imaging PPG for In Vivo Human Tissue Perfusion Assessment during Surgery
by Marco Lai, Stefan D. van der Stel, Harald C. Groen, Mark van Gastel, Koert F. D. Kuhlmann, Theo J. M. Ruers and Benno H. W. Hendriks
J. Imaging 2022, 8(4), 94; https://doi.org/10.3390/jimaging8040094 - 31 Mar 2022
Cited by 10 | Viewed by 2847
Abstract
Surgical excision is the gold standard for treatment of intestinal tumors. In this surgical procedure, inadequate perfusion of the anastomosis can lead to postoperative complications, such as anastomotic leakages. Imaging photoplethysmography (iPPG) can potentially provide objective and real-time feedback on the perfusion status of tissues. This feasibility study aims to evaluate an iPPG acquisition system during intestinal surgeries to detect the perfusion levels of the microvasculature tissue bed under different perfusion conditions. It assesses three patients who underwent resection of a portion of the small intestine. Data were acquired from fully perfused, non-perfused, and anastomosis parts of the intestine during different phases of the surgical procedure. Strategies for limiting motion and noise during acquisition were implemented. iPPG perfusion maps were successfully extracted from the intestine microvasculature, demonstrating that iPPG can be used to detect perturbations and perfusion changes in intestinal tissues during surgery. This study provides proof of concept for iPPG to detect changes in organ perfusion levels.
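
The core iPPG computation is a band-pass filter over per-pixel (or per-ROI) intensity time courses. Below is a minimal sketch on a synthetic frame stack, assuming a 30 Hz camera and a 0.5–3 Hz cardiac band; the parameters are illustrative, not taken from the paper:

```python
# Sketch: extracting a pulsatile perfusion signal from an image
# sequence, iPPG-style (synthetic data, hypothetical parameters).
import numpy as np
from scipy.signal import butter, filtfilt

fs = 30.0                                   # assumed camera frame rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)
# Synthetic stack: a 1.2 Hz (72 bpm) pulsatile component plus noise.
frames = (100 + 2 * np.sin(2 * np.pi * 1.2 * t)[:, None, None]
          + rng.normal(0, 1, (t.size, 32, 32)))

roi_signal = frames.mean(axis=(1, 2))        # spatial mean per frame

# Band-pass around the cardiac band (0.5-3 Hz, i.e. 30-180 bpm).
b, a = butter(3, [0.5, 3.0], btype="bandpass", fs=fs)
pulse = filtfilt(b, a, roi_signal)

# Per-pixel pulsatile amplitude as a crude perfusion map.
filtered = filtfilt(b, a, frames, axis=0)
perfusion_map = filtered.std(axis=0)
print(f"ROI pulse amplitude: {pulse.std():.2f} (a.u.), "
      f"map max: {perfusion_map.max():.2f}")
```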

20 pages, 35766 KiB  
Article
Qualitative Comparison of Image Stitching Algorithms for Multi-Camera Systems in Laparoscopy
by Sylvain Guy, Jean-Loup Haberbusch, Emmanuel Promayon, Stéphane Mancini and Sandrine Voros
J. Imaging 2022, 8(3), 52; https://doi.org/10.3390/jimaging8030052 - 23 Feb 2022
Cited by 4 | Viewed by 3633
Abstract
Multi-camera systems were recently introduced into laparoscopy to widen the surgeon's narrow field of view. The video streams are stitched together to create a panorama that is easier for the surgeon to comprehend. Multi-camera prototypes for laparoscopy use quite basic algorithms and have only been evaluated on simple laparoscopic scenarios. The more recent state-of-the-art algorithms, mainly designed for the smartphone industry, have not yet been evaluated in laparoscopic conditions. We developed a simulated environment to generate a dataset of multi-view images displaying a wide range of laparoscopic situations, which is adaptable to any multi-camera system. We evaluated classical and state-of-the-art image stitching techniques used in non-medical applications on this dataset, including one unsupervised deep learning approach. We show that classical techniques that use a global homography fail to provide a clinically satisfactory rendering and that even the most recent techniques, despite providing high-quality panorama images in non-medical situations, may suffer from poor alignment or severe distortions in simulated laparoscopic scenarios. We highlight the main advantages and flaws of each algorithm within a laparoscopic context, identify the main remaining challenges that are specific to laparoscopy, and propose methods to improve these approaches. We provide public access to the simulated environment and dataset.
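
The classical global-homography baseline that the study finds insufficient for laparoscopy can be sketched in a few lines of OpenCV. The function below is a generic illustration of that baseline, not the authors' pipeline; img_left and img_right are hypothetical overlapping views:

```python
# Sketch: classical two-view stitching with one global homography.
import cv2
import numpy as np

def stitch_pair(img_left, img_right):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_left, None)
    k2, d2 = orb.detectAndCompute(img_right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

    # A single global homography assumes a planar (or distant) scene;
    # exactly the assumption that breaks down on deforming laparoscopic
    # anatomy, as the study shows.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_left.shape[:2]
    pano = cv2.warpPerspective(img_right, H, (w * 2, h))
    pano[0:h, 0:w] = img_left
    return pano
```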

17 pages, 2900 KiB  
Article
Head-Mounted Display-Based Augmented Reality for Image-Guided Media Delivery to the Heart: A Preliminary Investigation of Perceptual Accuracy
by Mitchell Doughty and Nilesh R. Ghugre
J. Imaging 2022, 8(2), 33; https://doi.org/10.3390/jimaging8020033 - 30 Jan 2022
Cited by 10 | Viewed by 4057
Abstract
By aligning virtual augmentations with real objects, optical see-through head-mounted display (OST-HMD)-based augmented reality (AR) can enhance user task performance. Our goal was to compare the perceptual accuracy of several visualization paradigms, involving an adjacent monitor or the Microsoft HoloLens 2 OST-HMD, in a targeting task, as well as to assess the feasibility of displaying imaging-derived virtual models aligned with the injured porcine heart. With 10 participants, we performed a user study to quantify and compare the accuracy, speed, and subjective workload of each paradigm in the completion of a point-and-trace task that simulated surgical targeting. To demonstrate the clinical potential of our system, we assessed its use for the visualization of magnetic resonance imaging (MRI)-based anatomical models, aligned with the surgically exposed heart in a motion-arrested, open-chest porcine model. Using the HoloLens 2 with alignment of the ground-truth target and our display calibration method, users achieved submillimeter accuracy (0.98 mm) and required 1.42 min for calibration in the point-and-trace task. In the porcine study, we observed good spatial agreement between the MRI models and the target surgical site. The use of an OST-HMD led to improved perceptual accuracy and task-completion times in a simulated targeting task.

13 pages, 11311 KiB  
Article
Towards a First-Person Perspective Mixed Reality Guidance System for Needle Interventions
by Leah Groves, Natalie Li, Terry M. Peters and Elvis C. S. Chen
J. Imaging 2022, 8(1), 7; https://doi.org/10.3390/jimaging8010007 - 7 Jan 2022
Cited by 9 | Viewed by 2716
Abstract
While ultrasound (US) guidance has been used during central venous catheterization to reduce complications, including the puncturing of arteries, the rate of such problems remains non-negligible. To further reduce complication rates, mixed-reality systems have been proposed as part of the user interface for such procedures. We demonstrate the use of a surgical navigation system that renders a calibrated US image, and the needle and its trajectory, in a common frame of reference. We compare the effectiveness of this system, whereby images are rendered on a planar monitor or within a head-mounted display (HMD), to the standard-of-care US-only approach, via a phantom-based user study that recruited 31 expert clinicians and 20 medical students. These users performed needle insertions into a phantom under the three modes of visualization. The success rates were significantly improved under HMD guidance as compared to US guidance, for both expert clinicians (94% vs. 70%) and medical students (70% vs. 25%). Users more consistently positioned their needle closer to the center of the vessel's lumen under HMD guidance than under US guidance. The performance of the clinicians when interacting with the monitor system was comparable to using US-only guidance, with no significant difference observed across any metrics. The results suggest that the use of an HMD to align the clinician's visual and motor fields promotes successful needle guidance, highlighting the importance of continued HMD-guidance research.
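
Comparing success proportions between two guidance modes can be done with Fisher's exact test on a 2x2 contingency table. A minimal sketch with illustrative counts (the paper's raw per-trial counts are not reproduced here):

```python
# Sketch: Fisher's exact test on hypothetical success/failure counts
# for two guidance modes (illustrative numbers, not the study's data).
from scipy.stats import fisher_exact

hmd = [29, 2]   # successes, failures under HMD guidance (hypothetical)
us = [22, 9]    # successes, failures under US-only guidance (hypothetical)
odds, p = fisher_exact([hmd, us])
print(f"odds ratio={odds:.2f}, p={p:.4f}")
```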

18 pages, 3219 KiB  
Article
A System for Real-Time, Online Mixed-Reality Visualization of Cardiac Magnetic Resonance Images
by Dominique Franson, Andrew Dupuis, Vikas Gulani, Mark Griswold and Nicole Seiberlich
J. Imaging 2021, 7(12), 274; https://doi.org/10.3390/jimaging7120274 - 14 Dec 2021
Cited by 3 | Viewed by 3034
Abstract
Image-guided cardiovascular interventions are rapidly evolving procedures that necessitate imaging systems capable of rapid data acquisition and low-latency image reconstruction and visualization. Compared to alternative modalities, Magnetic Resonance Imaging (MRI) is attractive for guidance in complex interventional settings thanks to excellent soft tissue contrast and large fields-of-view without exposure to ionizing radiation. However, most clinically deployed MRI sequences and visualization pipelines exhibit poor latency characteristics, and spatial integration of complex anatomy and device orientation can be challenging on conventional 2D displays. This work demonstrates a proof-of-concept system linking real-time cardiac MR image acquisition, online low-latency reconstruction, and a stereoscopic display to support further development in real-time MR-guided intervention. Data are acquired using an undersampled radial trajectory and reconstructed via parallelized through-time radial generalized autocalibrating partially parallel acquisition (GRAPPA) implemented on graphics processing units. Images are rendered for display in a stereoscopic mixed-reality head-mounted display. The system is successfully tested by imaging standard cardiac views in healthy volunteers. Datasets comprising one slice (46 ms), two slices (92 ms), and three slices (138 ms) are collected, with the acquisition time of each listed in parentheses. Images are displayed with latencies of 42 ms/frame or less for all three conditions. Volumetric data are acquired at one volume per heartbeat with acquisition times of 467 ms and 588 ms when 8 and 12 partitions are acquired, respectively. Volumes are displayed with a latency of 286 ms or less. The faster-than-acquisition latencies for both planar and volumetric display enable real-time 3D visualization of the heart.
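
The "faster-than-acquisition" claim is a simple budget check: display latency must not exceed the per-frame acquisition time. A worked check using the numbers quoted above:

```python
# Sketch: real-time budget check implied by the reported numbers.
acq_ms_per_slice = 46.0       # one-slice acquisition time reported
display_latency_ms = 42.0     # reported planar display latency

fps = 1000.0 / acq_ms_per_slice
print(f"acquisition rate: {fps:.1f} frames/s")   # about 21.7 frames/s
assert display_latency_ms <= acq_ms_per_slice, "display would lag acquisition"
```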

17 pages, 6238 KiB  
Article
Combined Mass Spectrometry and Histopathology Imaging for Perioperative Tissue Assessment in Cancer Surgery
by Laura Connolly, Amoon Jamzad, Martin Kaufmann, Catriona E. Farquharson, Kevin Ren, John F. Rudan, Gabor Fichtinger and Parvin Mousavi
J. Imaging 2021, 7(10), 203; https://doi.org/10.3390/jimaging7100203 - 4 Oct 2021
Cited by 4 | Viewed by 2371
Abstract
Mass spectrometry is an effective imaging tool for evaluating biological tissue to detect cancer. With the assistance of deep learning, this technology can be used as a perioperative tissue assessment tool that will facilitate informed surgical decisions. Achieving such a system requires the development of a database of mass spectrometry signals and their corresponding pathology labels. Assigning correct labels, in turn, necessitates precise spatial registration of histopathology and mass spectrometry data. This is a challenging task due to the domain differences and the noisy nature of the images. In this study, we create a registration framework for mass spectrometry and pathology images as a contribution to the development of perioperative tissue assessment. In doing so, we explore two opportunities in deep learning for medical image registration, namely, unsupervised multi-modal deformable image registration and evaluation of the registration. We test this system on prostate needle biopsy cores that were imaged with desorption electrospray ionization (DESI) mass spectrometry and show that we can successfully register DESI and histology images to achieve accurate alignment and, consequently, labelling for future training. This automation is expected to improve the efficiency of developing a deep learning architecture that will benefit the use of mass spectrometry imaging for cancer diagnosis.
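
Multimodal registration of DESI and histology images must be driven by a similarity measure that tolerates different intensity distributions. Here is a minimal sketch of mutual information computed from a joint histogram; the images are synthetic, and this is a generic measure, not the study's learned approach:

```python
# Sketch: mutual information between two images from a joint histogram,
# a classical multimodal similarity measure (synthetic images).
import numpy as np

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(4)
histology = rng.random((128, 128))
desi = 0.6 * histology + 0.4 * rng.random((128, 128))  # related modality
print(f"MI = {mutual_information(desi, histology):.3f} nats")
```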

16 pages, 7030 KiB  
Article
SpineDepth: A Multi-Modal Data Collection Approach for Automatic Labelling and Intraoperative Spinal Shape Reconstruction Based on RGB-D Data
by Florentin Liebmann, Dominik Stütz, Daniel Suter, Sascha Jecklin, Jess G. Snedeker, Mazda Farshad, Philipp Fürnstahl and Hooman Esfandiari
J. Imaging 2021, 7(9), 164; https://doi.org/10.3390/jimaging7090164 - 27 Aug 2021
Cited by 2 | Viewed by 2490
Abstract
Computer-aided orthopedic surgery suffers from low clinical adoption, despite increased accuracy and patient safety. This can partly be attributed to cumbersome and often radiation-intensive registration methods. Emerging RGB-D sensors combined with data-driven artificial intelligence methods have the potential to streamline these procedures. However, developing such methods requires vast amounts of data. To this end, a multi-modal approach that enables the acquisition of large clinical datasets, tailored to pedicle screw placement, using RGB-D sensors and a co-calibrated high-end optical tracking system was developed. The resulting dataset comprises RGB-D recordings of pedicle screw placement along with individually tracked ground-truth poses and shapes of spine levels L1–L5 from ten cadaveric specimens. Besides a detailed description of our setup, quantitative and qualitative outcome measures are provided. We found a mean target registration error of 1.5 mm. The median deviation between the measured and ground-truth bone surface was 2.4 mm. In addition, a surgeon rated the overall alignment, based on a 10% random sample, as 5.8 on a scale from 1 to 6. Generation of labeled RGB-D data for orthopedic interventions with satisfactory accuracy is feasible, and its publication shall promote the future development of data-driven artificial intelligence methods for fast and reliable intraoperative registration.
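
The target registration error quoted above is the mean distance between corresponding ground-truth and measured 3D points. A minimal sketch with hypothetical landmarks:

```python
# Sketch: target registration error (TRE) as the mean Euclidean distance
# between corresponding 3D landmarks (hypothetical points, in mm).
import numpy as np

def tre_mm(ground_truth: np.ndarray, registered: np.ndarray) -> float:
    """Mean Euclidean distance between corresponding 3D points, in mm."""
    return float(np.linalg.norm(ground_truth - registered, axis=1).mean())

rng = np.random.default_rng(5)
gt_points = rng.uniform(0, 100, (25, 3))                 # mm
reg_points = gt_points + rng.normal(0, 1.0, (25, 3))     # ~1 mm noise
print(f"TRE = {tre_mm(gt_points, reg_points):.2f} mm")
```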

17 pages, 9999 KiB  
Article
Usability of Graphical Visualizations on a Tool-Mounted Interface for Spine Surgery
by Laura Schütz, Caroline Brendle, Javier Esteban, Sandro M. Krieg, Ulrich Eck and Nassir Navab
J. Imaging 2021, 7(8), 159; https://doi.org/10.3390/jimaging7080159 - 21 Aug 2021
Cited by 6 | Viewed by 2706
Abstract
Screw placement in the correct angular trajectory is one of the most intricate tasks during spinal fusion surgery. Due to the crucial role of pedicle screw placement for the outcome of the operation, spinal navigation has been introduced into the clinical routine. Despite its positive effects on the precision and safety of the surgical procedure, local separation of the navigation information from the surgical site, combined with intricate visualizations, limits the benefits of navigation systems. Instead of a tech-driven design, a focus on usability is required in new research approaches to enable advanced and effective visualizations. This work presents a new tool-mounted interface (TMI) for pedicle screw placement. By fixing the TMI onto the surgical instrument, the physical de-coupling of the anatomical target and the navigation information is resolved. A total of 18 surgeons participated in a usability study comparing the TMI to the state-of-the-art visualization on an external screen. With the TMI, significant improvements in system usability (Kruskal–Wallis test, p < 0.05) were achieved, and a significant reduction in mental demand and overall cognitive load, measured using the NASA-TLX (p < 0.05), was observed. Moreover, a general improvement in performance was shown by means of the surgical task time (one-way ANOVA, p < 0.001).
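
A minimal sketch of the two omnibus tests reported above, run on hypothetical usability scores and task times (the group size mirrors the 18 participants, but the values are made up):

```python
# Sketch: Kruskal-Wallis for ordinal usability scores and one-way ANOVA
# for task times (hypothetical data, not the study's measurements).
import numpy as np
from scipy.stats import kruskal, f_oneway

rng = np.random.default_rng(6)
sus_tmi = rng.normal(82, 8, 18)       # usability scores, TMI condition
sus_screen = rng.normal(70, 8, 18)    # usability scores, external screen
h, p_kw = kruskal(sus_tmi, sus_screen)

time_tmi = rng.normal(45, 10, 18)     # task times in seconds
time_screen = rng.normal(60, 10, 18)
f, p_anova = f_oneway(time_tmi, time_screen)
print(f"Kruskal-Wallis p={p_kw:.4f}; ANOVA p={p_anova:.4f}")
```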

15 pages, 1593 KiB  
Article
Design of an Ultrasound-Navigated Prostate Cancer Biopsy System for Nationwide Implementation in Senegal
by Gabor Fichtinger, Parvin Mousavi, Tamas Ungi, Aaron Fenster, Purang Abolmaesumi, Gernot Kronreif, Juan Ruiz-Alzola, Alain Ndoye, Babacar Diao and Ron Kikinis
J. Imaging 2021, 7(8), 154; https://doi.org/10.3390/jimaging7080154 - 20 Aug 2021
Viewed by 2654
Abstract
This paper presents the design of NaviPBx, an ultrasound-navigated prostate cancer biopsy system. NaviPBx is designed to support an affordable and sustainable national healthcare program in Senegal. It uses spatiotemporal navigation and multiparametric transrectal ultrasound to guide biopsies. NaviPBx integrates concepts and methods that have previously been independently validated in clinical feasibility studies and deploys them together in a practical prostate cancer biopsy system. NaviPBx is based entirely on free open-source software and will be shared as a free open-source program with no restriction on its use. NaviPBx is set to be deployed and sustained nationwide through the Senegalese Military Health Service. This paper reports on the results of the design process of NaviPBx. Our approach concentrates on "frugal technology", intended to be affordable for low- and middle-income countries (LMICs). Our project promises the wide-scale application of prostate biopsy and will foster the time-efficient development and programmatic implementation of ultrasound-guided diagnostic and therapeutic interventions in Senegal and beyond.

16 pages, 4702 KiB  
Article
A Virtual Reality System for Improved Image-Based Planning of Complex Cardiac Procedures
by Shujie Deng, Gavin Wheeler, Nicolas Toussaint, Lindsay Munroe, Suryava Bhattacharya, Gina Sajith, Ei Lin, Eeshar Singh, Ka Yee Kelly Chu, Saleha Kabir, Kuberan Pushparajah, John M. Simpson, Julia A. Schnabel and Alberto Gomez
J. Imaging 2021, 7(8), 151; https://doi.org/10.3390/jimaging7080151 - 19 Aug 2021
Cited by 9 | Viewed by 3108
Abstract
The intricate nature of congenital heart disease requires an understanding of the complex, patient-specific, three-dimensional dynamic anatomy of the heart from imaging data, such as three-dimensional echocardiography, for successful outcomes of surgical and interventional procedures. Conventional clinical systems use flat screens, so the display remains two-dimensional, which undermines a full understanding of the three-dimensional dynamic data. Additionally, controlling three-dimensional visualisation with two-dimensional tools is often difficult, so it is used only by imaging specialists. In this paper, we describe a virtual reality system for immersive surgery planning using dynamic three-dimensional echocardiography, which enables fast prototyping of visualisations such as volume rendering, multiplanar reformatting, and flow visualisation, as well as advanced interaction such as three-dimensional cropping, windowing, measurement, haptic feedback, automatic image orientation, and multiuser interactions. The available features were evaluated by imaging and non-imaging clinicians, showing that the virtual reality system can help improve the understanding and communication of three-dimensional echocardiography imaging and potentially benefit congenital heart disease treatment.
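
Multiplanar reformatting, one of the visualisations mentioned above, resamples an oblique plane out of a 3D volume. A minimal sketch using trilinear interpolation; the volume and the plane are hypothetical:

```python
# Sketch: multiplanar reformatting by sampling an oblique plane from a
# 3D volume with trilinear interpolation (synthetic volume).
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, center, u, v, size=128, spacing=1.0):
    """Resample a size x size plane spanned by unit vectors u, v at center."""
    ii, jj = np.meshgrid(np.arange(size) - size / 2,
                         np.arange(size) - size / 2, indexing="ij")
    pts = (center[:, None, None]
           + spacing * (ii * u[:, None, None] + jj * v[:, None, None]))
    return map_coordinates(volume, pts, order=1)   # order=1: trilinear

rng = np.random.default_rng(7)
vol = rng.random((160, 160, 160))
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, np.cos(0.3), np.sin(0.3)])   # tilted in-plane axis
sl = oblique_slice(vol, center=np.array([80.0, 80.0, 80.0]), u=u, v=v)
print(sl.shape)  # (128, 128)
```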

12 pages, 1586 KiB  
Article
Can Liquid Lenses Increase Depth of Field in Head Mounted Video See-Through Devices?
by Marina Carbone, Davide Domeneghetti, Fabrizio Cutolo, Renzo D’Amato, Emanuele Cigna, Paolo Domenico Parchi, Marco Gesi, Luca Morelli, Mauro Ferrari and Vincenzo Ferrari
J. Imaging 2021, 7(8), 138; https://doi.org/10.3390/jimaging7080138 - 5 Aug 2021
Cited by 1 | Viewed by 2150
Abstract
Wearable Video See-Through (VST) devices for Augmented Reality (AR) and for obtaining a magnified view are taking hold in the medical and surgical fields. However, these devices are not yet usable in daily clinical practice due to focusing problems and a limited depth of field. This study investigates the use of liquid-lens optics to create an autofocus system for wearable VST visors. The autofocus system is based on a Time of Flight (TOF) distance sensor and an active autofocus control system. The autofocus system integrated into the wearable VST visors showed good potential in terms of providing rapid focus at various distances and a magnified view.
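
An autofocus of this kind must map the TOF-measured distance to a lens power. A minimal sketch using the thin-lens equation, with a hypothetical sensor-to-lens distance rather than the paper's actual optics:

```python
# Sketch: thin-lens autofocus control law. Given the TOF distance to the
# scene, 1/f = 1/d_o + 1/d_i gives the required focal power; a liquid
# lens would be driven toward that power (parameters hypothetical).
def required_lens_power(object_distance_m: float,
                        image_distance_m: float = 0.02) -> float:
    """Required optical power in diopters for an in-focus image."""
    return 1.0 / object_distance_m + 1.0 / image_distance_m

for d in (0.25, 0.5, 1.0):   # plausible surgical working distances, meters
    print(f"d_o={d:.2f} m -> {required_lens_power(d):.1f} D")
```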

10 pages, 2669 KiB  
Article
Determination of the Round Window Niche Anatomy Using Cone Beam Computed Tomography Imaging as Preparatory Work for Individualized Drug-Releasing Implants
by Farnaz Matin, Ziwen Gao, Felix Repp, Samuel John, Thomas Lenarz and Verena Scheper
J. Imaging 2021, 7(5), 79; https://doi.org/10.3390/jimaging7050079 - 26 Apr 2021
Cited by 7 | Viewed by 5917
Abstract
Modern therapy of inner ear disorders is increasingly shifting to local drug delivery using a growing number of pharmaceuticals. Access to the inner ear is usually gained via the round window membrane (RWM), located in the bony round window niche (RWN). We hypothesize that the individual shape and size of the RWN have to be taken into account for safe, reliable, and controlled drug delivery. Therefore, we investigated the anatomy and its variations. Cone beam computed tomography (CBCT) images of 50 patients were analyzed. Based on the reconstructed 3D volumes, the individual anatomies of the RWN, RWM, and bony overhang were determined by segmentation using 3D Slicer™ with a custom-built plug-in. A large individual anatomical variability of the RWN, with a mean volume of 4.54 mm³ (min 2.28 mm³, max 6.64 mm³), was measured. The area of the RWM ranged from 1.30 to 4.39 mm² (mean: 2.93 mm²). The bony overhang had a mean length of 0.56 mm (min 0.04 mm, max 1.24 mm), and its shape varied considerably between individuals. Our data suggest that there is potential for individually designed and additively manufactured RWN implants due to the large differences in the volume and shape of the RWN.
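
Volumes such as those reported above follow directly from a binary segmentation and the CBCT voxel spacing. A minimal sketch with a hypothetical mask and spacing:

```python
# Sketch: volume of a binary segmentation, given the voxel spacing
# (mask and spacing are hypothetical, not the study's data).
import numpy as np

def segmentation_volume_mm3(mask: np.ndarray,
                            spacing_mm=(0.3, 0.3, 0.3)) -> float:
    """Count foreground voxels and scale by the volume of one voxel."""
    return float(mask.astype(bool).sum() * np.prod(spacing_mm))

rng = np.random.default_rng(8)
mask = rng.random((60, 60, 60)) > 0.997   # sparse toy segmentation
print(f"volume = {segmentation_volume_mm3(mask):.2f} mm^3")
```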

Review


27 pages, 4902 KiB  
Review
Augmenting Performance: A Systematic Review of Optical See-Through Head-Mounted Displays in Surgery
by Mitchell Doughty, Nilesh R. Ghugre and Graham A. Wright
J. Imaging 2022, 8(7), 203; https://doi.org/10.3390/jimaging8070203 - 20 Jul 2022
Cited by 21 | Viewed by 5632
Abstract
We conducted a systematic review of recent literature to understand the current challenges in the use of optical see-through head-mounted displays (OST-HMDs) for augmented reality (AR) assisted surgery. Using Google Scholar, 57 relevant articles from 1 January 2021 through 18 March 2022 were identified. Selected articles were then categorized based on a taxonomy that described the required components of an effective AR-based navigation system: data, processing, overlay, view, and validation. Our findings indicated a focus on orthopedic (n=20) and maxillofacial (n=8) surgeries. For preoperative input data, computed tomography (CT) (n=34) and surface-rendered models (n=39) were most commonly used to represent image information. Virtual content was commonly superimposed directly on the target site (n=47); this was achieved by surface tracking of fiducials (n=30), external tracking (n=16), or manual placement (n=11). Microsoft HoloLens devices (n=24 in 2021, n=7 in 2022) were the most frequently used OST-HMDs; gestures and/or voice (n=32) served as the preferred interaction paradigm. Though promising system accuracy on the order of 2–5 mm has been demonstrated in phantom models, several human factors and technical challenges (perception, ease of use, context, interaction, and occlusion) remain to be addressed prior to the widespread adoption of OST-HMD-led surgical navigation.
