Healthcare Technology Letters
Volume 5, Issue 5, October 2018
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 136
- DOI: 10.1049/htl.2018.5092
- Type: Article
- Author(s): Étienne Léger ; Jonatan Reyes ; Simon Drouin ; D. Louis Collins ; Tiberiu Popa ; Marta Kersten-Oertel
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 137–142
- DOI: 10.1049/htl.2018.5063
- Type: Article
In image-guided neurosurgery, a registration between the patient and their pre-operative images, together with the tracking of surgical tools, enables GPS-like guidance for the surgeon. However, factors such as brain shift, image distortion, and registration error cause the patient-to-image alignment accuracy to degrade throughout the surgical procedure, until it no longer provides accurate guidance. The authors present a gesture-based method for manual registration correction that extends the usability of augmented reality (AR) neuronavigation systems. Their method, which makes use of the touchscreen capabilities of the tablet on which the AR navigation view is presented, enables surgeons to compensate for the effects of brain shift, misregistration, or tracking errors. They tested their system in a laboratory user study with ten subjects and found that participants were able to achieve a median registration RMS error of 3.51 mm on landmarks around the craniotomy of interest. This is comparable to the level of accuracy attainable with previously proposed methods and currently available commercial systems, while being simpler and quicker to use. The method could enable surgeons to quickly and easily compensate for most of the observed shift. Further advantages of the method include its ease of use, its minimal impact on the surgical workflow, and the little time it requires.
- Author(s): Mathias Unberath ; Javad Fotouhi ; Jonas Hajek ; Andreas Maier ; Greg Osgood ; Russell Taylor ; Mehran Armand ; Nassir Navab
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 143–147
- DOI: 10.1049/htl.2018.5066
- Type: Article
Interventional C-arm imaging is crucial to percutaneous orthopedic procedures as it enables the surgeon to monitor the progress of surgery at the anatomical level. Minimally invasive interventions require repeated acquisition of X-ray images from different anatomical views to verify tool placement. Achieving and reproducing these views often comes at the cost of increased surgical time and radiation. We propose a marker-free ‘technician-in-the-loop’ Augmented Reality (AR) solution for C-arm repositioning. The X-ray technician operating the C-arm interventionally is equipped with a head-mounted display system capable of recording desired C-arm poses in 3D via an integrated infrared sensor. For C-arm repositioning to a target view, the recorded pose is restored as a virtual object and visualized in an AR environment, serving as a perceptual reference for the technician. Our proof-of-principle findings from a simulated trauma surgery indicate that the proposed system can reduce the average of 2.76 X-ray images required for re-aligning the scanner with an intra-operatively recorded C-arm view down to zero, suggesting substantial reductions of radiation dose. The proposed AR solution is a first step towards facilitating communication between the surgeon and the surgical staff, improving the quality of surgical image acquisition, and enabling context-aware guidance for surgery rooms of the future.
- Author(s): Gavin Wheeler ; Shujie Deng ; Nicolas Toussaint ; Kuberan Pushparajah ; Julia A. Schnabel ; John M. Simpson ; Alberto Gomez
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 148–153
- DOI: 10.1049/htl.2018.5064
- Type: Article
The authors present a method to interconnect the Visualisation Toolkit (VTK) and Unity. This integration enables them to exploit the visualisation capabilities of VTK with Unity's widespread support of virtual, augmented, and mixed reality displays, and interaction and manipulation devices, for the development of medical image applications for virtual environments. The proposed method utilises OpenGL context sharing between Unity and VTK to render VTK objects into the Unity scene via a Unity native plugin. The proposed method is demonstrated in a simple Unity application that performs VTK volume rendering to display thoracic computed tomography and cardiac magnetic resonance images. Quantitative measurements of the achieved frame rates show that this approach provides over 90 fps using standard hardware, which is suitable for current augmented reality/virtual reality display devices.
- Author(s): Wenyao Xia ; Elvis C.S. Chen ; Terry Peters
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 154–157
- DOI: 10.1049/htl.2018.5067
- Type: Article
Stereoscopic endoscopes have been used increasingly in minimally invasive surgery to visualise the organ surface and manipulate various surgical tools. However, insufficient and irregular light sources remain major challenges for endoscopic surgery. Not only do these conditions hinder image processing algorithms; surgical tools are sometimes barely visible when operating within low-light regions. In addition, low-light regions have a low signal-to-noise ratio and metrication artefacts due to quantisation errors. As a result, existing image enhancement methods usually suffer from heavy noise amplification in low-light regions. In this Letter, the authors propose an effective method for endoscopic image enhancement that identifies different illumination regions and designs enhancement criteria for the desired image quality. Compared with existing image enhancement methods, the proposed method is able to enhance the low-light region while preventing noise amplification during the enhancement process. The proposed method was tested with 200 images acquired during endoscopic surgeries. Computed results show that the proposed algorithm outperforms state-of-the-art image enhancement algorithms in terms of the naturalness image quality evaluator and illumination index.
- Author(s): Reid Vassallo ; Hidetoshi Kasuya ; Benjamin W.Y. Lo ; Terry Peters ; Yiming Xiao
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 158–161
- DOI: 10.1049/htl.2018.5069
- Type: Article
Cerebrovascular surgery treats vessel abnormalities in the brain and spinal cord, including arteriovenous malformations (AVMs) and aneurysms. These procedures often involve clipping the vessels feeding blood to these abnormalities, making accurate classification of blood vessel types (feeding versus draining) important during surgery. Previous approaches to guiding intraoperative identification of these vessels included augmented reality (AR) using pre-operative images, injected dyes, and Doppler ultrasound, each with its drawbacks. The authors propose and demonstrate a novel technique to help differentiate vessels by enhancing short videos of a few seconds from the surgical microscope using motion magnification and spectral analysis, and by constructing AR views that fuse the analysis results, as intuitive colourmaps, with the surgical microscope view. They demonstrated the proposed technique retrospectively on two real cerebrovascular surgical cases: one AVM and one aneurysm. The results showed that the proposed technique can help characterise different vessel types (feeding versus draining the abnormality), in agreement with those identified by the operating surgeon.
- Author(s): Rafael Moreta-Martinez ; David García-Mato ; Mónica García-Sevilla ; Rubén Pérez-Mañanes ; José Calvo-Haro ; Javier Pascau
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 162–166
- DOI: 10.1049/htl.2018.5072
- Type: Article
Augmented reality (AR) can be an interesting technology for clinical scenarios as an alternative to conventional surgical navigation. However, the registration between augmented data and real-world spaces is a limiting factor. In this study, the authors propose a method based on desktop three-dimensional (3D) printing to create patient-specific tools containing a visual pattern that enables automatic registration. This specific tool fits on the patient only in the location it was designed for, avoiding placement errors. The solution has been developed as a software application running on Microsoft HoloLens. The workflow was validated on a 3D printed phantom replicating the anatomy of a patient presenting an extraosseous Ewing's sarcoma, and then tested during the actual surgical intervention. The application allowed physicians to visualise the skin, bone, and tumour location overlaid on the phantom and patient. This workflow could be extended to many clinical applications in the surgical field, as well as to training and simulation, in cases where hard body structures are involved. Although the authors have tested their workflow on an AR head-mounted display, they believe that a similar approach can be applied to other devices such as tablets or smartphones.
- Author(s): Davide Scorza ; Gaetano Amoroso ; Camilo Cortés ; Arkaitz Artetxe ; Álvaro Bertelsen ; Michele Rizzi ; Laura Castana ; Elena De Momi ; Francesco Cardinale ; Luis Kabongo
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 167–171
- DOI: 10.1049/htl.2018.5075
- Type: Article
StereoElectroEncephaloGraphy (SEEG) is a minimally invasive technique that consists of the insertion of multiple intracranial electrodes to precisely identify the epileptogenic focus. The planning of electrode trajectories is a cumbersome and time-consuming task. Current approaches to supporting the planning focus on electrode trajectory optimisation based on geometrical constraints, but do not help produce an initial electrode set from which to begin the planning procedure. In this work, the authors propose a methodology that analyses retrospective planning data and builds a set of average trajectories, representing the practice of a clinical centre, which can be mapped to a new patient to initialise the planning procedure. They collected and analysed data from 75 anonymised patients, obtaining 30 exploratory patterns and 61 mean trajectories in an average brain space. A preliminary validation on a test set showed that they were able to correctly map 90% of those trajectories and that, after optimisation, the mapped trajectories achieve comparable or better values than manual ones in terms of distance from vessels and insertion angle. Finally, by detecting and analysing similar plans, they identified eight planning strategies, which represent the main tailored sets of trajectories that neurosurgeons used to deal with different patient cases.
- Author(s): André Mewes ; Florian Heinrich ; Bennet Hensen ; Frank Wacker ; Kai Lawonn ; Christian Hansen
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 172–176
- DOI: 10.1049/htl.2018.5076
- Type: Article
During MRI-guided interventions, navigation support is often separated from the operating field, shown on remote displays, which impedes the interpretation of the positions and orientations of instruments inside the patient's body as well as hand–eye coordination. To overcome these issues, projector-based augmented reality can be used to support needle guidance inside the MRI bore, directly in the operating field. The authors present two visualisation concepts for needle navigation aids, which were compared in an accuracy and usability study with eight participants, four of whom were experienced radiologists. The results show that both concepts are equally accurate, useful, and easy to use, with clear visual feedback about the state and success of the needle puncture. For easier clinical applicability, dynamic projection on moving surfaces and organ movement tracking are needed. For now, tests with patients under respiratory arrest are feasible.
- Author(s): Esmitt Ramírez ; Carles Sánchez ; Agnés Borràs ; Marta Diez-Ferrer ; Antoni Rosell ; Debora Gil
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 177–182
- DOI: 10.1049/htl.2018.5074
- Type: Article
Virtual bronchoscopy (VB) is a non-invasive exploration tool for intervention planning and navigation of possible pulmonary lesions (PLs). VB software involves locating a PL and computing a route, starting from the trachea, to reach it. Selecting VB software can be a complex process, and there is no consensus among medical software developers on which system or framework is best suited. The authors present Bronchoscopy Exploration (BronchoX), a VB software tool for planning biopsy interventions that generates physician-readable instructions to reach the PLs. The solution is open source, multiplatform, and extensible for future functionalities, designed by their multidisciplinary research and development group. BronchoX combines different algorithms for segmentation, visualisation, and navigation of the respiratory tract. The reported results focus on testing the effectiveness of the proposal as exploration software and on measuring its accuracy as a guidance system for reaching PLs. To this end, 40 different virtual planning paths were created to guide physicians to distal bronchioles. The results show that BronchoX is functional software and demonstrate that, by following simple instructions, it is possible to reach distal lesions from the trachea.
- Author(s): David W. Shattuck
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 183–188
- DOI: 10.1049/htl.2018.5077
- Type: Article
The recent advent of high-performance consumer virtual reality (VR) systems has opened new possibilities for immersive visualisation of numerous types of data. Medical imaging has long made use of advanced visualisation techniques, and VR offers exciting new opportunities for data exploration. The author presents a new framework for interacting with neuroimaging data, including MRI volumes, neuroanatomical surface models, diffusion tensors, and streamline tractography, as well as text-based annotations. The system was developed for the HTC Vive using C++, OpenGL, and the OpenVR software development kit. The author developed custom GLSL shaders for each type of data to provide high-performance real-time rendering suitable for use in a VR environment. These are integrated with an interface that enables the user to manipulate the scene through the Vive controllers and perform operations such as volume slicing, fibre track selection, and structural queries. The software can read data generated by existing automated brain MRI analysis packages, enabling the rapid development of subject-specific visualisations of multimodal data or annotated atlases. The system can also support multiple simultaneous users, placing them in the same virtual space to interact with each other while visualising the same datasets, opening new possibilities for teaching and for collaborative exploration of neuroimaging data.
- Author(s): Houssam El-Hariri ; Prashant Pandey ; Antony J. Hodgson ; Rafeef Garbi
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 189–193
- DOI: 10.1049/htl.2018.5061
- Type: Article
Augmented reality (AR) has proven to be a useful, exciting technology in several areas of healthcare. AR may especially enhance the operator's experience in minimally invasive surgical applications by providing more intuitive and naturally immersive visualisation in those procedures which rely heavily on three-dimensional (3D) imaging data. Benefits include improved operator ergonomics, reduced fatigue, and simplified hand–eye coordination. Head-mounted AR displays may hold great potential for enhancing surgical navigation given their compactness and intuitiveness of use. In this work, the authors propose a method that intra-operatively locates bone structures using tracked ultrasound (US), registers them to the corresponding pre-operative computed tomography (CT) data, and generates a 3D AR visualisation of the surgical scene through a head-mounted display. The proposed method deploys optically-tracked US, bone surface segmentation from the US and CT image volumes, and multimodal volume registration to align the pre-operative data with the corresponding intra-operative data. The enhanced surgical scene is then visualised in an AR framework using a HoloLens. They demonstrate the method's utility using a foam pelvis phantom and quantitatively assess accuracy by comparing the locations of fiducial markers in the real and virtual spaces, yielding root mean square errors of 3.22, 22.46, and 28.30 mm in the x, y, and z directions, respectively.
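The per-axis root mean square error reported in the abstract above is a standard accuracy metric: for each axis, square the coordinate differences between corresponding real and virtual fiducials, average, and take the square root. A minimal sketch of that computation follows; the function name and the point coordinates are illustrative, not taken from the paper.

```python
import math

def per_axis_rmse(real_pts, virtual_pts):
    """Per-axis root mean square error between corresponding
    fiducial locations in the real and virtual spaces (3D points)."""
    n = len(real_pts)
    rmse = []
    for axis in range(3):
        # Sum of squared differences along this axis over all fiducials
        sq = sum((r[axis] - v[axis]) ** 2 for r, v in zip(real_pts, virtual_pts))
        rmse.append(math.sqrt(sq / n))
    return rmse

# Hypothetical fiducial coordinates (mm), for illustration only
real = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
virt = [(1.0, 2.0, 3.0), (11.0, 2.0, 3.0), (1.0, 12.0, 3.0)]
print(per_axis_rmse(real, virt))  # → [1.0, 2.0, 3.0]
```

Reporting the error per axis, as the authors do, exposes anisotropic misalignment (here, much larger errors along y and z than x) that a single pooled RMS value would hide.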
- Author(s): Long Qian ; Anton Deguet ; Peter Kazanzides
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 194–200
- DOI: 10.1049/htl.2018.5065
- Type: Article
In robot-assisted laparoscopic surgery, the first assistant (FA) is responsible for tasks such as robot docking, passing necessary materials, manipulating hand-held instruments, and helping with trocar planning and placement. The performance of the FA is critical for the outcome of the surgery. The authors introduce ARssist, an augmented reality application based on an optical see-through head-mounted display, to help the FA perform these tasks. ARssist offers (i) real-time three-dimensional rendering of the robotic instruments, hand-held instruments, and endoscope based on a hybrid tracking scheme and (ii) real-time stereo endoscopy that is configurable to suit the FA's hand–eye coordination when operating based on endoscopy feedback. ARssist has the potential to help the FA perform his/her task more efficiently, and hence improve the outcome of robot-assisted laparoscopic surgeries.
- Author(s): Tianyu Song ; Chenglin Yang ; Omid Dianat ; Ehsan Azimi
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 201–207
- DOI: 10.1049/htl.2018.5062
- Type: Article
Endodontic treatment is performed to treat the inflamed or infected root canal system of an involved tooth. It is estimated that 22.3 million endodontic procedures are performed annually in the USA. Preparing a proper access cavity before the cleaning/shaping (instrumentation) of the root canal system is among the most important steps in achieving a successful treatment outcome. However, accidents such as perforation, gouging, ledge formation, and canal transportation may occur during the procedure because of an improper or incomplete access cavity design. To reduce or prevent these errors in root canal treatment, this Letter introduces an assistive augmented reality (AR) technology on a head-mounted display (HMD). The proposed system provides audiovisual warning and correction in situ on the optical see-through HMD to assist dentists in preparing the access cavity. The clinician interacts with the system via voice commands, allowing bi-manual operation. The dentist is also able to review tooth radiographs during the procedure without needing to divert attention away from the patient to a separate monitor. Experiments were performed to evaluate the accuracy of the measurements. To the best of the authors' knowledge, this is the first time an HMD-based AR prototype has been introduced for an endodontic procedure.
- Author(s): Andrew D. Speers ; Burton Ma ; William R. Jarnagin ; Sharifa Himidan ; Amber L. Simpson ; Richard P. Wildes
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 208–214
- DOI: 10.1049/htl.2018.5071
- Type: Article
Image-guided liver surgery aims to enhance the precision of resection and ablation by providing fast localisation of tumours and adjacent complex vasculature to improve oncologic outcome. This Letter presents a novel end-to-end solution for fast stereo reconstruction and motion estimation that demonstrates high accuracy with phantom and clinical data. The authors’ computationally efficient coarse-to-fine (CTF) stereo approach facilitates liver imaging by accounting for low texture regions, enabling precise three-dimensional (3D) boundary recovery through the use of adaptive windows and utilising a robust 3D motion estimator to reject spurious data. To the best of their knowledge, theirs is the only adaptive CTF matching approach to reconstruction and motion estimation that registers time series of reconstructions to a single key frame for registration to a volumetric computed tomography scan. The system is evaluated empirically in controlled laboratory experiments with a liver phantom and motorised stages for precise quantitative evaluation. Additional evaluation is provided through testing with patient data during liver resection.
- Author(s): Sahar Benadi ; Irene Ollivier ; Caroline Essert
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 215–220
- DOI: 10.1049/htl.2018.5070
- Type: Article
Stereoelectroencephalography is a surgical procedure used in the treatment of pharmacoresistant epilepsy. Multiple electrodes are inserted in the patient's brain in order to record the electrical activity and detect the epileptogenic zone at the source of the seizures. Accurate localisation of their contacts on post-operative images is a crucial step in interpreting the recorded signals and achieving a successful resection afterwards. In this Letter, the authors propose interactive and automatic methods to help the surgeon with the segmentation of the electrodes and their contacts. They then present a preliminary comparison of the methods in terms of accuracy and processing time, through experimental measurements performed by two users, and discuss these first results. The final purpose of this work is to assist neurosurgeons and neurologists in the contact localisation procedure, making it faster, more precise, and less tedious.
- Author(s): Taylor Frantz ; Bart Jansen ; Johnny Duerinck ; Jef Vandemeulebroucke
- Source: Healthcare Technology Letters, Volume 5, Issue 5, p. 221–225
- DOI: 10.1049/htl.2018.5079
- Type: Article
Major hurdles for Microsoft's HoloLens as a tool in medicine have been access to its tracking data, as well as a relatively high localisation error of the displayed information, cumulatively resulting in its limited use and minimal quantification. The following work investigates augmenting the HoloLens with the proprietary image-processing SDK Vuforia, integrating data from its front-facing RGB camera to provide more spatially stable holograms for neuronavigational use. Continuous camera tracking was able to maintain hologram registration with a mean perceived drift of 1.41 mm, as well as a mean sub-2-mm surface point localisation accuracy of 53%, all while allowing the researcher to walk about a test area. This represents a 68% improvement for the latter and a 34% improvement for the former compared with a typical HoloLens deployment used as a control. Both represent a significant improvement in hologram stability given the current state of the art, and to the best of the authors' knowledge these are the first quantified measurements of hologram stability augmented using data from the RGB sensor.
Guest Editorial: Papers from the 12th Workshop on Augmented Environments for Computer-Assisted Interventions
Gesture-based registration correction using a mobile augmented reality image-guided neurosurgery system
Augmented reality-based feedback for technician-in-the-loop C-arm repositioning
Virtual interaction and visualisation of 3D medical imaging data with VTK and Unity
Endoscopic image enhancement with noise suppression
Augmented reality guidance in cerebrovascular surgery using microscopic video enhancement
Augmented reality in computer-assisted interventions based on patient-specific 3D printed reference
Experience-based SEEG planning: from retrospective data to automated electrode trajectories suggestions
Concepts for augmented reality visualisation to support needle guidance inside the MRI
BronchoX: bronchoscopy exploration software for biopsy intervention planning
Multiuser virtual reality environment for visualising neuroimaging data
Augmented reality visualisation for orthopaedic surgical guidance with pre- and intra-operative multimodal image data fusion
ARssist: augmented reality on a head-mounted display for the first assistant in robotic surgery
Endodontic guided treatment using augmented reality on a head-mounted display system
Fast and accurate vision-based stereo reconstruction and motion estimation for image-guided liver surgery
Comparison of interactive and automatic segmentation of stereoelectroencephalography electrodes on computed tomography post-operative images: preliminary results
Augmenting Microsoft's HoloLens with Vuforia tracking for neuronavigation