Healthcare Technology Letters
Volume 4, Issue 5, October 2017
- Author(s): Pascal Fallavollita ; Marta Kersten ; Cristian A. Linte ; Philip Pratt ; Ziv Yaniv
- Source: Healthcare Technology Letters, Volume 4, Issue 5, p. 149
- DOI: 10.1049/htl.2017.0078
- Type: Article
- Author(s): Michael S. Sacks ; Amir Khalighi ; Bruno Rego ; Salma Ayoub ; Andrew Drach
- Source: Healthcare Technology Letters, Volume 4, Issue 5, p. 150
- DOI: 10.1049/htl.2017.0076
- Type: Article
- Author(s): Sandrine de Ribaupierre and Roy Eagleson
- Source: Healthcare Technology Letters, Volume 4, Issue 5, p. 151
- DOI: 10.1049/htl.2017.0077
- Type: Article
A number of challenges must be faced when developing AR- and VR-based neurosurgical simulators, surgical navigation platforms, and "smart OR" systems. Simulating an operating room environment and surgical tasks in augmented and virtual reality, whether to train surgeons or to assist them during operations, is a problem many groups are attempting to solve. What are the needs of the surgeon, and what challenges are encountered (human-computer interface, perception, workflow, etc.)? We discuss these trade-offs and conclude with critical remarks.
- Author(s): Saeed M. Bakhshmand ; Roy Eagleson ; Sandrine de Ribaupierre
- Source: Healthcare Technology Letters, Volume 4, Issue 5, pp. 152–156
- DOI: 10.1049/htl.2017.0073
- Type: Article
Non-invasive assessment of cognitive importance has been a major challenge in planning neurosurgical procedures. In the past decade, in vivo brain imaging modalities have been considered for estimating the 'eloquence' of brain areas. To estimate the damage caused by an access path towards a target region inside the skull, multi-modal metrics are introduced in this Letter. The estimated damage is obtained by combining these metrics: it aggregates the intervened grey-matter volume and the number of axonal fibres crossed, weighted by their importance within the assigned anatomical and functional networks. To validate these metrics, an exhaustive search algorithm is implemented for characterising the solution space and visually representing the connectional cost associated with a path initiated from each underlying point. Brain networks are built from resting-state functional magnetic resonance imaging (fMRI) and deterministic tractography. Their results demonstrate that the proposed approach can refine traditional heuristics, such as choosing the minimal distance from the lesion, by supplementing them with the connectional importance of the resected tissue. This provides complementary information, derived from neuroimaging modalities and incorporated into the related anatomical landmarks, to help the surgeon avoid important functional hubs and their anatomical linkages.
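The damage estimate described above, intervened grey-matter volume and fibre counts weighted by network importance, can be sketched as a simple weighted aggregation over candidate paths. Function names and the additive combination are illustrative assumptions; the abstract does not give a closed form.

```python
import numpy as np

def path_cost(gm_volume, fibre_counts, gm_weights, fibre_weights):
    """Aggregate connectional cost of one candidate access path.

    gm_volume:    grey-matter volume intersected at each step along the path
    fibre_counts: axonal fibres crossed at each step
    *_weights:    importance of each intersected structure within its
                  anatomical/functional network (hypothetical weighting)
    """
    return float(np.dot(gm_volume, gm_weights) + np.dot(fibre_counts, fibre_weights))

def exhaustive_search(candidates):
    """Brute-force characterisation of the solution space: score every
    candidate path and return the index of the least damaging one."""
    costs = [path_cost(*c) for c in candidates]
    return int(np.argmin(costs)), costs
```

An exhaustive search like this is tractable here because each entry point on the scalp yields one straight path to the target, so the solution space is small.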
- Author(s): Elvis C.S. Chen ; Isabella Morgan ; Uditha Jayarathne ; Burton Ma ; Terry M. Peters
- Source: Healthcare Technology Letters, Volume 4, Issue 5, pp. 157–162
- DOI: 10.1049/htl.2017.0072
- Type: Article
Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand–eye calibration between the camera and the tracking system. The authors introduce the concept of ‘guided hand–eye calibration’, where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand–eye calibration as a registration problem between homologous point–line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.
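The point–line registration at the heart of this formulation reduces, per measurement, to a point-to-line distance: the tracked ball-tip stylus gives the point, and the camera's back-projected image ray gives the line. A minimal sketch of that residual (names are assumed; the calibration itself minimises these residuals over all measurement pairs):

```python
import numpy as np

def point_line_distance(p, origin, direction):
    """Distance from stylus-tip position p to the camera ray through
    `origin` with direction `direction`. A hand-eye calibration is sought
    that minimises these residuals over all point-line measurement pairs."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)                      # unit ray direction
    v = np.asarray(p, float) - np.asarray(origin, float)
    return float(np.linalg.norm(v - np.dot(v, d) * d))  # reject along-ray component
```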
- Author(s): Long Chen ; Wen Tang ; Nigel W. John
- Source: Healthcare Technology Letters, Volume 4, Issue 5, pp. 163–167
- DOI: 10.1049/htl.2017.0068
- Type: Article
The potential of augmented reality (AR) technology to assist minimally invasive surgery (MIS) lies in its computational performance and accuracy in dealing with challenging MIS scenes. Even with the latest hardware and software technologies, achieving both real-time and accurate augmented information overlay in MIS is still a formidable task. In this Letter, the authors present a novel real-time AR framework for MIS that achieves interactive geometry-aware AR in endoscopic surgery with stereo views. The authors' framework tracks the movement of the endoscopic camera and simultaneously reconstructs a dense geometric mesh of the MIS scene. The movement of the camera is predicted by minimising the re-projection error to achieve a fast tracking performance, while the three-dimensional mesh is incrementally built by a dense zero-mean normalised cross-correlation stereo-matching method to improve the accuracy of the surface reconstruction. The proposed system does not require any prior template or pre-operative scan and can infer the geometric information intra-operatively in real time. With the geometric information available, the proposed AR framework is able to interactively add annotations, localise tumours and vessels, and provide measurement labelling with greater precision and accuracy compared with the state-of-the-art approaches.
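The dense stereo matching relies on zero-mean normalised cross-correlation (ZNCC), which scores patch similarity independently of local brightness and contrast. A minimal patch-level sketch, assuming equally sized patches:

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalised cross-correlation of two equally sized patches.

    Returns a score in [-1, 1]: 1 for patches identical up to an affine
    intensity change, 0 when uncorrelated. In dense stereo matching this
    score is evaluated along the epipolar line to find correspondences."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.sum(a * b) / denom) if denom > 0 else 0.0
```

The invariance to affine intensity changes is what makes ZNCC robust to the uneven endoscopic lighting that defeats plain SSD matching.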
- Author(s): Sing Chun Lee ; Bernhard Fuerst ; Keisuke Tateno ; Alex Johnson ; Javad Fotouhi ; Greg Osgood ; Federico Tombari ; Nassir Navab
- Source: Healthcare Technology Letters, Volume 4, Issue 5, pp. 168–173
- DOI: 10.1049/htl.2017.0066
- Type: Article
Orthopaedic surgeons are still following the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures such as the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion.
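The RGBD-to-CBCT calibration uses the iterative closest point (ICP) algorithm. One iteration, nearest-neighbour matching followed by a least-squares rigid fit (the Arun/Kabsch SVD solution), can be sketched as follows; this is a generic ICP step, not the authors' exact implementation:

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ~= Q[i]."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ S @ U.T
    return R, cq - R @ cp

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest destination
    point, then fit the rigid transform to the matched pairs."""
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
    return fit_rigid(src, dst[d2.argmin(axis=1)])
```

In practice this step is repeated until the fitted transform stops changing, with the source cloud re-transformed between iterations.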
- Author(s): Shusil Dangi ; Hina Shah ; Antonio R. Porras ; Beatriz Paniagua ; Cristian A. Linte ; Marius Linguraru ; Andinet Enquobahrie
- Source: Healthcare Technology Letters, Volume 4, Issue 5, pp. 174–178
- DOI: 10.1049/htl.2017.0067
- Type: Article
Craniosynostosis is a congenital malformation of the infant skull typically treated via corrective surgery. To accurately quantify the extent of deformation and identify the optimal correction strategy, the patient-specific skull model extracted from a pre-surgical computed tomography (CT) image needs to be registered to an atlas of head CT images representative of normal subjects. Here, the authors present a robust multi-stage, multi-resolution registration pipeline to map a patient-specific CT image to the atlas space of normal CT images. The proposed registration pipeline first performs an initial optimisation at very low resolution to yield a good initial alignment that is subsequently refined at high resolution. They demonstrate the robustness of the proposed method by evaluating its performance on 560 head CT images of 320 normal subjects and 240 craniosynostosis patients and show a success rate of 92.8 and 94.2%, respectively. Their method achieved a mean surface-to-surface distance between the patient and template skull of <2.5 mm in the targeted skull region across both the normal subjects and patients.
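The coarse-to-fine idea (optimise at low resolution for a good initial alignment, then refine at higher resolutions) can be illustrated with a 1D toy problem: estimate a shift on downsampled signals and refine at each finer level. This is a generic sketch under simplifying assumptions, not the authors' pipeline.

```python
import numpy as np

def best_shift(fixed, moving, search):
    """Brute-force shift in [-search, search] minimising SSD (a stand-in
    for a real optimiser over a richer transform model)."""
    errs = {s: float(((fixed - np.roll(moving, s)) ** 2).sum())
            for s in range(-search, search + 1)}
    return min(errs, key=errs.get)

def coarse_to_fine(fixed, moving, levels=3, search=4):
    """Estimate at the coarsest level, upscale the estimate, and refine at
    each finer level; the coarse solve supplies the initial alignment."""
    shift = 0
    for lvl in reversed(range(levels)):
        f = 2 ** lvl                                        # downsampling factor
        shift += f * best_shift(fixed[::f], np.roll(moving, shift)[::f], search)
    return shift
```

The payoff is the same as in the registration pipeline above: the search window at full resolution stays small because the coarse level has already absorbed the large displacement.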
- Author(s): Séverine Habert ; Ulrich Eck ; Pascal Fallavollita ; Stefan Parent ; Nassir Navab ; Farida Cheriet
- Source: Healthcare Technology Letters, Volume 4, Issue 5, pp. 179–183
- DOI: 10.1049/htl.2017.0069
- Type: Article
Minimally invasive surgeries (MISs) are gaining popularity as alternatives to conventional open surgeries. In thoracoscopic scoliosis MIS, fluoroscopy is used to guide pedicle screw placement and to visualise the effect of the intervention on the spine curvature. However, cosmetic external appearance is the most important concern for patients, while correction of the spine and achieving coronal and sagittal trunk balance are the top priorities for surgeons. The authors present a feasibility study of the first intra-operative assistive system for scoliosis surgery, composed of a single RGBD camera affixed to a C-arm, which allows visualising in real time the effects of the surgery on the patient's trunk surface in the transverse plane. They perform three feasibility experiments, ranging from simulated data based on scoliotic patients to live acquisitions of a non-scoliotic mannequin and person, all showing that the proposed system's accuracy is comparable with the state of the art in scoliotic trunk surface reconstruction.
- Author(s): Ivo Kuhlemann ; Markus Kleemann ; Philipp Jauer ; Achim Schweikard ; Floris Ernst
- Source: Healthcare Technology Letters, Volume 4, Issue 5, pp. 184–187
- DOI: 10.1049/htl.2017.0061
- Type: Article
A major challenge during endovascular interventions is visualising the position and orientation of the catheter being inserted. This is typically achieved by intermittent X-ray imaging. Since the radiation exposure to the surgeon is considerable, it is desirable to reduce X-ray exposure to the bare minimum needed. Additionally, mapping two-dimensional (2D) X-ray images to 3D locations is challenging. The authors present the development of a real-time navigation framework, which allows a 3D holographic view of the vascular system without any need of radiation. They extract the patient's surface and vascular tree from pre-operative computed tomography data and register it to the patient using a magnetic tracking system. The system was evaluated on an anthropomorphic full-body phantom by experienced clinicians using a four-point questionnaire. The average score of the system (maximum of 20) was found to be 17.5. The authors' approach shows great potential to improve the workflow for endovascular procedures while simultaneously reducing X-ray exposure. It will also improve the learning curve and help novices to more quickly master the required skills.
- Author(s): Étienne Léger ; Simon Drouin ; D. Louis Collins ; Tiberiu Popa ; Marta Kersten-Oertel
- Source: Healthcare Technology Letters, Volume 4, Issue 5, pp. 188–192
- DOI: 10.1049/htl.2017.0062
- Type: Article
Image-guided surgery (IGS) has allowed for more minimally invasive procedures, leading to better patient outcomes, reduced risk of infection, less pain, shorter hospital stays and faster recoveries. One drawback that has emerged with IGS is that the surgeon must shift their attention from the patient to the monitor for guidance. Yet both cognitive and motor tasks are negatively affected by attention shifts. Augmented reality (AR), which merges the real-world surgical scene with preoperative virtual patient images and plans, has been proposed as a solution to this drawback. In this work, we studied the impact of two different types of AR IGS set-ups (mobile AR and desktop AR) and traditional navigation on attention shifts for the specific task of craniotomy planning. We found a significant difference in the time taken to perform the task and in attention shifts between traditional navigation and the AR set-ups, but no significant difference between the two AR set-ups. With mobile AR, however, users felt that the system was easier to use and that their performance was better. These results suggest that regardless of where the AR visualisation is shown to the surgeon, AR may reduce attention shifts, leading to more streamlined and focused procedures.
- Author(s): Zhe Min ; Hongliang Ren ; Max Q.-H. Meng
- Source: Healthcare Technology Letters, Volume 4, Issue 5, pp. 193–198
- DOI: 10.1049/htl.2017.0065
- Type: Article
Accurate understanding of surgical tool-tip tracking error is important for decision making in image-guided surgery. In this Letter, the authors present a novel method to estimate and model surgical tool-tip tracking error that takes pivot calibration uncertainty into consideration. First, a new type of error, referred to as total target registration error (TTRE), is formally defined for a single rigid registration. Target localisation error (TLE) in the two spaces to be registered is considered in the proposed TTRE formulation. With a first-order approximation in fiducial localisation error (FLE) or TLE magnitude, TTRE statistics (mean, covariance matrix and root-mean-square (RMS)) are then derived. Second, surgical tool-tip tracking error in the optical tracking system (OTS) frame is formulated using TTRE when pivot calibration uncertainty is considered. Finally, TTRE statistics of the tool-tip in the OTS frame are propagated relative to a coordinate reference frame (CRF) rigid-body. Monte Carlo simulations are conducted to validate the proposed error model. The percentage of trials passing statistical tests for no difference between the simulated and theoretical mean and covariance matrix of the tool-tip tracking error in CRF space is more than 90% in all test cases. The RMS percentage difference between the simulated and theoretical tool-tip tracking error in CRF space is within 5% in all test cases.
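The Monte Carlo validation follows the standard pattern: perturb the fiducials with simulated FLE, register, and accumulate the error at the target. A simplified empirical-TRE sketch under isotropic FLE and a single rigid registration (the Letter's TTRE additionally models TLE and pivot-calibration uncertainty):

```python
import numpy as np

def fit_rigid(P, Q):
    """Least-squares rigid transform (Arun's SVD method) mapping P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, cq - R @ cp

def empirical_tre_rms(fiducials, target, fle_sigma, trials=2000, seed=7):
    """RMS target registration error under isotropic fiducial localisation
    error of standard deviation fle_sigma per axis."""
    rng = np.random.default_rng(seed)
    sq_errs = []
    for _ in range(trials):
        noisy = fiducials + rng.normal(0.0, fle_sigma, fiducials.shape)
        R, t = fit_rigid(fiducials, noisy)       # register true -> perturbed
        sq_errs.append(np.sum((R @ target + t - target) ** 2))
    return float(np.sqrt(np.mean(sq_errs)))
```

Such simulated statistics are what get compared against the closed-form (first-order) predictions in validation studies of this kind.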
- Author(s): Joseph Plazak ; Simon Drouin ; Louis Collins ; Marta Kersten-Oertel
- Source: Healthcare Technology Letters, Volume 4, Issue 5, pp. 199–203
- DOI: 10.1049/htl.2017.0074
- Type: Article
Image-guided neurosurgery, or neuronavigation, has been used to visualise the location of a surgical probe by mapping the probe location to pre-operative models of a patient's anatomy. One common limitation of this approach is that it requires the surgeon to divert their attention away from the patient and towards the neuronavigation system. In order to improve this type of application, the authors designed a system that sonifies (i.e. provides audible feedback of) distance information between a surgical probe and the location of the anatomy of interest. A user study (n = 15) was completed to determine the utility of sonified distance information within an existing neuronavigation platform (Intraoperative Brain Imaging System (IBIS) Neuronav). The authors’ results were consistent with the idea that combining auditory distance cues with existing visual information from image-guided surgery systems may result in greater accuracy when locating specified points on a pre-operative scan, thereby potentially reducing the extent of the required surgical openings, as well as potentially increasing the precision of individual surgical tasks. Further, the authors’ results were also consistent with the hypothesis that combining auditory and visual information reduces the perceived difficulty in locating a target location within a three-dimensional volume.
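A distance-to-pitch mapping is the simplest way to sonify probe-to-target distance; this sketch maps smaller distances to higher tones. The mapping, range and frequencies are illustrative assumptions, not the IBIS implementation.

```python
def distance_to_pitch(d_mm, d_max=50.0, f_near=880.0, f_far=220.0):
    """Map probe-to-target distance (mm) to a tone frequency (Hz).

    At the target the tone is f_near; at d_max or beyond it is f_far,
    giving the surgeon a continuous audible cue without looking away
    from the patient."""
    d = min(max(float(d_mm), 0.0), d_max)   # clamp to [0, d_max]
    return f_far + (f_near - f_far) * (1.0 - d / d_max)
```

A logarithmic rather than linear mapping is another common choice, since pitch perception is roughly logarithmic in frequency.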
- Author(s): Rohit Singla ; Philip Edgcumbe ; Philip Pratt ; Christopher Nguan ; Robert Rohling
- Source: Healthcare Technology Letters, Volume 4, Issue 5, pp. 204–209
- DOI: 10.1049/htl.2017.0063
- Type: Article
In laparoscopic surgery, the surgeon must operate with a limited field of view and reduced depth perception. This makes spatial understanding of critical structures difficult, such as an endophytic tumour in a partial nephrectomy. Such tumours yield a high complication rate of 47%, and excising them increases the risk of cutting into the kidney's collecting system. To overcome these challenges, an augmented reality guidance system is proposed. Using intra-operative ultrasound, a single navigation aid, and surgical instrument tracking, four augmentations of guidance information are provided during tumour excision. Qualitative and quantitative system benefits are measured in simulated robot-assisted partial nephrectomies. Robot-to-camera calibration achieved a total registration error of 1.0 ± 0.4 mm while the total system error is 2.5 ± 0.5 mm. The system significantly reduced healthy tissue excised from an average (±standard deviation) of 30.6 ± 5.5 to 17.5 ± 2.4 cm3 (p < 0.05) and reduced the depth from the tumour underside to the cut from an average (±standard deviation) of 10.2 ± 4.1 to 3.3 ± 2.3 mm (p < 0.05). Further evaluation is required in vivo, but the system has promising potential to reduce the amount of healthy parenchymal tissue excised.
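System-level error figures like the ones above are commonly budgeted by combining independent per-stage registration errors in quadrature (root-sum-square). This is a standard rule of thumb, not the authors' stated error model:

```python
import math

def rss(*components):
    """Root-sum-square combination of independent error magnitudes, e.g.
    camera calibration, instrument tracking, and ultrasound calibration
    errors contributing to a total system error."""
    return math.sqrt(sum(c * c for c in components))
```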
- Author(s): Trinette Wright ; Sandrine de Ribaupierre ; Roy Eagleson
- Source: Healthcare Technology Letters, Volume 4, Issue 5, pp. 210–215
- DOI: 10.1049/htl.2017.0070
- Type: Article
Advances in virtual and augmented reality (AR) are having an impact on the medical field in areas such as surgical simulation. Improvements to surgical simulation will provide students and residents with additional training and evaluation methods. This is particularly important for procedures such as the endoscopic third ventriculostomy (ETV), which residents perform regularly. Simulators such as NeuroTouch have been designed to aid in training for this procedure. The authors have designed an affordable and easily accessible ETV simulator, and compare it with the existing NeuroTouch for its usability and training effectiveness. This simulator was developed using Unity, Vuforia and the Leap Motion (LM) controller for an AR environment. The participants, 16 novices and two expert neurosurgeons, were asked to complete 40 targeting tasks. Participants used the NeuroTouch tool or a virtual hand controlled by the LM to select the position and orientation for these tasks. The length of time to complete each task was recorded, and the trajectory log files were used to calculate performance. The resulting data on the novices' and experts' speed and accuracy are compared, and the authors discuss objective training performance in terms of targeting speed and accuracy for each system.
- Author(s): Odysseas Zisimopoulos ; Evangello Flouty ; Mark Stacey ; Sam Muscroft ; Petros Giataganas ; Jean Nehme ; Andre Chow ; Danail Stoyanov
- Source: Healthcare Technology Letters, Volume 4, Issue 5, pp. 216–222
- DOI: 10.1049/htl.2017.0064
- Type: Article
Computer-assisted interventions (CAI) aim to increase the effectiveness, precision and repeatability of procedures to improve surgical outcomes. The presence and motion of surgical tools is a key information input for CAI surgical phase recognition algorithms. Vision-based tool detection and recognition approaches are an attractive solution and can be designed to take advantage of the powerful deep learning paradigm that is rapidly advancing image recognition and classification. The challenge for such algorithms is the availability and quality of labelled data used for training. In this Letter, surgical simulation is used to train tool detection and segmentation based on deep convolutional neural networks and generative adversarial networks. The authors experiment with two network architectures for image segmentation of tool classes commonly encountered during cataract surgery. A commercially available simulator is used to create a simulated cataract dataset for training models prior to performing transfer learning on real surgical data. To the best of the authors' knowledge, this is the first attempt to train deep learning models for surgical instrument detection on simulated data while demonstrating promising generalisation to real data. Results indicate that simulated data does have some potential for training advanced classification methods for CAI systems.
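Segmentation models trained this way are typically scored with intersection-over-union against ground-truth tool masks. A minimal sketch of that metric (an assumed evaluation choice for illustration; the Letter does not fix a single metric):

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union of two binary masks (1 = tool pixel).

    1.0 means perfect overlap; 0.0 means disjoint predictions. Two empty
    masks are treated as a perfect match."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)
```

Comparing IoU between models trained purely on simulated data and models fine-tuned on real frames is the natural way to quantify the sim-to-real transfer the Letter investigates.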
Guest Editors' Foreword
On the need for multi-scale geometric modelling of the mitral heart valve
Editorial: Challenges for the usability of AR and VR for clinical neurosurgical procedures
Multimodal connectivity based eloquence score computation and visualisation for computer-aided neurosurgical path planning
Hand–eye calibration using a target registration error model
Real-time geometry-aware augmented reality in minimally invasive surgery
Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery
Robust head CT image registration pipeline for craniosynostosis skull correction surgery
Application of an RGBD augmented C-arm for minimally invasive scoliosis surgery assistance
Towards X-ray free endovascular interventions – using HoloLens for on-line holographic visualisation
Quantifying attention shifts in augmented reality image-guided neurosurgery
Estimation of surgical tool-tip tracking error distribution in coordinate reference frame involving pivot calibration uncertainty
Distance sonification in image-guided neurosurgery
Intra-operative ultrasound-based augmented reality guidance for laparoscopic surgery
Design and evaluation of an augmented reality simulator using leap motion
Can surgical simulation be used to train detection and classification of neural networks?