Letter: Navigation-Linked Heads-Up Display in Intracranial Surgery: Early Experience

To the Editor: We read with great interest the paper entitled “Navigation-Linked Heads-Up Display in Intracranial Surgery: Early Experience” by Mascitelli et al,1 recently published in your journal. The authors reported a series of 28 vascular lesions (aneurysms, arteriovenous malformations [AVMs], and cavernous malformations), 53 oncologic lesions (intraparenchymal and skull base), and 3 other lesions (an abscess, a cerebrospinal fluid leak, and a Teflon granuloma), treated with the support of microscope-based augmented reality (MbAR). The reported series is particularly relevant considering that a recent review of the applications of augmented reality (AR) in humans identified 18 studies published between 1996 and 2015, covering, overall, the treatment of 77 vascular lesions, 75 neoplastic lesions, and 43 other diseases.2 Despite the wide range of diseases in the present series, the list of surgical conditions amenable to AR-aided surgery is even longer, including acute and chronic hydrocephalus,3 neoplastic and non-neoplastic epileptogenic lesions,4 and brain hemorrhage.5

Currently, there are no prospective studies showing a significant difference between AR-aided and standard neuronavigation-guided procedures in terms of morbidity, mortality, or clinical effectiveness. Thus, at the moment, conclusions on the best AR system cannot be drawn. Nonetheless, MbAR, although not novel,6,7 has demonstrated several points of interest, but also some important limitations.

Technically, MbAR consists of fiducial-based optical coregistration of the microscope and the patient's head. The AR scene is created by overlapping the virtual content (provided by preoperative images) with the real content (provided by the microscope itself) by means of a neuronavigation system. The final AR image is injected into the microscope eyepiece.
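For readers unfamiliar with the coregistration step, fiducial-based registration is commonly solved as a least-squares rigid alignment of paired point sets (the Kabsch/Procrustes method). The sketch below is a generic illustration of that technique, not the actual implementation of the system described by the authors:

```python
import numpy as np

def register_fiducials(image_pts, patient_pts):
    """Rigid (rotation + translation) registration of paired fiducials.

    image_pts, patient_pts: (N, 3) arrays of corresponding points in
    image space and patient space. Returns (R, t) such that
    R @ image_point + t approximates the matching patient point.
    """
    image_pts = np.asarray(image_pts, float)
    patient_pts = np.asarray(patient_pts, float)
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    # Cross-covariance of the centered point clouds
    H = (image_pts - ci).T @ (patient_pts - cp)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ ci
    return R, t

def fiducial_registration_error(R, t, image_pts, patient_pts):
    """RMS distance between mapped image fiducials and patient fiducials."""
    mapped = np.asarray(image_pts, float) @ R.T + t
    residual = mapped - np.asarray(patient_pts, float)
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))
```

In practice, the residual (the fiducial registration error) is what navigation systems report after registration; a low value is necessary but not sufficient for accuracy at the surgical target.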
Thus, the surgeon can comfortably see the virtual image along the natural axis of view connecting the surgical field with the surgeon's eye. The MbAR setup thereby overcomes an important source of error of other AR systems,8 namely parallax error. That error occurs when the surgeon's eye and the optic device have different points of view of the same surgical target: when the operator's point of view coincides with that of the real data source, there is no mismatch between what the surgeon sees and what the device actually captures. Conversely, when the surgeon and the data source have different points of view, there may be uncertainty, and potentially error, about the actual position of the target.8

Another advantage of MbAR is its unmatched magnification compared with any other surgical visualization system. However, when a macroscopic view of the surgical field is needed (eg, ventricular drain placement, initial steps of craniotomy), the use and setup of MbAR systems may be more cumbersome and time-consuming than traditional neuronavigation systems.

A crucial limitation of MbAR systems is the limited stereoscopic 3-D visualization of the AR scene. Ideally, the optimal AR scene would consist of a three-dimensional virtual rendering of even complex geometric shapes, perfectly merging with a three-dimensional view of reality, without hiding the details of the surgical field itself. Instead, MbAR depicts the surgical target by means of dashed lines outlining its perimeter in two dimensions. Additionally, the virtual component does not fully merge with the real component of the AR scene, but rather “pops out” of the scene itself.
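The parallax error described above can be made concrete with a toy pinhole-geometry model (our own illustration; the baseline and depth values below are arbitrary numbers, not measurements from any cited system). If the data source sits a lateral baseline away from the eye and the overlay is drawn along the device's line of sight at one depth while the real target lies at another, the overlay appears laterally displaced:

```python
def parallax_error_mm(baseline_mm, overlay_depth_mm, target_depth_mm):
    """Lateral mismatch, as seen from the eye, between a real target and
    its overlay in a simple pinhole model.

    baseline_mm: lateral offset between the eye and the data source.
    overlay_depth_mm: depth of the virtual plane on which the overlay
        is rendered along the device's line of sight.
    target_depth_mm: true depth of the target.
    The error vanishes when the overlay plane coincides with the target
    depth, or when the baseline is zero (coaxial viewing, as in MbAR).
    """
    return abs(baseline_mm) * abs(1.0 - overlay_depth_mm / target_depth_mm)
```

For example, a 50 mm baseline with an overlay plane at 80 mm and a target at 100 mm depth yields roughly a 10 mm lateral mismatch; the point of the coaxial MbAR design is that the baseline term is effectively zero, so the error collapses regardless of depth.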
Unfortunately, depth perception is critical for at least 3 main microsurgical tasks: first, finding deep and/or small targets (eg, brain aneurysms)9; second, treating complex lesions, as in AVM resection, which requires a precise understanding of the relationships of the arterial feeders and the veins among themselves and with respect to the AVM nidus;10 third, avoiding critical structures close to the target, as in the case of skull base tumors surrounded by neurovascular structures7 or intraparenchymal tumors close to eloquent areas.11

Schematically, the limited stereoscopic visualization is due to 2 main reasons. First, the microscope captures a monoscopic, two-dimensional view of the surgical field, so, by definition, the three-dimensional virtual image cannot merge with the real scene.8 Second, the representation of the virtual content, which might at least in part compensate for the monoscopic view, is still rudimentary. Indeed, depth perception can be improved by several visualization processing techniques.12 As an example, “chromadepth” rendering uses specific color coding to represent distances following the visible light spectrum: near objects are red, while, as distance increases, objects become orange, yellow, green, and blue.13 Another example is “aerial perspective” (also known as the “fog” method), in which colors become more transparent as structures lie deeper.

Another important improvement for targeting surgical lesions, as well as for avoiding critical surrounding structures, consists in adding procedural cues to the AR scene, such as the trajectory of the surgical instrument through the brain parenchyma and/or cisterns. This advancement can theoretically be achieved with fiducial-tracked surgical instruments, whose position can be determined with respect to predetermined virtual trajectories, in a fashion similar to the traditional pointers of neuronavigation systems.
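Both depth cues mentioned above reduce to simple functions of normalized depth. The following minimal sketch is our own toy illustration of the two cited techniques, not code from any of the referenced systems; the fog density value is an arbitrary assumption:

```python
import colorsys
import math

def chromadepth_rgb(depth, near, far):
    """Chromadepth cue: map a depth in [near, far] onto the visible
    spectrum, from red (near) through orange, yellow, and green to
    blue (far). Returns an (r, g, b) tuple with components in [0, 1]."""
    d = min(max((depth - near) / (far - near), 0.0), 1.0)
    hue = d * (240.0 / 360.0)  # 0 deg = red, 240 deg = blue
    return colorsys.hsv_to_rgb(hue, 1.0, 1.0)

def fog_alpha(depth, near, far, density=2.0):
    """Aerial-perspective ("fog") cue: opacity decays exponentially
    with normalized depth, so deeper structures render more
    transparently. `density` is an illustrative tuning parameter."""
    d = min(max((depth - near) / (far - near), 0.0), 1.0)
    return math.exp(-density * d)
```

A renderer would apply `chromadepth_rgb` to each structure's color and `fog_alpha` to its opacity as functions of distance from the viewpoint; the names and parameterization here are ours, chosen only to make the mapping explicit.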
Alternatively, a much cheaper option consists in adding a series of isocentric, target-like objects to the virtual content, allowing the surgeon to adjust the direction of the surgical instrument so as to center each of the target-like objects sequentially.11

Thus, as demonstrated in the present1 and previous studies,7,9,10,14 MbAR is reliable and potentially useful for the initial “macroscopic” steps of brain surgery (or for treating superficial targets), including the skin incision, craniotomy, durotomy, and corticectomy, all of which can also be performed simply by using the more practical neuronavigation systems. On the other hand, MbAR has demonstrated limited accuracy and the potential to distract when the “microscopic” tasks of the procedure are performed, such as finding and treating small, deep, and/or complex lesions, or avoiding critical structures, which indeed is also a common limitation of current neuronavigation systems.

In conclusion, although MbAR is a promising and spectacular tool, its technical limitations do not yet allow it to overcome the limitations of neuronavigation systems, and they prevent its use in daily neurosurgical practice. Nonetheless, the worldwide diffusion of surgical microscopes still makes them an ideal platform for implementing and sharing new AR functionalities in the future.

Disclosure

The authors have no personal, financial, or institutional interest in any of the drugs, materials, or devices described in this article.

REFERENCES

1. Mascitelli JR, Schlachter L, Chartrain AG, et al. Navigation-linked heads-up display in intracranial surgery: early experience. Oper Neurosurg. 2017; published ahead of print October 10, 2017. doi:10.1093/ons/opx205.
2. Meola A, Cutolo F, Carbone M, Cagnazzo F, Ferrari M, Ferrari V. Augmented reality in neurosurgery: a systematic review. Neurosurg Rev. 2017;40(4):537-548.
3. Lovo EE, Quintana JC, Puebla MC, et al. A novel, inexpensive method of image coregistration for applications in image-guided surgery using augmented reality. Neurosurgery. 2007;60(4 Suppl 2):366-371.
4. Doyle WK. Low end interactive image-directed neurosurgery. Update on rudimentary augmented reality used in epilepsy surgery. Stud Health Technol Inform. 1996;29:1-11.
5. Iseki H, Masutani Y, Iwahara M, et al. Volumegraph (overlaid three-dimensional image-guided navigation). Clinical application of augmented reality in neurosurgery. Stereotact Funct Neurosurg. 1997;68(1-4):18-24.
6. Edwards PJ, Hawkes DJ, Hill DL, et al. Augmentation of reality using an operating microscope for otolaryngology and neurosurgical guidance. J Image Guid Surg. 1995;1(3):172-178.
7. Edwards PJ, Johnson LG, Hawkes DJ, Fenlon MR, Strong A, Gleeson M. Clinical experience and perception in stereo augmented reality surgical navigation. In: Yang GZ, Jiang T, eds. MIAR. Berlin Heidelberg: Springer-Verlag; 2004:369-376.
8. Kockro RA, Tsai YT, Ng I, et al. Dex-ray: augmented reality neurosurgical navigation with a handheld video probe. Neurosurgery. 2009;65(4):795-808.
9. Cabrilo I, Bijlenga P, Schaller K. Augmented reality in the surgery of cerebral aneurysms: a technical report. Neurosurgery. 2014;10(Suppl 2):252-261.
10. Cabrilo I, Bijlenga P, Schaller K. Augmented reality in the surgery of cerebral arteriovenous malformations: technique assessment and considerations. Acta Neurochir. 2014;156(9):1769-1774.
11. Cutolo F, Meola A, Carbone M, et al. A new head-mounted display-based augmented reality system in neurosurgical oncology: a study on phantom. Comput Assist Surg. 2017;22(1):39-53.
12. Kersten-Oertel M, Jannin P, Collins DL. DVV: a taxonomy for mixed reality visualization in image guided surgery. IEEE Trans Vis Comput Graph. 2012;18(2):332-352.
13. Kersten-Oertel M, Chen SJ, Collins DL. An evaluation of depth enhancing perceptual cues for vascular volume visualization in neurosurgery. IEEE Trans Vis Comput Graph. 2014;20(3):391-403.
14. Cabrilo I, Schaller K, Bijlenga P. Augmented reality-assisted bypass surgery: embracing minimal invasiveness. World Neurosurg. 2015;83(4):596-602.

Copyright © 2018 by the Congress of Neurological Surgeons. This article is published and distributed under the terms of the Oxford University Press Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).

Publisher
Congress of Neurological Surgeons
ISSN
2332-4252
eISSN
2332-4260
DOI
10.1093/ons/opy048


Journal

Operative Neurosurgery (Oxford University Press)

Published: Mar 23, 2018
