In vivo estimation of target registration errors during augmented reality laparoscopic surgery

In vivo estimation of target registration errors during augmented reality laparoscopic surgery Purpose Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. Methods The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. Results The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. Conclusion We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery. Keywords Image-guided surgery · Augmented reality · Liver · Validation · Error measurement · Laparoscope Introduction addressed by introducing external images to the procedure, known as image-guided surgery (IGS). A recent review [5] Laparoscopic surgery for liver resection is in general prefer- describes the state of the art of laparoscopic IGS. In most able to open surgery, due to the significant reduction in cases Augmented Reality (AR), where a model is overlaid post-operative pain and scarring [7]. Currently only a minor- directly on the laparoscopic video, is avoided due to the dif- ity of patients at specialist hospitals undergoes laparoscopic ficulty in creating a well aligned overlay on a deforming and resection. One reason for the low rate of laparoscopic resec- mobile organ. One approach is to show a solid model derived tion is the difficulty surgeons have in identifying key anatomy from pre-operative Computed Tomography (CT) next to the through a laparoscopic camera and monitor [4]. This can be surgical scene. Whilst the model may be orientated to match the surgical scene, it is up to the surgeon to identify the final correspondence between the model and the video. The first B Stephen Thompson reported use of an AR overlay in laparoscopic liver surgery s.thompson@ucl.ac.uk is reported by [10] making the case for the benefits of an AR Wellcome/EPSRC Centre for Interventional and Surgical laparoscopic system. We developed the “SmartLiver” IGS Sciences, University College London, London, UK system to show the liver model overlaid on the video feed Division of Surgery and Interventional Science, University from a laparoscope. This spares the surgeon some cognitive College London, London, UK 123 866 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 load; however, it raises questions in terms of perception and Pratt el al. [15] overlay a wire-frame of the organ surface. interpretation of errors. In our experience, these approaches are too visually clut- In any AR system, there will be misalignment between tered for liver surgery, hence our proposed use of outline the overlay and what is visible on the screen. Furthermore, rendering. Communication of alignment errors gets harder it must remain the responsibility of the surgeon to interpret when deformable registration is used. Bano et al. [3]show and act upon any apparent error. To enable this, we have two results relevant to our study in their pre-clinical work on implemented advanced visualisation algorithms, to allow the using intra-operative C-arm to inform a non-rigid registra- surgeon to rapidly identify AR overlay errors. Figure 1 shows tion of the liver. Firstly, in their porcine model, deformation an in vivo overlay using our system. A key feature of the due to insufflation is a significant source of registration error overlay is that we have maintained a projected 2D outline of (around 8 mm). Furthermore, the error measured at internal the liver, which can be compared to the visible anatomy. The vessels is significantly higher (by approximately 6 mm) than outline enables an estimate of the accuracy of any overlaid the error measured at the liver surface. non-visible anatomy. Contributions of this Paper Background Our proposed method for in vivo estimation of errors uses the One reason for the slow progress of laparoscopic IGS is a visible misalignment of the liver outline (Fig. 1) to infer the lack of a realistic approach to the measurement and inter- misalignment of non-visible target anatomy. In this paper, we pretation of alignment errors. In contrast to orthopaedics define a measure of visible misalignment, re-projection error or neurosurgery, the anatomy of the abdomen is mobile, so (RPE), and test the assumption that RPE is a useful predictor IGS using rigid registration may suffer significant localised of the misalignment of subsurface targets, or target regis- errors. It is theoretically possible to use deformable registra- tration error (TRE). In part this can be estimated using the tion and motion models [17]; however, this adds complexity, relations between fiducial localisation error (FLE) and TRE and makes it harder for the surgeon to interpret the sys- originally put forward by Fitzpatrick and West [11]; however, tem’s performance. Breath hold or gating can also be used to two factors limit the applicability of their approach. Firstly, improve the apparent accuracy, at a cost in usability. the FLE of individual in vivo landmarks are not independent Collins et al. [9] investigate the effect of variation in sur- random variables, as they will all be influenced by system- face reconstruction protocol on rigid and non-rigid surface- atic errors in calibration and tracking of the laparoscope and based registration. They show that a system using rigid tissue motion. Independence of FLE is a key assumption of registration can be expected to have registration errors around [11] and derived works; therefore, use of these relationships 10 mm, while deformable registration can get down to when the assumption is not true can significantly underesti- approximately 6 mm. These figures are also in agreement mate TRE [19]. Secondly, in our calculation of RPE, errors with our results. normal to the camera lens are effectively discarded, because Kang et al. [13] propose an AR laparoscopic system they cannot be estimated from a 2D image. This creates a that avoids some of the problems of soft tissue motion and non-linear transformation from 3D misalignment errors to deformation between scan and surgery by only using intra- 2D RPE. Therefore, it is not clear that RPE can be safely operatively acquired ultrasound images. They report errors used as a proxy for FLE. of approximately 3 mm for their ultrasound only AR system. In our pre-clinical work, only point landmarks were used The primary source of errors in such a system will be tracking for validation [21]; however, during our ongoing in vivo val- and calibration errors, again providing a useful comparison idation we have found it extremely difficult to identify point with our system. landmarks on the human liver. In general, the landmarks we Hayashi et al. [12] present a novel registration method for have been able to use are concentrated around the high cur- gastric surgery, using subsurface landmarks to progressively vature points close to the falciform ligament. In contrast, it is improve the registration as and when they become visible dur- possible to identify line landmarks across the entire visible ing resection. They report accuracies around 13 mm, which edge of the liver. To enable validation of the system in vivo, is similar to our best achieved accuracy of 12 mm. Interest- we have therefore developed a novel algorithm to measure ingly they report that their surgeons believe the system would RPE using both point and line landmark features. become useful at accuracies of 10 mm, as the surgeon should With this paper, we make three important and novel con- be able to mentally compensate for the residual registration tributions. We test the validity of using RPE derived from errors caused by deformation and motion. point and landmark features to estimate subsurface TRE, in Amir-Khalili et al. [1] propose displaying contours show- so doing we enable the translation from pre-clinical to clinical ing uncertainty around the displayed targets. Alternatively, research. Secondly, the algorithm is applied to 9 in vivo cases, 123 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 867 Fig. 1 The right liver lobe as seen through the laparoscopic camera, estimate of the accuracy of overlay for non-visible vessels, veins (blue left image. The right image shows the same scene augmented using the and purple) and arteries (red). Also visible is the gall bladder (yellow SmartLiver system. The outline of the liver, shown as an orange line, outline) and a tumour (green) can be compared to the visible liver outline. The mismatch gives an to our knowledge this is the first attempt at a quantitative eval- Steps 6 and 7 in Fig. 2 define the transform from model uation of a liver AR IGS system on multiple patients. Lastly space to world space, henceforth denoted T . Once T M2W M2W we describe the ongoing development of the SmartLiver sys- is estimated the surgeon can refer to the augmented real- tem, including the use of a novel rendering engine to enable ity display, to localise subsurface anatomy. Steps 6 and/or 7 in vivo visualisation of misalignment errors and an improved can be repeated to give a new estimate of T if the liver M2W user interface. moves significantly. The visualisation (Fig. 1) shows visible anatomy as a 2D outline and internal anatomy as a depth fogged surface model. Visualisation is implemented using the Visualisation Library. The surgeon can use the mismatch Materials and methods between the visible and projected outlines to make a rapid assessment of the system accuracy. Analysis of registration SmartLiver surgery workflow using surface-based accuracy was performed after surgery, using data saved dur- registration ing surgery. These data consist of video and tracking data recorded throughout the procedure, calibration data for the The SmartLiver system hardware consists of a workstation laparoscope, and any estimates of T from in-theatre reg- M2W PC and a Polaris Spectra optical tracking system, mounted istrations. on a custom built trolley with an un-interruptible power supply. The PC runs custom software based on the NifTK Estimation of re-projection error software platform [8]. The PC includes an NVIDIA SDI cap- ture card and an NVIDIA K6000 GPU. In theatre, the system Errors in augmented reality can be estimated in some appli- stands next to the laparoscopic stack, allowing the surgeon cations where features are visible in both the video and in the to see an augmented reality overlay near their existing line projected model. This approach was described in our previ- of sight. ous publication [21] on pre-clinical and phantom data and is Figure 2 shows the software flowchart and user inter- extended here. face from start up to augmented reality overlay. Up until the Landmark points on the CT derived model and on the patient being ready for surgery, set-up time does not impact video data were manually identified by a surgeon who had on total theatre time. Once the patient is anaesthetised and been trained in the use of our software. Point and line picking ready for surgery, time is critical, hence the need for a well- on the model was done using NifTK [8], utilising MITK’s defined work flow and simple user interface. The in vivo data [14] point set interaction plugin. We wrote a custom point and reported in this paper was gathered using earlier versions of line picking application for the video data, which now forms the user interface. Because the user interface was often dif- part of the NifTK software suite. The software scans through ficult to use the quality of any registrations performed in a recorded video file stopping every n frames where n is set theatre is highly variable, as will be seen in the results. by the user, typically between 25 and 100 frames, depending 1 2 Northern Digital Inc. www.ndigital.com. www.visualizationlibrary.org. 123 868 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 5: Liver surface 1: Patient model patches are re- loaded and checked constructed using by the user. [22]. 2: Tracking and 6: The user manu- video data sources ally aligns the model started and status to video, using on checked. screen buttons. 3: Tracking collar is attached to la- 7: ICP registers re- paroscope, before constructed surfaces covering with sterile to model [21]. drapes. 8: Overlay is ready, 4: Laparoscope is individual anatomy calibrated using objects can be method from [20]. turned on/of. Fig. 2 Flow diagram of the SmartLiver IGS software. The user runs through 7 tabbed screens, moving from system initialisation to registration and overlay. To provide the clearest possible images, we have used a mixture of images from clinical use (panels 3, 4, and 8) and phantom testing (panels1,2,5,6,7) on the length of the recorded video. The software finds the If the geometric error (in mm) remains the same, the pixel nearest (in time) tracking data to the video frame and checks error will increase as the camera gets closer to the object. To the timing difference. If the tracking data are from within 20 counter this problem, we “re-project” the on-screen errors ms of the video frame the user is shown a pair of still images onto a plane parallel to the camera frame at the distance of the from the left and right channels. If the timing difference in corresponding model feature. The distance between the two greater than 20 ms the frame is skipped. points on this plane can be measured in millimetres. Because When presented with the two still images the user is able the on-screen point is back projected onto a plane passing to click on either of them to define visible landmarks. The through the corresponding model point, there is no error in user can toggle between point and line selection mode. The the direction normal to the camera plane (the z direction). landmarks correspond to those selected on the patient model. The above approach was used on phantom and pre-clinical Landmarks not visible in a given frame are simply excluded. data using landmark points [21]. However, we found it was We have written another application to determine RPE difficult to identify corresponding landmark points for in vivo using the landmark points, the camera calibration, the camera data. Specifically, it was very difficult to find point features tracking data, and T . For each frame of video where away from the centre of the liver (near the falciform liga- M2W landmark points have been picked, the error in pixels between ment). In contrast, line features, such as the liver edges, can the picked landmark and its projected location on the model be identified across the entire liver and used by the surgeon to is calculated. Landmarks that do not project onto the screens assess accuracy. Therefore, the methodology was extended visible area are excluded from the analysis. to allow the use of line features on the liver surface. The user Representing errors in pixels is problematic for two rea- defines lines as a set of discrete vertices on both the model sons. Firstly, it has no physical meaning, the surgeon is and the video. When calculating errors, the lines on the video interested in how the system errors compare with anatomy, images are treated as a set of discrete vertices, whilst linear for example the smallest vessel size that can be safely cut interpolation between vertices is used on the model. Figure through and cauterised (approx 3 mm). Secondly, it makes 3 shows examples of line and point features identified on no account of the distance of the object from the camera. phantom and in vivo data. The question of how to measure 123 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 869 RPE using lines is more ambiguous than for points. We use in world space; however, we do not use this method as it the following algorithm: gives an inaccurate measure of the overlay errors observed in the SmartLiver system. Use of a separate pointer results in errors in the hand-eye and left to right lens calibration of 1. Define uniquely identifiable points and lines (points con- the stereo laparoscope showing up as a linear offset. The nected by straight segments) on the CT derived liver SmartLiver system avoids the need for a highly accurate surface model. hand-eye calibration by performing all localisation and over- 2. For a given video frame, mark any visible points and lines. lay in the coordinate system of the laparoscope lens. The liver Partial lines may be used, i.e. there is no requirement that model is located relative to the laparoscope lens position at the whole line is visible on the video frame. some time zero. The model is placed in world coordinates 3. Each line vertex on the image is re-projected along a ray using the hand-eye transform and tracking data. The model through the camera’s origin. is subsequently projected on to the screen using the same 4. Transform model features to the camera lens’ coordinate hand-eye transform. Provided the laparoscope motion is lim- system using T and the world to camera transform. M2W ited between time zero and the time of AR projection the 5. For each ray, find the closest point (x) on the correspond- inaccuracies in the hand-eye calibration largely cancel out. ing model line. As a clinical laparoscope is constrained by the trocar, we 6. Define a plane ( p) parallel to the camera image plane have found this to be the case during pre-clinical and clinical passing though (x). evaluation of the system. 7. Compute the distance between point x and the intersec- To get a more relevant error measure T is found using tion of the ray with plane p. M2W stereo triangulation as follows. The pin head positions are 8. The mean distance for all vertices of the re-projected line manually defined in multiple stereo image pairs taken from is the RPE for that feature. a video sequence of the uncovered pin heads. The 3D posi- tion of the pin relative to the left camera lens is triangulated Experiment 1: correlation of TRE and RPE on a liver using the pixel location in each stereo pair, the two cameras’ phantom intrinsic matrices and right to left lens transform. The tri- angulated points are placed in world coordinates using the The assumption that RPE can be used to estimate TRE is hand-eye and tracking transforms. The result is a point cloud fundamental to the utility of our proposed IGS system. We in world space for each pin head. The pin heads defined in test this assumption here. To estimate the system’s accuracy the model are registered to the centroids of these point clouds in localising subsurface landmarks a custom made silicone by minimising fiducial registration error (FRE) a per Arun et phantom was utilised, see Fig. 4. al. [2]. RPE for this ideal model to world transform (denoted The shape of the phantom was taken from a CT scan of T ) will not be zero, as errors due to tracking, cali- M2W (i ) an adult male liver. The external appearance was designed to bration, and point picking will still be present; however, the be representative of a healthy adult liver to enable testing of RPE will be approximately minimised, giving the surgeon the computer vision algorithms on the phantom [21]. The outer best possible estimate of the position of the subsurface tar- part of the liver phantom is made from flexible silicone and gets. Therefore, T is assigned a zero TRE. Any other M2W (i ) can be repeatably mounted on a set of 9 rigid pins inserted model to world transform for the phantom data set can be into a moulded epoxy base, see Fig. 4. This configuration described in terms of its TRE relative to T . M2W (i ) enables future work on deformable registration, by utilising bases with different pin geometry. The experiment we performed consisted of: For this paper, we treat the 9 positioning pins as subsur- 1. Identify landmark points and lines in the CT model of face targets, so the accuracy of a given estimate of T can M2W the liver phantom. be assessed by removing the flexible liver phantom and mea- 2. Record a tracked video sequence of the surface of the suring the pin head locations. This method depends on the liver phantom. repeatability of the positioning of the flexible liver phantom 3. Remove the silicone phantom, record a tracked video on the base, which was checked by taking 2 CT scans, with sequence of the subsurface pins. the liver phantom removed and replaced between each scan. 4. Identify landmark points in both videos, plus lines in the The CT scans were then aligned using the pin head positions surface videos. and the alignment of the liver phantom surfaces compared 5. Measure the RPE of landmarks (pin heads and surface visually. No significant misalignment was observed. points and lines) using T . M2W (i ) The model to world transform, T , could be found M2W by using a separate tracked pointer to locate the pin heads The RPE thus found will be substantially lower than that www.healthcuts.co.uk. observed for in vivo data due to the absence of numerous 123 870 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 Fig. 3 Example projection using the surface features on the phantom (left) and on in vivo data. The on-screen features (shown in yellow) are defined on the recorded video images. The projected features (in white) are projected from the model using the estimated T M2W Fig. 4 The silicone liver phantom used for validation. The exterior (left) hand image shows the relative positions of the subsurface targets and is representative in appearance and geometry of an adult male liver. The surface landmarks. The 9 targets are shown in red, the 6 peripheral sur- internal pins (centre, highlighted in red) secure the liver phantom and face point landmarks are shown in yellow, the 2 central landmarks in act as subsurface landmarks for the measurement of TRE. The right- green and the 9 line features in blue error sources encountered in vivo, most significantly errors ated at each integer value of normalised Euclidean distances due to liver motion and deformation, but also the difficulty from 1 to 20 in. The range of normalised Euclidean distances in achieving the optimum rigid body registration. Though was set to provide a usable distribution of results at clinically the sources of error are varied, we make the assumption that representative RPE. their combined effect can be modelled using perturbations ), TRE and At each perturbed transform (denoted T M2W ( p) of T . To create sufficient data to test for correlation RPE were calculated for each available landmark. RMS val- M2W (i ) between RPE and TRE, we generated 20,000 random per- ues for each measure over multiple landmarks were then turbations of T , and measured the root mean square calculated and reported. RMS TRE was calculated using Eq. M2W (i ) (RMS) values of RPE and TRE at each. 1, where X is the position vector for each of the nine targets Random perturbations were defined by 6 independent ran- (pin heads) in world coordinates. dom variables, 3 translations and 3 rotations. All rotations were about the centroid of the liver phantom. Translations were randomly sampled from a zero mean normal distribu- TRE = (X − T X ) (1) RMS i M2W ( p) i tion of standard deviation 1.0 mm. Rotations were randomly i =1 sampled from a zero mean normal distribution of standard ◦ ◦ deviation 1.2 . The scaling (1.2 per mm) was set so that a translation or rotation of 1 standard deviation results in Three measures of RMS RPE were calculated using different the same mean absolute displacement across the liver phan- subsets of the surface features shown in Fig. 4. The first uses tom. Rotations and translations were then scaled (using the all 8 available surface point landmarks, the second only uses same scalar for all six vectors) to give a defined normalised the 2 point landmarks near the falciform ligament to represent Euclidean distance from T . Sampling in this way gen- M2W (i ) the sort of point features that can be located in vivo. The last erates perturbations uniformly distributed along each of the 6 measure of RMS RPE uses these 2 point landmarks together degrees of freedom. 1000 random perturbations were gener- with 9 line features, predominantly along the front edge of 123 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 871 the liver phantom, representative of the line features that can with the laparoscope again moved steadily around the phan- be located in vivo. tom at a speed of approximately 35 mm/s. A total of 44 frames were manually annotated, giving 87 samples of the pin head Experiment 2: evaluation of in vivo data positions. The pin heads picked in the CT model and the pin heads For in vivo data, there exists no ideal transform as the posi- triangulated from the video form two sets of ordered fiducial points, allowing T to be found by minimising FRE as tion of subsurface landmarks remains unknown. However, we M2W (i ) were able to collect substantial amounts of in vivo clinical per [2]. The residual FRE was 2.55 mm, suggesting an error data, following as closely as possible the protocol described in localising each pin head of around 2.89 mm using equation in section “SmartLiver surgery workflow using surface-based 10 from [11]. The RMS RPE at T was 2.15 mm M2W (i ) registration”. To date we have evaluated the accuracy on nine Figure 5 plots the distribution of RMS TRE versus RMS clinical procedures. In each case landmark points were iden- RPE and FRE. Each of the 20,000 registrations were binned tified in the CT-derived liver model and in several hundred according to their RMS RPE in 1 mm bins centred around frames of video per patient. integer values from 1 to 30mm, for each bin the mean Where available, any model to world transforms, T , and standard deviation of RMS TRE is plotted. Correlation M2W between RMS TRE and RMS RPE or FRE was measured determined by manual alignment in theatre were used to mea- sure RMS RPE on surface landmarks. Where sufficient point using Pearson’s correlation coefficient (r), and the mean stan- dard deviation (σ ) over all bins. Figure 5a plots RMS TRE landmarks were available it was also possible to estimate T based on triangulation and registration of surface land- versus RMS RPE and FRE when evaluated on the pin heads M2W mark points. Such landmark-based registration is used by themselves. Both RMS RPE and FRE correlate very well similar liver IGS systems [6,12], so it makes a useful com- with RMS TRE, an unsurprising result given that the mea- parison with our system. surements are all made on the same features, but confirmation In most cases, we also recorded ex vivo laparoscope cali- that RPE can be used as a proxy for TRE in the ideal case. bration data, either of a cross-hair [20] or in earlier cases of a Figure 5b plots RMS TRE versus RMS RPE when RPE chessboard calibration grid [23]. These calibration data were is calculated using surface landmark features only, whilst used to assess the accuracy of the system in theatre in the the RMS TRE is measured at the subsurface pin heads. The first (red) line shows the result for when all (8) surface point absence of tissue motion. Chessboard corners or cross-hair centres were manually identified in the video data for tens landmarks are used for measuring RPE, similarly to our pre- of frames per data set. These feature were triangulated to clinical results ([21]). In this case, RMS RPE provides a good world coordinates, and these used to measure re-projection predictor of RMS TRE with a Pearson’s correlation coeffi- error using the method described in section “Estimation of cient of 0.79. This suggests that in cases where surface point re-projection error”. Using this method will include errors landmarks are available over a significant area they provide in picking the points in the video frames, allowing a more a useful indicator of subsurface accuracy. The second (blue) direct comparison with the in vivo accuracy, in contrast to line shows a more clinically realistic situation in which only reporting the calibration residual errors. those point landmarks near the falciform ligament are used to calculate RPE. In this case, the correlation coefficient is significantly reduced (to 0.44), and what correlation there is predominantly occurs above RMS RPE of 15 mm, mak- Results ing this measurement of questionable clinical use. The third (green) line in Fig. 5b shows how correlation can be improved Experiment 1: correlation of TRE and RPE on a liver phantom by incorporating surface line features, which can be identi- fied in vivo. Video of the liver phantom surface was recorded, imaging the surface point and line landmarks. A total of 2296 stereo Experiment 2: evaluation of in vivo data images were recorded (2 × 540 × 1920 pixels). The laparo- scope was moved steadily by hand to try and image each Data from nine patients have been analysed. Acquisitions landmark, at an average speed (measured at the lens) of were made during surgery with the laparoscope being slowly approximately 30 mm/s. A total of 68 images were manu- moved by hand. Acquisition time and speed varied, but typi- ally annotated with the positions of point and line landmarks cally consisted of 1–3 minutes of video with the laparoscope by an experienced research scientist, to give a total of 76 lens moving at around 10–20 mm/s. The number of point point landmarks and 104 line landmarks. The flexible sili- and line features used varied between patients. The mini- cone liver phantom was then removed from its base. A total mum number of points was 3 and the maximum was 7. The of 2460 stereo pairs of the securing pin heads were recorded, minimum number of lines was 5 and the maximum was 9. 123 872 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 (a) (b) Fig. 5 RMS RPE measured on visible features versus RMS TRE mea- removed from its base.) b shows the RMS RPE measured using 3 subsets sured at the pin heads for the phantom. a shows the RMS RPE measured of features on the on the liver phantom’s surface using 9 subsurface pin heads (i.e. with the silicone liver phantom An average 469 frames of video data were manually anno- Discussion tated per patient, with a minimum of 80 and a maximum of 1909 frames. Annotation of the video and CT was done by Several tentative conclusions can be drawn from Table 1.The an experienced laparoscopic surgeon. In all cases RPE was combination of dynamic and static deformation and laparo- measured on a static calibration pattern, on a set of triangu- scope tracking and calibration errors is at least 10 mm. This lated in vivo point landmarks, and a set of in vivo lines and is the best case accuracy for a laparoscopic IGS utilising points registered using point-based registration. The result- optical tracking and a rigid model. There is a slight improve- ing RMS RPE is recorded in the first three numerical columns ment in RMS RPE for retrospective manual alignment versus of Table 1. in theatre manual alignment, probably due the time pres- The last three numerical columns in Table 1 show the sure and ergonomic compromise present during surgery. The results of registrations performed using the SmartLiver sys- best RMS RPE was found using the surface-based ICP; how- tem’s user interface. In four cases, registration was performed ever, there remain significant challenges to make this process during surgery, (Manual Live Alignment), in three differ- robust. ent cases manual alignment was performed after surgery In vivo results indicate that it is possible to achieve appar- on recorded data (Manual Retro. Alignment). Registration ent accuracies (RPE) of around 12 mm, which correspond to using the surface-based Iterative Closest Point (ICP) algo- mean subsurface accuracies around 15 mm (green line in Fig. rithm was performed once, using surface patches grabbed 5b) with a rigid registration system. Whether such accuracy during surgery. Due to the small sample sizes, we have not is clinically useful is currently unknown. The SmartLiver performed any statistical comparison of the different regis- IGS system is at present the only laparoscopic liver surgery tration methods. system where an augmented reality overlay is attempted rou- As in our pre-clinical work [21], it is useful to analyse the tinely. Clinical evaluation is ongoing to try and link accuracy in vivo results in terms of what error sources contribute to achieved to clinical outcome. Clinical evaluation will also the overall error. The bottom 10 rows of Table 1 show which enable an analysis of the most useful way to report errors, error sources contribute to each result. i.e. here we report RMS errors, whereas it may be more rel- evant to focus on the extreme values. Anecdotally, surgeons were generally impressed with the overlays achieved, giving encouragement that the system may be useful at its current accuracy level. Our long-term aim is to develop a clinical guidance sys- Excluding two outliers the minimum number of frames used is 210, and the maximum 462. tem which can reliably achieve accuracies better than 5 mm, 123 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 873 Table 1 Average RMS RPE errors measured for the human clinical data, classified by what error sources contribute to each error measurement Registration method Calib. Error Triang. Points Point-Based Reg. Manual Live Align. Manual Retro. Align. ICP Retro. Align. Average RMS RPE 7.5 10.2 16.3 25.0 19.4 12.3 Standard deviation (Samples) 5.2 (9) 3.3 (9) 3.9 (9) 8.8 (4) 7.4 (3) − (1) Contributing errors Laparoscope tracking Laparoscope calibration Picking points in video Picking points in CT – – Ordered point-based registration – –  –– Manual registration – – –  – Operating room conditions – – –  –– ICP surface-based registration – – – – – Static deformation (insuflation) – – Dynamic deformation (breathing) – Cells containing ticks indicate that a given error source (rows) contributes to total error for a given registration method (columns) in order to allow the surgeon to navigate around vessels use for error estimation. We have begun work [18] looking at of that size. However, this target was set in the absence the what surface features provide the best registration, which of an agreed method to measure accuracy, so is somewhat could be extended so that the overlay only shows portions arbitrary. Nonetheless, the results presented here indicate of the liver edge to maximise correlation between apparent that accuracies better than 10 mm can only be achieved by RPE and TRE. deformable registration. Deformable registration and breath- ing motion compensation [16] of the liver has been shown to be technically possible by several groups [9,17]. This raises Conclusion the question of how the surgeon interprets alignment errors when the model has been computationally deformed. Fur- We have described some aspects of the in vivo clinical use of ther work could compare TRE and RPE over a wider range the SmartLiver AR IGS system. We have highlighted some of liver shapes and incorporating deformable registration. of the many challenges involved in the transition from pre- Our proposed approach of using the 2D projected organ out- clinical to clinical research in IGS. Not least of these is the line should continue to allow a rapid in vivo assessment of need for a clear and well-validated method to determine the error. in vivo accuracy. The algorithm we have presented, tested, Our phantom results, Fig. 5, indicate that the addition of and used should enable the evaluation of the IGS system line landmark features results in a smaller RPE for the same on a larger patient cohort, potentially showing a correlation TRE. This is likely due to the greater degree of freedom in between overlay accuracy and clinical outcomes. matching two lines. In this instance, this has helped bring the RPE values closer to TRE; however, this result may be Acknowledgements This publication presents independent research specific to the geometry tested. Further work is required to funded by the Health Innovation Challenge Fund (HICF-T4-317), determine whether this is true in a more general case. a parallel funding partnership between the Wellcome Trust and the Department of Health. The views expressed in this publication are those Based on the phantom results, positive correlation between of the author(s) and not necessarily those of the Wellcome Trust or the RPE measured at the surface and TRE at subsurface land- Department of Health. S.T. was further supported by the EPSRC grant marks breaks down below RMS RPE of around 6 mm when “Medical image computing for next-generation healthcare technology” using points and lines and 10 mm when using central points [EP/M020533/1]. S.O., D.H., and M.J.C. were supported by the Well- come/EPSRC [203145Z/16/Z]. only. The main cause of this is likely to be the geometric rela- tionship between the position of surface landmarks and the subsurface targets. In theory, the same rules that govern the Compliance with ethical standards design of fiducial markers and tracked instruments [11,19,24] can inform the ideal choice of in vivo surface landmarks to Conflict of interest The authors declare that they have no conflict of interest. 123 874 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 Ethical approval All procedures performed in studies involving human 11. Fitzpatrick J, West J (2001) The distribution of target registration participants were in accordance with the ethical standards of the insti- error in rigid-body point-based registration. IEEE Trans Med Imag- tutional and/or national research committee and with the 1964 Helsinki ing 20(9):917–927 Declaration and its later amendments or comparable ethical standards. 12. Hayashi Y, Misawa K, Hawkes DJ, Mori K (2016) Progressive internal landmark registration for surgical navigation in laparo- Informed consent Informed consent was obtained from all individual scopic gastrectomy for gastric cancer. Int J Comput Assist Radiol Surg 11(5):837–845 participants included in the study. 13. Kang X, Azizian M, Wilson E, Wu K, Martin AD, Kane TD, Peters Open Access This article is distributed under the terms of the Creative CA, Cleary K, Shekhar R (2014) Stereoscopic augmented reality Commons Attribution 4.0 International License (http://creativecomm for laparoscopic surgery. Surg Endosc 28(7):2227–2235 ons.org/licenses/by/4.0/), which permits unrestricted use, distribution, 14. Nolden M, Zelzer S, Seitel A, Wald D, Müller M, Franz AM, and reproduction in any medium, provided you give appropriate credit Maleike D, Fangerau M, Baumhauer M, Maier-Hein L, Maier-Hein to the original author(s) and the source, provide a link to the Creative KH, Meinzer HP, Wolf I (2013) The medical imaging interaction Commons license, and indicate if changes were made. toolkit: challenges and advances. Int J Comput Assist Radiol Surg 8(4):607–620 15. Pratt P, Mayer E, Vale J, Cohen D, Edwards E, Darzi A, Yang GZ (2012) An effective visualisation and registration system for References image-guided robotic partial nephrectomy. J Robot Surg 6:23–31 16. Ramalhinho J, Robu M, Thompson S, Edwards P, Schneider C, 1. Amir-Khalili A, Nosrati M, Peyrat JM, Hamarneh G, Abugharbieh Gurusamy K, Hawkes D, Davidson B, Barratt D, Clarkson MJ R (2013) Uncertainty-encoded augmented reality for robot-assisted (2017) Breathing motion compensated registration of laparoscopic partial nephrectomy: a phantom study. In: Medical Image Com- liver ultrasound to ct. In: SPIE Medical Imaging, pp 101,352V– puting and Computer-Assisted Intervention Workshop on Medical 101,352V. International Society for Optics and Photonics Imaging and Augmented Reality (MICCAI MIAR), vol 8090, pp 17. Reichard D, Häntsch D, Bodenstedt S, Suwelack S, Wagner M, 182–191 Kenngott H, Müller-Stich B, Maier-Hein L, Dillmann R, Speidel 2. Arun KS, Huang TS, Blostein SD (1987) Least-squares fitting of S (2017) Projective biomechanical depth matching for soft tissue two 3-D point sets. IEEE Trans Pattern Anal Mach Intell 5:698–700 registration in laparoscopic surgery. Int J Comput Assist Radiol 3. Bano J, Nicolau S, Hostettler A, Doignon C, Marescaux J, Soler Surg 12(7):1101–1110 L (2013) Registration of preoperative liver model for laparoscopic 18. Robu MR, Edwards P, Ramalhinho J, Thompson S, Davidson B, surgery from intraoperative 3D acquisition. In: Augmented reality Hawkes D, Stoyanov D, Clarkson MJ (2017) Intelligent viewpoint environments for medical imaging and computer-assisted interven- selection for efficient ct to video registration in laparoscopic liver tions, Lecture Notes in Computer Science, vol 8090, pp 201–210. surgery. Int J Comput Assist Radiol Surg 12(7):1079–1088 Springer, Berlin Heidelberg 19. Thompson S, Penney G, Dasgupta P, Hawkes D (2013) Improved 4. Bartoli A, Collins T, Bourdel N, Canis M (2012) Computer assisted modelling of tool tracking errors by modelling dependent marker minimally invasive surgery: is medical computer vision the answer errors. IEEE Trans Med Imaging 32(2):165–177 to improving laparosurgery? Med Hypotheses 79(6):858–863 20. Thompson S, Stoyanov D, Schneider C, Gurusamy K, Ourselin S, 5. Bernhardt S, Nicolau SA, Soler L, Doignon C (2017) The status of Davidson B, Hawkes D, Clarkson MJ (2016) Hand-eye calibration augmented reality in laparoscopic surgery as of 2016. Med Image for rigid laparoscopes using an invariant point. Int J Comput Assist Anal 37:66–90 Radiol Surg 11(6):1071–1080 6. Buchs NC, Volonte F, Pugin F, Toso C, Fusaglia M, Gavaghan 21. Thompson S, Totz J, Song Y, Johnsen S, Stoyanov D, Ourselin S, K, Majno PE, Peterhans M, Weber S, Morel P (2013) Augmented Gurusamy K, Schneider C, Davidson B, Hawkes D, Clarkson MJ environments for the targeting of hepatic lesions during image- (2015) Accuracy validation of an image guided laparoscopy system guided robotic liver surgery. J Surg Res 184(2):825–831 for liver resection. In: SPIE medical imaging. International society 7. Ciria R, Cherqui D, Geller D, Briceno J, Wakabayashi G (2016) for optics and photonice, vol 9415, pp 941509–941509–12. https:// Comparative short-term benefits of laparoscopic liver resection. doi.org/10.1117/12.2080974 Ann Surg 263(4):761–777 22. Totz J, Thompson S, Stoyanov D, Gurusamy K, Davidson B, 8. Clarkson M, Zombori G, Thompson S, Totz J, Song Y, Espak M, Hawkes DJ, Clarkson MJ (2014) Fast Semi-dense Surface Recon- Johnsen S, Hawkes D, Ourselin S (2015) The NifTK software struction from Stereoscopic Video in Laparoscopic Surgery. In: platform for image-guided interventions: platform overview and Information processing in computer-assisted interventions, Lec- NiftyLink messaging. Int J Comput Assist Radiol Surg 10(3):301– ture Notes in Computer Science, vol 8498. Springer International 316 Publishing, pp 206–215 9. Collins J, Weis J, Heiselman J, Clements L, Simpson A, Jarnagin 23. Tsai R (1987) A versatile camera calibration technique for high- W, Miga M (2017) Improving registration robustness for image- accuracy 3D machine vision metrology using off-the-shelf tv guided liver surgery in a novel human-to-phantom data framework. cameras and lenses. IEEE J Robot Autom 3(4):323–344 IEEE Trans Med Imaging 36:1502–1510 24. West JB, Maurer CR (2004) Designing optically tracked instru- 10. Conrad C, Fusaglia M, Peterhans M, Lu H, Weber S, Gayet B ments for image-guided surgery. IEEE Trans Med Imaging (2016) Augmented reality navigation surgery facilitates laparo- 23(5):533–545 scopic rescue of failed portal vein embolization. J Am Coll Surg 223(4):e31–e34 http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png International Journal of Computer Assisted Radiology and Surgery Springer Journals
Free
10 pages

Loading next page...
 
/lp/springer_journal/in-vivo-estimation-of-target-registration-errors-during-augmented-r6sQKbKbIw
Publisher
Springer Journals
Copyright
Copyright © 2018 by The Author(s)
Subject
Medicine & Public Health; Imaging / Radiology; Surgery; Health Informatics; Computer Imaging, Vision, Pattern Recognition and Graphics; Computer Science, general
ISSN
1861-6410
eISSN
1861-6429
D.O.I.
10.1007/s11548-018-1761-3
Publisher site
See Article on Publisher Site

Abstract

Purpose Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. Methods The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. Results The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. Conclusion We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery. Keywords Image-guided surgery · Augmented reality · Liver · Validation · Error measurement · Laparoscope Introduction addressed by introducing external images to the procedure, known as image-guided surgery (IGS). A recent review [5] Laparoscopic surgery for liver resection is in general prefer- describes the state of the art of laparoscopic IGS. In most able to open surgery, due to the significant reduction in cases Augmented Reality (AR), where a model is overlaid post-operative pain and scarring [7]. Currently only a minor- directly on the laparoscopic video, is avoided due to the dif- ity of patients at specialist hospitals undergoes laparoscopic ficulty in creating a well aligned overlay on a deforming and resection. One reason for the low rate of laparoscopic resec- mobile organ. One approach is to show a solid model derived tion is the difficulty surgeons have in identifying key anatomy from pre-operative Computed Tomography (CT) next to the through a laparoscopic camera and monitor [4]. This can be surgical scene. Whilst the model may be orientated to match the surgical scene, it is up to the surgeon to identify the final correspondence between the model and the video. The first B Stephen Thompson reported use of an AR overlay in laparoscopic liver surgery s.thompson@ucl.ac.uk is reported by [10] making the case for the benefits of an AR Wellcome/EPSRC Centre for Interventional and Surgical laparoscopic system. We developed the “SmartLiver” IGS Sciences, University College London, London, UK system to show the liver model overlaid on the video feed Division of Surgery and Interventional Science, University from a laparoscope. This spares the surgeon some cognitive College London, London, UK 123 866 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 load; however, it raises questions in terms of perception and Pratt el al. [15] overlay a wire-frame of the organ surface. interpretation of errors. In our experience, these approaches are too visually clut- In any AR system, there will be misalignment between tered for liver surgery, hence our proposed use of outline the overlay and what is visible on the screen. Furthermore, rendering. Communication of alignment errors gets harder it must remain the responsibility of the surgeon to interpret when deformable registration is used. Bano et al. [3]show and act upon any apparent error. To enable this, we have two results relevant to our study in their pre-clinical work on implemented advanced visualisation algorithms, to allow the using intra-operative C-arm to inform a non-rigid registra- surgeon to rapidly identify AR overlay errors. Figure 1 shows tion of the liver. Firstly, in their porcine model, deformation an in vivo overlay using our system. A key feature of the due to insufflation is a significant source of registration error overlay is that we have maintained a projected 2D outline of (around 8 mm). Furthermore, the error measured at internal the liver, which can be compared to the visible anatomy. The vessels is significantly higher (by approximately 6 mm) than outline enables an estimate of the accuracy of any overlaid the error measured at the liver surface. non-visible anatomy. Contributions of this Paper Background Our proposed method for in vivo estimation of errors uses the One reason for the slow progress of laparoscopic IGS is a visible misalignment of the liver outline (Fig. 1) to infer the lack of a realistic approach to the measurement and inter- misalignment of non-visible target anatomy. In this paper, we pretation of alignment errors. In contrast to orthopaedics define a measure of visible misalignment, re-projection error or neurosurgery, the anatomy of the abdomen is mobile, so (RPE), and test the assumption that RPE is a useful predictor IGS using rigid registration may suffer significant localised of the misalignment of subsurface targets, or target regis- errors. It is theoretically possible to use deformable registra- tration error (TRE). In part this can be estimated using the tion and motion models [17]; however, this adds complexity, relations between fiducial localisation error (FLE) and TRE and makes it harder for the surgeon to interpret the sys- originally put forward by Fitzpatrick and West [11]; however, tem’s performance. Breath hold or gating can also be used to two factors limit the applicability of their approach. Firstly, improve the apparent accuracy, at a cost in usability. the FLE of individual in vivo landmarks are not independent Collins et al. [9] investigate the effect of variation in sur- random variables, as they will all be influenced by system- face reconstruction protocol on rigid and non-rigid surface- atic errors in calibration and tracking of the laparoscope and based registration. They show that a system using rigid tissue motion. Independence of FLE is a key assumption of registration can be expected to have registration errors around [11] and derived works; therefore, use of these relationships 10 mm, while deformable registration can get down to when the assumption is not true can significantly underesti- approximately 6 mm. These figures are also in agreement mate TRE [19]. Secondly, in our calculation of RPE, errors with our results. normal to the camera lens are effectively discarded, because Kang et al. [13] propose an AR laparoscopic system they cannot be estimated from a 2D image. This creates a that avoids some of the problems of soft tissue motion and non-linear transformation from 3D misalignment errors to deformation between scan and surgery by only using intra- 2D RPE. Therefore, it is not clear that RPE can be safely operatively acquired ultrasound images. They report errors used as a proxy for FLE. of approximately 3 mm for their ultrasound only AR system. In our pre-clinical work, only point landmarks were used The primary source of errors in such a system will be tracking for validation [21]; however, during our ongoing in vivo val- and calibration errors, again providing a useful comparison idation we have found it extremely difficult to identify point with our system. landmarks on the human liver. In general, the landmarks we Hayashi et al. [12] present a novel registration method for have been able to use are concentrated around the high cur- gastric surgery, using subsurface landmarks to progressively vature points close to the falciform ligament. In contrast, it is improve the registration as and when they become visible dur- possible to identify line landmarks across the entire visible ing resection. They report accuracies around 13 mm, which edge of the liver. To enable validation of the system in vivo, is similar to our best achieved accuracy of 12 mm. Interest- we have therefore developed a novel algorithm to measure ingly they report that their surgeons believe the system would RPE using both point and line landmark features. become useful at accuracies of 10 mm, as the surgeon should With this paper, we make three important and novel con- be able to mentally compensate for the residual registration tributions. We test the validity of using RPE derived from errors caused by deformation and motion. point and landmark features to estimate subsurface TRE, in Amir-Khalili et al. [1] propose displaying contours show- so doing we enable the translation from pre-clinical to clinical ing uncertainty around the displayed targets. Alternatively, research. Secondly, the algorithm is applied to 9 in vivo cases, 123 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 867 Fig. 1 The right liver lobe as seen through the laparoscopic camera, estimate of the accuracy of overlay for non-visible vessels, veins (blue left image. The right image shows the same scene augmented using the and purple) and arteries (red). Also visible is the gall bladder (yellow SmartLiver system. The outline of the liver, shown as an orange line, outline) and a tumour (green) can be compared to the visible liver outline. The mismatch gives an to our knowledge this is the first attempt at a quantitative eval- Steps 6 and 7 in Fig. 2 define the transform from model uation of a liver AR IGS system on multiple patients. Lastly space to world space, henceforth denoted T . Once T M2W M2W we describe the ongoing development of the SmartLiver sys- is estimated the surgeon can refer to the augmented real- tem, including the use of a novel rendering engine to enable ity display, to localise subsurface anatomy. Steps 6 and/or 7 in vivo visualisation of misalignment errors and an improved can be repeated to give a new estimate of T if the liver M2W user interface. moves significantly. The visualisation (Fig. 1) shows visible anatomy as a 2D outline and internal anatomy as a depth fogged surface model. Visualisation is implemented using the Visualisation Library. The surgeon can use the mismatch Materials and methods between the visible and projected outlines to make a rapid assessment of the system accuracy. Analysis of registration SmartLiver surgery workflow using surface-based accuracy was performed after surgery, using data saved dur- registration ing surgery. These data consist of video and tracking data recorded throughout the procedure, calibration data for the The SmartLiver system hardware consists of a workstation laparoscope, and any estimates of T from in-theatre reg- M2W PC and a Polaris Spectra optical tracking system, mounted istrations. on a custom built trolley with an un-interruptible power supply. The PC runs custom software based on the NifTK Estimation of re-projection error software platform [8]. The PC includes an NVIDIA SDI cap- ture card and an NVIDIA K6000 GPU. In theatre, the system Errors in augmented reality can be estimated in some appli- stands next to the laparoscopic stack, allowing the surgeon cations where features are visible in both the video and in the to see an augmented reality overlay near their existing line projected model. This approach was described in our previ- of sight. ous publication [21] on pre-clinical and phantom data and is Figure 2 shows the software flowchart and user inter- extended here. face from start up to augmented reality overlay. Up until the Landmark points on the CT derived model and on the patient being ready for surgery, set-up time does not impact video data were manually identified by a surgeon who had on total theatre time. Once the patient is anaesthetised and been trained in the use of our software. Point and line picking ready for surgery, time is critical, hence the need for a well- on the model was done using NifTK [8], utilising MITK’s defined work flow and simple user interface. The in vivo data [14] point set interaction plugin. We wrote a custom point and reported in this paper was gathered using earlier versions of line picking application for the video data, which now forms the user interface. Because the user interface was often dif- part of the NifTK software suite. The software scans through ficult to use the quality of any registrations performed in a recorded video file stopping every n frames where n is set theatre is highly variable, as will be seen in the results. by the user, typically between 25 and 100 frames, depending 1 2 Northern Digital Inc. www.ndigital.com. www.visualizationlibrary.org. 123 868 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 5: Liver surface 1: Patient model patches are re- loaded and checked constructed using by the user. [22]. 2: Tracking and 6: The user manu- video data sources ally aligns the model started and status to video, using on checked. screen buttons. 3: Tracking collar is attached to la- 7: ICP registers re- paroscope, before constructed surfaces covering with sterile to model [21]. drapes. 8: Overlay is ready, 4: Laparoscope is individual anatomy calibrated using objects can be method from [20]. turned on/of. Fig. 2 Flow diagram of the SmartLiver IGS software. The user runs through 7 tabbed screens, moving from system initialisation to registration and overlay. To provide the clearest possible images, we have used a mixture of images from clinical use (panels 3, 4, and 8) and phantom testing (panels1,2,5,6,7) on the length of the recorded video. The software finds the If the geometric error (in mm) remains the same, the pixel nearest (in time) tracking data to the video frame and checks error will increase as the camera gets closer to the object. To the timing difference. If the tracking data are from within 20 counter this problem, we “re-project” the on-screen errors ms of the video frame the user is shown a pair of still images onto a plane parallel to the camera frame at the distance of the from the left and right channels. If the timing difference in corresponding model feature. The distance between the two greater than 20 ms the frame is skipped. points on this plane can be measured in millimetres. Because When presented with the two still images the user is able the on-screen point is back projected onto a plane passing to click on either of them to define visible landmarks. The through the corresponding model point, there is no error in user can toggle between point and line selection mode. The the direction normal to the camera plane (the z direction). landmarks correspond to those selected on the patient model. The above approach was used on phantom and pre-clinical Landmarks not visible in a given frame are simply excluded. data using landmark points [21]. However, we found it was We have written another application to determine RPE difficult to identify corresponding landmark points for in vivo using the landmark points, the camera calibration, the camera data. Specifically, it was very difficult to find point features tracking data, and T . For each frame of video where away from the centre of the liver (near the falciform liga- M2W landmark points have been picked, the error in pixels between ment). In contrast, line features, such as the liver edges, can the picked landmark and its projected location on the model be identified across the entire liver and used by the surgeon to is calculated. Landmarks that do not project onto the screens assess accuracy. Therefore, the methodology was extended visible area are excluded from the analysis. to allow the use of line features on the liver surface. The user Representing errors in pixels is problematic for two rea- defines lines as a set of discrete vertices on both the model sons. Firstly, it has no physical meaning, the surgeon is and the video. When calculating errors, the lines on the video interested in how the system errors compare with anatomy, images are treated as a set of discrete vertices, whilst linear for example the smallest vessel size that can be safely cut interpolation between vertices is used on the model. Figure through and cauterised (approx 3 mm). Secondly, it makes 3 shows examples of line and point features identified on no account of the distance of the object from the camera. phantom and in vivo data. The question of how to measure 123 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 869 RPE using lines is more ambiguous than for points. We use in world space; however, we do not use this method as it the following algorithm: gives an inaccurate measure of the overlay errors observed in the SmartLiver system. Use of a separate pointer results in errors in the hand-eye and left to right lens calibration of 1. Define uniquely identifiable points and lines (points con- the stereo laparoscope showing up as a linear offset. The nected by straight segments) on the CT derived liver SmartLiver system avoids the need for a highly accurate surface model. hand-eye calibration by performing all localisation and over- 2. For a given video frame, mark any visible points and lines. lay in the coordinate system of the laparoscope lens. The liver Partial lines may be used, i.e. there is no requirement that model is located relative to the laparoscope lens position at the whole line is visible on the video frame. some time zero. The model is placed in world coordinates 3. Each line vertex on the image is re-projected along a ray using the hand-eye transform and tracking data. The model through the camera’s origin. is subsequently projected on to the screen using the same 4. Transform model features to the camera lens’ coordinate hand-eye transform. Provided the laparoscope motion is lim- system using T and the world to camera transform. M2W ited between time zero and the time of AR projection the 5. For each ray, find the closest point (x) on the correspond- inaccuracies in the hand-eye calibration largely cancel out. ing model line. As a clinical laparoscope is constrained by the trocar, we 6. Define a plane ( p) parallel to the camera image plane have found this to be the case during pre-clinical and clinical passing though (x). evaluation of the system. 7. Compute the distance between point x and the intersec- To get a more relevant error measure T is found using tion of the ray with plane p. M2W stereo triangulation as follows. The pin head positions are 8. The mean distance for all vertices of the re-projected line manually defined in multiple stereo image pairs taken from is the RPE for that feature. a video sequence of the uncovered pin heads. The 3D posi- tion of the pin relative to the left camera lens is triangulated Experiment 1: correlation of TRE and RPE on a liver using the pixel location in each stereo pair, the two cameras’ phantom intrinsic matrices and right to left lens transform. The tri- angulated points are placed in world coordinates using the The assumption that RPE can be used to estimate TRE is hand-eye and tracking transforms. The result is a point cloud fundamental to the utility of our proposed IGS system. We in world space for each pin head. The pin heads defined in test this assumption here. To estimate the system’s accuracy the model are registered to the centroids of these point clouds in localising subsurface landmarks a custom made silicone by minimising fiducial registration error (FRE) a per Arun et phantom was utilised, see Fig. 4. al. [2]. RPE for this ideal model to world transform (denoted The shape of the phantom was taken from a CT scan of T ) will not be zero, as errors due to tracking, cali- M2W (i ) an adult male liver. The external appearance was designed to bration, and point picking will still be present; however, the be representative of a healthy adult liver to enable testing of RPE will be approximately minimised, giving the surgeon the computer vision algorithms on the phantom [21]. The outer best possible estimate of the position of the subsurface tar- part of the liver phantom is made from flexible silicone and gets. Therefore, T is assigned a zero TRE. Any other M2W (i ) can be repeatably mounted on a set of 9 rigid pins inserted model to world transform for the phantom data set can be into a moulded epoxy base, see Fig. 4. This configuration described in terms of its TRE relative to T . M2W (i ) enables future work on deformable registration, by utilising bases with different pin geometry. The experiment we performed consisted of: For this paper, we treat the 9 positioning pins as subsur- 1. Identify landmark points and lines in the CT model of face targets, so the accuracy of a given estimate of T can M2W the liver phantom. be assessed by removing the flexible liver phantom and mea- 2. Record a tracked video sequence of the surface of the suring the pin head locations. This method depends on the liver phantom. repeatability of the positioning of the flexible liver phantom 3. Remove the silicone phantom, record a tracked video on the base, which was checked by taking 2 CT scans, with sequence of the subsurface pins. the liver phantom removed and replaced between each scan. 4. Identify landmark points in both videos, plus lines in the The CT scans were then aligned using the pin head positions surface videos. and the alignment of the liver phantom surfaces compared 5. Measure the RPE of landmarks (pin heads and surface visually. No significant misalignment was observed. points and lines) using T . M2W (i ) The model to world transform, T , could be found M2W by using a separate tracked pointer to locate the pin heads The RPE thus found will be substantially lower than that www.healthcuts.co.uk. observed for in vivo data due to the absence of numerous 123 870 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 Fig. 3 Example projection using the surface features on the phantom (left) and on in vivo data. The on-screen features (shown in yellow) are defined on the recorded video images. The projected features (in white) are projected from the model using the estimated T M2W Fig. 4 The silicone liver phantom used for validation. The exterior (left) hand image shows the relative positions of the subsurface targets and is representative in appearance and geometry of an adult male liver. The surface landmarks. The 9 targets are shown in red, the 6 peripheral sur- internal pins (centre, highlighted in red) secure the liver phantom and face point landmarks are shown in yellow, the 2 central landmarks in act as subsurface landmarks for the measurement of TRE. The right- green and the 9 line features in blue error sources encountered in vivo, most significantly errors ated at each integer value of normalised Euclidean distances due to liver motion and deformation, but also the difficulty from 1 to 20 in. The range of normalised Euclidean distances in achieving the optimum rigid body registration. Though was set to provide a usable distribution of results at clinically the sources of error are varied, we make the assumption that representative RPE. their combined effect can be modelled using perturbations ), TRE and At each perturbed transform (denoted T M2W ( p) of T . To create sufficient data to test for correlation RPE were calculated for each available landmark. RMS val- M2W (i ) between RPE and TRE, we generated 20,000 random per- ues for each measure over multiple landmarks were then turbations of T , and measured the root mean square calculated and reported. RMS TRE was calculated using Eq. M2W (i ) (RMS) values of RPE and TRE at each. 1, where X is the position vector for each of the nine targets Random perturbations were defined by 6 independent ran- (pin heads) in world coordinates. dom variables, 3 translations and 3 rotations. All rotations were about the centroid of the liver phantom. Translations were randomly sampled from a zero mean normal distribu- TRE = (X − T X ) (1) RMS i M2W ( p) i tion of standard deviation 1.0 mm. Rotations were randomly i =1 sampled from a zero mean normal distribution of standard ◦ ◦ deviation 1.2 . The scaling (1.2 per mm) was set so that a translation or rotation of 1 standard deviation results in Three measures of RMS RPE were calculated using different the same mean absolute displacement across the liver phan- subsets of the surface features shown in Fig. 4. The first uses tom. Rotations and translations were then scaled (using the all 8 available surface point landmarks, the second only uses same scalar for all six vectors) to give a defined normalised the 2 point landmarks near the falciform ligament to represent Euclidean distance from T . Sampling in this way gen- M2W (i ) the sort of point features that can be located in vivo. The last erates perturbations uniformly distributed along each of the 6 measure of RMS RPE uses these 2 point landmarks together degrees of freedom. 1000 random perturbations were gener- with 9 line features, predominantly along the front edge of 123 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 871 the liver phantom, representative of the line features that can with the laparoscope again moved steadily around the phan- be located in vivo. tom at a speed of approximately 35 mm/s. A total of 44 frames were manually annotated, giving 87 samples of the pin head Experiment 2: evaluation of in vivo data positions. The pin heads picked in the CT model and the pin heads For in vivo data, there exists no ideal transform as the posi- triangulated from the video form two sets of ordered fiducial points, allowing T to be found by minimising FRE as tion of subsurface landmarks remains unknown. However, we M2W (i ) were able to collect substantial amounts of in vivo clinical per [2]. The residual FRE was 2.55 mm, suggesting an error data, following as closely as possible the protocol described in localising each pin head of around 2.89 mm using equation in section “SmartLiver surgery workflow using surface-based 10 from [11]. The RMS RPE at T was 2.15 mm M2W (i ) registration”. To date we have evaluated the accuracy on nine Figure 5 plots the distribution of RMS TRE versus RMS clinical procedures. In each case landmark points were iden- RPE and FRE. Each of the 20,000 registrations were binned tified in the CT-derived liver model and in several hundred according to their RMS RPE in 1 mm bins centred around frames of video per patient. integer values from 1 to 30mm, for each bin the mean Where available, any model to world transforms, T , and standard deviation of RMS TRE is plotted. Correlation M2W between RMS TRE and RMS RPE or FRE was measured determined by manual alignment in theatre were used to mea- sure RMS RPE on surface landmarks. Where sufficient point using Pearson’s correlation coefficient (r), and the mean stan- dard deviation (σ ) over all bins. Figure 5a plots RMS TRE landmarks were available it was also possible to estimate T based on triangulation and registration of surface land- versus RMS RPE and FRE when evaluated on the pin heads M2W mark points. Such landmark-based registration is used by themselves. Both RMS RPE and FRE correlate very well similar liver IGS systems [6,12], so it makes a useful com- with RMS TRE, an unsurprising result given that the mea- parison with our system. surements are all made on the same features, but confirmation In most cases, we also recorded ex vivo laparoscope cali- that RPE can be used as a proxy for TRE in the ideal case. bration data, either of a cross-hair [20] or in earlier cases of a Figure 5b plots RMS TRE versus RMS RPE when RPE chessboard calibration grid [23]. These calibration data were is calculated using surface landmark features only, whilst used to assess the accuracy of the system in theatre in the the RMS TRE is measured at the subsurface pin heads. The first (red) line shows the result for when all (8) surface point absence of tissue motion. Chessboard corners or cross-hair centres were manually identified in the video data for tens landmarks are used for measuring RPE, similarly to our pre- of frames per data set. These feature were triangulated to clinical results ([21]). In this case, RMS RPE provides a good world coordinates, and these used to measure re-projection predictor of RMS TRE with a Pearson’s correlation coeffi- error using the method described in section “Estimation of cient of 0.79. This suggests that in cases where surface point re-projection error”. Using this method will include errors landmarks are available over a significant area they provide in picking the points in the video frames, allowing a more a useful indicator of subsurface accuracy. The second (blue) direct comparison with the in vivo accuracy, in contrast to line shows a more clinically realistic situation in which only reporting the calibration residual errors. those point landmarks near the falciform ligament are used to calculate RPE. In this case, the correlation coefficient is significantly reduced (to 0.44), and what correlation there is predominantly occurs above RMS RPE of 15 mm, mak- Results ing this measurement of questionable clinical use. The third (green) line in Fig. 5b shows how correlation can be improved Experiment 1: correlation of TRE and RPE on a liver phantom by incorporating surface line features, which can be identi- fied in vivo. Video of the liver phantom surface was recorded, imaging the surface point and line landmarks. A total of 2296 stereo Experiment 2: evaluation of in vivo data images were recorded (2 × 540 × 1920 pixels). The laparo- scope was moved steadily by hand to try and image each Data from nine patients have been analysed. Acquisitions landmark, at an average speed (measured at the lens) of were made during surgery with the laparoscope being slowly approximately 30 mm/s. A total of 68 images were manu- moved by hand. Acquisition time and speed varied, but typi- ally annotated with the positions of point and line landmarks cally consisted of 1–3 minutes of video with the laparoscope by an experienced research scientist, to give a total of 76 lens moving at around 10–20 mm/s. The number of point point landmarks and 104 line landmarks. The flexible sili- and line features used varied between patients. The mini- cone liver phantom was then removed from its base. A total mum number of points was 3 and the maximum was 7. The of 2460 stereo pairs of the securing pin heads were recorded, minimum number of lines was 5 and the maximum was 9. 123 872 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 (a) (b) Fig. 5 RMS RPE measured on visible features versus RMS TRE mea- removed from its base.) b shows the RMS RPE measured using 3 subsets sured at the pin heads for the phantom. a shows the RMS RPE measured of features on the on the liver phantom’s surface using 9 subsurface pin heads (i.e. with the silicone liver phantom An average 469 frames of video data were manually anno- Discussion tated per patient, with a minimum of 80 and a maximum of 1909 frames. Annotation of the video and CT was done by Several tentative conclusions can be drawn from Table 1.The an experienced laparoscopic surgeon. In all cases RPE was combination of dynamic and static deformation and laparo- measured on a static calibration pattern, on a set of triangu- scope tracking and calibration errors is at least 10 mm. This lated in vivo point landmarks, and a set of in vivo lines and is the best case accuracy for a laparoscopic IGS utilising points registered using point-based registration. The result- optical tracking and a rigid model. There is a slight improve- ing RMS RPE is recorded in the first three numerical columns ment in RMS RPE for retrospective manual alignment versus of Table 1. in theatre manual alignment, probably due the time pres- The last three numerical columns in Table 1 show the sure and ergonomic compromise present during surgery. The results of registrations performed using the SmartLiver sys- best RMS RPE was found using the surface-based ICP; how- tem’s user interface. In four cases, registration was performed ever, there remain significant challenges to make this process during surgery, (Manual Live Alignment), in three differ- robust. ent cases manual alignment was performed after surgery In vivo results indicate that it is possible to achieve appar- on recorded data (Manual Retro. Alignment). Registration ent accuracies (RPE) of around 12 mm, which correspond to using the surface-based Iterative Closest Point (ICP) algo- mean subsurface accuracies around 15 mm (green line in Fig. rithm was performed once, using surface patches grabbed 5b) with a rigid registration system. Whether such accuracy during surgery. Due to the small sample sizes, we have not is clinically useful is currently unknown. The SmartLiver performed any statistical comparison of the different regis- IGS system is at present the only laparoscopic liver surgery tration methods. system where an augmented reality overlay is attempted rou- As in our pre-clinical work [21], it is useful to analyse the tinely. Clinical evaluation is ongoing to try and link accuracy in vivo results in terms of what error sources contribute to achieved to clinical outcome. Clinical evaluation will also the overall error. The bottom 10 rows of Table 1 show which enable an analysis of the most useful way to report errors, error sources contribute to each result. i.e. here we report RMS errors, whereas it may be more rel- evant to focus on the extreme values. Anecdotally, surgeons were generally impressed with the overlays achieved, giving encouragement that the system may be useful at its current accuracy level. Our long-term aim is to develop a clinical guidance sys- Excluding two outliers the minimum number of frames used is 210, and the maximum 462. tem which can reliably achieve accuracies better than 5 mm, 123 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 873 Table 1 Average RMS RPE errors measured for the human clinical data, classified by what error sources contribute to each error measurement Registration method Calib. Error Triang. Points Point-Based Reg. Manual Live Align. Manual Retro. Align. ICP Retro. Align. Average RMS RPE 7.5 10.2 16.3 25.0 19.4 12.3 Standard deviation (Samples) 5.2 (9) 3.3 (9) 3.9 (9) 8.8 (4) 7.4 (3) − (1) Contributing errors Laparoscope tracking Laparoscope calibration Picking points in video Picking points in CT – – Ordered point-based registration – –  –– Manual registration – – –  – Operating room conditions – – –  –– ICP surface-based registration – – – – – Static deformation (insuflation) – – Dynamic deformation (breathing) – Cells containing ticks indicate that a given error source (rows) contributes to total error for a given registration method (columns) in order to allow the surgeon to navigate around vessels use for error estimation. We have begun work [18] looking at of that size. However, this target was set in the absence the what surface features provide the best registration, which of an agreed method to measure accuracy, so is somewhat could be extended so that the overlay only shows portions arbitrary. Nonetheless, the results presented here indicate of the liver edge to maximise correlation between apparent that accuracies better than 10 mm can only be achieved by RPE and TRE. deformable registration. Deformable registration and breath- ing motion compensation [16] of the liver has been shown to be technically possible by several groups [9,17]. This raises Conclusion the question of how the surgeon interprets alignment errors when the model has been computationally deformed. Fur- We have described some aspects of the in vivo clinical use of ther work could compare TRE and RPE over a wider range the SmartLiver AR IGS system. We have highlighted some of liver shapes and incorporating deformable registration. of the many challenges involved in the transition from pre- Our proposed approach of using the 2D projected organ out- clinical to clinical research in IGS. Not least of these is the line should continue to allow a rapid in vivo assessment of need for a clear and well-validated method to determine the error. in vivo accuracy. The algorithm we have presented, tested, Our phantom results, Fig. 5, indicate that the addition of and used should enable the evaluation of the IGS system line landmark features results in a smaller RPE for the same on a larger patient cohort, potentially showing a correlation TRE. This is likely due to the greater degree of freedom in between overlay accuracy and clinical outcomes. matching two lines. In this instance, this has helped bring the RPE values closer to TRE; however, this result may be Acknowledgements This publication presents independent research specific to the geometry tested. Further work is required to funded by the Health Innovation Challenge Fund (HICF-T4-317), determine whether this is true in a more general case. a parallel funding partnership between the Wellcome Trust and the Department of Health. The views expressed in this publication are those Based on the phantom results, positive correlation between of the author(s) and not necessarily those of the Wellcome Trust or the RPE measured at the surface and TRE at subsurface land- Department of Health. S.T. was further supported by the EPSRC grant marks breaks down below RMS RPE of around 6 mm when “Medical image computing for next-generation healthcare technology” using points and lines and 10 mm when using central points [EP/M020533/1]. S.O., D.H., and M.J.C. were supported by the Well- come/EPSRC [203145Z/16/Z]. only. The main cause of this is likely to be the geometric rela- tionship between the position of surface landmarks and the subsurface targets. In theory, the same rules that govern the Compliance with ethical standards design of fiducial markers and tracked instruments [11,19,24] can inform the ideal choice of in vivo surface landmarks to Conflict of interest The authors declare that they have no conflict of interest. 123 874 International Journal of Computer Assisted Radiology and Surgery (2018) 13:865–874 Ethical approval All procedures performed in studies involving human 11. Fitzpatrick J, West J (2001) The distribution of target registration participants were in accordance with the ethical standards of the insti- error in rigid-body point-based registration. IEEE Trans Med Imag- tutional and/or national research committee and with the 1964 Helsinki ing 20(9):917–927 Declaration and its later amendments or comparable ethical standards. 12. Hayashi Y, Misawa K, Hawkes DJ, Mori K (2016) Progressive internal landmark registration for surgical navigation in laparo- Informed consent Informed consent was obtained from all individual scopic gastrectomy for gastric cancer. Int J Comput Assist Radiol Surg 11(5):837–845 participants included in the study. 13. Kang X, Azizian M, Wilson E, Wu K, Martin AD, Kane TD, Peters Open Access This article is distributed under the terms of the Creative CA, Cleary K, Shekhar R (2014) Stereoscopic augmented reality Commons Attribution 4.0 International License (http://creativecomm for laparoscopic surgery. Surg Endosc 28(7):2227–2235 ons.org/licenses/by/4.0/), which permits unrestricted use, distribution, 14. Nolden M, Zelzer S, Seitel A, Wald D, Müller M, Franz AM, and reproduction in any medium, provided you give appropriate credit Maleike D, Fangerau M, Baumhauer M, Maier-Hein L, Maier-Hein to the original author(s) and the source, provide a link to the Creative KH, Meinzer HP, Wolf I (2013) The medical imaging interaction Commons license, and indicate if changes were made. toolkit: challenges and advances. Int J Comput Assist Radiol Surg 8(4):607–620 15. Pratt P, Mayer E, Vale J, Cohen D, Edwards E, Darzi A, Yang GZ (2012) An effective visualisation and registration system for References image-guided robotic partial nephrectomy. J Robot Surg 6:23–31 16. Ramalhinho J, Robu M, Thompson S, Edwards P, Schneider C, 1. Amir-Khalili A, Nosrati M, Peyrat JM, Hamarneh G, Abugharbieh Gurusamy K, Hawkes D, Davidson B, Barratt D, Clarkson MJ R (2013) Uncertainty-encoded augmented reality for robot-assisted (2017) Breathing motion compensated registration of laparoscopic partial nephrectomy: a phantom study. In: Medical Image Com- liver ultrasound to ct. In: SPIE Medical Imaging, pp 101,352V– puting and Computer-Assisted Intervention Workshop on Medical 101,352V. International Society for Optics and Photonics Imaging and Augmented Reality (MICCAI MIAR), vol 8090, pp 17. Reichard D, Häntsch D, Bodenstedt S, Suwelack S, Wagner M, 182–191 Kenngott H, Müller-Stich B, Maier-Hein L, Dillmann R, Speidel 2. Arun KS, Huang TS, Blostein SD (1987) Least-squares fitting of S (2017) Projective biomechanical depth matching for soft tissue two 3-D point sets. IEEE Trans Pattern Anal Mach Intell 5:698–700 registration in laparoscopic surgery. Int J Comput Assist Radiol 3. Bano J, Nicolau S, Hostettler A, Doignon C, Marescaux J, Soler Surg 12(7):1101–1110 L (2013) Registration of preoperative liver model for laparoscopic 18. Robu MR, Edwards P, Ramalhinho J, Thompson S, Davidson B, surgery from intraoperative 3D acquisition. In: Augmented reality Hawkes D, Stoyanov D, Clarkson MJ (2017) Intelligent viewpoint environments for medical imaging and computer-assisted interven- selection for efficient ct to video registration in laparoscopic liver tions, Lecture Notes in Computer Science, vol 8090, pp 201–210. surgery. Int J Comput Assist Radiol Surg 12(7):1079–1088 Springer, Berlin Heidelberg 19. Thompson S, Penney G, Dasgupta P, Hawkes D (2013) Improved 4. Bartoli A, Collins T, Bourdel N, Canis M (2012) Computer assisted modelling of tool tracking errors by modelling dependent marker minimally invasive surgery: is medical computer vision the answer errors. IEEE Trans Med Imaging 32(2):165–177 to improving laparosurgery? Med Hypotheses 79(6):858–863 20. Thompson S, Stoyanov D, Schneider C, Gurusamy K, Ourselin S, 5. Bernhardt S, Nicolau SA, Soler L, Doignon C (2017) The status of Davidson B, Hawkes D, Clarkson MJ (2016) Hand-eye calibration augmented reality in laparoscopic surgery as of 2016. Med Image for rigid laparoscopes using an invariant point. Int J Comput Assist Anal 37:66–90 Radiol Surg 11(6):1071–1080 6. Buchs NC, Volonte F, Pugin F, Toso C, Fusaglia M, Gavaghan 21. Thompson S, Totz J, Song Y, Johnsen S, Stoyanov D, Ourselin S, K, Majno PE, Peterhans M, Weber S, Morel P (2013) Augmented Gurusamy K, Schneider C, Davidson B, Hawkes D, Clarkson MJ environments for the targeting of hepatic lesions during image- (2015) Accuracy validation of an image guided laparoscopy system guided robotic liver surgery. J Surg Res 184(2):825–831 for liver resection. In: SPIE medical imaging. International society 7. Ciria R, Cherqui D, Geller D, Briceno J, Wakabayashi G (2016) for optics and photonice, vol 9415, pp 941509–941509–12. https:// Comparative short-term benefits of laparoscopic liver resection. doi.org/10.1117/12.2080974 Ann Surg 263(4):761–777 22. Totz J, Thompson S, Stoyanov D, Gurusamy K, Davidson B, 8. Clarkson M, Zombori G, Thompson S, Totz J, Song Y, Espak M, Hawkes DJ, Clarkson MJ (2014) Fast Semi-dense Surface Recon- Johnsen S, Hawkes D, Ourselin S (2015) The NifTK software struction from Stereoscopic Video in Laparoscopic Surgery. In: platform for image-guided interventions: platform overview and Information processing in computer-assisted interventions, Lec- NiftyLink messaging. Int J Comput Assist Radiol Surg 10(3):301– ture Notes in Computer Science, vol 8498. Springer International 316 Publishing, pp 206–215 9. Collins J, Weis J, Heiselman J, Clements L, Simpson A, Jarnagin 23. Tsai R (1987) A versatile camera calibration technique for high- W, Miga M (2017) Improving registration robustness for image- accuracy 3D machine vision metrology using off-the-shelf tv guided liver surgery in a novel human-to-phantom data framework. cameras and lenses. IEEE J Robot Autom 3(4):323–344 IEEE Trans Med Imaging 36:1502–1510 24. West JB, Maurer CR (2004) Designing optically tracked instru- 10. Conrad C, Fusaglia M, Peterhans M, Lu H, Weber S, Gayet B ments for image-guided surgery. IEEE Trans Med Imaging (2016) Augmented reality navigation surgery facilitates laparo- 23(5):533–545 scopic rescue of failed portal vein embolization. J Am Coll Surg 223(4):e31–e34

Journal

International Journal of Computer Assisted Radiology and SurgerySpringer Journals

Published: Apr 16, 2018

References

You’re reading a free preview. Subscribe to read the entire article.


DeepDyve is your
personal research library

It’s your single place to instantly
discover and read the research
that matters to you.

Enjoy affordable access to
over 18 million articles from more than
15,000 peer-reviewed journals.

All for just $49/month

Explore the DeepDyve Library

Search

Query the DeepDyve database, plus search all of PubMed and Google Scholar seamlessly

Organize

Save any article or search result from DeepDyve, PubMed, and Google Scholar... all in one place.

Access

Get unlimited, online access to over 18 million full-text articles from more than 15,000 scientific journals.

Your journals are on DeepDyve

Read from thousands of the leading scholarly journals from SpringerNature, Elsevier, Wiley-Blackwell, Oxford University Press and more.

All the latest content is available, no embargo periods.

See the journals in your area

DeepDyve

Freelancer

DeepDyve

Pro

Price

FREE

$49/month
$360/year

Save searches from
Google Scholar,
PubMed

Create lists to
organize your research

Export lists, citations

Read DeepDyve articles

Abstract access only

Unlimited access to over
18 million full-text articles

Print

20 pages / month

PDF Discount

20% off