A framework for three-dimensional statistical shape modeling of the proximal femur in Legg–Calvé–Perthes disease
Johnson, Luke G.; Mozingo, Joseph D.; Atkins, Penny R.; Schwab, Seaton; Morris, Alan; Elhabian, Shireen Y.; Wilson, David R.; Kim, Harry K. W.; Anderson, Andrew E.
2024 International Journal of Computer Assisted Radiology and Surgery
doi: 10.1007/s11548-024-03272-2; pmid: 39377856
Purpose: The pathomorphology of Legg–Calvé–Perthes disease (LCPD) is a key contributor to poor long-term outcomes such as hip pain, femoroacetabular impingement, and early-onset osteoarthritis. Plain radiographs, commonly used for research and in the clinic, cannot accurately represent the full extent of LCPD deformity. The purpose of this study was to develop and evaluate a methodological framework for three-dimensional (3D) statistical shape modeling (SSM) of the proximal femur in LCPD.
Methods: We developed a framework consisting of three core steps: segmentation, surface mesh preparation, and particle-based correspondence. The framework aims to address challenges in modeling this rare condition, which is characterized by highly heterogeneous deformities across a wide age range and small sample sizes. We evaluated the framework by producing an SSM from clinical magnetic resonance images of 13 proximal femurs with LCPD deformity from 11 patients between the ages of six and 12 years.
Results: After removing differences in scale and pose, the dominant shape modes described morphological features characteristic of LCPD, including a broad and flat femoral head, a high-riding greater trochanter, and a reduced neck-shaft angle. The first four shape modes, together describing 87.5% of the overall cohort variance, were chosen for the evaluation of the model's performance. The SSM was generalizable to unseen examples, with an average point-to-point reconstruction error below 1 mm. We observed strong Spearman rank correlations (up to 0.79) between some shape modes, 3D measurements of femoral head asphericity, and clinical radiographic metrics.
Conclusion: In this study, we present a framework, based on SSM, for the objective description of LCPD deformity in three dimensions. Our methods can accurately describe overall shape variation using a small number of parameters and are a step toward a widely accepted, objective 3D quantification of LCPD deformity.
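The Spearman rank correlations reported between shape-mode scores and radiographic metrics can be computed without a statistics package: rank both variables (averaging ranks over ties) and take the Pearson correlation of the ranks. A minimal pure-Python sketch; the `mode_scores` and `metric` arrays are illustrative, not the study's data:

```python
def rank(values):
    """1-based average ranks; tied values share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of values tied with values[order[i]]
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks


def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den


def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return pearson(rank(x), rank(y))


# Illustrative only: hypothetical mode-1 scores vs. a radiographic metric.
mode_scores = [0.8, -0.2, 1.1, 0.4, -0.9]
metric = [52.0, 41.0, 60.0, 47.0, 38.0]
rho = spearman(mode_scores, metric)
```

In practice one would use `scipy.stats.spearmanr`, which also reports a p-value; the hand-rolled version above only makes the definition explicit.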
Graph neural networks in multi-stained pathological imaging: extended comparative analysis of Radiomic features
Rivera Monroy, Luis Carlos; Rist, Leonhard; Ostalecki, Christian; Bauer, Andreas; Vera, Julio; Breininger, Katharina; Maier, Andreas
2024 International Journal of Computer Assisted Radiology and Surgery
doi: 10.1007/s11548-024-03277-x; pmid: 39373802
Purpose: This study investigates the application of Radiomic features within graph neural networks (GNNs) for the classification of multiple-epitope-ligand cartography (MELC) pathology samples. It aims to enhance the diagnosis of often misdiagnosed skin diseases such as eczema, lymphoma, and melanoma. The novel contribution lies in integrating Radiomic features with GNNs and comparing their efficacy against traditional multi-stain profiles.
Methods: We utilized GNNs to process multiple pathological slides as cell-level graphs, comparing their performance with XGBoost and Random Forest classifiers. The analysis included two feature types: multi-stain profiles and Radiomic features. Dimensionality reduction techniques such as UMAP and t-SNE were applied to optimize the feature space, and graph connectivity was based on spatial and feature closeness.
Results: Integrating Radiomic features into spatially connected graphs significantly improved classification accuracy over traditional models. The application of UMAP further enhanced the performance of GNNs, particularly in classifying diseases with similar pathological features. The GNN model outperformed baseline methods, demonstrating its robustness in handling complex histopathological data.
Conclusion: Radiomic features processed through GNNs show significant promise for multi-disease classification, improving diagnostic accuracy. This study's findings suggest that integrating advanced imaging analysis with graph-based modeling can lead to better diagnostic tools. Future research should expand these methods to a wider range of diseases to validate their generalizability and effectiveness.
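The cell-level graphs the GNN consumes need a connectivity rule; the abstract mentions spatial and feature closeness. A minimal sketch of k-nearest-neighbour connectivity over cell centroids (pure Python; in a pipeline like the paper's, node features such as Radiomic vectors or stain profiles would then be attached to each node):

```python
import math


def knn_graph(points, k):
    """Undirected edge set linking each point to its k nearest neighbours.

    points: list of coordinate tuples (e.g. cell centroids in the slide).
    The same rule applies to feature closeness if `points` holds feature vectors.
    """
    edges = set()
    for i, p in enumerate(points):
        # distances from p to every other point, nearest first
        dists = sorted((math.dist(p, q), j) for j, q in enumerate(points) if j != i)
        for _, j in dists[:k]:
            edges.add((min(i, j), max(i, j)))  # store each undirected edge once
    return edges
```

For real slides with many thousands of cells, this O(n²) scan would be replaced by a spatial index such as a k-d tree.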
Real-time ultrasound AR 3D visualization toward better topological structure perception for hepatobiliary surgery
Ji, Yuqi; Huang, Tianqi; Wu, Yutong; Li, Ruiyang; Wang, Pengfei; Dong, Jiahong; Liao, Hongen
2024 International Journal of Computer Assisted Radiology and Surgery
doi: 10.1007/s11548-024-03273-1; pmid: 39400852
Purpose: Ultrasound serves as a crucial intraoperative imaging tool for hepatobiliary surgeons, enabling the identification of complex anatomical structures like blood vessels, bile ducts, and lesions. However, the reliance on manual mental reconstruction of 3D topologies from 2D ultrasound images presents significant challenges, leading to a pressing need for tools that assist surgeons with real-time identification of 3D topological anatomy.
Methods: We propose a real-time ultrasound AR 3D visualization method for intraoperative 2D ultrasound imaging. Our system leverages backward alpha blending to integrate multi-planar ultrasound data effectively. To ensure continuity between 2D ultrasound planes, we employ spatial smoothing techniques to interpolate the widely spaced ultrasound planes. A dynamic 3D transfer function is also developed to enhance spatial representation through color differentiation.
Results: Comparative experiments involving our AR visualization of 3D ultrasound, alongside AR visualization of 2D ultrasound and 2D visualization of 3D ultrasound, demonstrated that the proposed method significantly reduced operational time (110.25 ± 27.83 s compared to 292 ± 146.63 s and 365.25 ± 131.62 s) and improved depth perception and comprehension of complex topologies, contributing to reduced pressure and increased personal satisfaction among users.
Conclusion: Quantitative experimental results and feedback from both novice and experienced physicians highlight our system's exceptional ability to enhance the understanding of complex topological anatomy. This improvement is crucial for accurate ultrasound diagnosis and informed surgical decision-making, underscoring the system's clinical applicability.
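Backward (back-to-front) alpha blending composites samples along a viewing ray from far to near, so each nearer sample partially occludes what has already been accumulated behind it. A minimal single-channel sketch of that compositing rule (the sample values are illustrative):

```python
def blend_back_to_front(samples):
    """Composite (color, alpha) samples ordered far-to-near along one ray."""
    color = 0.0
    for c, a in samples:
        # the nearer sample contributes a*c and lets (1 - a) of the
        # already-accumulated background shine through
        color = a * c + (1.0 - a) * color
    return color
```

For example, `blend_back_to_front([(1.0, 0.5), (0.0, 0.5)])` yields 0.25: a bright far sample attenuated by a dark, half-transparent near one.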
Quantitative in-vitro assessment of a novel robot-assisted system for cochlear implant electrode insertion
Aebischer, Philipp; Anschuetz, Lukas; Caversaccio, Marco; Mantokoudis, Georgios; Weder, Stefan
2024 International Journal of Computer Assisted Radiology and Surgery
doi: 10.1007/s11548-024-03276-y; pmid: 39352456
Purpose: As an increasing number of cochlear implant candidates exhibit residual inner ear function, hearing preservation strategies during implant insertion are gaining importance. Manual implantation is known to induce traumatic force and pressure peaks. In this study, we use a validated in-vitro model to comprehensively evaluate a novel surgical tool that addresses these challenges through motorized movement of a forceps.
Methods: Using lateral wall electrodes, we examined two subgroups of insertions: 30 insertions were performed manually by experienced surgeons, and another 30 insertions were conducted with a robot-assisted system under the same surgeons' supervision. We utilized a realistic, validated model of the temporal bone. This model accurately reproduces intracochlear frictional conditions and allows for the synchronous recording of forces on intracochlear structures, intracochlear pressure, and the position and deformation of the electrode array within the scala tympani.
Results: We identified a significant reduction in force variation during robot-assisted insertions compared to the conventional procedure, with average values of 12 mN/s and 32 mN/s, respectively. Robotic assistance was also associated with a significant reduction of strong pressure peaks and a 17 dB reduction in intracochlear pressure levels. Furthermore, our study highlights that the release of the insertion tool represents a critical phase requiring surgical training.
Conclusion: Robotic assistance demonstrated more consistent insertion speeds compared to manual techniques. Its use can significantly reduce factors associated with intracochlear trauma, highlighting its potential for improved hearing preservation. Finally, the system does not mitigate the impact of subsequent surgical steps like electrode cable routing and cochlear access sealing, pointing to areas in need of further research.
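The abstract reports force variation in mN/s. The paper's exact definition is not given here; one plausible reading, sketched below, is the mean absolute rate of change of a uniformly sampled insertion-force trace:

```python
def force_variation_rate(forces_mN, sample_rate_hz):
    """Mean absolute force change per second for a uniformly sampled trace.

    Assumed definition (illustrative, not the paper's stated formula):
    mean |dF| between consecutive samples, scaled by the sampling rate
    to obtain mN/s.
    """
    diffs = [abs(b - a) for a, b in zip(forces_mN, forces_mN[1:])]
    return sum(diffs) * sample_rate_hz / len(diffs)
```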
Multimodal human–computer interaction in interventional radiology and surgery: a systematic literature review
Schreiter, Josefine; Heinrich, Florian; Hatscher, Benjamin; Schott, Danny; Hansen, Christian
2024 International Journal of Computer Assisted Radiology and Surgery
doi: 10.1007/s11548-024-03263-3; pmid: 39467893
Purpose: As technology advances, more research dedicated to medical interactive systems emphasizes the integration of touchless and multimodal interaction (MMI). Particularly in surgical and interventional settings, this approach is advantageous because it maintains sterility and promotes natural interaction. Past reviews have investigated MMI in terms of technology and interaction with robots; however, none has put particular emphasis on analyzing this kind of interaction for surgical and interventional scenarios.
Methods: Two databases were queried for relevant publications from the past 10 years. After identification, two screening steps followed in which eligibility criteria were applied. A forward/backward search was added to identify further relevant publications. The analysis incorporated clustering of references in terms of the medical field addressed, input and output modalities, and challenges regarding development and evaluation.
Results: A sample of 31 references was obtained (16 journal articles, 15 conference papers). MMI was predominantly developed for laparoscopy, radiology, and interaction with image viewers. The majority implemented two input modalities, with voice-hand interaction being the most common combination: voice for discrete and hand for continuous navigation tasks. The application of gaze, body, and facial control is minimal, primarily because of ergonomic concerns. Feedback was included in 81% of publications, of which visual cues were most often applied.
Conclusion: This work systematically reviews MMI for surgical and interventional scenarios over the past decade. In future research endeavors, we propose an enhanced focus on in-depth analyses of the considered use cases and the application of standardized evaluation methods. Moreover, insights from other sectors, including but not limited to the gaming sector, should be exploited.
Semi-automatic robotic puncture system based on deformable soft tissue point cloud registration
Zhang, Bo; Chen, Kui; Yao, Yuhang; Wu, Bo; Li, Qiang; Zhang, Zheming; Fan, Peihua; Wang, Wei; Lin, Manxia; Fujie, Masakatsu G.
2024 International Journal of Computer Assisted Radiology and Surgery
doi: 10.1007/s11548-024-03247-3; pmid: 39460860
Purpose: Traditional surgical puncture robot systems based on computed tomography (CT) and infrared camera guidance have inherent disadvantages for puncture of deformable soft tissues such as the liver. Liver movement and deformation caused by breathing are difficult to accurately assess and compensate for with current technical solutions. We propose a semi-automatic robotic puncture system based on real-time ultrasound images to solve this problem.
Methods: In this system, real-time ultrasound images and their spatial position information are acquired by the robot. By recognizing the target tissue in these ultrasound images and applying a reconstruction algorithm, a real-time 3D ultrasound point cloud of the tissue is constructed. A point cloud of the target tissue in the CT image is obtained using the developed software. The two point clouds are then registered with a feature-point-based registration method. The puncture target is positioned automatically; the robot then quickly carries the puncture guide mechanism to the puncture site and guides the needle insertion. Only tens of seconds elapse from the start of image acquisition to the completion of needle insertion. The patient's breathing can be temporarily paused by a ventilator, and the breathing state does not need to match that of the CT scan.
Results: The average operation time across 24 phantom experiments was 64.5 s, and the average error between the needle tip and the target point after puncture was 0.8 mm. Two animal puncture surgeries were performed, with puncture errors of 1.76 mm and 1.81 mm, respectively.
Conclusion: The robot system can effectively carry out liver tissue puncture surgery; the success rate of the phantom and animal experiments was 100%. The results also show that the puncture robot system offers high puncture accuracy, short operation time, and substantial clinical value.
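The feature-point registration step aligns matched points from the ultrasound and CT clouds with a rigid transform. A 2D least-squares sketch of that idea (the system itself works on 3D clouds, where the optimal rotation is usually solved with an SVD/Kabsch step; the point lists are illustrative):

```python
import math


def rigid_register_2d(src, dst):
    """Least-squares rotation + translation mapping matched 2D points src -> dst."""
    n = len(src)
    csx, csy = sum(p[0] for p in src) / n, sum(p[1] for p in src) / n
    cdx, cdy = sum(p[0] for p in dst) / n, sum(p[1] for p in dst) / n
    # cross- and dot-products of the centred point pairs give the best angle
    s_cross = s_dot = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax, ay = ax - csx, ay - csy
        bx, by = bx - cdx, by - cdy
        s_dot += ax * bx + ay * by
        s_cross += ax * by - ay * bx
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    # translation carries the rotated source centroid onto the target centroid
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, (tx, ty)
```

Given exact correspondences the recovered transform is exact; with noisy feature points it is the least-squares fit, and the residual indicates registration quality.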
Adaptive infrared patterns for microscopic surface reconstructions
Milosavljevic, Srdjan; Bardosi, Zoltan; Oezbek, Yusuf; Freysinger, Wolfgang
2024 International Journal of Computer Assisted Radiology and Surgery
doi: 10.1007/s11548-024-03242-8; pmid: 39382789
Purpose: Multi-zoom microscopic surface reconstructions of operating sites, especially in ENT surgeries, would allow multimodal image fusion for determining the amount of resected tissue, for recognizing critical structures, and for novel tools for intraoperative quality assurance. State-of-the-art three-dimensional model creation of the surgical scene is challenged by the surgical environment, illumination, and the homogeneous structures of skin, muscle, bones, etc., which lack invariant features for stereo reconstruction.
Methods: An adaptive near-infrared pattern projector illuminates the surgical scene with optimized patterns to yield accurate, dense, multi-zoom stereoscopic surface reconstructions. The approach does not impact the clinical workflow. The new method is compared to state-of-the-art approaches and is validated by determining its reconstruction errors relative to a high-resolution 3D reconstruction of CT data.
Results: 200 surface reconstructions were generated for 5 zoom levels, with 10 reconstructions for each object illumination method (standard operating room light, microscope light, random pattern, and adaptive NIR pattern). For the adaptive pattern, the surface reconstruction errors ranged from 0.5 to 0.7 mm, compared to 1–1.9 mm for the other approaches. The local reconstruction differences are visualized in heat maps.
Conclusion: Adaptive near-infrared (NIR) pattern projection in microscopic surgery allows dense and accurate microscopic surface reconstructions for variable zoom levels of small and homogeneous surfaces. This could aid in microscopic interventions at the lateral skull base and potentially open up new possibilities for combining quantitative intraoperative surface reconstructions with preoperative radiologic imagery.
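Reconstruction error against a CT-derived reference surface amounts to nearest-point distances between two point sets. A minimal brute-force sketch (a real evaluation would sample the reference surface densely and use a spatial index rather than an exhaustive scan):

```python
import math


def mean_surface_error(recon_pts, ref_pts):
    """Mean nearest-neighbour distance from each reconstructed point
    to a densely sampled reference point cloud (e.g. from the CT model)."""
    total = 0.0
    for p in recon_pts:
        total += min(math.dist(p, q) for q in ref_pts)
    return total / len(recon_pts)
```

The per-point distances computed inside the loop are also exactly what a heat-map visualization of local reconstruction differences would color-code.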
An intuitive guidewire control mechanism for robotic intervention
Dey, Rohit; Guo, Yichen; Liu, Yang; Puri, Ajit; Savastano, Luis; Zheng, Yihao
2024 International Journal of Computer Assisted Radiology and Surgery
doi: 10.1007/s11548-024-03279-9; pmid: 39370493
Purpose: Teleoperated Interventional Robotic systems (TIRs) are developed to reduce physicians' radiation exposure and physical stress and to enhance device manipulation accuracy and stability. Nevertheless, TIRs are not widely adopted, partly due to the lack of intuitive control interfaces. Current TIR interfaces such as joysticks, keyboards, and touchscreens differ significantly from traditional manual techniques, resulting in a longer learning curve. To this end, this research introduces a novel control mechanism for intuitive operation and seamless adoption of TIRs.
Methods: An off-the-shelf medical torque device augmented with a micro-electromagnetic tracker was proposed as the control interface, preserving the tactile sensation and muscle memory integral to interventionalists' proficiency. The control inputs to drive the TIR were extracted via real-time motion mapping of the interface. To verify the efficacy of the proposed control mechanism in accurately operating the TIR, evaluation experiments using industrial-grade encoders were conducted.
Results: Mean tracking errors of 0.32 ± 0.12 mm in the linear and 0.54 ± 0.07° in the angular direction were achieved. The tracking time lag was found to be 125 ms on average using a Padé approximation. Ergonomically, the developed control interface is 3.5 mm larger in diameter and 4.5 g heavier than traditional torque devices.
Conclusion: By closely resembling traditional torque devices while achieving results comparable to state-of-the-art commercially available TIRs, this research provides an intuitive control interface that may enable wider clinical adoption of robot-assisted interventions.
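The motion-mapping step converts tracked pose changes of the torque device into drive increments for the TIR. A hedged sketch of the core bookkeeping; the pose convention (axial position in mm, roll in degrees) and the function name are assumptions for illustration, not the paper's interface:

```python
def drive_command(prev_pose, pose):
    """Map a tracker pose change to linear/rotational drive increments.

    pose = (z_mm, roll_deg): assumed axial position and rotation of the
    tracked torque device between two consecutive tracker samples.
    """
    dz = pose[0] - prev_pose[0]      # advance/retract the guidewire (mm)
    droll = pose[1] - prev_pose[1]   # rotate the guidewire (deg)
    # wrap the angle to [-180, 180) so crossing the +/-180 deg seam
    # does not command a near-full counter-rotation
    droll = (droll + 180.0) % 360.0 - 180.0
    return dz, droll
```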
Towards multimodal visualization of esophageal motility: fusion of manometry, impedance, and videofluoroscopic image sequences
Geiger, Alexander; Bernhard, Lukas; Gassert, Florian; Feußner, Hubertus; Wilhelm, Dirk; Friess, Helmut; Jell, Alissa
2024 International Journal of Computer Assisted Radiology and Surgery
doi: 10.1007/s11548-024-03265-1; pmid: 39379641
Purpose: Dysphagia is the difficulty or inability to swallow normally. Standard procedures for diagnosing the underlying disease include, among others, X-ray videofluoroscopy, manometry, and impedance examinations, usually performed consecutively. To gain more insight, ongoing research aims to collect these different modalities at the same time and to present them in a joint visualization. One idea for creating a combined view is to project the manometry and impedance values onto the correct locations in the X-ray images, which requires identifying the exact sensor locations in the images.
Methods: This work gives an overview of the challenges associated with the sensor detection task and proposes a robust approach for detecting the sensors in X-ray image sequences, ultimately allowing the manometry and impedance values to be projected onto the correct locations in the images.
Results: The developed sensor detection approach is evaluated on a total of 14 sequences from different patients, achieving an F1-score of 86.36%. To demonstrate the robustness of the approach, a further study adds different levels of noise to the images, with the performance of our sensor detection method decreasing only slightly in these scenarios. This robust sensor detection provides the basis for accurately projecting manometry and impedance values onto the images to create a multimodal visualization of the swallowing process. The resulting visualizations were evaluated qualitatively by domain experts, indicating a clear benefit of the proposed fused visualization approach.
Conclusion: Using our preprocessing and sensor detection method, we show that the sensor detection task can be solved with high accuracy. This enables a novel multimodal visualization of esophageal motility, helping to provide more insight into patients' swallowing disorders.
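The reported F1-score is the harmonic mean of detection precision and recall. A minimal sketch; the detection counts below are illustrative, not the study's:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall for a detection task."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# Illustrative: 19 true sensor detections, 3 false positives, 3 misses.
score = f1_score(19, 3, 3)
```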