
International Journal of Computer Assisted Radiology and Surgery

Subject: Computer Graphics and Computer-Aided Design
Publisher: Springer International Publishing (Springer Journals)
ISSN: 1861-6410
Scimago Journal Rank: 53

2023

Volume OnlineFirst
September, August, July, June, May, April, March
Volume 18
Supplement 1 (Jun), Issue 10 (Oct), Issue 9 (Sep), Issue 8 (Aug), Issue 7 (Jul), Issue 6 (Jun), Issue 5 (May), Issue 4 (Apr), Issue 3 (Mar), Issue 2 (Feb), Issue 1 (Jan)

2022

Volume 17
Supplement 1 (Jun), Issue 12 (Dec), Issue 11 (Nov), Issue 10 (Oct), Issue 9 (Sep), Issue 8 (Aug), Issue 7 (Jul), Issue 6 (Jun), Issue 5 (May), Issue 4 (Apr), Issue 3 (Mar), Issue 2 (Feb), Issue 1 (Jan)

2021

Volume 16
Supplement 1 (Jun), Issue 12 (Dec), Issue 11 (Nov), Issue 10 (Oct), Issue 9 (Jun), Issue 8 (May), Issue 7 (May), Issue 6 (May), Issue 5 (Apr), Issue 4 (Mar), Issue 3 (Feb), Issue 2 (Jan)

2020

Volume 2020
Issue 2005 (May)
Volume 16
Issue 2 (Dec), Issue 1 (Nov)
Volume 15
Supplement 1 (Jun), Issue 12 (Oct), Issue 11 (Nov), Issue 10 (Oct), Issue 9 (Sep), Issue 8 (Aug), Issue 7 (Jul), Issue 6 (Jun), Issue 5 (May), Issue 4 (Apr), Issue 3 (Mar), Issue 2 (Feb), Issue 1 (Jan)

2019

Volume 2019
Issue 1911 (Nov), Issue 1905 (May)
Volume 14
Issue 12 (Jul), Issue 11 (Jun), Issue 10 (Jul), Issue 9 (Jul), Issue 8 (May), Issue 7 (Apr), Issue 6 (Mar), Issue 5 (Mar), Issue 4 (Feb), Issue 3 (Jan), Issue 1 (Jan)

2018

Volume 2018
Issue 1811 (Nov)
Volume 14
Issue 5 (Oct), Issue 4 (Nov), Issue 3 (Dec), Issue 2 (Oct), Issue 1 (Aug)
Volume 13
Issue 12 (Aug), Issue 11 (Aug), Issue 10 (Jun), Issue 9 (May), Issue 8 (May), Issue 7 (Mar), Issue 6 (Apr), Issue 5 (Mar), Issue 4 (Feb), Issue 3 (Jan), Issue 1 (May)

2017

Volume 13
Issue 4 (Dec), Issue 3 (Aug), Issue 2 (Nov), Issue 1 (Oct)
Volume 12
Issue 12 (Aug), Issue 11 (Jun), Issue 10 (Jul), Issue 9 (Feb), Issue 8 (Jun), Issue 7 (May), Issue 6 (Mar), Issue 5 (Feb), Issue 4 (Jan), Issue 3 (Jan), Issue 1 (May)

2016

Volume 12
Issue 10 (Dec), Issue 9 (Nov), Issue 5 (Sep), Issue 4 (Oct), Issue 3 (Nov), Issue 2 (Sep), Issue 1 (Aug)
Volume 11
Issue 12 (Jun), Issue 11 (Jun), Issue 10 (May), Issue 9 (Mar), Issue 8 (Jan), Issue 6 (Mar), Issue 5 (Jan), Issue 1 (May)

2015

Volume 11
Issue 9 (Dec), Issue 8 (Oct), Issue 7 (Dec), Issue 5 (Nov), Issue 4 (Oct), Issue 3 (Sep), Issue 2 (Aug), Issue 1 (Jun)
Volume 10
Issue 12 (Jun), Issue 11 (Feb), Issue 10 (Mar), Issue 9 (Jan), Issue 8 (Jan), Issue 7 (Jun), Issue 6 (Apr), Issue 1 (May)

2014

Volume 10
Issue 11 (Dec), Issue 10 (Dec), Issue 9 (Dec), Issue 8 (Dec), Issue 7 (Sep), Issue 5 (Jun), Issue 4 (Jul), Issue 3 (May), Issue 2 (May), Issue 1 (May)
Volume 9
Issue 6 (Apr), Issue 5 (Jan), Issue 3 (Jan), Issue 1 (May)

2013

Volume 9
Issue 5 (Dec), Issue 4 (Oct), Issue 3 (Sep), Issue 2 (Jul), Issue 1 (Jun)
Volume 8
Issue 6 (Apr), Issue 5 (Jan), Issue 4 (Mar), Issue 1 (May)

2012

Volume 8
Issue 6 (Nov), Issue 5 (Dec), Issue 4 (Dec), Issue 3 (Aug), Issue 2 (Jul), Issue 1 (May)
Volume 7
Issue 6 (Jun), Issue 5 (Apr), Issue 4 (Jan), Issue 2 (Jan), Issue 1 (May)

2011

Volume 7
Issue 4 (Oct), Issue 3 (Jun), Issue 2 (Jun), Issue 1 (May)
Volume 6
Issue 6 (Apr), Issue 5 (Jan), Issue 4 (Feb), Issue 1 (May)

2010

Volume 6
Issue 5 (Sep), Issue 4 (Oct), Issue 3 (Jul), Issue 2 (Jun), Issue 1 (Jun)
Volume 5
Issue 6 (Apr), Issue 5 (Feb), Issue 4 (Apr), Issue 1 (May)

2009

Volume 5
Issue 3 (Jul), Issue 2 (Jun), Issue 1 (Jul)
Volume 4
Issue 6 (Jun), Issue 5 (Jun), Issue 4 (May), Issue 3 (Mar), Issue 2 (Feb), Issue 1 (May)

2008

Volume 4
Issue 2 (Nov), Issue 1 (Oct)
Volume 3
Issue 6 (Oct), Issue 5 (Jul), Issue 4 (Jun), Issue 2 (May), Issue 1 (May)
Volume 2
Issue 6 (Apr), Issue 5 (Jan)

2007

Volume 2
Issue 4 (Nov), Issue 2 (Jun), Issue 1 (Jun)
Volume 1
Issue 6 (Feb), Issue 5 (Feb)

2006

Volume 1
Issue 5 (Dec), Issue 4 (Nov), Issue 3 (Oct), Issue 2 (Jul), Issue 1 (May)
journal article
Open Access Collection
A simulation-based phantom model for generating synthetic mitral valve image data–application to MRI acquisition planning

Manini, Chiara; Nemchyna, Olena; Akansel, Serdar; Walczak, Lars; Tautz, Lennart; Kolbitsch, Christoph; Falk, Volkmar; Sündermann, Simon; Kühne, Titus; Schulz-Menger, Jeanette; Hennemuth, Anja

2023 International Journal of Computer Assisted Radiology and Surgery

doi: 10.1007/s11548-023-03012-y
pmid: 37679657

Purpose: Numerical phantom methods are widely used in the development of medical imaging methods. They enable quantitative evaluation and direct comparison against controlled, known ground truth information. Cardiac magnetic resonance has the potential for a comprehensive evaluation of the mitral valve (MV). The goal of this work is the development of a numerical simulation framework that supports the investigation of MRI strategies for the mitral valve.

Methods: We present a pipeline for synthetic image generation based on the combination of individual anatomical 3D models with a position-based dynamics simulation of mitral valve closure. The corresponding images are generated using modality-specific intensity models and spatiotemporal sampling concepts. We test the applicability in the context of MRI strategies for the assessment of the mitral valve. Synthetic images are generated with different strategies regarding image orientation (SAX and rLAX) and spatial sampling density.

Results: The suitability of each imaging strategy is evaluated by comparing MV segmentations against ground truth annotations. The generated synthetic images were compared to images acquired with similar parameters, and the results are promising. The quantitative analysis of annotation results suggests that the rLAX sampling strategy is preferable for MV assessment, reaching accuracy values that are comparable to, or even outperform, literature values.

Conclusion: The proposed approach provides a valuable tool for the evaluation and optimization of cardiac valve image acquisition. Its application to the use case identifies the radial image sampling strategy as the most suitable for MV assessment with MRI.
journal article
Open Access Collection
Augmented reality for sentinel lymph node biopsy

von Niederhäusern, Peter A.; Seppi, Carlo; Sandkühler, Robin; Nicolas, Guillaume; Haerle, Stephan K.; Cattin, Philippe C.

2023 International Journal of Computer Assisted Radiology and Surgery

doi: 10.1007/s11548-023-03014-w

Introduction: Sentinel lymph node biopsy for oral and oropharyngeal squamous cell carcinoma is a well-established staging method. One variation is to inject a radioactive tracer near the primary tumor of the patient. After a few minutes, audio feedback from an external hand-held γ-detection probe can monitor the uptake into the lymphatic system. Such probes place a high cognitive load on the surgeon during the biopsy, as they require the simultaneous use of both hands and the skills necessary to correlate the audio signal with the location of tracer accumulation in the lymph nodes. Therefore, an augmented reality (AR) approach to directly visualize, and thus discriminate, nearby lymph nodes would greatly reduce the surgeon's cognitive load.

Materials and methods: We present a proof of concept of an AR approach for sentinel lymph node biopsy through ex vivo experiments. The 3D position of the radioactive γ-sources is reconstructed from a single γ-image acquired by a stationary, table-attached multi-pinhole γ-detector. The position of the sources is then visualized using Microsoft's HoloLens. We further investigate the performance of our SLNF algorithm for a single source, two sources, and two sources with a hot background.

Results: In our ex vivo experiments, a single γ-source and its AR representation show good correlation with known locations, with a maximum error of 4.47 mm. The SLNF algorithm performs well when only one source is reconstructed, with a maximum error of 7.77 mm. For the more challenging case of reconstructing two sources, the errors vary between 2.23 mm and 75.92 mm.

Conclusion: This proof of concept shows promising results in reconstructing and displaying one γ-source. Two simultaneously recorded sources are more challenging and require further algorithmic optimization.
journal article
LitStream Collection
Dynamic surface reconstruction in robot-assisted minimally invasive surgery based on neural radiance fields

Sun, Xinan; Wang, Feng; Ma, Zhikang; Su, He

2023 International Journal of Computer Assisted Radiology and Surgery

doi: 10.1007/s11548-023-03016-8

Purpose: This study aims to improve surgical scene perception by addressing the challenge of reconstructing highly dynamic surgical scenes. We propose a novel depth estimation network and a reconstruction framework that combines neural radiance fields to provide more accurate scene information for surgical task automation and AR navigation.

Methods: We added a spatial pyramid pooling module and a Swin-Transformer module to enhance the robustness of stereo depth estimation. We also improved depth accuracy by adding unique matching constraints from optimal transport. To avoid deformation distortion in highly dynamic scenes, we used neural radiance fields to implicitly represent scenes in the time dimension and optimized them with depth and color information in a learning-based manner.

Results: Our experiments on the KITTI and SCARED datasets show that the proposed depth estimation network performs close to the state-of-the-art (SOTA) method on natural images and surpasses the SOTA method on medical images by 1.12% in 3 px error and 0.45 px in EPE. The proposed dynamic reconstruction framework successfully reconstructed the dynamic cardiac surface on a totally endoscopic coronary artery bypass video, achieving SOTA performance with 27.983 dB in PSNR, 0.812 in SSIM, and 0.189 in LPIPS.

Conclusion: Our proposed depth estimation network and reconstruction framework provide a significant contribution to the field of surgical scene perception. The framework achieves better results than SOTA methods on medical datasets, reducing mismatches on depth maps and resulting in more accurate depth maps with clearer edges. The proposed reconstruction framework is verified on a series of dynamic cardiac surgical images. Future efforts will focus on improving the training speed and solving the problem of limited field of view.
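The abstract above reports reconstruction quality in PSNR (27.983 dB). As a reminder of what that figure measures, here is a minimal sketch of the standard PSNR formula applied to flat pixel sequences; this is a generic illustration, not the authors' code, and the `psnr` helper and its example inputs are hypothetical.

```python
import math

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio (dB) between two equal-length pixel sequences.

    peak is the maximum possible pixel value (1.0 for unit-range images).
    """
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical signals: no noise
    return 10 * math.log10(peak ** 2 / mse)

# A uniform error of 0.1 on unit-range pixels gives MSE = 0.01, i.e. 20 dB.
print(psnr([1.0, 1.0, 1.0], [0.9, 0.9, 0.9]))
```

Higher values indicate a closer match; scores near 28 dB, as reported for the cardiac surface reconstruction, are typical of good dense reconstructions on unit-range images.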
Related Journals:

ACM Transactions on Graphics, Proceedings - Graphics Interface, IET Software