Photon counting performance of amorphous selenium and its dependence on detector structure
Stavro, Jann; Goldan, Amir H.; Zhao, Wei
2018 Journal of Medical Imaging
doi: 10.1117/1.JMI.5.4.043502; pmid: 30840737
Abstract. Photon counting detectors (PCDs) have the potential to improve x-ray imaging; however, they are still hindered by high costs and performance limitations. Using amorphous selenium (a-Se) can significantly reduce the cost of PCDs compared with modern crystalline semiconductors and enables large-area deposition. We are developing a direct conversion field-shaping multiwell avalanche detector (SWAD) to overcome the limitations of low carrier mobility and low charge conversion gain in a-Se. SWAD’s dual-grid design creates separate nonavalanche interaction (bulk) and avalanche sensing (well) regions, achieving depth-independent avalanche gain. Unipolar time differential (UTD) charge sensing, combined with tunable avalanche gain in the well region, allows for fast response and high charge gain. We developed a probability-based numerical simulation to investigate the impact of UTD charge sensing and avalanche gain on the photon counting performance of different a-Se detector configurations. Pulse height spectra (PHS) for 59.5- and 30-keV photons were simulated. We observed excellent agreement between our model and previously published PHS measurements for a planar detector. The energy resolution improved significantly, from 33 keV for the planar detector to ∼7 keV for SWAD. SWAD was found to have a linear count-rate response approaching 200 kcps/pixel.
Evaluation of a photon counting Medipix3RX cadmium zinc telluride spectral x-ray detector
Marsh, Jeffrey F.; Jorgensen, Steven M.; Rundle, David S.; Vercnocke, Andrew J.; Leng, Shuai; Butler, Philip H.; McCollough, Cynthia H.; Ritman, Erik L.
2018 Journal of Medical Imaging
doi: 10.1117/1.JMI.5.4.043503; pmid: 30840738
Abstract. We assess the performance of a cadmium zinc telluride (CZT)-based Medipix3RX energy-resolving and photon-counting x-ray detector as a candidate for spectral microcomputed tomography (micro-CT) imaging. It features an array of 128 × 128 pixels at a 110-μm pitch, each with four simultaneous threshold counters that utilize real-time charge summing. Each pixel’s response is assessed by imaging with a range of incident x-ray intensities and detector integration times. Energy-related assessments are made by exposing the detector to the emission from an I-125 radioisotope brachytherapy seed. Long-term stability is assessed by repeating identical exposures over the course of 1 h. The high yield of properly functioning pixels (98.8%), long-term stability (linear regression of whole-chip response over 1 h of acquisitions: y = − 0.0038x + 2284; standard deviation: 3.7 counts), and energy resolution [2.5 keV full-width half-maximum (FWHM) for a single pixel, 3.7 keV FWHM across the full image] make this device suitable for spectral micro-CT.
MRI-based pseudo CT synthesis using anatomical signature and alternating random forest with iterative refinement model
Lei, Yang; Jeong, Jiwoong Jason; Wang, Tonghe; Shu, Hui-Kuo; Patel, Pretesh; Tian, Sibo; Liu, Tian; Shim, Hyunsuk; Mao, Hui; Jani, Ashesh B.; Curran, Walter J.; Yang, Xiaofeng
2018 Journal of Medical Imaging
doi: 10.1117/1.JMI.5.4.043504; pmid: 30840748
Abstract. We develop a learning-based method to generate patient-specific pseudo computed tomography (CT) from routinely acquired magnetic resonance imaging (MRI) for potential MRI-based radiotherapy treatment planning. The proposed pseudo CT (PCT) synthesis method consists of a training stage and a synthesizing stage. During the training stage, patch-based features are extracted from MRIs. Using feature selection, the most informative features are identified as an anatomical signature and used to train a sequence of alternating random forests based on an iterative refinement model. During the synthesizing stage, we feed the anatomical signatures extracted from an MRI into the sequence of trained forests for PCT synthesis. The synthesized PCT was compared with the original CT (ground truth) to quantitatively assess the synthesis accuracy. The mean absolute error, peak signal-to-noise ratio, and normalized cross-correlation indices were 60.87 ± 15.10 HU, 24.63 ± 1.73 dB, and 0.954 ± 0.013 for 14 patients’ brain data and 29.86 ± 10.4 HU, 34.18 ± 3.31 dB, and 0.980 ± 0.025 for 12 patients’ pelvic data, respectively. We have investigated a learning-based approach to synthesize CTs from routine MRIs and demonstrated its feasibility and reliability. The proposed PCT synthesis technique can be a useful tool for MRI-based radiation treatment planning.
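The train-then-synthesize pipeline this abstract describes (patch features, feature selection to form a signature, forest regression of CT intensity) can be sketched with a toy stand-in. This is a minimal illustration, not the authors' implementation: the patch size, forest settings, single plain random forest (in place of the alternating/iterative refinement scheme), and the synthetic MRI/CT pair are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression

def extract_patches(img, centers, half=2):
    """Flatten (2*half+1)^2 patches around each center voxel into feature rows."""
    return np.array([img[r - half:r + half + 1, c - half:c + half + 1].ravel()
                     for r, c in centers])

# Toy training pair: one "MRI" slice and an aligned "CT" slice (synthetic).
rng = np.random.default_rng(0)
mri = rng.random((64, 64))
ct = 1000 * mri + rng.normal(0, 5, (64, 64))  # linked intensities plus noise

centers = [(r, c) for r in range(2, 62) for c in range(2, 62)]
X = extract_patches(mri, centers)              # patch-based features
y = np.array([ct[r, c] for r, c in centers])   # target CT intensities

# Feature selection keeps the most informative patch elements
# (the role the "anatomical signature" plays in the paper).
selector = SelectKBest(f_regression, k=9).fit(X, y)
forest = RandomForestRegressor(n_estimators=20, random_state=0).fit(
    selector.transform(X), y)

# Synthesis: predict pseudo-CT values from MRI patch signatures.
pred = forest.predict(selector.transform(X[:5]))
mae = np.mean(np.abs(pred - y[:5]))
```

In the paper the forest stage is a *sequence* of alternating forests refined iteratively; the single forest here only shows the basic signature-to-intensity regression.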
Imaging biomarkers in thyroid eye disease and their clinical associations
Chaganti, Shikha; Nelson, Katrina; Mundy, Kevin; Harrigan, Robert; Galloway, Robert; Mawn, Louise A.; Landman, Bennett
2018 Journal of Medical Imaging
doi: 10.1117/1.JMI.5.4.044001; pmid: 30345325
Abstract. The purpose of this study is to understand the phenotypes of thyroid eye disease (TED) through data derived from a multiatlas segmentation of computed tomography (CT) imaging. Images of 170 orbits of 85 retrospectively selected TED patients were analyzed with the developed automated segmentation tool. Twenty-five bilateral orbital structural metrics were used to perform principal component analysis (PCA). PCA of the 25 structural metrics identified the two most dominant structural phenotypes, the “big volume phenotype” and the “stretched optic nerve phenotype,” which together accounted for 60% of the variance. Most subjects in the study exhibit one of these characteristics or a combination of both. A Kendall rank correlation between the principal components (phenotypes) and clinical data showed that the big volume phenotype was very strongly correlated (p-value < 0.05) with motility defects and loss of visual acuity, whereas the stretched optic nerve phenotype was strongly correlated (p-value < 0.05) with an increased Hertel measurement, relatively better visual acuity, and smoking. Two clinical subtypes of TED, type 1 with enlarged muscles and type 2 with proptosis, are recognizable in CT imaging. Our automated algorithm identifies the phenotypes and finds associations with clinical markers.
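The PCA step above (25 metrics per patient reduced to two dominant phenotype components) can be illustrated with synthetic data in place of the real 85 × 25 metric matrix. The two hidden "phenotype" factors below are an assumption of the sketch, not the study's data:

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for the study's data matrix: 85 patients x 25
# bilateral orbital metrics, generated from two latent factors so that
# two principal components dominate.
rng = np.random.default_rng(42)
latent = rng.normal(size=(85, 2))               # hidden "phenotype" factors
loadings = rng.normal(size=(2, 25))
metrics = latent @ loadings + 0.3 * rng.normal(size=(85, 25))

# Standardize each metric, then project onto the two dominant components.
z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
pca = PCA(n_components=2).fit(z)
phenotype_scores = pca.transform(z)             # per-patient phenotype loadings
explained = pca.explained_variance_ratio_.sum() # variance captured by 2 PCs
```

In the study, each patient's `phenotype_scores` would then be rank-correlated (Kendall's tau) against clinical variables such as motility defects or Hertel measurement.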
Automated segmentation of cellular images using an effective region force
Mohiuddin, Khadeejah; Wan, Justin W. L.
2018 Journal of Medical Imaging
doi: 10.1117/1.JMI.5.4.044002; pmid: 30345326
Abstract. Understanding the behavior of cells is an important problem for biologists. Significant research has been done to facilitate this by automating the segmentation of microscopic cellular images. Bright-field images of cells are particularly difficult to segment due to features such as low contrast, missing boundaries, and broken halos. We present two algorithms for automated segmentation of cellular images. These algorithms are based on a graph-partitioning approach in which each pixel is modeled as a node of a weighted graph. The method combines an effective region force with either a Laplacian or a total variation boundary force, giving the two models. The region force can be interpreted as the conditional probability of a pixel belonging to a certain class (cell or background) given a small set of already labeled pixels. For practicality, we use only a small set of background pixels from the border of the cell images as the labeled set. Both algorithms give good results on bright-field images. Because of its faster performance, the Laplacian-based algorithm is also tested on a variety of other datasets, including fluorescent images, phase-contrast images, and 2-D and 3-D simulated images. The results show that the algorithm performs well and consistently across a range of cell image features, such as cell shape, size, contrast, and noise level.
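A minimal sketch of the Laplacian-based flavor of this idea on a toy image: intensity-similarity edge weights supply the boundary force, and a per-pixel probability enters as a unary (region) term. The Gaussian edge weight, the specific energy `(L + γI)x = γ(2p − 1)`, and all parameters are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.sparse import coo_matrix, diags, identity
from scipy.sparse.linalg import spsolve

def segment(img, fg_prob, gamma=1.0, sigma=0.1):
    """Graph-Laplacian segmentation with a region force.
    img: 2-D intensity image; fg_prob: per-pixel probability of 'cell'."""
    h, w = img.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    # 4-neighbour edges; weights are high where intensities are similar
    # (the boundary force discourages cuts inside smooth regions).
    a = np.concatenate([idx[:, :-1].ravel(), idx[:-1, :].ravel()])
    b = np.concatenate([idx[:, 1:].ravel(), idx[1:, :].ravel()])
    wgt = np.exp(-((img.ravel()[a] - img.ravel()[b]) ** 2) / sigma)
    W = coo_matrix((np.r_[wgt, wgt], (np.r_[a, b], np.r_[b, a])),
                   shape=(n, n)).tocsr()
    L = diags(np.asarray(W.sum(axis=1)).ravel()) - W  # graph Laplacian
    # Region force: unary term pulling each pixel toward +1 (cell) or -1.
    x = spsolve((L + gamma * identity(n)).tocsc(),
                gamma * (2 * fg_prob.ravel() - 1))
    return (x > 0).reshape(h, w)

# Toy image: bright square "cell" on a dark background, noisy region prior.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
rng = np.random.default_rng(1)
fg_prob = np.clip(img + 0.1 * rng.normal(size=img.shape), 0, 1)
mask = segment(img, fg_prob)
```

In the paper, the region probability comes from the small labeled set of border background pixels rather than being given directly as here.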
Deep convolutional neural network-based patch classification for retinal nerve fiber layer defect detection in early glaucoma
Panda, Rashmi; Puhan, Niladri B.; Rao, Aparna; Mandal, Bappaditya; Padhy, Debananda; Panda, Ganapati
2018 Journal of Medical Imaging
doi: 10.1117/1.JMI.5.4.044003; pmid: 30840736
Abstract. Glaucoma is a progressive optic neuropathy characterized by peripheral visual field loss, caused by degeneration of retinal nerve fibers. The peripheral vision loss due to glaucoma is asymptomatic; if not detected and treated at an early stage, it leads to complete and irreversible blindness. The retinal nerve fiber layer defect (RNFLD) provides the earliest objective evidence of glaucoma. We therefore explore cost-effective red-free fundus imaging for RNFLD detection, with a view to practical computer-assisted early glaucoma risk assessment. An RNFLD appears as a wedge-shaped arcuate structure radiating from the optic disc. The very low contrast between an RNFLD and the background makes visual detection challenging even for medical experts. In our study, we formulate a deep convolutional neural network (CNN)-based patch classification strategy for RNFLD boundary localization. The deep CNN model is trained on a large number of RNFLD and background image patches, extracting sufficient discriminative information for accurate RNFLD boundary pixel classification. The proposed approach achieves enhanced RNFLD detection performance, with a sensitivity of 0.8205 and 0.2000 false positives per image on a newly created early glaucomatous fundus image database.
Highlighting nerves and blood vessels for ultrasound-guided axillary nerve block procedures using neural networks
Smistad, Erik; Johansen, Kaj Fredrik; Iversen, Daniel Høyer; Reinertsen, Ingerid
2018 Journal of Medical Imaging
doi: 10.1117/1.JMI.5.4.044004; pmid: 30840734
Abstract. Ultrasound images acquired during axillary nerve block procedures can be difficult to interpret. Highlighting the important structures, such as nerves and blood vessels, may be useful for the training of inexperienced users. A deep convolutional neural network is used to identify the musculocutaneous, median, ulnar, and radial nerves, as well as the blood vessels, in ultrasound images. A dataset of 49 subjects is collected and used for training and evaluation of the neural network. Several image augmentations, such as rotation, elastic deformation, shadows, and horizontal flipping, are tested. The neural network is evaluated using cross validation. The results showed that the blood vessels were the easiest to detect, with precision and recall above 0.8. Among the nerves, the median and ulnar nerves were the easiest to detect, with F-scores of 0.73 and 0.62, respectively. The radial nerve was the hardest to detect, with an F-score of 0.39. Image augmentations proved effective, increasing the F-score by as much as 0.13. A Wilcoxon signed-rank test showed that the improvements from the rotation, shadow, and elastic deformation augmentations were significant, and the combination of all augmentations gave the best result. The results are promising; however, more work remains, as the precision and recall are still too low. A larger dataset, combined with anatomical and temporal models, is most likely needed to improve accuracy.
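Three of the augmentations named in this abstract (rotation, horizontal flipping, and elastic deformation) can be sketched as follows. The angles, displacement scale, and smoothing below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import rotate, map_coordinates, gaussian_filter

def augment(img, rng):
    """Yield augmented copies of a 2-D image: small rotation,
    horizontal flip, and elastic deformation."""
    # Rotation by a random small angle, keeping the original shape.
    yield rotate(img, angle=rng.uniform(-15, 15), reshape=False, mode="nearest")
    # Horizontal flip.
    yield np.fliplr(img)
    # Elastic deformation: sample a smooth random displacement field
    # and resample the image along the displaced coordinates.
    dx = gaussian_filter(rng.normal(size=img.shape), sigma=4) * 8
    dy = gaussian_filter(rng.normal(size=img.shape), sigma=4) * 8
    r, c = np.meshgrid(np.arange(img.shape[0]), np.arange(img.shape[1]),
                       indexing="ij")
    yield map_coordinates(img, [r + dy, c + dx], order=1, mode="nearest")

rng = np.random.default_rng(0)
img = rng.random((64, 64))          # stand-in for an ultrasound frame
augmented = list(augment(img, rng))
```

The paper's shadow augmentation (darkening regions to mimic acoustic shadows) is ultrasound-specific and is omitted from this generic sketch.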