Impact of blue light filtering glasses on computer vision syndrome in radiology residents: a pilot study
Dabrowiecki, Alexander; Villalobos, Alexander; Krupinski, Elizabeth A.
2020 Journal of Medical Imaging
doi: 10.1117/1.JMI.7.2.022402; pmid: 31824984
Abstract. Computer vision syndrome (CVS) is an umbrella term for a pattern of symptoms associated with prolonged digital screen exposure, such as eyestrain, headaches, blurred vision, and dry eyes. Commercially available blue light filtering lenses (BLFL) are advertised as improving CVS. Our pilot study evaluates the effectiveness of BLFL in reducing CVS symptoms and fatigue in a cohort of radiologists. A prospective crossover study was conducted with ten radiology residents randomized into two cohorts: one wearing BLFL first and then a sham pair (non-BLFL), the other wearing a sham pair first and then BLFL, over two weeks during normal clinical work. Participants completed the validated computer vision syndrome questionnaire (CVS-Q) and the Swedish Occupational Fatigue Inventory (SOFI). The majority of symptoms [11/16 (68.8%) on the CVS-Q and 13/16 (81.3%) on the SOFI] were reduced (i.e., less severe) with the BLFL compared to the sham glasses. Females rated symptoms of sleepiness and physical discomfort on the SOFI, and overall CVS-Q symptoms, as more severe. Postgraduate year (PGY)-2 residents rated all symptoms as more severe than PGY-3/4 residents did. BLFL may ameliorate CVS symptoms. Future studies with larger sample sizes and participants of different ages are required to verify the potential of BLFL.
Gist processing in digital breast tomosynthesis
Wu, Chia-Chien; D’Ardenne, Nicholas M.; Nishikawa, Robert M.; Wolfe, Jeremy M.
2020 Journal of Medical Imaging
doi: 10.1117/1.JMI.7.2.022403; pmid: 31853462
Abstract. Evans et al. (2016) showed that radiologists can classify mammograms as normal or abnormal at above-chance levels after a 250-ms exposure. Our study documents a similar gist signal in digital breast tomosynthesis (DBT) images. DBT is a relatively new technology that creates a three-dimensional image set of slices through the volume of the breast. It improves performance over two-dimensional (2-D) mammography, but at a cost in reading time. In the experiment presented, radiologists (N = 16) viewed “movies” of DBT images from single breasts for an average of 1.5 s per case. Observers then marked the most likely lesion position on a blank outline and rated each case on a six-point scale from (1) certainly normal to (6) certainly recall. Results show that radiologists can discriminate normal from abnormal DBT cases at above-chance levels, as in 2-D mammography. Ability was correlated with experience reading DBT. Observers performed at above-chance levels even on those images where they could not localize the target, suggesting that this is a global signal that could prove valuable in the clinic.
Translation of adapting quantitative CT data from research to local clinical practice: validation evaluation of fully automated procedures to provide lung volumes and percent emphysema
Leung, Krystle M.; Curran-Everett, Douglas; Regan, Elizabeth A.; Lynch, David A.; Jacobson, Francine L.
2020 Journal of Medical Imaging
doi: 10.1117/1.JMI.7.2.022404; pmid: 31824985
Abstract. Based upon retrospective review of CT reports, current clinical chest CT reporting includes only limited qualitative assessment of emphysema and rarely mentions lung volumes. Quantitative CT (QCT) analysis performed in COPDGene and other research cohorts utilizes semiautomated segmentation procedures and a well-established research method (Thirona). We compared this reference QCT data with fully automated QCT analysis that can be obtained at the time of the CT scan and sent to PACS along with standard chest CT images. A total of 164 COPDGene® cohort study subjects enrolled at Brigham and Women’s Hospital had baseline and 5-year follow-up CT scans. Subjects included 17 nonsmoking controls, 92 smokers with normal spirometry, 15 preserved ratio impaired spirometry (PRISm) patients, 12 GOLD 1, 20 GOLD 2, and 8 GOLD 3–4. Of the clinical reports, 97% (n = 319) did not mention lung volumes, and 14% (n = 46) made no mention of emphysema. Total lung volumes determined by the fully automated algorithm were consistently 47 milliliters (ml) less than the Thirona reference value for all subjects (95% confidence interval −62 to −32 ml). Percent emphysema values were equivalent to the Thirona reference values. Well-established research reference data can be used to evaluate and validate automated QCT software. Validation can be repeated as software is updated.
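The reported volume offset is essentially a paired-difference bias with a 95% confidence interval, as in a Bland-Altman-style agreement analysis. A minimal stdlib sketch of that computation, using hypothetical paired lung volumes rather than the study's data:

```python
from statistics import mean, stdev

def volume_bias_ci(auto_ml, ref_ml, z=1.96):
    """Mean difference (automated - reference) with an approximate 95%
    confidence interval via the normal critical value. Illustrative only;
    the study's exact statistical procedure may differ."""
    diffs = [a - r for a, r in zip(auto_ml, ref_ml)]
    d = mean(diffs)
    se = stdev(diffs) / len(diffs) ** 0.5  # standard error of the mean
    return d, (d - z * se, d + z * se)

# Hypothetical paired total lung volumes in ml (not study data):
reference = [5200, 4800, 6100, 5500, 5000, 5900]
automated = [v - 47 + e for v, e in zip(reference, [5, -10, 8, -3, 0, -6])]

bias, (lo, hi) = volume_bias_ci(automated, reference)
```

A consistent negative bias with a confidence interval excluding zero, as reported above, indicates a systematic (and therefore correctable) offset rather than random disagreement.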
Is there a safety-net effect with computer-aided detection?
Du-Crow, Ethan; Astley, Susan M.; Hulleman, Johan
2020 Journal of Medical Imaging
doi: 10.1117/1.JMI.7.2.022405; pmid: 31903408
Abstract. Computer-aided detection (CAD) systems are used to aid readers interpreting screening mammograms. An expert reader searches the image initially unaided and then once again with the aid of CAD, which prompts automatically detected suspicious regions. This could lead to a “safety-net” effect, where the initial unaided search of the image is adversely affected by the fact that it is preliminary to an additional search with CAD and may, therefore, be less thorough. To investigate the existence of such an effect, we created a visual search experiment for nonexpert observers mirroring breast screening with CAD. Each observer searched 100 images for microcalcification clusters within synthetic images in both prompted (CAD) and unprompted (no-CAD) conditions. Fifty-two participants were recruited for the study, 48 of whom had their eye movements tracked in real time; the other 4 participants could not be accurately calibrated, so only behavioral data were collected. In the CAD condition, before prompts were displayed, image coverage was significantly lower than coverage in the no-CAD condition (t47 = 5.29, p < 0.0001). Observer sensitivity was significantly greater for targets marked by CAD than for the same targets in the no-CAD condition (t51 = 6.56, p < 0.001). For targets not marked by CAD, there was no significant difference in observer sensitivity in the CAD condition compared with the same targets in the no-CAD condition (t51 = 0.54, p = 0.59). These results suggest that the initial search may be influenced by the subsequent availability of CAD; if so, cross-sectional CAD efficacy studies should account for the effect when estimating benefit.
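The coverage and sensitivity comparisons reported here are paired, within-observer tests (hence t statistics with df = participants − 1). A minimal stdlib sketch of the paired t statistic, on hypothetical per-observer coverage fractions rather than the study's data:

```python
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for two within-subject
    conditions; a rough sketch of the comparison style reported above."""
    diffs = [a - b for a, b in zip(x, y)]
    se = stdev(diffs) / len(diffs) ** 0.5  # SE of the mean difference
    return mean(diffs) / se, len(diffs) - 1

# Hypothetical per-observer image coverage fractions (not study data):
no_cad = [0.82, 0.79, 0.85, 0.80, 0.77, 0.83]
cad    = [0.74, 0.73, 0.78, 0.75, 0.70, 0.76]

t, df = paired_t(no_cad, cad)  # positive t: coverage lower with CAD
```

In practice the p-value would come from the t distribution with the returned df (e.g., via scipy.stats); only the statistic itself is computed here to stay dependency-free.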
Rapid perceptual processing in two- and three-dimensional prostate images
Treviño, Melissa; Turkbey, Baris; Wood, Bradford J.; Pinto, Peter A.; Czarniecki, Marcin; Choyke, Peter L.; Horowitz, Todd S.
2020 Journal of Medical Imaging
doi: 10.1117/1.JMI.7.2.022406; pmid: 31930156
Abstract. Radiologists can identify whether a radiograph is abnormal or normal at above-chance levels in breast and lung images presented for half a second or less. This early perceptual processing has only been demonstrated in static two-dimensional images (e.g., mammograms). Can radiologists rapidly extract the “gestalt” from more complex imaging modalities? For example, prostate multiparametric magnetic resonance imaging (mpMRI) displays a series of images as a virtual stack and comprises multiple imaging sequences: anatomical information from the T2-weighted (T2W) sequence, and functional information from diffusion-weighted imaging and apparent diffusion coefficient sequences. We first tested rapid perceptual processing in static T2W images and then in the two functional sequences. Finally, we examined whether this rapid radiological perception could be observed using T2W multislice imaging. Readers with experience in prostate mpMRI could detect and localize lesions in all sequences after viewing a 500-ms static image. Experienced prostate readers could also detect and localize lesions when viewing multislice image stacks presented as brief movies, with image slices presented at 48, 96, or 144 ms. The ability to quickly extract the perceptual gestalt may be a general property of expert perception, even in complex imaging modalities.
Perceptual training: learning versus attentional shift
Banerjee, Soham; Drew, Trafton; Mills, Megan K.; Auffermann, William F.
2020 Journal of Medical Imaging
doi: 10.1117/1.JMI.7.2.022407; pmid: 31903409
Abstract. Prior research has demonstrated that perceptual training can improve the ability of healthcare trainees to identify abnormalities on medical images, but it is unclear whether the improved performance is due to learning or to attentional shift, the diversion of perceptual resources away from other activities to a specified task. Our objective is to determine whether research subject performance in perceiving the central venous catheter position on radiographs improves after perceptual training and, if so, whether the improvement is due to learning or to an attentional shift. Forty-one physician assistant students were educated on the appropriate radiographic position of central venous catheters and then asked to evaluate the catheter position in two sets of radiographic cases. The experimental group was provided perceptual training between case sets one and two; the control group was not. Participants were asked to characterize central venous catheters for appropriate positioning (task of interest) and to assess radiographs for cardiomegaly (our marker for attentional shift). Our results demonstrated increased confidence in localization in the experimental group (p < 0.001) but not in the control group (p = 0.882). The ability of subjects to locate the catheter tip significantly improved in both the control and experimental groups. Both the experimental (p = 0.007) and control (p = 0.001) groups demonstrated equivalent decreased performance in assessing cardiomegaly; the difference between groups was not significant (p = 0.234). This suggests that the performance improvement was secondary to learning and not due to an attentional shift.
Interpretation time for screening mammography as a function of the number of computer-aided detection marks
Schwartz, Tayler M.; Hillis, Stephen L.; Sridharan, Radhika; Lukyanchenko, Olga; Geiser, William; Whitman, Gary J.; Wei, Wei; Haygood, Tamara Miner
2020 Journal of Medical Imaging
doi: 10.1117/1.JMI.7.2.022408; pmid: 32042859
Abstract. Purpose: Computer-aided detection (CAD) alerts radiologists to findings potentially associated with breast cancer but is notorious for creating false-positive marks. Although a previous paper found that radiologists took more time to interpret mammograms with more CAD marks, our impression was that this was not true in actual clinical interpretation. We hypothesized that radiologists would selectively disregard these marks when present in larger numbers. Approach: We performed a retrospective review of bilateral digital screening mammograms. We used a mixed linear regression model to assess the relationship between the number of CAD marks and ln(interpretation time) after adjustment for covariates. Both readers and mammograms were treated as random sampling units. Results: Ten radiologists, with a median experience after residency of 12.5 years (range 6 to 24), interpreted 1832 mammograms. After accounting for the number of images, Breast Imaging Reporting and Data System category, and breast density, the number of CAD marks was positively associated with longer interpretation time, with each additional CAD mark proportionally increasing median interpretation time by 4.35% for a typical reader. Conclusions: We found no support for our hypothesis that radiologists selectively disregard CAD marks when they are present in larger numbers.
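Because the model is fit on ln(interpretation time), a per-mark coefficient b translates into a multiplicative effect of exp(b) on median time, which is why the effect is reported as a percentage per mark. A short sketch of that back-transformation, taking the reported 4.35% per mark as given (all other values illustrative):

```python
import math

# A 4.35% proportional increase per CAD mark implies a log-scale
# coefficient b = ln(1.0435) in the mixed model described above.
b = math.log(1.0435)

def predicted_multiplier(n_marks, coef=b):
    """Multiplicative effect of n CAD marks on median interpretation time
    under a log-linear model: exp(coef * n) = (1 + pct)**n."""
    return math.exp(coef * n_marks)

pct_per_mark = (predicted_multiplier(1) - 1) * 100  # back to percent
five_marks = predicted_multiplier(5)                # compound effect
```

Note that the effect compounds: five marks multiply median time by roughly 1.0435**5 ≈ 1.24, not by 1 + 5 × 0.0435.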
Influence of background lung characteristics on nodule detection with computed tomography
Li, Boning; Smith, Taylor B.; Choudhury, Kingshuk R.; Harrawood, Brian; Ebner, Lukas; Roos, Justus E.; Rubin, Geoffrey D.
2020 Journal of Medical Imaging
doi: 10.1117/1.JMI.7.2.022409; pmid: 32016136
Abstract. We sought to characterize local lung complexity in chest computed tomography (CT) and its impact on the detectability of pulmonary nodules. Forty volumetric chest CT scans were created by embedding between three and five simulated 5-mm lung nodules into one of three volumetric chest CT datasets. Thirteen radiologists evaluated 157 nodules, resulting in 2041 detection opportunities. Analyzing the substrate CT data prior to nodule insertion, 14 image features were measured within a region around each nodule location. A generalized linear mixed-effects statistical model was fit to the data to quantify the contribution of each metric to detectability. The model was tuned for simplicity, interpretability, and generalizability using stepwise regression applied to the primary features and their interactions. We found that variables corresponding to each of five categories (local structural distractors, local intensity, global context, local vascularity, and contiguity with structural distractors) were significant (p < 0.01) factors in a standardized model. Moreover, reader-specific models revealed significant differences among readers: significant distraction (missed detections) was associated with either local-intensity or local-structural characteristics, but never both. Readers with significant local-intensity distraction (n = 10) detected substantially fewer lung nodules than those significantly distracted by local structure (n = 2): 46.1% versus 65.3% mean nodules detected, respectively.
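As an illustration of how per-location image features can map to a detection probability under the logistic link used by such generalized linear models, here is the fixed-effects part only, with entirely hypothetical feature values and coefficients (the study's actual model also includes random effects for readers and cases):

```python
import math

def detection_prob(features, coefs, intercept):
    """Logistic-link sketch: probability of nodule detection from
    standardized background features. Coefficients are hypothetical."""
    z = intercept + sum(c * f for c, f in zip(coefs, features))
    return 1 / (1 + math.exp(-z))

# Hypothetical standardized features for one nodule location, in the order:
# [local structural distractors, local intensity, global context,
#  local vascularity, contiguity with structural distractors]
x = [0.5, -1.0, 0.2, 0.0, 1.0]
beta = [-0.4, 0.3, 0.2, -0.1, -0.5]  # illustrative magnitudes and signs
p = detection_prob(x, beta, intercept=0.8)
```

Negative coefficients on distractor-related features would lower the predicted detection probability, matching the study's finding that distractors drive missed detections.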
Deep learning can be used to train naïve, nonprofessional observers to detect diagnostic visual patterns of certain cancers in mammograms: a proof-of-principle study
Hegdé, Jay
2020 Journal of Medical Imaging
doi: 10.1117/1.JMI.7.2.022410; pmid: 32042860
Abstract. The scientific, clinical, and pedagogical significance of devising methodologies to train nonprofessional subjects to recognize diagnostic visual patterns in medical images has been broadly recognized. However, systematic approaches to doing so remain poorly established. Using mammography as an exemplar case, we use a series of experiments to demonstrate that deep learning (DL) techniques can, in principle, be used to train naïve subjects to reliably detect certain diagnostic visual patterns of cancer in medical images. In the main experiment, subjects were required to learn to detect statistical visual patterns diagnostic of cancer in mammograms using only the mammograms and feedback provided following the subjects’ response. We found not only that the subjects learned to perform the task at statistically significant levels, but also that their eye movements related to image scrutiny changed in a learning-dependent fashion. Two additional, smaller exploratory experiments suggested that allowing subjects to re-examine the mammogram in light of various items of diagnostic information may help further improve DL of the diagnostic patterns. Finally, a fourth small, exploratory experiment suggested that the image information learned was similar across subjects. Together, these results prove the principle that DL methodologies can be used to train nonprofessional subjects to reliably perform those aspects of medical image perception tasks that depend on visual pattern recognition expertise.