A Note on Meta-analysis of Diagnostic Test Studies—Reply

In reply: My colleagues and I thank Dr Casazza for his critical comments. However, we do not agree with his assertion that it is inappropriate to fit an SROC curve if the Spearman rank correlation coefficient between sensitivity and 1 − specificity is lower than 0.6. The SROC curve is constructed by regressing the difference of the logits (log odds) on the sum of the logits. The difference of the logits is equivalent to the log of the odds ratio between a positive test result and disease status. The sum of the logits is a measure of the test threshold. The intercept of the regression model is a log odds ratio, and the regression coefficient quantifies the extent to which the odds ratio is dependent on the threshold used. In their excellent review on the method of SROC curves, Irwig et al1 wrote, "If the regression coefficient is near zero and not statistically significant, test accuracy for each primary study can be summarized as the odds ratio and these odds ratios can be combined using various techniques." That means that if the odds ratio is constant across test thresholds, test accuracy can be summarized by a common odds ratio or combined by other approaches such as fixed- or random-effects estimation methods.2 The intercept of our regression model including results of 30 studies was 0.21 with an SE of 0.52 and statistically not different from zero (P = .96), suggesting that the log odds ratio is independent of the test threshold. Hence, according to Irwig et al,1 it is appropriate to construct an SROC curve and to summarize test accuracy by a common odds ratio.

As Dr Casazza correctly points out, constructing SROC curves circumvents the problem of different test thresholds among studies but cannot compensate for other sources of heterogeneity. Dr Casazza faulted our use of a fixed-effects model as opposed to a random-effects model for combining odds ratios.
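The regression described above (the Moses-Littenberg approach) can be illustrated with a minimal sketch. The sensitivity/specificity pairs below are invented for demonstration only and are not the 30 studies of the original analysis; with a slope near zero, the intercept serves as a common log odds ratio.

```python
import math

# Hypothetical (sensitivity, specificity) pairs; invented for illustration only.
studies = [(0.90, 0.85), (0.85, 0.80), (0.95, 0.70), (0.80, 0.90), (0.88, 0.75)]

def logit(p):
    return math.log(p / (1.0 - p))

# D = difference of logits = log diagnostic odds ratio
# S = sum of logits, a proxy for the test threshold
D = [logit(se) - logit(1.0 - sp) for se, sp in studies]
S = [logit(se) + logit(1.0 - sp) for se, sp in studies]

# Ordinary least-squares fit of D = a + b * S
n = len(studies)
mean_S, mean_D = sum(S) / n, sum(D) / n
b = sum((s - mean_S) * (d - mean_D) for s, d in zip(S, D)) / \
    sum((s - mean_S) ** 2 for s in S)
a = mean_D - b * mean_S

# If b is near zero and not significant, the log odds ratio does not depend
# on the threshold, and exp(a) can serve as a common diagnostic odds ratio.
print(f"intercept a = {a:.3f} (common OR ~ {math.exp(a):.1f}), slope b = {b:.3f}")
```

In practice one would also test the slope for significance (eg, with a weighted regression), as Irwig et al1 recommend, before summarizing by a single odds ratio.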
In an analysis based on a fixed-effects model, inference is conditional on the studies already done. In an analysis based on a random-effects model, inference is based on the assumption that the studies are a random sample of a hypothetical population of studies. It has been shown that, under rare circumstances, both methods may theoretically give misleading results.3 However, since the primary aim of our study was to summarize the results of the studies at hand, we found a fixed-effects model appropriate.

Heterogeneity across studies was a major point of our analysis. We performed regression analysis and subgroup analyses including important study characteristics to evaluate study differences. As described in the Results section, we found that the type of images used (eg, dermoscopic vs clinical images) had an impact on the reported diagnostic performance, which means that the type of image used is a possible source of between-study heterogeneity.

References

1. Irwig L, Tosteson AN, Gatsonis C, et al. Guidelines for meta-analyses evaluating diagnostic tests. Ann Intern Med. 1994;120:667-676.
2. Irwig L, Macaskill P, Glasziou P, Fahey M. Meta-analytic methods for diagnostic test accuracy. J Clin Epidemiol. 1995;48:119-130.
3. Petitti D. Meta-Analysis, Decision Analysis, and Cost-Effectiveness Analysis: Methods for Quantitative Synthesis in Medicine. New York, NY: Oxford University Press; 1994.
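The fixed-effects combination of odds ratios discussed above can be sketched as standard inverse-variance pooling on the log scale. The 2 × 2 counts below are invented for demonstration; the variance formula is Woolf's estimate for a log odds ratio.

```python
import math

# 2x2 counts (TP, FP, FN, TN) for hypothetical primary studies (invented data).
tables = [(45, 8, 5, 42), (30, 10, 6, 54), (60, 12, 10, 48)]

log_ors, weights = [], []
for tp, fp, fn, tn in tables:
    log_or = math.log((tp * tn) / (fp * fn))   # log diagnostic odds ratio
    var = 1/tp + 1/fp + 1/fn + 1/tn            # Woolf's variance estimate
    log_ors.append(log_or)
    weights.append(1.0 / var)                  # fixed-effects weight = 1/variance

# Fixed-effects pooled estimate: weighted mean of the study log odds ratios.
pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
print(f"pooled OR = {math.exp(pooled):.1f} (log OR {pooled:.2f}, SE {se:.2f})")
```

A random-effects analysis would add a between-study variance component to each weight, widening the interval when heterogeneity is present; with the conditional-inference aim stated above, the fixed-effects weights suffice.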


Archives of Dermatology, Volume 140(3) – Mar 1, 2004

Publisher
American Medical Association
Copyright
Copyright © 2004 American Medical Association. All Rights Reserved.
ISSN
0003-987X
eISSN
1538-3652
DOI
10.1001/archderm.140.3.364


Journal

Archives of Dermatology, American Medical Association

Published: Mar 1, 2004

Keywords: heterogeneity, Spearman rank correlation coefficient, primary literature, dermoscopy, inference, test threshold, laboratory test finding
