Imputing missing standard deviations in meta-analyses can provide accurate results
Toshi A Furukawa, Corrado Barbui, Andrea Cipriani, Paolo Brambilla, Norio Watanabe
Department of Psychiatry and Cognitive-Behavioral Medicine, Nagoya City University Graduate School of Medical Sciences,
Mizuho-cho, Mizuho-ku, Nagoya 467-8601 Japan
Department of Medicine and Public Health, Section of Psychiatry, University of Verona, Ospedale Policlinico, 37134 Verona, Italy
Department of Pathology and Experimental and Clinical Medicine, Section of Psychiatry, University of Udine,
Policlinico Universitario via Colugna 50, 33100 Udine, Italy
Health Services Research Department, Institute of Psychiatry, King’s College London, University of London,
De Crespigny Park, London SE5 8AF, UK
Accepted 16 June 2005
Background and Objectives: Many reports of randomized controlled trials (RCTs) fail to provide standard deviations (SDs) of their
continuous outcome measures. Some meta-analysts substitute them with SDs reported in other studies, either from another meta-analysis or
from other studies in the same meta-analysis. But the validity of such practices has never been empirically examined.
Methods: We compared the actual standardized mean difference (SMD) of individual RCTs and the meta-analytically pooled SMD of
all RCTs against those based on the above-mentioned two imputation methods in two meta-analyses of antidepressants.
Results: Two meta-analyses included 39 RCTs of fluoxetine (n = 3,681) and 25 RCTs of amitriptyline (n = 1,832), which had actually
reported means and SDs of the Hamilton Rating Scale for Depression. According to either of the two proposed imputation methods, the
agreement between actual SMDs and imputed SMDs for individual RCTs was very good, with ANOVA intraclass correlation coefficients
between 0.61 and 0.97. The agreement between the actual pooled SMD and the imputed one was even better, with minimal differences in
both their point estimates and 95% confidence intervals.
Conclusion: For a systematic review where some of the identified trials do not report SDs, it appears safe to borrow SDs from other
studies. © 2006 Elsevier Inc. All rights reserved.
Keywords: Meta-analysis; Standard deviation; Missing data; Imputation; Depressive disorder
Conduct of a systematic review or a meta-analysis involves a comprehensive search of relevant randomized controlled trials (RCTs) and their quantitative or qualitative synthesis. To pool results on a continuous outcome measure of the identified RCTs quantitatively, one needs both means and standard deviations (SDs) on that outcome measure for each intervention group.
Many reports of RCTs, however, fail to provide SDs for their continuous outcomes. It is sometimes possible to use P or t or F values, reported in the original RCTs, to calculate exact SDs. When none of these is available, it is recommended that one should contact the primary authors. However, the yield is very often very low; some are incontactable, some never respond, and others report that the data are discarded, lost, or irretrievable because there are no longer any computers to read the tapes.
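The back-calculation from a reported t value mentioned above can be sketched as follows. This is a minimal illustration under standard assumptions, not the authors' own code; the function name is ours. It inverts the usual independent-samples t formula, t = (m1 - m2) / (SD_pooled * sqrt(1/n1 + 1/n2)), to recover the pooled SD when the group means, sample sizes, and t statistic are reported.

```python
import math

def pooled_sd_from_t(mean1, mean2, n1, n2, t):
    """Back-calculate the pooled SD from a two-sample t statistic.

    Rearranges t = (mean1 - mean2) / (sd_pooled * sqrt(1/n1 + 1/n2))
    to solve for sd_pooled. Assumes an independent-samples t test with
    a pooled-variance denominator.
    """
    return (mean1 - mean2) / (t * math.sqrt(1.0 / n1 + 1.0 / n2))

# Example: means 12 and 15 in groups of 50 each with t = -2.5
# recover a pooled SD of 6.0.
sd = pooled_sd_from_t(12.0, 15.0, 50, 50, -2.5)
```

An analogous route from an exact two-sided P value first converts P back to a t value via the inverse t distribution (e.g., `scipy.stats.t.ppf`) and then applies the same rearrangement.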
Some meta-analysts then resort to substituting the SDs of known outcome measures with those reported in other studies, either from another meta-analysis or from other studies in the same meta-analysis. But the validity of such practices has never been empirically examined.
The present article therefore aims to examine empirically the validity of borrowing SDs from other studies when individual RCTs fail to report SDs in a meta-analysis, by simulating the above-mentioned two imputation methods for SDs in two meta-analyses on antidepressants that we have conducted [3,4]. Systematic reviews for depression are particularly suitable for this purpose, because the Hamilton Rating Scale for Depression (HRSD) is the de facto standard in symptom assessment and is used in many depression trials identified for overviews.
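The core computation that an imputed SD feeds into can be sketched as follows. This is an illustrative outline, not the authors' analysis code: the function names are ours, the SMD is the simple Cohen's d form, the variance is the standard large-sample approximation, and the pooling is fixed-effect inverse-variance (the paper does not specify these exact choices).

```python
import math

def smd(mean_tx, mean_ctrl, sd_pooled):
    """Standardized mean difference (Cohen's d).

    sd_pooled may be the trial's own pooled SD or, when missing,
    an SD borrowed from another study on the same scale.
    """
    return (mean_tx - mean_ctrl) / sd_pooled

def smd_variance(d, n1, n2):
    """Large-sample approximation to the sampling variance of an SMD."""
    return (n1 + n2) / (n1 * n2) + d * d / (2.0 * (n1 + n2))

def pool_fixed_effect(smds, variances):
    """Inverse-variance fixed-effect pooling.

    Returns the pooled SMD and its 95% confidence interval.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Example: two hypothetical trials; the second borrows the first's SD.
d1 = smd(10.0, 13.0, 6.0)   # trial reports its own pooled SD
d2 = smd(11.0, 13.0, 6.0)   # SD imputed from the first trial
v1 = smd_variance(d1, 50, 50)
v2 = smd_variance(d2, 60, 60)
pooled, (ci_lo, ci_hi) = pool_fixed_effect([d1, d2], [v1, v2])
```

Comparing the pooled estimate and interval computed with actual versus imputed SDs, as done here trial by trial, is the agreement check the paper reports via intraclass correlation coefficients.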
* Corresponding author. Tel.: +81 52 853 8271; fax: +81 52 852
E-mail address: email@example.com (T.A. Furukawa).
Journal of Clinical Epidemiology 59 (2006) 7–10