Extent and Reporting of Patient Nonenrollment in Influential Randomized Clinical Trials, 2002 to 2010


Publisher
American Medical Association
Copyright
Copyright © 2013 American Medical Association. All Rights Reserved.
ISSN
2168-6106
eISSN
2168-6114
DOI
10.1001/jamainternmed.2013.496

Abstract

Because they assign patients to treatment conditions, randomized clinical trials (RCTs) offer unparalleled internal validity for drawing inferences about the efficacy of a medical treatment. Whether such inferences can be generalized is not always clear, because many RCTs enroll a low and unrepresentative proportion of all patients.1-6 The challenge of judging the clinical utility of trial results is compounded by poor reporting: a study by Gross et al7 of trials published in leading medical journals from 1999 through 2000 found that only 28% reported the proportion of screened patients who were enrolled. These deficiencies may have been ameliorated in the past decade, both because the CONSORT statement was revised in 2001 to require more complete information on the enrollment process in reports of clinical trials,8 and because many treatment research fields have shown greater concern about generating knowledge that better informs clinical practice. Accordingly, the present study assessed the extent to which low enrollment rates remain characteristic of widely cited clinical trials and whether reporting of enrollment information has improved.

Methods

A Web of Science search was used to identify the 20 most influential English-language RCTs published from 2002 to 2010 for each of 14 prevalent chronic disorders (alcohol dependence, Alzheimer disease, breast cancer, colorectal cancer, chronic obstructive pulmonary disease, depression, diabetes mellitus, drug dependence, human immunodeficiency virus/AIDS, hypertension, ischemic heart disease, lung cancer, nicotine dependence, and schizophrenia); see the eTable for search terms and citations returned. Results were ranked by citations per year rather than total citations so that recently published trials still had the chance to rank as influential, and top-cited articles that were not RCTs (eg, major literature reviews) were excluded. The final data set comprised 280 studies (20 for each of the 14 conditions).

Raters double-coded each study on the number of patients with the disorder of interest who were screened for trial eligibility, the number who were eligible, and the number of eligible patients who agreed to enroll. When available, the reason for nonenrollment was also recorded: the number of nonenrollments due to participants not meeting study eligibility criteria, the number excluded for other reasons (eg, administrative errors), and the number of eligible participants who refused to participate (including individuals who initially refused and those who initially agreed but did not return for the start of the study).
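To make the arithmetic concrete, the following minimal Python sketch shows how a single trial's nonenrollment rate and the breakdown of reasons could be computed from counts like those coded here. The field names and numbers are hypothetical, and this is an illustration only, not the authors' analysis code.

```python
# Minimal sketch (not the authors' code): computing a trial's nonenrollment
# rate and the breakdown of reasons from CONSORT-style counts.
# All field names and example numbers are hypothetical.
from dataclasses import dataclass


@dataclass
class TrialCounts:
    screened: int    # patients with the disorder assessed for eligibility
    ineligible: int  # excluded for not meeting eligibility criteria
    refused: int     # eligible but declined, or never returned to start
    other: int       # excluded for other reasons (eg, administrative error)
    enrolled: int    # randomized into the trial


def nonenrollment_summary(t: TrialCounts) -> dict:
    """Return the nonenrollment rate and the share of screened patients
    lost to each reason, as percentages of those screened."""
    not_enrolled = t.screened - t.enrolled
    assert not_enrolled == t.ineligible + t.refused + t.other, "counts should reconcile"

    def pct(n: int) -> float:
        return round(100 * n / t.screened, 1)

    return {
        "nonenrollment_rate": pct(not_enrolled),
        "ineligible": pct(t.ineligible),
        "refused": pct(t.refused),
        "other": pct(t.other),
    }


# Hypothetical trial: 1000 patients screened, 590 enrolled.
print(nonenrollment_summary(
    TrialCounts(screened=1000, ineligible=280, refused=110, other=20, enrolled=590)))
# {'nonenrollment_rate': 41.0, 'ineligible': 28.0, 'refused': 11.0, 'other': 2.0}
```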
Results

Only 145 studies (51.8%) provided sufficient information to allow calculation of the nonenrollment rate. These RCTs had a mean (SD) nonenrollment rate of 40.1% (23.7%). For 6 of the 14 diseases, the influential trials included at least 1 study with a nonenrollment rate higher than 90%. No association emerged between year of publication and the proportion of patients not enrolled (r = −0.08; P = .37). However, year of publication was positively associated with adequate reporting of enrollment information (odds ratio, 1.19; P = .003): in 2002, only 45% of the trials reported enrollment information, but this proportion rose to 75% by 2010.

Only 98 studies (35.0%) provided sufficient information to categorize reasons for nonenrollment. In these studies, an average of 27.3% of participants did not meet eligibility criteria, 11.2% refused participation, and 3.7% were not enrolled for other reasons.
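As an illustration of how the two associations reported above could be estimated, the sketch below runs a Pearson correlation (publication year vs proportion not enrolled) and a logistic regression yielding an odds ratio per year of publication for adequate reporting. The data frame, its column names, and the numbers are all hypothetical, and the choice of pandas, SciPy, and statsmodels is an assumption; this is not the authors' analysis code.

```python
# Sketch of the two analyses described in the Results, run on hypothetical data.
import numpy as np
import pandas as pd
import scipy.stats as stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
trials = pd.DataFrame({
    "year": rng.integers(2002, 2011, size=280),    # publication year
    "nonenrollment": rng.uniform(0, 1, size=280),  # proportion of patients not enrolled
    "reported": rng.integers(0, 2, size=280),      # 1 = enrollment information reported
})

# Year of publication vs proportion not enrolled (Pearson r).
r, p = stats.pearsonr(trials["year"], trials["nonenrollment"])
print(f"r = {r:.2f}, P = {p:.2f}")

# Year of publication vs adequate reporting: odds ratio per year from logistic regression.
fit = smf.logit("reported ~ year", data=trials).fit(disp=False)
print(f"OR per year = {np.exp(fit.params['year']):.2f}, P = {fit.pvalues['year']:.3f}")
```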
Discussion

On average, the highly cited clinical trials did not enroll 40.1% of the identified patients with the disorder being studied, primarily because of eligibility criteria. Low enrollment rates can lower external validity because, by definition, eligibility criteria create trial samples that differ from real-world patient samples; the larger the proportion of patients not enrolled, the more likely it is that a study's results will not reflect what the intervention would produce in front-line clinical practice. Although exclusion criteria are sometimes essential in trials, including to protect patient safety, we add our voices to those of others who have suggested that treatment researchers use them as sparingly as possible and only with good justification.

On a more positive note, from 2002 through 2010 the proportion of clinical trials reporting complete enrollment information increased from 45% to 75%. Improved reporting may reflect the accrued influence of the CONSORT guidelines as more authors and editors become aware of them, as well as the impact of numerous studies and editorials raising concerns about unrepresentative research samples.

We close with an important caution. Gandhi et al9 found that publications of trial results tend to underreport the number of exclusion criteria in the approved protocol, and in some trials insufficient effort is put into tracking data on nonenrollment.7 Therefore, even though we have identified high rates of nonenrollment, our results may still understate the degree to which this is a reality of current clinical trial research.

Article Information

Correspondence: Dr Humphreys, Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, 401 N Quarry Rd (MC:5717), Stanford, CA 94305 (knh@stanford.edu).

Published Online: April 22, 2013. doi:10.1001/jamainternmed.2013.496

Author Contributions: Dr Humphreys had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: Humphreys. Acquisition of data: Maisel, Blodgett, Fuh, and Finney. Analysis and interpretation of data: Humphreys, Maisel, Blodgett, and Fuh. Drafting of the manuscript: Humphreys and Maisel. Critical revision of the manuscript for important intellectual content: Maisel, Blodgett, Fuh, and Finney. Statistical analysis: Maisel and Blodgett. Obtained funding: Humphreys and Finney. Administrative, technical, and material support: Finney. Study supervision: Humphreys, Maisel, and Finney.

Conflict of Interest Disclosures: None reported.

Funding/Support: This study was supported by a VA Health Services Research and Development Senior Research Career Scientist award (Dr Humphreys) and National Institute on Alcohol Abuse and Alcoholism (NIAAA) grant No. AA008689 (Dr Finney).

Disclaimer: The views expressed are those of the authors and do not necessarily represent the views of the Department of Veterans Affairs, the NIAAA, or any other US government entity.

References

1. Hlatky MA, Lee KL, Harrell FE Jr, et al. Tying clinical research to patient care by use of an observational database. Stat Med. 1984;3(4):375-387.
2. Haberfellner EM. Recruitment of depressive patients for a controlled clinical trial in a psychiatric practice. Pharmacopsychiatry. 2000;33(4):142-144.
3. Blanco C, Olfson M, Okuda M, Nunes EV, Liu SM, Hasin DS. Generalizability of clinical trials for alcohol dependence to community samples. Drug Alcohol Depend. 2008;98(1-2):123-128.
4. Humphreys K, Weisner C. Use of exclusion criteria in selecting research subjects and its effect on the generalizability of alcohol treatment outcome studies. Am J Psychiatry. 2000;157(4):588-594.
5. Le Strat Y, Rehm J, Le Foll B. How generalisable to community samples are clinical trial results for treatment of nicotine dependence: a comparison of common eligibility criteria with respondents of a large representative general population survey. Tob Control. 2011;20(5):338-343.
6. Melberg HO, Humphreys K. Ineligibility and refusal to participate in randomised trials of treatments for drug dependence. Drug Alcohol Rev. 2010;29(2):193-201.
7. Gross CP, Mallory R, Heiat A, Krumholz HM. Reporting the recruitment process in clinical trials: who are these patients and how did they get there? Ann Intern Med. 2002;137(1):10-16.
8. Moher D, Schulz KF, Altman DG; CONSORT Group. The CONSORT statement: revised recommendations for improving the quality of reports of parallel group randomized trials. BMC Med Res Methodol. 2001;1:2.
9. Gandhi M, Ameli N, Bacchetti P, et al. Eligibility criteria for HIV clinical trials and generalizability of results: the gap between published reports and study protocols. AIDS. 2005;19(16):1885-1896.

Journal

JAMA Internal Medicine, American Medical Association

Published: Jun 10, 2013
