Poor-Quality Medical Research: What Can Journals Do?

Abstract

The aim of medical research is to advance scientific knowledge and hence—directly or indirectly—lead to improvements in the treatment and prevention of disease. Each research project should continue systematically from previous research and feed into future research. Each project should contribute beneficially to a slowly evolving body of research. A study should not mislead; otherwise it could adversely affect clinical practice and future research. In 1994 I observed that research papers commonly contain methodological errors, report results selectively, and draw unjustified conclusions. Here I revisit the topic and suggest how journal editors can help.

There is considerable evidence that many published reports of randomized controlled trials (RCTs) are poor or even wrong, despite their clear importance.1 The results of several reviews of published trials are briefly summarized in Table 1. Poor methodology and reporting are widespread.

Table 1. Summary of Empirical Evidence of Prevalence of Methodological Problems in Published Reports of Randomized Trials*

Similar problems afflict other study types. A review of 308 phase 2 trials in cancer (295 of which were single-arm studies) found that 250 (81%) did not report an identifiable statistical design. Further, positive findings were reported in 48% of designed studies but 70% of studies with no reported design (P = .003).3 Of 40 molecular genetics articles published in leading general medical journals, 15 (38%) failed to meet at least 2 of 7 methodological standards. The authors wrote: "Without suitable attention to fundamental methodological standards, the expected benefits of molecular genetic testing may not be achieved."4

In recent years, systematic reviews have become common.
In these, all reliable evidence relating to a clinical question is sought, systematically appraised, and, if suitable, combined statistically in a meta-analysis.5 A key component is an assessment of the methodological quality of the individual (primary) studies.6 Reviewers often conclude that the available evidence is of poor scientific quality,7,8 sometimes leading to heated debate about interpretation.9

General reviews also find much to be concerned about. Serious statistical errors were found in 40% of 164 articles published in a psychiatry journal10 and in 19% of 145 articles published in an obstetrics and gynecology journal.11 I suspect that many basic errors have become less common, but statistics has become more complex, and there is evidence of frequent misapplication of newer advanced techniques.12 Also, when interpreting a study, readers need to know how it relates to existing knowledge. Many authors interpret their findings narrowly, failing either to identify previous studies or to place their findings in the context of those previous studies.13

Why Are There So Many Errors in Medical Articles?

Errors in published research articles indicate poor research that has survived the peer-review process. But the problems arise earlier, so a more important question is, Why are submitted articles poor? Much research is done without the benefit of anyone with adequate training in quantitative methods.14 Many investigators are not professional researchers; they are primarily clinicians. ". . .
[I]f they had any training in research methods it was usually a single course in statistics in the first or second year of their degree, before they really appreciated how important rigorous research methods are in order to do good science."15 Also, training in statistics often focuses on data analysis, an emphasis reinforced by several statistics textbooks, often by nonstatisticians, in which design issues are not addressed.16 Yet study design is a crucial element of education in research methods and appropriately forms a key aspect of training in critical appraisal generally and evidence-based medicine in particular.17,18

A contributory reason is inadequate review by research ethics committees (institutional review boards). Such review should detect studies with important flaws in design but clearly often fails to do so. Unfortunately, committees tend to use a narrow interpretation of ethics that downplays scientific quality, despite the clear ethical implications of allowing research that is not scientifically valid.19

A further issue is the copying of incorrect or inappropriate methods. Once incorrect procedures become common, it can be hard to stop them from spreading through the medical literature like a genetic mutation. Many editors have wrestled with the problem of authors objecting to a reviewer's criticism on the grounds that the same methods have appeared in previous articles, quite possibly by the same authors in the same journal. Examples of incorrect practices that persist despite published warnings include using the correlation coefficient to compare 2 methods of measurement,20 using significance tests to compare baseline characteristics in randomized trials,21 conducting multiple tests of data recorded at multiple times,22 and ignoring the clustering in the design and analysis of cluster randomized trials.23

Peer review can and should weed out serious methodological errors. However, expert methodological input is in short supply.
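The first of the incorrect practices listed above, judging agreement between two measurement methods by their correlation, can be demonstrated with a short simulation. This is a hypothetical sketch, not material from the article: the data, the constant bias of 5 units, and the variable names are invented purely for illustration. Two methods can be almost perfectly correlated yet systematically disagree, which is why examining the differences between paired measurements (the approach of reference 20 and later method-comparison work) is the appropriate analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: method B reads a constant 5 units higher than method A.
true_values = rng.uniform(50, 150, size=200)
method_a = true_values + rng.normal(0, 1, size=200)
method_b = true_values + 5 + rng.normal(0, 1, size=200)

# The correlation is near-perfect despite the systematic disagreement,
# because correlation measures linear association, not agreement.
r = np.corrcoef(method_a, method_b)[0, 1]

# Difference-based summary: mean difference (bias) and 95% limits of agreement.
diff = method_b - method_a
bias = diff.mean()
loa_low = bias - 1.96 * diff.std(ddof=1)
loa_high = bias + 1.96 * diff.std(ddof=1)

print(f"correlation r = {r:.3f}")   # near 1: looks 'excellent'
print(f"bias = {bias:.2f} units")   # near 5: B systematically overreads
print(f"95% limits of agreement: {loa_low:.2f} to {loa_high:.2f}")
```

A reviewer relying on the correlation alone would conclude the methods are interchangeable; the mean difference shows that one overreads by about 5 units, exactly the kind of error the cited warnings address.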
Only a third of high-impact journals reported statistical review of all published manuscripts.24 The vast majority of research is published in low-impact journals, where peer review is undoubtedly less thorough.

Postpublication Peer Review

Many readers seem to assume that articles published in peer-reviewed journals are scientifically sound, despite much evidence to the contrary. It is important, therefore, that misleading work be identified after publication. As Gehlbach25 noted, "[t]he ultimate interpretation and decision about the value of an article rests with the reader." Recent draft recommendations from the World Association of Medical Editors say that "[e]ditors should promote self-correction in science and participate in efforts to improve the practice of scientific investigation by . . . publishing corrections, retractions, and letters critical of articles published in their own journal."26

Although journals do publish correspondence, there are weaknesses in the way they do so. Most obviously, editors select which letters to publish. Editors should give special attention to letters making criticisms of methodology. They should do one of the following: satisfy themselves (perhaps by having the letter peer reviewed) that the criticisms are unfounded or unimportant, agree to publish the letter and invite the authors to respond, or invite a response from the authors and then decide whether to publish. Letters should not be rejected because of previously published correspondence (making different points) or lack of space.

Time limitation on correspondence denies readers the opportunity to draw attention to methodological deficiencies. Table 2 shows the current rules of 6 general medical journals. In effect, there is a statute of limitations by which authors of articles in these journals are immune to disclosure of methodological weaknesses once some arbitrary (short) period has elapsed, which cannot be right.

Table 2.
Time Limit on Submitting Letters Commenting on Published Articles

None of these journals suggests that there are exceptions, but from personal experience, at least 3 of them have occasionally published letters received beyond the stated time limit. The BMJ recently published a letter pointing out errors in an article published 6 years earlier. In it, Bland commented: "Potentially incorrect conclusions, based on faulty analysis, should not be allowed to remain in the literature to be cited uncritically by others."27 A time limit discourages potential postpublication peer review; potential correspondents will surely be deterred by the unambiguous cutoff. Journals with such a policy should reconsider.

A few journals (eg, BMJ and the CMAJ) have rapid publication of correspondence on their Web pages. All (or most) letters are published, and there is no apparent time limit. Nor is there the same limit on length as in print journals (Table 2). Electronic letters are linked to the original publication and are relatively easily accessed. It is remarkable and disappointing that as yet so few journals have such a capability. Restricting the facility to current subscribers, as currently done by Neurology and Pediatrics, is inadequate. A weakness yet to be resolved is the absence of pressure on authors to respond to criticisms.28 For such journals there is uncertainty about which version is definitive. Although the BMJ considers bmj.com to be the definitive version,29 only the letters that appear in the print journal are indexed on MEDLINE.

What Journal Editors Can Do

Authors and editors should have the same goals: the advancement of scientific understanding and improvement in the treatment and prevention of disease. Poor research is the fault of authors, not journals.
Poor research methods, unnecessary research, redundant or duplicate publication, thinly sliced study results, selective reporting, and scientific fraud, as well as a general tendency to inflate the importance of the results, should all be resisted vigorously. All could be less likely if research were not a career necessity for physicians. Rather than abandon peer review, as some have suggested, journals should work to strengthen it. In particular, methodological review should be implemented much more widely. It will never be possible to eliminate misleading studies, but our imperfect peer-review system is a safeguard without which the quality of published research would be lower.

Journals can also help improve the literature by requiring the full and transparent reporting of research. Guidelines have been developed for RCTs,30 systematic reviews and meta-analyses of RCTs31 and observational studies,32 and studies of diagnostic tests,33 and other initiatives are under way. Editors should continue to be involved in the development of reporting recommendations and explicitly require authors to follow them. Journals can enable and encourage the publication of research protocols.34-36 They can use their Web pages to publish extended versions of articles. They should also enable and encourage publication of the raw data used in medical research articles (eg, Clinical Chemistry and Neurology). If journals are willing to publish data, they should explicitly suggest this possibility to authors.

References

1. Altman DG. The scandal of poor medical research. BMJ. 1994;308:283-284.
2. Altman DG, Schulz KF, Moher D, et al, for the CONSORT Group. The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med. 2001;134:663-694.
3. Mariani L, Marubini E. Content and quality of currently published phase II cancer trials. J Clin Oncol. 2000;18:429-436.
4. Bogardus Jr ST, Concato J, Feinstein AR.
Clinical epidemiological quality in molecular genetic research: the need for methodological standards. JAMA. 1999;281:1919-1926.
5. Egger M, Davey Smith G, Altman DG. Systematic Reviews in Health Care: Meta-analysis in Context. 2nd ed. London, England: BMJ Books; 2001.
6. Juni P, Altman DG, Egger M. Assessing the quality of controlled clinical trials. BMJ. 2001;323:42-46.
7. Hotopf M, Lewis G, Normand C. Putting trials on trial—the costs and consequences of small trials in depression: a systematic review of methodology. J Epidemiol Community Health. 1997;51:354-358.
8. Lawlor DA, Hopker SW. The effectiveness of exercise as an intervention in the management of depression: systematic review and meta-regression analysis of randomised controlled trials. BMJ. 2001;322:763-767.
9. Olsen O, Gotzsche PC. Cochrane review on screening for breast cancer with mammography. Lancet. 2001;358:1340-1342.
10. McGuigan SM. The use of statistics in the British Journal of Psychiatry. Br J Psychiatry. 1995;167:683-688.
11. Welch II GE, Gabbe SG. Review of statistics usage in the American Journal of Obstetrics and Gynecology. Am J Obstet Gynecol. 1996;175:1138-1141.
12. Schwarzer G, Vach W, Schumacher M. On the misuses of artificial neural networks for prognostic and diagnostic classification in oncology. Stat Med. 2000;19:541-561.
13. Clarke M, Alderson P, Chalmers I. Discussion sections in reports of controlled trials published in general medical journals. JAMA. 2002;287:2799-2801.
14. Altman DG, Goodman SN, Schroter S. How statistical expertise is used in medical research. JAMA. 2002;287:2817-2820.
15. Chanter DO. Maintaining the integrity of the scientific record: new policy is unlikely to give investigators more control over studies [letter]. BMJ. 2002;324:169.
16. Bland JM, Altman DG.
Caveat doctor: a grim tale of medical statistics textbooks. BMJ. 1987;295:979.
17. Sackett DL, Straus S, Richardson WS, Rosenberg W, Haynes RB. Evidence-Based Medicine: How to Practice and Teach EBM. 2nd ed. Edinburgh, Scotland: Churchill Livingstone; 2000.
18. Guyatt G, Rennie D. Users' Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice. Chicago, Ill: AMA Press; 2002.
19. Emanuel EJ, Wendler D, Grady C. What makes clinical research ethical? JAMA. 2000;283:2701-2711.
20. Westgard JO, Hunt MR. Use and interpretation of common statistical tests in method comparison studies. Clin Chem. 1973;19:49-57.
21. Rothman KJ. Epidemiologic methods in clinical trials. Cancer. 1977;39(4 suppl):1771-1775.
22. Oldham PD. A note on the analysis of repeated measurements of the same subjects. J Chronic Dis. 1962;15:969-977.
23. Donner A, Brown KS, Brasher P. A methodological review of non-therapeutic intervention trials employing cluster randomization, 1979-1989. Int J Epidemiol. 1990;19:795-800.
24. Goodman SN, Altman DG, George SL. Statistical reviewing policies of medical journals: caveat lector? J Gen Intern Med. 1998;13:753-756.
25. Gehlbach SH. Interpreting the Medical Literature: A Clinician's Guide. 3rd ed. New York, NY: McGraw-Hill; 1993.
26. Report of the World Association of Medical Editors (WAME): an agenda for the future. Available at: http://www.wame.org/bellagioreport_1.htm. Accessed January 20, 2002.
27. Bland M. Fatigue and psychological distress: statistics are improbable. BMJ. 2000;320:515-516.
28. Rennie D. Freedom and responsibility in medical publication: setting the balance right. JAMA. 1998;280:300-302.
29. Smith R. The BMJ: moving on. BMJ. 2002;324:5-6.
30. Moher D, Schulz KF, Altman D, for the CONSORT Group.
The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. JAMA. 2001;285:1987-1991.
31. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF, for the QUOROM Group. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Lancet. 1999;354:1896-1900.
32. Stroup DF, Berlin JA, Morton SC, et al, for the Meta-analysis of Observational Studies in Epidemiology (MOOSE) Group. Meta-analysis of observational studies in epidemiology: a proposal for reporting. JAMA. 2000;283:2008-2012.
33. The STARD Group. The STARD initiative: towards complete and accurate reporting of studies on diagnostic accuracy. Available at: http://www.consort-statement.org/stardstatement.htm. Accessibility verified May 1, 2002.
34. Horton R. Pardonable revisions and protocol reviews [commentary]. Lancet. 1997;349:6.
35. Chalmers I, Altman DG. How can medical journals help prevent poor medical research? Some opportunities presented by electronic publishing. Lancet. 1999;353:490-493.
36. Godlee F. Publishing study protocols: making them visible will encourage registration, reporting and recruitment [editorial]. BMC News Views. 2001;2:4.

JAMA, Volume 287 (21), Jun 5, 2002

Publisher: American Medical Association
Copyright © 2002 American Medical Association. All Rights Reserved.
ISSN: 0098-7484
eISSN: 1538-3598
DOI: 10.1001/jama.287.21.2765

