The Need for Systematic Reviews in Oncology

Reviews are critical to the theory and practice of oncology. They not only summarize large bodies of information, but also inform medical and public health decision-making and guide policy and research priorities (1–7). Reviews enhance transparency and increase efficiency in the scientific process (8). The purpose of this Commentary is to encourage submissions of systematic reviews (with or without meta-analyses) to the Journal. In addition, I will describe the reasons why systematic reviews are superior to so-called “narrative” reviews, provide guidelines for prospective authors of systematic reviews, and provide guidelines for evaluating the quality of any published review, systematic or not.

Those of us who rely almost exclusively on the peer-reviewed scientific literature recognize that some studies are more reliable and valid than others. This is due, at least in part, to differences in study design: some studies can test hypotheses, whereas others cannot. The clearest example may be the difference between a case series and a randomized clinical trial; the former introduces a hypothesis, and the latter actually tests one. Another example is the difference between a narrative review and a systematic review. Both can address the same question in etiology, prevention, or treatment, but the former is typically less reliable and less valid than the latter. To put it another way, unsystematic narrative reviews are more prone to bias (6,9–12). Recognition of the problems with unsystematic reviews is often credited to Mulrow, who, 30 years ago, carefully examined 50 reviews in the medical literature published in 1985–1986. She concluded that “current medical reviews do not routinely use scientific methods to identify, assess, and synthesize information” (13).
In the ensuing years, guidelines for systematizing review articles were developed (1,4,14,15), along with criteria for judging the quality of published reviews (1,11,16–18). Ten, even 20 years later, systematic evaluations of the quality of published reviews revealed that often a majority of reviews of some topics failed to use systematic approaches or, even more troubling, claimed to be systematic yet still had serious methodological flaws. Examples included reviews in general oncology (19), epidemiology (20), basic (animal) research (21), adverse events of pharmaceuticals (22), and pediatric oncology (23). The situation across all of medicine was serious enough that the Institute of Medicine issued a report in 2008 (24) stating that “under the status quo, the quality of systematic reviews is variable and findings are often unreliable even when published in peer-reviewed scientific journals.” Soon after, the “Preferred Reporting Items for Systematic Reviews and Meta-Analyses” (PRISMA) guidelines were published, a 27-item checklist for reporting a systematic review with or without an accompanying meta-analysis (7). As the authors noted, “The overall aim of PRISMA is to help ensure the clarity and transparency of reporting systematic reviews” and not to be used as a quality assessment tool. Subsequently, an assessment tool for evaluating the quality of systematic reviews—indeed, any review—was published (25,26). That tool, called AMSTAR, has been found to be valid and reliable (27,28). One might assume that with all this emphasis on the need to systematize reviews, the oncologic community would “see the light” and adhere to both principle and practice, producing high-quality systematic reviews, minimizing bias, and enhancing transparency, thus providing the best information from which discussions about etiology, prevention, treatment—indeed, public policy—can emerge. 
A recent review of systematic reviews of interventions to improve quality of life in cancer survivors, however, revealed that of the 21 publications evaluated for quality using a reliable and valid tool (ie, AMSTAR), only seven (33%) were of high quality, 11 (52%) were of moderate quality, and three (14%) were of low quality (29). Similarly, an assessment of 55 systematic reviews and meta-analyses of risk factors for gastric cancer, using a revised version of AMSTAR, revealed that approximately half of the publications were of low quality (30). Also concerning is a recent systematic assessment of the quality of systematic reviews in radiation oncology (31): of 157 reviews published between 1991 and 2015, the average AMSTAR score was 3 (maximum score = 11), with no improvement by year of publication. These results reveal that many (not all) systematic reviews in these important disciplines are not as scientifically credible as they should be. It appears that the oncology community has not fully incorporated and applied the systematic review methodology embodied in the PRISMA guidelines and the AMSTAR evaluation tool, to name two of the most popular examples. While much progress has been made in the past 30 years since Mulrow’s 1987 classic paper, more progress is needed.

Systematic Reviews at JNCI

This introduction sets the stage for the next phase of development for systematic reviews at JNCI. The Journal welcomes, indeed encourages, submission of high-quality systematic reviews with or without meta-analysis on any and all topics relevant to oncology: causation, prevention, therapy, palliative care, and mechanism. A systematic review is defined as a review with a clearly formulated research question that uses explicit methods to identify, select, and critically appraise relevant research and to collect and analyze data from the studies included in the review (32).
As noted above, guidelines for reporting these critically important publications can be found in the published literature (7). For a brief description of the components of a systematic review in oncology, see Weed (14). The quality of published reviews in any journal (including JNCI) is as much the responsibility of peer reviewers as of the authors themselves. The Journal expects its peer reviewers to pay special attention to the methods used in systematic reviews submitted to the Journal. Simply stating that a review followed PRISMA (or any other set of) guidelines is insufficient; authors should demonstrate that the guidelines were actually followed. In addition, peer reviewers may choose to evaluate the quality of systematic reviews using tools such as AMSTAR or other published guidelines. Finally, peer reviewers may want to consider the following questions (33): whether the authors rely primarily upon method or primarily upon their subjective judgment; whether a method for performing a systematic review is described; whether the method described is one generally recognized in the scientific community and referenced there; whether the authors’ description of that method is accurate (ie, whether it reasonably conforms to the descriptions of that method in the published literature or deviates prominently from them); whether that method is appropriate for the scientific question at hand; and whether the method was used appropriately to interpret the results. It is our intent at the Journal to provide the best, most rigorous, scientific assessments of evidence regarding cancer epidemiology, prevention, treatment, outcomes, and biology.

Despite all the strengths of systematic reviews, there are some limitations, and these relate primarily to the quality of the studies reviewed. Simply put, the quality of the studies reviewed can affect the conclusions of a systematic review.
The systematic process remains the same, but, for example, a systematic review of case reports cannot support valid conclusions about causality, regardless of the quality of the review itself. Another limitation relates, once again, to the studies reviewed: heterogeneity across individual studies can affect the quality of a systematic review, for example, by prohibiting the quantitative summarization of results across studies. Publication bias is another concern. These are not limitations of the systematic review process itself but rather of the studies that form its input.

Narrative (unsystematic) reviews will still be considered by JNCI, especially if they state clearly that they lack a structured methodology and are primarily designed to introduce and discuss a hypothesis, or to present basic (indeed critically important) information on incidence, mortality, and survival or treatment options, much like a textbook chapter or an instructional course lecture. The same can be said for narrative reviews of interesting and potentially important mechanistic insights; unstructured narrative reviews may represent an effective way to introduce these ideas. In this age of information, with almost unlimited access to views on scientific and medical concerns, the scientific and medical communities can take some comfort in the fact that the information found in peer-reviewed journals like the Journal of the National Cancer Institute has not only been vetted, but also represents the best science that modern medicine has to offer. To that end, the Journal encourages authors to submit structured systematic reviews whenever possible.

Notes

Affiliation of author: DLW Consulting Services, LLC, Salt Lake City, UT. The author would like to acknowledge Dr. Graça Dores for her insightful comments on an earlier version.

References

1. Oxman AD, Guyatt GH. Guidelines for reading literature reviews. CMAJ. 1988;138(8):697–703.
2. Woolf SH. Review articles and disclosure of methods. Am J Prev Med. 1991;7(1):53–54.
3. Hutchison BG. Critical appraisal of review articles. Can Fam Physician. 1993;39:1097–1102.
4. Milne R, Chambers L. Assessing the scientific quality of review articles. J Epidemiol Commun Health. 1993;47(3):169–170.
5. Neely JG. Literature review articles as a research form. Otolaryngol Head Neck Surg. 1993;108(6):743–748.
6. Crowther MA, Cook DJ. Trials and tribulations of systematic reviews and meta-analyses. Hematology Am Soc Hematol Educ Program. 2007:493–497.
7. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. Ann Int Med. 2009;151(4):W65–W94.
8. Birnbaum LS, Thayer KA, Bucher JR, et al. Implementing systematic reviews at the National Toxicology Program: Status and next steps. Env Health Perspect. 2013;121(4):A108–A109.
9. Mulrow CD. Rationale for systematic reviews. BMJ. 1994;309(6954):597–599.
10. Petticrew M. Systematic reviews from astronomy to zoology: Myths and misconceptions. BMJ. 2001;322(7278):98–101.
11. Bhandari M, Devereaux PJ, Montori V, et al. Users’ guide to the surgical literature: How to use a systematic literature review and meta-analysis. J Can Chir. 2004;47(1):60–67.
12. Noordzij M, Hooft L, Dekker FW, et al. Systematic reviews and meta-analyses: When they are useful and when to be careful. Kidney Int. 2009;76(11):1130–1136.
13. Mulrow CD. The medical review article: State of the science. Ann Intern Med. 1987;106(3):485–488.
14. Weed DL. Methodological guidelines for review papers. J Natl Cancer Inst. 1997;89(1):6–7.
15. Lichtenstein AH, Yetley EA, Lau J. Application of systematic review methodology to the field of nutrition. J Nutrition. 2008;138(12):2297–2306.
16. Oxman AD. Checklists for review articles. BMJ. 1994;309(6955):648–651.
17. Montori VM, Swiontkowski MF, Cook DJ, et al. Methodologic issues in systematic reviews and meta-analyses. Clin Orthop Rel Res. 2003;413:43–54.
18. Mullen PD, Ramirez G. The promise and pitfalls of systematic reviews. Annu Rev Public Health. 2006;27:81–102.
19. Bramwell VHC, Williams CJ. Do authors of review articles use systematic methods to identify, assess, and synthesize information? Ann Oncol. 1997;8(12):1185–1195.
20. Breslow RA, Ross SA, Weed DL. Quality of reviews in epidemiology. Am J Pub Health. 1998;88(3):475–477.
21. Mignini LE, Khan KS. Methodological quality of systematic reviews of animal studies: A survey of reviews of basic research. BMC Med Res Methodol. 2006;6:10.
22. Golder S, Loke Y, McIntosh HM. Poor reporting and inadequate searches were apparent in systematic reviews of adverse effects. J Clin Epidemiol. 2008;61(5):440–448.
23. Lundh A, Knijnenburg S, Jorgensen AW, et al. Quality of systematic reviews in pediatric oncology: A systematic review. Cancer Treat Rev. 2009;35(8):645–652.
24. Institute of Medicine. Knowing What Works in Health Care: A Roadmap for the Nation. Report Brief. Washington, DC: National Academies; 2008.
25. Shea BJ, Grimshaw JM, Wells GA, et al. Development of AMSTAR: A measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10.
26. Shea BJ, Bouter LM, Peterson J, et al. External validation of a measurement tool to assess systematic reviews (AMSTAR). PLoS One. 2007;2(12):e1350.
27. Shea BJ, Hamel C, Wells GA, et al. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62:1013–1020.
28. Kung J, Chiappelli F, Cajulis OO, et al. From systematic reviews to clinical recommendations for evidence-based health care: Validation of revised assessment of multiple systematic reviews (R-AMSTAR) for grading of clinical relevance. Open Dent J. 2010;4:84–91.
29. Duncan M, Moschopoulou E, Herrington E, et al. Review of systematic reviews of non-pharmacological interventions to improve quality of life in cancer survivors. BMJ Open. 2017;7(11):e015860.
30. Li L, Ying XJ, Sun TT, et al. Overview of methodological quality of systematic reviews about gastric cancer risk and protective factors. Asian Pac J Cancer Prev. 2012;13(5):20169–20179.
31. Hasan H, Muhammed T, Yu J, et al. Assessing the methodological quality of systematic reviews in radiation oncology: A systematic review. Cancer Epidemiol. 2017;50(Pt A):141–149.
32. Volmink J, Siegfried N, Robertson K, et al. Research synthesis and dissemination as a bridge to knowledge management: The Cochrane Collaboration. Bull World Health Organ. 2004;82(10):778–783.
33. Weed DL. Causal inference in epidemiology: Potential outcomes, pluralism, and peer review. Int J Epidemiol. 2016;45(6):1838–1840.
© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com. This article is published and distributed under the terms of the Oxford University Press, Standard Journals Publication Model (https://academic.oup.com/journals/pages/about_us/legal/notices).

Journal: JNCI: Journal of the National Cancer Institute
Publisher: Oxford University Press
ISSN: 0027-8874; eISSN: 1460-2105
DOI: 10.1093/jnci/djy050


Published: March 29, 2018
