Interobserver Agreement in the Diagnosis of Stroke Type

Abstract

Interobserver agreement is essential to the reliability of clinical data from cooperative studies and provides the foundation for applying research results to clinical practice. In the Stroke Data Bank, a large cooperative study of stroke, we sought to establish the reliability of a key aspect of stroke diagnosis: the mechanism of stroke. Seventeen patients were evaluated by six neurologists. Interobserver agreement was measured when diagnosis was based on patient history and neurologic examination only, as well as when it was based on the results of a completed workup, including a computed tomographic scan. Initial clinical impressions, based solely on history and one neurologic examination, were fairly reliable in establishing the mechanism of stroke (ie, distinguishing among infarcts, subarachnoid hemorrhages, and parenchymatous hemorrhages). Classification into one of nine stroke subtypes was substantially reliable when diagnoses were based on a completed workup. Compared with previous findings for the same physicians and patients, the diagnosis of stroke type was generally more reliable than individual signs and symptoms. These results suggest that multicentered studies can rely on the independent diagnostic choices of several physicians when common definitions are employed and data from a completed workup are available. Furthermore, reliability may be less for individual measurements such as signs or symptoms than for more complex judgments such as diagnoses.

References

1. Kunitz SC, Gross CR, Heyman A, et al: The Pilot Stroke Data Bank: Definition, design, and data. Stroke 1984;15:740-746.
2. Koran LM: The reliability of clinical methods, data and judgments: I. N Engl J Med 1975;293:642-646.
3. Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario: Clinical disagreement: I. How often it occurs and why. Can Med Assoc J 1980;123:499-504.
4. Koran LM: The reliability of clinical methods, data and judgments: II. N Engl J Med 1975;293:695-701.
5. Garland LH: The problem of observer error. Bull NY Acad Med 1960;36:570-584.
6. Aoki N, Horibe H, Ohno Y, et al: Epidemiological evaluation of funduscopic findings in cerebrovascular diseases: III. Observer variability and reproducibility for funduscopic findings. Jpn Circ J 1977;41:11-17.
7. McCance C, Watt JA, Hall DJ: An evaluation of the reliability and validity of the plantar response in a psychogeriatric population. J Chronic Dis 1968;21:369-374.
8. Tomasello F, Mariani F, Fieschi C, et al: Assessment of inter-observer differences in the Italian multicenter study on reversible cerebral ischemia. Stroke 1982;13:32-35.
9. Shinar D, Gross CR, Mohr JP, et al: Interobserver variability in the assessment of neurologic history and examination in the Stroke Data Bank. Arch Neurol 1985;42:557-565.
10. Sisk C, Ziegler DK, Zileli T: Discrepancies in recorded results from duplicate neurological history and examination in patients studied for prognosis in cerebrovascular disease. Stroke 1970;1:14-18.
11. Kraaijeveld CL, van Gijn J, Schouten HJA, et al: Interobserver agreement for the diagnosis of transient ischemic attacks. Stroke 1984;15:723-725.
12. Calanchini PR, Swanson PD, Gotshall RA, et al: Cooperative study of hospital frequency and character of transient ischemic attacks: IV. The reliability of diagnosis. JAMA 1977;238:2029-2033.
13. Mohr JP, Nichols FT, Tatemichi TK: Classification and diagnosis of stroke. Int Angiol 1984;3:431-439.
14. Fleiss JL: Measuring nominal scale agreement among many raters. Psychol Bull 1971;76:378-382.
15. Fleiss JL, Nee JCM, Landis JR: Large sample variance of kappa in the case of different sets of raters. Psychol Bull 1979;86:974-977.
16. Landis JR, Koch GG: The measurement of observer agreement for categorical data. Biometrics 1977;33:159-174.
17. Armitage P: Statistical Methods in Medical Research. Boston, Blackwell Scientific Publications Inc, 1971.
18. Walter SD: Measuring the reliability of clinical data: The case for using three observers. Rev Epidemiol Sante Publique 1984;32:206-211.
19. Feinstein AR: Clinical Judgment. Baltimore, Williams & Wilkins, 1967.
20. Eddy DM, Clanton CH: The art of diagnosis. N Engl J Med 1982;306:1263-1268.
21. Payne JW: Information processing theory: Some concepts and methods applied to decision research, in Wallsten TS (ed): Cognitive Processes in Choice and Decision Behavior. Hillsdale, NJ, Lawrence Erlbaum Assoc Inc, 1980.
22. Schustack MW, Sternberg RJ: Evaluation of evidence in causal inference. J Exp Psychol Gen 1981;110:101-120.
23. Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario: Clinical disagreement: II. How to avoid it and how to learn from one's mistakes. Can Med Assoc J 1980;123:613-617.
24. Westlund KB, Kurland LT: Studies on multiple sclerosis in Winnipeg, Manitoba and New Orleans, Louisiana. Am J Hyg 1953;57:380-396.
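The article does not reproduce its statistical method in detail, but references 14 and 15 point to Fleiss' kappa, the standard chance-corrected agreement statistic when several raters classify subjects into nominal categories (here, stroke types). As a minimal sketch only, the following Python function computes Fleiss' kappa from a subjects-by-categories count table; the function name and the toy data are illustrative assumptions, not values from the study:

```python
def fleiss_kappa(table):
    """Fleiss' kappa for an N-subjects x k-categories count table.

    table[i][j] = number of raters who assigned subject i to category j.
    Assumes every subject is rated by the same number of raters n.
    """
    N = len(table)          # number of subjects
    n = sum(table[0])       # raters per subject
    k = len(table[0])       # number of categories

    # Mean per-subject agreement: P_i = (sum_j n_ij^2 - n) / (n(n-1))
    P_bar = sum((sum(c * c for c in row) - n) / (n * (n - 1))
                for row in table) / N

    # Chance agreement from the marginal category proportions
    p = [sum(row[j] for row in table) / (N * n) for j in range(k)]
    P_e = sum(pj * pj for pj in p)

    return (P_bar - P_e) / (1 - P_e)


# Hypothetical toy table: 3 subjects, 2 raters, 3 stroke categories
# (infarct, subarachnoid hemorrhage, parenchymatous hemorrhage).
example = [[2, 0, 0],   # both raters chose infarct
           [0, 2, 0],   # both chose subarachnoid hemorrhage
           [1, 1, 0]]   # raters split
print(fleiss_kappa(example))  # -> 0.333... (fair agreement on Landis-Koch scale)
```

A kappa of 0 indicates agreement no better than chance and 1 indicates perfect agreement; the qualitative labels used in the abstract ("fairly reliable", "substantially reliable") correspond to the Landis and Koch benchmarks cited in reference 16.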


Publisher: American Medical Association
Copyright: © 1986 American Medical Association. All Rights Reserved.
ISSN: 0003-9942
eISSN: 1538-3687
DOI: 10.1001/archneur.1986.00520090031012


Journal: Archives of Neurology, American Medical Association

Published: Sep 1, 1986
