Introduction

In biomedical studies, researchers are often interested in assessing agreement between measurements taken on the same subjects by different methods or by different raters. There is an extensive literature on assessing agreement and making the appropriate inferences. For continuous data, various measures have been proposed, including scaled measures, such as the concordance correlation coefficient (CCC) (Lin et al.), and unscaled ones, such as the total deviation index (Lin et al.). For categorical data, the kappa coefficient and its extensions (Cohen) have been widely used.

All of the aforementioned methods quantify the agreement of interest by a single global summary measure. While simple, such measures have been criticized for not fully capturing the available agreement information (Tanner and Young; Darroch and McCloud). For example, with two categorical scales, Agresti showed that when a simple quasi-symmetry model holds for the contingency table, the kappa coefficient contains all relevant information about the structure of the agreement. When the quasi-symmetry model fails, however, different agreement patterns can produce the same kappa value, so the kappa coefficient alone cannot distinguish among them. Given this limitation, Tanner and Young and Agresti, among others, proposed log-linear models that describe the structure of agreement directly rather than condensing it into a single index.
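The two summary measures named above have standard closed forms, so a minimal sketch may help fix ideas. It is not taken from the paper; the function names are illustrative, and the example data are made up. It computes Cohen's kappa from a two-rater contingency table and Lin's CCC from paired continuous measurements.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a square two-rater contingency table.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement (diagonal mass) and p_e the agreement
    expected by chance from the marginal distributions.
    """
    p = np.asarray(table, dtype=float)
    p /= p.sum()
    p_o = np.trace(p)                    # observed agreement
    p_e = p.sum(axis=1) @ p.sum(axis=0)  # chance agreement from margins
    return (p_o - p_e) / (1.0 - p_e)

def ccc(x, y):
    """Lin's concordance correlation coefficient for paired data:

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    s_xy = np.cov(x, y, bias=True)[0, 1]  # population covariance
    return 2 * s_xy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical example: two raters classifying 75 subjects into
# 3 categories. Note that quite different off-diagonal (disagreement)
# patterns can yield the same kappa, which is the limitation of a
# single global summary discussed above.
table = [[20, 5, 1],
         [4, 15, 6],
         [2, 3, 19]]
print(f"kappa = {cohens_kappa(table):.3f}")

# Paired continuous measurements: y is x plus small measurement error,
# so the CCC should be close to (but below) 1.
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = x + rng.normal(scale=0.3, size=100)
print(f"CCC = {ccc(x, y):.3f}")
```

Both functions are chance- or scale-corrected in the sense the introduction describes: kappa discounts the agreement expected from the marginals alone, and the CCC penalizes both imprecision (low correlation) and location/scale shift between the two measurement methods.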
Biometrics – Wiley
Published: Jan 1, 2018