TURKISH JOURNAL OF MEDICAL SCIENCES, vol.41, no.5, pp.939-944, 2011 (SCI-Expanded)
Aim: Agreement between two or more independent raters evaluating the same items on the same scale can be measured by the kappa coefficient. In recent years, modeling agreement among raters has been preferred over summarizing it with a single index. In this study, the disadvantages of kappa are reviewed, agreement models are introduced, and these models are applied to a real data set.
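As background (a standard definition, not quoted from the paper itself): for two raters, Cohen's kappa corrects the observed proportion of agreement for the agreement expected by chance,

\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\]

where \(p_o\) is the observed proportion of agreement and \(p_e\) is the proportion of agreement expected if the raters' classifications were independent. A value of \(\kappa = 1\) indicates perfect agreement, while \(\kappa = 0\) indicates agreement no better than chance.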