Cohen's kappa coefficient is a commonly used index of interrater agreement for nominal or ordinal data; it adjusts the observed agreement for the agreement expected by chance. The weighted kappa statistic is used as an agreement index for ordinal data, with weights quantifying the degree of discrepancy between pairs of categories. The choice of weights affects the value of kappa; the common choices are the Cicchetti-Allison and Fleiss-Cohen weights. In this article, we discuss the use of ridit-type and exponential scores to compute kappa statistics in general.
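As a point of reference for the weighting schemes mentioned above, the following sketch computes weighted kappa from a square contingency table using the standard Cicchetti-Allison (linear) and Fleiss-Cohen (quadratic) agreement weights; the function name and option labels are illustrative, not from the article.

```python
import numpy as np

def weighted_kappa(table, weights="ca"):
    """Weighted kappa for a k x k contingency table of two raters.

    weights="ca": Cicchetti-Allison (linear) agreement weights,
        w_ij = 1 - |i - j| / (k - 1)
    weights="fc": Fleiss-Cohen (quadratic) agreement weights,
        w_ij = 1 - (i - j)**2 / (k - 1)**2
    """
    table = np.asarray(table, dtype=float)
    k = table.shape[0]
    p = table / table.sum()                      # joint proportions
    i, j = np.indices((k, k))
    if weights == "ca":
        w = 1.0 - np.abs(i - j) / (k - 1)        # linear weights
    elif weights == "fc":
        w = 1.0 - (i - j) ** 2 / (k - 1) ** 2    # quadratic weights
    else:
        raise ValueError("weights must be 'ca' or 'fc'")
    po = (w * p).sum()                           # observed weighted agreement
    # chance-expected weighted agreement from the row/column marginals
    pe = (w * np.outer(p.sum(axis=1), p.sum(axis=0))).sum()
    return (po - pe) / (1.0 - pe)
```

For a 2x2 table both schemes reduce to the same weights (the identity matrix), so weighted kappa coincides with the unweighted Cohen's kappa; the two schemes diverge only for three or more ordered categories, where Fleiss-Cohen penalizes distant disagreements more heavily.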