Assessing items and rater agreement through prevalence and bias for dichotomous and polytomous data

Frank Edward Williams, Fordham University

Abstract

The effects of prevalence and bias on kappa have been noted, but only for dichotomous data (Byrt et al., 1993). In this research study, prevalence and bias were analyzed for dichotomous and polytomous data using unweighted and weighted forms of kappa. The existing indices for calculating prevalence and bias were modified to accommodate items of any scale size, and an additional index was created to capture the simultaneous effect of prevalence and bias. Using a simulated data set and three archived data sets, unweighted, linearly weighted, and quadratically weighted kappa values were calculated, along with adjusted kappa values obtained by removing the effects of prevalence, bias, and combined prevalence/bias through modification of the joint distribution of agreement between raters. Spearman's rho correlations revealed a monotonic relationship between the new indices and the differences between the adjusted and observed kappa values. This relationship was analyzed further to determine whether the new indices could classify the amounts of prevalence, bias, and prevalence/bias as negligible or consequential. Receiver Operating Characteristic (ROC) analyses yielded threshold values for quantifying the three effects. This research showed that items of various scale sizes can be investigated for prevalence, bias, and prevalence/bias, and that similar procedures can be used to assess the quality of items and raters.
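For readers unfamiliar with the dichotomous starting point the abstract builds on, the sketch below computes Cohen's kappa together with the prevalence index, bias index, and prevalence- and bias-adjusted kappa (PABAK) of Byrt et al. (1993) for a 2x2 agreement table. This is a minimal illustration of the classical indices only; the function name and example counts are illustrative, and the dissertation's extended indices for polytomous data are not reproduced here.

```python
import numpy as np

def dichotomous_agreement_indices(table):
    """Cohen's kappa plus the Byrt, Bishop, and Carlin (1993)
    prevalence and bias indices for a 2x2 agreement table.

    Table layout (two raters, categories coded positive/negative):
                      rater 2 pos   rater 2 neg
        rater 1 pos        a             b
        rater 1 neg        c             d
    """
    (a, b), (c, d) = np.asarray(table, dtype=float)
    n = a + b + c + d

    po = (a + d) / n                                      # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement
    kappa = (po - pe) / (1 - pe)

    prevalence_index = (a - d) / n   # large magnitude -> unbalanced prevalence
    bias_index = (b - c) / n         # large magnitude -> raters' marginals differ
    pabak = 2 * po - 1               # prevalence- and bias-adjusted kappa

    return {"kappa": kappa, "PI": prevalence_index,
            "BI": bias_index, "PABAK": pabak}


# Hypothetical example: 100 cases rated by two raters
print(dichotomous_agreement_indices([[40, 9], [6, 45]]))
```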

Subject Area

Educational tests &amp; measurements; Quantitative psychology

Recommended Citation

Williams, Frank Edward, "Assessing items and rater agreement through prevalence and bias for dichotomous and polytomous data" (2013). ETD Collection for Fordham University. AAI3598862.
https://research.library.fordham.edu/dissertations/AAI3598862
