Evaluating the Inter-Rater Reliability of a Court Based Assessment Quality Assurance Tool

Taylor R Karbowski, Fordham University

Abstract

Following a study which found that only about one third of the recommendations in court-based assessments completed by clinicians were included by juvenile probation officers (JPOs) in their initial case plans (Morin et al., 2015), one institution retrained its clinical staff on a forensic formulation model of case report writing. The current study provides an initial evaluation of an extended quality assurance tool designed to comprehensively evaluate key indicators of report content and structure. Establishing inter-rater reliability is a necessary first step before the tool can be used to systematically evaluate a larger number of reports. Of the 27 items evaluated for inter-rater reliability, eight had intraclass correlation coefficient (ICC) values in the excellent reliability range, five had good reliability, and six had moderate reliability. Overall, ICC values were notably lower than those found in Morin et al. (2015), which may indicate a need for a larger sample size, further development of the current quality assurance tool, or additional rater training on the tool.
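The reliability estimates described above are intraclass correlation coefficients computed across raters. As a minimal illustrative sketch only (not the dissertation's actual analysis), the lines below show how an ICC for a single quality assurance item might be computed with the Python pingouin package; the report IDs, rater labels, and scores are hypothetical.

import pandas as pd
import pingouin as pg

# Hypothetical long-format ratings: each report scored on one QA item by two raters
ratings = pd.DataFrame({
    "report": [1, 1, 2, 2, 3, 3, 4, 4],
    "rater":  ["A", "B", "A", "B", "A", "B", "A", "B"],
    "score":  [3, 3, 2, 1, 4, 4, 2, 2],
})

# ICC table (ICC1, ICC2, ICC3 and their average-rater forms) with 95% confidence intervals
icc = pg.intraclass_corr(data=ratings, targets="report",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])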

Subject Area

Psychology|Clinical psychology|Criminology|Behavioral psychology

Recommended Citation

Karbowski, Taylor R., "Evaluating the Inter-Rater Reliability of a Court Based Assessment Quality Assurance Tool" (2019). ETD Collection for Fordham University. AAI22618446.
https://research.library.fordham.edu/dissertations/AAI22618446
