Abstract
There has been little research on faculty training in the grading of student reflective journals (RJs), and whether or how RJs should be evaluated remains contentious. This quasi-experimental study assessed whether providing faculty with in-service training on scoring RJs using a rubric would result in statistically significant inter-rater reliability. Prior to the study, faculty raters received training on reflective practice and on scoring RJs with a rubric based on five levels of reflection. Percent agreement between rater pairs was used, with 80% set as the inter-rater reliability benchmark. Faculty raters scored anonymous BSW and MSW RJs assigned in cultural diversity and oppression courses. Expected learning outcomes included critical and reflective thinking; social justice; application and synthesis of classroom learning to social work practice; ethical awareness; and self-awareness. Fifty percent of the RJs, which were collected twice over one term, were selected randomly. One faculty pair was selected by chance and assigned under blinded conditions to score either BSW or MSW RJs. Inter-rater reliability of the BSW RJ scores ranged from 86% for the first set to 98% for the second set; for the MSW RJs, scores were 85.5% for the first set and 83.2% for the second. These findings were all statistically significant and indicate that, with prior training on the purpose of RJs and in using a rubric, faculty may be better able to evaluate RJs fairly.
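To make the reliability measure concrete, the following is a minimal illustrative sketch (not taken from the article) of how percent agreement between a rater pair could be computed, assuming each rater assigns one of the rubric's five reflection levels to every journal; the function name and the sample scores are hypothetical.

# Illustrative sketch: percent agreement between two raters on a five-level rubric.
def percent_agreement(rater_a, rater_b):
    # Share (as a percentage) of journals on which both raters gave the identical level.
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must score the same set of journals")
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return 100.0 * matches / len(rater_a)

# Hypothetical scores for ten journals (levels 1 through 5)
rater_1 = [3, 4, 2, 5, 3, 4, 1, 3, 4, 2]
rater_2 = [3, 4, 2, 5, 3, 4, 2, 3, 4, 2]
print(percent_agreement(rater_1, rater_2))  # 90.0, above the 80% benchmark

In this hypothetical example the pair agrees on nine of ten journals, so the percent agreement of 90% would exceed the 80% benchmark used in the study.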
Recommended Citation
Alschuler, Mari (2017). "Faculty Inter-Rater Reliability of a Reflective Journaling Rubric -- RESEARCH," Kentucky Journal of Excellence in College Teaching and Learning: Vol. 14, Article 1. Available at: https://encompass.eku.edu/kjectl/vol14/iss/1