Abstract

There has been a lack of research regarding faculty training in the grading of student reflective journals (RJs). Whether or how one should evaluate RJs remains contentious. This quasi-experimental study assessed whether providing faculty in-service training on scoring RJs using a rubric would result in statistically significant inter-rater reliability. Prior to the study, faculty raters received training on reflective practice and on scoring RJs with a rubric based on five levels of reflection. Percent agreement between rater pairs, with 80% set as the inter-rater reliability benchmark, was used as the measure of reliability. Faculty raters scored anonymous BSW and MSW RJs assigned in cultural diversity and oppression courses. Expected learning outcomes included critical and reflective thinking; social justice; application and synthesis of classroom learning to social work practice; ethical awareness; and self-awareness. Fifty percent of the RJs, collected twice over one term, were selected randomly. One faculty pair was selected by chance and assigned under blinded conditions to score either BSW or MSW RJs. Inter-rater reliability of BSW RJ scores was 86% for the first set and 98% for the second set. For the MSW RJs, agreement was 85.5% for the first set and 83.2% for the second. These findings were all statistically significant and indicated that, with prior training on the purpose of RJs and in using a rubric, faculty may be better able to evaluate RJs fairly.
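As an illustration of the agreement measure described above, the sketch below computes percent agreement between two raters and checks it against the 80% benchmark. The scores, the exact-match definition of agreement, and the five-level rubric values shown here are assumptions for illustration only, not the study's data or procedure.

```python
# Minimal sketch of a percent-agreement calculation between two raters.
# The rubric scores (levels 1-5) and journal counts below are hypothetical,
# and "agreement" is assumed to mean an exact match on the rubric level.

def percent_agreement(rater_a, rater_b):
    """Share (%) of journals on which the two raters assigned the same rubric level."""
    if len(rater_a) != len(rater_b):
        raise ValueError("Both raters must score the same set of journals")
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return 100.0 * matches / len(rater_a)

# Hypothetical rubric scores for ten reflective journals
rater_a = [3, 4, 2, 5, 3, 4, 4, 2, 3, 5]
rater_b = [3, 4, 2, 5, 3, 4, 3, 2, 3, 5]

agreement = percent_agreement(rater_a, rater_b)
print(f"Percent agreement: {agreement:.1f}%")      # 90.0%
print("Meets 80% benchmark:", agreement >= 80.0)   # True
```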
