SMU Data Science Review

Abstract

For English teachers and students who are dissatisfied with the one-size-fits-all approach of current Automated Essay Scoring (AES) systems, this research applies Natural Language Processing (NLP) techniques with a focus on configurability and interpretability. Unlike traditional AES models, which are designed to provide an overall score based on pre-trained criteria, this tool allows teachers to tailor feedback to specific focus areas. The tool implements a user interface that serves as a customizable rubric. Students’ essays are entered into the tool by either the student or the teacher via the application’s user interface. Based on the rubric settings, the tool evaluates the essay and provides instant feedback. In addition to rubric-based feedback, the tool implements a Multi-Armed Bandit recommender engine that suggests educational resources aligned with the rubric, reducing the time teachers spend grading essay drafts and re-teaching. The tool developed and deployed as part of this research reduces the burden on teachers and provides instant, customizable feedback to students. We estimate a minimum time savings to students and teachers of 117 hours per semester. The effectiveness of the feedback criteria for predicting whether an essay was proficient or needed improvement was measured using recall. The recall was 0.96 for the persuasive essay model and 0.86 for the source-dependent essay model.
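For context, recall measures the proportion of truly positive cases a model catches: recall = true positives / (true positives + false negatives). A recall of 0.96 therefore means the persuasive essay model identified 96% of the essays in its positive class; the abstract does not state which label (proficient or needs improvement) was treated as positive.

The abstract likewise does not specify which Multi-Armed Bandit variant the recommender engine uses. The sketch below illustrates an epsilon-greedy bandit, one common choice for this kind of resource recommendation; the resource names, reward signal, and method names are illustrative assumptions, not the published implementation.

import random

# Hypothetical resource catalog; real arms would map to rubric-aligned materials.
RESOURCES = ["thesis_statements", "citing_sources", "paragraph_structure"]

class EpsilonGreedyBandit:
    """Minimal epsilon-greedy multi-armed bandit (illustrative sketch only)."""

    def __init__(self, arms, epsilon=0.1):
        self.arms = list(arms)
        self.epsilon = epsilon                      # exploration rate
        self.counts = {a: 0 for a in self.arms}     # times each arm was recommended
        self.values = {a: 0.0 for a in self.arms}   # running mean reward per arm

    def recommend(self):
        # Explore a random resource with probability epsilon;
        # otherwise exploit the best-performing resource so far.
        if random.random() < self.epsilon:
            return random.choice(self.arms)
        return max(self.arms, key=lambda a: self.values[a])

    def update(self, arm, reward):
        # Incremental mean update after observing a reward
        # (e.g., 1.0 if the student's next draft improved on the rubric, else 0.0).
        self.counts[arm] += 1
        n = self.counts[arm]
        self.values[arm] += (reward - self.values[arm]) / n

# Usage: recommend a resource, observe whether the next draft improved, update.
bandit = EpsilonGreedyBandit(RESOURCES)
suggestion = bandit.recommend()
bandit.update(suggestion, reward=1.0)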

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial 4.0 License.
