2626. Analysis of characteristics of human score and improvement of feedback of evaluation results of automated essay scoring system
Invited abstract in session TB-31: Analytics and the link with stochastic dynamics II, stream Analytics.
Tuesday, 10:30-12:00, Room: 046 (building: 208)
Authors (first author is the speaker)
1. Megumi Yamamoto
   School of Contemporary International Studies, Nagoya University of Foreign Studies
Abstract
We propose and construct an automated essay scoring system based on a rubric to reduce the workload on teachers. The system scores thirteen rubric items, including written content, document style, and writing skills, and outputs an overall evaluation. To improve the accuracy of the overall evaluation, we collected and analyzed human score data. Specifically, we asked six faculty members to score 25 essays manually using the rubric of our automated scoring system in order to identify the items that affect the overall evaluation. The results revealed that low-rated essays received similar evaluations across faculty members, and correlations between the automated and human scores were obtained for the document style, skill, and content items of the rubric. For high-rated essays, however, we could not identify evaluation items common to the teachers, and we found no effective method for automatically calculating the overall evaluation. We therefore separate low-rated essays from the others using the accumulated low-rated essay data and present a comprehensive automated evaluation for low-rated essays only. For the others, including high-rated essays, we suggest that a feedback system highlighting the essential evaluation contents, rather than an overall score, would be more beneficial.
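The two-stage strategy in the abstract can be sketched as follows. This is a minimal illustration only: the threshold, the three rubric item names, and the use of a mean rubric score as the low-rated criterion are all assumptions for the sketch, not the authors' actual thirteen-item implementation.

```python
# Hypothetical sketch of the two-stage feedback strategy: essays classified
# as low-rated receive an automated overall evaluation, while all other
# essays receive item-level feedback instead of a single overall score.
# Threshold, scale, and rubric item names are illustrative assumptions.

LOW_RATED_THRESHOLD = 2.0  # assumed cut-off on the mean rubric score (1-5 scale)

def evaluate(rubric_scores: dict[str, float]) -> dict:
    """Return an overall score for low-rated essays, item feedback otherwise."""
    mean_score = sum(rubric_scores.values()) / len(rubric_scores)
    if mean_score < LOW_RATED_THRESHOLD:
        # Low-rated essays: human raters agreed on these, so an
        # automated overall evaluation is presented.
        return {"type": "overall", "score": round(mean_score, 2)}
    # Other essays (including high-rated ones): highlight the weakest
    # rubric items rather than reporting an overall score.
    weakest = sorted(rubric_scores, key=rubric_scores.get)[:3]
    return {"type": "item_feedback", "focus_items": weakest}

print(evaluate({"content": 1.5, "document_style": 2.0, "writing_skill": 1.0}))
# → {'type': 'overall', 'score': 1.5}
```

An essay scoring above the threshold would instead get `{"type": "item_feedback", ...}` listing its weakest items, matching the suggestion that high-rated essays benefit more from targeted feedback than from an overall score.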
Keywords
- Analytics and Data Science
Status: accepted