This study explored the feasibility of automated scoring for a Korean-language writing test. To this end, a scoring model was constructed and score prediction was performed using machine learning on writing responses and their scoring data from the pseudo-test of the Sejong Korean language Assessment (SKA). Random Forest, a representative supervised learning algorithm, was used, and the performance of the scoring model was validated in terms of 'Accuracy,' 'Precision,' 'Recall,' 'F1,' and 'Kappa' values. The model performed well in both scoring domains, 'language use' and 'content,' even though the data set available for machine learning was very small and the scoring rubric had not been customized for the SKA writing test items. These results suggest strong potential for applying automated scoring to the SKA writing test. The correlation between the scores predicted by the automated scoring model and those given by a human rater was lower than the correlation between the scores of two human raters; however, the difference was only about .10, a gap that might be closed by training on a larger data set and customizing the automated scoring features.
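The validation metrics named in the abstract can be sketched in plain Python. The snippet below computes two of them, accuracy and Cohen's kappa (agreement corrected for chance), for a pair of human and model score vectors; the score values are illustrative only, not data from the SKA pseudo-test:

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Proportion of items where the model score matches the human score."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def cohen_kappa(y_true, y_pred):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(y_true)
    p_observed = accuracy(y_true, y_pred)
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    # Chance agreement: product of marginal score frequencies, summed over labels.
    p_chance = sum(true_counts[c] * pred_counts[c]
                   for c in set(true_counts) | set(pred_counts)) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical rubric scores (1-3) from a human rater and the scoring model.
human = [2, 3, 3, 1, 2, 3]
model = [2, 3, 2, 1, 2, 3]
print(f"accuracy = {accuracy(human, model):.3f}")
print(f"kappa    = {cohen_kappa(human, model):.3f}")
```

In practice the study would obtain `model` from a trained Random Forest classifier (e.g. scikit-learn's `RandomForestClassifier`) and would also report precision, recall, and F1 per score level, but the agreement logic is the same.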
1. Introduction
2. Review of Prior Research
3. Research Methods
4. Results
5. Summary and Discussion
References