
Building an Automated Scoring System for EFL Learners’ Paraphrases via a Customized GPT

영어어문교육 (English Language Teaching), Vol. 31, No. 1

This study investigates the potential of a customized GPT as an automated paraphrase scoring (APS) system for EFL learners' paraphrases, with implications for reducing teachers' grading workloads and achieving unbiased rating in classroom settings. A total of 1,000 paraphrases written by 100 Korean EFL learners were evaluated with analytic and holistic scoring rubrics. The analytic rubric included syntactic change, word change, semantic equivalency, and grammatical accuracy, all of which are crucial to paraphrasing. A mixed-methods approach was employed to evaluate the APS's reliability and effectiveness. Quantitative analysis examined the reliability and consistency of the APS using Pearson and Intraclass Correlation Coefficients. Inter-rater reliability between the APS and human raters was analyzed across various comparisons and demonstrated strong alignment. Additionally, the consistency of the APS across the two rubrics indicated moderate reliability overall. Qualitative analysis further investigated the nature of the scores generated by the APS and their pedagogical implications. These findings suggest that an APS built on a custom GPT has potential as an automated tool for writing assessment, providing students with reliable feedback while replacing human raters. Integrating automated evaluation via customized GPTs into classrooms can address some of the challenges of manual scoring in educational contexts.
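The two reliability statistics named in the abstract can be computed directly from paired score data. The sketch below is purely illustrative (the scores and function names are hypothetical, not from the study): it computes the Pearson correlation between APS and human scores, and a two-way random-effects single-rater ICC, i.e. ICC(2,1), from the standard ANOVA decomposition.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `scores` is a list of rows, one per subject (paraphrase),
    each row holding one score per rater, e.g. [human, APS].
    """
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]

    # ANOVA sums of squares: subjects (rows), raters (columns), error.
    ss_total = sum((v - grand) ** 2 for row in scores for v in row)
    ssr = k * sum((m - grand) ** 2 for m in row_means)
    ssc = n * sum((m - grand) ** 2 for m in col_means)
    sse = ss_total - ssr - ssc

    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: APS scores sit exactly one point above human scores.
human = [1, 2, 3, 4, 5]
aps = [2, 3, 4, 5, 6]
print(pearson(human, aps))                      # perfect rank agreement: 1.0
print(icc_2_1(list(zip(human, aps))))           # < 1.0: ICC penalizes the offset
```

The contrast is the usual reason both statistics are reported: Pearson measures consistency only, so a constant scoring offset between the APS and a human rater leaves it at 1.0, while ICC(2,1) measures absolute agreement and drops below 1.0 for the same data.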

I. INTRODUCTION

II. LITERATURE REVIEW

III. METHOD

IV. RESULTS

V. DISCUSSION

VI. CONCLUSION

REFERENCES
