Academic Journal

AI–인간 평가자 간 독해 지문 난이도 평정 비교: 9급 국가직 공무원 영어시험(2019~2023년)을 중심으로

A Comparison of AI and Human Raters in Reading Text Difficulty Ratings: Focusing on the Grade 9 National Civil Service English Examination (2019~2023)

융합영어영문학, Vol. 10, No. 3, p. 39

This study examined the degree of agreement between AI-based CEFR difficulty analyses and human raters’ judgments of English reading passages from the Korean Grade 9 National Civil Service Examination (2019–2023). Fifty passages were analyzed using the Cathoven AI CEFR Checker, and six trained human raters independently evaluated text difficulty on a five-point scale. Weighted Cohen’s κ and Spearman’s ρ were used to examine inter-rater reliability and AI–human agreement. Results showed moderate to high agreement among human raters but relatively low alignment between AI and human ratings. AI tended to overestimate difficulty in texts with complex syntax and advanced vocabulary, whereas human raters emphasized content familiarity and cognitive accessibility. These findings suggest that AI can serve as a complementary assessment tool providing objective linguistic indicators rather than replacing human judgment.
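The abstract reports agreement using weighted Cohen’s κ and Spearman’s ρ. A minimal sketch of both statistics in pure Python, assuming quadratic weights and a 1–5 rating scale as described in the abstract; the `ai` and `human` ratings below are hypothetical, not the study’s data:

```python
def quadratic_weighted_kappa(a, b, k=5):
    """Weighted Cohen's kappa (quadratic weights) for two raters
    scoring the same items on an integer scale 1..k."""
    n = len(a)
    # Observed confusion matrix
    obs = [[0.0] * k for _ in range(k)]
    for x, y in zip(a, b):
        obs[x - 1][y - 1] += 1
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            w = (i - j) ** 2 / (k - 1) ** 2          # quadratic penalty
            num += w * obs[i][j]                      # observed disagreement
            den += w * row[i] * col[j] / n            # chance disagreement
    return 1.0 - num / den

def _ranks(xs):
    """Average ranks; tied values share the mean of their rank positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for t in range(i, j + 1):
            ranks[order[t]] = avg
        i = j + 1
    return ranks

def spearman_rho(a, b):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    ra, rb = _ranks(a), _ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    sa = sum((x - ma) ** 2 for x in ra) ** 0.5
    sb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (sa * sb)

# Hypothetical example: AI vs. one human rater on ten passages (1-5 scale).
ai =    [3, 4, 5, 2, 4, 5, 3, 4, 2, 5]
human = [3, 3, 4, 2, 3, 4, 3, 4, 2, 4]
print(round(quadratic_weighted_kappa(ai, human), 3))
print(round(spearman_rho(ai, human), 3))
```

In this toy example the AI rates every disputed passage one band higher than the human, which mirrors the overestimation pattern the abstract reports; both statistics stay well above zero because the disagreements are small and systematic.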

Ⅰ. Introduction

Ⅱ. Literature Review

Ⅲ. Methods

Ⅳ. Results

Ⅴ. Conclusion and Suggestions

References
