
A Case Study of Applying Automated Essay Scoring Programs within a Military English Context
- Korean Association of ESP (한국ESP학회)
- ESP Review
- Vol.4 No.1
- 2022.06
- pp. 33-47 (15 pages)
The present study discusses the feasibility of implementing automated essay scoring (AES) software and explores its ability to analyze essays on military-related topics. Because the teaching of English to military personnel plays a pivotal role in a country’s defense and national security, there is a need to teach military-related terms and expressions. To help learners improve their English writing skills through timely and appropriate feedback, the researcher explored the possibility of using two AES programs and their feedback functions. Three student essays written on a military-related topic were collected and then edited by three different feedback groups: Grammarly, e-rater, and three human raters. The analysis compared the accuracy of the corrective feedback provided by Grammarly and e-rater with that of the native English-speaking raters. The error analysis, conducted using a qualitative approach, showed differences among the three feedback groups in categories such as verbs and symbol-mechanics. These findings suggest that AES programs can be adapted to an EFL classroom designed to teach military English but cannot assure the same degree of linguistic accuracy as human raters.
Ⅰ. INTRODUCTION
Ⅱ. LITERATURE REVIEW
Ⅲ. RESEARCH METHOD
Ⅳ. RESULTS & DISCUSSION
Ⅴ. CONCLUSION
REFERENCES