This study examines the use of ChatGPT to evaluate English oral presentations by Korean university students. By comparing assessments produced by a human instructor with those produced by ChatGPT, the study aims to determine the efficacy and reliability of AI-assisted evaluation in language education. The findings indicate that ChatGPT can effectively learn and apply instructor-provided evaluation criteria, particularly for aspects such as delivery and time management. However, limitations remain: the AI tended to overlook areas emphasized by human evaluators, such as content depth and the logical cohesion of ideas, and it assessed non-linguistic elements such as body posture and eye contact less accurately. These discrepancies highlight the need for continuous refinement of AI evaluation methods so that they better approximate human judgment and incorporate finer-grained assessment factors. The study also underscores the potential of AI-generated feedback to support a learner-centered approach, enabling students to access tailored feedback and improve their language skills independently. Future research should focus on refining AI evaluation algorithms and examining the long-term impact of AI-assisted feedback on student performance. The integration of AI tools such as ChatGPT into educational practice holds promise, but ongoing evaluation and refinement are essential to ensure that they meet pedagogical goals.
1. Introduction
2. Literature Review
3. Methodology
4. Results and Discussion
5. Conclusion and Implications
References