The purpose of this study is to develop an English speaking skills assessment model that uses an AI chatbot. The model comprises the assessment tasks, a grading rubric, and the test results. Ten English teachers and 64 students were surveyed about existing English speaking assessments to diagnose potential problems. The assessment model consists of four tasks designed with reference to the Korean national curriculum and the research literature. The assessment criteria are content delivery, accuracy, and task completion, and these determine the achievement score. Before the test was administered, a pre-inspection was performed by the ten English teachers to improve its validity and practicality. After taking the test, the 64 students completed a questionnaire exploring their reactions to it. To verify reliability, the test was administered twice at an interval and the two sets of results were compared. The results indicated that the assessment model has high validity and reliability. Additionally, the students reported feeling more confident than in an in-person interview. Furthermore, they improved their speaking proficiency by taking up the feedback provided. Finally, the pedagogical implication is that individualized learning and scaffolded assessment are possible.