
Fine-tuning and Evaluation of LLaMA Models for Correcting Korean Particle Substitution Errors in Beginner Vietnamese Learners - Focusing on eun/neun (은/는), i/ka (이/가), e (에), and eso (에서)


Korean grammatical particles present a persistent challenge for Vietnamese learners due to fundamental syntactic differences between the two languages. Vietnamese lacks case-marking particles, often leading to substitution errors involving eun/neun (은/는), i/ka (이/가), e (에), and eso (에서). Traditional teaching methods offer limited success in addressing these errors. Motivated by the need for more adaptive and learner-sensitive solutions, this paper explores fine-tuning the LLaMA 3.2 1B language model to correct Korean particle substitution errors commonly made by beginner Vietnamese learners. A custom dataset was developed by generating simulated learner errors based on authentic sentence structures. The model was fine-tuned using Low-Rank Adaptation (LoRA) and instruction-based prompts to ensure efficiency and contextual accuracy. Evaluation on a 5,800-sentence test set demonstrated a sentence-level accuracy of 91.15%, compared with just 8.36% for the pre-trained baseline. These results show that, with appropriate fine-tuning, large language models can provide sound grammatical corrections tailored to individual learners' needs. The approach holds promising potential for intelligent tutoring systems that deliver one-to-one, real-time feedback in second language learning environments.
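The abstract reports sentence-level accuracy as its evaluation metric. A minimal sketch of how such a metric is typically computed is shown below; the exact-match criterion (a correction counts only if the whole output sentence matches the reference) is an assumption, since the paper's precise scoring rule is not given here, and the example sentences are hypothetical.

```python
def sentence_accuracy(predictions, references):
    """Fraction of sentences whose corrected output exactly matches
    the reference sentence (assumed exact-match criterion)."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must be the same length")
    correct = sum(p.strip() == r.strip() for p, r in zip(predictions, references))
    return correct / len(references)

# Hypothetical examples: model outputs vs. reference corrections.
preds = ["저는 학생입니다.", "학교에 갑니다."]
refs  = ["저는 학생입니다.", "학교에서 공부합니다."]
print(sentence_accuracy(preds, refs))  # 0.5
```

Under this criterion, a model that fixes the particle but changes other tokens still scores zero for that sentence, which makes the 91.15% versus 8.36% gap a conservative comparison.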
