The rapid growth of streaming platforms has increased demand for high-quality subtitle translation, underscoring the need for domain adaptation in machine translation. This study fine-tuned the NLLB-200-distilled-600M model for Korean-French subtitles using BF16 mixed-precision training on NVIDIA H100 GPUs, with FLORES-200 language tags and forced BOS tokens for translation-direction control. Trained on AI-Hub film and drama data with strictly non-overlapping splits, the model achieved BLEU 84.33 and chrF 95.39 on 2,039 test sentences, with notable variation by sentence length and genre. Qualitative review highlighted errors in discourse markers, cultural references, and colloquial forms. The study contributes a practical fine-tuning recipe, an efficient H100/BF16 training setup, reproducibility protocols, and pedagogical insights, demonstrating that subtitle translation requires both technical accuracy and cultural equivalence.
Ⅰ. Introduction
Ⅱ. Research Background
Ⅲ. Methods & Experimental Design
Ⅳ. Results and Discussion
Ⅴ. Conclusion and Discussion
References