Korea Society of IT Services — KCI-indexed scholarly journal

Self-Attention 시각화를 사용한 기계번역 서비스의 번역 오류 요인 설명

Explaining the Translation Error Factors of Machine Translation Services Using Self-Attention Visualization

DOI : 10.9716/KITS.2022.21.2.085

This study analyzed the translation error factors of machine translation services such as Naver Papago and Google Translate through Self-Attention path visualization. Self-Attention is a key mechanism of the Transformer and BERT NLP models and has recently been widely used in machine translation. We propose a method to explain the translation error factors of machine translation algorithms by comparing the Self-Attention paths of the ST (source text) and ST′ (a transformed ST whose meaning is unchanged but whose translation output is more accurate). This method provides explainability for analyzing the internal process of a machine translation algorithm, which is otherwise invisible, like a black box. In our experiment, we were able to identify the factors that caused translation errors by analyzing the differences in the key words' attention paths. The study used the XLM-RoBERTa multilingual NLP model provided through exBERT for Self-Attention visualization, applied to two examples: a Korean–Chinese and a Korean–English translation.
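The paper visualizes the attention of a real XLM-RoBERTa model with exBERT. As a minimal sketch of the underlying comparison idea only — random toy embeddings and weights standing in for a trained model, with all names hypothetical — a single self-attention layer can be computed for ST and a perturbed ST′, and the per-token shift in attention paths measured:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """One head of scaled dot-product self-attention.

    Returns the output and the attention-weight matrix,
    whose rows are the 'attention paths' compared in the paper."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # Row-wise softmax (numerically stabilized)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V, w

rng = np.random.default_rng(0)
d_model, n_tokens = 8, 5
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))

X_st = rng.standard_normal((n_tokens, d_model))      # toy embeddings for ST
X_st2 = X_st.copy()
X_st2[2] += 0.5 * rng.standard_normal(d_model)       # token 2 'paraphrased' in ST'

_, A_st = self_attention(X_st, Wq, Wk, Wv)
_, A_st2 = self_attention(X_st2, Wq, Wk, Wv)

# L1 difference between each token's attention path in ST vs. ST'
shift = np.abs(A_st - A_st2).sum(axis=-1)
print("attention-path shift per token:", np.round(shift, 3))
```

In the paper's setting the same comparison is made on a trained model's attention maps; the token whose path shifts most between ST and ST′ points to the error factor.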

1. Introduction

2. Theoretical Background

3. Introduction to exBERT

4. Research Model

5. Research Results

6. Conclusion

References
