KCI-indexed Journal Article

Grammatical Generalization Patterns of L2 Neural Language Models 'Trained' on L2 English Textbooks

Grammatical Generalizations in Neural Language Models Trained on L2 Textbooks


Recent studies employing state-of-the-art neural network language models (NLMs) have reported human-like performance in 'understanding' various linguistic phenomena, particularly on the Benchmark of Linguistic Minimal Pairs (BLiMP), a challenge dataset of sentence pairs used to evaluate the linguistic knowledge of NLMs on major grammatical phenomena in English (Warstadt et al., 2020). Adopting this methodology, this paper aims to assess the level of linguistic knowledge acquired by L2-NLMs trained on English textbooks published in Korea (hereafter, the K-English datasets) and to compare it with that of English native speakers and L1-NLMs. Assuming that an NLM is also a language learner, we used BLiMP to evaluate the grammaticality-rating performance of L2-NLMs based on Generative Pre-trained Transformer 2 (GPT-2) and Long Short-Term Memory (LSTM) architectures. In conclusion, this study demonstrates that the L2-NLMs attain a substantially lower level of grammatical generalization than both their L1 counterparts and English native speakers. The results imply that the K-English training datasets are not robust enough for L2-NLMs to make substantial grammatical generalizations.
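BLiMP scoring is typically implemented by checking whether a language model assigns a higher probability to the grammatical member of each minimal pair. The sketch below illustrates that procedure with an off-the-shelf Hugging Face GPT-2 checkpoint; the example pair and the `sentence_log_prob` helper are illustrative assumptions, not the paper's actual evaluation code or data.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Hypothetical BLiMP-style minimal pair (grammatical vs. ungrammatical sentence).
pair = {
    "sentence_good": "The cats annoy Tim.",
    "sentence_bad": "The cats annoys Tim.",
}

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Return the total log-probability the model assigns to the sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels=ids makes the model return the mean cross-entropy
        # over the predicted tokens (all tokens after the first).
        loss = model(ids, labels=ids).loss
    # Negate and multiply by the number of predicted tokens to recover
    # the summed log-probability of the sentence.
    return -loss.item() * (ids.size(1) - 1)

good = sentence_log_prob(pair["sentence_good"])
bad = sentence_log_prob(pair["sentence_bad"])

# The model "passes" the pair if it prefers the grammatical sentence;
# BLiMP accuracy is the fraction of pairs passed.
print(f"log P(good) = {good:.2f}, log P(bad) = {bad:.2f}, correct = {good > bad}")
```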

1. Introduction

2. Research Methodology

3. Prior Studies of Linguistic Phenomena Using GPT-2 and LSTM

4. Grammatical Generalization Experiments

5. Overall Correlations

6. Discussion and Conclusion
