현대문법연구 No. 122
KCI-indexed academic journal

On Relative Clause Attachment Preferences in the L2 LSTM LM

A well-known technique for evaluating neural language models (NLMs) is to test whether they assign higher probabilities to valid than to invalid syntactic constructions, on the assumption that a grammatical sentence should be more probable than an ungrammatical one. In this study, we use ambiguous relative clause attachment to extend such evaluation to cases with multiple simultaneously valid interpretations, where stark grammaticality differences are absent. We compare the performance of L1 English models with that of a model of Korean L2 learners' English to probe the biases of the L2 LM for ambiguous relative clause attachment. As an initial step, we implement an L2 Long Short-Term Memory (LSTM) LM using the K-English Textbook corpus. We then test the attachment preferences of the L2 LM using the stimuli from Davis (2022). In doing so, we confirm that the L2 LSTM LM prefers LOW attachment in all test sentences, as L1 LMs do. In addition, the L2 LM's knowledge of implicit causality is not as robust as that of humans. We thus find a mismatch between human attachment preferences and those of NLMs. Through several experiments, we provide additional compelling evidence of a broader gap between NLMs and humans in comprehension tasks.
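The probability-comparison methodology described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: a smoothed bigram model stands in for the LSTM LM, and the training sentences and HIGH/LOW test pair are hypothetical stand-ins for the K-English Textbook corpus and the Davis (2022) stimuli. The model "prefers" whichever disambiguated reading it assigns the higher probability to.

```python
import math
from collections import Counter

def train_bigram_lm(corpus):
    """Estimate a bigram LM with add-one smoothing; return a sentence log-prob function."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sent in corpus:
        toks = ["<s>"] + sent.split() + ["</s>"]
        vocab.update(toks)
        unigrams.update(toks[:-1])           # contexts only
        bigrams.update(zip(toks, toks[1:]))
    V = len(vocab)

    def logprob(sent):
        toks = ["<s>"] + sent.split() + ["</s>"]
        return sum(
            math.log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
            for a, b in zip(toks, toks[1:])
        )

    return logprob

def attachment_preference(lp, low_reading, high_reading):
    """LOW if the model assigns higher probability to the LOW-attachment reading."""
    return "LOW" if lp(low_reading) > lp(high_reading) else "HIGH"

# Hypothetical training data, biased toward plural-agreeing relative clauses.
corpus = [
    "the son of the colonels who were brave left",
    "the niece of the teachers who were kind smiled",
]
lp = train_bigram_lm(corpus)

# Number agreement disambiguates the attachment site of the relative clause:
low_reading = "the son of the colonels who were brave left"   # RC modifies "colonels" (LOW)
high_reading = "the son of the colonels who was brave left"   # RC modifies "son" (HIGH)

print(attachment_preference(lp, low_reading, high_reading))   # -> LOW, given this biased corpus
```

The same comparison logic applies unchanged if `lp` is replaced by per-sentence log-probabilities from an LSTM LM; only the scoring function differs.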

1. Introduction

2. Davis (2022)

3. The L2 LSTM LM and Its Attachment Preferences

4. General Discussion

5. Conclusion

References
