
Does BERT Learn Syntactic and Semantic Preferences in Picture Noun Phrase Interpretation?

영어학연구 제30권 2호 (Studies in English Linguistics, Vol. 30, No. 2)

This study investigates whether the Bidirectional Encoder Representations from Transformers (BERT) model captures the semantic and syntactic preferences of low-frequency expressions, using reference resolution in picture noun phrases (PNPs) as a test case. We report three experiments that evaluate BERT's grasp of the referential differences between personal pronouns and reflexives in possessor-less and possessed PNPs. The experiments show that BERT exhibits human-like referential preferences with reflexives but not with personal pronouns. The findings for reflexive resolution suggest that BERT's deep learning training does not rely solely on frequency information but serves as a mechanism for acquiring more systematic linguistic soft constraints. The divergent resolution pattern for pronouns may be attributed to reflexives' more explicit referential dependency and their relatively low frequency.
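The abstract does not spell out the evaluation paradigm, but a common way to probe such referential preferences is to mask the anaphor slot in a PNP frame and compare the model's masked-LM probabilities for the competing forms (e.g. a reflexive vs. a personal pronoun). The sketch below shows only the comparison logic with made-up probabilities; the frame sentence, candidate set, and numbers are illustrative assumptions, not the paper's actual stimuli or results. In a real run, the scores would come from a masked language model such as `bert-base-uncased` via a fill-mask call.

```python
# Hypothetical reconstruction of the comparison step in a fill-mask probing
# experiment on picture noun phrases (PNPs). The probabilities below are
# invented for illustration; a real experiment would obtain them from a
# masked LM, e.g. HuggingFace's fill-mask pipeline on a frame like
# "John saw a picture of [MASK]." with targets ["himself", "him"].

def referential_preference(scores):
    """Return the candidate anaphor with the higher masked-LM probability.

    `scores` maps candidate anaphors to the probabilities a model assigns
    to each in the masked slot of a PNP frame.
    """
    return max(scores, key=scores.get)

# Illustrative (made-up) probabilities for a possessor-less PNP:
example = {"himself": 0.31, "him": 0.07}
print(referential_preference(example))  # the model's preferred reading
```

Comparing such preferences across possessor-less and possessed PNP frames, and across reflexives and pronouns, is one way to operationalize the human-like/non-human-like contrast the abstract reports.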

1. Introduction

2. Background

3. Experiments

4. General Discussion

References
