This research investigates the ability of the Bidirectional Encoder Representations from Transformers (BERT) model to understand the semantic and syntactic preferences of low-frequency expressions through reference resolution in picture noun phrases (PNPs). To this end, we report on three experiments that evaluate BERT's understanding of the reference resolution differences between personal pronouns and reflexives in possessor-less and possessed PNPs. Our experiments show that BERT exhibits human-like referential preferences with reflexives but not with personal pronouns. The findings for reflexive resolution suggest that BERT's training does not rely solely on frequency information but also serves as a mechanism for acquiring more systematic linguistic soft constraints. Moreover, the fact that the reflexives' resolution patterns differ from those of the personal pronouns could be attributed to the reflexives' more explicit referential dependency and their relatively low frequency.
1. Introduction
2. Background
3. Experiments
4. General Discussion
References