Studies in Modern Grammar, No. 119
KCI-indexed academic journal

Transformer-Based Language Models as Psycholinguistic Subjects: Focusing on Understanding Metaphor

DOI : 10.14342/smog.2023.119.87

Metaphor is a fundamental aspect of human language and cognition, playing a crucial role in communication, comprehension, and creative expression. In light of the recent advancements demonstrated by prominent language models, a pivotal question arises: can these large language models effectively discern metaphorical knowledge? Our primary objective is to compare the surprisal values estimated from neural network language models, both autoregressive and bidirectional, to human reaction times when exposed to metaphorical and literal sentences. Our secondary objective is to assess the AI's comprehension of metaphors using sensicality ratings generated by ChatGPT. To achieve this, we used psycholinguistic methods and adopted the experimental materials from Lai, Curran, and Menn (2009). We found that the surprisal values estimated from the autoregressive language model reflect metaphor processing that closely resembles that of native speakers. Furthermore, ChatGPT processes conventional metaphorical sentences much as it does literal sentences, mirroring the convergence observed between native speakers' ERP responses to conventional metaphorical sentences and their responses to literal sentences.
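The surprisal measure the abstract refers to can be illustrated with a minimal sketch. Surprisal is the negative log probability of a word given its preceding context; lower-probability continuations yield higher surprisal, which is the quantity compared against human reaction times. The phrases and probability values below are hypothetical illustrations, not the paper's actual materials or model estimates:

```python
import math

# Hypothetical conditional probabilities P(next_word | context),
# standing in for estimates an autoregressive language model would produce.
cond_prob = {
    ("the lawyer is a", "shark"): 0.02,   # metaphorical continuation
    ("the lawyer is a", "person"): 0.20,  # literal continuation
}

def surprisal(context: str, word: str) -> float:
    """Surprisal in bits: -log2 P(word | context)."""
    return -math.log2(cond_prob[(context, word)])

# The less expected metaphorical continuation carries higher surprisal,
# paralleling the longer human reading times the study compares against.
print(surprisal("the lawyer is a", "shark"))   # higher
print(surprisal("the lawyer is a", "person"))  # lower
```

In practice, such probabilities would come from a pretrained autoregressive model's softmax output over the vocabulary at each position, but the definition of the measure is exactly the one shown here.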

1. Introduction

2. Previous Studies

3. Methodology

4. Results

5. Discussion

6. Conclusion

References
