A Bibliometric Analysis of Large Language Models' Trustworthiness from a Dynamic Perspective
- The Institute of Internet, Broadcasting and Communication (한국인터넷방송통신학회)
- International Journal of Advanced Smart Convergence
- Vol. 14, No. 2
- 2025.01, pp. 46-59 (14 pages)
The trustworthiness of large language models (LLMs) is becoming increasingly important, but extant review studies show two major limitations in elucidating it over time. First, as of 2024, they have not covered the most recent studies on the trustworthiness of LLMs. Second, they have examined the trustworthiness of LLMs only over a limited timespan, without considering how it changes over time. To overcome these limitations, this research carried out a state-of-the-art bibliometric analysis of 117 articles on the trustworthiness of LLMs across two stages of change, from a dynamic perspective. Our study revealed the following four findings. First, article publications and citations grew drastically in the first half of 2024, confirming the trustworthiness of LLMs as a promising recent research area in artificial intelligence (AI). Second, business, medicine, and education were especially noteworthy research areas related to the trustworthiness of LLMs. Third, LLM governance was an important recently emergent topic. Fourth, multinational collaboration on the trustworthiness of LLMs was strengthened. We suggest the following topics for future studies on the trustworthiness of LLMs: further promoting LLM governance, employing multidisciplinary and interdisciplinary approaches, and strengthening multinational collaboration.