
Understanding Review Helpfulness through Diagnosticity and Cognitive Load: Comparative Analysis of LLM and ML Models on Restaurant Reviews


This study investigates the determinants of review helpfulness and evaluates the predictive performance of traditional machine learning (ML) models and large language models (LLMs) using a 14-year dataset of 46,392 user-generated reviews from the OpenTable restaurant reservation platform. We compare four traditional ML classifiers (logistic regression, decision tree, random forest, and gradient boosted tree) with a fine-tuned version of DistilBERT, a lightweight LLM based on bidirectional encoder representations from transformers (BERT). While previous studies on review helpfulness have primarily focused on surface-level features such as length, sentiment, or rating, we address a critical gap by incorporating both information diagnosticity and cognitive load as core theoretical perspectives. Specifically, we apply information diagnosticity theory (IDT) and cognitive load theory (CLT) to conceptualize helpful reviews as those that are both specific and cognitively accessible. Our findings show that DistilBERT outperforms all baseline ML models in terms of precision and area under the curve (AUC) while maintaining computational efficiency. Topic modeling results further reveal that reviews featuring functional, clear, and experience-based content are more likely to be classified as helpful, whereas emotionally vague or technically dense reviews tend to be less effective. We contribute to the literature by showing how a theory-informed LLM can capture both the diagnostic and cognitive dimensions of helpfulness, an area previously underexplored.
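The abstract names four baseline ML classifiers evaluated on precision and AUC. As a minimal sketch of how such a baseline comparison might look (assuming scikit-learn, with synthetic data standing in for the OpenTable review features, since the paper's actual feature set and hyperparameters are not given here):

```python
# Hypothetical baseline comparison: four classifiers scored on precision and AUC.
# Synthetic data replaces the real review features; this is an illustration only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import precision_score, roc_auc_score

# Stand-in for engineered review features and a binary "helpful" label.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=42),
    "random_forest": RandomForestClassifier(random_state=42),
    "gradient_boosted_tree": GradientBoostingClassifier(random_state=42),
}

results = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    preds = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]  # positive-class probability for AUC
    results[name] = {
        "precision": precision_score(y_te, preds),
        "auc": roc_auc_score(y_te, proba),
    }

for name, scores in results.items():
    print(f"{name}: precision={scores['precision']:.3f}, auc={scores['auc']:.3f}")
```

The fine-tuned DistilBERT model would be evaluated on the same held-out split with the same two metrics, allowing a direct comparison against these tabular baselines.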
