KCI Candidate Academic Journal

Evaluating ChatGPT’s Competency in BIM Related Knowledge via the Korean BIM Expertise Exam

ChatGPT, a chatbot based on GPT large language models, has gained immense popularity among the general public as well as domain professionals. To assess its proficiency in specialized fields, ChatGPT was tested on mainstream exams like the bar exam and medical licensing tests. This study evaluated ChatGPT's ability to answer questions related to Building Information Modeling (BIM) by testing it on Korea’s BIM expertise exam, focusing primarily on multiple-choice problems. Both GPT-3.5 and GPT-4 were tested by prompting them to provide the correct answers to three years' worth of exams, totaling 150 questions. The results showed that both versions passed the test with average scores of 68 and 85, respectively. GPT-4 performed particularly well in categories related to 'BIM software' and 'Smart Construction technology'. However, it did not fare well in 'BIM applications'. Both versions were more proficient with short-answer choices than with sentence-length answers. Additionally, GPT-4 struggled with questions related to BIM policies and regulations specific to the Korean industry. Such limitations might be addressed by using tools like LangChain, which allow for feeding domain-specific documents to customize ChatGPT’s responses. These advancements are anticipated to enhance ChatGPT’s utility as a virtual assistant for BIM education and modeling automation.
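The customization approach mentioned above retrieves relevant domain documents and prepends them to the model's prompt so its answers are grounded in Korean BIM policy material. Below is a minimal sketch of that retrieve-then-prompt pattern in plain Python; real pipelines such as LangChain use embedding-based similarity rather than the simple keyword-overlap scoring used here, and the sample documents are hypothetical placeholders, not actual exam or guideline text.

```python
# Sketch of retrieval-augmented prompting: pick the domain document most
# relevant to the question, then prepend it as context for the LLM.
# Keyword overlap stands in for the embedding similarity a real
# LangChain-style pipeline would use.

def overlap(query: str, doc: str) -> int:
    """Count lowercase words shared between the query and a document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document with the highest overlap score."""
    return max(docs, key=lambda d: overlap(query, d))

def build_prompt(question: str, docs: list[str]) -> str:
    """Prepend retrieved context so the model can ground its answer."""
    context = retrieve(question, docs)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Hypothetical domain snippets, for illustration only.
docs = [
    "Korean BIM delivery guidelines require an execution plan from contractors.",
    "IFC is an open schema for exchanging building information models.",
]
print(build_prompt("What do Korean BIM guidelines require?", docs))
```

The same structure extends to the paper's suggestion: feeding Korean BIM regulations as the document store would let the model answer the policy questions GPT-4 struggled with.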

1. Introduction

2. Research Background

3. Research Methods

4. Research Results

5. Implications

6. Conclusion

Acknowledgements

References
