This study evaluated the performance of generative AI in solving mechanical engineering problems and its potential for improvement through error compensation. Questions were drawn from the civil-service competitive examination and the selection examination for secondary-school teachers, and experiments were conducted with Zero-shot prompts (questions only) and Few-shot prompts (questions plus related content). A control-group pretest-posttest design was used for the civil-service exams and a single-group posttest design for the secondary teacher exam. Using Google's NotebookLM, Zero-shot prompts yielded correct-answer rates of 60-75% on the civil-service exams, while Few-shot prompts yielded 88-95%, increases of 25% and 24% for the grade 7 and grade 9 exams, respectively. Few-shot prompts also significantly improved accuracy on the secondary teacher exam, with a 40% increase. The study concludes that supplying relevant materials through Few-shot prompts is crucial when using generative AI for engineering problem solving. It also emphasizes the importance of verifying the reliability of the supplied information and validating the AI's results. This underscores the need for accurate, trustworthy information when students or educators use generative AI for self-directed learning in engineering problem solving.
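The two prompting conditions compared in the study can be sketched as simple prompt-construction functions. This is a minimal illustration, not the study's actual prompt wording or exam items; the example question, reference text, and function names are assumptions for demonstration only.

```python
# Sketch of the two prompting strategies compared in the study:
# Zero-shot (question only) vs. Few-shot (question plus related content).
# The question and reference text below are hypothetical placeholders,
# not actual items from the civil-service or teacher-selection exams.

def build_zero_shot_prompt(question: str) -> str:
    """Zero-shot: the model receives only the exam question."""
    return f"Solve the following mechanical engineering problem:\n{question}"

def build_few_shot_prompt(question: str, references: list[str]) -> str:
    """Few-shot: the question is accompanied by related study material."""
    context = "\n\n".join(references)
    return (
        "Use the reference material below to solve the problem.\n\n"
        f"Reference material:\n{context}\n\n"
        f"Problem:\n{question}"
    )

question = "A shaft transmits 10 kW at 1200 rpm. Find the torque."
references = ["Torque: T = P / omega, where omega = 2*pi*N/60 (rad/s)."]

zero_shot = build_zero_shot_prompt(question)
few_shot = build_few_shot_prompt(question, references)
print(few_shot)
```

In the study's setup, the "related content" for Few-shot prompts was supplied as source documents to NotebookLM, which grounds its answers in the uploaded material; this sketch only shows the structural difference between the two conditions.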
Ⅰ. Introduction
Ⅱ. Theoretical Background
Ⅲ. Experimental Design
Ⅳ. Experimental Results and Analysis
Ⅴ. Conclusion, Discussion, and Suggestions
References