This study evaluated engineering students’ ethical sensitivity to an AI emotion recognition robot scenario and explored its characteristics. For data collection, 54 students (27 majoring in Convergence Electronic Engineering and 27 in Computer Software) were asked to list five factors regarding the AI robot scenario. To analyze ethical sensitivity, we examined whether the students recognized AI ethical principles in the scenario, namely safety, controllability, fairness, accountability, and transparency. We also categorized each student’s level as either informed or naive, based on whether they inferred specific situations and diverse outcomes and felt a responsibility to take action as engineers. As a result, 40.0% of the students’ responses contained AI ethical principles; of these, safety accounted for 57.1%, controllability 10.7%, fairness 20.5%, accountability 11.6%, and transparency 0.0%. More students demonstrated ethical sensitivity at the naive level (76.8%) than at the informed level (23.2%). This study has implications for providing an ethical sensitivity evaluation tool that can be used in engineering education settings and for illustrating, through its application to engineering students, specific cases at varying levels of ethical sensitivity.
Ⅰ. Introduction
Ⅱ. Theoretical Background
Ⅲ. Research Methods
Ⅳ. Research Results
Ⅴ. Conclusion and Suggestions
References