This study examines how graduate students, positioned as both researchers and learners, integrate generative AI into their research and how they perceive its ethical implications. Conducted at a large research university in May 2025, the study employed a two-stage, mixed-methods design: (1) a survey of 431 students on usage patterns, perceived usefulness, and ethical concerns, and (2) a small-group workshop drafting guidelines for the responsible use of AI. The results indicate that students widely adopt AI for supplementary tasks, such as grammar and style editing, summarization, and code debugging, while expressing caution toward core scholarly activities, including manuscript writing and data generation. Perceived utility was highest for enhancing speed and alleviating workload, whereas ethical concerns centered on misinformation, citation integrity, and authorship. Notably, students differentiated between acceptable and prohibited uses across stages of the research process, indicating a stage-sensitive hierarchy of ethical risk. The workshop outcomes converged on four governance needs: the establishment of clear standards, transparent disclosure of AI use, human verification of outputs, and institutional support through guidelines and training. These findings highlight the dual perception of AI as both an enabler and a risk, and provide empirical grounding for graduate research ethics education and university policy.
Ⅰ. Introduction
Ⅱ. Theoretical Background
Ⅲ. Research Methods
Ⅳ. Research Findings
Ⅴ. Conclusion and Implications
References