AI (artificial intelligence) carries the risk of mechanical errors, which can have broad societal and economic implications. Consequently, appropriate regulatory measures to govern the use of AI technology are being explored. One significant legal issue concerns the transparency of AI content, meaning content generated or manipulated by AI. Establishing legislative systems that impose an obligation to mark such content clearly and prominently, so that it is distinguishable from genuine or human-created content, is a crucial step toward addressing this concern. This study first examines existing legal marking requirements applicable to AI content, including those governing deepfake videos and media harmful to juveniles, together with the sanctions imposed for violations. It then analyzes the key provisions of the U.S. Presidential Executive Order on AI (AI E.O.) and the PCDAI Act bill. The subsequent legislative examples of the European Union's DSA (Digital Services Act) and AI Act provide further insight into their respective regulatory approaches. To guarantee users' rights and maintain trusted transparency in the use of AI, legal obligations to mark AI content should apply not only to private service providers but also to public agencies and institutions. There is no legitimate basis for differentiating the extent of sanctions according to whether a service provider operates for profit. The regulatory focus should prioritize explainability and accountability from a human-centered perspective, emphasizing technical and managerial measures for certification and transparency standards. Legal measures to enhance AI literacy are also necessary. Therefore, mechanisms that ensure predictability within systems while correcting potential errors should be established to address the risks and potential harms associated with AI content.
Content moderation measures, such as detection, removal, blocking, and filtering, could mitigate the limitations of labeling. To enable sound judgments as to whether material constitutes AI content, free of systemic error and grounded in transparency, reliability, and accountability, it is essential to define the legal concept and scope of AI content precisely.
Ⅰ. Introduction
Ⅱ. Legislative Trends on Labeling Obligations for AI Content
Ⅲ. Legal Implications and Policy Directions of Labeling Obligations
Ⅳ. In Lieu of a Conclusion