민사소송 (Civil Procedure), Vol. 27, No. 1
KCI-listed academic journal

Judicial Review of AI Algorithms in Civil Litigation

Automated decision-making by AI algorithms is increasingly likely to give rise to civil liability. However, AI algorithms based on machine learning techniques are difficult to explain: they suffer from technical inscrutability arising from the nature of the learning methods themselves, legal opacity due to the protection of trade secrets or intellectual property rights, and incomprehensibility to the general public or judges owing to the complexity and counterintuitiveness of the algorithms, all of which make their judicial review difficult. When the mechanism of an AI algorithm is at issue in a lawsuit, the question is how an opaque AI algorithm can be evaluated by experts and reviewed by a court. This article first introduces the ACCC v. Trivago decision rendered by the Federal Court of Australia in 2020, focusing on how the experts appointed by each party presented their opinions on the AI algorithm and how the court drew its conclusion based on those opinions. It then examines the issues and solutions that may arise in reviewing opaque AI algorithms in Korean civil litigation procedure, drawing comparisons with the Trivago decision. It explains the basic features of Explainable AI (XAI) methods, including ante hoc and post hoc methods. It then points out problems in the Civil Procedure Act and the intellectual property laws of South Korea concerning the disclosure of data necessary for experts' analysis of AI algorithms and the protection of trade secrets contained in the produced data, and makes suggestions on how to solve those problems. Lastly, it recommends a more active use of party-appointed experts in the judicial review of AI algorithms by allowing their active participation throughout the litigation procedure in order to clarify issues and deepen the court's scientific understanding.
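To make the post hoc XAI methods mentioned in the abstract concrete, the following is a minimal sketch of one such technique, permutation importance: the model is treated as an opaque black box, and each input feature is scored by how much predictive accuracy drops when that feature's values are shuffled. The model, data, and function names here are illustrative assumptions, not taken from the article.

```python
import random

def black_box_model(x):
    # Stand-in for an opaque AI algorithm; internally only feature 0 matters,
    # but a post hoc method must discover this without reading the source.
    return 1 if x[0] > 0.5 else 0

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_features, seed=0):
    """Score each feature by the accuracy drop after shuffling its column."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    scores = []
    for j in range(n_features):
        col = [x[j] for x in X]
        rng.shuffle(col)
        X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
        scores.append(base - accuracy(model, X_perm, y))
    return scores

# Synthetic data whose labels depend only on feature 0.
rng = random.Random(42)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [1 if x[0] > 0.5 else 0 for x in X]
scores = permutation_importance(black_box_model, X, y, n_features=2)
print(scores)  # feature 0 scores high; feature 1 scores 0
```

A court-appointed or party-appointed expert applying this kind of method needs access to the model's inputs and outputs, but not to its source code, which is one reason post hoc methods matter when trade secrets block full disclosure.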

Ⅰ. Introduction

Ⅱ. Overview of the Trivago Case

Ⅲ. Review of AI Algorithms in Korean Civil Litigation

Ⅳ. Conclusion

References
