KCI-indexed Academic Journal

A Study on Cyborg-Type Automated Decision-Making by Artificial Intelligence

A Study on Cyborg Automated Decision-Making

DOI : 10.34267/cbstl.2021.12.1.95

Artificial intelligence research began with the attempt to have computers perform tasks that humans carry out intelligently. Work that started in the 1950s went through several booms and winters before reaching the current third boom. With AlphaGo's shocking victory in 2016 over Lee Se-dol, then the strongest Go player, interest in artificial intelligence surged, and the futures it may bring about, including its possibilities and the singularity, are now discussed even among the general public. Artificial intelligence has the potential to fundamentally change the way humans live, beyond mere technological advance.

The real value of artificial intelligence algorithms lies in prediction. At the center of the algorithmic revolutions taking place across the various fields of modern society there is one unchanging purpose: prediction. The recidivism risk prediction algorithms used in US criminal justice rest on the premise that individual humans behave in a predictable and consistent manner, and where such algorithms are used, it is in effect the algorithm rather than the judge that decides whether a defendant is detained or how he or she is sentenced. Automated decision making continues to grow at an unprecedented rate because machine learning makes it possible to extend the automated decision-making process, allowing broader and deeper decisions without human intervention, and its use is expected to keep expanding.

The risks, however, are as great as the benefits. The risks of artificial intelligence algorithms go beyond the individual risks of any specific algorithmic technology, and despite the widespread use of artificial intelligence in society, its problems are not yet properly understood. People who have been treated differently as a result of automated decisions have begun to question the fairness and impartiality of artificial intelligence, and as discriminatory outcomes and other sensitive problems have arisen, the controversy surrounding artificial intelligence algorithms keeps expanding. Even when a decision made by artificial intelligence is wrong or produces negative effects such as discrimination, there are few means of objecting to it. In a situation where automated decision making can threaten human autonomy and dignity, it is of utmost importance, in order to secure an individual's autonomous personhood, that the individual be able to know that the entire process of automated decision making affecting him or her is fair and free of error, and that humans be able to intervene directly when necessary. Although available data are increasing and algorithms are steadily improving, artificial intelligence algorithms do not completely eliminate the uncertainty inherent in decision making, so there remain cases in which humans, as the final decision makers, must make important decisions under unresolved uncertainty.

It is therefore necessary to build a system in which artificial intelligence algorithms and human experts cooperate to reach the best possible decisions. This paper addresses the direction such a collaborative system should take, as well as the problem of legal responsibility for erroneous decisions made by artificial intelligence algorithms or by the collaborative system.
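As an illustration only, and not part of the paper itself, the following is a minimal Python sketch of the kind of human-AI collaboration the abstract describes: an algorithm proposes a decision together with a confidence estimate, and cases whose uncertainty exceeds a chosen threshold are deferred to a human expert who makes the final call. All names here (Decision, decide, UNCERTAINTY_THRESHOLD, the toy model and reviewer) are hypothetical and chosen purely for illustration.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical threshold: proposals less confident than this are deferred to a human.
UNCERTAINTY_THRESHOLD = 0.8

@dataclass
class Decision:
    outcome: str        # e.g. "high_risk" / "low_risk"
    confidence: float   # model's confidence in the outcome, 0.0-1.0
    decided_by: str     # "algorithm" or "human"

def decide(case: dict,
           model: Callable[[dict], tuple[str, float]],
           human_review: Callable[[dict, str, float], str]) -> Decision:
    """Cyborg-type decision: the algorithm proposes, a human decides when uncertain."""
    outcome, confidence = model(case)
    if confidence >= UNCERTAINTY_THRESHOLD:
        # Confident enough: the automated decision stands, but is labelled as such.
        return Decision(outcome, confidence, decided_by="algorithm")
    # Residual uncertainty: the human expert sees the case and the model's proposal
    # and makes the final decision, keeping human intervention in the loop.
    final = human_review(case, outcome, confidence)
    return Decision(final, confidence, decided_by="human")

# Example usage with stand-in components (purely illustrative).
def toy_model(case: dict) -> tuple[str, float]:
    score = case.get("prior_offenses", 0) / 10
    return ("high_risk" if score > 0.5 else "low_risk", abs(score - 0.5) * 2)

def toy_review(case: dict, proposal: str, confidence: float) -> str:
    print(f"Review needed: model proposed {proposal} (confidence {confidence:.2f})")
    return proposal  # a real reviewer could overturn the proposal here

if __name__ == "__main__":
    print(decide({"prior_offenses": 9}, toy_model, toy_review))  # decided by algorithm
    print(decide({"prior_offenses": 5}, toy_model, toy_review))  # deferred to human
```

The design choice the sketch highlights is the one the abstract argues for: automation handles the confident cases, while unresolved uncertainty routes the decision, and the accountability for it, back to a human.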

Ⅰ. Introduction

Ⅱ. Automated Decision-Making by Artificial Intelligence and Its Problems

Ⅲ. The Concept and Implications of a Cyborg-Type Decision-Making System

Ⅳ. A Human-Automated Algorithm Collaboration Model: Directions for Building and Developing a Cyborg-Type Decision-Making System

Ⅴ. Conclusion
