
Adversarial sample poisoning and security enhancement strategies for deep neural network face recognition systems

With the development of artificial intelligence technology, face recognition systems based on deep neural networks are widely used in security monitoring, identity authentication, and human-computer interaction. However, recent studies have shown that these systems remain vulnerable to deployment-level adversarial attacks: adversarial samples can undermine their integrity and availability by poisoning datasets. We demonstrate how an attacker can degrade the reliability of a face recognition system by injecting crafted adversarial images into the test data. We then introduce a strategy for defending against such attacks that mitigates the resulting performance degradation through defensive distillation. By empirically evaluating face recognition systems with and without the defense mechanism, we show its impact on face recognition performance.
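The defensive distillation mentioned above rests on training with a softened softmax: a teacher network is trained at a high temperature T, and a student is then trained on the teacher's soft labels, which flattens output probabilities and dampens the gradients adversarial attacks exploit. A minimal, pure-Python sketch of the temperature-scaled softmax at the core of the method (the logit values and temperature are illustrative, not from the article):

```python
import math

def softmax(logits, T=1.0):
    """Softmax with temperature T. Higher T yields a flatter
    distribution, which is the core mechanism of defensive
    distillation: the student trains on these softened targets."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for a 3-class face classifier (hypothetical values).
logits = [8.0, 2.0, 1.0]
hard = softmax(logits, T=1.0)    # near one-hot: dominated by the top class
soft = softmax(logits, T=20.0)   # softened targets at distillation temperature

# The softened output preserves the ranking of classes but carries
# inter-class similarity information, and its smaller gradients make
# gradient-based adversarial perturbations harder to craft.
```

Note that at test time the distilled student is typically evaluated back at T = 1, so the softening affects training dynamics rather than inference behavior.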
