
Analysis of an Existing Method for Detecting Adversarial Attacks on Deep Neural Networks

Bibliographic Details
Main authors: Lapina, M. A., Dudun, G. D., Kotlyarov, D. V., Rjevskaya, N. V.
Format: Article
Language: English
Published: Springer Science and Business Media Deutschland GmbH, 2024
Subjects:
Online link: https://dspace.ncfu.ru/handle/123456789/29181
Description
Summary: The article analyzes an existing method for detecting adversarial attacks on deep neural networks, proposed in 2021 by researchers Ko, G. and Lim, G. from Carnegie Mellon University and the Korea Advanced Institute of Science and Technology (KAIST). It examines adversarial attacks as well as the history of research on the topic. The paper considers the concepts of interpretable and non-interpretable neural networks and the specifics of protection methods for each of these types. A method of protection against adversarial attacks that is applicable to both types of neural networks is also considered. An example of a simulated attack is given, which makes it possible to identify an indicator showing that an attack has occurred.
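
As a concrete illustration of what an adversarial attack on a deep neural network looks like, below is a minimal FGSM-style sketch in PyTorch. It is an illustration of the general idea only, not the detection method analyzed in the article; the toy model, the random input, and the epsilon value are assumptions made purely for this example.

```python
# A minimal FGSM-style adversarial perturbation in PyTorch, shown only to
# illustrate the notion of an adversarial attack mentioned in the summary.
# The toy model, input, and epsilon value are hypothetical, not from the article.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in deep neural network: a single linear classifier over 28x28 inputs.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()
loss_fn = nn.CrossEntropyLoss()

# Hypothetical clean input (e.g., a grayscale image) and its true label.
x = torch.rand(1, 1, 28, 28)
y = torch.tensor([3])

# Fast Gradient Sign Method: step the input along the sign of the loss gradient.
x.requires_grad_(True)
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.1  # attack strength (maximum per-pixel change)
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# A possible indicator that an attack has occurred: the prediction changes
# even though the perturbation is bounded by epsilon.
with torch.no_grad():
    clean_pred = model(x).argmax(dim=1).item()
    adv_pred = model(x_adv).argmax(dim=1).item()
print(f"clean prediction: {clean_pred}, adversarial prediction: {adv_pred}")
print(f"max perturbation: {(x_adv - x).abs().max().item():.3f}")
```

With a trained model, the pattern printed above, a confident prediction flipping under an imperceptibly small input change, is the kind of sign that attack-detection methods such as the one analyzed in the article aim to surface.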