Adversarial Attacks in Cybersecurity: How AI Models Can Be Fooled and Methods to Make Them Robust
Carol I.
Abstract
Artificial Intelligence (AI) is widely used in cybersecurity for intrusion detection, malware classification, and phishing prevention. However, these models are vulnerable to adversarial attacks, in which small, carefully crafted perturbations of the input data can mislead the system. This paper studies common adversarial attack techniques, including the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), and the Carlini-Wagner (CW) attack, and evaluates defense methods such as adversarial training and input preprocessing. Experiments show that these attacks significantly reduce model accuracy, while the defenses improve robustness but do not fully eliminate the risk. The work highlights the need for stronger, more reliable AI models in cybersecurity applications.
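To make the gradient-based attacks named above concrete, the sketch below shows a minimal FGSM implementation in PyTorch. The classifier, inputs, and epsilon value are illustrative assumptions, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Craft an FGSM adversarial example: x_adv = x + epsilon * sign(grad_x loss)."""
    # Work on a detached copy so gradients flow into the input, not the model.
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Take one step in the direction that maximizes the loss,
    # then clamp back into the valid input range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

PGD extends this idea by iterating the same signed-gradient step several times with projection back into an epsilon-ball around the original input, which is why it typically degrades accuracy more than single-step FGSM.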
Copyright
Copyright © 2025 Carol I. This is an open access article distributed under the Creative Commons Attribution License.