TY - JOUR
AU - McDaniel, Patrick
AB - Reinforcement learning (RL) offers powerful techniques for solving complex sequential decision-making tasks from experience. In this paper, we demonstrate how RL can be applied to adversarial machine learning (AML) to develop a new class of attacks that learn to generate adversarial examples: inputs designed to fool machine learning models. Unlike traditional AML methods that craft adversarial examples independently, our RL-based approach retains and exploits past attack experience to improve future attacks. We formulate adversarial example generation as a Markov Decision Process and evaluate RL's ability to (a) learn effective and efficient attack strategies and (b) compete with state-of-the-art AML. On CIFAR-10, our agent increases the success rate of adversarial examples by 19.4% and decreases the median number of victim model queries per adversarial example by 53.2% from the start to the end of training. In a head-to-head comparison with a state-of-the-art image attack, SquareAttack, our approach enables an adversary to generate adversarial examples with 13.1% more success after 5000 episodes of training. From a security perspective, this work demonstrates a powerful new attack vector that uses RL to attack ML models efficiently and at scale.
TI - Adversarial Agents: Black-Box Evasion Attacks with Reinforcement Learning
JF - Computing Research Repository
DO - 10.48550/arXiv.2503.01734
DA - 2025-03-03
UR - https://www.deepdyve.com/lp/arxiv-cornell-university/adversarial-agents-black-box-evasion-attacks-with-reinforcement-0MW0ogm8Vl
VL - 2025
IS - 2503
DP - DeepDyve
ER -