Published 2024 | Version v1 | Publication
Investigating Adversarial Policy Learning for Robust Agents in Automated Driving Highway Simulations
Description
This research explores an emerging approach, the adversarial policy learning paradigm, which aims to increase the safety and robustness of deep reinforcement learning models for automated driving. We propose an iterative procedure for training an adversarial agent that acts in a simulated highway environment and attacks the victim agent whose robustness is to be improved. Each training iteration consists of two phases: the adversarial agent is first trained to disrupt the victim agent's policy, and the victim model is then retrained to overcome the weaknesses exposed by the adversarial agent's attacks. The experimental results demonstrate that the victim agent trained against these adversarial attacks outperforms the original agent.
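A minimal sketch of the two-phase iterative procedure described above is shown below. The agent class, environment placeholder, iteration count, and step budgets are hypothetical illustrations, not the authors' implementation; in practice the environment would be a highway driving simulator and the agents deep reinforcement learning policies.

```python
# Sketch of the alternating adversarial-training loop (hypothetical placeholders).

class Agent:
    """Placeholder RL agent: updates its policy against a fixed opponent."""
    def __init__(self, name):
        self.name = name

    def train(self, env, opponent, steps):
        # Run `steps` environment interactions with `opponent` held fixed,
        # updating this agent's policy (learning details omitted in this sketch).
        print(f"training {self.name} vs fixed {opponent.name} for {steps} steps")


victim = Agent("victim")        # driving policy to be hardened
adversary = Agent("adversary")  # policy trained to induce victim failures
env = None                      # stands in for the simulated highway environment

N_ITERATIONS = 5  # hypothetical number of training iterations
for _ in range(N_ITERATIONS):
    # Phase 1: the adversary learns to disrupt the current victim policy.
    adversary.train(env, opponent=victim, steps=100_000)
    # Phase 2: the victim is retrained to overcome the attacks just discovered.
    victim.train(env, opponent=adversary, steps=100_000)
```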
Additional details
- URL: https://hdl.handle.net/11567/1163922
- URN: urn:oai:iris.unige.it:11567/1163922
- Origin repository: UNIGE