Adversarial Attacks on Navigation Systems

Visual navigation systems have become essential components of modern autonomous systems, with deep learning algorithms representing the state of the art.

In this project, we will work toward developing adversarial attacks on such algorithms. We aim to generate passive attacks that, when injected into the scene observed by the visual sensor, cause the navigation algorithm to diverge from its trajectory.

We will start by attacking a white-box algorithm and examine two aspects: where to place the attack in the scene (for a given attack patch), and how to blend a passive attack patch into the scene (for a given attack location).
Both aspects aim to maximize the divergence from the destination under a constrained patch injection, and each will be led by a different team.
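As a rough illustration of the patch-blending aspect, the sketch below optimizes a small image patch against a toy white-box model. Everything here is hypothetical: the "navigation model" is a simple linear map from pixels to a steering angle (so its gradient is known in closed form), the patch location is fixed, and the constraint is that patch pixels stay in the valid intensity range. It is a minimal sketch of gradient-based patch optimization, not the project's actual method.

```python
import numpy as np

# Hypothetical toy setup: a linear "navigation model" maps a flattened
# grayscale image to a steering angle. All names and sizes are illustrative.
rng = np.random.default_rng(0)
H = W = 8
w = rng.normal(size=H * W)  # white-box model weights, known to the attacker

def predict_heading(img):
    """Toy navigation model: steering angle = w . flatten(img)."""
    return float(w @ img.ravel())

scene = rng.uniform(0.0, 1.0, size=(H, W))  # clean scene seen by the sensor
baseline = predict_heading(scene)

# The attack may only modify a small patch region (constrained injection);
# all other pixels of the scene must remain untouched.
mask = np.zeros((H, W), dtype=bool)
mask[2:5, 2:5] = True

patched = scene.copy()
step = 0.5
for _ in range(100):
    # For a linear model, the gradient of the heading w.r.t. the image is
    # simply w; ascend its sign inside the patch to maximize divergence,
    # while clipping to the valid pixel range [0, 1].
    grad = np.sign(w.reshape(H, W))
    patched[mask] = np.clip(patched[mask] + step * grad[mask], 0.0, 1.0)

divergence = abs(predict_heading(patched) - baseline)
```

In a realistic setting the linear model would be replaced by a deep network (with gradients obtained by backpropagation), and the patch location itself would be a second optimization variable, matching the two team directions above.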

Another research direction is to attack a system that fuses GPS measurements with the visual sensor.