
Perceptually aligned gradients are a known phenomenon in which strong adversarial attacks with a large perturbation budget (epsilon = 1) produce samples that closely resemble samples from other classes. In this project, we wish to explore several questions: Under what conditions does this phenomenon reproduce, and why? Does it depend on the choice of attack and on the norm constraint?
We wish to study the above questions; of particular interest are EPGD attacks, as described in the second link.
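For background on the kind of attack discussed above, a standard L-infinity projected gradient descent (PGD) attack can be sketched as follows. This is a minimal illustration on a toy logistic-regression model with an analytic input gradient, not the EPGD variant referenced above; the function name, the toy model, and all parameter values are illustrative assumptions.

```python
import numpy as np

def pgd_linf(x, y, w, b, eps, alpha, steps):
    """Sketch of an L-infinity PGD attack on a toy logistic-regression model.

    x: input vector, y: binary label (0 or 1), (w, b): model parameters,
    eps: perturbation budget, alpha: step size, steps: number of iterations.
    """
    x_adv = x.copy()
    for _ in range(steps):
        # Gradient of the logistic loss w.r.t. the input: (sigmoid(w.x + b) - y) * w
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad = (p - y) * w
        # Ascent step in the sign of the gradient (maximizes the loss)
        x_adv = x_adv + alpha * np.sign(grad)
        # Project back onto the L-infinity ball of radius eps around x
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

With a large budget such as eps = 1, the adversarial sample is free to move far from the original input, which is the regime where perceptually aligned gradients have been reported.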