Even though Deep Neural Networks (DNNs) have been
applied with great success in areas ranging from speech
processing [7] to medical diagnostics [4], recent work has
demonstrated that they are vulnerable to adversarial
perturbations [3], [6], [8], [10], [11], [17], [18], [21]. Such
maliciously crafted changes to a DNN's input cause it to
misbehave in unexpected and potentially dangerous ways.