Time: Wednesday 15-Jul-2020 16:00
Motivation / Abstract
Computer vision systems based on Convolutional Neural Networks (CNNs) have been reported to be unstable and vulnerable to adversarial attacks that are imperceptible to humans. The authors present novel work to better identify algorithms that are vulnerable to these attacks, and offer explainability-based options to help design more robust models.
1) Is your method helping models ignore noise in a way similar to how human cognition filters out irrelevant information?
2) Why is adversarial training expensive? How does your method improve on it?
3) Do you need access to the model to generate your explanations?