Hey Federico. Thanks for both of your videos about this paper and the fast gradient sign method. I have one open but important question. As the authors write in their paper, the goal is not only to fool the network but to make it give a wrong answer with HIGH confidence (e.g., 57.7% "panda" becomes 99.3% "gibbon" in the panda-to-gibbon example). When trying my own examples with the algorithm in the TensorFlow notebook, I can fool the network, but I never get high confidence for the perturbed images, which frustrates me because I think this high confidence is a key point of the whole idea. Any idea why the algorithm doesn't deliver high confidence when fooling the network, or did you perhaps find an image where it worked? Thanks a lot!
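One plausible reason (a guess, not a definitive answer): the confidence of the wrong class depends strongly on the step size eps and on the model itself, and iterated variants of the attack (BIM/PGD) typically reach higher confidence than a single FGSM step. Here is a minimal NumPy sketch on a toy two-class linear softmax classifier (hypothetical weights, not the notebook's actual model) showing that the wrong-class confidence after an FGSM step grows with eps:

```python
import numpy as np

# Toy linear softmax classifier (hypothetical weights, for illustration only --
# not the pretrained ImageNet model used in the TensorFlow notebook).
W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm(x, label, eps):
    # Fast gradient sign method: x_adv = x + eps * sign(grad_x loss).
    p = softmax(W @ x)
    grad = W.T @ (p - np.eye(2)[label])  # gradient of cross-entropy w.r.t. x
    return x + eps * np.sign(grad)

x = np.array([2.0, 0.0])  # classified as class 0 with high confidence
for eps in (0.5, 1.1, 2.0, 3.0):
    p_wrong = softmax(W @ fgsm(x, 0, eps))[1]
    print(f"eps={eps}: wrong-class confidence = {p_wrong:.3f}")
```

With a small eps the model may be fooled only marginally (or not at all); a larger eps, or many small iterated steps, pushes the wrong-class probability toward 1. In a deep network the picture is less clean, but the same intuition suggests trying a larger eps or an iterative attack if the notebook's default settings only give low-confidence misclassifications.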
excellent video
Thanks for your video! It helped me with a grad presentation.
great explanation! Thank you
How can I do this on a custom dataset instead of the ImageNet dataset?
Great work, can I get some consultation from you?