One pixel attack | Just change one pixel and fool the neural network into making crazy predictions

  • Published on Sep 5, 2024
  • The one pixel attack is a fascinating concept. It shows how vulnerable neural networks are.
    This lecture is part of the Explainable AI (XAI) course.
    References:
    Colab notebook: colab.research...
    Papers
    One pixel attack: arxiv.org/pdf/...
    GitHub repos
    Original GitHub repo: github.com/Hyp...
    My GitHub repo: github.com/sre...
    ✉️ Join our FREE Newsletter: vizuara.ai/our...
    =================================================
    The Explainable AI (XAI) lecture series is a project started by the co-founders of Vizuara: Dr. Raj Dandekar (IIT Madras BTech, MIT PhD), Dr. Rajat Dandekar (IIT Madras MTech, Purdue PhD) and Dr. Sreedath Panat (IIT Madras MTech, MIT PhD).
    The Explainable AI (XAI) lecture series is not a normal video course. In this project, we will teach XAI from scratch. We will make lecture notes and also share reference material.
    As we learn the material again, we will share thoughts on what is actually useful in industry and what has become irrelevant. We will also share a lot of information on which subjects contain open areas of research. Interested students can also start their research journey there.
    For students who are confused or stuck in their ML journey, maybe courses and offline videos are not inspiring enough. What might inspire you is seeing someone else learn machine learning from scratch.
    No cost. No hidden charges. Pure old school teaching and learning.
    =================================================
    🌟 Meet Our Team: 🌟
    🎓 Dr. Raj Dandekar (MIT PhD, IIT Madras department topper)
    🔗 LinkedIn: / raj-abhijit-dandekar-6...
    🎓 Dr. Rajat Dandekar (Purdue PhD, IIT Madras department gold medalist)
    🔗 LinkedIn: / rajat-dandekar-901324b1
    🎓 Dr. Sreedath Panat (MIT PhD, IIT Madras department gold medalist)
    🔗 LinkedIn: / sreedath-panat
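
    The lecture's core idea can be sketched without the full Colab notebook: treat the attack as a black-box search over a single pixel's position and colour, and minimise the model's confidence in the true class using differential evolution (the optimiser used in the one pixel attack paper). The tiny linear-softmax "classifier" below is an assumption standing in for the CIFAR-10 ResNet used in the video; `predict`, `perturb` and `attack` are hypothetical helper names, not the notebook's API.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Toy stand-in for the lecture's CIFAR-10 ResNet (assumption: any
    # callable mapping an image to class probabilities works here).
    rng = np.random.default_rng(0)
    W = rng.normal(size=(32 * 32 * 3, 10)) * 0.01

    def predict(img):
        """Return softmax class probabilities for a 32x32x3 image in [0, 255]."""
        logits = (img.ravel() / 255.0) @ W
        e = np.exp(logits - logits.max())
        return e / e.sum()

    def perturb(img, xs):
        """Apply a one-pixel perturbation encoded as (x, y, r, g, b)."""
        out = img.copy()
        x, y, r, g, b = xs
        out[int(x), int(y)] = (r, g, b)
        return out

    def attack(img, true_class):
        # Minimise the true-class confidence: the objective is cheap to
        # compute even though the model's decision is not differentiable
        # in the pixel coordinates, so a gradient-free search is used.
        def objective(xs):
            return predict(perturb(img, xs))[true_class]

        bounds = [(0, 31.99), (0, 31.99), (0, 255), (0, 255), (0, 255)]
        result = differential_evolution(objective, bounds, maxiter=20,
                                        popsize=10, seed=1, polish=False)
        return perturb(img, result.x), result.fun

    img = rng.integers(0, 256, size=(32, 32, 3)).astype(float)
    true_class = int(np.argmax(predict(img)))
    adv, conf = attack(img, true_class)
    print("remaining true-class confidence:", conf)
    ```

    If `conf` drops below the confidence of some other class, the single changed pixel flips the prediction. The real attack works the same way, only with a trained network in place of the toy model.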

Comments • 18

  • @philipoakley5498
    @philipoakley5498 2 months ago

    Excellent presentation, leading one through most of the steps. I needed to pause it a few times to make sure I really grasped what the code did, but it was all there.
    A neat classic example of 'negative thinking' of the Monty Hall type to get the effective "gradient descent of a non-differentiable function"! (Minimise what you don't want, but can compute, rather than maximising what you 'want'.)

  • @ritikatanwar3013
    @ritikatanwar3013 a month ago

    Great explanation and walkthrough...

  • @seachangeau
    @seachangeau a month ago

    Oooh how exciting.
    Also I love your accent. Really. It’s not perfect English syntax and grammar. It’s musical in that way just enough to be beautiful and interesting rather than irritating:) I wonder what the vocal pixel threshold is for this?

  • @ShubhamSrivastava-ln3mt
    @ShubhamSrivastava-ln3mt a month ago

    This line in the notebook is causing an issue:
    resnet = ResNet()
    models = [resnet]
    It's giving the output:
    "Failed to load resnet
    C:\Python312\Lib\site-packages\keras\src\optimizers\base_optimizer.py:33: UserWarning: Argument `decay` is no longer supported and will be ignored.
    warnings.warn("
    Any suggestion on this one?

  • @cedricmanouan2333
    @cedricmanouan2333 a month ago

    Quite interesting!
    Shouldn't cutout (pixel-dropout) image augmentations take care of this? 🤔

  • @toughcoding
    @toughcoding a month ago

    will see

  • @skanderbegvictor6487
    @skanderbegvictor6487 2 months ago +1

    Damn, I wonder if this will affect ViT or other vision transformers. I know that adversarial training can steer a neural network to misclassify certain things; I am not sure how well this will work, though.

  • @aneesh3306
    @aneesh3306 a month ago

    Can you do it with a video?

  • @piorewrzece
    @piorewrzece 2 months ago

    So in theory, is it possible to take a picture of a road, then try to find a pixel which confuses the ML model, and then simply put a sign on the road (a big white square :D) to trick, let's say, a Tesla into... best-case scenario, braking immediately?

    • @jusoleil4914
      @jusoleil4914 a month ago

      Had the same idea 😂

  • @testtest-co9hk
    @testtest-co9hk 2 months ago +2

    This is one of the reasons why AI will never become mainstream.

    • @jobiquirobi123
      @jobiquirobi123 2 months ago +4

      It is already mainstream, what are you talking about?

    • @theknowingeye5998
      @theknowingeye5998 2 months ago +1

      Because of adversarial attacks? By that logic, computers should never have become mainstream due to viruses, trojans, and malware. For every attack there is a more robust model being developed to handle it.

    • @testtest-co9hk
      @testtest-co9hk 2 months ago

      @@theknowingeye5998 Even though there are attacks, the result of every attack is determined and can be fixed. Here the result is not deterministic, it's probabilistic.

    • @philipoakley5498
      @philipoakley5498 2 months ago

      We always need 'criminals' and exploiters of 'the system', so that we can blame others ;-)

  • @tazanteflight8670
    @tazanteflight8670 a month ago

    If you can change 1 pixel and go from cat to dog... your neural network doesn't work in the first place. Change my mind, LOL.

  • @tazanteflight8670
    @tazanteflight8670 a month ago

    Use a little common sense here. If you can change a single pixel and get a different result... the code is flawed. The code is broken. Your implementation is broken. Obviously.
    If a single-pixel difference causes a malfunction, then it is useless.
    Is all AI useless, or is YOUR implementation flawed?

  • @InstagramUser-wh4cb
    @InstagramUser-wh4cb 2 months ago

    Nice