ShapeShifter: Adversarial Attack on Deep Learning Object Detector (Faster R-CNN)

  • Published Oct 15, 2024
  • ShapeShifter is the first targeted physical adversarial attack on Faster R-CNN object detectors. This video shows a real-world demonstration of a targeted attack against Faster R-CNN, with "person" as the target class, using a high-confidence perturbation. The real stop sign on the left is correctly detected by the object detector in every video frame as the car approaches it. However, the fake stop sign on the right (an adversarial patch generated by ShapeShifter) tricks the object detector into mis-detecting it as a collection of "person" objects.
    ShapeShifter is a research collaboration between Intel and Georgia Tech, by Shang-Tse Chen, Cory Cornelius, Jason Martin, and Polo Chau.
    ShapeShifter’s open-source code is on GitHub: github.com/sha...
    Read the ShapeShifter ECML-PKDD 2018 paper at arxiv.org/abs/...
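    The core idea described above — optimizing a perturbation so a detector assigns a chosen target class with high confidence, while averaging over physical variations — can be sketched in miniature. The code below is a hypothetical illustration only: it uses a toy linear classifier as a stand-in for Faster R-CNN's classification head, and random brightness scaling as a crude stand-in for ShapeShifter's expectation-over-transformation; all names and parameters here are illustrative assumptions, not the paper's actual implementation.

    ```python
    import torch
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Toy stand-in for a detector's classification head (NOT the real
    # Faster R-CNN): 3 classes over a flattened 3x8x8 "sign" image.
    # Class indices: 0 = stop sign, 1 = person, 2 = background.
    W = torch.randn(3, 3 * 8 * 8)

    def class_logits(img):
        # Raw per-class scores for a batch of images.
        return img.flatten(-3) @ W.T

    def targeted_attack(img, target_class, steps=200, lr=0.1, eps=0.5):
        """Optimize a bounded perturbation so the toy classifier outputs
        `target_class`, averaging the loss over random brightness scales
        (a crude stand-in for expectation over transformation)."""
        delta = torch.zeros_like(img, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        target = torch.tensor([target_class])
        for _ in range(steps):
            loss = 0.0
            for _ in range(4):  # sample a few random transformations
                scale = 0.8 + 0.4 * torch.rand(1)  # brightness jitter
                loss = loss + F.cross_entropy(
                    class_logits((img + delta) * scale), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # keep the perturbation bounded
        return (img + delta).detach()

    img = torch.rand(1, 3, 8, 8)       # stand-in for the stop-sign image
    adv = targeted_attack(img, target_class=1)  # push toward "person"
    ```

    The real attack differs in important ways: it optimizes over the full distribution of spatial transformations (rotation, scale, perspective), masks the perturbation to the sign's surface, and backpropagates through all of Faster R-CNN's region proposals rather than a single classification head.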

Comments • 3

  • @santhoshkumarccc
    @santhoshkumarccc 2 months ago

    I want code for this

  • @Sheriden.
    @Sheriden. 4 years ago +1

    I'm so confused

    • @himitsumurasaki1222
      @himitsumurasaki1222 7 months ago +2

      This is showing Faster R-CNN, an algorithm that detects objects. They modified the second stop sign just enough so that the algorithm recognizes it as a person. The percentages you see are the "confidence" the program has that it identified the object correctly.
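      Those confidence percentages typically come from a softmax over the detector's raw per-class scores. A minimal sketch, assuming hypothetical example scores (the class names and numbers here are illustrative, not taken from the video):

      ```python
      import math

      def softmax(logits):
          # Convert raw per-class scores into probabilities that sum to 1;
          # subtracting the max is a standard numerical-stability trick.
          m = max(logits)
          exps = [math.exp(x - m) for x in logits]
          total = sum(exps)
          return [e / total for e in exps]

      # Hypothetical raw scores for [stop sign, person, background]
      confidences = softmax([2.0, 0.5, 0.1])
      # confidences[0] ≈ 0.73, i.e. ~73% confident in "stop sign"
      ```

      The attack works by perturbing the sign so that, after this softmax, the "person" entry becomes the largest.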