Explainable AI Cheat Sheet - Five Key Categories

  • Published Jan 11, 2025

Comments • 38

  • @prasadjayanti · 3 years ago (+8)

    I am a data scientist and really appreciate your work! Keep the good work going!

  • @mrunalinigarud1162 · 3 years ago

    Best blog and guidance video for AI

  • @kartavyabhatt7818 · 3 years ago (+1)

    Thank you very much for the video.
    9:57 Yeah, a dedicated video for each of the methods would be really great!

  • @metallica42425 · 3 years ago (+3)

    Really appreciate these resources! Thanks for always explaining things so clearly!

  • @kokoko5690 · 1 year ago

    Thank you for your video, it's really well organized and easy to understand.

  • @jeanpauldelamarre6583 · 3 years ago (+3)

    Explainable AI is not only about neural networks. Everyone wants to make neural networks explainable, which they are not by design. You also have to consider other types of models, like rule-based models (expert systems) or even probabilistic models, which are explainable by design (a small illustration follows below).
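
    For illustration, a minimal sketch of an explainable-by-design model, assuming scikit-learn is available (the dataset and tree depth are arbitrary choices for this example):

        # Train a shallow decision tree and print its learned rules directly.
        from sklearn.datasets import load_iris
        from sklearn.tree import DecisionTreeClassifier, export_text

        data = load_iris()
        tree = DecisionTreeClassifier(max_depth=3).fit(data.data, data.target)

        # The rules are the model itself: no post-hoc explanation method is needed.
        print(export_text(tree, feature_names=data.feature_names))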

  • @sheldonsebastian7232 · 3 years ago

    Found this channel via a LinkedIn post. It was a good find!

  • @its_me7363 · 3 years ago (+14)

    Now I think it would be great if Jay could make a video on SHAP explainability and usage... hope you have time to accept this request.

    • @NishantKumar-mp9zg · 3 years ago (+2)

      +1
      I'll also be looking forward to it.

    • @arp_ai · 3 years ago (+2)

      I'd certainly love to learn more about it at some point

    • @its_me7363 · 3 years ago

      @arp_ai Will wait for your video on this topic.

  • @incase3007 · 1 year ago

    Great video, highly informative!

  • @dev0nul162 · 2 years ago

    Thank you for what you have provided here! The links add tremendous value to your videos.

  • @TusharKale9 · 3 years ago

    Very important topic covered in good detail. Thank you

  • @mpalaourg8597 · 2 years ago

    Nice video! But even better are the resources it references! Thank you...

  • @Ninaadiaries · 1 year ago

    Thanks for the information. It was helpful for me :)

  • @palomoshoeshoe8985 · 8 months ago

    Thank you so much for your contribution, I really appreciate it.

  • @camoha8313 · 1 year ago

    Thanks for this video (love the John Coltrane).

  • @chathurijayaweera1590 · 3 years ago

    Very informative and easily understandable. Thank you for making this video

  • @omyeues · 2 years ago

    Very interesting! Thank you for sharing

  • @TheSiddhartha2u · 3 years ago

    Thank you for the nice and easy information. I was looking for such information 👍

  • @ottunrasheed4076 · 3 years ago

    Interesting content. I am looking forward to the paper reading videos

  • @francistembo650 · 3 years ago (+2)

    Thanks man!

  • @muhammadomar9552 · 2 years ago

    Thanks for sharing the knowledge. Where do decision trees lie in the cheat sheet?

  • @juanpablopajaro9229 · 2 years ago

    Jay, I was exploring SHAP for explainable deep learning, and it didn't work. The GitHub repo mentioned an update in TensorFlow that conflicts with SHAP. What do you know about that?

  • @MeriJ-ze5dd · 3 years ago (+1)

    Thanks Jay. Amazing video. I have a question though: why is the pretraining in GPT-3 called unsupervised learning? It works on labelled data, so I think it should be a supervised learning task.

    • @arp_ai · 3 years ago (+2)

      It's better called self-supervised learning nowadays. It's unsupervised in the same way that word2vec is unsupervised -- it is not trained on an explicitly labeled dataset, but rather on examples generated from free text (a tiny sketch of this is below).
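
      For illustration, a tiny sketch of how such training examples come from the text itself, with no human labels (purely illustrative; real language-model pretraining uses subword tokenizers and vastly more data):

          # Each next word becomes the training target for the words before it.
          def next_word_pairs(text, context_size=4):
              tokens = text.split()
              pairs = []
              for i in range(1, len(tokens)):
                  context = tokens[max(0, i - context_size):i]
                  pairs.append((context, tokens[i]))
              return pairs

          # The "labels" (targets) are generated from the raw sentence itself.
          for context, target in next_word_pairs("explainable AI helps us trust model predictions"):
              print(context, "->", target)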

  • @sanjanasuresh5565 · 2 years ago

    Hello! Can you please make a video on the interpretability of unsupervised ML models?

  • @balapranav5364 · 3 years ago (+1)

    Hi sir, does SHAP give you the same kind of info as feature importance results?

    • @arp_ai · 3 years ago (+1)

      SHAP is a method of obtaining feature importance, yes (a minimal usage sketch follows below).
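
      For illustration, a minimal usage sketch with the Python shap package and a scikit-learn model (the dataset and model here are arbitrary choices, and exact calls may vary by shap version):

          import shap
          from sklearn.datasets import load_diabetes
          from sklearn.ensemble import RandomForestRegressor

          # Fit any tree-based model on a tabular dataset.
          X, y = load_diabetes(return_X_y=True, as_frame=True)
          model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

          # Shapley-value attributions: one value per feature per sample.
          explainer = shap.TreeExplainer(model)
          shap_values = explainer.shap_values(X)

          # Global feature importance = mean absolute SHAP value per feature.
          shap.summary_plot(shap_values, X, plot_type="bar")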

  • @deepbayes6808 · 2 years ago

    Why are logistic or linear regression considered interpretable? If you have 1000s of features, how can you interpret a non-sparse weight matrix?

  • @raminbakhtiyari5429 · 3 years ago

    just fascinating

  • @yastradamus · 3 years ago (+1)

    that John Coltrane cover in the back though!

  • @moustafa_shomer · 3 years ago

    The example-based part was kind of shallow; you didn't talk about how they figure out the specific flaws in the model.

    • @arp_ai · 3 years ago

      That tends to be a different problem, which can arise from the model but potentially also from the dataset. The problem becomes more about model debugging. XAI is one debugging tool, but there are many others, especially deep examination of the data.

  • @abhilashsanap1207 · 3 years ago

    Some day you should do a video about the background in your videos. Please.

  • @RahilQodus · 2 years ago

    Thanks a lot. It was a great introduction and really helped me🙏🫀