Shapley Values : Data Science Concepts

  • Published on 22 Jun 2024
  • Interpret ANY machine learning model using this awesome method!
    Partial Dependence Plots : • Partial Dependence Plo...
    My Patreon : www.patreon.com/user?u=49277905

Comments • 117

  • @adityanjsg99
    @adityanjsg99 2 years ago +11

    No fancy tools, yet you are so effective!!
    You must know that you provide deeper insights than even the standard books do.

  • @reginaphalange2563
    @reginaphalange2563 2 years ago +2

    Thank you for the drawing and the intuitive explanation, which really helped me understand Shapley values.

  • @whoopeedoopee251
    @whoopeedoopee251 2 years ago +19

    Great explanation!! Love how you managed to explain the concept so simply! ❤️

  • @kokkoplamo
    @kokkoplamo 2 years ago

    Wonderful explanation! You explained a very difficult concept simply and concisely! Thanks

  • @rbpict5282
    @rbpict5282 2 years ago +33

    I prefer the marker pen style. Here, my complete focus is on the paper and not the surrounding region.

    • @ritvikmath
      @ritvikmath  2 years ago +1

      Thanks for the feedback!!

  • @niks4u93
    @niks4u93 2 years ago

    One of the easiest yet most thorough explanations, thank you.

  • @xxshogunflames
    @xxshogunflames 2 years ago

    Awesome video, I don't have a preference between paper and whiteboard, just keep the vids coming! First time I've learned about Shapley values, thank you for that.

  • @lythien390
    @lythien390 2 years ago

    Thank you for a very well-explained video on Shapley values :D. It helped me.

  • @SESHUNITR
    @SESHUNITR 1 year ago

    very crisp explanation. liked it

  • @djonatandranka4690
    @djonatandranka4690 1 year ago

    what a great video! such a simple and effective explanation. Thank you very much for that

  • @ericafontana4020
    @ericafontana4020 1 year ago

    nice explanation! loved it!

  • @yulinliu850
    @yulinliu850 2 years ago +2

    Nicely explained. Thanks!

  • @amrittiwary080689
    @amrittiwary080689 1 year ago

    Hats off to you. Understood most of the explainability techniques.

  • @PabloSanchez-ih2ko
    @PabloSanchez-ih2ko 4 months ago

    Great explanation! Thanks a lot

  • @Mar10001
    @Mar10001 1 year ago

    This explanation was beautiful 🥲

  • @Aditya_Pareek
    @Aditya_Pareek 1 year ago

    Great video, simple and easily comprehensible

  • @shre.yas.n
    @shre.yas.n 1 year ago

    Beautifully Explained!

  • @kanakorn.h
    @kanakorn.h 1 year ago

    Excellent explanation, thanks.

  • @MatiasRojas-xc5ol
    @MatiasRojas-xc5ol 2 years ago +2

    Great video. The whiteboard is better because of all the non-verbal communication: facial expressions, gestures, ...

  • @000000000000479
    @000000000000479 1 year ago

    This format is great

  • @mahesh1234m
    @mahesh1234m 2 years ago +1

    Hi Ritvik, really a nice video. Please cover advanced concepts like the fast gradient sign method. Your way of explaining those concepts would be really helpful for everyone.

  • @nature_through_my_lens
    @nature_through_my_lens 2 years ago +1

    Beautiful Explanation.

  • @cgmiguel
    @cgmiguel 2 years ago

    I enjoy both!

  • @kancherlapruthvi
    @kancherlapruthvi 2 years ago

    amazing video

  • @JorgeGomez-kt3oq
    @JorgeGomez-kt3oq 3 months ago

    Most underrated channel ever

  • @Ali-ts6po
    @Ali-ts6po 1 year ago

    Simply awesome!

  • @niknoor4044
    @niknoor4044 2 years ago

    Definitely the marker pen style!

  • @tamar767
    @tamar767 2 years ago

    Yes, this is the best !

  • @koftu
    @koftu 2 years ago +5

    How well do Shapley values align with the composition of various Principal Components? Is there a mathematical relationship between the two, or is it just wholly dependent on the features of the dataset?

  • @oliverlee2819
    @oliverlee2819 5 months ago

    This is a very clear explanation, better than most of the articles I could find online, thanks! I have one question though: when getting the global Shapley value (the average across all instances), why do we sum up the absolute values of the Shapley values of all the instances? Is that how we preserve the desirable properties of the Shapley value? Is there any meaning in summing up the raw values (where positive and negative contributions would cancel each other out)?
    Another question: when you said the expected value of the difference, is it just an arithmetic average of all the differences from all those permutations? I remember reading that the Shapley value is actually a "weighted" average of the differences, related to the ordering of those features. Does step 1 already take this into consideration, such that we only need the arithmetic average to get the final Shapley value for that instance?
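
On the first question, the usual convention for global importance is the mean of the absolute per-instance values, so that positive and negative local contributions do not cancel out. A minimal sketch, assuming a precomputed shap_values array (the numbers and feature names below are invented for illustration):

    import numpy as np

    # Per-instance, per-feature Shapley values (hypothetical numbers).
    # Rows are instances; columns are features [Temp, Humidity, Day, Foot traffic].
    shap_values = np.array([
        [ 200.0, -50.0,  10.0,  40.0],
        [-180.0,  60.0, -20.0,  30.0],
        [ 150.0, -40.0,  15.0, -25.0],
    ])

    # Global importance: average the *absolute* values per feature.
    global_importance = np.abs(shap_values).mean(axis=0)   # approx. [176.7, 50.0, 15.0, 31.7]

    # Averaging the signed values instead gives the average direction of the
    # effect, which is a different but still interpretable summary.
    mean_signed_effect = shap_values.mean(axis=0)

On the second question: if the orderings in step 1 are sampled uniformly at random, the plain arithmetic mean of the sampled differences is already an unbiased estimate of the weighted sum in the exact Shapley formula, because each coalition then appears with exactly the Shapley weight.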

  • @alphar85
    @alphar85 2 years ago

    Hey Ritvikmath, grateful for your content. Wanted to ask you: how many data science / machine learning methods does someone need to know to start a career in data science? I know the more the better lol

  • @pravirsinha5012
    @pravirsinha5012 2 years ago

    Very interesting video, Ritvik. Also very curious about your tattoo.

  • @daunchoi8679
    @daunchoi8679 2 years ago

    Thank you very much for the intuitive and clear explanation! One question: are Steps 1-5 basically the classic Shapley value, and is Step 6 SHAP (SHapley Additive exPlanations)?

  • @anmolchandrasingh2179
    @anmolchandrasingh2179 2 years ago +2

    Hey Ritvikmath, great video as always. I have a doubt: in step 5, the contributions of the features add up to the difference between the actual and predicted values. Will they always add up perfectly?

    • @Yantrakaar
      @Yantrakaar 2 years ago

      I have the same question!
      I don't think they do. We randomly create the Frankenstein samples and take the difference in their outputs, then do this many, many times and find the average difference. This gives the Shapley value of just one feature for that sample. Because of the random nature of this process, and because it is done for each feature separately from the other features, I don't think the Shapley values for the features necessarily sum to the difference between the expected and the sample output.

    • @juanorozco5139
      @juanorozco5139 2 years ago

      Please note that this method approximates the Shapley values, so I'd not expect the efficiency property to hold exactly. If you were to compute the Shapley values exactly, their sum would certainly amount to the difference between the predicted value and the average response. However, the exact computation involves power sets (whose size grows exponentially with the number of features), so we have to settle for approximations.
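
A minimal sketch of the exact computation referred to above, practical only for a handful of features. The predict function, x, and X_background are placeholders, and the value function shown (filling "absent" features from background rows) is one common choice rather than necessarily the video's:

    import numpy as np
    from itertools import combinations
    from math import factorial

    def exact_shapley(predict, x, X_background):
        """Exact Shapley values for one instance x (cost grows as 2^n in features).

        predict: callable mapping an (m, n) array to (m,) predictions.
        x: 1-D array of length n, the instance to explain.
        X_background: (m, n) array used to fill in the "absent" features.
        """
        n = len(x)

        def value(S):
            # v(S): average prediction when the features in S are fixed to x's
            # values and the remaining features come from the background data.
            X = X_background.copy()
            X[:, list(S)] = x[list(S)]
            return predict(X).mean()

        phi = np.zeros(n)
        for i in range(n):
            others = [j for j in range(n) if j != i]
            for size in range(n):
                for S in combinations(others, size):
                    w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                    phi[i] += w * (value(S + (i,)) - value(S))
        return phi

    # Efficiency property: phi.sum() equals predict(x[None])[0] minus the average
    # prediction over the background data (up to floating-point error).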

  • @preritchaudhary2587
    @preritchaudhary2587 2 years ago

    Could you create a video on gain and lift charts? That would be really helpful.

  • @songjiangliu
    @songjiangliu 7 months ago

    cool man!

  • @geoffreyanderson4719
    @geoffreyanderson4719 2 years ago

    Shapley values were also taught in the AI for Medicine specialization online. There, they were intended for use with individual patients as opposed to groups or aggregates of patients. You would use Shapley values to make individualized prognoses, like what the best course of treatment is for a specific individual patient. Clearly valuable information; however, it was super computationally expensive, requiring a different model to be trained for every permutation. Therefore only the simplest of models was used, particularly linear regression. I have not yet watched Ritvikmath's video, and I'm curious how different his material is from the AI for Medicine courses.

    • @geoffreyanderson4719
      @geoffreyanderson4719 2 years ago

      In this video there was only one model trained. Inference (prediction) was re-run as many times as needed with different inputs to the same trained model. Very interesting. Much more efficient, but I'm wondering about the correctness and whether it's solving a slightly different problem than in the AI for Med course --- not sure.

  • @starkest
    @starkest 2 years ago

    liked and subscribed

  • @DivijPawar
    @DivijPawar 2 years ago +2

    Funny, I was part of a project which dealt with this exact thing!

  • @sawmill035
    @sawmill035 2 years ago

    Excellent explanation! The only question I have is this: sure, in practice you can (and probably should) calculate all of these through random sampling of feature interactions (the random permutations from step 1), because as the number of features increases you get an exponentially increasing number of feature interactions to handle, which makes random sampling the only viable method. But wouldn't you have to iterate through all possible feature interactions, and over all data points for each, in order to calculate exact Shapley values? In other words, is the method you proposed just an approximation of the correct values?

    • @justfacts4523
      @justfacts4523 1 year ago

      I know it's late, but this is my understanding of it in case someone else has the same question.
      Yes, we are getting an approximation of the correct values. But if the sample is large enough, and considering that we are taking an expected value, the law of large numbers makes us pretty confident of getting a good estimate of the quantity.

  • @juanete69
    @juanete69 1 year ago

    Hello.
    In a linear regression model, are SHAP values equivalent to the partial R^2 for a given variable?
    Don't they take into account the variance, as the p-values do?

  • @ghostinshell100
    @ghostinshell100 2 years ago +2

    Can you put out similar content for other interpretability techniques like PDP, ICE, etc.?

    • @ritvikmath
      @ritvikmath  2 years ago +1

      Good suggestion! As a start, you can check out my PDP video linked in the description of this video!

  • @florianhetzel9157
    @florianhetzel9157 7 months ago

    Thank you for the video, really appreciate it!
    I have a question about Step 3:
    Is it necessary to 'undo' the permutation after creating the Frankenstein samples and before feeding them into the model, since the model expects Temp to be in the first position from training?
    Thank you very much for the clarification.

  • @mauriciotorob
    @mauriciotorob 2 years ago

    Hi, great explanation. Can you please explain how Shapley values are calculated for classification problems?

    • @justfacts4523
      @justfacts4523 1 year ago

      Hi, I know it's late for you, but I want to give my understanding in case someone else has the same question.
      Instead of considering the class as the output, we can use the exact same concept by taking the probabilities generated by the last softmax layer (in the case of a neural network or any probabilistic-style model).
      Alternatively, I think we can estimate that probability by checking how many times that class has been output.
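
A minimal sketch of that idea with a scikit-learn-style classifier; the model, data, and class index below are placeholders, not anything from the video:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Placeholder data and model purely for illustration.
    X_train = np.random.rand(500, 4)
    y_train = np.random.randint(0, 3, size=500)
    clf = RandomForestClassifier().fit(X_train, y_train)

    class_idx = 2  # explain the model's score for one particular class

    def model_output(X):
        # For classification, use the predicted probability of the chosen class
        # as the scalar output; the rest of the Shapley procedure is unchanged.
        return clf.predict_proba(X)[:, class_idx]

    # One "Frankenstein"-style difference, computed on the class probability
    # instead of a numeric target.
    x_with = X_train[0].copy()
    x_without = x_with.copy()
    x_without[0] = X_train[123, 0]   # swap feature 0 with a random sample's value
    diff = model_output(x_with[None])[0] - model_output(x_without[None])[0]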

  • @ghostinshell100
    @ghostinshell100 2 years ago +1

    NICE!

  • @junkbingo4482
    @junkbingo4482 2 years ago +1

    I would say that this vid points out the fact that most ML tools are black boxes; but now people want 'black boxes' to be explained! It's a problem you don't have when you use statistics and/or econometrics.
    To me it's rather curious to calculate an average value in models that are supposed to be non-linear; well, in ANNs there is the sensitivity (based on the gradient), which can be a good start of course, but one has to be cautious.

    • @ritvikmath
      @ritvikmath  2 years ago +1

      Thanks for your notes!

  • @johanrodriguez241
    @johanrodriguez241 1 year ago

    Great. How do you think we can apply it for stacking, where we create a stacknet of multiple layers with multiple models, and for big data problems, since this approach is based on Monte Carlo to "approximate" the Shapley values?

  • @beautyisinmind2163
    @beautyisinmind2163 2 years ago

    What is the difference between the work done by Shapley values and feature selection techniques (filter, wrapper, and embedded methods)? Aren't both of them trying to find the best features?

  • @yesitisme3434
    @yesitisme3434 2 years ago

    Great video as always!
    Would prefer more of the pen style.

  • @prateekyadav9811
    @prateekyadav9811 15 days ago

    Bhai, haven't finished this video but I am sure it's gonna be informative like all of your DS videos that I have watched. Just curious, why have you tattooed Mumbai's coordinates on your arm? :D

  • @chakib2378
    @chakib2378 1 year ago

    Thank you for your explanation, but with the SHAP library one only gives the trained model without the training set. How can the sampling from the original dataset be done with only the trained model?

  • @jacobmoore8734
    @jacobmoore8734 1 year ago

    So, if you had x features, say 50 instead of 4, would you randomly subset 15 (half) of them and create x1...x25? And in each of these x1...x25, the difference will be that features 1:i are conditioned on the random vector whereas features [i+n] are not conditioned on the random vector? Trying to visualize what happens when more than 4 features are available.

  • @juanete69
    @juanete69 1 year ago

    I like both the whiteboard and the paper. But I think it's even better to use something like PowerPoint, because it lets you reveal only the important information at that moment, hiding future information that can distract you.

  • @saratbhargavachinni5544
    @saratbhargavachinni5544 1 year ago

    On the Idea 1 slide: aren't we getting a more composite effect instead of an isolated effect? Since the feature is correlated, the second-order interactions with other features are also lost by randomly sampling along this dimension.

  • @JK-co3du
    @JK-co3du 1 year ago

    The SHAP explainer expects a dataset input called "background data". Is this the dataset used to create the "Frankenstein" vectors explained in the video?
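
For context, a minimal sketch of passing background data to the shap library's KernelExplainer; the model and data are placeholders, and argument details vary by explainer type, so check the shap documentation:

    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    # Placeholder data and model for illustration only.
    X = np.random.rand(1000, 4)          # e.g. [Temp, Humidity, Day, Foot traffic]
    y = X @ np.array([50.0, -10.0, 1.0, 5.0]) + np.random.randn(1000)
    model = RandomForestRegressor().fit(X, y)

    # The background data is a (usually small) sample of the training set. It
    # plays the role of the random rows used to build the "Frankenstein"
    # vectors: features treated as "absent" are filled in from these rows.
    background = X[np.random.choice(len(X), 100, replace=False)]

    explainer = shap.KernelExplainer(model.predict, background)
    shap_values = explainer.shap_values(X[:5])   # explain the first 5 instances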

  • @sachinrathi7814
    @sachinrathi7814 5 months ago

    Thank you for the great explanation, but I have one doubt here: how do we get 200 for temperature? You said it is the expected difference, so say we run the sampling 100 times and each time we get some difference; how did that 200 come out of those 100 differences? Did we take the average, or what math was applied there?
    Any response on this would be highly appreciated.
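
If the expected difference is estimated in the usual way, it is indeed the plain average of the stored differences; a tiny sketch with invented numbers:

    import numpy as np

    # Hypothetical per-run differences in predicted cones sold when the
    # original Temp = 80 is swapped back in (one value per random sample).
    differences = np.array([180.0, 230.0, 150.0, 240.0])

    # The "expected difference" is estimated by the arithmetic mean.
    shap_temp = differences.mean()   # 200.0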

  • @apargarg9914
    @apargarg9914 2 years ago

    Hey Ritvik! May I know how to do this process for a multi-class classification problem? You have taken a regression problem as an example.

    • @thomassimancik1559
      @thomassimancik1559 2 years ago

      I would assume that for a classification problem the approach remains the same. The only thing that differs is that you would choose and observe the prediction for a single class value.

  • @nikhilnanda5922
    @nikhilnanda5922 2 years ago

    Can anyone recommend any good books for Data science in general and for such concepts and beyond? Thanks in advance!

  • @aelloro
    @aelloro 1 year ago

    Hello, Ritvik! Thank you for the video! The marker style works great! I'm curious how to deal with the situation where a feature can have great importance but we lack observations. Following the ice-cream example, let's add a feature for the time of day (ToD), and let's assume for some reason that from 03:00 to 04:00 AM there is a line of airport workers and passengers willing to buy. If we operate the shop at that time, we could sell 5000 cones in one hour regardless of the other feature values. But our observations cover only working hours (9AM-5PM), so the importance of this feature comes out quite low.
    It may sound like an imaginary problem, but in the medical field, for rare diseases, that's the case.

    • @justfacts4523
      @justfacts4523 1 year ago +1

      These are my two cents.
      You can't use data that are outside of your training data, mainly because the prediction would not be reliable, and as a consequence your explanation won't be reliable either.
      Let's remember that one of the assumptions of any machine learning model is that the production data must come from the same distribution as the training data. Hence using data for which you have no observations whatsoever would be dangerous.
      Different is the case in which you have very little data but still have something; in that case I think you can still solve the problem.

    • @aelloro
      @aelloro 1 year ago

      @@justfacts4523 Thank you very much! Your content is the best!

  • @mohitdwivedi4588
    @mohitdwivedi4588 2 years ago

    We stored the differences in an array or list after step 3 (there must be many values). How can the SHAP value at T=80 be a single value (200) in your example? Did we take the average of those? Basically, how can this E(diff) be a single value?

  • @geoffreyanderson4719
    @geoffreyanderson4719 2 years ago

    Question: Which of the following two questions is the shown algorithm really answering: "How much does Temp=80 contribute to the prediction FOR THIS PARTICULAR EXAMPLE vs mean prediction?" versus "How much does Temp=80 contribute to the prediction FOR ALL REALISTIC EXAMPLES vs mean prediction?" Is there a link to the source reference used by Ritvikmath here? Thanks!

  • @bal1916
    @bal1916 2 years ago

    Thanks for the informative video.
    I just have one issue: I thought Shapley values measure the impact of a feature's absence. Is this correct? If so, how was this realized here?

    • @justfacts4523
      @justfacts4523 1 year ago +1

      Hi, I know it's late for you, but I want to give my understanding in case someone else has the same question.
      We realize this by taking different samples: the feature of interest gets replaced with random values, so it no longer provides any meaningful information.
      I'm not 100% sure of this, though.

    • @bal1916
      @bal1916 1 year ago

      @@justfacts4523 thanks for your reply

  • @simranshetye4694
    @simranshetye4694 2 years ago

    Hello Ritvik, I love your videos. I was wondering if there is a way to contact you. I had a couple questions about learning data science. Hope to hear from you soon, thank you.

  • @dustuidea
    @dustuidea 2 years ago

    What's the difference between adjusted R^2 and Shapley values?

  • @michellemichelle3557
    @michellemichelle3557 2 years ago

    Hello, I guess it should be combinations instead of permutations, according to the coalitional game theory the SHAP method originates from.

  • @KetchupWithAI
    @KetchupWithAI several months ago

    13:59 I did not fully understand how the values in the chart give you the contribution of each variable to the difference between the given and average prediction. I think what you were doing all along was taking the difference in predictions between two vectors (x1 and x2) generated from an original vector and a randomly chosen vector from the data. How does this give you the difference between the prediction for the original vector and the mean cones sold (which is what you started with)?

  • @juanete69
    @juanete69 1 year ago

    What does it mean in your example that SHAP is a "local" explanation?

  • @juanete69
    @juanete69 1 year ago

    I haven't understood how you decide which variables to keep fixed and which to change.
    Imagine you get the permutation [F,T,D,H] or [F,H,D,T].

  • @aaronzhang932
    @aaronzhang932 2 years ago +1

    8:16 I don't get Step 2. It seems you're lucky to get H = 8. What if the second sample is [200, 5, 70, 7]?

    • @offchan
      @offchan 2 years ago

      Why is H=8 a lucky thing? H can be anything. The original H is 4. The new H is 8. Just the fact that it changes is what's important.

    • @harshavardhanachyuta2055
      @harshavardhanachyuta2055 1 year ago

      @@offchan So the H value used to form the vectors is from the random sample?

    • @offchan
      @offchan 1 year ago +1

      @@harshavardhanachyuta2055 yes

  • @juanete69
    @juanete69 1 year ago

    OK, SHAP is better than PDP but...
    What are the advantages of SHAP vs LIME (Local Interpretable Model-agnostic Explanations) and ALE (Accumulated Local Effects)?

  • @abrahamowos
    @abrahamowos 1 year ago

    I didn't get the part about how he got the 2000, ĉ.

  • @kisholoymukherjee
    @kisholoymukherjee 1 year ago

    Great video but I do prefer the whiteboard style

  • @lilrun7741
    @lilrun7741 2 years ago +2

    I prefer the marker pen style too!

    • @ritvikmath
      @ritvikmath  2 years ago

      Thanks for the feedback! Much appreciated

  • @baqirhusain5652
    @baqirhusain5652 7 months ago

    I still do not understand how this would be applied to text

  • @oliesting4921
    @oliesting4921 2 years ago +2

    Pen and paper is better. It would be awesome if you can share the notes. Thank you.

    • @ritvikmath
      @ritvikmath  2 years ago

      Thanks for the feedback!

  • @hassanshahzad3922
    @hassanshahzad3922 2 years ago

    The whiteboard is the best.

  • @offchan
    @offchan 2 years ago

    Let me try to put it into my own words. To make it easy to understand, I have to simplify it by lying first. So here's the soft-lie version: you have a sample with temperature 80, and you replace that temperature with one from a random sample. So if the random sample has a temperature of 70, you replace 80 with 70. Then you ask the question "If I convert this 70 back to 80, what will be the difference in the prediction?" If the difference is positive, the temperature of 80 is increasing the prediction value; if it's negative, it's decreasing the prediction value. This difference is called the SHAP value. We call a feature with a large absolute SHAP value important.
    Now let's fix the lie a little bit: instead of only replacing the temperature, we also replace a few other features of the original sample with values from the random sample. But we still only convert back the temperature. Then we average the SHAP value over many such random samplings to reduce variance.
    To go even further, calculate the SHAP value for every sample; then you will have a global SHAP value instead of a local SHAP value for a specific sample.
    So this is a pretty intense iterative process.
    And that's it.
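
A minimal sketch of that procedure for one feature of one instance, under the permutation-sampling reading of the steps discussed in these comments; the predict function, X_data, and the feature index are placeholders, not the video's code:

    import numpy as np

    def local_shap(predict, x, X_data, j, n_samples=2000, rng=None):
        """Monte Carlo estimate of the SHAP value of feature j for instance x.

        predict: callable mapping an (m, n) array to (m,) predictions.
        x: 1-D array, the instance being explained.
        X_data: (m, n) array of observed samples to draw replacements from.
        """
        rng = np.random.default_rng(rng)
        n = len(x)
        total = 0.0
        for _ in range(n_samples):
            z = X_data[rng.integers(len(X_data))]   # a random sample from the data
            order = rng.permutation(n)              # a random feature ordering
            pos = int(np.where(order == j)[0][0])
            after = order[pos + 1:]                 # features "after" j in the ordering
            x_with = x.copy()
            x_with[after] = z[after]                # these come from z; j stays at x[j]
            x_without = x_with.copy()
            x_without[j] = z[j]                     # now feature j also comes from z
            total += predict(x_with[None])[0] - predict(x_without[None])[0]
        return total / n_samples                    # the average difference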

  • @tariqkhasawneh4536
    @tariqkhasawneh4536 1 year ago

    Monginis Cake Shop?

  • @taiwoowoseni9364
    @taiwoowoseni9364 2 years ago

    Not Fahrenheit 😁

  • @rahulprasad2318
    @rahulprasad2318 2 years ago +5

    Pen and paper is better.

    • @ritvikmath
      @ritvikmath  2 years ago

      Appreciate the feedback!

  • @sorsdeus
    @sorsdeus 2 years ago +1

    Whiteboard better :)

  • @jawadmehmood6364
    @jawadmehmood6364 2 years ago

    Whiteboard

  • @dof0x88
    @dof0x88 2 years ago

    For noobs like me trying to learn about new things, your handwriting makes me miss lots of things; I'm not getting anything.

  • @vivekcp9582
    @vivekcp9582 2 years ago

    The marker-pen style does help with focus. But the tattoo on your hand doesn't. :P
    I aborted the video midway and went on a Google Maps hunt. :/

  • @a00954926
    @a00954926 2 years ago +1

    You made this so simple to understand that I will get to Python and do this ASAP!! Thank you @ritvikmath