Intuitively Understanding the KL Divergence

  • Published on Nov 10, 2024

Comments • 114

  • @unsaturated8482 · 24 days ago · +3

    The most intelligent people are the ones who are able to explain the hardest concepts in the most intuitive way possible. Thanks.

  • @Vroomerify · 2 years ago · +100

    I just want to say: this is, by far, the best explanation of KL divergence I've found on the internet. Thanks so much!

  • @niofer7247 · 2 years ago · +25

    This was actually one of the most helpful videos. Thank you

  • @RedwanKarimSony_napstar_1455 · 1 month ago

    Best explanation of the KL divergence on TH-cam for sure.
    Thanks...

  • @liliz1902 · 1 year ago · +2

    KL divergence confused me for so long, and I understood it just by watching your video once, thank you very much!

  • @jimmygan801 · 2 years ago · +1

    holy smoke, you are legit GOAT. so concise yet clear and intuitive explanation.

  • @wynandwinterbach455 · 10 months ago

    I'm just rewatching this video to freshen up my deep learning fundamentals. Super clear video, thank you so much!

  • @nericarcasci9919 · 2 years ago · +4

    You are unbelievably good at teaching man. You explained it better than they did in my course.

  • @karstenhannes9628 · 1 year ago

    This type of explanation is perfect! First boiling the problem down to the most intuitive understanding, and from there deducing the general formula. Thanks so much!

  • @adityakulkarni5577 · 7 months ago · +1

    Perfectly explained in 5 minutes. Wow.

  • @baskaisimkalmamisti · 3 years ago · +2

    I didn't expect such a good explanation from a randomly suggested YouTube video

  • @haresage6110 · 2 years ago · +14

    Great explanation! One technical remark I have is that (from my understanding) KL divergence is not technically a measure of distance, since it's not symmetric: D_KL(P||Q) != D_KL(Q||P). (A quick numeric check follows this thread.)

    • @charchitsharma8902 · 8 months ago · +3

      Yes, that's why it's called divergence instead of distance.
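
To see the asymmetry concretely, here is a minimal sketch in plain Python (natural log; the two coin distributions and the helper name are made up for illustration):

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) for two discrete distributions given as lists of probabilities."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]  # a fair coin
q = [0.9, 0.1]  # a heavily biased coin

print(kl_divergence(p, q))  # ~0.511
print(kl_divergence(q, p))  # ~0.368, a different value, so D_KL is not symmetric
```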

  • @marcegger7411 · 2 years ago · +5

    Great video! Loved the intuition behind the KL divergence. For anyone thinking about applications: this appears in the loss function of variational autoencoders (VAEs), a class of deep networks used to find low-dimensional features of high-dimensional input data via an encoder (e.g. to deconstruct images into "features").
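
As a concrete illustration of that VAE use, the KL term typically added to the reconstruction loss has a closed form when the encoder outputs a diagonal Gaussian N(mu, diag(exp(logvar))) and the prior is a standard normal. A minimal numpy sketch; the function name and the mu/logvar values are made up for illustration:

```python
import numpy as np

def kl_to_standard_normal(mu, logvar):
    """Closed-form D_KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over latent dimensions."""
    return -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))

# Hypothetical encoder outputs for one example with a 3-dimensional latent space.
mu = np.array([0.2, -0.1, 0.5])
logvar = np.array([-0.3, 0.1, -0.2])

print(kl_to_standard_normal(mu, logvar))  # the KL regularizer added to the reconstruction loss
```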

  • @ian-haggerty · 6 months ago · +1

    Best explanation on the interwebs!

  • @sharingpurpose237 · 1 year ago

    Bro, this intuition was not normal, u r just genius!!

  • @germangarcia5599 · 1 year ago

    One of the most useful explanations ever. Thanks!!

  • @matakos22 · 3 years ago · +15

    Thanks so much for this, needed to understand what KL Divergence is for a paper I'm reading and you just saved me so much time!

  • @alkanair7325 · 1 year ago

    Thank you so much for this content. By far the best explanation of KL divergence I've seen so far.

  • @drondasgupta9378 · 2 years ago

    Thanks for the brilliant, intuitive and crystal-clear explanation!

  • @drdca8263 · 3 years ago · +1

    Thanks, that made the idea make a lot more sense to me. Showing how it arises so nicely from a large sample size made it feel much more natural.

  • @karthikeyans3 · 1 year ago

    Great video. Thanks for sharing. Really intuitive.

  • @moopoo123 · 2 years ago

    Thanks Adian! The connection back to cross entropy loss is cool. Slowly coming together for me.

  • @AashraiRavooru · 1 year ago · +5

    A question here: why will the number of heads and the number of tails be the same for both distributions at 3:04? If the probabilities of the two coins are different, then the number of occurrences of heads and tails can also be different.

    • @rednas195 · 2 months ago

      I think the reason is that we do one experiment, in which we note the number of heads and tails. We assume those observations are generated by the true coin's distribution. We then ask how probable it is that the same observed counts of heads and tails could have come from coin 2. So the numbers of heads and tails are the same because they are literally the same observation; there is only one "measurement" being done.
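
In symbols (using the notation from the video as quoted in other comments: p1, p2 for the true coin, q1, q2 for coin 2, with nH and nT the observed counts), the single observed sequence is scored under both coins:

```latex
P(\text{obs} \mid \text{coin 1}) = p_1^{\,n_H}\, p_2^{\,n_T},
\qquad
P(\text{obs} \mid \text{coin 2}) = q_1^{\,n_H}\, q_2^{\,n_T}
```

The exponents nH and nT come from the one observed sequence, which is why they are the same in both expressions.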

  • @zukofire6424 · 1 year ago

    this was great and super useful in my internship (which really just started), Thanks! :)

  • @reformed8246 · 1 year ago

    Thanks a lot! 5 minutes explaining what I couldn't understand in hours.

  • @alecpanayotov · 2 years ago

    This is awesome, thanks for breaking it down Adian

  • @farshadsaberi2740 · 3 years ago · +1

    Thanks for the simple, yet helpful, explanation!

  • @kukuster · 11 months ago · +1

    Thanks for the explanation!! One thing: the formulas were confusing with how you denoted *q1* & *q2* as the probabilities for coin 2, instead of *p2* & *q2 = 1 - p2*.

  • @balasubramanyamevani7752 · 2 years ago · +5

    @3:26 I don't understand how we are normalizing by raising it to the power of 1/N. Could you please explain that?

    • @Chris-zg1me · 2 years ago · +1

      Same question here. This is a fantastic explanation, but it defeats me when you mention "we normalize by raising to the power of 1/N". Why do we do this? What does that do to, or mean for, the data? Thanks for making this video! Awesome!

    • @vyasraina3930 · 2 years ago · +4

      I think the 1/N gives us the 'average' probability of a single toss. E.g. if we had a fair coin and 3 tosses, the probability of our sequence would be 1/2 * 1/2 * 1/2 = 1/8. If we had ten tosses, the probability of the sequence would be 1/(2^10). These numbers are not comparable as they stand. If we instead look at the probability of the sequence raised to the power of 1/N, where N is the number of tosses, then suddenly they are the same, which is what we would want: it normalizes the sequence probability for its length. (A small numeric sketch follows this thread.)

    • @aniruddhajoshi7496 · 11 months ago

      @@vyasraina3930 thanks for the explanation! In general, why is raising to the power 1/N more appropriate than, say, multiplying by 1/N?

    • @franklyvulgar1 · 6 months ago

      @@vyasraina3930 so basically the 1/N removes the dependence on the number of tosses/sample size; in your fair-coin case it makes the normalized probability 1/2 regardless of N, by cancelling the N in the exponent of your sequence probability.
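
A minimal sketch of that normalization in plain Python (the fair-coin numbers match the example above): raising the sequence probability to the power 1/N gives the geometric mean per-toss probability, which no longer depends on how many tosses were made.

```python
# Probability that a fair coin (p = 0.5) assigns to one particular sequence of N tosses,
# and the same probability normalized by raising it to the power 1/N.
for N in (3, 10, 100):
    seq_prob = 0.5 ** N               # e.g. 1/8 for N = 3, 1/1024 for N = 10
    normalized = seq_prob ** (1 / N)  # geometric mean per toss
    print(N, seq_prob, normalized)    # normalized is 0.5 for every N
```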

  • @brianlee4966 · 8 months ago

    Thank you so much for this video and clear explanation!

  • @SunilKumarSamji · 8 months ago · +1

    Excellent video. Can someone help me understand why it is called a "divergence" in the first place? Also, why are we taking the 1/N power to normalise for the sample size? I did not understand the logic behind this.

  • @cuongnguyenuc1776 · 9 months ago

    Great video! Can you make a video about soft actor critic?

  • @adamtaylor2142 · 1 year ago

    Great content! Thank you.

  • @soroushmehraban · 1 year ago

    Very well-explained. Thank you!

  • @JingyueWu · 1 year ago

    Nice video! Can you say something about alternatives? E.g. why wouldn't mean squared error (of two probability distributions) work as well?

  • @user-sx4wm5ls5q · 2 years ago · +1

    Wow this is an amazing explanation. So is KL divergence equivalent to Bayes factor with equal priors?

  • @Luca-yy4zh · 3 years ago

    Finally a simple explanation

  • @gaoyang6608 · 3 years ago · +1

    Thanks for sharing, very helpful and intuitive.

  • @clairewang8370 · 2 years ago

    This is so intuitive!!!!!!!!!❤

  • @alifarrokh9863 · 2 years ago

    Great explanation!

  • @petercourt · 2 years ago

    Amazing explanation, thanks!

  • @openroomxyz · 1 month ago

    Thanks for creating this video, it's awesome.

  • @kushalneo · 1 month ago

    Great Video

  • @jessechen6541 · 3 years ago · +1

    excellent explanation

  • @Darkev77 · 3 years ago · +4

    Awesome video, but at 3:27, on what basis did we take the log?

    • @adianliusie590 · 3 years ago · +6

      That's a good question which I'm not sure I could answer too well. One could argue that the log function makes numbers more readable: when we deal with very large or very small numbers we often log expressions first, since the log operation is reversible and squeezes the range into a smaller one (e.g. e^10, about 22000, becomes 10), as is done with log probabilities. It could also just be mathematical convenience, since dropping the powers makes the overall expression look much simpler.
      However, I think you'd find a more satisfying answer by looking in the direction of entropy, since entropy is defined via the expected log probabilities of a distribution. Since the KL is tightly interlinked with entropy, something may drop out there which shows that logging the ratio makes the expression more natural and intuitive. I'd have to think about it more, and maybe I'll make a video on entropy in the near future; if I figure anything out I'll get back to you then. (A short derivation sketch follows this thread.)

    • @Darkev77 · 3 years ago

      @@adianliusie590 wow, that's such a great answer. I truly appreciate that! And yeah, what you said makes sense. With regards to entropy you're very right, since entropy is the expected/average information of a distribution of random events, and KL divergence measures the *relative* difference in expected information between two distributions.

    • @skeletonrowdie1768 · 2 years ago · +2

      Hi Darkev and Adian, there is another video on youtube (study squad academy) which explains the KL divergence from the perspective of Jensen's inequality. The main argument for taking the log is that it is a concave function, which somewhat touches on Adian's comment.
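
For reference, here is a sketch of the step the video appears to be making, using the notation from the other comments (p1, p2 for the true coin, q1, q2 for coin 2, and nH/N → p1, nT/N → p2 for large N). Taking the log of the 1/N-normalized likelihood ratio is exactly what turns the product into the familiar weighted sum:

```latex
\left(\frac{P(\text{obs}\mid\text{coin 1})}{P(\text{obs}\mid\text{coin 2})}\right)^{1/N}
= \frac{p_1^{\,n_H/N}\, p_2^{\,n_T/N}}{q_1^{\,n_H/N}\, q_2^{\,n_T/N}}
\;\longrightarrow\;
\frac{p_1^{\,p_1}\, p_2^{\,p_2}}{q_1^{\,p_1}\, q_2^{\,p_2}},
\qquad
\log \frac{p_1^{\,p_1}\, p_2^{\,p_2}}{q_1^{\,p_1}\, q_2^{\,p_2}}
= p_1 \log\frac{p_1}{q_1} + p_2 \log\frac{p_2}{q_2}
= D_{\mathrm{KL}}(P \,\|\, Q)
```

Without the log the quantity is a ratio of products; with it, each outcome contributes an additive term weighted by its probability, which is also what links the expression to entropy.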

  • @ruggertactics · 4 months ago

    Useful video.

  • @filipedstrom4462 · 2 years ago

    Concise and clear, thank you!

  • @annaly2318 · 2 years ago

    Very good video. Thanks so much!

  • @yatinarora9650 · 2 years ago

    thank you so much, very nicely explained

  • @ramendrachaudhary9784 · 2 years ago

    Very well explained! Thank you!

  • @longh · 1 year ago

    super helpful! Thank you

  • @thapargerrard123 · 2 years ago

    Great video. Thanks.

  • @researchmedicine6950 · 3 years ago

    Keep the vids coming, this is so so useful.

  • @blakeedwards3582 · 2 years ago

    This was awesome. Thank you.

  • @yashrathi6862 · 2 years ago · +2

    Hi, I don't get why you assume that the nH and nT for coin 2 would be the same as for coin 1?

    • @Marcus-ok2jy · 2 years ago

      Yeah, I don't get it either. Any explanations, anyone?

    • @Drewbie_T · 2 years ago · +1

      @@Marcus-ok2jy nH and nT are just the number of heads and tails generated in the sequence by the 'true coin', not by coin 2. I.e., if I have a true coin and flip it a few times I may get H,H,T,H (nH=3, nT=1), and you will notice that nH/N = 0.75 and nT/N = 0.25, which is not equal to p1 and p2 respectively. However, if we were to flip the coin many more times, infinitely more times, we would see nH/N and nT/N approach p1 and p2. Thus, he is saying that in the limit of a sufficient number of coin flips, we will see nH/N = 0.5 and nT/N = 0.5 for this fair true coin.

    • @Marcus-ok2jy · 2 years ago

      @@Drewbie_T Hi Andrew, but at 3:21 the formula P(observations|coin 2) looks at the nH and nT of coin 2, does it not? This is so that the KL divergence can take into account the disparity in probability distribution between the 2 coins.

    • @Drewbie_T · 2 years ago · +1

      @@Marcus-ok2jy No, it does not; it is only looking at the nH and nT of the true coin. Coin 2 is not being flipped at all. The only place coin 2 comes in is after flipping the true coin (which has probability p1 of heads and p2 of tails): we obtain some chain of outcomes (e.g. H,H,T,H,T,T), and then we look at the coin 2 probabilities and ask how likely it is that this sequence could have come from coin 2. If coin 2 has a 0.95 probability of landing on heads every time, it is unlikely that it would produce a sequence with an equal number of heads and tails.

    • @adytya · 2 years ago

      It's because we first flip a coin N times and record the number of heads (nH) and the number of tails (nT). The coin used is assumed to be the real coin (which has probability p1 of heads and p2 of tails). We are now interested in how closely coin 2 can mimic the real coin's flips, and since the real coin produced nH heads and nT tails during our experiment, we use the same values.
      Hope this helped. (A worked example follows this thread.)
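
To tie this thread back to the formula, here is a minimal sketch in plain Python (the probabilities are made up for illustration): one observed set of counts from the true coin is scored under both coins, the likelihood ratio is normalized by 1/N, and its log matches the closed-form KL divergence.

```python
import math

p1, p2 = 0.5, 0.5   # true coin
q1, q2 = 0.8, 0.2   # coin 2 (the model we compare against)

N = 1_000_000
nH = int(N * p1)     # for large N the observed counts concentrate around N*p1, N*p2
nT = N - nH

# Same observed counts plugged into both likelihoods (log domain to avoid underflow).
log_like_true  = nH * math.log(p1) + nT * math.log(p2)
log_like_coin2 = nH * math.log(q1) + nT * math.log(q2)

# Log of the 1/N-normalized likelihood ratio ...
normalized_log_ratio = (log_like_true - log_like_coin2) / N

# ... matches the closed-form KL divergence between the two coin distributions.
kl = p1 * math.log(p1 / q1) + p2 * math.log(p2 / q2)
print(normalized_log_ratio, kl)  # both ~0.2231
```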

  • @DC-gq6ww · 2 years ago · +2

    Thank you!
    May I ask how you made the video?
    I want the numbers to move like they do in your show.
    It looks great and maintains comprehensibility by bringing it to life!
    We have to make a video about AIC for our neuroinformatics class, so your video would be a nice introduction to the topic anyway...
    You do it a little better than our prof^^

    • @adianliusie590 · 2 years ago · +8

      This might break the magic a bit but I just use plain old fashioned Microsoft power point! To move the equations I use the inbuilt animations functionality, though it can get a bit tedious to make everything move exactly how you’d like to. But best of luck on making your video.

    • @DC-gq6ww · 2 years ago · +1

      @@adianliusie590 Thanks for your answer! Good to know. It doesn't break the magic; I just use another program and I'm a noob at some points.

  • @hackercop · 2 years ago

    This was very good, have liked and subscribed.

  • @xxluapxx · 1 year ago

    Thanks for the explanation. With the RLHF stuff happening in ChatGPT, does anyone know why they chose to use KL divergence instead of a cross-entropy loss when computing the RL policy penalty?

  • @ananya_sutradhar · 1 year ago

    Just perfect!

  • @ferkstkojtt · 2 years ago

    Dude just plops in some God-tier eye openers in the credits and leaves. Never realized this relationship between KL and cross-entropy loss.
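
For anyone who missed that relationship: for discrete distributions the cross-entropy decomposes as H(P, Q) = H(P) + D_KL(P || Q), so minimizing cross-entropy with respect to the model Q is equivalent to minimizing the KL divergence, since H(P) does not depend on Q. A minimal sketch in plain Python; the distributions and helper names are made up for illustration:

```python
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.2, 0.1]   # "true" distribution (a one-hot label would be [1, 0, 0], where H(P) = 0)
q = [0.5, 0.3, 0.2]   # model distribution

print(cross_entropy(p, q))       # H(P, Q)
print(entropy(p) + kl(p, q))     # H(P) + D_KL(P || Q), the same number
```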

  • @akidnag · 1 year ago

    Only that it is not a distance (because it is not symmetric), but a pseudo-distance. Great video!

  • @unbridled_exciton · 2 years ago

    This is gold!

  • @juliocardenas4485 · 2 years ago

    Excellent!!!

  • @Gathanokos · 3 years ago

    This video is amazing

  • @akshaydongare2136 · 1 year ago

    Thank you!

  • @ian-haggerty · 7 months ago

    So a KL divergence of zero means identical distributions? What do the || lines mean?

  • @yihongli350 · 1 year ago

    beautiful!

  • @yingliu350 · 2 years ago

    The video is good, but what confuses me is the correctness of the ratio. Sometimes we have different probabilities (like nH = nT = 1, and p1 = q2, p2 = q1), but the ratio comes out as 1, which would mean the distributions are similar or the same. That is actually wrong. So maybe this explanation only works by coincidence, or I have made some mistake. Hopefully you can help me. (If my poor English makes it confusing, I am sorry for that.)

  • @amrahmed2009 · 3 years ago

    Thanks very much.

  • @BillHaug · 1 year ago

    ...tremendous!

  • @hamzeasadi671 · 2 years ago

    Greaaaat job

  • @vi5hnupradeep · 3 years ago

    Thank you so much

  • @zyzhang1130 · 1 year ago

    KL loss is not exactly equivalent to cross-entropy loss, right?

  • @onamixt · 1 year ago

    Why raise to the 1/N power, and why use the log? Why don't we just use sum(P/Q)?

  • @treksis · 2 years ago

    😁😁😁gotcha. super ez explanation

  • @salehmontazeran1130 · 1 year ago

    Awesome

  • @joshholder359 · 1 year ago

    So fire

  • @dogukan463 · 3 years ago

    Nice video :)

  • @gottlobfreige1075 · 2 years ago

    So, why is KL divergence not symmetric?

  • @_jiwi2674 · 3 years ago · +3

    Great explanation, would be perfect if you spoke slower.

  • @ViralPanchal97 · 1 year ago

    I love you Biradr

  • @Justin-zw1hx · 1 year ago

    When you say "likelihood of the observation of each coin", you really mean "probability" instead of "likelihood", right?

  • @gzitterspiller · 1 year ago · +1

    I still don't know why the log appears there.

    • @Messiah-000 · 4 months ago

      It lets you factorize by addition rather than multiplication, and since the log is a strictly monotonically increasing function the comparison is preserved. Practically speaking, this is also more efficient to compute than a product of many terms.

  • @1.4142 · 2 years ago · +1

    It has my initials

  • @cliveemary4806 · 1 year ago

    nice

  • @ian-haggerty · 7 months ago

  • @yegounkim1840 · 1 year ago · +1

    It is not a measure of distance between distributions!

  • @KemalCetinkaya-i3q · 7 months ago

    wowowowo

  • @zjy2936 · 2 years ago · +1

    It’s technically not “distance”

  • @semionababo · 18 days ago

    IMO this video gives a pretty bad explanation of the essence, hiding it behind algebraic transformations. Found a much better video that actually explains why we use log() without "believe me or not...": th-cam.com/video/q0AkK8aYbLY/w-d-xo.html

  • @shahriarrahman8425 · 6 months ago

    Great explanation. Thank you so much!

  • @tudor6210 · 1 year ago

    Beautiful explanation!

  • @Yassinius · 1 year ago

    Thanks so much

  • @mormonteg4073 · 1 year ago

    Thank you a lot