Entropy & Mutual Information in Machine Learning

  • Published on 18 Sep 2024

Comments • 44

  • @FreeMarketSwine
    @FreeMarketSwine 9 months ago +2

    You are one of the best data science/ML teachers that I've found on YouTube, and I really hope you keep making videos.

    • @AlaphBeth
      @AlaphBeth  9 months ago +1

      Thank you so much for your feedback.

  • @nanthawatanancharoenpakorn6649
    @nanthawatanancharoenpakorn6649 1 year ago +3

    I've watched tons of videos about entropy. This is the best!

  • @MissPiggyM976
    @MissPiggyM976 7 days ago +1

    Very clear, many thanks!

  • @SajjadZangiabadi
    @SajjadZangiabadi 1 year ago +1

    This video is absolutely fantastic! The presenter's clear explanations and engaging visuals made it a pleasure to learn from. I'm grateful for the valuable insights and real-world examples provided. Well done!

  • @salonikothari7494
    @salonikothari7494 1 year ago +2

    I have been into this topic for a month... and I'm so, so happy to find your lecture series!!! Thank you so much. So clear and precise... a treasure :)

  • @RajivSambasivan
    @RajivSambasivan 1 year ago +1

    Absolutely fantastic video. This is the best information-theoretic feature selection explanation that I have come across - so accessible, so well explained. Kudos on a fantastic job.

    • @AlaphBeth
      @AlaphBeth  1 year ago

      Thanks for the feedback, much appreciated.

  • @RafaelQuirinoVex
    @RafaelQuirinoVex 1 year ago +2

    What an excellent lecture. It's really hard to find such good ones. Hats off to you!

    • @AlaphBeth
      @AlaphBeth  1 year ago +1

      Thank you so much for your feedback, much appreciated.

  • @thousandTabs
    @thousandTabs 2 years ago +2

    Excellent lecture Dr. Khushaba! I was searching for a visual explanation of these concepts, and now I feel like I actually understand how MI works! Thank you so much, please make more videos like this.

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thank you for the feedback, much appreciated.

  • @achyuthnandikotkur5647
    @achyuthnandikotkur5647 2 years ago +2

    This is very useful to me, as I'm planning to leverage this concept in my master's thesis. Thanks again!

    • @AlaphBeth
      @AlaphBeth  2 years ago +1

      Thanks, appreciate the feedback, and good luck with your master's studies.

  • @someonewhowantedtobeahero3206
    @someonewhowantedtobeahero3206 2 years ago +1

    Thanks a lot. I was really struggling with the relationship between MI and entropy, and also with the notation, until I came across your video.

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thank you, appreciate the feedback

  • @omridrori3286
    @omridrori3286 2 years ago +1

    Amazing, amazing, amazing, wow! Please make more like this on information theory.

    • @AlaphBeth
      @AlaphBeth  2 years ago +1

      Thanks for the feedback

  • @sevdakhlzd2194
    @sevdakhlzd2194 2 years ago +1

    Amazing work, many thanks Dr. Khushaba.

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thanks for the feedback

  • @nadravface
    @nadravface 2 years ago +2

    Thank you!!! This is what I was looking for!!! If you upgrade your microphone, it will be ideal.

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thank you, appreciate the feedback.

  • @djs749
    @djs749 2 years ago +1

    Simply put... excellent. A kind request to add more of this nature. Regards

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thanks a lot for the feedback, much appreciated. I added another video on transfer entropy yesterday; just look for “finding causal relationships” on my channel. It builds further on the concepts presented here to look for causality rather than just mutual information.

    • @djs749
      @djs749 2 years ago +1

      @@AlaphBeth Thank you so much. The videos and explanations are excellent. Excellent resources for learning. Many thanks. Warm Regards

  • @sevdakhlzd9397
    @sevdakhlzd9397 2 years ago +1

    Thanks a lotttt for this video!!!
    I highly appreciate your time and dedication.

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thank you, appreciate your feedback

  • @yuhuawei1763
    @yuhuawei1763 2 years ago +1

    Wonderful talk! Well explained!

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thank you, appreciate the feedback.

  • @johnsondlamini3375
    @johnsondlamini3375 2 years ago +1

    Amazing!

  • @janschmidt1218
    @janschmidt1218 1 year ago +1

    Thank you for this great explanation! It helped me a lot. I have one question: at 31:20 you say what I(X,Y,Z) is in the Venn diagram. From what I understood of the example with just two variables, X and Y, the area for I(X,Y) was the envelope of the Venn diagram (at least it was marked like that). Why is it not the envelope of the three circles here?
    Thank you very much in advance!

    • @AlaphBeth
      @AlaphBeth  1 year ago +1

      Hello, thanks for your feedback. About your question: for the case of two variables, the exterior envelope of the two circles is H(X,Y), the joint entropy of the two variables. The mutual information I(X;Y) is given by the overlapping region between the two circles. So you start from entropies, and the overlapping area is the portion that one variable knows about the other, hence reducing entropy - this is the mutual information.
      The same goes for the case of three variables: I(X;Y;Z) is the part shared by all three circles, while the outer envelope of the three circles is the joint entropy (see the numeric sketch after this thread).

    • @janschmidt1218
      @janschmidt1218 1 year ago +1

      Thank you, ah yes I mixed up the entropy and the mutual information. It’s clear now, thanks!
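
A minimal numeric sketch of the identities discussed in this thread (the joint table below is invented for illustration, not from the video): the envelope of the two circles is the joint entropy H(X,Y), and the overlap is the mutual information I(X;Y) = H(X) + H(Y) - H(X,Y).

```python
import numpy as np

# Invented joint probability table p(x, y) for two binary variables.
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])

def entropy(p):
    """Shannon entropy in bits of a probability array (zero entries skipped)."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

H_xy = entropy(p_xy)              # envelope of the Venn diagram: joint entropy
H_x = entropy(p_xy.sum(axis=1))   # marginal entropy of X
H_y = entropy(p_xy.sum(axis=0))   # marginal entropy of Y
I_xy = H_x + H_y - H_xy           # overlap of the two circles: mutual information

print(f"H(X)={H_x:.3f}  H(Y)={H_y:.3f}  H(X,Y)={H_xy:.3f}  I(X;Y)={I_xy:.3f}")
```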

  • @kayeezhou9427
    @kayeezhou9427 6 months ago

    So ',' and ';' both represent 'and'; however, the ';' separates the two sides of the mutual information while the ',' groups variables together on one side, so ';' acts as the higher-priority separator. They cover the same portion of the Venn diagram if there are only two variables.
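
A hedged sketch of that reading of the notation, again with an invented distribution: the ',' groups X and Y into one joint variable and the ';' separates the two sides of the mutual information, so I(X,Y;Z) can be computed as H(X,Y) + H(Z) - H(X,Y,Z).

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random((2, 2, 2))   # invented joint pmf p(x, y, z) over three binary variables
p /= p.sum()

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

H_xy = entropy(p.sum(axis=2))       # entropy of the joint variable (X, Y)
H_z = entropy(p.sum(axis=(0, 1)))   # entropy of Z
H_xyz = entropy(p)                  # entropy of all three variables together
print("I(X,Y;Z) =", H_xy + H_z - H_xyz)   # comma groups, semicolon separates
```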

  • @alessandrorossi1294
    @alessandrorossi1294 2 years ago +1

    Very nice!

    • @AlaphBeth
      @AlaphBeth  2 years ago

      Thank you, appreciate the feedback

  • @indrakishorebarman5267
    @indrakishorebarman5267 1 year ago

    Are you applying the histogram approach to discrete data or continuous data?

  • @user-cu6qs4np6e
    @user-cu6qs4np6e 1 year ago +1

    Hi, first of all, this is an excellent video! I have a question about page 13: in each square of the 2-dimensional graph, what should the filled-in value be? For the lower-left square, is the value there equal to the sum of the number of data samples from the first bin of the feature on the horizontal axis and the number of data samples from the first bin of the feature on the vertical axis? Thanks for your clarification!

    • @AlaphBeth
      @AlaphBeth  1 year ago

      Thank you for your feedback. The answer is not the sum of the individual values; you just need to think multidimensionally rather than one dimension at a time. The value in that lower-left bin is the count of samples whose feature 1 value falls within the range of the first bin of feature 1 while, at the same time, their feature 2 value falls within the range of the first bin of feature 2. In that example, it is how many samples from feature 1 fall within the range 0.1 to 0.34 while, at the same time, the samples from feature 2 have values within the range 1 to 1.59.
      Think of this as driving on a highway with 4 lanes. How many times did the car in lane 1 drive at a speed of 60 to 70 km/h while, at the same time, the car in lane 2 had a speed between 80 and 90 km/h? When you count individual cars, you just look at the car in lane 1 independently of the other cars and count the number of times it drove at a specific speed, regardless of the other cars in the other lanes.
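
A small sketch of the joint binning described in the reply above, with made-up feature values (the ranges 0.1 to 0.34 and 1 to 1.59 quoted above come from the video's example; the data and the choice of 4 bins here are invented): each cell counts the samples whose feature 1 value lands in that cell's feature 1 bin while, at the same time, their feature 2 value lands in that cell's feature 2 bin.

```python
import numpy as np

rng = np.random.default_rng(1)
feature1 = rng.uniform(0.1, 1.0, size=500)   # hypothetical feature 1 samples
feature2 = rng.uniform(1.0, 3.0, size=500)   # hypothetical feature 2 samples

# counts[i, j] = number of samples whose feature 1 value falls in bin i
# AND whose feature 2 value falls in bin j at the same time
# (not the sum of two one-dimensional counts).
counts, edges1, edges2 = np.histogram2d(feature1, feature2, bins=4)

p_joint = counts / counts.sum()   # joint probability estimate p(feature1, feature2)
print(counts)
print(p_joint)
```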

  • @KhaledMohammed-i9r
    @KhaledMohammed-i9r 1 year ago +1

    Nice explanation. Can you provide us with the slides?

  • @simawpalmer7721
    @simawpalmer7721 2 years ago +1

    Wonderful work, thanks. Can you share the slides for reference?

    • @AlaphBeth
      @AlaphBeth  2 years ago +2

      Thank you, appreciate the feedback. I will post a link to the slides soon.

  • @beelogger5901
    @beelogger5901 1 year ago

    You are missing a formula: H(Y|X) = H(X,Y) - H(X) = Σ_{x,y} p(x,y) log(1/p(x,y)) - Σ_x p(x) log(1/p(x))

    • @AlaphBeth
      @AlaphBeth  1 year ago

      Thanks for your feedback. The conditional entropy is defined in the slides using a different formula than the one you wrote, and that is deliberate: it lets anyone interested read about and discover other ways of writing these formulas, which encourages further learning and development.
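
As a closing note, a small numeric check of the chain-rule identity from the comment above, H(Y|X) = H(X,Y) - H(X), using an invented joint probability table; this is a sketch of one equivalent way of writing the conditional entropy, not the formula used in the slides.

```python
import numpy as np

# Invented joint probability table p(x, y).
p_xy = np.array([[0.30, 0.20],
                 [0.10, 0.40]])

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

p_x = p_xy.sum(axis=1)   # marginal p(x)

# Chain rule: H(Y|X) = H(X,Y) - H(X).
H_y_given_x = entropy(p_xy) - entropy(p_x)

# Direct definition: H(Y|X) = sum_x p(x) * H(Y | X = x).
direct = sum(p_x[i] * entropy(p_xy[i] / p_x[i]) for i in range(len(p_x)))

print(H_y_given_x, direct)   # the two values agree
```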