Information Gain in Decision Tree || Lesson 89 || Machine Learning || Learning Monkey ||

  • Published on 10 Jan 2025

Comments • 20

  • @hrishikeshnamboothiri.v.n2195 · 3 years ago

    Sir, clearly explained; I understood the concept. Thank you so much, Sir.

    • @LearningMonkey · 3 years ago · +1

      Thank you, have a great learning.

  • @sachupraveen · 3 years ago · +1

    Very clear explanation!

  • @archnapandey1777 · 11 months ago · +2

    Sir, how can someone explain it in such an easy way 😮

    • @LearningMonkey · 11 months ago · +1

      Have a great learning in CSE.

  • @KiranSharma-ey6xp · 3 years ago

    Nicely explained, better than other tutorials.

  • @shahnaz1981fat · 3 years ago

    In some literature, information gain is given by the formula 1 - weighted entropy, whereas in other literature it is H(Y) - weighted entropy. For finding the best split, both formulas work the same, since it is just a matter of finding the maximum. But is there any justification for the formulas? Please share your knowledge.

    • @LearningMonkey · 3 years ago

      Hi. Sklearn's decision tree classifier class uses the H(Y) - weighted entropy formula. Where did you see the equation 1 - weighted entropy? Share the link.

    • @shahnaz1981fat · 3 years ago

      @@LearningMonkey I saw it on analyticsvidya.com. Maybe it is an approximation of parent entropy - weighted child entropy: since the parent entropy is usually high, they took it as 1 (see the sketch after this thread).

    • @LearningMonkey · 3 years ago

      Yes, maybe. Thanks for sharing your knowledge. Have a great learning.

    • @shahnaz1981fat · 3 years ago

      @@LearningMonkey Just a beginner... raw knowledge.

    • @spicyornot1787 · 1 year ago

      @@shahnaz1981fat Yes, as beginners that is exactly what we needed. Perhaps you should watch videos by NASA people.
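A side note on the formula question in this thread: below is a minimal sketch in Python of the two variants, using a small hypothetical split (the labels, split, and function names are made up for illustration). Because the parent entropy H(Y) does not depend on which column you split on, subtracting the weighted child entropy from H(Y) or from the constant 1 ranks all candidate splits identically; the two gains are numerically equal only when H(Y) = 1, i.e. a perfectly balanced binary parent.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a 1-D label array, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def weighted_entropy(children):
    """Size-weighted average entropy of the child nodes of a split."""
    n = sum(len(c) for c in children)
    return sum(len(c) / n * entropy(c) for c in children)

# Hypothetical parent node and a two-way split on some column
parent = np.array([1, 1, 1, 0, 0, 0, 1, 0])  # balanced, so H(Y) = 1
left   = np.array([1, 1, 1, 0])
right  = np.array([0, 0, 1, 0])

we = weighted_entropy([left, right])
print("IG = H(Y) - WE:", entropy(parent) - we)  # ~0.189
print("IG ~ 1 - WE   :", 1 - we)                # same here, only because H(Y) = 1
```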

  • @jeevithachouta8812 · 3 years ago

    Thank you

  • @surajJoshiFilms · 3 years ago

    Why do we calculate weighted entropy? Can't we simply add the entropies?

    • @LearningMonkey · 3 years ago

      Weighted entropy gives the average. If you simply add, the summation may be greater than 1.

    • @surajJoshiFilms · 3 years ago

      @@LearningMonkey Is a summation greater than 1 not good for the best split?

    • @LearningMonkey · 3 years ago · +1

      No, Suraj. When we find entropy, we get a maximum value of 1 (for two classes). In information gain we take the entropy of the data minus the weighted entropy of a column split. If, instead of the weighted entropy, you took the sum of all the child entropies, the sum could go above 1, which could even make the gain negative. So we take the weighted entropy, which is the average of the entropies (a sketch follows this thread).

    • @surajJoshiFilms · 3 years ago · +1

      @@LearningMonkey OK, I understood. Thank you 😍
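To illustrate the reply above: a minimal sketch, with a hypothetical two-way split, of why the size-weighted average is used instead of a raw sum. For binary labels each child entropy is at most 1, so the weighted average stays in [0, 1] and the resulting gain stays non-negative, while the raw sum of child entropies can exceed the parent entropy and produce a misleading negative "gain".

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a 1-D label array, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# Hypothetical two-way split where both children are impure
parent = np.array([1, 1, 0, 0, 1, 0])  # balanced, H = 1.0
left   = np.array([1, 1, 0])           # H ~ 0.918
right  = np.array([0, 1, 0])           # H ~ 0.918

n = len(parent)
weighted = sum(len(c) / n * entropy(c) for c in (left, right))  # ~0.918
summed   = sum(entropy(c) for c in (left, right))               # ~1.837

print("H(parent) - weighted:", entropy(parent) - weighted)  # small positive gain
print("H(parent) - summed:  ", entropy(parent) - summed)    # negative, misleading
```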