6 - 6 - Multinomial Naive Bayes - A Worked Example.mp4

  • Published Jan 7, 2025

Comments • 42

  • @helihobby
    @helihobby 6 years ago +10

    Seriously, this is a good example that is easy to understand.

  • @mhfateen
    @mhfateen 12 years ago +3

    So simple and helpful! It turns out that working through it practically and interactively makes it much more understandable than just writing long equations. Thank you, Sir!!

  • @TeamTRAINIT
    @TeamTRAINIT 10 months ago

    Really simple; finally I understand multinomial naive Bayes.

  • @bhaskargarai8371
    @bhaskargarai8371 2 years ago

    Such an awesome example; really helpful for understanding the concept 👍👍

  • @faisalalaisaee6604
    @faisalalaisaee6604 5 years ago +5

    Could you please explain how you got the vocabulary size |V| of 6?

    • @hhvable
      @hhvable 5 years ago +6

      It's the total number of distinct words occurring in the given documents. Those six are Chinese, Beijing, Shanghai, Macao, Tokyo, and Japan; the rest are repetitions of those words.
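      A minimal Python sketch of that count (assuming the four training documents from the video, which follow the classic Jurafsky/Manning example):

```python
# The four training documents from the worked example
# (three in class c = "China", one in class j = "not China").
docs = [
    "Chinese Beijing Chinese",   # class c
    "Chinese Chinese Shanghai",  # class c
    "Chinese Macao",             # class c
    "Tokyo Japan Chinese",       # class j
]

# |V| counts the *distinct* word types across all training documents.
vocabulary = {word for doc in docs for word in doc.split()}
print(sorted(vocabulary))  # ['Beijing', 'Chinese', 'Japan', 'Macao', 'Shanghai', 'Tokyo']
print(len(vocabulary))     # |V| = 6
```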

  • @LutfarRahmanMilu
    @LutfarRahmanMilu 7 years ago +1

    Thank you. You know how to make things obvious!

    • @featuresky5084
      @featuresky5084 7 years ago +1

      Yes, I agree. I watched this video 2 years ago, and when I needed it today I searched all of YouTube for this specific video. This is such a nice example with a really nice explanation. (Edited: punctuation)

  • @bagasandriann
    @bagasandriann 1 year ago

    What's the difference between multinomial naive Bayes and basic naive Bayes?

  • @championsplace1646
    @championsplace1646 6 years ago +1

    This video really helped me... thanks!!

  • @gepliprl8558
    @gepliprl8558 8 years ago +1

    Dear Rafael Merino García, thank you!!

  • @piotrchodyko6278
    @piotrchodyko6278 6 years ago

    Wow, really good tutorial. Best wishes from Poland

  • @etaifour2
    @etaifour2 7 years ago

    very good explanation, very very good, thank you for posting this

  • @yawenzheng2960
    @yawenzheng2960 4 years ago

    It's a very nice video, thank you! If I may give a bit of advice: imho, if "bag of words" were defined and the "features" of each document were written out explicitly, it might be easier for new learners to follow. Great video though, thanks!
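    For illustration, a minimal sketch of what such explicit bag-of-words "features" could look like (d5 here is assumed to be the test document from the example):

```python
from collections import Counter

# A bag-of-words representation keeps only word counts, ignoring order.
d5 = "Chinese Chinese Chinese Tokyo Japan"
features = Counter(d5.split())
print(features)  # Counter({'Chinese': 3, 'Tokyo': 1, 'Japan': 1})
```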

  • @tuananhtran5071
    @tuananhtran5071 8 months ago

    Why do we have to apply smoothing for "Chinese"? It appears in both classes.

  • @namhoang353
    @namhoang353 10 years ago +4

    Dear Rafael Merino García! Thanks for your presentation. I have a problem with multinomial naive Bayes: I can't fully understand the meaning of one fragment of the formula for the probability of a document under the multinomial naive Bayes model:
    P(d_i | c_j) = P(|d_i|) · |d_i|! · ∏_{t=1..|V|} P(w_t | c_j)^{N_it} / N_it!
    (∏ is the product symbol; the comment box doesn't allow special symbols, so I couldn't type it.)
    My question: P(|d_i|), what does this probability mean? How do you compute it?
    Please explain it to me! Thank you so much.
    Best regards,
    Nam.
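    For anyone parsing that formula: it appears to be the multinomial event-model likelihood from McCallum & Nigam (1998). Typeset cleanly:

```latex
P(d_i \mid c_j) \,=\, P(|d_i|)\; |d_i|! \prod_{t=1}^{|V|} \frac{P(w_t \mid c_j)^{N_{it}}}{N_{it}!}
```

    Here |d_i| is the length of document d_i and N_it is the number of times word w_t occurs in it. P(|d_i|) is the probability of drawing a document of that length; it is normally assumed independent of the class, so it and the factorial terms cancel when comparing classes and are dropped in practice.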

  • @hhvable
    @hhvable 5 years ago

    Perfect explanation

  • @Favwords
    @Favwords 6 years ago

    How do you compute P(d5)?

  • @koushikshomchoudhury9108
    @koushikshomchoudhury9108 6 years ago +1

    Why did you not include the word 'Shanghai'? Or did I miss you intentionally ignoring it, since I watched at 2x speed?

    • @ifargantech
      @ifargantech 3 years ago +1

      Why do you listen at 2x speed? hhhhh

  • @kadhumalii7231
    @kadhumalii7231 2 years ago

    where is the multinomial?

  • @Favwords
    @Favwords 6 years ago

    What if there is more than one feature?

  • @hombreazu1
    @hombreazu1 11 years ago

    Thanks for this. So helpful.

  • @sultanismail4970
    @sultanismail4970 3 years ago

    Thank you man...........

  • @adeeluet
    @adeeluet 11 years ago

    What if there is an unknown word in the testing document?

    • @angelbeltre8022
      @angelbeltre8022 7 years ago +1

      Probability = 0

    • @hhvable
      @hhvable 5 years ago +1

      For future reference: if a word in the text we are trying to classify never occurred in the training data for a class, its conditional probability would be 0, and that would drive the probability of the entire sentence to 0. To avoid this we add 1 to every count (as he does in the video), so every word has some probability of belonging to any category. Adding 1 is Laplace smoothing.
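      A minimal Python sketch of that add-one estimate, using the example's training data:

```python
from collections import Counter

# Per-class word counts from the example's training documents.
class_docs = {
    "c": ["Chinese Beijing Chinese", "Chinese Chinese Shanghai", "Chinese Macao"],
    "j": ["Tokyo Japan Chinese"],
}
counts = {c: Counter(w for d in docs for w in d.split())
          for c, docs in class_docs.items()}
V = len({w for cnt in counts.values() for w in cnt})  # |V| = 6

def p_word_given_class(word, c):
    """Add-one (Laplace) smoothed estimate: (count(w, c) + 1) / (total(c) + |V|)."""
    total = sum(counts[c].values())
    return (counts[c][word] + 1) / (total + V)

# A word never seen in class c still gets a nonzero probability,
# so the product over a sentence's words is never forced to 0:
print(p_word_given_class("Tokyo", "c"))    # (0 + 1) / (8 + 6) = 1/14
print(p_word_given_class("Chinese", "c"))  # (5 + 1) / (8 + 6) = 3/7
```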

  • @Ludibrolo
    @Ludibrolo 11 years ago

    Thank you, this was really helpful!

  • @YusufSaidCANBAZ
    @YusufSaidCANBAZ 7 years ago

    thank you sooo much.

  • @abirkolin4702
    @abirkolin4702 3 months ago

    thanks

  • @ismetozturk947
    @ismetozturk947 5 years ago

    very good

  • @ElizaberthUndEugen
    @ElizaberthUndEugen 5 years ago +1

    I don't see any multinomial here.

  • @randythamrin5976
    @randythamrin5976 4 years ago

    I saw naive but not multinomial

  • @hiteshochani3990
    @hiteshochani3990 7 years ago

    Thanks!

  • @pavithraradhakrishnan8229
    @pavithraradhakrishnan8229 5 years ago

    to the point

  • @mariel871
    @mariel871 2 years ago

    How about giving credit to the author of the example and the slides (Dan Jurafsky)? You are explaining everything as if it were your own work.

  • @adisatriapangestu9815
    @adisatriapangestu9815 6 years ago +1

    How do you do multi-label classification with this classifier?

    • @koushikshomchoudhury9108
      @koushikshomchoudhury9108 6 years ago +1

      I'm not sure, just an idea: calculate the conditional probabilities of the words for the third, fourth, ..., nth class. Then find P(c3|d5), P(c4|d5), ..., P(cn|d5) using the same approach. The P(ci|d5) with the maximum value is the most probable class for the sample d5.
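      That idea is ordinary multi-class classification; a minimal Python sketch follows (the two classes and documents are the video's worked example, but the same code runs unchanged with 3, 4, ..., n classes). Genuine multi-label output, where one document can carry several labels at once, is usually handled differently, e.g. by training one binary naive Bayes per label.

```python
import math
from collections import Counter

def train_mnb(class_docs):
    """Multinomial NB training: class priors and per-class word counts."""
    counts = {c: Counter(w for d in docs for w in d.split())
              for c, docs in class_docs.items()}
    vocab = {w for cnt in counts.values() for w in cnt}
    n_docs = sum(len(docs) for docs in class_docs.values())
    priors = {c: len(docs) / n_docs for c, docs in class_docs.items()}
    return counts, vocab, priors

def classify(text, counts, vocab, priors):
    """Return argmax_c P(c) * prod_w P(w|c), computed in log space
    with add-one smoothing."""
    scores = {}
    for c, cnt in counts.items():
        total = sum(cnt.values())
        score = math.log(priors[c])
        for w in text.split():
            score += math.log((cnt[w] + 1) / (total + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)

# Two classes here; adding a third is just another dictionary entry.
class_docs = {
    "c": ["Chinese Beijing Chinese", "Chinese Chinese Shanghai", "Chinese Macao"],
    "j": ["Tokyo Japan Chinese"],
}
model = train_mnb(class_docs)
print(classify("Chinese Chinese Chinese Tokyo Japan", *model))  # -> 'c'
```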