Naive Bayes: Text Classification Example

  • Published Jan 5, 2025

Comments • 21

  • @aleisley5797
    @aleisley5797 3 months ago +4

    Ayo! This is probably the best vid I found that explains this shit. All other examples I saw didn't take repeating words in a document into consideration. Thanks!

  • @KKKenny-uic
    @KKKenny-uic 3 months ago +1

    I couldn't understand what my teacher said during class, but you saved me!!!
    Thanks!!!! 🥺

  • @aaaminahammm4365
    @aaaminahammm4365 9 months ago +1

    Smooth explanation ❤ thanks

  • @umamaheshpilla4951
    @umamaheshpilla4951 3 years ago +2

    Good explanation, madam. It will be helpful to a number of learners. Thank you so much for making it so understandable.

  • @sarraayougil
    @sarraayougil 1 year ago

    Thank you so much, I needed to understand this exercise.

  • @Meetlimbani27
    @Meetlimbani27 2 months ago

    Best Explanation

  • @ani7453
    @ani7453 1 year ago

    Thanks a ton, understood the entire video.

  • @slowtone6151
    @slowtone6151 2 years ago

    What is the default formula for choosing a class?

  • @EngineeredFemale
    @EngineeredFemale 2 years ago

    Extremely well explained and great presentation. GG. Thank you. Here's a cookie for you. 🍪

  • @nafassaadat8326
    @nafassaadat8326 3 years ago

    Thank you, great job dear

  • @VigneshKumar-jf2gl
    @VigneshKumar-jf2gl 1 year ago

    Hi, I have a doubt. I can see that the training data for class j contains all the words of the test document, so Laplace smoothing needs to be applied only for class c, right, instead of for both?

  • @goyalnaman99
    @goyalnaman99 4 years ago +2

    Please clarify:
    Suppose there is a word in the test document that is not included in any of the training docs. Will we include that word in our count of |V|?

    • @machinelearningmymusic6250
      @machinelearningmymusic6250  4 years ago +3

      Good question. Sometimes test documents do contain words that are not in the training docs. We either smooth the frequencies of those words as well using Laplace smoothing, so they get a small non-zero value, or we leave them out of the calculation so that the total probability does not come out to be zero (see the sketch after this thread).

    • @ahmedifhaam7266
      @ahmedifhaam7266 2 years ago

      @@machinelearningmymusic6250 So the word is still included in the calculation, right, but its count defaults to 0, giving (0 + 1)/(N + |V|)?
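
A minimal sketch of the add-one (Laplace) smoothing discussed in this thread, assuming a multinomial Naive Bayes over per-class word counts; the function name and the numbers are illustrative, not taken from the video. A word unseen in a class's training docs gets the default probability (0 + 1)/(N + |V|):

```python
def smoothed_prob(count_in_class, total_words_in_class, vocab_size, k=1):
    # Add-k (Laplace) smoothed estimate of P(word | class):
    # (count + k) / (total words in class + k * |V|)
    return (count_in_class + k) / (total_words_in_class + k * vocab_size)

# A word that never occurs in the class's training docs (count = 0)
# still gets a small non-zero probability instead of zero:
print(smoothed_prob(0, 8, 6))  # (0 + 1) / (8 + 6) ≈ 0.0714
print(smoothed_prob(5, 8, 6))  # (5 + 1) / (8 + 6) ≈ 0.4286
```

Words missing from the training vocabulary entirely can either be smoothed the same way or dropped from the product; either choice avoids multiplying by zero.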

  • @murarikumar346
    @murarikumar346 1 year ago

    You have clearly explained how text classification works using the BoW representation. Can you please explain how the conditional probabilities of the words/features would be calculated (specifically the numerator, Laplace term, and denominator) for the same example using the tf-idf representation?

  • @yournemesis8232
    @yournemesis8232 3 years ago

    OP SHORT AND CONCISE THANKSSS

  • @misycode
    @misycode 3 years ago +1

    Ma'am, 8+6 and 3+6 are there, and the 6 is common to both. But where does the 6 come from??

    • @machinelearningmymusic6250
      @machinelearningmymusic6250  3 years ago +2

      Each word is a feature in this example. There are 6 unique words, i.e., 6 features. Hence, we add 1 for each feature in the denominator, so the denominator gets a +6 (the sketch after this thread works through the numbers).

    • @misycode
      @misycode 3 years ago +1

      @@machinelearningmymusic6250 Clear, ma'am! Thank you.

    • @ahmedifhaam7266
      @ahmedifhaam7266 2 years ago +3

      8 is the total word count for class C, 3 is the total for class J, and 6 is the number of unique words times the k value, I think. Since k = 1 and there are 6 unique words, it equals 6, I believe.
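
To make the 8 + 6 and 3 + 6 denominators concrete, here is a short sketch assuming training data consistent with the totals quoted in this thread (three class-c docs with 8 words total, one class-j doc with 3 words, and a shared 6-word vocabulary); the document strings themselves are an assumption, not quoted from the video:

```python
from collections import Counter
from math import log

# Assumed training data matching the totals above:
# class c has 8 words across 3 docs, class j has 3 words in 1 doc.
train = {
    "c": "Chinese Beijing Chinese Chinese Chinese Shanghai Chinese Macao".split(),
    "j": "Tokyo Japan Chinese".split(),
}
priors = {"c": 3 / 4, "j": 1 / 4}  # 3 of the 4 training docs are class c

vocab_size = len(set(train["c"]) | set(train["j"]))        # 6 unique words
counts = {c: Counter(words) for c, words in train.items()}
totals = {c: len(words) for c, words in train.items()}     # c: 8, j: 3

def log_score(doc, c):
    # Denominator is totals[c] + vocab_size: 8 + 6 for c, 3 + 6 for j.
    s = log(priors[c])
    for w in doc:
        s += log((counts[c][w] + 1) / (totals[c] + vocab_size))
    return s

test = "Chinese Chinese Chinese Tokyo Japan".split()
print({c: log_score(test, c) for c in train})  # class c scores higher
```

With k = 1 and 6 unique words, every class's denominator grows by exactly 6, which is the "+6 common to both" asked about above.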