The Viterbi Algorithm : Natural Language Processing

  • Published Sep 29, 2024
  • How to efficiently perform part of speech tagging!
    Part of Speech Tagging Video : • Part of Speech Tagging...
    Hidden Markov Model Video : • Hidden Markov Model : ...
    My Patreon : www.patreon.co...

Comments • 201

  • @DistortedV12
    @DistortedV12 3 years ago +200

    NLP students in the future are going to love this

    • @utkarshgarg8783
      @utkarshgarg8783 2 years ago +10

      I am an NLP student and I am lovin' this :)

    • @LeonPark
      @LeonPark 2 years ago +4

      lovin it

    • @thepriestofvaranasi
      @thepriestofvaranasi 2 years ago +1

      Lovin it bro, you're a visionary

    • @xzl20212
      @xzl20212 2 years ago +1

      I do

    • @bedoor109
      @bedoor109 1 year ago

      True

  • @NaifAlqahtani
    @NaifAlqahtani 2 years ago

    My guy.. you have a way with words. Ma Sha Allah.
    beautifully explained

  • @90Piero90
    @90Piero90 3 years ago

    good job!

  • @raghavarora3044
    @raghavarora3044 2 years ago +116

    This was honestly one of the few videos where someone has actually explained something so clearly and efficiently! Great job! Keep up the good work!

  • @r2d2b3c4
    @r2d2b3c4 2 years ago +29

    Mate, I just had a lecture on Viterbi in an NLP context at uni and I was nearly having a breakdown due to all the smart formulas our teacher has given us. I couldn't understand it from the lecture at all. But you have shown it and explained it so clearly, I am amazed and shocked at the same time. You are a legend! Please carry on with the videos, you are saving and changing lives with this

  • @muradal-ahmad4048
    @muradal-ahmad4048 3 years ago +29

    Awesome video, very informative!
    The Viterbi explanation starts at 07:28, if you're already familiar with the basic concept of HMMs.

  • @sashankvemulapalli6238
    @sashankvemulapalli6238 2 years ago +9

    This literally has to be the best resource to understand Viterbi algorithm across the whole of the internet. Even the inventor themselves wouldn't have been able to explain it better!!! THANKS A TON

  • @HadiAhmed7546
    @HadiAhmed7546 1 year ago +4

    Hey, thanks for the video - it's super helpful - just one question. When you branched off the start node you only considered DT as the possible state, but isn't there also a 0.2 prob that the first state is an NN?

    • @richardfutrell9668
      @richardfutrell9668 8 months ago

      The transition from start to NN is possible, but the emission probability for "the" given NN is zero so we can skip it.
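
      A minimal sketch of that pruning logic in Python, using made-up toy
      tables in the spirit of the video (the exact numbers are illustrative
      assumptions, not the video's):

          # Hypothetical toy tables; only the structure matters here.
          start_prob = {"DT": 0.8, "NN": 0.2}
          emit_prob = {"DT": {"the": 1.0}, "NN": {"fans": 0.4}}

          word = "the"
          # A first-step branch survives only if both the transition prob
          # (start -> tag) and the emission prob P(word | tag) are nonzero.
          scores = {tag: p * emit_prob[tag].get(word, 0.0)
                    for tag, p in start_prob.items()}
          viable = {tag: s for tag, s in scores.items() if s > 0.0}
          print(viable)  # {'DT': 0.8} -- the NN branch dies: P("the"|NN) = 0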

  • @croncoder862
    @croncoder862 1 year ago +3

    Very well explained, I actually came from the Coursera NLP specialization since I had many doubts over there, but after watching this, everything is clear.

    • @devindoinmonkmode
      @devindoinmonkmode 2 months ago

      Me too, this guy is damn crazy

    • @yurimuniz7
      @yurimuniz7 several months ago

      Same here. My expectations were high with that course given the quality of the deep learning specialization. But I'm kinda disappointed so far. I've been learning much more from materials like this over the internet.

  • @YuryGurevich
    @YuryGurevich 3 years ago +4

    This is the best explanation I have encountered. Thank you very much.

  • @kewtomrao
    @kewtomrao 2 years ago +8

    I had the same doubt you had at around 13:30, but you cleared it up without causing any confusion!!!
    Awesome explanation!!!
    Hopefully your channel becomes more popular!!
    cheers and good day ahead!!

    • @Mew__
      @Mew__ 1 year ago

      Exactly. I couldn't be convinced when I was told Viterbi isn't greedy, but it makes sense now. Essentially, there's a big difference between argmax of the next connection, and argmax of the cumulative previous connections.
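
      To make that difference concrete, here is a small Python sketch with
      made-up numbers where the two decoders disagree: greedy commits to the
      locally best first state, while Viterbi keeps the best cumulative score
      per state and only commits at the end.

          states = ["A", "B"]
          start = {"A": 0.5, "B": 0.5}
          trans = {"A": {"A": 0.5, "B": 0.5}, "B": {"A": 0.05, "B": 0.95}}
          emit = {"A": {"x": 0.6, "y": 0.1}, "B": {"x": 0.5, "y": 0.9}}
          obs = ["x", "y"]

          # Greedy: argmax of the next connection only.
          path = [max(states, key=lambda s: start[s] * emit[s][obs[0]])]
          for o in obs[1:]:
              prev = path[-1]
              path.append(max(states, key=lambda s: trans[prev][s] * emit[s][o]))
          print(path)  # ['A', 'B'] -- committed to A too early

          # Viterbi: argmax of the cumulative path score for each state.
          score = {s: start[s] * emit[s][obs[0]] for s in states}
          back = []
          for o in obs[1:]:
              ptrs = {s: max(states, key=lambda p: score[p] * trans[p][s])
                      for s in states}
              score = {s: score[ptrs[s]] * trans[ptrs[s]][s] * emit[s][o]
                       for s in states}
              back.append(ptrs)
          best = [max(states, key=lambda s: score[s])]
          for ptrs in reversed(back):
              best.append(ptrs[best[-1]])
          print(best[::-1])  # ['B', 'B'] -- the globally best path differs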

    • @unlimitedsky8506
      @unlimitedsky8506 1 year ago

      The proof of the Dijkstra invariants is very similar to how you would prove the corresponding statement for the Viterbi algorithm, in case you're interested in the exact proof!

  • @wenerjy
    @wenerjy 11 months ago +1

    Thank you so much, I had the same questions as you!!

  • @ngocquangle5975
    @ngocquangle5975 7 months ago +1

    thank you so much, it took me so much time to learn this

  • @programmer1111x
    @programmer1111x 4 months ago +1

    Good Explanation. Thanks man.

  • @libertylocsin4666
    @libertylocsin4666 2 years ago +2

    NLP student here. Love this. You're my hero. :D

  • @omarboukherys5216
    @omarboukherys5216 3 years ago +1

    Hey dude!
    Could you provide us a course about Record Linkage (NLP) with the Winkler algorithm? 🇲🇦👍👍👏🔥🔥

  • @sethcoast
    @sethcoast 2 years ago +1

    Amazing explanation thank you!

  • @prepotente_irreale
    @prepotente_irreale 17 days ago

    Thanks for the amazing, structured and pretty damn good video and explanation :)

  • @emid6811
    @emid6811 2 years ago +1

    Thank you so much! I learned a lot. 👏

  • @anirudhgangadhar6158
    @anirudhgangadhar6158 1 year ago +2

    One of the best explanations I have ever come across. I was struggling with POS tagging a bit but now it's crystal clear. Thanks a lot :)

  • @anandarajguntuku1037
    @anandarajguntuku1037 1 day ago

    Thanks a lot...Short and very clear

  • @mohammedafkir8713
    @mohammedafkir8713 6 months ago

    Thank you sir. But given these matrices A and B, the POS would be different. The POS you found could be correct if the probability of "watch" being a VB were higher than the probability of "watch" being an NN; that is what changed the POS.

  • @learn5081
    @learn5081 3 years ago +2

    Thanks so much for this helpful and clear video. Does the algorithm require the predetermined probability matrices of transitions and emissions?

    • @tomisonoakinrimisi7032
      @tomisonoakinrimisi7032 3 years ago +1

      Yes, the matrices are obtained from raw data, but they can be trained further for more accurate prediction... This algorithm is just to speed up the argmax process
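
      A rough illustration of that count-based estimation, using a made-up
      two-sentence tagged corpus (echoing the video's example words; a real
      system would use a large annotated dataset):

          from collections import Counter, defaultdict

          # Made-up tagged corpus, purely for illustration.
          tagged = [[("the", "DT"), ("fans", "NN"), ("watch", "VB")],
                    [("the", "DT"), ("race", "NN")]]

          trans_counts = defaultdict(Counter)  # tag -> next-tag counts (A)
          emit_counts = defaultdict(Counter)   # tag -> word counts (B)
          for sent in tagged:
              prev = "<s>"                     # start-of-sentence pseudo-tag
              for word, tag in sent:
                  trans_counts[prev][tag] += 1
                  emit_counts[tag][word] += 1
                  prev = tag

          def normalize(table):
              out = {}
              for key, ctr in table.items():
                  total = sum(ctr.values())
                  out[key] = {k: v / total for k, v in ctr.items()}
              return out

          A = normalize(trans_counts)  # e.g. A["<s>"]["DT"] == 1.0
          B = normalize(emit_counts)   # e.g. B["NN"]["fans"] == 0.5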

  • @datatalkshassan
    @datatalkshassan 2 years ago

    My question is: here the word "The" has only "DT" as a possible part of speech. What if your sentence had started with, say, "Fans", which has more than one possible part of speech? The Viterbi algorithm will always pick the part of speech with maximum probability (and it will always be the same no matter what the sentence is). Wouldn't that be wrong?

  • @kanikagupta6103
    @kanikagupta6103 2 years ago

    Bhaiya, thank you so much! I had so much difficulty because everyone explained only the easy half of it...
    but you did an amazing job!!!!

  • @xy11021
    @xy11021 8 months ago

    Great video, greetings from Munich, LMU! Thank you

  • @farnaznouraei8351
    @farnaznouraei8351 2 years ago +2

    The best explanation I've seen on this topic. Thank you!

  • @SignalFlux
    @SignalFlux 10 months ago

    Great video, you are a great explainer. One note: are you sure that the reason why Viterbi is fast (O(L*P^2) rather than O(P^L)) is because you can discard paths (13:49)? It seems to me like the generic Viterbi formulation does not discard any paths (judging from the pseudocode on Wikipedia); rather, its efficiency comes from the very nature of a dynamic program, where the algorithm builds on previous work in a smart way (overlapping subproblems etc...). As you yourself say at the very end (19:57), you look at all nodes in the previous layer at each step. At each layer there are P nodes and at each node there are P options, which repeated L times means there are L*P^2 ops to do. So I guess it's not even necessary for Viterbi to prune paths to reach that good of a runtime.
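
    For what it's worth, the L*P^2 count is indeed visible in the recursion
    itself, pruning aside. A minimal sketch of the core update, assuming
    dict-of-dicts probability tables:

        def viterbi_scores(obs, states, start, trans, emit):
            """Best cumulative score ending in each state; O(L * P^2) total."""
            col = {s: start[s] * emit[s][obs[0]] for s in states}  # P work
            for o in obs[1:]:                                # L - 1 steps...
                col = {s: max(col[p] * trans[p][s] for p in states)
                          * emit[s][o]
                       for s in states}                      # ...P x P each
            return col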

  • @yangwang9688
    @yangwang9688 3 years ago +2

    Looking forward to coding viterbi algo from scratch!

  • @ramtinfarajollahi7250
    @ramtinfarajollahi7250 several months ago

    not used to commenting, but thank you, subscribed as well

  • @proteus333
    @proteus333 several months ago

    This video was so good I loved it, thank you so much!

  • @일언-s2p
    @일언-s2p 3 years ago +2

    Hey, thank you so much for sharing all of these helpful videos with us. I really appreciate it! I can see you explained the decoding algorithm with HMMs. Could you also explain the evaluation and learning algorithms?

  • @elaine19931120
    @elaine19931120 5 months ago

    Thank you, it's much clearer than my professor

  • @Abhishek-pb8kt
    @Abhishek-pb8kt 5 months ago

    Wonderful 🎉 very engaging and beautifully explained.

  • @tamoghnamaitra9901
    @tamoghnamaitra9901 2 years ago

    Why did you not calculate the starting probability that the sentence starts with a noun? It has a probability of 0.2

  • @wonbulzhang2240
    @wonbulzhang2240 11 months ago

    He teaches slowly, but it is very clear. Good job!

  • @jamesli322
    @jamesli322 2 years ago

    could you please make a video on Baum-Welch Algorithm as well?

  • @selwia.771
    @selwia.771 1 year ago

    Excellent explanation. Thank you very much

  • @suedas3944
    @suedas3944 11 months ago

    How do we get the probabilities from the data in the emission and transition parts?

  • @giovannirolando5154
    @giovannirolando5154 1 year ago +1

    Bravissimo

  • @rxs5556
    @rxs5556 5 months ago

    This video finally nailed it for me. Thanks!

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago

    Great video, but I didn't quite get the big O for Viterbi. Where do you get the P^2?

  • @Kishan31468
    @Kishan31468 2 years ago

    0 dislikes on the video tells you everything. Damn amazing.

  • @marouanebenbetka
    @marouanebenbetka 5 months ago

    Clear Explanation, Thank you!

  • @secondtonone2284
    @secondtonone2284 2 years ago +1

    Thank you so much for posting this awesome tutoring video. It really helped me understand the algorithm. Can I ask a question? We have two probability matrices in the example. In reality, when we have a sequence dataset, do we use the transition and emission probabilities produced by a model trained with the EM algorithm, or the probabilities we can calculate from the empirical data?

  • @茱莉-x2o
    @茱莉-x2o 1 year ago

    very clearly explained. thank you very much

  • @simiouch5128
    @simiouch5128 2 years ago +1

    Very clear and instructive explanation. You're a great teacher :) Thank you for these videos

  • @naveen_malla
    @naveen_malla 1 year ago +1

    Dude, this is awesome. I came here because I did not understand the explanation in a Coursera course. No offense to them, but you did a great job. Thank you.

    • @ritvikmath
      @ritvikmath  1 year ago +1

      Glad it was helpful!

  • @lechx32
    @lechx32 1 year ago

    This is a very clear explanation, thank you

  • @sudharakafernando4391
    @sudharakafernando4391 2 years ago

    Great explanation...thank you sir !!!

  • @FabianLandwehr
    @FabianLandwehr 1 year ago

    Thanks, very clear explanation!

  • @annatigranyan6417
    @annatigranyan6417 2 years ago

    Excellent explanation. Thank you :)

  • @ArminBishop
    @ArminBishop 2 years ago

    Finally, I understood it. Thanks.

  • @hossainalif3909
    @hossainalif3909 1 year ago +1

    I'm not gonna say I can solve all the problems of the Viterbi algorithm from now on, but I can say I have a clear concept after watching this. Thank you sir....

  • @matthewmoore6116
    @matthewmoore6116 2 years ago

    You my friend, are very good at teaching

  • @monarch0251
    @monarch0251 3 years ago +1

    Such a helpful video! It really helped me a lot.
    I just have one suggestion: instead of the green marker, try some darker marker like brown. Green shines a little too much.

  • @calpal9871
    @calpal9871 2 years ago

    Great video. Thank you!

  • @danianiazi8229
    @danianiazi8229 2 months ago

    How did we choose start -> DT for "the"?

  • @mmmmiru
    @mmmmiru 2 years ago +1

    love your shirt!

  • @josy4767
    @josy4767 1 year ago

    Really great - I was really able to follow along

  • @smangla07
    @smangla07 1 year ago

    Awesome video. Clearly explains a difficult-to-comprehend algorithm

    • @ritvikmath
      @ritvikmath  1 year ago

      Glad it was helpful!

  • @funskill-relaxationsounds7521
    @funskill-relaxationsounds7521 1 year ago

    This helped a lot bro! God bless you

  • @boosterfly
    @boosterfly 2 years ago

    Just subscribed! This is awesome!

  • @parththareja6480
    @parththareja6480 2 years ago

    Can I get a Python program for this?

  • @hhehe24
    @hhehe24 1 year ago

    very clear and direct, thanks

  • @brendanking6326
    @brendanking6326 2 years ago +1

    Super helpful explanation on exactly when you can discontinue the candidate paths. I've seen a few explanations of that point and this one is definitely the clearest

  • @maxkasper7766
    @maxkasper7766 1 year ago

    Great video! I think this actually helped me to better understand a different algorithm called PELT (for changepoint detection).
    Still, I am not 100% sure about PELT, so if you would cover it in a different video I would be very grateful☺❤

  • @LIS75608
    @LIS75608 5 days ago

    The best lecture ever about this concept.

    • @ritvikmath
      @ritvikmath  5 days ago

      Thanks!

  • @mananlalit
    @mananlalit 1 year ago

    Very nice explanation!

  • @Ram-vu4lk
    @Ram-vu4lk 2 years ago

    Calmly explained to make this algorithm understandable in an intuitive way. :)

  • @rafaelsetyan1755
    @rafaelsetyan1755 2 years ago

    I just don't get how the actual algorithm works during training. Doesn't this mean we have to calculate all available probabilities during training, so we get the same brute force?

    • @indraneilpaul1309
      @indraneilpaul1309 2 years ago +1

      Viterbi is only an efficient decoding algorithm, which means it's only for inference. For training HMMs, one uses the Forward-Backward algorithm.
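
      The relationship between the two is tight, for what it's worth: the
      forward pass used in training is the same recursion as Viterbi with the
      max replaced by a sum. A schematic sketch, assuming the same
      dict-of-dicts tables as in the other sketches on this page:

          def forward_step(col, o, states, trans, emit):
              # Sum over predecessors: total probability (training/evaluation).
              return {s: sum(col[p] * trans[p][s] for p in states) * emit[s][o]
                      for s in states}

          def viterbi_step(col, o, states, trans, emit):
              # Max over predecessors: best single path (decoding).
              return {s: max(col[p] * trans[p][s] for p in states) * emit[s][o]
                      for s in states}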

  • @danieldrew2356
    @danieldrew2356 7 months ago

    Thanks, very intuitive and well-planned video - it was really easy to follow!

    • @ritvikmath
      @ritvikmath  7 months ago

      Glad it was helpful!

  • @DemianUsul
    @DemianUsul 1 year ago

    This is incredibly well explained. Thank you and congratulations 🎉

  • @minaksheenarayankar2245
    @minaksheenarayankar2245 2 years ago

    Thank you so much sir !!

  • @Zaznobable
    @Zaznobable 1 year ago

    Very clear. Thanks.

  • @tarunr130
    @tarunr130 9 months ago

    The best way anything academic-related has ever been explained to me on YouTube. Amazing!! Thanks a lot.

  • @francisst-amour646
    @francisst-amour646 2 years ago

    very good presentation

  • @Xnshau
    @Xnshau 2 years ago

    Brilliantly explained 👍

  • @doctorneuron4214
    @doctorneuron4214 2 years ago

    ++ for once again rescuing my score (and my entire sanity from the thousands of papers I have to read) :D

  • @totoro_r1668
    @totoro_r1668 2 years ago

    Really like your video, it's super clear even for me, coming from a linguistics background! Thank you

  • @AnhNguyen-df1nm
    @AnhNguyen-df1nm 11 months ago

    God damn what a thorough explanation. Respect brother

  • @dianagalindo2906
    @dianagalindo2906 2 years ago

    Thank you very much! 💯💯💯

  • @liltarnation2641
    @liltarnation2641 3 months ago

    Absolute legend

  • @ray...
    @ray... 10 months ago

    I love you, man.

  • @trejohnson7677
    @trejohnson7677 2 years ago

    This algorithm appeals to the sensibilities. You can feel the author's nature & propensities.

  • @thealiens386
    @thealiens386 1 year ago

    I have 11 hours until my algorithms exam; this video helped so much, thank you!

  • @flyingsnowball9224
    @flyingsnowball9224 1 year ago

    This is THE BEST lesson on the Viterbi algorithm ever. THANK YOU!

  • @jetlime08
    @jetlime08 2 years ago

    Amazing video! Thanks for the great contribution :)

  • @אליהשלמה-ב8י
    @אליהשלמה-ב8י 8 months ago

    excellent explanation

  • @mycodeKK
    @mycodeKK 2 years ago

    Thanks man, you're a thousand times better than my prof.

  • @raamdemaas
    @raamdemaas 2 years ago

    The explanation is amazing. I couldn't wrap my head around it earlier with the textbook

  • @jannisn.3600
    @jannisn.3600 1 year ago

    Insanely well explained, thank you very much!

  • @ginkei
    @ginkei 1 year ago

    Genuinely the best explanation there is, Enlightenment reached!

  • @Sulfyderz
    @Sulfyderz 2 years ago

    Thank you. It is well explained👏.

  • @Anthestudios
    @Anthestudios 3 years ago

    THANK YOU!!!

  • @LiIRobJr
    @LiIRobJr 1 year ago

    I understood this the first time he explained it. Great video man

  • @ramankumar41
    @ramankumar41 9 months ago

    Great effort !!!

  • @mulyevishvesha1283
    @mulyevishvesha1283 2 years ago

    Nice explanation

  • @leonardliu1995
    @leonardliu1995 2 years ago

    Very good video! Cleared up my doubts about why we can't simply prune away the lower-probability branch! Thanks a lot!