A friendly introduction to Bayes Theorem and Hidden Markov Models

  • Published 26 Sep 2024
  • Announcement: New Book by Luis Serrano! Grokking Machine Learning. bit.ly/grokkingML
    40% discount code: serranoyt
    A friendly introduction to Bayes Theorem and Hidden Markov Models, with simple examples. No background knowledge needed, except basic probability.
    Accompanying notebook:
    github.com/lui...

Comments • 696

  • @pauldacus4590
    @pauldacus4590 5 years ago +225

    Happy I found this video.. even though it was rainy outside

    • @kebman
      @kebman 4 years ago +5

      Happy I found this video.. even though there's a Corona lockdown :D

    • @pqppd8491
      @pqppd8491 4 years ago

      It's coincidentally rainy outside 😂

    • @a7md944
      @a7md944 3 years ago

      Based on previous experience, because it is rainy on your side, I predict that you were probably not happy 😔

    • @TymexComputing
      @TymexComputing 3 months ago

      @@a7md944 Bob was more likely not happy, we are the hidden state - what's the probability that the lockdown was not justified and that people were dying because of lack of medical help instead of the illness.

  • @csejpnce2585
    @csejpnce2585 6 years ago +20

    Usually Bayes Theorem and HMMs are a nightmare even to researchers. In this video these nightmares are made into child's play. I'm highly thankful for this service you are providing to the academic community - teachers, researchers and students. Keep it up Luis Serrano, and I hope to see many more plays in the future!!!

  • @somdubey5436
    @somdubey5436 3 years ago +12

    You are one of the rarest breed of gifted teachers

  • @simpleprogramming4671
    @simpleprogramming4671 6 years ago +84

    Wow, perfect explanation. Even a kid can learn HMMs by watching this video

  • @codebinding71
    @codebinding71 6 years ago +207

    Your video tutorials are a great breakdown of very complex information into very understandable material. Thank you. It would be great if you could make a detailed video on PCA, SVD, Eigenvectors, Random Forest, CV.

    • @jacobmoore8734
      @jacobmoore8734 5 years ago +7

      Eigenvectors and SVD for sure.

    • @ericbauer6595
      @ericbauer6595 4 years ago +11

      @@jacobmoore8734 check out 3blue1brown's channel for the Essence of Linear Algebra. He explains that matrices are linear functions like y=f(x) or like a line 'y=mx', with y-intercept b=0. Eigenvectors are special inputs 'x' such that f(x) = kx, where k is some scalar coefficient (k is the eigenvalue associated with the special input x).
      Certain types of NxN matrices (the covariance matrix used in PCA, for example) are super interesting because any point in N-dimensional coordinates can be represented as a linear combination (ax1 + bx2 + ...) of the eigenvectors. The eigenvectors form a 'basis' for that space. This is where SVD (singular value decomposition) comes in. SVD essentially asks "instead of just multiplying x by your matrix, why don't you decompose this task into 3 easier tasks?" Let's say your matrix is C for covariance. Then SVD says that C = ULU', where U has the eigenvectors as columns, U' is the transpose of U, and L is a diagonal matrix of the eigenvalues.
      Pretend we're doing y = C*x. First we do w = U'*x. This essentially represents x as a linear combination of eigenvectors. Said another way, you've changed the representation of point x from the original coordinate system to the eigenvector coordinate system. Next we do z = L*w, which scales every value of vector w by an eigenvalue. Some of these eigenvalues are very small and push the result in z closer to 0; some are relatively large and upscale the result in z. Finally, when you do y = U*z, all you're doing is translating your scaled z vector back into the original coordinate system.
      So SVD basically splits a matrix into 3 different operations:
      1. represents an input vector in terms of eigenvector coordinates
      2. scales each coordinate by an eigenvalue
      3. represents the scaled result back in terms of the original coordinates
      When you look at PCA (principal components analysis), you take your covariance matrix and decompose it to look at how much your eigenvalues scale the eigenvector coordinates. The largest eigenvalues correspond to the directions (eigenvectors) of largest variation.
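
      A minimal numpy sketch of the three-step decomposition described above (the random data and variable names are illustrative assumptions, not from the video or the comment):

          import numpy as np

          rng = np.random.default_rng(0)
          X = rng.normal(size=(200, 3))      # 200 points in 3 dimensions
          X[:, 2] = X[:, 0] + 0.1 * X[:, 2]  # make one direction dominate

          cov = np.cov(X, rowvar=False)      # symmetric covariance matrix C
          eigvals, U = np.linalg.eigh(cov)   # C = U L U', with L = diag(eigvals)

          x = X[0]
          w = U.T @ x      # 1. express x in eigenvector coordinates
          z = eigvals * w  # 2. scale each coordinate by its eigenvalue
          y = U @ z        # 3. translate back to the original coordinates

          print(np.allclose(y, cov @ x))     # True: the three steps equal C @ x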

    • @noduslabs
      @noduslabs 4 years ago

      Definitely eigenvectors! Please!

    • @kapil_vishwakarma
      @kapil_vishwakarma 4 years ago

      Yes, please, do that.

    • @SaptarsiGoswami
      @SaptarsiGoswami 4 years ago

      You may have already found some; this is an attempt by the University of Calcutta, not so coolly done, but please see if it makes sense: th-cam.com/video/C6fH5Nfoj40/w-d-xo.html

  • @Slush_
    @Slush_ 4 years ago +14

    You have just saved me, this was such a clear breakdown of Bayes Theorem and HMMs, and exactly what I needed at the 11th hour of a project I'm working on!

  • @shuchitasachdev9310
    @shuchitasachdev9310 5 years ago +4

    This is the best description of this topic I have ever seen. Crystal clear! True knowledge is when you can explain a complex topic as simply as this!

  • @chenqu773
    @chenqu773 3 years ago

    The most exciting thing I found in your videos is that most of them are a one-stop solution for dummies like me, without the need to go to 100 other places to find 50 missing pieces of info. Many thanks!

  • @BabakKeyvani0
    @BabakKeyvani0 5 years ago +7

    Thank you so much for this great video Luis. I am a Udacity alumnus myself. I have watched & read many videos and articles on Bayes & HMMs, but your video is by far the best. It explains all the steps in the right amount of detail & does not skip any steps or switch examples. The video really helped solidify the concept, and giving the applications of these methods at the end really helps put them in context. Thank you again very much for your informative & helpful video.

  • @changyulin47
    @changyulin47 6 years ago +14

    OMG! You are amazing! I consider myself an information theory guy and should know this pretty well. But I could never present this idea as simply and understandably as you did! Great great job! I will for sure check out your other videos! Thank you!

  • @noduslabs
    @noduslabs 4 years ago +9

    Beautiful work! It’s the most accessible introduction to Bayes inference I’ve seen. Great job! Please, keep them coming!

  • @viro-jx2ft
    @viro-jx2ft 4 months ago

    This is the best video you will ever find on HMMs. Complicated concepts handled soooo wellll🥰

  • @kassymakhmetbek5848
    @kassymakhmetbek5848 4 years ago

    I wish professors would just show this video in lectures... You are great at making these animations and your speech is perfect. Thank you!

  • @danielking7988
    @danielking7988 6 years ago +4

    Your videos are amazing! As someone who hasn't looked at calculus in 20 years, I find these "friendly introduction" videos extremely helpful in understanding high-level machine learning concepts, thank you! These videos really make me feel like this is something I can learn.

    • @TheGenerationGapPodcast
      @TheGenerationGapPodcast 1 year ago

      Isn't this the opposite of calculus? Discrete vs. continuous functions.

  • @pratiksharma1655
    @pratiksharma1655 5 years ago +2

    I wasted the whole day trying to understand HMMs by watching useless YouTube videos, until I saw this. Thank you so much for this video. It is so simple and so intuitive. So very thankful to you :)

  • @theavo
    @theavo 4 years ago +2

    I'm on a streak of watching your third video in a row and instantly liking it for its outstandingly easy-to-understand breakdown of quite a complex topic. Well done, sir, I'll visit your channel in the future for sure! ✨

  • @mrinmoykshattry527
    @mrinmoykshattry527 3 years ago

    This is the best video that explains HMM so simply to someone who doesn't have a computer science background. Godspeed to you

  • @AbeikuGh
    @AbeikuGh 3 years ago +2

    I was quite tense when my supervisor pointed out to me that my master's thesis should incorporate HMMs. This video is my first introduction to HMMs. You chased my fears away with your simple explanation and tone. Forever grateful

    • @carlosmspk
      @carlosmspk 3 years ago +1

      Similar situation here: I have a master's thesis in anomaly detection, and using HMMs is a candidate. I'm afraid it's much more complicated than this, but it sure made it look less scary

  • @johnpetermwangimukuha
    @johnpetermwangimukuha 1 year ago +1

    Man, Bayesian theory has been having me for breakfast! Thank you for this tutorial!

  • @aatmjeetsingh7555
    @aatmjeetsingh7555 4 years ago

    This example made everything crystal clear. I have an exam tomorrow on HMMs; initially I was anxious, but after this video I'm sure I can solve any problem.
    Thank you very much, sir.

  • @PinaTravels
    @PinaTravels 3 years ago +4

    This has taken me from 0 to 80% on HMM. Thanks for sharing

  • @at4652
    @at4652 6 years ago +1

    Top notch and the best explanations. You are taking complex subjects and making them intuitive - not an easy thing to do!

  • @sametcetin9235
    @sametcetin9235 5 years ago +32

    Hi Luis, thank you for your friendly introduction. When I was working on an assignment and trying to implement the Viterbi method following your explanation, I noticed that there may be some mistakes in your calculations. You calculated the best path starting from the beginning (from the leftmost side) and selected the weather condition (sunny or rainy) with the max value. However, I am not sure that this is the correct way to apply Viterbi. You don't mention anything about backpointers.
    I reviewed the HMM chapter of Speech and Language Processing by Dan Jurafsky. There, it is stated that to find the best path we should start from the end (from the rightmost side). First we select the weather condition with the max probability (that is actually the last node of our visiting path; we find the full path in reverse order). Then we do a backward pass and select the weather condition which maximizes the probability of the condition we have just selected, instead of just looking for the max probability among all conditions at that observation time. We continue this process until we reach the beginning.
    Two things to emphasize:
    1- We go backward (from end to start).
    2- We don't just select the weather conditions with maximum probabilities at specific observation times; instead we select the max one only once, on the last day, and then select the conditions that maximize the one that comes after, like a chain connection.
    If I am wrong, please enlighten me.
    Best.

    • @priyankad7674
      @priyankad7674 4 years ago +3

      You are right

    • @arikfriedman4442
      @arikfriedman4442 3 years ago

      I agree. It seems there is a minor mistake there. Choosing the "best" probability on each day doesn't ensure the optimal path we are looking for. If I understand correctly, you should start from the end, looking for the best final probability, then go "backwards", looking for the specific path which led to this final probability.
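
    In the spirit of this thread, here is a minimal Viterbi sketch with backpointers and a backward pass. The start/transition/emission tables are illustrative assumptions, not necessarily the exact numbers used in the video.

        states = ("sunny", "rainy")
        start = {"sunny": 2/3, "rainy": 1/3}
        trans = {"sunny": {"sunny": 0.8, "rainy": 0.2},
                 "rainy": {"sunny": 0.4, "rainy": 0.6}}
        emit = {"sunny": {"happy": 0.8, "grumpy": 0.2},
                "rainy": {"happy": 0.4, "grumpy": 0.6}}

        def viterbi(obs):
            # prob[t][s]: probability of the best path ending in state s at time t
            prob = [{s: start[s] * emit[s][obs[0]] for s in states}]
            back = [{}]  # back[t][s]: predecessor of s on that best path
            for t in range(1, len(obs)):
                prob.append({})
                back.append({})
                for s in states:
                    p = max(states, key=lambda q: prob[t - 1][q] * trans[q][s])
                    prob[t][s] = prob[t - 1][p] * trans[p][s] * emit[s][obs[t]]
                    back[t][s] = p
            # backward pass: pick the best final state, then follow backpointers
            last = max(states, key=lambda s: prob[-1][s])
            path = [last]
            for t in range(len(obs) - 1, 0, -1):
                path.append(back[t][path[-1]])
            return path[::-1]

        print(viterbi(["happy", "grumpy", "happy", "grumpy", "grumpy", "happy"]))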

  • @stephenhobbs948
    @stephenhobbs948 4 years ago

    Very easy to understand using Bob and Alice and the weather. Thanks.

  • @LizaBrings
    @LizaBrings 5 years ago +7

    Omg. You just replaced an entire dry, incomprehensible bioinformatics book! I can't thank you enough! It's so easy!

  • @muhammadyousuf2828
    @muhammadyousuf2828 4 years ago

    I am a bio-organic chemist and we have a bioinformatics course which included Hidden Markov Models, and your video helped me learn the idea without immersing myself deep in the mathematics. Thanks ...

  • @i.d432
    @i.d432 4 years ago +1

    What a clear way of teaching. You're a total rockstar of teaching stats. Ok, let's do the Baum-Welch algo

  • @mahdiebrahimi1241
    @mahdiebrahimi1241 4 years ago

    Best description of HMMs. I had a hard time understanding this topic, but your teaching keeps me motivated for further learning.

  • @urcisproduction797
    @urcisproduction797 6 years ago +1

    You are the best explainer I have found on YouTube so far! Great work!

  • @freeustand
    @freeustand 5 years ago +6

    At around 29:00, you say that "all we have to do is to pick the largest one each day and walk along that path," picking the "S-S-S-R-R-S" sequence as the best one. But my calculation shows that the "S-S-R-R-R-S" sequence actually gives a better likelihood for the observed sequence of Bob's moods. I think what we have to do is not "just to pick the largest one each day and walk along that path," but "to pick the sequence of weathers that eventually led to the largest one on the last day." Please correct me if I'm wrong. Anyway, this video is super helpful! Thanks a lot!

    • @mehransh7753
      @mehransh7753 5 years ago +1

      I agree with you, Junghoon; I reached the same conclusion. I think the best way is to actually record the path we took when calculating each maximum value; at the end, we can start with the maximum and print the recorded road to get the result. Or, instead of using memory, as you said, we can check which path matches by walking back from the maximum value on the last day to the start day.
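
    A quick numerical way to settle this kind of question is to brute-force all 2^n weather sequences and compare with the backtracked answer. This sketch reuses the illustrative states/start/trans/emit tables from the Viterbi sketch above.

        from itertools import product

        def joint(seq, obs):
            # joint probability of a weather sequence and the observed moods
            p = start[seq[0]] * emit[seq[0]][obs[0]]
            for prev, cur, o in zip(seq, seq[1:], obs[1:]):
                p *= trans[prev][cur] * emit[cur][o]
            return p

        obs = ["happy", "grumpy", "happy", "grumpy", "grumpy", "happy"]
        best = max(product(states, repeat=len(obs)), key=lambda seq: joint(seq, obs))
        print(best, joint(best, obs))  # compare against viterbi(obs) above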

  • @deveshmore3106
    @deveshmore3106 3 years ago

    As feedback I would say your explanation is spot on .... A person with basic statistical knowledge can understand HMMs with your explanation.

  • @cybren2
    @cybren2 4 years ago

    Thank you so much for this video! I searched for hours, watched many videos, read many websites/papers etc., but I never really understood what an HMM and its algorithms are and how they work. You explained everything, from how it works to how to implement it, so well that I got in 30 minutes what I didn't get in hours before. Thank you so much!!

  • @Fdan36
    @Fdan36 3 years ago

    Really liked the video. Was looking to understand HMMs for neuron spiking and things are much clearer now.

  • @sonasondarasaesaria1941
    @sonasondarasaesaria1941 2 years ago

    Hi Luis Serrano, thanks for the clear explanations. Your informal way of explaining this material is the best for us as students; even my professor in our Machine Learning class recommends this video for learning the HMM introduction!

  • @amyrs1213
    @amyrs1213 3 years ago

    Your videos are very helpful and give a good intuition for complex topics :) many thanks from Siberia

  • @miroslavdyer-wd1ei
    @miroslavdyer-wd1ei 9 months ago

    I explain Bayes with a horizontal event tree, like a decision tree. Very good job, Mr. Serrano.

  • @eTaupe
    @eTaupe 4 years ago

    Thanks to your videos, I save a huge amount of time. Focusing on the intuition and mechanics allows an instant understanding BEFORE delving into the maths.

  • @eaae
    @eaae 4 years ago

    In this video I explain what conditional probabilities are, show how to calculate them in Excel, and show how to interpret them, using Solver to implicitly apply Bayes' theorem. Though in Spanish, subtitles in English are available: th-cam.com/video/rxHd7td6Xo0/w-d-xo.html.

  • @knasiotis1
    @knasiotis1 2 years ago

    You did a better job teaching this than my MSc

  • @hellomacha4388
    @hellomacha4388 4 years ago

    Very, very nice and impressive explanation; even a layman can understand this concept. Thank you, sir, for putting a lot of effort into making this video.

  • @AnshumanDVD
    @AnshumanDVD 4 years ago

    I am a first-time viewer, but with such amazing explanations I will always stick to your teaching. Wow, so nicely explained!

  • @RC-bm9mf
    @RC-bm9mf 3 years ago

    Dr Serrano, I think you are an embodiment of Feynman in ML education! Thanks a lot!!

  • @sintwelve
    @sintwelve 5 years ago +1

    Thank you so much for this. I wish more educators were like you.

  • @AB-km5sp
    @AB-km5sp 5 years ago +3

    The best explanation of HMMs ever! Very visual and easy to grasp. Enjoyed learning so much. Thanks!
    Edit: Can you please do a friendly video on the EM algorithm, too?

  • @vladimir_egay
    @vladimir_egay 4 years ago

    Nice job! Best explanation so far. Explained 6 weeks of my class in 32 minutes!

  • @namnguyen7153
    @namnguyen7153 4 years ago +1

    Thank you so much! This video literally helps me understand 3 lectures in my machine learning class

  • @kimdinh8359
    @kimdinh8359 4 years ago

    This video is really useful for me to learn HMMs as well as probability calculation with the algorithms. The example is easy to understand. Thank you so much.

  • @ramakalagara3577
    @ramakalagara3577 3 years ago

    You made it so easy for learners... Appreciate the time you are spending creating the content!!

  • @vaibhavjadhav3453
    @vaibhavjadhav3453 3 years ago

    Thank you so much for this beautiful explanation. Learned about the application of Bayes and Markov together... Would be happy to see more engineering applications of these theorems.

  • @geovalexis
    @geovalexis 4 years ago +1

    Simply amazing! After quite a long time struggling to understand HMMs, now I finally get it. Thank you so much!!

  • @georgikyshenko4380
    @georgikyshenko4380 5 months ago

    The best explanation on the internet. Thank you!

  • @ashishgohil9717
    @ashishgohil9717 4 years ago

    Very nicely explained. It takes a lot to teach a complex topic like HMMs in such a simple way. Very well done. Thank you.

  • @CBausGB
    @CBausGB 3 years ago

    It's impressive how simply you explain very complex issues! Thank you!!

  • @anirudha_ani
    @anirudha_ani 2 years ago

    This is hands down the best video on HMM.

  • @shapeletter
    @shapeletter 4 years ago

    It was so nice with images! When you switched to letters, it was super clear how much easier it was to look at images!

  • @ipaliipali8804
    @ipaliipali8804 6 years ago

    Having been a teacher myself for a long time, all I can say is that this video is awesome! You have a talent, my friend.

  • @soliloquy2006
    @soliloquy2006 4 years ago +1

    Thanks so much for this! It really helped with a research report I'm writing. Clear and easy to understand and the pacing was excellent for being able to take notes.

  • @fuadmohammedabubakar9202
    @fuadmohammedabubakar9202 2 years ago

    Really amazing video that breaks down Bayes Theorem for simple understanding. Thanks Luis

  • @castro_hassler
    @castro_hassler 5 years ago +17

    This guy is amazing. Hey bro, could you make a video comparing classical techniques like this one with RNNs: which one is better at generalizing, and when to use one over the other? Thanks and keep it up!

  • @iglf
    @iglf 5 years ago

    I was going through HMMs for robot localization and found this super clear explanation. You're a phenomenon, Luis. Thanks!

  • @vardaanbhave2231
    @vardaanbhave2231 4 months ago

    Dude, thanks a ton for explaining this so simply

  • @nigerinja7195
    @nigerinja7195 3 years ago

    Thanks a lot! I came across your video while searching for an HMM explanation for my computational biology course, and it helped a lot to understand the basic principle :)

  • @user-de8ue5cs6s
    @user-de8ue5cs6s 4 years ago +1

    My dad recommended I watch this, and I sure am thankful he did :D Great video!

  • @StarEclipse506
    @StarEclipse506 5 years ago

    I took a probability class and did badly. After recently finding out I'd need to revisit it for machine learning, I was a bit concerned. Then I came to understand an algorithm for Bayes' Theorem!! How incredible, thank you!!

  • @ImperialArmour
    @ImperialArmour 3 years ago

    Thanks Luis, I was taught HMMs using speech recognition, but will be having a case-study test on robot vacuums using this. I really appreciate it.

  • @dYanamatic
    @dYanamatic 4 years ago

    Amazing ... I just bought your book from Australia. Thank you for your time and effort!!!

  • @WallsOfAbaddon
    @WallsOfAbaddon 4 years ago

    Excellent video! I remember looking at this on Wikipedia and just not having a clue what it meant; you did a fantastic job of explaining it!

  • @OzieCargile
    @OzieCargile 3 years ago

    Best video of its kind on YouTube.

  • @blz1rorudon
    @blz1rorudon 4 years ago +1

    I can do nothing except give my utmost respect to you, sir. Thank you so much for a fantastically eloquent explanation.

  • @williamhuang5455
    @williamhuang5455 3 years ago

    As a high schooler, this video was very helpful and I understand HMMs a lot more now!

  • @SimoneIovane
    @SimoneIovane 4 years ago

    I think it is the clearest explanation of HMMs. A university course in a 30-minute video.

  • @ebrukeklek3237
    @ebrukeklek3237 3 years ago

    Loved it. You are a great teacher. I was blessed to find your video first, so I didn't waste any time 🥰

  • @SupremeSkeptic
    @SupremeSkeptic 6 years ago

    Very comprehensive and easily understandable. Even though I got increasingly impatient watching the whole thing, I still managed to swing the thumbs up.

  • @milleniumsalman1984
    @milleniumsalman1984 4 years ago

    Cheated on my work hours to watch this course; it was totally worth it.

  • @anderswigren8277
    @anderswigren8277 6 years ago +1

    This is the best explanation of HMMs I have ever seen!

  • @arbaazaattar6266
    @arbaazaattar6266 6 years ago +2

    Made my day... I learned the Hidden Markov Model for the first time ever, and guess what? It was damn simple to understand the way you explained it.

  • @boamixtura1155
    @boamixtura1155 5 years ago

    I love you, man. All maths should be explained like this. Easy and intuitive. I'm tired of sayin' it.

  • @windandwolf
    @windandwolf 4 years ago

    Love it! Please add the Viterbi algorithm to the title. Your explanation of it is super easy to understand and follow. Thank you thank you thank you!

  • @vishwajitiyer4716
    @vishwajitiyer4716 5 years ago

    A very nicely done and visually appealing video on a slightly complex topic. Thank you!

  • @sinabaghaei3504
    @sinabaghaei3504 1 year ago

    It was a real teaching approach. Thank you. It would be good if you also provided the mathematical notation and probability formulas equivalent to this weather-and-mood example.

  • @ramgollakota1225
    @ramgollakota1225 5 years ago +1

    Excellent video. Gives a very lucid explanation with a concrete example.

  • @dikidwidiro5990
    @dikidwidiro5990 1 year ago +1

    Thanks bro, the video is useful for my AI subject this 5th semester. Thank you, bro.

  • @mitchellphiri5054
    @mitchellphiri5054 5 years ago

    I always just saw posts about HMMs, and I decided to give your video a try, and the explanations are just so fluid. I'm interested now.

  • @girishtiwari79
    @girishtiwari79 5 years ago +1

    Great tutorial. While calculating transition probabilities, you have taken 3 sunny days at the end (4 sunny, 3 rainy, 4 sunny, 2 rainy and the last 3 sunny), but to calculate the probabilities of sunny and rainy without knowing Bob's mood, you have taken 2 sunny at the end. I think you took the last 3rd sunny day to loop back to the first sunny day, since we cannot start with sunny on our own. I think a cyclic representation would be better to clear up the doubts this may raise.
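
    To make the counting concrete, here is a tiny sketch that estimates transition probabilities from the 16-day sequence the comment describes (4 sunny, 3 rainy, 4 sunny, 2 rainy, 3 sunny), with and without the cyclic wrap-around; the sequence encoding and helper name are illustrative assumptions.

        from collections import Counter

        days = list("SSSSRRRSSSSRRSSS")  # 4 sunny, 3 rainy, 4 sunny, 2 rainy, 3 sunny

        def transition_probs(seq, cyclic=False):
            # count consecutive pairs; optionally wrap the last day back to the first
            pairs = list(zip(seq, seq[1:] + (seq[:1] if cyclic else [])))
            counts = Counter(pairs)
            totals = Counter(first for first, _ in pairs)
            return {pair: n / totals[pair[0]] for pair, n in counts.items()}

        print(transition_probs(days, cyclic=False))
        print(transition_probs(days, cyclic=True))  # the cyclic version suggested above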

  • @dennishuang3498
    @dennishuang3498 5 years ago

    So great, using a simple example to explain confusing yet very important topics! Appreciate your excellent tutorial!

  • @guoyixu5793
    @guoyixu5793 3 years ago +1

    This video is great, but I was wondering if a small part of it is wrong... Please correct me if I am wrong. The max probability up to Thursday is 0.0147 (rainy Thursday), and the max probability up to Wednesday is 0.0546 (sunny Wednesday), but the max probability on Thursday doesn't come from a sunny Wednesday; it comes from a rainy Wednesday (if you follow the arrows and the calculations at each step closely). Therefore, I wonder whether the final solution shouldn't be a sunny Wed. and a rainy Thu., but instead a rainy Wed. and a rainy Thu. That is, the final solution should be "sunny -> sunny -> rainy -> rainy -> rainy -> sunny", not what the video says, "sunny -> sunny -> sunny -> rainy -> rainy -> sunny".
    However, if only given the first three days, "happy -> grumpy -> happy", the answer will still be "sunny -> sunny -> sunny" (a sunny Wed.), but given more than three days (and a grumpy Thursday), the final answer should be a rainy Wednesday, not a sunny one.

  • @yonasabebe
    @yonasabebe 2 years ago

    Thank you very much for the video. It is well explained and you made me understand the concept in an easy way.

  • @jfister2481
    @jfister2481 6 years ago

    Best explanation of Hidden Markov Models on the Internet. Well done.

  • @spikersdoom4300
    @spikersdoom4300 4 years ago

    This video is too good to be free.
    If you can't understand it from this video, then you probably can't understand it from anywhere else. :)

  • @deadchannel-x2m
    @deadchannel-x2m 4 years ago

    Thank you so much! Your explanation and the way you presented the concept were so crystal clear. Loved learning it.

  • @lancelofjohn6995
    @lancelofjohn6995 2 years ago

    Very good introductory lecture on Markov Models.

  • @dardodel
    @dardodel 3 years ago

    Out of 319k views (as of now), 1k come from me! Every time I watch this video, I learn something new. Best video, great instructor! Luis' videos always remind me of the Albert Einstein quote: "If you can't explain it simply, you don't understand it well enough."

  • @jokmenen_
    @jokmenen_ 4 years ago

    Very good video! Simple examples make it very approachable and keep it from being overwhelming.

  • @ratnakarm200
    @ratnakarm200 5 years ago +1

    Nice video with a clear explanation. I can see a lot of work & heart put into making this video.

  • @optcai4403
    @optcai4403 3 years ago

    Really, thank you; better than my uni's lectures.

  • @AmDsus2Fmaj7Am
    @AmDsus2Fmaj7Am 4 years ago

    Excellent presentation. Simple to follow. I'll check out your book.

  • @rangjungyeshe
    @rangjungyeshe 4 years ago

    Fantastic explanation - interesting application, clearly explained and without a single formula! The symbolic form of Bayes can be a bit off-putting, but as you showed, it's not even necessary: it's just a matter of adding up the appropriate events. (As I already knew the odds version of Bayes, I decided to use that instead... and was rewarded by making a simple arithmetic error... pfff).

  • @shurovi123
    @shurovi123 4 years ago

    Very useful and easily, nicely explained. Benefited from it very much. Thanks a lot.

  • @anand_dudi
    @anand_dudi 3 years ago

    The best video on Hidden Markov Models out of all the videos on YouTube on this topic.

  • @chrisogonas
    @chrisogonas 3 years ago

    Well illustrated. Thanks for putting this together.