Markov Chains: Data Science Basics

  • Published Sep 8, 2020
  • The basics of Markov Chains, one of my ALL TIME FAVORITE objects in data science.

Comments • 90

  • @RD-zq7ky · 3 years ago · +24

    Yes, I would like more videos on Markov Chains. Thank you for your videos.

  • @diegososa5280 · 3 years ago · +40

    Brilliant explanation; I cannot thank you enough. Markov chains are so important. We easily get lost in linear thinking, and Markov chains help us see probabilities differently. More videos on this topic would be highly appreciated.

    • @ritvikmath · 3 years ago · +8

      Thank you for the kind words! Given the positive response, more Markov Chain videos will come soon.

    • @redghost105 · 2 years ago

      @ritvikmath Would it be possible to create a video on Markov chains and their implementation in finance? Much appreciated, these videos are invaluable! Thank you

  • @ingenierocivilizado728 · 3 months ago

    Incredibly useful! You manage to explain difficult concepts in a straightforward and easy way. Thank you for these videos!!

  • @eduardocruces3959 · 2 years ago · +1

    Nice job! Clear and simple explanation.

  • @ramankutty1245 · 3 years ago · +3

    You have a wonderful knack for explaining concepts. Thank you

    • @ritvikmath · 3 years ago

      You're very welcome!

  • @CleverSmart123 · 7 months ago

    This is so brilliantly well explained, thank you. I was not getting it at all before.

  • @NicolaevM · 2 years ago · +4

    Great explanation!! It would be great if you could overview DSGE models (often used in econometrics); they also have a steady state.

  • @minrongwang4038 · 1 year ago · +1

    I love your tutorial. It is very helpful. Thank you.

  • @l2edz · 3 years ago · +2

    Thank you for the clear explanation! I have a video request for Conditional Random Fields.

  • @dennismwangi3573 · 2 years ago · +1

    Excellently explained.

  • @anaibrahim4361 · 3 years ago

    That example you gave at the end: I had been asking people about it one by one, but no one gave me a direct, simple answer like the one in this video. Thanks a lot for the vid; you deserve the subs and likes. Keep up the good work.

  • @tatianapashkova7275 · 1 year ago · +1

    Thank you for a great explanation!

  • @donatsu8 · 3 years ago · +1

    Great job explaining!

  • @user-um4di5qm8p · 1 year ago · +1

    Awesome! Thank you for this!

  • @beyerch · 3 years ago · +37

    How exactly is Sunny W2 0.44? If there is a 0.3 chance of the day after a sunny day also being sunny, how did your probability INCREASE for W2? Seems there is either an error here or something was left out of this explanation?

    • @ritvikmath · 3 years ago · +91

      Great question. The 0.44 is the probability of day 2 being sunny. We know the day before that (day 1) had to be sunny or cloudy. If day 1 was sunny (as you noted, this has a 0.3 chance) then there is a 0.3 chance that day 2 will be sunny. Multiplying those, we get a 0.09 chance that day 2 is sunny *and* day 1 was sunny. Now, the missing piece of the puzzle is that day 1 could have also been cloudy (with a 0.7 chance). If day 1 was cloudy, there is a 0.5 chance that day 2 will be sunny. Multiplying those we get 0.35. Adding the 0.09 from before to the 0.35, we get 0.44.
      In a nutshell, the probability of day 2 being sunny is computed considering *both* cases: where the previous day is sunny *and* where the previous day is cloudy.
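
      A minimal Python sketch of that total-probability calculation (the 0.3, 0.7, and 0.5 figures are from the video; the variable names are illustrative):

          # Day-1 distribution, having started sunny on day 0
          p_day1_sunny, p_day1_cloudy = 0.3, 0.7

          # Transition probabilities into "sunny" (from the video)
          p_sunny_after_sunny = 0.3    # sunny  -> sunny
          p_sunny_after_cloudy = 0.5   # cloudy -> sunny

          # Total probability: sum over both possible day-1 states
          p_day2_sunny = (p_day1_sunny * p_sunny_after_sunny
                          + p_day1_cloudy * p_sunny_after_cloudy)
          print(p_day2_sunny)  # 0.09 + 0.35 = 0.44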

    • @beyerch · 3 years ago · +14

      My attempt to break this down in English; let me know if this is accurate:
      Initial State "t"
      ---------
      W0 is known (SUNNY): P(Sunny) = 1, P(Cloudy) = 0
      First Time Step "t+1"
      -----------------------
      W1(Sunny) = Probability that it stays Sunny (1 * .3) + Probability that it was cloudy and transitioned (0 * .5) = .3 + 0 = .3
      W1(Cloudy) = Probability that it stays Cloudy (0 * .5) + Probability that it was sunny and transitioned (1 * .7) = 0 + .7 = .7
      Next Time Step "t+2"
      -----------------------
      W2(Sunny) = Probability that it stays Sunny from prior "t" sunny (.3 * .3) + Probability that it was cloudy @ prior "t" and transitioned (.7 * .5) = .09 + .35 = .44
      W2(Cloudy) = Probability that it stays Cloudy from prior "t" cloudy (.7 * .5) + Probability that it was sunny @ prior "t" and transitioned (.3 * .7) = .35 + .21 = .56

      I got hung up a bit when you said the only thing that matters is the prior state. Since the starting point was known (W0), I was ignoring "cloudy" possibilities for the future "t" calculations. Oops.
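
      The same breakdown as a short numpy loop, a sketch assuming the row-vector convention (rows of P are the "from" state, in the order [sunny, cloudy]):

          import numpy as np

          P = np.array([[0.3, 0.7],
                        [0.5, 0.5]])

          w = np.array([1.0, 0.0])  # W0: known sunny
          for t in (1, 2):
              w = w @ P
              print(f"W{t}: sunny={w[0]:.2f}, cloudy={w[1]:.2f}")
          # W1: sunny=0.30, cloudy=0.70
          # W2: sunny=0.44, cloudy=0.56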

    • @MrAstonmartin78 · 12 days ago

      @ritvikmath Now it's clear... cross-transition from both possibilities... thanks for the explanation

  • @nikkatalnikov · 3 years ago · +8

    A brilliant intro, thank you!
    Just a small addition: the steady-state vector is (unsurprisingly) an eigenvector of the transition matrix, with the corresponding eigenvalue (once again unsurprisingly, since the matrix is row-stochastic) equal to 1.
    A video on Markov Chain Monte Carlo would be nice.
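
    A quick numerical check of that remark for the weather chain (a sketch: the steady state is a left eigenvector of P, i.e. an eigenvector of P.T, with eigenvalue 1, normalized to sum to 1):

        import numpy as np

        P = np.array([[0.3, 0.7],
                      [0.5, 0.5]])

        # Steady state pi solves pi @ P = pi: a left eigenvector of P
        # (an eigenvector of P.T) with eigenvalue 1.
        vals, vecs = np.linalg.eig(P.T)
        pi = np.real(vecs[:, np.argmax(np.isclose(vals, 1.0))])
        pi = pi / pi.sum()
        print(pi)  # [0.4167 0.5833], i.e. 5/12 sunny, 7/12 cloudy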

    • @ritvikmath · 3 years ago · +1

      thanks! And a great suggestion!

  • @jake5camposano · 1 year ago

    This is the first video where I've understood the Markov chain. Thank you! I watched the commercial to pay you back. Thanks a lot.

  • @lorezampadeferro8641 · 3 years ago

    Fantastic explanation

  • @anadianBaconator · 3 years ago · +3

    I would like more videos on Markov Chains and also on Metropolis-Hastings algorithm

  • @Blu3B33r · 3 days ago

    Your explanations are so good

  • @sumers9396 · 1 year ago · +1

    well explained, thanks a lot!

  • @xinyuan6649 · 2 years ago · +1

    Thanks so much as always for the great video 🫰 It also feels very philosophical, like convincing someone not to dwell on the past or be anxious about the future: "The future is independent of the past given the present."

  • @joycwang · 1 year ago

    Great explanation. Would like to explore how Monopoly is a Markov chain!

  • @dlee4736 · 3 years ago

    Awesome. You deserve more views!

  • @dmno45 · 1 year ago

    You are FANTASTIC at teaching.

    • @ritvikmath · 1 year ago

      Glad you think so!

  • @user-fm8ru3cy5w · 3 years ago · +1

    Thanks for the video!! Awesome video! I would like to see more videos about Markov Chains on finance-related topics.

    • @ritvikmath · 3 years ago · +3

      Thanks! I'm planning to make a Markov Chains for stock price prediction video soon.

  • @goncalocruz2206 · 3 years ago · +3

    Any chance you could do Markov Chain Monte Carlo (MCMC) methods?

  • @wendycastillo1756 · 2 years ago · +3

    Hello, how did you calculate W2? Thank you!

  • @altruist_AI · 3 years ago

    Your videos are amazing. I just wanted to know how statistical models differ from Kalman filters (EKF/UKF), since both predict states.

  • @kosalamanojeewa · 1 year ago

    easy to understand 👍

  • @teegnas · 3 years ago · +3

    It would be great if you could make a video on the Markov decision process in the context of reinforcement learning.

    • @ritvikmath · 3 years ago · +1

      Good suggestion! I'll look into it

    • @teegnas · 3 years ago

      @ritvikmath thanks!

  • @dalmacyali1905 · 2 years ago

    Bro! Thank you very much!

  • @randalllionelkharkrang4047 · 1 year ago · +3

    How did you get 0.44 and 0.56 for W2?

  • @Phil-oy2mr · 3 years ago · +1

    Do Markov chains relate to difference equations at all?

  • @user-wi1rj4iw9y · 1 year ago

    Thank you! 谢谢 (Thanks)!

  • @rajath1964 · 3 years ago

    Which are those books you referred to above that give similar examples?

  • @robertc6343 · 3 years ago · +2

    Baltic Avenue? What city are you in? 😀 great video as always! 👍🏻

  • @DanielLopez-mk9ih · 3 years ago · +10

    Good video. I did not get how you got the 0.44 and 0.56. And I suppose the 0.30 was only an assumption, right? Thanks for the video; if you can explain that, it would be awesome.

    • @Phil-oy2mr · 3 years ago · +11

      0.44 = (0.5*0.7)+(0.3*0.3)

    • @DanielLopez-mk9ih · 3 years ago · +2

      Thanks, Phil E!

    • @ritvikmath · 3 years ago · +12

      Thanks Daniel for the question and Phil for the reply! I likely should have explained that a bit more.

  • @guidosalescalvano9862 · 3 years ago

    So is it true that the steady state is an eigenvector of the transition matrix? Can there be multiple steady states given a sufficiently complicated transition matrix? Say you define a steady state region as the region of state space that converges onto a particular steady state. Could you then model a meta transition matrix that gives the probabilities of the Markov model transitioning between steady state regions?

  • @gigz54 · 3 years ago · +1

    Great video, thank you! The steady state explanation really helped. A comment and a question...
    I agree that the weather example is the go-to method others have used to explain what a Markov Chain is, but the Monopoly example seems much more intuitive for a basic explanation. Your next square depends only on your current square plus a dice roll, and the dice roll has a discrete distribution that is well covered in basic prob/stat courses. But with weather, the transition probabilities seem more hypothetical, and possibly oversimplified.
    Ironically, I will ignore the simple Monopoly scenario and focus on the weather for my question. How do you address the transition from a more complex/multivariate current state? For example, in practice the probability of tomorrow being sunny may be different if we know that today was sunny and in August as opposed to today being sunny and in April. Is it as simple as adding the necessary rows to the transition matrix? That seems like the simplest answer, but does it blow up the idea of having a steady state by creating a cycle?

    • @nathankrowitz3884 · 1 year ago

      Well, I think it's a daily transition, so if you're trying to incorporate some sort of cyclicality, you need to think about it on a daily basis rather than monthly. E.g. day 1 of the year has a 100% probability of transitioning to day 2... up to 365. Or July 31 has a 100% probability of transitioning to August 1st... and onward. That would preserve the Markov property of the state.
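
      One way to read that suggestion in code is to augment the state with the day of year so the chain stays Markov. A hypothetical sketch (the function and all probabilities below are made up for illustration, not from the video):

          # Hypothetical sketch: expanded state = (weather, day_of_year).
          # All numbers here are illustrative only.
          def p_sunny_tomorrow(weather_today, day_of_year):
              summer = 150 <= day_of_year <= 250   # a rough "August-like" window
              if weather_today == "sunny":
                  return 0.6 if summer else 0.3
              return 0.5

          # The chain is still Markov: tomorrow's distribution depends only on
          # the expanded current state, and day_of_year advances deterministically.
          print(p_sunny_tomorrow("sunny", 200))  # 0.6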

  • @mohamedr3w · 1 year ago

    thanks!

  • @JohnJones-rp2wz · 3 years ago

    Awesome

  • @kemrank8739 · 11 months ago

    Thanks for explaining. As I noticed, there is a mistake in the pi equation: pi1 x 0.3 + pi2 x 0.7 (instead of 0.5), since the probability that it will be cloudy after sunny is 0.7. Moreover, in the steady state above this equation we can see what I have written. If I'm mistaken, my apologies; please correct me. Thanks.

  • @Hagakure12e412rede · 3 years ago

    Damn dude, you are good at teaching

  • @ben7333 · 1 year ago

    Can you do a video on Structural Equation Models?

  • @calebleung6761 · 3 months ago

    Can you talk about the difference between stationary distribution and invariant distribution?

  • @millaniaangela4147 · 3 years ago · +1

    Cool video, thanks! Can you explain the Markov Switching Autoregressive model?

    • @ritvikmath · 3 years ago · +1

      Great suggestion!

  • @oligneflix6798 · 2 years ago

    Please create a video explaining how Markov chains relate to recurrent neural networks (RNNs)

  • @rohitnath5545 · 2 years ago

    Excellent video, but a silly doubt: how did we get the initial probabilities in the transition matrix?

    • @gmatsue84 · 2 years ago

      Like any other: observation.

  • @yulinliu850 · 3 years ago · +1

    Cool

  • @sumitkumarpal3957 · 2 years ago · +1

    Just a comment: If we calculate using {S(t)} = [Transition Matrix] {S(t-1)} for W2 then we have to transpose the matrix shown at 3:33. Please correct me if I am wrong.

    • @whycurious6754 · 1 year ago

      This had me confused for a while. It's either that or {S(t)} = {S(t-1)}[Transition Matrix], depending on how the transition matrix is defined.
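
      The two conventions side by side in a small numpy sketch (assuming, per the comment above, that the video's matrix has rows as the "from" state):

          import numpy as np

          P = np.array([[0.3, 0.7],   # row = from-state, column = to-state
                        [0.5, 0.5]])
          s = np.array([0.3, 0.7])    # S(t-1) = W1

          print(s @ P)     # row-vector convention:    S(t) = S(t-1) [P]
          print(P.T @ s)   # column-vector convention: S(t) = [P^T] S(t-1)
          # Both print [0.44 0.56]; only the matrix orientation differs.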

  • @bapireddy5790 · 3 years ago · +3

    Can you cover Markov chains for time series?

    • @meanreversion1083 · 3 years ago · +1

      Yep, I agree. Running through an example would be nice, and it would also be good to see the math covered.

    • @ritvikmath · 3 years ago · +1

      Yes! More Markov Chain videos to come!

  • @nicoleluo6692 · 1 year ago

    WOW. love you. btw, i have a UCLA shirt too 😁

  • @arc6fire · 2 years ago

    Hi, I'm a little unsure about the reasoning used in the steady state example. The calculations presented show that *if* the values converge, then these would be the values, but what is not shown is that the values necessarily converge. Do all Markov chains necessarily converge to a steady state?
    P.S. Brilliant videos btw; I'm watching a handful of them and they're very digestible!

    • @gmatsue84 · 2 years ago

      If I'm not wrong, any finite Markov chain will converge, because they all have recurring states. In this case all states are recurrent, so the chain converges. Given a finite number of states, the transient ones (the ones which you can't go back to once you leave, which can be grouped into classes such as {1, 2}) will converge to 0, while the rest (the recurrent ones) will converge at some point.
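
      For this particular two-state chain, convergence is easy to check numerically. A sketch (a demonstration for the weather chain from the video, not a general proof):

          import numpy as np

          P = np.array([[0.3, 0.7],
                        [0.5, 0.5]])

          # Iterate the chain from two opposite starting distributions.
          for w0 in ([1.0, 0.0], [0.0, 1.0]):
              w = np.array(w0)
              for _ in range(50):
                  w = w @ P
              print(w)  # both converge to [0.4167 0.5833], i.e. (5/12, 7/12)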

  • @abdelrahmanaltawil2219 · 3 years ago · +1

    Hi, I have a question, and sorry for my weak probability fundamentals; I am quite confused about one thing.
    Does reaching the steady state mean that our one-step probability has changed? I mean, after infinitely many steps, if I were to ask myself what the probability of having a sunny day tomorrow is, given that today is cloudy, what would the answer be? If the answer is anything other than 0.5, is it okay for the Markov property not to be preserved? I mean, can we still call it a Markovian process, or has it become i.i.d.?
    Again, sorry, I am sure I mixed too many things up.

    • @fszhang9010 · 3 years ago · +1

      I think the steady state is a statistical quantity that indicates a probability over a long time span (infinitely many steps, like you said). It's like flipping a coin: over only 10 flips you might get heads 0.6 of the time and tails 0.4, but after a million flips the observed frequency of heads/tails will eventually come to 0.5. However, that 0.5 doesn't equal the initial P(heads) or P(tails); it's just a long-term trend computed over a different time scale than a single-step computation. NOTE: I'm not sure if my thoughts are correct : )

    • @abdelrahmanaltawil2219 · 3 years ago

      @fszhang9010 thank you very much for the reply

  • @learn5081 · 3 years ago · +1

    hope to see more math under the hood. thanks

    • @ritvikmath · 3 years ago

      Thanks for the suggestion!

  • @transcendentpsych124 · 1 year ago

    I don't get the assumptions. Are you taking it as given that the weather will change based on some pattern? I mean, the probability that it just stays sunny in Saudi Arabia is higher than it would be in Reykjavik.

  • @rakith · 1 year ago

    It’s not sunny. So why’d you wear a cap?

  • @devonk298 · 3 years ago

    You are seriously cute! Great instructor too. ty

  • @farzansoltani344 · 2 years ago

    Thanks for your videos... but this video has some problems: the voice and the video are not in sync.

  • @riusaddm · 3 years ago · +1

    Can you write a book already?

    • @ritvikmath · 3 years ago · +1

      Hahaha not a bad idea :)

  • @Naturehack · 2 years ago

    Maxwell's Demon Here
    1|0 = 3
    1 |transition| 0
    Markov Maxwell
    Feelings three states
    Positive |transition| negative
    Procedural programmed population
    First language update
    Species Growth Rights
    3rd planet from Sun
    Galileo Galilee
    380+ years
    Late
    Like your style
    Language sort ahead

  • @MrNitKap · 3 years ago

    Thanks. Learnt the basic concept of steady state. However, I can't help pointing out that 0.42 (5/12) is not 🙄 "point forty-two" (yes, saying 42% is fine), or it could just as well be 0.4166, "point four thousand one hundred and sixty-six"... Many sports commentators make such mistakes when reading stats like players' average scores... but I find it hard to accept it from a person dedicated to maths and science 👍...