Your channel is criminally underviewed. You are a really talented lecturer. I took a whole Bayesian stats course, and you've already revealed things I didn't learn in 5 months.
Wow thanks for the kind words!
@@ritvikmath Hope you will boom at some point. Generally a channel's popularity curve is sigmoid, with an inflection point somewhere along the way.
@@ritvikmath really great lecture!! better than all the courses I have taken
This is how teaching should be done. The way you switch between intuition and rigor made it incredibly effortless to understand. Thank you so much
Hey thanks!
bro, you are the absolute best at explaining complicated concepts in such an intuitive manner. You should seriously consider becoming a teacher since you are so talented at explaining things
I really enjoy that you explain these concepts so simply! Thanks!
I have never seen such a clear explanation of Markov chains!
Your videos are quite good at explaining machine learning concepts with all the maths in the background. Thanks a lot!
Really appreciate how you keep bringing it back to intuition but also do the math. well done dude!
It finally clicked! I was very confused about multiplying a distribution by the transition matrix exactly as you were. But not anymore thanks to you. So, a HUGE thx!!
That subtle point is something that I had the misconception of before. Thanks for pointing it out and clarifying.
Thanks a lot. Very intuitive! Taking the time to clarify steps you initially had trouble on makes it much easier for me to understand the thought process behind the concept.
Glad it was helpful!
Love your videos, thanks for making them. I just wonder who gave a dislike to this amazing video.
I'm an undergrad doing stats and learning Markov chains right now. I really like the intuitive explanation at the beginning of the video that the stationary distribution is a probability distribution, rather than a fixed state. That idea had confused me for 3 months! Really appreciate your video :D
One of the best lectures on Markov chains
A very underrated video.
I loved the explanation of what 🥧*A=🥧 truly means. This was the first video I've seen it explained.
A lot of other videos show the computation (which is trivial) and avoid the meaning behind it.
your way of explaining things is amazing. I've actually been studying the way you structure your explanations to improve my own videos
This is such an awesome channel!
The best video so far I’ve watched regarding the topic! Thank you! And keep updating more content pls
More to come!
Awesome explanation! Thanks for all the posts ritvik😊
love your explanations! really cleared up some of my confusions. please make more videos like this!
Thank you! Will do!
Great explanation. Looking forward to the coding, and the rest of the playlist. And as always, thanks.
Awesome, thank you!
This is explained amazingly, thank you!
This is so so well explained omg
Glad you clarified that point. I definitely had the wrong notion that stationarity meant eventually getting there.
Thanks a lot for your awesome explanation! ❤
great video! good mix between intuition and math
Very clear and intuitive, I definitely learned something.
Great to hear!
Great explanation indeed! Thanks bro.
Very good job man. Thank you a lot!!
This is amazing content. Keep it going!
Good job 👍... learnt some vital concepts
ritvik you are one of the wonders of the world
Bhai, you are just awesome!
masterpiece! you made me feel that education is a kind of art! :D
This is excellent, thank you very much!
Thanks 🙏
You are such an awesome teacher. Thanks for this video =)
Glad it was helpful!
Amazing video! Especially the part about how to get the steady state by using the Eigenvector equation was eye opening for me.
One question: I am starting out on Markov chains and I would like to know how I can generate the transition matrix in the first place. Let's say I have a dataset with some time series. How do I start clustering the states? Do you have a video on that?
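One common starting point (not covered in the video, and the data here is made up): if you can label each time step with a discrete state, e.g. by clustering or binning the raw series first, the maximum-likelihood transition matrix is just row-normalized transition counts. A minimal sketch:

```python
from collections import Counter

# A made-up sequence of observed states over time, e.g. after
# discretizing/clustering a continuous time series into labels.
sequence = ["A", "A", "B", "C", "B", "A", "B", "B", "C", "A"]

states = sorted(set(sequence))
# Count each consecutive (from_state, to_state) pair.
counts = Counter(zip(sequence, sequence[1:]))

# Row-normalize the counts into transition probabilities.
P = {}
for s in states:
    total = sum(counts[(s, t)] for t in states)
    P[s] = {t: counts[(s, t)] / total for t in states} if total else {}

print(P["A"])  # estimated probabilities out of state A
```

Each row sums to 1 by construction; with more data the estimates converge to the true transition probabilities (assuming the process really is Markov).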
I use the eigenvector trick to find stationary distributions regularly for marketing applications, such as finding the steady state distribution of market shares
Super cool application!
@@ritvikmath I also use it for digital journey analysis, to project add-to-cart events as a state alongside other states such as branded search, generic search, competitive brand search, viewing product pages inside of Amazon for your brand, competitors, etc. I use add-to-cart as the conversion metric because purchase would be an absorbing state. Works great!
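For reference, the eigenvector trick mentioned above can be sketched as follows; the 3-state matrix here is hypothetical, not the one from the video. Since pi * P = pi, pi is a left eigenvector of P with eigenvalue 1, i.e. an ordinary eigenvector of P transposed:

```python
import numpy as np

# A hypothetical 3-state transition matrix (rows sum to 1).
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
])

# Eigendecompose P transposed and pick the eigenvector whose
# eigenvalue is (numerically) 1.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()  # normalize so the entries sum to 1

print(pi)  # stationary distribution: pi @ P == pi
```

Eigenvectors come back with arbitrary scale (and possibly sign), which is why the normalization step is needed before reading the entries as probabilities.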
You and Josh Starmer really should get together and do some content together! I have learned more from the two of you than I learned in college!
Loved it!
I was kinda hoping for you to explain what would hypothetically happen in your magically corrected scenario, although I believe it would not be possible. Great work and great material!
Indeed, before the fix we had a defective Markov Chain, good eye!
saved my day ! thank u
amazing so clear
My visualization of the chain being "at a distribution" is to imagine, say, 1000 particles in the system. Each one moves around according to the transition probabilities. A "steady state" is where the particles leaving a node are exactly replaced by new particles entering the node.
That‘s amazing!!! Thanks!
definitely downloading this one ..
Nope, I don't trust youtube to keep this gem alive for too long.
MAN THANK YOU SO MUCH!. This will save my ass
Very clear explanation
Glad you think so!
super video. request you to add more on other properties (reducibility, reversibility, time homogeneity, periodicity, ergodicity, mixing times & why these are necessary)
Great suggestion!
Great video.
Glad you enjoyed it
Great explanation as always! One question I have is, since this is a stationary state, shouldn't we consider both the channels going out and the channels coming in? Such as for B, shouldn't we write an equation like: pi_C * 0.5 + pi_E * 0.1 - pi_B * 1 = pi_B ? Why didn't we count the channels going out?
Another great video 👍
You have a good teaching style, clearly born from others poor styles you have experienced 😂
It took me 9 videos to learn that you have an intro pen flip
Haha, no shame. It is quite subtle.
There is also a *snap* *point* "see ya next time" sign off
THANK YOU
Hello man, thank u so much for these helpful videos. I'm a huge fan of yours and I wanted to ask you about a problem I've been thinking about: using neural networks for forecasting time series. How are the results? Are they better than the ARIMA models?
Sick shirt!
thanks!
Nice video, i have two questions
1. How did we arrive at 1/5, 2/5 and 2/5? I know about the first and last zero.
2. In a case where we are told to calculate the probability of ending in the third state after 4 steps, starting from state 1, how do I do this?
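On question 2, a standard approach (independent of the video's specific matrix): row i of P^4 gives the distribution after 4 steps when starting from state i. A sketch with a hypothetical 3-state matrix:

```python
# Hypothetical 3-state transition matrix (rows sum to 1).
P = [
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
]

def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# P^4 = P * P * P * P
P4 = P
for _ in range(3):
    P4 = mat_mul(P4, P)

# Probability of being in state 3 (index 2) after 4 steps,
# starting from state 1 (index 0):
print(P4[0][2])
```

The same recipe answers any "probability of state j after n steps from state i" question: take the n-th matrix power and read off entry (i, j).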
So the steady state is like the probability of being at a certain node as time t goes to infinity? Like a limit??
Hey, would you consider this topic advanced and if yes, what textbook would you recommend for learning advanced topics?
I wouldn't consider this an "advanced" topic in the context of Markov Chains. Anytime you talk about Markov Chains, the question of steady state is a natural one.
Nice video. Here it seems that the example in the video does not satisfy the detailed balance equation (e.g., P(B)*T(C|B) != P(C)*T(B|C)). Is it safe to say that detailed balance is sufficient but not necessary for a stationary distribution?
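Yes: detailed balance (reversibility) implies stationarity, but not conversely. A small sketch with a hypothetical cyclic 3-state chain that has a stationary distribution yet violates detailed balance:

```python
# Hypothetical chain with a preferred cyclic direction 0 -> 1 -> 2 -> 0.
P = [
    [0.1, 0.8, 0.1],
    [0.1, 0.1, 0.8],
    [0.8, 0.1, 0.1],
]
# The matrix is doubly stochastic, so the uniform distribution is stationary.
pi = [1 / 3, 1 / 3, 1 / 3]

# Global balance (stationarity): sum_i pi_i * P[i][j] == pi_j holds...
stationary = all(
    abs(sum(pi[i] * P[i][j] for i in range(3)) - pi[j]) < 1e-9
    for j in range(3)
)

# ...but detailed balance pi_i * P[i][j] == pi_j * P[j][i] fails,
# because probability flows around the cycle in one direction.
detailed = all(
    abs(pi[i] * P[i][j] - pi[j] * P[j][i]) < 1e-9
    for i in range(3) for j in range(3)
)

print(stationary, detailed)  # True, False
```

Intuitively, stationarity only requires total inflow to equal total outflow at each state; detailed balance additionally requires the flow on every individual edge to balance, which a net circulation violates.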
Question: suppose we wanted to calculate P(A) for some reason.
It's certainly not the case that P(A)=0, since we of course could start at state A. So how would we go about calculating this?
I know that normally we compute stationary values, but in this case it would be 0.
Looking forward to hearing responses.
Or is it not possible to calculate P(A) unless the Markov Chain is an irreducible recurrent chain?
Like two conditional steady states with the last example.
Nice. But I still like it when you explain with a real world example.
Thanks! And noted :)
Can you provide a real world example where we have finite discrete number of states such that this would be useful?
in the first video about markov he has one of weather (sunny vs cloudy). Simpler but 'similar to real world case' th-cam.com/video/prZMpThbU3E/w-d-xo.html
Ritvikmath : Lingayat Jangam?
It is understood that the probability of being in state A is zero in the next time step, but how come, intuitively, can the probability of being in state E be zero? After all, the probability of self-transition is 0.9!
I'll be damned if this isn't a good explanation