Completed a project thanks to this video. You're the best man!!!
How was the expression for p(x2,y1,y2) derived at 11:48? Shouldn't p(x2,y1,y2) = p(x2|y2,y1)p(y2|y1)p(y1)?
p(x_2, y_1, y_2) = \sum_{x_1} p(x_1, x_2, y_1, y_2) = \sum_{x_1} p(y_2 | x_2, x_1, y_1) p(x_2 | x_1, y_1) p(x_1, y_1) = \sum_{x_1} p(y_2 | x_2) p(x_2 | x_1) p(x_1, y_1)
@@eliesfeir4511 Thank you! This is much clearer.
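In case a numerical check of the factorization above helps: here is a minimal numpy sketch with a made-up two-state HMM (the numbers in A, B, and pi are arbitrary, not the ones from the video) that confirms p(x_2, y_1, y_2) = \sum_{x_1} p(y_2 | x_2) p(x_2 | x_1) p(x_1, y_1).

import numpy as np

# Toy two-state HMM (arbitrary numbers, not from the video).
A  = np.array([[0.7, 0.3],   # A[i, j] = p(x_{t+1}=j | x_t=i)
               [0.4, 0.6]])
B  = np.array([[0.9, 0.1],   # B[i, k] = p(y_t=k | x_t=i)
               [0.2, 0.8]])
pi = np.array([0.6, 0.4])    # pi[i]   = p(x_1=i)

y1, y2 = 0, 1                # some observed symbols

# p(x_1, y_1) = p(x_1) p(y_1 | x_1)
p_x1_y1 = pi * B[:, y1]

# Forward step: p(x_2, y_1, y_2) = sum_{x_1} p(y_2 | x_2) p(x_2 | x_1) p(x_1, y_1)
p_x2_y1y2 = B[:, y2] * (p_x1_y1 @ A)

# Brute-force check: sum the full joint p(x_1, x_2, y_1, y_2) over x_1
brute = np.array([sum(pi[i] * B[i, y1] * A[i, j] * B[j, y2] for i in range(2))
                  for j in range(2)])
print(np.allclose(p_x2_y1y2, brute))   # True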
At 9:48 he says p(y1, y2, y3, x3) * p(y4, y5, y6 | x3) = p(x3, Y), where Y = {y1, y2, ..., y6}. Has anyone figured out how?
Figured it out: y1, y2, y3 are independent of y4, y5, y6 given x3; that is, p(a, b, c) = p(b, c | a) * p(a) = p(b | a) p(c | a) p(a) = p(a, b) p(c | a).
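Spelling the same argument out with the concrete variables (this is just the identity above with a = x_3, b = (y_1, y_2, y_3), c = (y_4, y_5, y_6); the only ingredient is the standard HMM assumption that past and future observations are conditionally independent given the current state):

p(y_1, y_2, y_3, x_3) * p(y_4, y_5, y_6 | x_3)
  = p(x_3) * p(y_1, y_2, y_3 | x_3) * p(y_4, y_5, y_6 | x_3)
  = p(x_3) * p(y_1, ..., y_6 | x_3)
  = p(x_3, y_1, ..., y_6)
  = p(x_3, Y)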
Thanks for the great explanation! Finally understood the implementation of HMM`s
Thank you for the great video! I would like to point out that it is not obvious at 9:30 how to get \alpha(x_t) * \beta(x_t) = p(x_t, Y). My thought is that \alpha(x_t) * \beta(x_t) = p(x_t, y_1 ~ y_t) * p(y_{t+1} ~ y_T | x_t) = p(y_1 ~ y_t | x_t) * p(x_t) * p(y_{t+1} ~ y_T | x_t) *=* p(y_1 ~ y_T | x_t) * p(x_t) = p(x_t, y_1 ~ y_T). The step marked '*=*' follows from the Markov assumption, which can be explained as "given the current state x_t, the future observations y_{t+1} ~ y_T are independent of the previous states x_1 ~ x_{t-1} and of the previous observations y_1 ~ y_t", so the two conditional probabilities can be merged as shown. (Wondering if my thought is correct...)
Your math checks out to me, but I am new to this as well.
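For anyone who wants to sanity-check the \alpha(x_t) * \beta(x_t) = p(x_t, Y) identity numerically, here is a minimal Python sketch with a made-up HMM (the numbers in A, B, and pi are arbitrary, not taken from the video); it compares the forward-backward product against a brute-force sum over all state paths.

import itertools
import numpy as np

# Toy two-state HMM with made-up parameters (not from the video).
A  = np.array([[0.8, 0.2], [0.3, 0.7]])   # A[i, j] = p(x_{t+1}=j | x_t=i)
B  = np.array([[0.6, 0.4], [0.1, 0.9]])   # B[i, k] = p(y_t=k | x_t=i)
pi = np.array([0.5, 0.5])                 # pi[i]   = p(x_1=i)
y  = [0, 1, 1, 0]                         # observed sequence y_1 .. y_T
T, S = len(y), 2

# Forward pass: alpha[t, i] = p(y_1..y_t, x_t=i)
alpha = np.zeros((T, S))
alpha[0] = pi * B[:, y[0]]
for t in range(1, T):
    alpha[t] = (alpha[t-1] @ A) * B[:, y[t]]

# Backward pass: beta[t, i] = p(y_{t+1}..y_T | x_t=i)
beta = np.ones((T, S))
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, y[t+1]] * beta[t+1])

# Brute force: p(x_t=i, y_1..y_T) by summing over every state path
joint = np.zeros((T, S))
for path in itertools.product(range(S), repeat=T):
    p = pi[path[0]] * B[path[0], y[0]]
    for t in range(1, T):
        p *= A[path[t-1], path[t]] * B[path[t], y[t]]
    for t in range(T):
        joint[t, path[t]] += p

print(np.allclose(alpha * beta, joint))   # True: alpha_t * beta_t = p(x_t, Y) for every t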
Would you kindly do another video series on the Hierarchical version of HMM? And when should we prefer to use the Hierarchical version? It would be great if you could provide an implementation as well, in Python, R, or MATLAB.
Such an amazing video. Very easy to understand! Thanks so much for the effort.
6:52
thanks