His explanations on the normally convolutedly explained Markov Inequality, Chebyshev's Inequality, heck, even convergence of limit, are spotless! Such a gem. Thank you MIT, thank you Prof.Tsitsiklis.
He just blew my mind with his explanation of Markov's Inequality.
Thank you Prof. Tsitsiklis. The way you explain the subject truly sheds light into my stone head, which would otherwise remain a stone.
6 more lectures to go, let's finish strong.
for those interested in following the course or learning the material more deeply, I've personally just been doing problems from the book rather than the OCW page (most of the problem sets are from the book anyway, not sure about the exams). I recommend looking through the problems because I find that they show different ways of applying the concepts and help develop problem-solving intuition. If you're like me and coming from a very non-mathematical background, I think putting in a lot of time (10+ h/week) to learn the material well is a very worthwhile endeavour, if you can afford it.
strong.
which book do you mean?
I do both.
The opening remark was just wonderful to listen to!
This theorem should be called "The Fundamental theorem of probability theory" and the Central Limit Theorem, "the Fundamental theorem of Statistics"
I will ask them to change it.
@TheArselover thanks
Thank you so much. I'm extremely grateful to be living in a time where this much knowledge is so accessible. I'll try my best to support this content :)
Mathematically speaking, it does not tell you about the distribution of Sn. Wow!! Thank you so much. I have done many revisions of this series.
It's just amazing how clearly he explains the topics; big thanks to the professor. I wonder which other courses he teaches at MIT are available on OCW? Also, I think the Prof. is an author of the textbook for this course, which must be great as well, so I need to look into getting it!
my favorite anime episode so far, what a plot twist. very twisty.
Now I really understand what Chebyshev is actually saying :)
It would be easier if 5 penguins were used as a "weak sample" to work through the Markov theory.
Thank you for the lecture, extremely useful!
Andrei Krishkevich, there is no connection between e and e'; the notation is just somewhat misleading.
I think that at 21:40 it should say "For all e > e' > 0", or something of that nature. Otherwise there is no connection between e and e'.
Initially I thought like you... but after some deeper thought I saw there actually is "a connection". Here is the statement he wrote. For every eps > 0: [ for every eps' > 0: [ there exists an n0 such that when n >= n0, then P(|Yn - a| >= eps) <= eps' ] ]
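For reference, the nested-quantifier statement being discussed (convergence of Yn to a in probability, in the notation of the comment above) can be written out as:

```latex
% Y_n converges to a in probability:
\forall \varepsilon > 0:\;
\forall \varepsilon' > 0:\;
\exists n_0 \text{ such that }
n \ge n_0 \;\Longrightarrow\;
P\big(|Y_n - a| \ge \varepsilon\big) \le \varepsilon'
```

Note that eps fixes the accuracy band around a, while eps' bounds the probability of falling outside it; the two play different roles, which is why no relation like e > e' is needed.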
What are X_1, X_2, ..., X_n? Is each X_i a single measurement, or a bunch of measurements (of, let's say, penguin height)?
EDIT: Now that I think of it, a single measurement is also a special case of a "bunch of measurements".
They are a bunch of measurements from a single sample space, so each X_i is a random variable, and their sum is too.
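A minimal sketch of this, with made-up penguin numbers (the mean of 50 cm and spread of 5 cm are hypothetical): each X_i is one measurement, and the sample mean of many such measurements concentrates near the true mean.

```python
import random

random.seed(0)

# Each X_i is one measurement: the height (cm) of one randomly chosen penguin.
# The distribution here is purely illustrative (mean 50 cm, std dev 5 cm).
def measure_one_penguin():
    return random.gauss(50, 5)

# n independent measurements X_1, ..., X_n
n = 10_000
xs = [measure_one_penguin() for _ in range(n)]

# The sample mean M_n = (X_1 + ... + X_n) / n concentrates near 50,
# which is what the weak law of large numbers is about.
m_n = sum(xs) / n
print(round(m_n, 1))
```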
How did he replace x in x.P(x) by "a" for all x >= a?
Can somebody please explain?
It was an inequality, so doing that was allowed; he changed things, but the inequality was still satisfied.
It's simple. When you're calculating the mean of a random variable, you multiply each outcome's value by its probability and then sum over all outcomes to obtain the expected value.
But when you do the same procedure only for outcomes greater than or equal to a particular value 'a', you're omitting some of the (nonnegative) terms from that sum, which makes it smaller than or equal to the expected value.
Because the sum goes over all x bigger than a. Example: in the set {0, 1, 2, 3, 4}, you can sum all of the numbers, or you can sum some of them; if you sum from 3 to 4, it's 7, but if you sum all of them, it's 10. And even if p(x) gets smaller as x gets bigger, it doesn't matter: for x_i >= a, x_i*p(x) is always at least a*p(x), because multiplying both sides of an inequality by a positive number preserves the relation between the left-hand side and the right-hand side.
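The inequality being discussed (Markov: P(X >= a) <= E[X]/a for nonnegative X) is easy to sanity-check numerically. Here's a quick sketch using a made-up nonnegative distribution (Exponential with rate 1, so E[X] = 1):

```python
import random

random.seed(1)

# A nonnegative random variable: X ~ Exponential(rate=1), so E[X] = 1
samples = [random.expovariate(1.0) for _ in range(100_000)]

e_x = sum(samples) / len(samples)  # estimate of E[X]
for a in (1.0, 2.0, 3.0):
    # estimate of the tail probability P(X >= a)
    p_tail = sum(x >= a for x in samples) / len(samples)
    # Markov's inequality: P(X >= a) <= E[X] / a for any a > 0
    assert p_tail <= e_x / a
    print(f"a={a}: P(X>=a)={p_tail:.4f} <= E[X]/a={e_x / a:.4f}")
```

The bound holds but is loose: for this distribution the true tail is e^(-a), much smaller than 1/a.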
at 5:12
In the final derivation of the Markov inequality, why did he replace "greater than or equal" with just "equal" in a*P(X >= a)?
seems it's a notation mistake.
It's equal to the last part of the derivation (the sum of a*p(x)), but still less than or equal to the first part (E[X]).
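For what it's worth, the full chain for a discrete nonnegative X can be written out; the last two steps are genuine equalities (which is why the "=" appears), and only the middle steps are inequalities:

```latex
E[X] = \sum_{x} x\, p(x)
     \;\ge\; \sum_{x \ge a} x\, p(x)
     \;\ge\; \sum_{x \ge a} a\, p(x)
     \;=\; a \sum_{x \ge a} p(x)
     \;=\; a\, P(X \ge a)
```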
I thun he wanted to emphasize that you can pick e' as small as you want.
Huwen = When
27:24, the subtitles are wrong; I think he means "n" rather than "an". It is important.
bad finishing
cool
god but change g to j and d to hn
Excellent lecturer, but terrible cameraman! The cameraman has no mathematical background whatsoever, and it is frustrating how s/he is either too slow or too fast to zoom in and out of the picture during the lecture!
It's free, and you don't lose anything by waiting 1 or 2 seconds between what the prof is saying and the blackboard/slides.