Henrik Hellström
Joined 18 Sep 2021
Probability and Statistics: Total Probability Theorem
The total probability theorem extends conditional probability to the case where the sample space is split into multiple subsets. This theorem has been very useful to me in my engineering career, and it can be used to solve the classical Russian Roulette problem.
Prerequisite:
th-cam.com/video/MoLEpeqRKs0/w-d-xo.html
Extension:
en.wikipedia.org/wiki/Russian_roulette
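For reference, the standard statement of the theorem for events B_1, ..., B_n that partition the sample space is:

```latex
P(A) = \sum_{i=1}^{n} P(A \mid B_i)\, P(B_i),
\qquad B_i \cap B_j = \emptyset \ (i \neq j), \quad \bigcup_{i=1}^{n} B_i = \Omega .
```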
Views: 70
Videos
Probability and Statistics: Conditional Probability
127 views · 1 year ago
Conditional probability is the probability of one event occurring, given that another event has already occurred. This concept is incredibly useful in solving problems and is a prerequisite for essential theorems such as the Total Probability Theorem. Prerequisite: th-cam.com/video/nJmDdWofAgk/w-d-xo.html Extension: en.wikipedia.org/wiki/Law_of_total_probability
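For reference, the standard definition of conditional probability is:

```latex
P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad P(B) > 0 .
```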
Probability and Statistics: Kolmogorov's Axioms
2.5K views · 1 year ago
Kolmogorov's axioms form the foundation of formal probability theory. Starting from Kolmogorov's axioms, a multitude of interesting results can be derived. Andrey Kolmogorov: en.wikipedia.org/wiki/Andrey_Kolmogorov Example theorems that can be proven with Kolmogorov's axioms: en.wikipedia.org/wiki/Complementary_event www.brainkart.com/article/Addition-Theorem-of-Probability_39444/ Update: I ask...
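For reference, Kolmogorov's three axioms for a probability measure P on a sample space Ω with event collection F are, in standard form:

```latex
\begin{aligned}
&\text{1. Non-negativity:} && P(A) \ge 0 \quad \text{for every event } A \in \mathcal{F},\\
&\text{2. Unit measure:} && P(\Omega) = 1,\\
&\text{3. Countable additivity:} && P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i)
\quad \text{for pairwise disjoint } A_1, A_2, \dots
\end{aligned}
```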
GLOBECOM 2022: Unbiased Over-the-Air Computation via Retransmissions
222 views · 2 years ago
Video presentation of my GLOBECOM 2022 conference paper "Unbiased Over-the-Air Computation via Retransmissions". The paper will also be presented in person in Rio de Janeiro, Brazil, on 5 December 2022.
Over the Air Federated Learning with Retransmissions - SPAWC
255 views · 3 years ago
This is a video presentation of the paper "Over-the-Air Federated Learning with Retransmissions" presented at the IEEE Conference on Signal Processing Advances in Wireless Communication, 2021, Lucca, Italy. Apologies for the terrible sound quality; I'll make sure to be more careful in future presentations.
Probability and Statistics: One Dimensional Random Variables
600 views · 3 years ago
Random variables are essential for describing and computing problems involving random phenomena. In this video, I introduce discrete random variables and the probability mass function, and solve a motivational problem. As a sanity check, I have also made a simple MATLAB script that runs 100,000 iterations of the game in the motivational problem (the chance of winning is indeed about 32%): github.com/henrik...
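The linked script is in MATLAB and the exact game rules are only described in the video, so the following is just a minimal Python sketch of the same Monte Carlo sanity-check idea; play_game() is a hypothetical stub standing in for the actual game logic.

```python
import random

def play_game():
    # Hypothetical stub: replace with the actual rules of the game
    # described in the video. Here it simply wins with probability 0.32,
    # mimicking the quoted result, purely for illustration.
    return random.random() < 0.32

def estimate_win_probability(n_trials=100_000):
    # Monte Carlo estimate: the fraction of simulated games that are won.
    wins = sum(play_game() for _ in range(n_trials))
    return wins / n_trials

print(f"Estimated win probability: {estimate_win_probability():.3f}")
```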
Probability and Statistics: Urn Model
1.7K views · 3 years ago
Urn models are mental frameworks often employed in probability theory and statistics. These models can be applied to a wide class of probability problems by rephrasing the problem as an urn model. Binomial Coefficients: en.wikipedia.org/wiki/Binomial_coefficient th-cam.com/video/Pcgvv6T_bD8/w-d-xo.html
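As an illustration of the idea (a generic example, not the specific problem from the video), here is a small Python snippet that uses binomial coefficients to answer a typical urn question: drawing 5 balls without replacement from an urn with 7 red and 5 blue balls.

```python
from math import comb

# Urn with 7 red and 5 blue balls; draw 5 without replacement.
# P(exactly 2 red) = C(7,2) * C(5,3) / C(12,5)
p_two_red = comb(7, 2) * comb(5, 3) / comb(12, 5)
print(f"P(exactly 2 red) = {p_two_red:.4f}")  # approximately 0.2652
```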
Thank you sir!! Your channel is just great, I hope more students get to find it, and I also hope that you continue sharing more videos on probability.
Thank you, Serine! I'm glad that you liked the video :)
This helped so much thank you!!
Thank you for letting me know, makes me glad to hear that!
In the dice problem, the solution can be worded even more simply. It is given that the probability of rolling 1, 2, or 3 is twice that of 4, 5, or 6, so let's assume P(4) = P(5) = P(6) = p. Then P(1) = P(2) = P(3) = 2p (given). Now, we know that the total probability must be 1 (I mean, when you roll the die, you are bound to get a number from 1 to 6). Therefore, 2p + 2p + 2p + p + p + p = 1, which gives p = 1/9. Now we can find all sorts of occurrence probabilities. I mean, you did the same thing, mine just looks easier to me lol. But your method is of course more useful when problems get harder; intuition fails many times. Anyway, good vid.
You are correct! Essentially, your approach is the same as what I do in the video, just based more on intuition and less on mathematical rigor. During my daily work as an engineer, my solutions look much more like what you're writing in this comment and less like what you see in this video ^^ However, as you point out, intuition only takes you so far, and it can be misleading. Many times, I convince myself of a solution at the whiteboard, and then when I go to implement the resulting algorithm in real life, it doesn't perform as expected, usually because I made some mistake in the intuitive solution. Then you have to go through the slog of looking at every detail and figuring out where your intuition failed :P
I found the probability by thinking of a die's probability as being 1/6. Since P(Ω) needs to sum to one, we get 6*(1/6) = 1. I then replaced the 6 with x, giving 6*(1/x) = 1, and expanded it to 3*(1/x) + 3*(1/x) = 1. I then multiplied the first term by two, since 1, 2, 3 are twice as likely, and got 6*(1/x) + 3*(1/x) = 1. Solving for x gives x = 9. Even though they are pretty much the same thing, I like your way of thinking about this solution better haha.
That's awesome, Rashawn! Finding a new way to solve the problem really reinforces that you understood it.
I could not understand why the last question's answer is 4/9 instead of 1/2. I think it is supposed to be 1/2. Can you explain it to me?
If the die were a normal die (all six sides show up with equal probability), then the answer 1/2 would be correct! However, note that the die is not fair. This die is loaded such that the outcomes "one", "two", and "three" turn up twice as often as the remaining three outcomes. I describe the problem at 0:37. Intuitively, since there are more odd numbers than even numbers in the set {1,2,3}, we should expect even numbers to show up less often than 50% of the time.
@@henrikhellstrom1241 Oh yes, now I understand, I missed that point I guess. Thank you for replying to me.
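To make the arithmetic behind the 4/9 explicit, here is a quick check with Python's Fraction type, using the probabilities p = 1/9 and 2p derived in the comments above:

```python
from fractions import Fraction

# Loaded die from the video: 1, 2, 3 are twice as likely as 4, 5, 6.
# Solving 2p + 2p + 2p + p + p + p = 1 gives p = 1/9.
p = Fraction(1, 9)
prob = {1: 2 * p, 2: 2 * p, 3: 2 * p, 4: p, 5: p, 6: p}

p_even = prob[2] + prob[4] + prob[6]
print(p_even)  # 4/9, not 1/2, because the die is not fair
```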
very cool
At 4:47, how is the server able to broadcast the new model knowing only the weighted sum? And how does any device derive what the new model is based on what is being broadcast?
Hello Taartin, thank you for your comment! First question: The server is able to broadcast the new model because the weighted sum is exactly equal to the new model, i.e., the equation that you see inside of the "cloud" at 4:47 is how the new global model is generated. This model update step comes directly from the original description of Federated Learning. You can find that description in "Communication-Efficient Learning of Deep Networks from Decentralized Data" by McMahan et al. Second question: The device will take the broadcasted global model w^t and then compute a local update w_k^t by running a training algorithm. The most common learning algorithms are variants of the stochastic gradient descent algorithm. These algorithms are extremely popular, so a search for "gradient descent" will give you many good sources on the topic.
@@henrikhellstrom1241 I did not realize that the weighted model was for each parameter. I thought it was one sum for all parameters which did not make sense to me. Thank you very much for your answer:)
@@Taartin Ah, I see! Yes, perhaps I should have been more clear about that. :) To clarify for the future: the idea is that each parameter in the global weight vector is computed as a separate function over-the-air. In other words, if you have a 100,000 dimensional model and K users, you compute 100,000 functions. In an orthogonal communication scheme, you would communicate 100,000*K parameters and then compute 100,000 functions in the CPU at the server.
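To make the aggregation step concrete, here is a small NumPy sketch of the standard FedAvg-style weighted average from McMahan et al. (an illustration of the per-parameter model-update equation, not the paper's over-the-air transmission scheme):

```python
import numpy as np

def aggregate(local_models, n_samples):
    # local_models: list of K local weight vectors (one per device), same shape
    # n_samples: list of K local dataset sizes n_k
    weights = np.asarray(n_samples) / np.sum(n_samples)  # n_k / n
    stacked = np.stack(local_models)                      # shape (K, d)
    return weights @ stacked                              # per-parameter weighted sum

# Example: 3 devices, 4-dimensional model
local = [np.array([1.0, 2.0, 3.0, 4.0]),
         np.array([0.0, 1.0, 0.0, 1.0]),
         np.array([2.0, 2.0, 2.0, 2.0])]
global_model = aggregate(local, n_samples=[100, 50, 50])
print(global_model)
```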
I study civil engineering in Norway and I would like to thank you for this useful video.
Thank you for your kind comment, Salah!
Well, please correct me if I'm wrong, but doesn't this mean that the man also has a win probability of 68%?
Hello, k j! That's right, the man will have a win probability of approximately 68%! However, note that when the man wins, you are only losing 1 dollar, and when you win you are gaining 3 dollars. So even though the man wins 68% of all games, you are still making a profit on average!
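Spelling out the expected profit per game with the approximate probabilities from this thread:

```latex
\mathbb{E}[\text{profit}] \approx 0.32 \cdot 3 - 0.68 \cdot 1 = 0.28 > 0 .
```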
Wow, great video. I hope you will upload more videos in the future 👍
Will do!!