ML Explained
Joined Oct 21, 2011
Check out my videos explaining machine learning (ML) and data science fundamentals.
The Reparameterization Trick
This video covers what the reparameterization trick is and when we use it. It also explains the trick from a mathematical/statistical perspective.
CHAPTERS:
00:00 Intro
00:28 What/Why?
08:17 Math
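The sampling step the video describes — drawing z = μ + σ·ε with ε from a standard normal, so the randomness is moved out of the learned parameters — can be sketched in plain Python (a minimal illustration with a function name of my choosing, not the video's code):

```python
import random

def sample_reparameterized(mu, sigma):
    """Sample z ~ N(mu, sigma^2) via z = mu + sigma * eps, eps ~ N(0, 1).

    The randomness lives entirely in eps, so gradients can flow
    through mu and sigma during training.
    """
    eps = random.gauss(0.0, 1.0)
    return mu + sigma * eps
```

With `sigma = 0` the sample collapses deterministically to `mu`, which makes the dependence on the parameters explicit.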
Views: 15,680
Videos
Precision-Recall
437 views · 1 year ago
In this video, I explain what the Precision-Recall curve is and when it is used. VIDEO CHAPTERS 0:00 Introduction 0:32 ROC, AUC recap 2:50 ROC, AUC limitations 4:45 Precision-Recall
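The precision-recall curve discussed here is traced by sweeping a decision threshold down the sorted prediction scores; a minimal sketch (function name is mine, not from the video):

```python
def precision_recall_points(scores, labels):
    """Compute (precision, recall) at each score threshold.

    scores: predicted probabilities; labels: 1 = positive, 0 = negative.
    Walking scores from highest to lowest, each step classifies one more
    example as positive and records the resulting precision/recall pair.
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    tp = fp = 0
    points = []
    for i in order:
        if labels[i] == 1:
            tp += 1
        else:
            fp += 1
        points.append((tp / (tp + fp), tp / total_pos))
    return points
```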
PyTorch 2D Convolution
7K views · 1 year ago
In this video, we cover the input parameters for the PyTorch torch.nn.Conv2d module. VIDEO CHAPTERS 0:00 Introduction 0:37 Example 2:46 torch.nn.Conv2d 3:28 Input/Output Channels 4:40 Kernel 5:15 Stride 6:17 Padding 8:01 Dilation 9:04 Groups 11:13 Bias 11:50 Output Shape
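The output-shape computation covered at the end (from the torch.nn.Conv2d documentation formula) can be checked with a small helper, sketched here with a name of my choosing:

```python
import math

def conv2d_out_size(size, kernel, stride=1, padding=0, dilation=1):
    """Output spatial size of torch.nn.Conv2d for one dimension,
    per the formula in the PyTorch docs:
    floor((size + 2*padding - dilation*(kernel - 1) - 1) / stride + 1)
    """
    return math.floor((size + 2 * padding - dilation * (kernel - 1) - 1) / stride + 1)
```

For example, a 3x3 kernel with padding 1 and stride 1 preserves the input size.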
F-Beta Score
228 views · 1 year ago
This video explains the definition of the F-beta score. VIDEO CHAPTERS 0:00 Introduction 0:38 Precision, Recall, F1-Score 4:41 F-Beta score
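The F-beta definition the video presents — a weighted harmonic mean of precision and recall, where beta > 1 weights recall more heavily — can be sketched as (function name is mine):

```python
def f_beta(precision, recall, beta=1.0):
    """F-beta = (1 + beta^2) * P * R / (beta^2 * P + R).

    beta = 1 recovers the F1 score; beta = 2 favors recall,
    beta = 0.5 favors precision.
    """
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```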
Number of Substrings in a String
374 views · 1 year ago
In this video, we will go over how to count the number of substrings in a given string
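Counting positions rather than distinct strings, the number of non-empty contiguous substrings of a length-n string is n(n + 1)/2 (n choices of start times the remaining choices of end). A minimal sketch, with a function name of my choosing:

```python
def count_substrings(s):
    """Number of non-empty contiguous substrings of s, counted by
    position: one per (start, end) pair, i.e. n * (n + 1) // 2.
    """
    n = len(s)
    return n * (n + 1) // 2
```

For "abcde" (n = 5) this gives 15.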
Generative Adversarial Networks GAN Loss Function (MinMaxLoss)
6K views · 1 year ago
This video describes the MinMax loss function, also known as the generative adversarial network (GAN) loss function. #datascience #machinelearning #statistics #explained
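The GAN value function V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))], which the discriminator maximizes and the generator minimizes, can be evaluated on sample batches with a toy helper (a sketch; the name and batch-average estimate are my own framing, not the video's code):

```python
import math

def minmax_value(d_real, d_fake):
    """Monte-Carlo estimate of V(D, G): the two expectations are
    approximated by averaging over discriminator outputs in (0, 1)
    for real samples (d_real) and generated samples (d_fake).
    """
    term_real = sum(math.log(d) for d in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - d) for d in d_fake) / len(d_fake)
    return term_real + term_fake
```

At the classic equilibrium where D outputs 0.5 everywhere, the value is −2·log 2.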
Beeswarm Plots (Including SHAP Values)
5K views · 1 year ago
This video describes how to read beeswarm plots and how they differ from histograms. VIDEO CHAPTERS 0:00 Introduction 0:21 Beeswarm and histograms 2:42 How to read beeswarm plots 3:50 How to read SHAP values beeswarm plots #datascience #machinelearning #statistics #shap_values #histogram
TP, FP, TN, FN, Accuracy, Precision, Recall, F1-Score, Sensitivity, Specificity, ROC, AUC
18K views · 1 year ago
In this video, we cover the definitions that revolve around classification evaluation - True Positive, False Positive, True Negative, False Negative, Accuracy, Precision, Recall, F1-Score, Sensitivity, Specificity, ROC, AUC These metrics are widely used in machine learning, data science, and statistical analysis. #machinelearning #datascience #statistics #explanation #explained VIDEO CHAPTERS 0...
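The definitions covered in this video all derive from the four confusion-matrix counts; they can be collected into one small helper (a sketch, with a name of my choosing):

```python
def classification_metrics(tp, fp, tn, fn):
    """Common classification metrics from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)        # also called sensitivity / true positive rate
    specificity = tn / (tn + fp)   # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "accuracy": accuracy, "f1": f1}
```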
your voice is literally from the Giorgio by Moroder song
My input always reads (N, Hin, Win, Cin). How can I fix it to (N, Cin, Hin, Win)?
Very good explanation thank you
dope
Helpful
Thanks a lot!
Thank you!
Excellent explanation
Thank you
It is cool although I don't really understand the second half. 😅
thanks sir.
WOW! THANK U. FINALLY MAKING IT EASY TO UNDERSTAND. WATCHED SO MANY VIDEOS ON VAE AND THEY JUST BRIEFLY GO OVER THE EQUATION WITHOUT EXPLAINING
Thanks for the video! I believe for the group example, the input sample is (8x5x5) instead of (8x7x7)
Very helpful. Thank you 🙏. Question. On the last plot in the video how to interpret that one SHAP red observation (at around -4 value) of the Age feature? Is that a potential outlier?
Great video, made me understand CNNs in Python much better. Thank you!
I have a small question about the video, that slightly bothers me. What this normal distribution we are sampling from consists of? If it's distribution of latent vectors, how do we collect them during training?
Thank you this is the best explanation ❤❤❤
Thanks! More videos please!
Thank you a lot! I finally understood where all these filters come from!
Can I have these slides please within respective concern 🙏💓
great video, thank you so much. this video is worth more than 100 wiki pages
thx
Thank you. One of the best and clearest explanations I have seen without wading through a ton of documentation.
Thank you !
Sometimes understanding the complexity makes a concept clearer. This was one such example. Thanks a lot.
perfect explanation, straightforward and purely informative. thx.
Thanks, this is a good explanation of the black point of VAE
The derivative of the expectation is the expectation of the derivative? That's surprising to my feeble mind.
Thank you for the video!!!
great explanation
your formula n*(n - 1)/2 gives 5*4/2 = 10 which is incorrect
What if I change the kernel size and want to keep my input size? Would padding still be 1?
Thank you for this video, this has helped me a lot
Great intro
Very good explanation, thank you. Please continue to make videos.
you goated for this one fam
Thank you for this video, this has helped a lot in my own research on the topic
Very nice video, it helped me a lot. Finally someone explaining math without leaving the essential parts aside.
Super clear explanation. Thanks !!!
nice
Thank you, I liked your intuition, amazing effort.
Also please correct me if I am wrong but I think at minute 17 you should not use the same theta notation for both "g_theta()" and "p_theta()" since you assumed that you do not know the theta parameters (the main cause of differentiation problem) for "p()" but you know the parameters for "g()".
This was the analogy I got from ChatGPT to understand the problem 😅. Hope it's useful to someone:

"Certainly, let's use an analogy involving shooting a football and the size of a goalpost to explain the reparameterization trick:

Imagine you're a football player trying to score a goal by shooting the ball into a goalpost. However, the goalpost is not of a fixed size; it varies based on certain parameters that you can adjust. Your goal is to optimize your shooting technique to score as many goals as possible. Now, let's draw parallels between this analogy and the reparameterization trick:

1. **Goalpost Variability (Randomness):** The size of the goalpost represents the variability introduced by randomness in the shooting process. When the goalpost is larger, it's more challenging to score, and when it's smaller, it's easier.
2. **Shooting Technique (Model Parameters):** Your shooting technique corresponds to the parameters of a probabilistic model (such as `mean_p` and `std_p` in a VAE). These parameters affect how well you can aim and shoot the ball.
3. **Optimization:** Your goal is to optimize your shooting technique to score consistently. However, if the goalpost's size (randomness) changes unpredictably every time you shoot, it becomes difficult to understand how your adjustments to the shooting technique (model parameters) are affecting your chances of scoring.
4. **Reparameterization Trick:** To make the optimization process more effective, you introduce a fixed-size reference goalpost (a standard normal distribution) that represents a known level of variability. Every time you shoot, you still adjust your shooting technique (model parameters), but you compare your shots to the reference goalpost.
5. **Deterministic Transformation:** This reference goalpost allows you to compare and adjust your shooting technique more consistently. You're still accounting for variability, but it's structured and controlled. Your technique adjustments are now more meaningful because they're not tangled up with the unpredictable variability of the changing goalpost.

In this analogy, the reparameterization trick corresponds to using a reference goalpost with a known size to stabilize the optimization process. This way, your focus on optimizing your shooting technique (model parameters) remains more effective, as you're not constantly grappling with unpredictable changes in the goalpost's size (randomness)."
oh my god !! So good.
dam nice bro, thank you for this
Very good!
Super helpful thx
This is a life changing video, thank you very much 😊 🙏🏻
Minimax is same as max for discriminator?
Not just GANs, it's used in adversarial attacks too.
Very well explained. Thank you very much. I just pressed the Subscribe button :)
Holy God. What a great teacher..
The best explanation. Thank you.