ML Explained
The Reparameterization Trick
This video covers what the reparameterization trick is and when we use it. It also explains the trick from a mathematical/statistical perspective.
CHAPTERS:
00:00 Intro
00:28 What/Why?
08:17 Math
Views: 15,680
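The trick the description refers to is usually written as z = μ + σ·ε with ε ~ N(0, 1), which moves the randomness out of the parameters so gradients can flow. A minimal PyTorch sketch (illustrative shapes, not taken from the video):

```python
import torch

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) as z = mu + sigma * eps, eps ~ N(0, 1).

    Because the randomness lives only in eps, backpropagation can
    differentiate through mu and log_var.
    """
    sigma = torch.exp(0.5 * log_var)   # log-variance -> standard deviation
    eps = torch.randn_like(sigma)      # noise from a fixed N(0, 1)
    return mu + sigma * eps

mu = torch.zeros(3, requires_grad=True)
log_var = torch.zeros(3, requires_grad=True)
z = reparameterize(mu, log_var)
z.sum().backward()                     # gradients reach mu and log_var
```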

Videos

Precision-Recall
437 views • 1 year ago
In this video, I explain what the Precision-Recall curve is and when it is used. VIDEO CHAPTERS 0:00 Introduction 0:32 ROC, AUC recap 2:50 ROC, AUC limitations 4:45 Precision-Recall
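A precision-recall curve is traced by sweeping the decision threshold and recomputing both metrics at each point; a small self-contained sketch with made-up scores and labels:

```python
def pr_points(scores, labels, thresholds):
    """One (precision, recall) pair per threshold; plotting them gives the PR curve."""
    points = []
    for t in thresholds:
        preds = [s >= t for s in scores]
        tp = sum(p and y for p, y in zip(preds, labels))
        fp = sum(p and not y for p, y in zip(preds, labels))
        fn = sum(not p and y for p, y in zip(preds, labels))
        precision = tp / (tp + fp) if tp + fp else 1.0  # convention when nothing is predicted positive
        recall = tp / (tp + fn) if tp + fn else 0.0
        points.append((precision, recall))
    return points

# Toy example: model scores, true labels, two thresholds.
pts = pr_points([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 1], [0.5, 0.2])
```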
PyTorch 2D Convolution
7K views • 1 year ago
In this video, we cover the input parameters for the PyTorch torch.nn.Conv2d module. VIDEO CHAPTERS 0:00 Introduction 0:37 Example 2:46 torch.nn.Conv2d 3:28 Input/Output Channels 4:40 Kernel 5:15 Stride 6:17 Padding 8:01 Dilation 9:04 Groups 11:13 Bias 11:50 Output Shape
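The parameters listed in the chapters combine into a single output-shape formula; a quick sketch with illustrative sizes (not the video's example):

```python
import torch
import torch.nn as nn

# Hypothetical sizes chosen for illustration.
conv = nn.Conv2d(in_channels=3, out_channels=8,
                 kernel_size=3, stride=2, padding=1,
                 dilation=1, groups=1, bias=True)

x = torch.randn(4, 3, 32, 32)   # (N, C_in, H_in, W_in)
y = conv(x)

# Output spatial size:
# H_out = floor((H_in + 2*padding - dilation*(kernel - 1) - 1) / stride) + 1
h_out = (32 + 2 * 1 - 1 * (3 - 1) - 1) // 2 + 1
```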
F-Beta Score
228 views • 1 year ago
This video explains the definition of the F-beta score VIDEO CHAPTERS 0:00 Introduction 0:38 Precision, Recall, F1-Score 4:41 F-Beta score
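The F-beta score generalizes F1 by weighting recall β times as heavily as precision; a minimal sketch with made-up precision/recall values:

```python
def f_beta(precision, recall, beta):
    """F_beta = (1 + b^2) * P * R / (b^2 * P + R); beta > 1 favours recall."""
    b2 = beta ** 2
    denom = b2 * precision + recall
    return (1 + b2) * precision * recall / denom if denom else 0.0

f1 = f_beta(0.5, 0.5, beta=1)   # beta = 1 reduces to the ordinary F1 score
f2 = f_beta(1.0, 0.5, beta=2)   # recall-weighted variant
```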
Number of Substrings in a String
374 views • 1 year ago
In this video, we will go over how to count the number of substrings in a given string
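A string of length n has n·(n + 1)/2 non-empty contiguous substrings (one per choice of start ≤ end index); a sketch that counts occurrences, not distinct substrings, with a brute-force cross-check:

```python
def count_substrings(s):
    """Closed form: one substring per (start, end) index pair with start <= end."""
    n = len(s)
    return n * (n + 1) // 2

def count_substrings_brute(s):
    """Brute-force check by enumerating every non-empty slice."""
    return sum(1 for i in range(len(s)) for j in range(i + 1, len(s) + 1))
```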
Generative Adversarial Networks GAN Loss Function (MinMaxLoss)
6K views • 1 year ago
This video describes the MinMax loss function, also known as the generative adversarial network (GAN) loss function. #datascience #machinelearning #statistics #explained
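The MinMax objective is V(D, G) = E[log D(x)] + E[log(1 − D(G(z)))], maximized over the discriminator and minimized over the generator; a tiny numeric sketch with made-up discriminator outputs:

```python
import math

def minmax_value(d_real, d_fake):
    """V(D, G) from lists of discriminator probabilities on real and generated samples."""
    return (sum(math.log(p) for p in d_real) / len(d_real)
            + sum(math.log(1 - q) for q in d_fake) / len(d_fake))

# An undecided discriminator (0.5 everywhere) gives V = -2*log(2),
# the value at the theoretical equilibrium.
v = minmax_value([0.5, 0.5], [0.5, 0.5])
```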
Beeswarm Plots (Including SHAP Values)
5K views • 1 year ago
This video describes how to read beeswarm plots and how they differ from histograms. VIDEO CHAPTERS 0:00 Introduction 0:21 Beeswarm and histograms 2:42 How to read beeswarm plots 3:50 How to read SHAP values beeswarm plots #datascience #machinelearning #statistics #shap_values #histogram
TP, FP, TN, FN, Accuracy, Precision, Recall, F1-Score, Sensitivity, Specificity, ROC, AUC
18K views • 1 year ago
In this video, we cover the definitions that revolve around classification evaluation - True Positive, False Positive, True Negative, False Negative, Accuracy, Precision, Recall, F1-Score, Sensitivity, Specificity, ROC, AUC These metrics are widely used in machine learning, data science, and statistical analysis. #machinelearning #datascience #statistics #explanation #explained VIDEO CHAPTERS 0...
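All of the listed metrics derive from the four confusion-matrix counts; a compact sketch with made-up counts (assumes no zero denominators):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard classification metrics built from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # a.k.a. sensitivity / true positive rate
    specificity = tn / (tn + fp)     # a.k.a. true negative rate
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity, "f1": f1}

m = classification_metrics(tp=40, fp=10, tn=45, fn=5)
```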
Schale illusion
76 views • 4 years ago
Schale illusion

Comments

  • @vkkn5162
    @vkkn5162 18 days ago

    your voice is literally from the Giorgio by Moroder song

  • @adityategar6178
    @adityategar6178 21 days ago

    My input always reads (N, Hin, Win, Cin). How can I fix it to (N, Cin, Hin, Win)?

  • @carlosgruss7289
    @carlosgruss7289 24 days ago

    Very good explanation thank you

  • @wodniktoja8452
    @wodniktoja8452 25 days ago

    dope

  • @AshishOmGourav
    @AshishOmGourav 27 days ago

    Helpful

  • @xiangli1133
    @xiangli1133 1 month ago

    Thanks a lot!

  • @Qdkdj
    @Qdkdj 1 month ago

    Thank you!

  • @aristotlesocrates8409
    @aristotlesocrates8409 1 month ago

    Excellent explanation

  • @moatzmaloo
    @moatzmaloo 1 month ago

    Thank you

  • @wilsonlwtan3975
    @wilsonlwtan3975 2 months ago

    It is cool although I don't really understand the second half. 😅

  • @OgulcanYardmc-vy7im
    @OgulcanYardmc-vy7im 2 months ago

    thanks sir.

  • @s8x.
    @s8x. 2 months ago

    WOW! THANK U. FINALLY MAKING IT EASY TO UNDERSTAND. WATCHED SO MANY VIDEOS ON VAE AND THEY JUST BRIEFLY GO OVER THE EQUATION WITHOUT EXPLAINING

  • @Gan-tingLoo
    @Gan-tingLoo 2 months ago

    Thanks for the video! I believe for the group example, the input sample is (8x5x5) instead of (8x7x7)

  • @shravan6457
    @shravan6457 2 months ago

    Very helpful. Thank you 🙏. Question. On the last plot in the video how to interpret that one SHAP red observation (at around -4 value) of the Age feature? Is that a potential outlier?

  • @balintfurmann5560
    @balintfurmann5560 2 months ago

    Great video, made me understand CNNs in Python much better. Thank you!

  • @user-td8vz8cn1h
    @user-td8vz8cn1h 2 months ago

    I have a small question about the video that slightly bothers me. What does this normal distribution we are sampling from consist of? If it's a distribution of latent vectors, how do we collect them during training?

  • @aaomms7986
    @aaomms7986 2 months ago

    Thank you this is the best explanation ❤❤❤

  • @andrefurlan
    @andrefurlan 3 months ago

    Thanks! More videos please!

  • @andrefurlan
    @andrefurlan 3 months ago

    Thank you a lot! I finally understood where all these filters come from!

  • @muhammadanasali7631
    @muhammadanasali7631 3 months ago

    Can I have these slides please within respective concern 🙏💓

  • @oguzhanoguz621
    @oguzhanoguz621 3 months ago

    great video, thank you so much. this video is worth more than 100 wiki pages

  • @Commenterrist
    @Commenterrist 3 months ago

    thx

  • @user-kn1we2kp3m
    @user-kn1we2kp3m 4 months ago

    Thank you. One of the best and clearest explanations I have seen without wading through a ton of documentation.

  • @hirotrex42
    @hirotrex42 4 months ago

    Thank you !

  • @advayargade746
    @advayargade746 4 months ago

    Sometimes understanding the complexity makes a concept clearer. This was one such example. Thanks a lot.

  • @hamidfallah9223
    @hamidfallah9223 5 months ago

    perfect explanation, straightforward and purely informative. thx.

  • @slimanelarabi8147
    @slimanelarabi8147 5 months ago

    Thanks, this is a good explanation of the black point of VAE

  • @dennisestenson7820
    @dennisestenson7820 5 months ago

    The derivative of the expectation is the expectation of the derivative? That's surprising to my feeble mind.

  • @lakshman587
    @lakshman587 5 months ago

    Thank you for the video!!!

  • @user-go9er5hv7b
    @user-go9er5hv7b 5 months ago

    great explanation

  • @alexandersmirnov4274
    @alexandersmirnov4274 6 months ago

    your formula n*(n - 1)/2 gives 5*4/2 = 10 which is incorrect

  • @brucemurdock5358
    @brucemurdock5358 6 months ago

    What if I change the kernel size and want to keep my input size? Would padding still be 1?

  • @ettahiriimane4480
    @ettahiriimane4480 6 months ago

    Thank you for this video, this has helped me a lot

  • @jhmrem
    @jhmrem 6 months ago

    Great intro

  • @gogaworm
    @gogaworm 6 months ago

    Very good explanation, thank you. Please continue to make videos.

  • @user-xm3mb6dj2g
    @user-xm3mb6dj2g 6 months ago

    you goated for this one fam

  • @chasekosborne2
    @chasekosborne2 7 months ago

    Thank you for this video, this has helped a lot in my own research on the topic

  • @user-hy4kl3my6h
    @user-hy4kl3my6h 8 months ago

    Very nice video, it helped me a lot. Finally someone explaining math without leaving the essential parts aside.

  • @user-tn9ol6wb4z
    @user-tn9ol6wb4z 8 months ago

    Super clear explanation. Thanks !!!

  • @volktrololo6528
    @volktrololo6528 8 months ago

    nice

  • @amirnasser7768
    @amirnasser7768 8 months ago

    Thank you, I liked your intuition, amazing effort.

    • @amirnasser7768
      @amirnasser7768 8 months ago

      Also please correct me if I am wrong but I think at minute 17 you should not use the same theta notation for both "g_theta()" and "p_theta()" since you assumed that you do not know the theta parameters (the main cause of differentiation problem) for "p()" but you know the parameters for "g()".

  • @mohammedyasin2087
    @mohammedyasin2087 9 months ago

    This was the analogy I got from ChatGPT to understand the problem 😅. Hope it's useful to someone: "Certainly, let's use an analogy involving shooting a football and the size of a goalpost to explain the reparameterization trick: Imagine you're a football player trying to score a goal by shooting the ball into a goalpost. However, the goalpost is not of a fixed size; it varies based on certain parameters that you can adjust. Your goal is to optimize your shooting technique to score as many goals as possible. Now, let's draw parallels between this analogy and the reparameterization trick: 1. **Goalpost Variability (Randomness):** The size of the goalpost represents the variability introduced by randomness in the shooting process. When the goalpost is larger, it's more challenging to score, and when it's smaller, it's easier. 2. **Shooting Technique (Model Parameters):** Your shooting technique corresponds to the parameters of a probabilistic model (such as `mean_p` and `std_p` in a VAE). These parameters affect how well you can aim and shoot the ball. 3. **Optimization:** Your goal is to optimize your shooting technique to score consistently. However, if the goalpost's size (randomness) changes unpredictably every time you shoot, it becomes difficult to understand how your adjustments to the shooting technique (model parameters) are affecting your chances of scoring. 4. **Reparameterization Trick:** To make the optimization process more effective, you introduce a fixed-size reference goalpost (a standard normal distribution) that represents a known level of variability. Every time you shoot, you still adjust your shooting technique (model parameters), but you compare your shots to the reference goalpost. 5. **Deterministic Transformation:** This reference goalpost allows you to compare and adjust your shooting technique more consistently. You're still accounting for variability, but it's structured and controlled. 
Your technique adjustments are now more meaningful because they're not tangled up with the unpredictable variability of the changing goalpost. In this analogy, the reparameterization trick corresponds to using a reference goalpost with a known size to stabilize the optimization process. This way, your focus on optimizing your shooting technique (model parameters) remains more effective, as you're not constantly grappling with unpredictable changes in the goalpost's size (randomness)."

    • @safau
      @safau 6 months ago

      oh my god !! So good.

    • @metalhead6067
      @metalhead6067 5 months ago

      dam nice bro, thank you for this

  • @chrisseberino
    @chrisseberino 10 months ago

    Very good!

  • @vikimancoridis5150
    @vikimancoridis5150 10 months ago

    Super helpful thx

  • @abdelrahmanahmad3054
    @abdelrahmanahmad3054 10 months ago

    This is a life changing video, thank you very much 😊 🙏🏻

  • @arpitqw1
    @arpitqw1 11 months ago

    Minimax is same as max for discriminator?

  • @arpitqw1
    @arpitqw1 11 months ago

    Not just gans it’s used in adversarial attacks also .

  • @thewisearchitect
    @thewisearchitect 11 months ago

    Very well explained. Thank you very much. I just pressed the Subscribe button :)

  • @liuzq7
    @liuzq7 11 months ago

    Holy God. What a great teacher..

  • @liuzq7
    @liuzq7 11 months ago

    The best explanation. Thank you.
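One of the comments above asks how to turn an (N, H, W, C) input into the (N, C, H, W) layout that torch.nn.Conv2d expects; a hypothetical one-liner using torch.Tensor.permute:

```python
import torch

x_nhwc = torch.randn(4, 32, 32, 3)                # channels-last: (N, H, W, C)
x_nchw = x_nhwc.permute(0, 3, 1, 2).contiguous()  # channels-first: (N, C, H, W)
```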