The covariance matrix

  • Published 20 Aug 2024

Comments • 93

  • @hansenmarc
    @hansenmarc 2 years ago +21

    When I remember that var(x) is the same as cov(x, x), the formulas in the covariance matrix seem more consistent and make more sense to me. In other words, the whole matrix can also be defined in terms of covariances alone.
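
This observation is easy to check numerically; here is a minimal sketch with made-up data (NumPy chosen for illustration):

```python
import numpy as np

# Made-up data: five paired observations
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])

C = np.cov(x, y)  # 2x2 sample covariance matrix

# The diagonal entries are cov(x, x) and cov(y, y),
# i.e. just the variances of x and y.
assert np.isclose(C[0, 0], np.var(x, ddof=1))
assert np.isclose(C[1, 1], np.var(y, ddof=1))
# And the matrix is symmetric: cov(x, y) = cov(y, x).
assert np.isclose(C[0, 1], C[1, 0])
```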

  • @cocoarecords
    @cocoarecords 3 years ago +7

    seriously one of the best and most intuitive channels on this subject. I can show your videos to my child and he will understand

  • @shubhamtalks9718
    @shubhamtalks9718 3 years ago +12

    While I already know the covariance matrix, it is always interesting to learn concepts from your perspective.

  • @atinsood
    @atinsood 3 years ago +7

    Thank you for this; there were so many hidden tidbits of knowledge in it. Thank you for making these, and I appreciate the attention to explaining small details.

  • @blesucation4417
    @blesucation4417 9 months ago +1

    Just want to leave a comment so that more people could learn from your amazing videos! Many thanks for the wonderful and fun creation!!!

  • @deepthik2284
    @deepthik2284 10 months ago

    You know how people understand. Keep posting videos; they are very well elaborated.

  • @YanivGorali
    @YanivGorali 1 year ago +1

    After watching many videos on the subject, this one finally helped me understand. Thank you

  • @prikas4313
    @prikas4313 6 months ago

    Extremely helpful and easy to understand as someone new to this topic. Thank you for your work and actually showing examples with numbers for how each part in the covariance matrix was calculated.

  • @mikhailgritskikh7074
    @mikhailgritskikh7074 3 years ago +5

    In my opinion it would be useful to see connection between the covariance matrix and matrix transformations. Could you make a video on that please?

  • @wsylovezx
    @wsylovezx 3 years ago +4

    Thank you for this great video. There's a slight inconsistency between the correction 1/3 (in your comments) and the formula alpha^2 (at 12:24). I think the formula is correct, and in the concrete example 1/3 should be changed to 1/9.

  • @vyince
    @vyince 1 month ago

    thanks dude, couldn't understand any explanation of all that before i found your video

  • @abhisheksolanki4163
    @abhisheksolanki4163 1 year ago

    best explanation of covariance on youtube

  • @luccaemmanuel4012
    @luccaemmanuel4012 2 years ago

    this video deserves more views. Incredible work, thank you.

  • @blakeedwards3582
    @blakeedwards3582 1 year ago +1

    Fantastic video. Made the covariance very intuitive. Thank you!

  • @moudjarikhadidja6930
    @moudjarikhadidja6930 2 years ago +3

    Great video, very intuitive. Thanks a lot.
    At 12:25 in the variance formula, we divide by the sum of all weights and not the sum of weights squared and it is the same for the covariance, right?

  • @ricardoraymond9037
    @ricardoraymond9037 2 years ago

    This guy has a gift to make the tough look easy!

  • @shivkumarpippal1765
    @shivkumarpippal1765 3 years ago +9

    At 10:56, shouldn't it be divided by 10/3 instead of 4, as we have 3 and one-third data points?

    • @SerranoAcademy
      @SerranoAcademy  3 years ago +6

      Yikes, you’re right!!!! Thank you!
      I’ll add a comment

    • @shivkumarpippal1765
      @shivkumarpippal1765 3 years ago +1

      @@SerranoAcademy Thank you. Excellent video BTW

  • @jakewu9916
    @jakewu9916 3 years ago +4

    Thank you Luis for always putting sense before the equations. May I suggest a related topic to cover: Gaussian Process? So far the only YouTube video that I found intuitive is "Vincent Warmerdam: Gaussian Progress | PyData Berlin 2019". Even after watching several times, I feel that I'm still missing something fundamental, like how the conditioning on data works, or how to predict using multivariate features. A walkthrough of a real problem solved with Gaussian Process will be really helpful!

  • @yopedroaaron
    @yopedroaaron 3 years ago +2

    Thanks, Luis!

  • @ananyakapoor6851
    @ananyakapoor6851 3 years ago +1

    All your videos are fun to watch. Please continue making such high-quality content videos...👏

  • @nicholasteong2485
    @nicholasteong2485 3 years ago +1

    Thanks, Luis, for the awesome video about covariance. I would like to ask: at 10:56, shouldn't we divide by 1+1+1+(1/3)^2? The same applies to finding the covariance at 11:37; we should divide by 1+1+1+(1/3)^2, shouldn't we? Please correct me if I am wrong.

    • @anythingit2399
      @anythingit2399 2 years ago +2

      Yes, you are right. He added a comment for 10:56 but forgot about 11:37.

  • @ArduinoHocam
    @ArduinoHocam 2 years ago

    This is a great visualization and a perspective that everyone needs to know. To see the magic behind the scenes, visualization is the best way, as always...

  • @simonpinnock2310
    @simonpinnock2310 2 years ago +1

    Very clearly explained, well done and many thanks!

  • @jarojaros
    @jarojaros 5 months ago

    Thank you very much! ❤ Easily explained and everything included! :) 🎉

  • @play150
    @play150 8 months ago

    Amazing video!! Never felt like I understood it as well as I do now. Favorited!

  • @ash3844
    @ash3844 4 months ago

    Very clearly explained!! Thanks

  • @pjakobsen
    @pjakobsen 3 years ago +1

    Hi Luis, this is an excellent video, and I thank you for making it. At 3:42 you ask "Why square?", and your answer is so the negative numbers do not cancel out the positive numbers. By that logic, why not use absolute values and achieve the same effect? It turns out that the technical explanation for using squares seems hard to come by, and is not at all obvious. Perhaps you can do some digging and make another video on this? I found the discussions on the Cross Validated forum to be helpful.

  • @denzowillis2590
    @denzowillis2590 2 years ago

    Thanks a lot, sir. I am new to the BI world and have been struggling to understand these notions. Problem resolved today! Thanks a lot again.

  • @katyadimitrovapetrova9343
    @katyadimitrovapetrova9343 8 months ago

    absolutely brilliantly explained. thank you

  • @leandrogcosta
    @leandrogcosta 9 months ago

    Very good explanation

  • @harshitkapoor2977
    @harshitkapoor2977 2 years ago

    Thanks a lot! I was stuck on a concept in a research paper; this resolved many doubts.

  • @kolavithonduraski5031
    @kolavithonduraski5031 3 years ago +1

    It's nice when somebody who knows what he is talking about explains this... very nice, thank you.
    And for all the professors and teachers out there who can't explain this like this dude... quit your job 😑

  • @kho2333
    @kho2333 3 years ago

    Hi Luis, fantastic and clear video but quick question, at 11:01 are we sure we need to square the 1/3 weight?
    I don't follow the intuition of this as I'd assume having a single point weighted 1/3 would be equivalent to a situation where that point has a weight of 1 and the remaining three points have weights of 3 (said another way, imagine three of the points [(-0.4, 0.8), (1.6, 0.8), (-0.4, -1.2)] being duplicates three times and stacked upon one another). If this were the case we'd calculate the variance of 10 points of equal weights [-0.4, -0.4, -0.4, 1.6, 1.6, 1.6, -2.4, -0.4, -0.4, -0.4] with a value of 1.44
    Thank you in advance for any thoughts on this question and for making such a great video! I just pre-ordered your upcoming ML book this morning and subscribed to your channel.
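
The duplication intuition in this comment can be checked directly. Below is a small sketch (x-values taken from the thread; it uses the divide-by-the-sum-of-weights convention, which is exactly the convention under debate here, so treat it as one option rather than as the video's formula):

```python
import numpy as np

# x-values from the example discussed in this thread;
# the fourth point (x = -2.4) carries weight 1/3.
x = np.array([-0.4, 1.6, -0.4, -2.4])
w = np.array([1.0, 1.0, 1.0, 1.0 / 3.0])

# One common convention: weighted mean and variance are both
# normalized by the sum of the weights (not the squared weights).
mu = np.average(x, weights=w)
var = np.average((x - mu) ** 2, weights=w)

# Weight 1/3 on one point is then equivalent to tripling the other
# three points and keeping this one once, as the comment suggests.
x10 = np.repeat(x, [3, 3, 3, 1])
assert np.isclose(var, x10.var())  # both equal 1.44
```

Under this convention the commenter's value of 1.44 comes out exactly.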

  • @manoranjansahu7161
    @manoranjansahu7161 2 years ago +3

    Background music is annoying

  • @n1tr0-b9c
    @n1tr0-b9c 18 days ago

    Hi, at 11:23, why are you not subtracting the mean from the values before squaring them? The formula you showed earlier subtracts the mean.

  • @xiding2086
    @xiding2086 2 years ago

    thank you! got a new view on the variance and covariance

  • @faizaafri5245
    @faizaafri5245 1 year ago

    thanks for your explanations, it's very helpful for me!

  • @softpeachhy8967
    @softpeachhy8967 3 years ago +2

    Thanks for the explanation! I'm confused about the fractional, or weighted, points: what does it mean for a data point to have a fractional weight? Besides, at 8:58 isn't the covariance divided by (n-1)?

    • @andileaudry3332
      @andileaudry3332 2 years ago

      Yeah, I also think the variance and covariance must be the summation divided by (n-1).
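
For reference, both conventions exist: dividing by n gives the population (co)variance, and dividing by n-1 gives the unbiased sample estimate. A sketch with made-up data showing that the two differ only by the factor n/(n-1):

```python
import numpy as np

# Made-up data: four paired observations
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 3.0, 2.0, 4.0])
n = len(x)

C_pop = np.cov(x, y, ddof=0)  # population: divide by n
C_smp = np.cov(x, y, ddof=1)  # sample: divide by n - 1

# The two conventions differ only by a constant factor.
assert np.allclose(C_smp, C_pop * n / (n - 1))
```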

  • @mirabaltaji2306
    @mirabaltaji2306 3 years ago +1

    This is fantastic, Thank you!!

  • @NoahElRhandour
    @NoahElRhandour 3 years ago +1

    Mr Serrano have you considered opening a patreon or something like that?

  • @larissacury7714
    @larissacury7714 2 months ago

    Thank you!

  • @VidyaBhandary
    @VidyaBhandary 3 years ago

    Awesome ! Thank you so much. This is a very lucid explanation.

  • @willw4096
    @willw4096 1 year ago

    6:35 9:06

  • @hich4155
    @hich4155 3 years ago

    Thanks a lot, that's the best explanation I found. Keep up the good work 👍

  • @JOHNSMITH-ve3rq
    @JOHNSMITH-ve3rq 2 years ago

    Thanks so much for doing this dude

  • @ailanfarid7314
    @ailanfarid7314 3 years ago +2

    Pretty good Man

  • @Ihsan_almohsin
    @Ihsan_almohsin 2 years ago

    you are simply awesome

  • @shahulrahman2516
    @shahulrahman2516 3 months ago

    Thank you

  • @krupalshah1487
    @krupalshah1487 3 years ago

    Awesome video ! Thank you for making it.

  • @karanshamspirzadeh2385
    @karanshamspirzadeh2385 1 year ago

    Thank you very much, this helped me so much 👏👍

  • @willsonlaw1243
    @willsonlaw1243 2 years ago

    Thanks, very neat and clean

  • @cocoarecords
    @cocoarecords 3 years ago

    Luis, can you tell us your approach when learning a new thing or concept? Where do you head first: Google, YouTube, or textbooks?

  • @jonasmanuel
    @jonasmanuel 2 years ago

    Thank you, what a good explanation

  • @jonykhan4395
    @jonykhan4395 3 months ago

    Good video! Does a highly correlated feature in our dataset produce less ERROR when the regression is run? How do we determine whether a particular feature in our dataset should be considered or not?

  • @pontosinterligados
    @pontosinterligados 3 years ago

    Hi Luis!! Your videos are very nice for getting an intuitive grasp of what I am doing. I have watched your videos about covariance and PCA a few times now. Any plans for iterative closest point, relating two sets of points? Many thanks for the work!

  • @mr.p8766
    @mr.p8766 2 years ago

    thank you

  • @tianchangli4453
    @tianchangli4453 1 year ago

    Shouldn't the weight in the covariance calculation be 1/3 instead of (1/3)^2? Also, the coefficient shouldn't be 1/4 anymore in the weighted case.

  • @vjseomlol546
    @vjseomlol546 1 year ago

    Why is the denominator "4" in the example where alpha is not 1, when sigma(alpha^2) should be (0.33^2 + 1^2 + 1^2 + 1^2)?

  • @teodorbaraki5212
    @teodorbaraki5212 3 months ago

    "we should divide by 1+1+1+1/3, which is 10/3".
    Is it not squared as well? 1/(10/3)^2

  • @social.distancing
    @social.distancing 1 year ago

    The x value of the upper-right point in 11:04 turned magically from 1.6 to 2.6 in the next slide... isn't it a typo?

  • @moeinpourmand329
    @moeinpourmand329 2 years ago

    This video was very useful for me

  • @Oliver-cn5xx
    @Oliver-cn5xx 3 years ago

    If the data in the examples in the beginning weren't centered at 0, what would multiplying just the coordinates (without subtracting the mean) represent?

  • @seathru1232
    @seathru1232 3 years ago

    Thanks for this video, it's just perfect!

  • @narasimedu
    @narasimedu 2 years ago

    Hello sir, I don't understand why the weights are squared in the weighted covariance formula, and also in the numerator.

  • @iknowright9533
    @iknowright9533 6 months ago

    Please explain why we construct it, and why we construct it the way we do.

  • @mikoz8669
    @mikoz8669 3 years ago

    At 12:30, if the weighted covariance is divided by the sum of the ai squared, shouldn't it be (1/3)^2+1+1+1 = 28/9? Is it a typo?

  • @AJ-et3vf
    @AJ-et3vf 2 years ago

    Thank you very much for this video

  • @haneulkim4902
    @haneulkim4902 3 years ago

    Thanks Luis! Could you explain the reason behind shifting center of mass to the origin?

  • @abhishekjhanji3014
    @abhishekjhanji3014 6 months ago

    I think in the second part of the video the calculation of var(x), var(y), and covar(x,y) is not done correctly. It should not be divided by 4; instead it should be divided by the summation of the squares of the weights.

  • @vperez4796
    @vperez4796 2 years ago

    Excellent. I suppose variance is easier to explain in a two-dimensional space than in 4 dimensions. Is the variance over every point in space, time included? If the space is expanding anisotropically (let's say due to a gravity wave), could the variance have an extra term?

  • @masterjames4581
    @masterjames4581 3 years ago

    logic math is interesting when focusing on abstract viewpoints

  • @sourav8051
    @sourav8051 3 years ago

    Thanks. I am viewing your unlisted video! And I got the discount for the book; reading it on my Kindle before sleep is a blessing. I have a little query:
    1. What is meant by half/three-fourths of a point? The size of the point is virtual, which confused me at first. Now I get it: it is a portion of the vector associated with each point. Now the classic question arises: how many clusters? How do we know?
    2. It is similar to k-means, where we pick k cluster centres, and here we pick "k" Gaussian distributions. Can you please compare different clustering methods in one video: the limitations, changes, or similarities between them, how to overcome the limitations, and in which real-world situations each method should be used?

    • @SerranoAcademy
      @SerranoAcademy  3 years ago +3

      Thanks Sourav, great questions!
      1. In some algorithms like Gaussian mixture models, you need only a fraction of the point there. I imagine it as: if every point weighed 1 kg, then points are allowed to weigh any fraction of that. As for how many clusters, there are many heuristics, such as the elbow method, that work; you can find it here: th-cam.com/video/QXOkPvFM6NU/w-d-xo.html
      2. Yes, very similar to K-means. In k-means, we only update the mean, but in GMM we update mean, variances, and covariances. Also in k-means we have hard assignments (a point can belong to only one cluster), but in GMM (th-cam.com/video/q71Niz856KE/w-d-xo.html) we have soft assignments (this is why points can be split into several clusters, which goes back to question 1). In real life both are used, but there are times, for example in sound classification (telling voice apart from music and noise, etc), that the clusters really intersect, and you need a soft clustering algorithm to do a better job.
      Hope that helps! Let me know if there is anything else that needs clarification. :)

  • @kudvenkatarabic
    @kudvenkatarabic 3 years ago

    I hope you can make new lessons about generative modeling algorithms. They are promising for many fields. I have checked your GAN lesson, but I'm looking for more from an amazing teacher.

  • @meiyueliu1400
    @meiyueliu1400 2 years ago

    Thank you for the video. I have heard of the covariance matrix in the context of processing velocity-map imaging. I am finding it a bit hard to tie the information you have shared back to this context (how does constructing a covariance matrix help construct a covariance-map image?). Wonder if anyone can help me make the connection :)

  • @omar-kv9jb
    @omar-kv9jb 1 year ago

    Great ♥ But can anyone explain why we divided by (sum of a^2) instead of (sum of a) when we calculate the variance of weighted points?!

  • @shankerkc01
    @shankerkc01 2 years ago

    Awesome.

  • @keshavkumar7769
    @keshavkumar7769 3 years ago

    At 1:26 you say the variance in the y direction goes in the diagonal. Is that correct?

  • @OGIMxGaMeR
    @OGIMxGaMeR 2 years ago

    Does variance x = variance y imply a covariance of 0?
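
For what it's worth, equal variances do not force the covariance to zero; a quick counterexample with made-up data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = x.copy()  # identical to x, so var(y) = var(x) by construction

C = np.cov(x, y)
assert np.isclose(C[0, 0], C[1, 1])  # equal variances
assert np.isclose(C[0, 1], 1.0)      # covariance = var(x) = 1, not 0
```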

  • @v3d332
    @v3d332 2 years ago

    Good visuals, great teaching, best quality. Thank you! Your channel and StatQuest have been a huge help through understanding all this math. In this video, an extra example with a 3x3 or 4x4 covariance matrix would have been awesome, but I understand you might not have gone into it to simplify things (since 3D/4D)

  • @jineshr262
    @jineshr262 3 years ago

    You are Dope man, simply awesome

  • @amirrezajahantab
    @amirrezajahantab 1 month ago

    🙏🙏🙏

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago

    Curious why covariance could be related to information

  • @hungp.t.9915
    @hungp.t.9915 2 years ago

    at 4:10, shouldn't it be 8/3 instead of 8/4?

    • @hungp.t.9915
      @hungp.t.9915 2 years ago

      Oh wait, they have different formulas for the population (all) and a sample (part of the whole).

  • @tomgt428
    @tomgt428 3 years ago

    so cool

  • @PABLO-oq1kt
    @PABLO-oq1kt 3 years ago

    What type of software do you use to make these animations?

    • @SerranoAcademy
      @SerranoAcademy  3 years ago +1

      I use Keynote for the animations and iMovie for editing

  • @oxfordsculler8013
    @oxfordsculler8013 3 years ago

    Too many adverts. :-(