Principal Component Analysis (PCA) 1 [Python]

  • Published Dec 29, 2024

Comments • 27

  • @NiKogane • 2 years ago

    Thank you so much for providing all of this knowledge online for free!

  • @yenunadeesaselviento • 4 years ago +4

    The code cuts off at the edge of the video. Where can we download it? Thanks for sharing this!

  • @EladM8a • 4 years ago +7

    Why the division in B/np.sqrt(nPoints)?

    • @anirbanbhattacharjee8093 • 1 year ago

      In the PCA literature, the covariance matrix (B*)B is normalized by nPoints (or by the Bessel correction, nPoints - 1, but that doesn't matter here because nPoints is large). So if you normalize B by np.sqrt(nPoints), then B* also gets normalized by np.sqrt(nPoints), and you end up getting C normalized by nPoints.

    • @anirbanbhattacharjee8093 • 1 year ago

      where C = (B*)B, and B* is the transpose of B
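
A minimal sketch of that normalization, following the comment's convention that C = (B*)B with the rows of B as mean-centered data points (the video itself keeps points in columns, which works the same way; the toy data here is illustrative). The squared singular values of B/np.sqrt(nPoints) match the eigenvalues of the nPoints-normalized covariance:

    import numpy as np

    rng = np.random.default_rng(0)
    nPoints = 1000
    X = rng.normal(size=(nPoints, 2)) * np.array([2.0, 0.5])   # toy data, rows = points
    B = X - X.mean(axis=0, keepdims=True)                      # mean-center each feature

    C = (B.T @ B) / nPoints                  # covariance normalized by nPoints

    # (B/sqrt(n))* (B/sqrt(n)) = (B*)B / n, so the 1/nPoints factor lands on C.
    U, S, VT = np.linalg.svd(B / np.sqrt(nPoints), full_matrices=0)

    # Squared singular values equal the eigenvalues of C.
    print(np.allclose(np.sort(S**2), np.sort(np.linalg.eigvalsh(C))))   # True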

  • @jbhsmeta • 4 years ago +3

    Hi Mr. Steve,
    I have one question: why are you dividing B by np.sqrt(nPoints) in
    U, S, VT = np.linalg.svd(B/np.sqrt(nPoints), full_matrices=0)
    Dividing the mean-centered data by the square root of the number of data points? I could not understand.

    • @melvinlara6151 • 4 years ago

      Actually, I have the exact same question. Did you figure it out?

    • @JoaoVitorBRgomes • 4 years ago +4

      @@melvinlara6151 I haven't seen the whole lecture yet, but I guess B is the data with mean 0, and np.sqrt(nPoints) is probably related to the standard deviation (the square root of the variance). So he first standardizes the data, then applies the SVD...

    • @melvinlara6151 • 4 years ago

      @@JoaoVitorBRgomes Hey! Actually, I figured the same thing out. But thank you!

    • @JoaoVitorBRgomes • 4 years ago

      @@melvinlara6151 No problem, Melvin Lara, I am a data science student too. If you have a Kaggle profile and want to exchange knowledge, my alias is "topapa".

    • @anirbanbhattacharjee8093 • 1 year ago

      In the PCA literature, the covariance matrix (B*)B is normalized by nPoints (or by the Bessel correction, nPoints - 1, but that doesn't matter here because nPoints is large). So if you normalize B by np.sqrt(nPoints), then B* also gets normalized by np.sqrt(nPoints), and you end up getting (B*)B normalized by nPoints.
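
As a quick check of the Bessel-correction aside: NumPy's np.cov divides by nPoints - 1 by default, while the sqrt(nPoints) trick divides by nPoints, and for nPoints = 1000 the two agree to about one part in a thousand. A sketch with toy data, points in columns as in the video:

    import numpy as np

    rng = np.random.default_rng(1)
    nPoints = 1000
    X = rng.normal(size=(2, nPoints))        # 2 features x 1000 points
    B = X - X.mean(axis=1, keepdims=True)    # mean-center each row

    C_n  = (B @ B.T) / nPoints               # normalized by nPoints
    C_n1 = np.cov(X)                         # NumPy default: nPoints - 1

    print(np.max(np.abs(C_n - C_n1)))        # tiny: on the order of C / nPoints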

  • @tomlane6590 • 3 years ago

    A brilliant set of videos. Thank you so much.

  • @muhammadmuneeburrahman1262 • 3 years ago

    You said in the video that each row of X represents an example/record and each column represents a feature, but in your code X.shape = (2, 1000), where each column represents one data point, and B is passed to the SVD with the same shape. Hence the VT matrix has size (2, 1000), which would mean there are 1000 principal components; that is not possible for 2-D data? Am I right or wrong? Please explain.
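
For reference, a shape check under the assumption that X is 2 x 1000 with columns as data points, as in the question:

    import numpy as np

    rng = np.random.default_rng(2)
    nPoints = 1000
    X = rng.normal(size=(2, nPoints))
    B = X - X.mean(axis=1, keepdims=True)

    U, S, VT = np.linalg.svd(B / np.sqrt(nPoints), full_matrices=0)
    print(U.shape, S.shape, VT.shape)        # (2, 2) (2,) (2, 1000)

With the economy SVD there are only two principal components for 2-D data: the two columns of U are the principal directions, and each of VT's two rows holds the 1000 per-point coefficients along one of those directions, so VT's 1000 columns are not 1000 components.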

  • @subramaniannk3364 • 4 years ago +1

    Great lecture Steve!
    You explained that "u" in the SVD represents the principal directions and "sigma" represents the loadings. What does "v" represent?

    • @sambroderick5156 • 3 years ago +1

      There's a whole series of lectures explaining this (and a book).
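
One way to see the role of V: since B = U @ diag(S) @ VT with orthonormal U, projecting the data onto the principal directions gives U.T @ B = diag(S) @ VT. So each row of VT holds the unit-norm coordinates of every data point along one principal direction, and S scales them. A sketch, assuming features in rows as in the video:

    import numpy as np

    rng = np.random.default_rng(3)
    B = rng.normal(size=(2, 500))
    B -= B.mean(axis=1, keepdims=True)           # mean-centered data

    U, S, VT = np.linalg.svd(B, full_matrices=0)
    scores = U.T @ B                             # data in principal coordinates
    print(np.allclose(scores, np.diag(S) @ VT))  # True: scores = S * VT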

  • @sheiladespard8861 • 3 years ago +1

    I tried to download the code from the website, but the Python code folder includes only Matlab code :(

    • @NiKogane • 2 years ago

      Hi, it was corrected - I downloaded it today!

  • @kanacaredes • 3 years ago +1

    Excellent video!!! Thanks!

    • @Eigensteve • 3 years ago

      You are welcome!

  • @nguyenvan-hau9577 • 4 years ago

    Beautiful code!

  • @1PercentPure • 1 year ago

    i kneel...

  • @charlespatterson8412 • 4 years ago +1

    I would prefer to do this in my head because I can visualize it and move it around. I am not a mathematician, but many of these are terms for things I am already familiar with. Perhaps I should have kept my TRS80 and taken Bill's class at Juanita High. I decided to concentrate on 'Salmon Enhancement' and 'European History' instead. It's probably just as well; I find writing code quite boring because I am more into concepts... "Keep up the good work!"

    • @saitaro • 4 years ago

      Math is all about concepts. And how would you visualize something in more than 3 dimensions?

    • @charlespatterson8412 • 4 years ago

      @@saitaro Extrapolation

    • @user-iiii234a5gc • 4 years ago +2

      Add a time term? Or do 4 or more dimensions exist only as a theoretical expression?

    • @yaseenmohammad9600 • 4 years ago +3

      This technique is generally used when there are large amounts of high-dimensional data, as in image processing. For example, if you take 50 (50*50) images, you get 50 x 2500 dimensional data, resulting in a 2500*2500 covariance matrix, where PCA is used to extract eigenfaces. Now, I don't think there are people who can solve the eigenvalue equation for a 2500*2500 matrix in their head (see the sketch at the end of this thread).

    • @charlespatterson8412 • 4 years ago

      @@yaseenmohammad9600 Maybe if the variables are 'round' enough I could 'take a shot' at it...
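
A sketch of the eigenfaces point above: 50 images of 50x50 pixels flatten to a 50 x 2500 data matrix, and the economy SVD yields at most 50 "eigenface" directions without ever forming the 2500 x 2500 covariance matrix (random data stands in for real face images here):

    import numpy as np

    rng = np.random.default_rng(4)
    nImages, h, w = 50, 50, 50
    faces = rng.normal(size=(nImages, h * w))      # each row: one flattened image
    B = faces - faces.mean(axis=0, keepdims=True)  # mean-center each pixel

    U, S, VT = np.linalg.svd(B, full_matrices=0)   # economy SVD: no 2500x2500 work
    eigenfaces = VT.reshape(-1, h, w)              # rows of VT, back to 50x50 images
    print(U.shape, S.shape, eigenfaces.shape)      # (50, 50) (50,) (50, 50, 50)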