Mathematical objective function of PCA : Dimensionality reduction Lecture 13@Applied AI Course

  • Published Nov 14, 2024

Comments • 12

  • @humblesoul8685 · 1 year ago

    Projection is a vector, not a scalar, right..... I think there's a slip of the tongue...
    Beautiful lecture :)

  • @cypherecon5989 · several months ago

    At 3:09 you say "we learned". Which video do you mean?

  • @RahulSingh-ex4uh · 3 years ago +1

    How can xi be (n×1)? It should be (n×d), as you project the point onto u1.

  • @chinmaymaganur7133 · 4 years ago +2

    What is column standardization, and how is it equal to 0?

  • @saurabhchaturvedi2643 · 6 years ago +4

    At 3:03, xi' should be ((ui) · (xi)) / ||ui||; however it shows ||ui||^2,
    as taught in the projection and unit vector lecture at 2:24.
    Link: th-cam.com/video/fbrMJbMcGoA/w-d-xo.html
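The scalar-vs-vector and ||u|| vs ||u||² points raised in these comments can be checked numerically. A minimal NumPy sketch (the vectors are made up for illustration, not taken from the lecture):

```python
import numpy as np

# Minimal sketch of the projection formulas debated in the comments.
x = np.array([3.0, 4.0])
u = np.array([2.0, 0.0])          # deliberately NOT a unit vector: ||u|| = 2

# Scalar projection (the length of x along u) divides by ||u||:
proj_len = np.dot(u, x) / np.linalg.norm(u)        # -> 3.0

# The projection *vector* divides by ||u||^2 (i.e. u . u):
proj_vec = (np.dot(u, x) / np.dot(u, u)) * u       # -> [3.0, 0.0]

# With a unit vector u1, both denominators are 1, so x' = u1 . x:
u1 = u / np.linalg.norm(u)
x_prime = np.dot(u1, x)                            # -> 3.0
```

So both formulas are consistent once u is a unit vector, which is the convention PCA uses for the direction u1.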

  • @tumusaikarthik6632 · 3 years ago

    After column standardization the variance will also become 1, so how can we maximize the variance term?

    • @AppliedAICourse · 3 years ago

      We standardise each feature, or axis, of the data. The maximum-variance direction need not be axis-parallel: often it is a linear combination of the features, and hence points in a direction not parallel to any axis.
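The reply above can be sketched on made-up correlated data: after column standardization every individual axis has variance 1, yet the maximum-variance direction (a mix of features) carries variance greater than 1. This is an illustrative example, not the lecture's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
f1 = rng.normal(size=n)
f2 = f1 + 0.3 * rng.normal(size=n)      # feature strongly correlated with f1
X = np.column_stack([f1, f2])

# Column standardization: per-feature mean 0, variance 1.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

cov = np.cov(Xs, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
u1 = eigvecs[:, -1]                     # direction of maximum variance

# u1 is roughly [0.71, 0.71] -- not parallel to either axis -- and the
# variance along it (the top eigenvalue) is close to 2, not 1.
```

Standardization only pins the variance of each *axis* to 1; the variance along an arbitrary direction is still free to be maximized.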

  • @tejaswilakshmi4217 · 5 years ago

    Why do you maximise variance?

    • @fahimshahriar3793 · 5 years ago

      To get maximum information.

    • @aashishmalhotra · 2 years ago

      Also, variance refers to variability. By maximising it you retain diverse data.
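The "variance = information" intuition in this thread can be made concrete: on made-up 2-D data, the direction that retains more variance also loses less to reconstruction error when points are projected onto it and back. The data and directions below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])  # stretched axes
X = X - X.mean(axis=0)                                # mean-center

def retained_variance(u):
    """Variance of the data projected onto unit vector u."""
    return np.var(X @ u)

def reconstruction_error(u):
    """Mean squared error after projecting onto u and back."""
    X_hat = np.outer(X @ u, u)
    return np.mean(np.sum((X - X_hat) ** 2, axis=1))

u_high = np.array([1.0, 0.0])   # high-variance direction
u_low = np.array([0.0, 1.0])    # low-variance direction

# retained_variance(u_high) is ~9 with error ~0.25; u_low reverses this.
# For any unit u, retained variance + error = total variance, so
# maximizing variance is the same as minimizing information loss.
```

This identity (retained variance plus reconstruction error is constant) is why the variance-maximization and minimum-reconstruction-error views of PCA pick the same direction.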