7.4.2. Math Behind Lasso Regression

  • Published on Dec 1, 2024

Comments • 12

  • @extremexplorer8930
    @extremexplorer8930 6 months ago

    Best TH-cam channel to learn Machine learning concepts

  • @amirmoshkani8462
    @amirmoshkani8462 7 months ago

    I just love you. Thanks for the great explanation.

  • @udaykumarravada8820
    @udaykumarravada8820 3 years ago +1

    Nice video!! Great explanation

  • @SumitKumar-uq3dg
    @SumitKumar-uq3dg 7 months ago

    Hi Sid. How does lasso know whether a particular feature is important or not before eliminating it?

  • @sujathanagalakshimi4561
    @sujathanagalakshimi4561 1 year ago

    Good video, Siddarth. My question: does the loss function always reduce to a quadratic function in a multivariable system, or are higher-order polynomial cost functions possible, and how does lasso work in that case? A quadratic loss/cost function seems imperative for a linear regression model.

  • @veerababuk2930
    @veerababuk2930 3 years ago +1

    Please start Deep Learning, bro, as early as possible... eagerly waiting.

  • @sameerabanu3115
    @sameerabanu3115 8 months ago

    Sir, take a breath in between and deliver the content slowly; your videos have much value. Take it slow, sir.

  • @vijayendrasdm
    @vijayendrasdm 2 years ago +3

    Where exactly is the math in the math behind Lasso? I was expecting a rigorous mathematical proof of why lasso drags some of the feature coefficients to zero while ridge doesn't, and why lasso cannot be optimised by GD.

    • @ganeshnageswaran4808
      @ganeshnageswaran4808 2 years ago

      Have you got an answer to your question?

    • @makwanabhavin8089
      @makwanabhavin8089 2 years ago

      @@ganeshnageswaran4808 th-cam.com/video/XNd7SfG-_ho/w-d-xo.html
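A short sketch addressing the question in the thread above (not from the video itself; function names are illustrative). In one dimension, the lasso subproblem min_w 0.5*(w - z)^2 + λ|w| has a closed-form solution, the soft-thresholding operator, which pins any coefficient with |z| ≤ λ exactly to zero. The corresponding ridge subproblem only rescales z toward zero, so ridge never produces exact zeros. The kink of |w| at 0 is also why plain gradient descent is awkward for lasso, and coordinate descent or subgradient methods are used instead.

```python
def soft_threshold(z, lam):
    """Minimiser of 0.5*(w - z)**2 + lam*abs(w): the 1-D lasso update."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0  # the L1 kink at w = 0 sets small coefficients exactly to zero


def ridge_shrink(z, lam):
    """Minimiser of 0.5*(w - z)**2 + 0.5*lam*w**2: the 1-D ridge update."""
    return z / (1.0 + lam)  # rescales toward zero, never reaches it for z != 0


if __name__ == "__main__":
    for z in (2.0, 0.5, -0.3):
        print(f"z={z:+.1f}  lasso={soft_threshold(z, 1.0):+.2f}  "
              f"ridge={ridge_shrink(z, 1.0):+.2f}")
```

With λ = 1, the inputs 0.5 and -0.3 become exactly 0 under lasso but only shrink under ridge, which is the feature-elimination behaviour the commenter asks about.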