13.3.1 L1-regularized Logistic Regression as Embedded Feature Selection (L13: Feature Selection)

  • Published Dec 21, 2024

Comments • 5

  • @amrelsayeh4446 · 5 months ago

    @sebastian At 13:20, why does the solution between the global minimum and the penalty minimum lie somewhere where one of the weights is zero? In other words, why should it lie at a corner of the penalty function rather than just somewhere on the line between the global minimum and the penalty minimum?

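A standard way to see why the penalized solution lands at a corner (one weight exactly zero) is the one-dimensional soft-thresholding argument; the following is a sketch under the usual quadratic approximation of the loss, not a derivation from the lecture. Restricting attention to a single coordinate whose unpenalized optimum is $w^{*}$:

\[
  \hat{w} \;=\; \arg\min_{w}\; \tfrac{1}{2}\,(w - w^{*})^{2} + \lambda \lvert w \rvert
  \;=\;
  \begin{cases}
    0, & \lvert w^{*} \rvert \le \lambda,\\[2pt]
    w^{*} - \lambda \operatorname{sign}(w^{*}), & \lvert w^{*} \rvert > \lambda,
  \end{cases}
\]

because the subgradient of $\lvert w \rvert$ at zero is the whole interval $[-1, 1]$ and can absorb any loss gradient of magnitude up to $\lambda$. So for a whole range of positions of the global minimum, the smaller coordinate is thresholded exactly to zero; in two dimensions that solution is a corner of the L1 diamond, not an interior point of the edge between the two minima.
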
  • @AyushSharma-jm6ki · 1 year ago

    @sebastian Amazing video, thanks for sharing. I am getting a deeper understanding of these topics with your videos.

  • @arunthiru6729 · 2 years ago +3

    @sebastian I think using Logistic Regression directly for feature selection based on the respective weights/coefficients means we are assuming all dimensions/features are independent. I understand this is not the correct way to do this. Please advise.

    • @SebastianRaschka · 2 years ago +1

      Yes, this assumption is correct. ML is full of trade-offs 😅. If you cannot make this assumption, I recommend the sequential feature selection approach (see the sketch after this thread).

    • @wayne7936 · 7 months ago

      Thanks for pointing this out!
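
Below is a minimal sketch contrasting the two approaches discussed in this thread: embedded selection via L1-regularized logistic regression versus the sequential (wrapper) approach recommended above. It assumes scikit-learn; the dataset and hyperparameters are illustrative choices, not taken from the lecture.

import numpy as np
from sklearn.datasets import load_wine
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Illustrative dataset; any numeric (X, y) would do.
X, y = load_wine(return_X_y=True)
X = StandardScaler().fit_transform(X)  # L1 penalties are scale-sensitive

# Embedded selection: the L1 penalty drives some coefficients exactly to
# zero; a smaller C means a stronger penalty and hence more zeros.
l1_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
l1_model.fit(X, y)
kept_l1 = np.flatnonzero(np.any(l1_model.coef_ != 0, axis=0))
print("features kept by L1:", kept_l1)

# Wrapper alternative: forward sequential feature selection scores
# candidate subsets jointly via cross-validation instead of reading off
# individual coefficients, so correlated features are judged as a group.
sfs = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    n_features_to_select=5,
    direction="forward",
    cv=5,
)
sfs.fit(X, y)
print("features kept by SFS:", np.flatnonzero(sfs.get_support()))

With strongly correlated features, the L1 model tends to keep one feature from a correlated group and zero out the rest somewhat arbitrarily, whereas the sequential search evaluates subsets jointly, which is why it is less sensitive to the independence assumption raised above. Sebastian's mlxtend library also provides a SequentialFeatureSelector with a similar interface.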