6. L1 & L2 Regularization

  • Published Nov 18, 2024
  • We introduce "regularization", our main defense against overfitting. We discuss the equivalence of the penalization and constraint forms of regularization (see Hwk 4 Problem 8 for a precise statement). We compare the regularization paths of L1- and L2-regularized linear least squares regression (i.e., "lasso" and "ridge" regression, respectively), and give a geometric argument for why lasso often gives "sparse" solutions. Finally, we present "coordinate descent", our second major approach to optimization. When applied to the lasso objective function, coordinate descent takes a particularly clean form and is known as the "shooting algorithm".
    Access the full course at bloom.bg/2ui2T4q
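The shooting algorithm mentioned above can be sketched as follows. This is a minimal illustration, not the course's own code: it minimizes ||Xw - y||² + λ||w||₁ by cycling through coordinates, where each one-dimensional subproblem has a closed-form solution given by soft-thresholding. The function names (`soft_threshold`, `lasso_shooting`) and the fixed iteration count are choices made for this sketch.

```python
import numpy as np

def soft_threshold(rho, lam):
    # Closed-form solution of the 1-D lasso subproblem:
    # shrink rho toward zero by lam, clipping at zero.
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_shooting(X, y, lam, n_iter=100):
    """Coordinate descent ("shooting") for min_w ||Xw - y||^2 + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)  # precompute ||x_j||^2 for each column
    for _ in range(n_iter):
        for j in range(d):
            # Residual with feature j's current contribution removed.
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            # Setting the subgradient to zero gives a soft-thresholded update.
            w[j] = soft_threshold(rho, lam / 2) / col_sq[j]
    return w
```

With an orthonormal design (e.g., `X = np.eye(3)`) the update decouples and each weight is just the soft-thresholded target, which makes the sparsity-inducing effect of the L1 penalty easy to see: small coefficients are driven exactly to zero rather than merely shrunk, as in ridge regression.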
