Deep Learning Foundations: Misha Belkin's Talk on deep learning through the prism of interpolation

  • Published Jan 4, 2025

Comments • 4

  • @yaram.bahram6715 · 2 years ago

    Very interesting remarks on generalization. Thank you for sharing this great lecture!

  • @jfjfcjcjchcjcjcj9947 · 2 years ago

    Thanks for the lecture!
    A couple of points would make the lecture more digestible if explained in a bit more depth. The first is the definition of interpolation: different people mean different things by the term. I presume that by interpolation Misha means the ability of an estimator to achieve zero training error while still not overfitting to the held-out test set?
    The second aspect that might need more explanation is @40:05, where Misha describes the difference between interpolation in accuracy and interpolation in the l2-norm. I presume that l2-norm interpolation means that two estimators with the same l2-norm on their parameters exhibit the same generalisation capabilities, is that right?
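    (Editor's note: the commenter's first reading of interpolation, an estimator that fits the training data exactly yet still generalizes, can be illustrated with a minimal stdlib-only sketch. A 1-nearest-neighbour classifier always interpolates its training set, since every training point is its own nearest neighbour. The toy two-cluster data below is hypothetical, purely for illustration.)

    ```python
    import random

    def nn1_predict(train, x):
        # Label of the closest training point (Euclidean distance in 1-D).
        return min(train, key=lambda p: abs(p[0] - x))[1]

    random.seed(0)
    # Hypothetical data: two well-separated clusters, label 0 near 0.0, label 1 near 5.0.
    train = [(random.gauss(0, 0.5), 0) for _ in range(20)] + \
            [(random.gauss(5, 0.5), 1) for _ in range(20)]
    test  = [(random.gauss(0, 0.5), 0) for _ in range(20)] + \
            [(random.gauss(5, 0.5), 1) for _ in range(20)]

    # 1-NN interpolates: each training point is its own nearest neighbour,
    # so the training error is exactly zero.
    train_err = sum(nn1_predict(train, x) != y for x, y in train) / len(train)
    # Yet it can still predict well on held-out points.
    test_err = sum(nn1_predict(train, x) != y for x, y in test) / len(test)
    print(train_err)  # 0.0: zero training error (interpolation)
    print(test_err)   # small: no catastrophic overfitting on this easy problem
    ```

    The point of the sketch is only that "fits the training data exactly" and "generalizes" are not mutually exclusive, which is the tension Belkin's talk examines for deep networks.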

  • @alexisfernandez8052 · 2 years ago

    I can show you how! Let me know!

  • @TheMaP142 · 2 years ago

    Michael