Lenka Zdeborová - Statistical Physics of Machine Learning (May 1, 2024)

  • Published Nov 21, 2024

Comments •

  • @SSinse
    @SSinse 4 months ago +1

    Pleasure to listen

  • @atabac
    @atabac 6 months ago +6

    Wow, if all teachers explained things like she does, complexities would be simplified.

  • @kevon217
    @kevon217 6 months ago +3

    Excellent talk. Love the connections and insights.

  • @ozachar
    @ozachar 6 months ago +3

    As a physicist, but a non-expert in AI, viewer: very interesting insights. Over-parameterization (size) "compensates" for a sub-optimal algorithm. It is also non-trivial that it doesn't lead to getting stuck fitting the noise. Organic neural brains (human or animal) obviously don't need so much data, and are also actually not that large in number of parameters (if I am not mistaken). So for sure there is room for improvement in the algorithm and structure, which is exactly her direction of research. A success there would be very impactful.

    • @nias2631
      @nias2631 6 months ago

      FWIW, if you consider a brain's neurons as analogs to neurons in an ANN, then the human brain, at least, has far more complexity. Geoffrey Hinton points out that the mechanism of backprop (the chain rule) for adjusting parameters is far more efficient than biological organisms in its ability to store patterns.

    • @nias2631
      @nias2631 6 months ago

      That efficiency is what worries him, and it also points to the need for a definition of sentience arising under learning mechanisms different from our own.

  • @theK594
    @theK594 6 months ago +1

    Fantastic lecture! Very clear and well structured! Thank you, díky (Czech: "thanks") 🇨🇿!

  • @shinn-tyanwu4155
    @shinn-tyanwu4155 6 months ago +2

    You will be a good mother, please make many babies 😊😊😊

  • @forcebender5079
    @forcebender5079 6 months ago +5

    To understand the black box inside machine learning, we will have to rely on more advanced AI: let the more advanced AI, in turn, analyze the black box and crack its mechanisms. Trying to understand the black box's inner workings through human effort alone, as we do now, is impossible.

    • @jiadong7873
      @jiadong7873 6 months ago

      huh?