Dynamic Deep Learning | Richard Sutton

  • Published Dec 16, 2024

Comments • 6

  • @williamjmccartan8879 · 28 days ago · +5

    Good presentation on the current state of RL and its limitations in continual learning. It is very relevant to what we are hearing about the dynamics across the large LLM companies, regardless of the information they are able to acquire. It sounds as though, rather than relying on static information, they need more human engagement in their processes. Thank you very much, Professor Sutton, for sharing your time, work, experience, and knowledge. Cheers!

  • @CemlynWaters · 26 days ago

    Thank you, Richard Sutton, for giving this talk; very interesting! Also thanks to the ICARL team for setting up this presentation!

  • @Crack-tt2dh · 21 days ago

    Regarding the period from 27:00 to 29:00: I personally believe it is not so much about slow learning as about slow forgetting. The red line, because of its high learning rate, forgets previous tasks, which causes a sharp decline in accuracy. In contrast, the yellow and brown lines, which forget more slowly, see less impact on accuracy. Solving the forgetting problem might be related to the scale of the neural network.
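    The learning-rate/forgetting trade-off this comment describes can be sketched with a toy example (my own illustration, not code from the talk): a single parameter is first fit to "task A", then trained only on "task B". A high learning rate races toward task B's optimum and task A performance collapses quickly; a low learning rate drifts slowly, i.e. forgets slowly.

    ```python
    # Toy illustration of catastrophic forgetting vs. learning rate.
    # The "model" is one scalar w with quadratic loss per task; task A's
    # optimum is w = 1.0, task B's optimum is w = -1.0.

    def loss(w, target):
        return (w - target) ** 2

    def grad(w, target):
        return 2.0 * (w - target)

    def task_a_loss_after_task_b(lr, steps=10):
        w = 1.0                      # start fully trained on task A
        for _ in range(steps):       # then train only on task B
            w -= lr * grad(w, -1.0)
        return loss(w, 1.0)          # how badly do we now do on task A?

    high_lr_forgetting = task_a_loss_after_task_b(lr=0.4)
    low_lr_forgetting = task_a_loss_after_task_b(lr=0.01)
    print(high_lr_forgetting > low_lr_forgetting)  # → True
    ```

    With the high rate, w converges to task B's optimum within a few steps and task A's loss approaches 4; with the low rate, w barely moves, so the accuracy drop on earlier tasks is small but new-task learning is correspondingly slow.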

  • @DanielKang-t6v · 23 days ago · +1

    Thanks for the wonderful lesson!

  • @webgpu · months ago · +1

    Thank you very much! Great topic discussed in this presentation 🍻

  • @tylermoore4429 · 20 days ago · +2

    Why are all the questions like, "What is the advantage of continual learning over frozen models?" That is an extraordinarily dumb question. What am I missing?