How to Use Learning Rate Scheduling for Neural Network Training

  • Published on Oct 31, 2024

Comments • 7

  • @lakeguy65616
    @lakeguy65616 1 year ago +3

    A learning rate that is too small can get trapped in a local minimum with no escape. A learning rate that is too large will never find the global minimum because it will keep "stepping" over it.
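
    The trade-off described in this comment is what learning rate scheduling tries to manage: start with a relatively large rate so training moves quickly, then decay it so the optimizer can settle into a minimum instead of stepping over it. A minimal, framework-free Python sketch of a step-decay schedule (the initial rate, drop factor, and step size below are illustrative assumptions, not values from the video):

        # Step-decay schedule: the learning rate is halved every `epochs_per_drop` epochs.
        # All constants here are placeholders; tune them for your own training run.
        def step_decay_lr(initial_lr, epoch, drop_factor=0.5, epochs_per_drop=10):
            return initial_lr * (drop_factor ** (epoch // epochs_per_drop))

        # Early epochs take big steps; later epochs take small, settling steps.
        for epoch in (0, 10, 20, 30):
            print(epoch, step_decay_lr(0.1, epoch))   # 0.1, 0.05, 0.025, 0.0125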

  • @rahiiqbal1294
    @rahiiqbal1294 7 months ago

    Thank you, these videos are very helpful :)

  • @토리-i5c
    @토리-i5c 7 months ago

    What a helpful video! Thank you! :)

  • @alexandergraf8855
    @alexandergraf8855 1 year ago +1

    Been tinkering for a day (and night lol)....
    Your training set has only 10 items and 10 classes assigned to it.
    You go through the list and make a prediction for every element. At first everything is predicted wrong, because your network is fresh off the shelf and doesn't know what you want from it.
    If an item is predicted wrong, backprop only once and then move on to the next item. Skip items that have been predicted correctly.
    For every item in the dataset there is a count that's incremented every time you have to backprop, and the learning rate is count * 0.9 (really). For correct predictions the count resets to 1.
    So....items with a low success rate get trained on every pass through the data, but at a monster rate, while successful items, or items that only need training occasionally, get a lower learning rate. (A code sketch of this scheme follows after this thread.)
    The overall process is done when no backprop happened inside the loop. 😘 And this is my criterion: it has to learn incredibly fast, at a high framerate, while doing all the other stuff like extracting unknown objects from my desktop screen, all things that Johnny-Boy has never seen before.😁
    Training should not be the bottleneck of AGI. I mean....am I really so clever (lol)? Why not run me on a 500 MHz computer? It should be doable 😎

    • @alexandergraf8855
      @alexandergraf8855 1 year ago

      Another phenomenon with a constant learning rate is that the last elements in the list finish training first, because the network seems to forget about the beginning of the list. It learns something and forgets the other items! That's not what I want!
      It seems that "dynamic monster learning", where the LR is constantly fluctuating, doesn't have as much trouble with forgetting, so it's more independent of the ordering of the list!

    • @ToroKaze
      @ToroKaze 2 months ago

      But if you've stuck with learning, you've probably discovered the answer by now, huh 😁
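
A minimal Python sketch of the per-item scheme described in the thread above: each item keeps a counter that grows on every miss and resets to 1 on a correct prediction, the per-item learning rate is count * 0.9, and training stops once a full pass needs no backprop. The toy data, the single-layer perceptron-style update standing in for backprop, and the safety cap on epochs are assumptions for illustration, not details from the comment or the video.

    import random

    # Toy setup: 10 items, 10 classes, as in the comment. Random features and
    # a linear model are illustrative assumptions.
    random.seed(0)
    NUM_ITEMS, NUM_CLASSES, NUM_FEATURES = 10, 10, 16
    data = [([random.gauss(0, 1) for _ in range(NUM_FEATURES)], label)
            for label in range(NUM_ITEMS)]

    # One weight vector per class; prediction = argmax of the dot products.
    weights = [[0.0] * NUM_FEATURES for _ in range(NUM_CLASSES)]

    def predict(x):
        scores = [sum(w_j * x_j for w_j, x_j in zip(w, x)) for w in weights]
        return max(range(NUM_CLASSES), key=lambda c: scores[c])

    # Per-item counters drive the per-item learning rate: lr = count * 0.9.
    count = [1] * NUM_ITEMS

    for epoch in range(1000):               # safety cap; not part of the original scheme
        any_backprop = False
        for idx, (x, y) in enumerate(data):
            pred = predict(x)
            if pred == y:
                count[idx] = 1              # correct: reset the counter and skip training
                continue
            lr = count[idx] * 0.9           # "monster" rate grows with repeated misses
            # A perceptron-style update stands in for backprop in this sketch.
            for j in range(NUM_FEATURES):
                weights[y][j] += lr * x[j]
                weights[pred][j] -= lr * x[j]
            count[idx] += 1
            any_backprop = True
        if not any_backprop:                # stop rule from the comment: a clean pass
            break

    print("all items correct:", all(predict(x) == y for x, y in data))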

  • @AguinaldoMulondeMulonde
    @AguinaldoMulondeMulonde 8 months ago

    you are so pretty..