Why Recurrent Neural Networks are cursed | LM2

  • Published on 22 Dec 2024

Comments • 28

  • @vcubingx
    @vcubingx  8 months ago +11

    If you enjoyed the video, please consider subscribing!
    Part 3! th-cam.com/video/lOrTlKrdmkQ/w-d-xo.html
    A small mistake I _just_ realized is that I say trigram/3-gram for the neural language model when I have 3 words as input, but it's a 4-gram model, not a 3-gram, since I'm considering 4 words at a time (including the output word). Hopefully that didn't confuse anyone!
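
    For readers following along: the n of an n-gram model counts the predicted word as well as the context, so conditioning on 3 previous words makes it a 4-gram model. A minimal count-based sketch in Python (the toy corpus and names below are invented for illustration, not taken from the video):

    ```python
    from collections import Counter, defaultdict

    # A 4-gram model: condition on 3 previous words, predict the 4th.
    corpus = "the cat sat on the mat because the cat was tired".split()

    n = 4                                      # 4-gram = 3 context words + 1 predicted word
    counts = defaultdict(Counter)
    for i in range(len(corpus) - n + 1):
        context = tuple(corpus[i:i + n - 1])   # 3-word context
        nxt = corpus[i + n - 1]                # the 4th word
        counts[context][nxt] += 1

    # Predict the most frequent continuation of a 3-word context.
    context = ("the", "cat", "sat")
    if counts[context]:
        print(counts[context].most_common(1)[0][0])   # -> "on"
    ```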

  • @l16h7code
    @l16h7code 8 months ago +15

    Please keep making these machine learning videos. Animations are all we need. They make it 10x easier for me to understand the concepts.

    • @vcubingx
      @vcubingx  8 months ago +4

      Thanks! I'll try my best to :)

    • @ShashankBhatta
      @ShashankBhatta 7 months ago +2

      Isn't"attention all we need"

    • @aero-mk9ld
      @aero-mk9ld 7 months ago +1

      @@ShashankBhatta

    • @eellauu
      @eellauu 4 months ago +1

      @@ShashankBhatta ahahhahaha, nice one

  • @vivekrai1974
    @vivekrai1974 1 month ago +1

    8:25 Shouldn't y_t be equal to σ(U·h(t-1) + K·x(t) + b_k), where K is the matrix for transforming x(t)?
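
    For reference, the standard (Elman) recurrent update does use a separate weight matrix for the input, which seems to be what the commenter is pointing at; whether the slide at 8:25 matches this can't be checked from this transcript. A minimal NumPy sketch of one step, with sizes and symbol names chosen here purely for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d_h, d_x, d_y = 4, 3, 2          # hidden, input, output sizes (arbitrary)

    U = rng.normal(size=(d_h, d_h))  # recurrent weights (h_{t-1} -> hidden)
    W = rng.normal(size=(d_h, d_x))  # input weights (x_t -> hidden), the "K" in the comment
    V = rng.normal(size=(d_y, d_h))  # output weights (hidden -> y_t)
    b_h, b_y = np.zeros(d_h), np.zeros(d_y)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def rnn_step(h_prev, x_t):
        # h_t = tanh(U h_{t-1} + W x_t + b_h); y_t is read off h_t,
        # rather than being computed directly from x_t.
        h_t = np.tanh(U @ h_prev + W @ x_t + b_h)
        y_t = sigmoid(V @ h_t + b_y)
        return h_t, y_t

    h = np.zeros(d_h)
    for x in rng.normal(size=(5, d_x)):   # a toy sequence of 5 input vectors
        h, y = rnn_step(h, x)
    print(y)
    ```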

  • @tomtom5821
    @tomtom5821 4 months ago +3

    I had so many 'aha' moments in this video I lost count! I'm convinced that it is possible to learn any concept, if it's broken down into its simplest components.

  • @drdca8263
    @drdca8263 8 months ago +3

    I sometimes wonder how well it would work to take something that was mostly an n-gram model, but which added something that was meant to be like, a poor man’s approximation of the copying heads that have been found in transformers.
    So, like, in addition to looking at "when the previous (n-1) tokens were like this, how often did each possible next token follow?" as in an n-gram model, it would also look at "previously in this document, did the previous token appear, and if so, what followed it?", and "in the training data set, for the previous few tokens, how often did this kind of copying strategy do well, and how often did the plain n-gram strategy do well?", to weight between those.
    (Oh, and also maybe throw in some “what tokens are correlated just considering being in the same document” to the mix.)
    I imagine that this still wouldn't even come *close* to GPT-2, but I do wonder how much better it could be than plain n-grams.
    I’m pretty sure it would be *very* fast at inference time, and “training” it would consist of just doing a bunch of counting, which would be highly parallelizable (or possibly counting and then taking a low-rank decomposition of a matrix, for the “correlations between what tokens appear in the same document” part)
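
    A rough sketch of the blend this comment describes, with the class name, fixed mixing weight, and toy data all invented here for illustration (this is not from the video, and it omits the "same-document correlations" part):

    ```python
    from collections import Counter, defaultdict

    class NgramWithCopy:
        """Toy mix of an n-gram model and a crude in-context 'copying' rule."""

        def __init__(self, n=3, copy_weight=0.3):
            self.n = n
            self.copy_weight = copy_weight      # how much to trust the copy rule
            self.counts = defaultdict(Counter)  # (n-1)-token context -> next-token counts

        def train(self, tokens):
            # "Training" is just counting, so it parallelizes trivially.
            for i in range(len(tokens) - self.n + 1):
                ctx = tuple(tokens[i:i + self.n - 1])
                self.counts[ctx][tokens[i + self.n - 1]] += 1

        def predict(self, prefix):
            scores = Counter()
            # n-gram component: counts for the last n-1 tokens of the prefix
            ctx = tuple(prefix[-(self.n - 1):])
            total = sum(self.counts[ctx].values())
            for tok, c in self.counts[ctx].items():
                scores[tok] += (1 - self.copy_weight) * c / total
            # copy component: what followed the previous token earlier in this document?
            prev = prefix[-1]
            followers = Counter(prefix[i + 1] for i in range(len(prefix) - 1)
                                if prefix[i] == prev)
            for tok, c in followers.items():
                scores[tok] += self.copy_weight * c / sum(followers.values())
            return scores.most_common(1)[0][0] if scores else None

    model = NgramWithCopy(n=3)
    model.train("the cat sat on the mat and the cat ran".split())
    print(model.predict("the dog sat on the".split()))   # -> "mat"
    ```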

    • @vcubingx
      @vcubingx  8 months ago +1

      I think you've gained a key insight, that the approximation does indeed work. I mean heck, if I was only generating two words, a bigram model would be pretty good too.
      I remember seeing a paper showing that GPT-2 itself has learnt a bigram model internally. Given this, it might be fair to say that what you're describing could even be what today's LLMs learn under the hood. I think your description is great though, as it's an interpretable way to see how models make predictions. Maybe a future line of research!

  • @varunmohanraj5031
    @varunmohanraj5031 8 months ago

    So insightful ‼️

  • @ZalexMusic
    @ZalexMusic 8 months ago +2

    Outstanding work, this series is required LM viewing now, like 3b1b. Also, are you from Singapore? That's the only way I can reconcile good weather meaning high temperature and high humidity 😂

    • @David-gn5rp
      @David-gn5rp 6 months ago

      I'm 90% sure the accent is Indian.

  • @calix-tang
    @calix-tang 8 months ago +4

    Incredible job mfv, I look forward to seeing more videos

    • @vcubingx
      @vcubingx  8 months ago

      More to come!

  • @1XxDoubleshotxX1
    @1XxDoubleshotxX1 8 months ago +1

    Oh yes Vivek Vivek omg yes

  • @usama57926
    @usama57926 8 months ago

    Nice video

    • @vcubingx
      @vcubingx  8 months ago

      Thanks!

  • @VisibilityO2
    @VisibilityO2 8 months ago +5

    I'm not criticizing all your hard work, but at some points things got muddled, like at 7:54, where you computed `Ht` without explaining the weighted sum, and you could have named `Backpropagation Through Time` in the video.
    Also, you could introduce "gated cells" in LSTMs: Long Short-Term Memory networks most often rely on a gated cell to track information across many time steps.
    And an activation function like 'sigmoid' could be replaced by 'ReLU', which packages like TensorFlow also prefer in their documentation.
    But honestly, you've created a good intermediate class for learning recurrence.
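
    For readers curious what the "gated cell" mentioned above looks like, here is a minimal NumPy sketch of a single LSTM step in the standard formulation; the sizes and random initialisation are arbitrary and only for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d_h, d_x = 4, 3                      # hidden and input sizes (arbitrary)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One weight matrix per gate, acting on [h_{t-1}; x_t] concatenated.
    W_f, W_i, W_o, W_c = (rng.normal(size=(d_h, d_h + d_x)) for _ in range(4))
    b_f, b_i, b_o, b_c = (np.zeros(d_h) for _ in range(4))

    def lstm_step(h_prev, c_prev, x_t):
        z = np.concatenate([h_prev, x_t])
        f = sigmoid(W_f @ z + b_f)        # forget gate: what to erase from the cell
        i = sigmoid(W_i @ z + b_i)        # input gate: what new information to write
        o = sigmoid(W_o @ z + b_o)        # output gate: what to expose as h_t
        c_tilde = np.tanh(W_c @ z + b_c)  # candidate cell contents
        c_t = f * c_prev + i * c_tilde    # the gated cell carries information across steps
        h_t = o * np.tanh(c_t)
        return h_t, c_t

    h, c = np.zeros(d_h), np.zeros(d_h)
    for x in rng.normal(size=(5, d_x)):   # toy sequence of 5 inputs
        h, c = lstm_step(h, c, x)
    print(h)
    ```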

    • @vcubingx
      @vcubingx  8 months ago +2

      Hey, thanks for the feedback.
      I personally found little value in mentioning BPTT, as I felt it would confuse viewers who weren't familiar with backpropagation. The algorithm itself is pretty straightforward, and I felt it didn't need an entire section explaining it.
      In response to LSTMs, the video wasn't meant to cover LSTMs at all. I introduced the section towards the end at the last minute for curious viewers. I appreciate you bringing them up though! I plan on making a short 5-7 minute video on them in the future.

  • @ml-ok3xq
    @ml-ok3xq 8 months ago

    Maybe you can loop around to Mamba and explain why it's popular again, and what has changed to uncurse the model.

    • @vcubingx
      @vcubingx  8 months ago +2

      Sure! I wanted to make two follow-ups: transformers beyond language, and language beyond transformers. In the second part I'd talk about Mamba and the future of language modeling.

    • @NicitoStaAna
      @NicitoStaAna 1 month ago

      Min RNN and Min LSTM look promising

  • @adithyashanker2852
    @adithyashanker2852 8 months ago +4

    Music is fire

    • @vcubingx
      @vcubingx  8 months ago

      ye

  • @BooleanDisorder
    @BooleanDisorder 8 months ago +1

    RNN = Remember Nothing Now

    • @vcubingx
      @vcubingx  8 months ago +2

      Hahaha, RNNs did indeed have "memory loss" issues :)

  • @RahulSingh-zo7sm
    @RahulSingh-zo7sm 1 month ago

    Understood very little. You are not explaining enough.