If you enjoyed the video, please consider subscribing!
Part 3! th-cam.com/video/lOrTlKrdmkQ/w-d-xo.html
A small mistake I _just_ realized is that I say trigram/3-gram for the neural language model when I have 3 words to input, but it's actually a 4-gram model, not a 3-gram model, since I'm considering 4 words at a time (including the output word). Hopefully that didn't confuse anyone!
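To make the counting convention concrete, here's a tiny sketch; the example words are made up and not from the video:

```python
# A tiny illustration of the n-gram counting convention (made-up example words).
# With 3 context words predicting 1 output word, each sample spans 4 words,
# which is why this is a 4-gram model rather than a 3-gram model.
context = ("the", "cat", "sat")      # 3 input words fed to the model
prediction = "down"                  # 1 output word
ngram = context + (prediction,)      # ("the", "cat", "sat", "down")
print(len(ngram))                    # 4
```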
Please keep making these machine learning videos. Animations are all we need. They make it 10x easier for me to understand the concepts.
Thanks! I’ll try my best to:)
Isn't"attention all we need"
@@ShashankBhatta ahahhahaha, nice one
8:25 Shouldn't y_t be equal to σ(U·h_(t-1) + K·x_t + b_k), where K is the matrix transforming x_t?
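For anyone comparing notations, here's a minimal sketch of one step of a vanilla RNN in numpy; the matrix and bias names are my own assumptions and may not match the symbols shown at 8:25:

```python
import numpy as np

# A minimal sketch of one step of a vanilla (Elman-style) RNN. The names
# W, U, V, b_h, b_y are assumptions, not necessarily the video's notation.
def rnn_step(x_t, h_prev, W, U, V, b_h, b_y):
    # Hidden state: mixes the previous hidden state and the transformed input.
    h_t = np.tanh(W @ h_prev + U @ x_t + b_h)
    # Output: computed from the new hidden state, not directly from x_t.
    logits = V @ h_t + b_y
    y_t = np.exp(logits) / np.exp(logits).sum()   # softmax over the vocabulary
    return h_t, y_t
```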
I had so many 'aha' moments in this video I lost count! I'm convinced that it is possible to learn any concept- if it's broken down into its simplistic components
I sometimes wonder how well it would work to take something that was mostly an n-gram model, but which added something that was meant to be like, a poor man’s approximation of the copying heads that have been found in transformers.
So, like, in addition to looking at "when the previous (n-1) tokens were like this, how often did different tokens follow?" as in an n-gram model, it would also look at "previously in this document, did the previous token appear, and if so, what followed it?", and "in the training data set, for the previous few tokens, how often did this kind of copying strategy do well, and how often did the plain n-gram strategy do well?", to weight between those.
(Oh, and also maybe throw in some “what tokens are correlated just considering being in the same document” to the mix.)
I imagine that this still wouldn’t even come *close* to GPT-2, but I do wonder how much better it could be than plain n-grams.
I’m pretty sure it would be *very* fast at inference time, and “training” it would consist of just doing a bunch of counting, which would be highly parallelizable (or possibly counting and then taking a low-rank decomposition of a matrix, for the “correlations between what tokens appear in the same document” part).
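For what it's worth, here's a rough sketch of how that blending could look. Every name and the fixed blending weight are made up for illustration, and `ngram_counts` is assumed to be a dict mapping (n-1)-token tuples to Counters of next tokens:

```python
from collections import Counter, defaultdict

# A rough sketch of blending a plain n-gram model with a crude "copy"
# strategy that looks at what followed the previous token earlier in the
# same document. The fixed copy_weight stands in for the "how often did
# each strategy do well" statistics; a real version would estimate it from data.
def next_token_scores(context, ngram_counts, n=3, copy_weight=0.3):
    scores = defaultdict(float)

    # 1) Plain n-gram component: what followed the last (n-1) tokens in training.
    key = tuple(context[-(n - 1):])
    ngram_total = sum(ngram_counts.get(key, Counter()).values())
    if ngram_total:
        for tok, c in ngram_counts[key].items():
            scores[tok] += (1 - copy_weight) * c / ngram_total

    # 2) Copy component: earlier in this document, what followed occurrences
    #    of the previous token?
    prev = context[-1]
    copied = Counter(context[i + 1] for i in range(len(context) - 1)
                     if context[i] == prev)
    if copied:
        copy_total = sum(copied.values())
        for tok, c in copied.items():
            scores[tok] += copy_weight * c / copy_total

    return scores
```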
I think you've gained a key insight, that the approximation does indeed work. I mean heck, if I was only generating two words, a bigram model would be pretty good too.
I remember seeing a paper that shows that GPT-2 itself has learnt a bi-gram model inside itself. Given this, it might be fair to say that what you're describing could potentially even be what the LLMs today learn under the hood. I think your description is great though, as it's an interpretable way to see how models make predictions. Maybe a future line of research!
So insightful ‼️
Outstanding work, this series is required LM viewing now, like 3b1b. Also, are you from Singapore? That's the only way I can reconcile good weather meaning high temperature and high humidity 😂
I'm 90% sure the accent is Indian.
Incredible job, mfv! I look forward to seeing more videos.
More to come!
Oh yes Vivek Vivek omg yes
Nice video
Thanks!
I'm not criticizing all your hard work, but at some points you slipped up, like at 7:54, where you compute `Ht` without explaining the weighted sum; you could also have named `Backpropagation Through Time` in the video.
Also, you could introduce "gated cells" in LSTMs. Long Short-Term Memory networks most often rely on a gated cell to track information throughout many time steps.
And an activation function like 'sigmoid' could be replaced by 'ReLU'; packages like TensorFlow have also preferred it in their documentation.
But honestly, you've created a good intermediate class for learning recurrence.
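For readers who haven't seen gated cells before, here's a minimal sketch of one LSTM step; the weight and bias names are my own, and the gates keep their conventional sigmoid activations:

```python
import numpy as np

# A minimal sketch of one LSTM step, to show what "gated cells" means.
# Weight/bias names are assumptions; gates conventionally use sigmoids
# because their outputs act as 0-to-1 "how much to let through" factors.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, Wf, Wi, Wo, Wc, bf, bi, bo, bc):
    z = np.concatenate([h_prev, x_t])
    f = sigmoid(Wf @ z + bf)          # forget gate: what to erase from the cell
    i = sigmoid(Wi @ z + bi)          # input gate: what new info to write
    o = sigmoid(Wo @ z + bo)          # output gate: what to expose as h_t
    c_tilde = np.tanh(Wc @ z + bc)    # candidate cell contents
    c_t = f * c_prev + i * c_tilde    # cell state tracked across time steps
    h_t = o * np.tanh(c_t)            # hidden state passed to the next step
    return h_t, c_t
```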
Hey, thanks for the feedback.
I found little value in mentioning BPTT, as I felt it would confuse viewers who weren't already familiar with backpropagation. The algorithm itself is pretty straightforward, and I didn't feel it needed an entire section explaining it.
In response to LSTMs: the video wasn't meant to cover them at all. I added that section at the last minute, towards the end, for curious viewers. I appreciate you bringing them up, though! I plan on making a short 5-7 minute video on them in the future.
Maybe you can loop around to Mamba and explain why it's popular again, and what has changed to uncurse the model.
Sure! I wanted to make two follow ups - transformers beyond language and language beyond transformers. In the second part I’d talk about mamba and the future of language modeling
MinRNN and MinLSTM look promising.
Music is fire
ye
RNN = Remember Nothing Now
Hahaha, RNNs did indeed have "memory loss" issues :)
Understood very little. You are not explaining enough.