Video about Diffusion/Generative models coming next, stay tuned!
Was coming to comment this, thanks
Please make video
Please!
I now realise that the key to understanding transformers is to ask why they work, not how. Thanks!
Thank you so much!
@@algorithmicsimplicity this is indeed quite eye-opening! Thanks for your video!
Awesome
I'm a mathematician working at the university, this is a wonderful explanation of Transformers, explaining the meaning and not just the algorithm, very good
Amazing video ! I really appreciate that you explained the Transformer model *from scratch*, and didn't just give a simplistic overview of it 👍
I can definitely see that *a lot* of work was put into this video, keep it up !
Would you share the source code for the animations?
Incredible, every one of your videos is crazy good. Post more!
Thank you so much! I am working on the next one at the moment!
This gem is underrated. This is the only video after which I feel like I know how transformers work. Thanks!
you most likely don't. He didn't even show the attention formula
@@qwsafirkmc9093 I disagree.
Everything in this video is simplified to the extreme, on purpose. Because it is the only way to understand the global behavior quickly.
Yes the attention formula is not shown but the whole process is illustrated (including the softmax operation).
The tokenizer is far more complicated in practice than a one-hot encoding at word level (and a good tokenizer is apparently quite important for good performance).
The positional encoding is, you guessed it, not a one-hot encoding either. It may be complicated enough on its own to require a whole explanation video.
Point is, the whole approach is to avoid details. And I think it works quite well.
@@BenjaminDorra if I don't see how matrices are multiplied, the general shape of the tensor at each step and all that jazz, I barely understand anything. That simple explanation would've done wonders alongside some tensor multiplication shenanigans.
@@qwsafirkmc9093 Ok fair.
From my understanding the attention matmuls behave more like a vector outer product, so every possible pair of tokens (each token being itself a vector) is combined (through a simple element-by-element product), pretty much what is shown in the video.
But yes it is not simple and I may be wrong. Math is hard !
Thanks for your explanation; this is probably the best video on YouTube about the core of the transformer architecture so far. Other videos are more about the actual implementation but lack the fundamental explanation. I 100% recommend it to everyone in the field.
Dude, your explanations are truly next level. This really opened my eyes to understanding transformers like never before. Thank you so much for making these videos. Really amazing resource that you have created.
Awesome video bro. I always like an intuitive explanation.
Thanks so much!
This is the best video on Transformers I have seen on the whole of YouTube.
Thank you for answering my questions!!
Thanks for the tip! I'm always happy to answer questions.
This is an excellent video! Highly underrated. While most videos explain algorithms, this explains the why, which gets me to understand the algorithm on a much deeper level. I wish this video had ended with a summary of all the ideas covered and how those ideas are addressed by the transformer architecture. I was doing that in my head during the video, but not everyone may be as familiar. Thanks anyway. Please make many more videos!
Thanks for the feedback, I will keep it in mind for my next videos!
I am currently doing my PhD in machine learning (well, on its theoretical aspects), and this video is the best explanation of transformers I've seen on YouTube. Congratulations and thank you for your work.
This is the best I have found. I will watch it once a day for a few weeks to really be able to remember all of the steps. It amazes me how crudely simple LLMs are.
Amazing presentation. Thanks!
Thank you so much!
Not so long ago I was searching for hours trying to understand transformers. In this 18 min video I learned more than I did in 3 hours of researching. This is the best computer science video I have ever watched in my entire life.
Truly this is the best explanation of transformers I have seen so far. The especially great logical flow makes difficult concepts easier to understand. Appreciate your hard work!
It seems like whenever I want to dive deeper into the workings of a subject, I only find videos that simply define the parts of how something works, as if from a textbook. You not only explained the ideas behind why the inner workings exist the way they do and how they work, but acknowledged that it was an intentional effort to take an improved approach to learning.
I love the algorithmic way of explaining what mathematics does. Not too deep, not too shallow, just the right level of abstraction and detail. Please please explain RNNs and LSTMs, I'm unable to find a proper explanation. Thanks !
yeah! this "functional" approach to the explanation rather than "mechanical" is truly amazing 👍👍👍👏👏👏
Hey man, I watched your video months ago, and found it excellent. Then I forgot the title, and could not find it again for a long time. It doesn't show up when I search for "transformers deep learning", "transformers neural network", etc. Consider changing the title to include that keyword? This is such a good video, it should have millions of views.
Thanks for the tip.
This video is by far the clearest and best explained I've seen! I've watched so many videos on how transformers work and still came away lost. After watching this video (and the previous background videos) I feel like I finally get it. Thank you so much!
That is possibly the best explanation of Attention I have ever seen!
Thanks!
Thank you so much for your support!
Thanks!
Wow, thanks so much for the support. It means a lot.
Thank you for not using slides filled with math equations. If someone understands the math, they're probably not watching these videos; if they're watching these videos, they're not understanding the math. It's incredible that so many YouTube teachers decide to add math and just point at it for an hour without explaining anything their audience can grasp, and then in the comments you can tell everybody golf-clapped and understood nothing, except for the people who already grasp the topic. Thank you again for thinking of a smart way to teach simple concepts.
amen. the power of out of the box teachers is infinite.
This is the best transformer explanation video on YouTube! Everything is so clear now!
Thank you for your work, I am currently doing a PhD in ML Systems and I learned several things from your video! Thank you for your service!
There are many explanations of what a transformer is and how it works, but this one is the best I've seen. Really good work.
Wonderful video. Easily the best video I've seen on explaining transformer networks. This "incremental problem-solving" approach to explaining concepts personally helps me understand and retain the information more efficiently.
Absolutely love how you explain the process of discovery; in other words, figuring out one part, which then causes a new problem, which can then be solved with another method, and so on. The insight into this process was, for me, even more valuable than understanding the architecture itself.
Wow, just wow. This video makes you really understand the reason behind the architecture, something you don't really get even from reading the original paper.
Really well done, I haven't seen your channel before and this is a breath of fresh air. I've been working on my GPT + transformer video for months and this is the only video online which is trying to simplify things through an independent-realization approach. Before I watched this video my 1-sentence summary of why Transformers matter was: "They contain layers that have weights which adapt based on context" (vs. using deeper networks with static layers), and this video helped solidify that further. Would you agree?
I also wanted to boil down the attention heads as "mini networks" (or linear functions) connected to each token which are trained to do this adaptation. One network pulls out what's important in each word given the context around it, and the other network combines these values to decide how important those two words are in that context; this is how the 'weights adapt'.
I still wonder how important the distinction is between a linear layer and just a single layer. I like how you pulled that into the optimization section. I know how hard this stuff is to make clear, and you did well here.
My one-sentence summary of why transformers matter would be "they are standard CNNs, except the words are first re-ordered in a way that makes the CNN's job easier before being fed in".
Also, a single NN layer IS a linear layer; I'm not sure what you mean by saying you don't know how important the distinction between the two is.
thanks @@maxkho00
I wasn’t aware that they were using a convolutional neural network in the transformer, so I was extremely confused about why the positional vectors were needed. Nobody else in any of the other videos describing transformers pointed this out. Thanks.
"they were using a convolutional neural network in the transformer"
No no, Transformers do not have any convolutional layers, the author of the video just chose CNN as a starting point in the process "Let's start with the solution that doesn't work well, understand why it doesn't work well and try to improve it, changing the solution completely along the way".
The main architecture in natural language processing before transformers was the RNN, the recurrent neural network. Then in 2014 researchers improved it with the attention mechanism. However, RNNs do not scale well, because they are inherently sequential, and scale is very important for accuracy. So researchers tried to get rid of RNNs, and succeeded in 2017. CNNs were also tried but, to my not-very-deep knowledge, were less successful. Interesting that the author of the video chose a CNN as the starting point.
@@Hexanitrobenzene, I suppose I’ll have to watch this video again. I’ll look for what you mentioned.
@@terjeoseberg990
A little off topic, but... Not long ago I noticed that YouTube deletes comments with links. OK, automatic spam protection. (Still, the fact that it does this silently is quite frustrating...) But does it also delete comments where links are separated into words with "dot" between them? I tried to give you the resource I learned this from, but my comment got dropped two times...
...Silly me, I figured I could just give you the title you can search for: "Dive into deep learning". It's an open textbook with code included.
@@Hexanitrobenzene, the best thing to do when YouTube deletes comments is to provide a title or something so I can find it. A lot of words are banned too.
Very nicely done. Your graphics had a calming, almost hypnotic effect.
Great video, I am just starting with Transformers, but I never thought about them in relation to convolutional networks.
Explained thoroughly and clearly from basic principles and practical motivations. Basically the perfect explanation video.
Really made me appreciate NNs even more. Thanks for the video.
This is BY FAR the BEST explanation I have seen on this topic. You, Sir, are extremely talented! Keep up the great work, and thank you!
This is AMAZING
I've been working on coding a transformer network from scratch, and although the code is intuitive, the underlying reasoning can be mind bending.
Thank you for this fantastic content.
I’ve watched so many video explainers on transformers and this is the first one that really helped show the intuition in a unique and educational way. Thank you, I will need to rewatch this a few times, but I can tell it has unlocked another level of understanding with regard to the attention mechanism that has evaded me for quite some time. (Darned KQV vectors…) Thanks for your work!
I still remember when all the cool acronyms I had to deal with was just FNNs, CNNs, ADAM, RNNs, LSTMs and the newest kid on the block, GANs.
Damn, FNNs and CNNs are basic stuff we were taught in the 4th semester of our undergrad. Adam and RNNs were in the "additional resources" section of an introductory course on Deep Learning I took in the same semester.
Encountered LSTMs through personal projects lol.
Still haven't used GANs and autoencoders, but they were the talk of the town back then due to the diffusion models.
@@newbie8051 yeah, I did an FNN from scratch in high school. I was really hopeful about getting into AI research, and then the transformers arrived in my college years…
You remember 2014 - 2015 too?!? 😂
Can't wait for more content from your channel. Brilliantly explained.
Good job! There was a lot of intuition in this explanation.
This video is exactly what I needed. Despite knowing what a transformer is made of, I still felt a sense of incompleteness and didn't know the motivation behind it. And your video answered this question perfectly. Now understanding why it works is another question.
Halfway through the video and I pressed the subscribe button. Very intuitive and easy to understand. Keep up the good work man :)
One suggestion: change the title of the video and you'll get more traction.
Thanks, any title in particular you'd recommend?
This was top notch. Please do one for RetNets and Liquid Neural Nets.
This was so helpful. I was reading through how other models like ELMo work, and it makes sense how they came up with the ideas for those, but the transformer just seemed like it popped out of nowhere with random logic. This video really helps in understanding their thought process.
This is by far the best explanation of the transformer architecture. Well done, and thank you very much.
Your visualization and explanation are very good. They helped me understand a lot. I hope you can post more videos; it must not be easy, otherwise you would have done it already. Keep it up.
This was an excellent video on the global design structure for transformer. Love all your videos!
What a simple but perfect explanation!! You deserve 100 times more subscribers.
Best explainer of transformers I've seen so far, thanks!
This video was all I needed for LLMs/transformers!
Man, your explanation just blew my mind! You should keep doing good work!
I keep coming back to this because it's the best explanation!!
Great concise visual presentation!
Thank you, much appreciated!
👍👍
As both a math enthusiast and a programmer (who obviously also works on AI), I really liked this vid. I can confirm that this is one of the best and most genuine explanations of transformers...
the first so far this year
Thanks! Now what do you do with the output of each self-attention layer, as it now has the appropriate information from its context? Do you pass it through a CNN? Or what do you do with each output vector from each word?
You feed it through another self-attention layer! A transformer consists of multiple self-attention + MLP layers (usually hundreds of layers for a large model). At the end, you run the output vectors from the final layer through a linear classifier to predict whatever the label is (e.g. the next word in the input text for language modelling).
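For concreteness, a minimal PyTorch-style sketch of that stacking (the sizes, names, and 6-layer depth here are illustrative only; positional information and layer norms are omitted):

```python
import torch
import torch.nn as nn

class Block(nn.Module):
    """One self-attention + MLP layer, as described in the reply above."""
    def __init__(self, d=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))

    def forward(self, x):
        attended, _ = self.attn(x, x, x)   # self-attention: queries, keys, values all come from x
        x = x + attended                   # residual connection
        return x + self.mlp(x)             # MLP, also with a residual

class TinyTransformer(nn.Module):
    def __init__(self, vocab=50_000, d=64, depth=6):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        self.blocks = nn.Sequential(*[Block(d) for _ in range(depth)])
        self.classifier = nn.Linear(d, vocab)    # final linear classifier over the vocabulary

    def forward(self, tokens):                   # tokens: (batch, seq) of word ids
        x = self.blocks(self.embed(tokens))
        return self.classifier(x[:, -1])         # scores for the next word, from the last position
```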
@@algorithmicsimplicity Thank you! Appreciate it! Again, very high quality content. Thanks for your time and effort.
you deserve my like bro, really awesome video
So what does it really classify? The image recognition model needed to output a label for the image. What does this transformer output after processing the text?
Whatever you train it to. People have trained transformers to categorize text, predict the sentiment of sentences, all sorts of things. ChatGPT is specifically trained to predict the next word that comes after a partial piece of text. It turns out that you can use this to generate new text from scratch by repeatedly applying it to its own output. This technique is known as 'auto-regression' and I explain it in more detail in this video: th-cam.com/video/zc5NTeJbk-k/w-d-xo.html
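A minimal sketch of that auto-regression loop, assuming a `model` like the one sketched earlier that returns next-word scores (all names here are placeholders):

```python
import torch

def generate(model, prompt_tokens, steps=20):
    """Auto-regression: repeatedly feed the model its own output."""
    tokens = list(prompt_tokens)
    for _ in range(steps):
        scores = model(torch.tensor([tokens]))     # (1, vocab): scores for the next word
        next_token = int(scores.argmax(dim=-1))    # greedy pick; real systems often sample instead
        tokens.append(next_token)                  # the output becomes part of the next input
    return tokens
```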
Very nice video. Name of your channel reflects in the content of the video. Thank you.🙏🙏
Wow! I knew about attention mechanisms but this really brought my understanding to a new level. Thank you!!
Thanks!
Thank you so much for the kind words.
I've had to watch this a few times, great explanation!
the best explanation i ever seen, thank you
Great explanation. Havent found this perspective before.
Can you explain how the NN produces the important-word-pair information scores described after 12:15, for the sentence problem raised at 10:17?
Well, it's just another trained set of values. I suppose it scores pair importance based on the pairs' uses across ~billions of sentences.
The importance-scoring neural network is trained in exactly the same way that the representation neural network is. Roughly speaking, for every weight in the importance-scoring neural network you increase the value of that weight slightly and then re-evaluate the entire transformer on a training example. If the new output is closer to the training label, then that was a good change, so the weight stays at its new value. If the new output is further away, then you reverse the change to that weight. Repeat this over and over again on billions of training examples, and the importance-scoring neural network weights will end up set to values such that the produced scores are useful.
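A toy numpy sketch of that perturb-and-keep idea (real training computes gradients with backpropagation, which gets the same information far more cheaply; the tiny loss function here is only a stand-in for evaluating the whole transformer):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=4), rng.normal(size=4)   # stand-ins for one training example and its label
W = rng.normal(size=(4, 4))                     # toy "importance-scoring network" weights

def loss(W):
    # Stand-in for "evaluate the entire transformer and compare to the training label".
    return float(np.sum((W @ x - y) ** 2))

eps = 1e-3
for _ in range(10_000):                         # in reality: billions of training examples
    i, j = rng.integers(4), rng.integers(4)     # pick one weight
    before = loss(W)
    W[i, j] += eps                              # increase it slightly and re-evaluate
    if loss(W) > before:                        # new output further from the label?
        W[i, j] -= eps                          # then reverse the change
```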
Clear Explanation. Fantastic.
Great video, but it left me with a question. I tried to compare what you arrived at (16:25) to the original transformer equations, and if I understand it correctly, in the original we don't add the red W2X matrix, but we have a residual connection instead, so it is as if we would add X without passing it through an additional linear layer. Am I correct in this observation, and do you have an explanation for this difference?
Yes that's correct, the transformer just adds x without passing it through an additional linear layer. Including the additional linear layer doesn't actually change the model at all, because when the result of self attention is run through the MLP in the next layer, the first thing the MLP does is apply a linear transform to the input. Composition of 2 linear transforms is a linear transform, so we may as well save computation and just let the MLP's linear transform handle it.
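The "composition of two linear transforms is a linear transform" point is easy to check numerically; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
W2 = rng.normal(size=(8, 8))       # the extra linear layer from the video
M = rng.normal(size=(8, 8))        # first linear transform inside the next layer's MLP
x = rng.normal(size=8)

# Applying M after W2 equals applying the single matrix M @ W2, so the
# MLP's own linear transform can absorb W2 and the extra layer can be dropped.
assert np.allclose(M @ (W2 @ x), (M @ W2) @ x)
```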
This video is gold!
Subscribed.
Thanks a lot! This visual lecture cleared the dense fog over my cognitive picture of the transformer.
The visualisation was amazing.
Excellent explanation! All kudos to the author!
I've started binge watching all your videos. 😁
13:00 This may be a silly question, but would it be possible for the transformer to encounter a sentence where all words would have a score of 0.0, creating an issue with simply using an exponential function? I imagine it would be vanishingly rare, but something along the lines of Chomsky's "Colorless green ideas sleep furiously" would seem like the type of sentence that would create such an issue. I assume that this is not a real problem, but I am curious as to why it isn't one.
It's almost impossible for that to happen in practice because we compare words against themselves. So if one word has no relationship with any other word in the sentence, it will still have a large score for itself: so the normalized weight will be 1 for itself and 0 for all other words. Which means that its vector won't include information from any other words, but that's kind of what you want if it really doesn't have any relationship to any other words.
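A tiny numeric illustration with made-up scores, where a word relates strongly only to itself:

```python
import numpy as np

scores = np.array([8.0, 0.0, 0.0, 0.0])          # a word vs itself, then vs 3 unrelated words
weights = np.exp(scores) / np.exp(scores).sum()  # softmax normalization
print(weights)                                   # ~[0.999, 0.0003, 0.0003, 0.0003]
```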
@@algorithmicsimplicity Right, of course, that makes sense. I hadn't thought about words having weights for themselves. Thanks! Your channel is really great, I love the level of depth you go into while still keeping the material approachable.
Thanks. Amazing video. One question though - how do you train the network to output the "importance score"? I get the other part of the self-attention mechanism, but the score seems a bit out of the blue.
The entire model is trained end-to-end to solve the training task. What this means is you have some training dataset consisting of a bunch of input/label pairs. For each input, you run the model on that input, then you change the parameters in the model a bit, evaluate it again and check if the new output is closer to the training label, if it is you keep the changes. You do this process for every parameter in all layers and in all value and score networks, at the same time.
By doing this process, the importance score generating networks will change over time so that they produce scores which cause the model's outputs to be closer to the training dataset labels. For standard training tasks, such as predicting the next word in a piece of text, it turns out that the best way for the score generating networks to influence the model's output is by generating 'correct' scores which roughly correspond to how related 2 words are, so this is what they end up learning to do.
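In practice that trial-and-error search over parameters is done with gradients rather than one weight at a time; a minimal PyTorch-style sketch of the end-to-end loop, where `model` and `dataset` are placeholders:

```python
import torch

def train(model, dataset, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # covers score AND value networks alike
    loss_fn = torch.nn.CrossEntropyLoss()
    for inputs, label in dataset:                      # the input/label pairs
        loss = loss_fn(model(inputs), label)           # how far is the output from the label?
        opt.zero_grad()
        loss.backward()    # gradients tell every parameter which small change helps
        opt.step()         # apply those changes, to all layers at once
```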
I may be too late to the party but glad I found this channel.
The best explanation I found so far!
Insane that this website is free. Thanks!
This a truly great introduction. I've watched other also excellent introductions, but yours is superior in a few ways. Congrats and thanks! 🤙
So why don't we just train a model which has the ability to change its own weights and biases? It would be a great way for the model to adapt its own architecture. The best way, I think, would be a set of small NN models which, instead of having labeled data, each have a cost value. Each NN can give feedback to another, and they are all connected with each other. Then there would be a slightly bigger NN or RNN to which we give our cost value, and it then gives different cost values to each of the smaller NNs. Notice that since each small NN can communicate with the others, they can also change their weights. We can do this in multiple layers, where each layer has multiple groups, with a group able to use another, slightly lower group. The fact that we get so much more dynamic and conceptual room is the best part, and it can also remember by itself. I actually tried it in JavaScript (yes, I know it's not a good language for this kind of work); the results were mixed, but that may be due to my programming skills.
All neural nets change their weights during training. If you mean specifically changing their weights depending on their current input, that is called 'dynamic weights' and it has been explored. The main issue is that weight matrices are dxd and inputs are length-d vectors, so if you use a linear function to generate the new weight matrices, it needs d^3 weights, which quickly becomes computationally infeasible. In the Mamba architecture, where the recurrent weights are only size d, they do use dynamic weights and it helps a lot.
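A quick back-of-the-envelope for that d^3 blow-up (the width d = 1024 is just an example):

```python
d = 1024                         # a typical model width
static_params = d * d            # an ordinary d x d weight matrix: ~1M parameters
dynamic_params = d * d * d       # a linear map from a length-d input to a d x d matrix: ~1B
print(static_params, dynamic_params)   # 1048576 1073741824
```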
@@algorithmicsimplicity No, I meant a very different architecture. I derived it myself over the course of 3 months (not because it's complex, but because I'm very busy). It was a bunch of small models with 5 to 10 hidden layers, where every model connects to every other model, including their fitness input, which is the average of what all the models think. The fitness is just a single value (cost or loss) which is used to change the model's parameters. Then there's a head neuron, slightly bigger with 15 to 20 hidden layers; it has a single input and multiple outputs. The single input is a way to tell it the current cost/loss of the entire model, and the outputs connect directly to every small NN, so it can basically give different costs to each NN; it is also trained by the main cost input. Now imagine this whole architecture as a single Group. In the next Group we use small Groups instead of NNs (so that information is processed in modules), and then we can have layering, where there are multiple Layers containing Groups or combinations of Groups. The idea is that each NN can and will change depending on the inputs; the Groups inside Groups mean information is processed in modules, like mini functions doing smaller tasks; every NN eventually gets the same input; and the Layers make it high-dimensional processing while understanding all the data carefully. By using something like this, you don't even need to give it input, it will figure things out automatically (predicting the next input, and so also understanding it). Even though it will be more computationally intensive while training, it runs in parallel at inference time, and it would work with sequential or simple data. Because it can dynamically change its weights, it is easy for it to learn, store, etc. In fact it can create its own architecture, which could be more efficient for the given input/output labels. (It was inspired by the brain, while trying to make it programming-friendly.) It is fully dynamic. For the computational part, I ran 15 NNs, 5 Groups, 5 Layers, where each NN had 5 hidden layers and the head NN had 19. The results were mixed, but I only trained for 30 epochs, and these models would need more epochs because they are time-dependent. It ran on the web, on the CPU, in browser JS with no libraries (it blocked the main thread because I wrote it synchronously, but it did work, taking just a few seconds to train and run).
This was a summary of an architecture I derived. I don't think anyone is going to read it, but it's nice to have a description.
Finally!!!! Exactly the video I wanted!!!!
This is one of the genuinely best and most innovative explanations of transformers/attention I've ever seen! Thank you.
Great video, but I was wondering how one aspect of the transformer is handled in the real world. How are importance scores assigned to pairs in order to determine their importance? Basically, on a massive scale, how can importance scores be automatically assigned so as to get the correct importance for a pair in a given sentence?
The entire model is trained end-to-end to solve the training task. What this means is you have some training dataset consisting of a bunch of input/label pairs. For each input, you run the model on that input, then you change the parameters in the model a bit, evaluate it again and check if the new output is closer to the training label, if it is you keep the changes.
By doing this process, the score generating networks will change over time so that they produce scores which cause the model's outputs to be closer to the training dataset labels. It turns out that the best way for the score generating networks to influence the model's output is by generating 'correct' scores which roughly correspond to how related 2 words are, so this is what they end up learning.
Great video, maybe you could cover retentive networks (from the RetNet paper) in the same fashion next, as they aim to be a replacement for the quadratic/linear attention in the transformer (I'm curious how much of the "blurry vector" problem their approach suffers from).
Fantastic! Loved it! Exactly what I needed.
Best explanation I have seen so far.
Basically, the transformer is a CNN with a lot of extra upgrades. Good to know.
How does the explanation in this video relate to Query, Key and Values (as defined in the Attention is all you need paper)?
This is really a great video - thank you!!
The "key-query" attention scoring is equivalent to the bi-linear scoring function in my explanation, where the bi-linear form matrix is given by K^TQ. The value transformation V is exactly the linear representation function in my explanation. I still have no idea why they decided to give the scoring function matrix two different names (key and query), it just confuses everyone.
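That equivalence is quick to verify numerically; a sketch with arbitrary dimensions, where xi and xj stand for two word vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16
K, Q = rng.normal(size=(d, d)), rng.normal(size=(d, d))
xi, xj = rng.normal(size=d), rng.normal(size=d)

# Dot product of the two projected word vectors == one bilinear form with matrix K^T Q.
assert np.isclose((K @ xi) @ (Q @ xj), xi @ (K.T @ Q) @ xj)
```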
Let's assume X is our input, a sentence containing N words. Each word has embedding dimension of size P. Thus X is an NxP matrix.
Then according to the "Attention is All You Need" paper, we have:
K = X * W_k
Q = X * W_q
V = X * W_v
A = softmax(QK^T)
Output = AV
Where:
W_k, W_q and W_v are PxP matrices
K, Q and V are NxP matrices.
A is an NxN matrix
Output is an NxP matrix.
I am confused about how the V matrix connects to the "pair-wise representations". In the video, you show operations being done on pairs of words (such as at 13:40). However, there doesn't seem to be any pair-wise operation occurring when computing V? If there were a pair-wise operation, wouldn't the dimension of W_v need to be NxN instead of PxP?
I agree that we are computing a single "attention" scalar value for each word pair. This is why A has dimension NxN. However, it seems like V contains individual representations of the words that are "smooshed" together when we multiply by A, rather than V containing (or operating on) pair-wise representations?
Again, great video! And I greatly appreciate your help!! @@algorithmicsimplicity
@@dmlqdk When you apply the linear transform V to the pair [x1, x2] the result is V1x1 + V2x2. Basically we are applying a linear transform to each input and summing them. Because, in a given column, x2 is the same for every pair, you are effectively just adding a constant value to each V1x_i. You can factor this constant value outside of the attention weights, at which point it just becomes part of the residual term. I explained this process in more detail here: www.reddit.com/r/MachineLearning/comments/17cmzcz/comment/k5t7g70/?context=3
At this point, you no longer have 'pair' representations, since each value vector is now just a linear transform applied to one word. Each column of the [NxN] grid of value vectors contains V1x_i for i in {1,...n}, i.e. all of the columns are identical. Since all of the columns are identical, instead of elementwise multiplying the matrix of attention values by the matrix of value vectors and then summing, you can instead rewrite this operation as a single matrix-vector product, which is what the AV operation is in the standard self attention. V is that column of value vectors, where each entry is just V1x_i.
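A small numeric check of that rewrite, using the N-words / width-P notation from the question above:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 5, 8
X = rng.normal(size=(N, P))               # one row per word
W_v = rng.normal(size=(P, P))
A = rng.random(size=(N, N))
A = A / A.sum(axis=1, keepdims=True)      # normalized attention weights

V = X @ W_v                               # value vector per single word, no pairs needed
# Weighted sum over the grid's j-axis collapses to a single matrix product A @ V.
weighted_sum = np.stack([sum(A[i, j] * V[j] for j in range(N)) for i in range(N)])
assert np.allclose(weighted_sum, A @ V)
```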
This makes so much more sense now. Thank you!! @@algorithmicsimplicity
Amazing explanations and video!
Great video
This video is damn impressive mann
I wish this had tied in specifically to the nomenclature of the transformer such as where these operations appear in a block, if they are part of both encoder and decoder paths, how they relate to "KQV" and if there's any difference between these basic operations and "cross attention".
I'll be doing this, but in short: the little networks he showed connected to each pair are KQ (the word-pair representation), and the V is the value network. All of this can be done in a decoder-only model as well, and cross attention is the same thing except you are using two separate sequences looking at each other (such as the two sentences in a translation network). It's nice to know that GPT, for example, is decoder-only, and so doesn't even need cross attention.
Can you do a video on tricks like layer normalization, residual connections, byte pair encoding, etc.?
Thanks. I had read the original Transformer paper and I barely understood the underlying ideas.
fantastic video, congratulations on and thank you for making it
FINALLY, something gave me a basic understanding. Thank you so much!
2:36 Wow, just 50k words... that sounds pretty easy for computers. Amazing.
I think they were actually being used publicly as far back as 2006 or earlier, in compressor algorithm competitions.
9:07 Will the result be similar if we sum the vectors down each row instead of each column?
Yep, everything is completely symmetric for rows/columns, since in either case you are still summing over all pairs that contain a particular word.