Hello all! In the video I made a comment about how the Key and Query matrices capture low and high level properties of the text. After reading some of your comments, I've realized that this is not true (or at least there's no clear reason for it to be true), and probably something I misunderstood while reading in different places in the literature and threads.
Apologies for the error, and thank you to all who pointed it out! I've removed that part of the video.
No worries. It might help to pin this comment to the top. Thanks a lot for the video.
Thanks for the note. That comment actually sounds very reasonable to me. If I understand this right, keys and queries help to determine the context.
Another big mistake: in "measure 3: scaled dot product" you wrote "divided by the length of a vector", which is incorrect. What you actually divide by is the number of dimensions of the vector, which is correct. Please fix it to avoid confusion.
I have watched more than 10 videos trying to wrap my head around the paper "Attention Is All You Need". This video is by far the best. I have been trying to assess why it is so effective at explaining such a complex concept, and why the concept is hard to understand in the first place. Serrano explains the concepts step by step, without making any assumptions, which helps a great deal. He also uses diagrams, showing animations along the way as he explains. As for the architecture, there are so many layers condensed into it. It has obviously evolved over the years, with multiple concepts interlaced into the attention mechanism, so it is important to break it down into its components and take each one at a time - positional encoding, tokenization, embedding, feed forward, normalization, neural networks, the math behind it, vectors, query-key-values, etc. Each of these needs explaining, or perhaps a video of its own, before putting them all together. I am not quite there yet, but this has improved my understanding a great deal. Serrano, keep up your approach. I would like to see you cover other areas such as Transformers with human feedback and the new QStar architecture. You break it down so well.
Thank you for such a thorough analysis! I do enjoy making the videos a lot, so I'm glad you find them useful.
And thank you for the suggestions! Definitely RLHF and QStar are topics I'm interested in, so hopefully soon there'll be videos of those!
Did you also try reading the original Attention is All you Need paper, and if so, what was your experience? Was there too much jargon and math to understand?
Agree, an excellent explanation.
@@blahblahsaurus2458 Too much jargon, obviously intended for those already familiar with the concepts. The diagram appears upside down and is not intuitive at all. Nobody has attempted to redraw the architecture diagram from the paper; it follows no particular convention at all.
Absolutely ❤
This might be the best video on attention mechanisms on YouTube right now. I really liked the fact that you explained matrix multiplications as linear transformations. It brings a whole new level of understanding with respect to the embedding space. Thanks a lot!!
Thank you so much! I enjoy seeing things pictorially, especially matrices, and I'm glad that you do too!
This is really great, thanks a lot!
That is what many disseminators lack: explaining things with the mathematical foundations. I understand that it is difficult to do so. However, you did it, and in an amazing way. The way you explained the linear transformation was epic. Thank you.
This is definitely the best-explained video on the attention model. The original paper sucks because there is no intuition at all, just plain words and crazy math equations that I can't tell what they're doing.
Things don't suck just because you are not able to understand them. Without the original paper there would be no necessity for this video, as the content wouldn't "exist".
This is unequivocally the best introduction to Transformers and Attention Mechanisms on the entire internet. Luis Serrano has guided me all the way from Machine Learning to Deep Learning and onto Large Language Models, maximizing the entropy of my AI thinking, allowing for limitless possibilities.
💯 agree. Everything else is utter BS by comparison. I’ve never tipped someone $10 for a video before this one ❤
This is the best description of Keys, Query, and Values I have ever seen across the internet. Thank you.
I am so grateful that there are people like Luis Serrano who present incredibly complex material in a clear way. It must be an incredible job. I noticed Mr. Serrano very positively in Udacity. Just by reading the original papers, it is unlikely for “normal people” to understand such material. Many, many thanks!
Best explanation of attention on the internet, hands down. Finally someone who explains the 'why' in the internals of the transformer.
Thank you good sir.
You explain very well, Luis. Thank you. It's HARD to explain complicated topics in a way people can easily understand. You do it very well.
Thank you! :)
Are you kidding me? Seriously? Lol, some YouTubers think that if they use fancy words they are good at teaching, but you're totally different, man. You've cleared up all of my confusion. Thanks man.
This is one of the best explanations I have seen. Making complex things simple is an art, and Serrano is a master of it. I first saw a video of Serrano's on RNNs a few years back and was really impressed by his way of teaching. Keep it up, Serrano! We need more people like you to help students.
I really like how you're using these concrete examples and combining them with visuals. These really help build an intuition on what's actually happening. It's definitely a lot easier for people to consume than struggling with reading academic papers, constantly looking things up, and feeling frustrated and unsure.
Please keep creating content like this!
Just the Keys and Queries section is worth the watch! I have been scratching my head on this for an entire month!
Thank you! :)
These are the best videos I have seen so far for understanding how Transformers / LLMs work. Thank you.
I really like maths, but it is good that you keep the math simple so that one doesn't lose the overview.
You really have a talent to explain complex things in a simple way.
Greets from Switzerland
This really is one of the best videos explaining the purpose of K, Q, V. The illustrations provide a window into the math behind the concepts.
I haven't seen a better video explaining Attention. Thanks a ton for your time and effort. God bless.
The best explanation I've seen so far! Really cool to see how much closer the field is getting to understanding those models instead of being so abstract thanks to people like you, Luis! :)
Honestly you are the best content creator for learning Machine learning and Deep learning in a visual and intuitive way
This is absolutely the best video that clearly illustrates and explains why we need V, K, Q in attention. Bravo!
Thanks for sharing your knowledge freely. I have been waiting patiently. You add a different perspective that we appreciate. Looking forward to the 3rd video. Thank you!
Thank you! So glad you like the videos!
Math is not my strong suit, but you made these mathematical concepts so clear with all the visual animations and your concise descriptions. Thank you so much for the hard work and making this content freely accessible to us!
One of the best videos on attention. Such a complex subject taught in a simple manner. Thank you!
Absolutely the best set of videos explaining the most discussed topic. Thank you!!
This video is, without a doubt, the best video on transformers and attention that I have ever seen.
A professor here - preparing for my course and trying to find an easier way to talk about these ideas. I learned a lot! Thank you!
This is one of the best videos on attention and Q, K, V so far. Thank you for the detailed explanation.
What a flawed YouTube algorithm, that it showed me this gem only after so many overcomplicated videos on attention. Every student should learn attention from THIS VIDEO!
Please continue making videos. You're the best teacher on this planet.
Best video explaining what the query, key, and value matrices are! You saved my day.
Simply the best explanation on this subject. Crystal clear. Thank you.
Amazing explanation of very difficult concepts. The best explanation I have found on the topic so far.
Your explanations are truly great! You have even understood that you sometimes have to ‘lie’ first to be able to explain things better. My sincere compliments! 👊
Amazing video. Pushed my understanding of attention forward by quite a few steps and helped me build an intuition for what's happening under the hood. Eagerly waiting for the next one.
I study linear algebra during the day on Coursera and watch YouTube videos at night on state-of-the-art machine learning. I'm amazed by how fast you learn with Luis. I've learned everything I was curious about. Thank you!
Thank you, it’s an honor to be part of your learning journey! :)
Finally! This is the best from the tons of videos/articles I saw/read.
Thank you for your work!
This is truly the best video explaining each stage of a transformer, thanks man
This is the best video I have seen on the attention model. Even after reading through so many articles it was not intuitively clear, but now it is!! Thanks.
The best videos on transformers on the internet. You are the best teacher!
Great video series! Thank you! That helped a ton 🙂
One small remark: the concept of the "length" of a vector that you use here confused me. I guess you take the point of view of a programmer: len(vector) outputs the number of dimensions of the vector. However, for a mathematician, the length of a vector is its norm, also called its magnitude (the square root of x^2 + y^2).
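To make the distinction concrete (standard notation, not taken from the video): for a vector with $d$ entries, the number of dimensions is $d$, while its length in the mathematical sense is the Euclidean norm; it is the dimension $d_k$ of the key vectors, not the norm, that appears under the square root in the paper's scaling:

$$
\|v\| = \sqrt{v_1^2 + v_2^2 + \cdots + v_d^2},
\qquad\text{whereas the scores are divided by } \sqrt{d_k}.
$$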
This probably is “the best video “ on this topic
12:30 - the attention mechanism finds the similarity (scaled dot product or cosine similarity) between each word in the sentence and every other word.
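As a concrete illustration of that step, here is a minimal sketch (toy 2-dimensional embeddings with made-up numbers, not the video's values) that computes the dot-product similarity between every pair of words and turns each row into attention weights with softmax:

```python
import numpy as np

# Toy 2-D embeddings for a three-word sentence (made-up numbers).
embeddings = np.array([
    [1.0, 0.1],   # "apple"
    [0.9, 0.2],   # "orange"
    [0.1, 1.0],   # "phone"
])

# Similarity of every word with every other word: all pairwise dot products.
scores = embeddings @ embeddings.T               # shape (3, 3)

# Scale by the square root of the embedding dimension, as in the paper.
scores = scores / np.sqrt(embeddings.shape[1])

# Softmax each row so the weights for one word sum to 1.
weights = np.exp(scores)
weights = weights / weights.sum(axis=1, keepdims=True)
print(weights)   # row i = how much word i attends to each word
```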
I had to read this research paper for my Intro to AI class, and it's obviously written for people who already have a lot of background knowledge in this field. So, being a newbie, I was so lost lol. Thanks for breaking it down and making it easy to understand!
This is such a detailed and informative explanation of Transformer models! I appreciate the effort put into breaking down complex concepts with visuals and examples. Keep up the great work!
Most intuitive explanation for QKV, as someone with only an elementary understanding of linear algebra.
I really liked the way you showed the motivation behind the softmax function. I was blown away. Thanks a lot, Serrano!
MAN! I have no words! Your channel is priceless! thank you for everything!!!
Super clear ! Great video !!
Awesome. You explained everything very well. It made life easy for me.
This is one of the best videos I've come across for understanding embeddings and attention. Looking forward to more explanations like this that simplify such complex mechanisms in the AI world. Thanks for your efforts.
This is the best video that I have seen about the concept of attention! (I have seen more than 10 videos but none of them was like this.) Thank you so much! I am waiting for the next videos that you have promised! You are doing a great job!
The best explanation I have seen about Transformers. Thank you!
One of the best explanations I have ever watched
Really, thanks for this video. I am a student in China, and none of my teachers taught me this so clearly.
None of the videos I have seen on this subject actually explain where the hell the QKV values come from! It's amazing that people jump into making videos without understanding the concepts clearly; I guess YouTube must pay a lot of money! This video does a good job of explaining most things, but it never tells us where the actual QKV values come from or how the embeddings turn into them, and it actually gets some things wrong in my opinion. The Q comes from the embeddings multiplied by W_Q, which is a weight and parameter of the model, but then the question is: where do W_Q, W_K, W_V come from???
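To the question of where W_Q, W_K, W_V come from: they are learned parameter matrices, initialized randomly and then adjusted by backpropagation along with every other weight in the model. A minimal sketch of the forward pass (sizes and names are illustrative, not the video's code):

```python
import numpy as np

d_model, d_k = 8, 4                     # illustrative sizes
rng = np.random.default_rng(0)

# W_Q, W_K, W_V start out random; training updates them by gradient
# descent, exactly like any other weight in the network.
W_Q = rng.normal(size=(d_model, d_k))
W_K = rng.normal(size=(d_model, d_k))
W_V = rng.normal(size=(d_model, d_k))

X = rng.normal(size=(5, d_model))       # 5 token embeddings (toy input)

# Queries, keys, and values are the embeddings pushed through these
# learned linear maps.
Q, K, V = X @ W_Q, X @ W_K, X @ W_V

scores = Q @ K.T / np.sqrt(d_k)         # scaled dot-product similarities
weights = np.exp(scores)
weights = weights / weights.sum(axis=1, keepdims=True)
output = weights @ V                    # context-aware token vectors
```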
This is the best video for people trying to understand basic knowledge about transformer, thank you so much ^^
Thank you for the great tutorial. This is the clearest explanation I have found so far.
Excellent job! Please continue making videos that breakdown the math.
Simply the best video on "Attention Is All You Need". I tried to understand it from different videos, blogs, and the paper itself, and couldn't get close to what I understood from this video. It clarified almost all the questions I had, except for a few which I think will be clarified in the next video. You have amazing teaching skills, kudos to you man.
This is the best video I’ve seen on this topic. Well done sir
This video has the best explanations of the QKV matrices and linear layers among the resources I've come across. I don't know why, but people seem uninterested in explaining what's really happening at each step, which results in loads of vague points. Still, the video could have been further improved with more concrete examples and numbers. Thank you.
I'm going to try to implement self-attention and multi-head attention myself, thanks so much for doing this guide!
Thanks!
This is one of the best explanations of Q, K & V I've heard!
Thank You very much sir.
I am so pleased by the way you teach. Alhumdulillah. Thank GOD.
However, I was unable to grasp the key, query, values part.
Thank You Very Much
I've been watching over 10 Transformer architecture tutorial videos, and this one is so far the most intuitive way to understand it! Really good work! Yeah, natural language processing is a hard topic; this tutorial kind of reveals the black box of the large language model.
Sir , You are a Blessing to New Learners like me , Thank You , Big Respect.❤
Hi Luis. Thank you for this video. I'm sure this is a very good way to explain this complex topic, but I just can't get it into my brain yet. I'm currently doing the Math for Machine Learning specialization on Coursera and brushing up my algebra and calculus skills, which are way too low. In any case, you got me involved in this, and now I will grind through it till I make it. I'm sure the pain will lessen and the fog will lift. 😊
Very instructive and mind-opening on a difficult topic. Thanks
The best explanation I've ever seen of the attention mechanism, amazing
This is the best video I have seen explaining the attention mechanism. Keep up the good work!
Best video for getting a clear understanding of transformers
This is a great video, with clarity on Keys, Queries, and Values. Thank you
Thank you, really good job on the visualizations! They make the process really understandable.
Great explanation. I was waiting for this after your first video on the attention mechanism! You are so talented at explaining things in easily understandable ways! Thank you for the effort put into this and keep up the great work!
Today I understood the attention mechanism better than ever before.
@SerranoAcademy
If you want to arrive at the same notation as in the mentioned paper, Q times K_transpose, then the orange is the query and the phone is the key here. Then you calculate query times Q times K_transpose times key_transpose (as mentioned in the paper).
Remark: the paper uses "sequences", described as row vectors. However, usually one uses column vectors. Using row vectors, the linear transformation is a left multiplication a times A and the dot product is written as a times b_transpose. Using column vectors, the linear transformation is A times a and the dot product is written as a_transpose times b. This, in my opinion, is the standard notation, e.g. writing Ax = b and not xA = b.
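For reference, the formula from the paper that this notation discussion refers to, with each row of Q, K, V being one token written as a row vector:

$$
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V,
\qquad
Q = X W^{Q},\quad K = X W^{K},\quad V = X W^{V}.
$$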
Amazing explanation. I am a professional pedagogue and this is stellar work
Thank you so much!! I watched several videos and none could explain the concept so well.
Thanks, I'm so glad you enjoyed it! Lemme know if you have suggestions for more topics to cover!
Thanks a lot for this. I always got terrified of the maths that might be there but the way you explained it all made it seem really easy ❤
The best video I have ever watched about this!
Best video so far on this topic
Best video on this topic so far!
Amazing explanation. Thanks a lot for your efforts.
"This step is called softmax" . 😮😮😮
Today I understood why softmax is used. Such a beautiful function. And such a great way to demonstrate it.
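For anyone who wants to see the function itself, here is a minimal, numerically stable sketch of softmax (illustrative code, not taken from the video):

```python
import numpy as np

def softmax(scores):
    """Turn arbitrary real-valued scores into positive weights that sum to 1."""
    shifted = scores - np.max(scores)   # subtract the max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

print(softmax(np.array([2.0, 1.0, 0.1])))  # approximately [0.66, 0.24, 0.10]
```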
Damn! There's no better video to understand Attention than this!!
Amazing video! Took my intuition to the next level.
Great explanation. I just really needed the third video. Hope you will post it soon.
OMG this is so well explained! Thank you so much for the tutorials!
by far the best explanation, Thanks for sharing!
Great video, finally understood all the concepts in their context.
I have a small question: Up until this point, we've seen sentences that consist of a word that belongs to either the "technology" class or the "fruit" class. How might we create an embedding that works for a sentence like "I saw an apple on a phone today"?
Great question! Yes, these things are always ambiguous. The window for attention is larger, so hopefully if there are more sentences around, there will be a hint that you're talking about apple as a fruit.
It was fascinating to me. I searched a lot for an explanation of the math and couldn't find one; thanks for this.
Please do more 😅 with more complex examples.
Hey, I know this 👦. He is my maths teacher, who doesn't only teach but makes us visualize why we learn a topic and how it will be useful in the real world ❤
Amazing video. That's what I was looking for. I need to know the mathematical background to understand what is happening behind the scenes. Thank you, sir!
@SerranoAcademy
At 13:23, you show a matrix-vector multiplication with a column vector (rows of the table times columns of the vector) by right-multiplication. On the right side, maybe you could use, in addition to "is sent to", the icon "orange′" (orange prime). This would show the multiplication more clearly.
Remark: you use a matrix-vector multiplication here (using a row of the matrix and the words as a column on the right of the matrix). If you use row vectors, the word vector should be placed horizontally on the left of the matrix, and in the explanation a column of the matrix has to be used. The result is then a row vector again (maybe a bit hard to sketch).
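The two conventions side by side, for a transformation matrix $A$ and a word embedding written as a column vector $x$ or as a row vector $x^{\top}$:

$$
\underbrace{A\,x}_{\text{column-vector convention}}
\qquad\longleftrightarrow\qquad
\underbrace{x^{\top} A^{\top}}_{\text{row-vector convention}},
\qquad\text{since } (A\,x)^{\top} = x^{\top} A^{\top}.
$$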
Thank you @Serrano.Academy, very useful video. The only thing that is a bit misleading is around 24:50, where Q, K are implied to be multiplied with the word embeddings to produce the cosine distance, when in fact the embeddings are already included in Q, K. I guess you are using Wq, Wk interchangeably with Q, K for simplicity.
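Spelling out the distinction made in that comment, with X holding the word embeddings as rows: W^Q and W^K are the learned matrices shown in the video, while Q and K already contain the embeddings:

$$
Q = X W^{Q}, \qquad K = X W^{K}, \qquad
Q K^{\top} = X\, W^{Q} (W^{K})^{\top} X^{\top}.
$$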
Very well explained. Got a bit closer to understanding attention models.
Now whenever I watch a Serrano video, I like it first and then start watching, because I know the video is going to be outstanding, as always.
Dr. Luis - Excellent explanation of the math behind the Q, K, V magic from the AIAYN paper! In the video, when you are explaining multi-head attention (around 30:20), if I am not mistaken you have not explained the W matrix in the formula at either the "head" level or the "concat" level. Please shed some light on this if you don't mind. My hunch is that, to keep things simple, you might have decided to skip this W matrix that is part of the NN loss function when the decoders are in action.
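For reference, the multi-head formula from the paper that the question refers to; the per-head projections $W_i^{Q}, W_i^{K}, W_i^{V}$ and the output matrix $W^{O}$ are all learned parameters of the attention layer (they belong to the model itself, not to the loss function):

$$
\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\, W^{O},
\qquad
\mathrm{head}_i = \mathrm{Attention}\!\left(Q W_i^{Q},\, K W_i^{K},\, V W_i^{V}\right).
$$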
I have been trying to conceptualise what QKV do from many videos (and the original AIAYN paper) but yours is the first to resonate with what I was thinking i.e. that
1) Q contains the 'word mix' (more accurately token mix) for our input text
2) K contains the real deep meaning (thing/action/feature, etc) behind every word so Q*K gives us our actual word mix (aka input text sentence) deep meaning. We need a consistent way of storing meaning for the computer since two different human language sentences or 'word mixes' could yield the same deep meaning ("a likes b", "b is liked by a") and vice versa i.e. two identical sentences could give a different deep meaning depending on what came before them. QK gives us that.
3) V gives us the features of the word that would likely come next following our input.
Does this 'intuition' sound roughly right?
Thank you! Yes I love your reasoning, that's the intuition behind them!
@SerranoAcademy Thank you for getting back and helping my understanding of this area 🙌 You may know Karpathy has a video on building a simple GPT and that helped me understand qkv better too. Wishing all the best with your channel and ventures, and compliments of the season! 🎄
@36:15
Divide by the square root of the number of dimensions of the vector, not the length of the vector. The length of the vector is sqrt(coeff_i^2 + coeff_j^2).
amazing video ! Keep it up !
Thanks
Thank you! Yes absolutely, I should have said dimensions instead of length.
@@SerranoAcademy The only thing between me and the paper was this video and it helped me clear the blanks I had after reading the paper.
Thank You once again !