The math behind Attention: Keys, Queries, and Values matrices

  • Published on Nov 24, 2024

Comments • 359

  • @SerranoAcademy
    @SerranoAcademy  1 year ago +66

    Hello all! In the video I made a comment about how the Key and Query matrices capture low and high level properties of the text. After reading some of your comments, I've realized that this is not true (or at least there's no clear reason for it to be true), and probably something I misunderstood while reading in different places in the literature and threads.
    Apologies for the error, and thank you to all who pointed it out! I've removed that part of the video.

    • @tantzer6113
      @tantzer6113 1 year ago +3

      No worries. It might help to pin this comment to the top. Thanks a lot for the video.

    • @chrisw4562
      @chrisw4562 9 months ago

      Thanks for the note. That comment actually sounds very reasonable to me. If I understand this right, keys and queries help to determine the context.

    • @masatoedamura184
      @masatoedamura184 9 days ago

      Another big mistake: in "measure 3: scaled dot product" you wrote "divided by the length of a vector", which is incorrect. At the same time you divide by the number of dimensions of the vector, which is correct. Please fix it to avoid confusion.
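
      (For reference, the formula in "Attention Is All You Need" scales the dot products by the square root of d_k, the number of dimensions of the key vectors, not by their Euclidean length. A minimal NumPy sketch of scaled dot-product attention, with toy shapes chosen purely for illustration:)

        import numpy as np

        def scaled_dot_product_attention(Q, K, V):
            # Q, K: (num_tokens, d_k); V: (num_tokens, d_v)
            d_k = K.shape[-1]                                # dimension of the key vectors
            scores = Q @ K.T / np.sqrt(d_k)                  # scale by sqrt(d_k), as in the paper
            weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
            weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
            return weights @ V

        # Toy usage: 3 tokens with 4-dimensional queries, keys and values
        rng = np.random.default_rng(0)
        Q, K, V = (rng.normal(size=(3, 4)) for _ in range(3))
        print(scaled_dot_product_attention(Q, K, V).shape)   # (3, 4)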

  • @JTedam
    @JTedam 11 months ago +79

    I have watched more than 10 videos trying to wrap my head around the paper, Attention Is All You Need. This video is by far the best. I have been trying to assess why it is so effective at explaining such a complex concept and why the concept is hard to understand in the first place. Serrano explains the concepts, step by step, without making any assumptions. It helps a great deal. He also uses diagrams, showing animations along the way as he explains. As for the architecture, there are so many layers condensed into it. It has obviously evolved over the years with multiple concepts interlaced into the attention mechanism, so it is important to break it down into the various pieces and take each one at a time - positional encoding, tokenization, embedding, feed forward, normalization, neural networks, the math behind it, vectors, query-key-values, etc. Each of these is an architecture that needs explaining, or perhaps a video of its own, before putting them together. I am not quite there yet, but this has improved my understanding a great deal. Serrano, keep up your approach. I would like to see you cover other areas such as Transformers with human feedback, the new Q-star architecture, etc. You break it down so well.

    • @SerranoAcademy
      @SerranoAcademy  11 months ago +6

      Thank you for such a thorough analysis! I do enjoy making the videos a lot, so I'm glad you find them useful.
      And thank you for the suggestions! Definitely RLHF and QStar are topics I'm interested in, so hopefully soon there'll be videos of those!

    • @blahblahsaurus2458
      @blahblahsaurus2458 8 months ago +1

      Did you also try reading the original Attention is All you Need paper, and if so, what was your experience? Was there too much jargon and math to understand?

    • @visahonkanen7291
      @visahonkanen7291 7 months ago

      Agree, an excellent explanation.

    • @JTedam
      @JTedam 7 months ago +3

      @@blahblahsaurus2458 Too much jargon, obviously intended for those already familiar with the concepts. The diagram appears upside down and is not intuitive at all. Nobody has attempted to redraw the architecture diagram in the paper. It follows no particular convention at all.

    • @TomChenyangJI
      @TomChenyangJI 1 month ago

      Absolutely ❤

  • @Rish__01
    @Rish__01 1 year ago +113

    This might be the best video on attention mechanisms on youtube right now. I really liked the fact that you explained matrix multiplications with linear transformations. It brings a whole new level of understanding with respect to embedding space. Thanks a lot!!

    • @SerranoAcademy
      @SerranoAcademy  1 year ago +7

      Thank you so much! I enjoy seeing things pictorially, especially matrices, and I'm glad that you do too!

    • @maethu
      @maethu 11 months ago +1

      This is really great, thanks a lot!

    • @JosueHuaman-oz4fk
      @JosueHuaman-oz4fk 8 months ago

      That is what many disseminators lack: explaining things with the mathematical foundations. I understand that it is difficult to do so. However, you did it, and in an amazing way. The way you explained the linear transformation was epic. Thank you.

  • @fcx1439
    @fcx1439 9 months ago +27

    This is definitely the best-explained video on the attention model. The original paper sucks because there is no intuition at all, just simple words and crazy math equations, and I don't know what they're doing.

    • @nbtble
      @nbtble 2 months ago +2

      Things don't suck just because you are not able to understand them. Without the original paper, there would be no necessity for this video, as the content wouldn't "exist".

  • @Aaron洪希仁
    @Aaron洪希仁 1 year ago +23

    This is unequivocally the best introduction to Transformers and Attention Mechanisms on the entire internet. Luis Serrano has guided me all the way from Machine Learning to Deep Learning and onto Large Language Models, maximizing the entropy of my AI thinking, allowing for limitless possibilities.

    • @JonMasters
      @JonMasters 7 months ago +2

      💯 agree. Everything else is utter BS by comparison. I’ve never tipped someone $10 for a video before this one ❤

  • @computersciencelearningina7382
    @computersciencelearningina7382 8 months ago +9

    This is the best description of Keys, Query, and Values I have ever seen across the internet. Thank you.

  • @olivergrau4660
    @olivergrau4660 2 months ago +1

    I am so grateful that there are people like Luis Serrano who present incredibly complex material in a clear way. It must be an incredible job. Mr. Serrano had already stood out to me very positively on Udacity. Just by reading the original papers, it is unlikely for “normal people” to understand such material. Many, many thanks!

  • @kennethm.4998
    @kennethm.4998 9 days ago

    Best explanation of attention on the internet, hands down. Finally someone who explains the 'why' of the internals of the transformer.
    Thank you good sir.

  • @23232323rdurian
    @23232323rdurian 1 year ago +13

    you explain very well Luis. Thank you. It's HARD to explain complicated topics in a way people can easily understand. You do it very well.

  • @mushfikurahmaan
    @mushfikurahmaan 2 months ago +2

    Are you kidding me? Seriously? Lol, some YouTubers think that if they use fancy words they are good at teaching, but you're totally different, man. You've cleared up all of my confusion. Thanks man

  • @Bramsmelodic
    @Bramsmelodic 8 days ago

    This is one of the best explanations I have seen. Making complex things simple is an art, and Serrano is a master at that. I saw my first Serrano video, on RNNs, a few years back and was really impressed by his way of teaching. Keep it up Serrano! We need more people like you to help the students.

  • @__redacted__
    @__redacted__ 11 months ago +4

    I really like how you're using these concrete examples and combining them with visuals. These really help build an intuition on what's actually happening. It's definitely a lot easier for people to consume than struggling with reading academic papers, constantly looking things up, and feeling frustrated and unsure.
    Please keep creating content like this!

  • @channel8048
    @channel8048 1 year ago +4

    Just the Keys and Queries section is worth the watch! I have been scratching my head on this for an entire month!

  • @joelegger2570
    @joelegger2570 1 year ago +10

    These are the best videos I have seen so far for understanding how Transformers / LLMs work. Thank you.
    I really like maths, but it is good that you keep the math simple so that one doesn't lose the overview.
    You really have a talent to explain complex things in a simple way.
    Greets from Switzerland

  • @dekasthiti
    @dekasthiti 7 months ago

    This really is one of the best videos explaining the purpose of K, Q, V. The illustrations provide a window into the math behind the concepts.

  • @decryptifi2265
    @decryptifi2265 14 days ago

    I haven't seen a better video explaining Attention. Thanks a ton for your time and effort. God bless.

  • @WhatsAI
    @WhatsAI 1 year ago +9

    The best explanation I've seen so far! Really cool to see how much closer the field is getting to understanding those models instead of being so abstract thanks to people like you, Luis! :)

  • @Chill_Magma
    @Chill_Magma 1 year ago +1

    Honestly you are the best content creator for learning Machine learning and Deep learning in a visual and intuitive way

  • @MrMacaroonable
    @MrMacaroonable 11 months ago +1

    This is absolutely the best video that clearly illustrates and explains why we need V, K, Q in attention. Bravo!

  • @ChujiOlinze
    @ChujiOlinze 1 year ago +5

    Thanks for sharing your knowledge freely. I have been waiting patiently. You add a different perspective that we appreciate. Looking forward to the 3rd video. Thank you!

    • @SerranoAcademy
      @SerranoAcademy  1 year ago

      Thank you! So glad you like the videos!

  • @leilanifrost771
    @leilanifrost771 8 months ago

    Math is not my strong suit, but you made these mathematical concepts so clear with all the visual animations and your concise descriptions. Thank you so much for the hard work and making this content freely accessible to us!

  • @snehotoshbanerjee1938
    @snehotoshbanerjee1938 1 year ago +2

    One of the best videos on attention. Such a complex subject taught in a simple manner. Thank you!

  • @ganapathysubramaniam
    @ganapathysubramaniam 1 year ago +1

    Absolutely the best set of videos explaining the most discussed topic. Thank you!!

  • @aravind_selvam
    @aravind_selvam 1 year ago

    This video is, without a doubt, the best video on transformers and attention that I have ever seen.

  • @shuang7877
    @shuang7877 6 months ago

    A professor here - preparing for my course and trying to find an easier way to talk about these ideas. I learned a lot! Thank you!

  • @kranthikumar4397
    @kranthikumar4397 8 months ago

    This is one of the best videos on attention and Q, K, V so far. Thank you for the detailed explanation.

  • @shubha07m
    @shubha07m 2 months ago

    What a flawed YouTube algorithm, that it showed this gem only after so many overcomplicated videos on attention. Every student should understand attention from THIS VIDEO!

  • @rohitchan007
    @rohitchan007 1 year ago +5

    Please continue making videos. You're the best teacher on this planet.

  • @puwanatsangkhapreecha7847
    @puwanatsangkhapreecha7847 6 months ago

    Best video explaining what the query, key, and value matrices are! You saved my day.

  • @lengooi6125
    @lengooi6125 10 months ago +1

    Simply the best explanation on this subject. Crystal clear. Thank you

  • @redmond2582
    @redmond2582 11 months ago +1

    Amazing explanation of very difficult concepts. The best explanation I have found on the topic so far.

  • @johnschut164
    @johnschut164 11 months ago

    Your explanations are truly great! You have even understood that you sometimes have to ‘lie’ first to be able to explain things better. My sincere compliments! 👊

  • @guitarcrax127
    @guitarcrax127 1 year ago +3

    Amazing video. Pushed forward my understanding of attention by quite a few steps and helped me build an intuition for what's happening under the hood. Eagerly waiting for the next one

  • @danherman212nyc
    @danherman212nyc 8 months ago

    I studied linear algebra during the day on Coursera and watched YouTube videos at night on state-of-the-art machine learning. I’m amazed by how fast you learn with Luis. I’ve learned everything I was curious about. Thank you!

    • @SerranoAcademy
      @SerranoAcademy  8 months ago +1

      Thank you, it’s an honor to be part of your learning journey! :)

  • @alexrypun
    @alexrypun 1 year ago

    Finally! This is the best from the tons of videos/articles I saw/read.
    Thank you for your work!

  • @MrSikesben
    @MrSikesben 10 months ago

    This is truly the best video explaining each stage of a transformer, thanks man

  • @nikhilbelure
    @nikhilbelure 4 months ago

    This is the best video I have seen on the attention model. Even after reading through so many articles it was not intuitively clear, but now it is!! Thanks

  • @cachegrk
    @cachegrk 5 months ago

    The best videos ever on transformers on the internet. You are the best teacher!

  • @Ludwighaffen1
    @Ludwighaffen1 1 year ago +3

    Great video series! Thank you! That helped a ton 🙂
    One small remark: the concept of the "length" of a vector that you use here confused me. Here, I guess you take the point of view of a programmer: len(vector) outputs the number of dimensions of the vector. However, for a mathematician, the length of a vector is its norm, also called magnitude (square root of x^2 + y^2).
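
    (To make the two meanings of "length" concrete, here is a small NumPy sketch, purely illustrative: len() returns the number of dimensions, while the mathematical length is the norm.)

      import numpy as np

      v = np.array([3.0, 4.0])
      print(len(v))             # 2   -> number of dimensions (the programmer's "length")
      print(np.linalg.norm(v))  # 5.0 -> norm / magnitude, sqrt(3^2 + 4^2) (the mathematician's length)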

  • @awinashjha
    @awinashjha 1 year ago

    This probably is “the best video” on this topic

  • @mostinho7
    @mostinho7 11 months ago +1

    12:30 — the attention mechanism finds the similarity (scaled dot product or cosine similarity) between each word in the sentence and every other word
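
    (A small sketch of the similarity measures mentioned at that point, using made-up 4-dimensional embeddings for two words; the numbers are illustrative only.)

      import numpy as np

      apple = np.array([1.0, 0.5, 0.2, 0.0])   # hypothetical embedding of "apple"
      phone = np.array([0.8, 0.4, 0.1, 0.3])   # hypothetical embedding of "phone"

      dot = apple @ phone                                             # dot product
      scaled = dot / np.sqrt(apple.shape[0])                          # scaled dot product: divide by sqrt of the dimension
      cosine = dot / (np.linalg.norm(apple) * np.linalg.norm(phone))  # cosine similarity
      print(dot, scaled, cosine)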

  • @shannawallace7855
    @shannawallace7855 1 year ago +1

    I had to read this research paper for my Intro to AI class, and it's obviously written for people who already have a lot of background knowledge in this field, so being a newbie I was so lost lol. Thanks for breaking it down and making it easy to understand!

  • @rachadlakis1
    @rachadlakis1 5 months ago

    This is such a detailed and informative explanation of Transformer models! I appreciate the effort put into breaking down complex concepts with visuals and examples. Keep up the great work!

  • @iantanwx
    @iantanwx 5 months ago

    Most intuitive explanation for QKV, as someone with only an elementary understanding of linear algebra.

  • @gauravruhela007
    @gauravruhela007 6 months ago

    I really liked the way you showed the motivation behind the softmax function. I was blown away. Thanks a lot, Serrano!

  • @bzaruk
    @bzaruk 1 year ago

    MAN! I have no words! Your channel is priceless! thank you for everything!!!

  • @cooperwu38
    @cooperwu38 9 months ago +1

    Super clear ! Great video !!

  • @subterraindia5761
    @subterraindia5761 3 months ago +1

    Awesome. You explained everything very well. It made life easy for me.

  • @antraprakash2562
    @antraprakash2562 10 months ago

    This is one of the best videos I've come across for understanding embeddings and attention. Looking forward to more such explanations which can simplify such complex mechanisms in the AI world. Thanks for your efforts

  • @SeyyedMohammadLoghmanDastgheyb
    @SeyyedMohammadLoghmanDastgheyb 1 year ago +1

    This is the best video that I have seen about the concept of attention! (I have seen more than 10 videos but none of them was like this.) Thank you so much! I am waiting for the next videos that you have promised! You are doing a great job!

  • @andresfeliperiostamayo7307
    @andresfeliperiostamayo7307 6 months ago

    The best explanation I have seen on Transformers. Thank you!

  • @alnouralharin
    @alnouralharin 8 months ago

    One of the best explanations I have ever watched

  • @王禹博-s8i
    @王禹博-s8i 1 month ago

    Really, thanks for this video. I am a student in China, and none of my teachers taught me this clearly.

  • @laodrofotic7713
    @laodrofotic7713 5 months ago

    None of the videos I've seen on this subject actually explain where the hell the QKV values come from! It's amazing people jump into making videos while not understanding the concepts clearly! I guess YouTube must pay a lot of money! But this video does a good job of explaining most of the things; it never does tell us where the actual QKV values come from or how the embeddings turn into them, and it actually got things wrong in my opinion. The Q comes from embeddings that are multiplied by Wq, which is a weight and parameter in the model, but then the question is, where do Wq, Wk, Wv come from???
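
    (For readers with the same question: Wq, Wk and Wv are ordinary weight matrices of the model, initialized randomly and learned by backpropagation along with every other parameter. A minimal sketch of how embeddings are projected into queries, keys and values, with made-up dimensions:)

      import numpy as np

      rng = np.random.default_rng(42)
      d_model, d_k = 8, 4                      # illustrative sizes only
      X = rng.normal(size=(3, d_model))        # embeddings of a 3-token sentence

      # Wq, Wk, Wv start out random and are updated by gradient descent during training
      Wq = rng.normal(size=(d_model, d_k))
      Wk = rng.normal(size=(d_model, d_k))
      Wv = rng.normal(size=(d_model, d_k))

      Q, K, V = X @ Wq, X @ Wk, X @ Wv         # queries, keys and values for every token
      print(Q.shape, K.shape, V.shape)         # (3, 4) each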

  • @_ncduy_
    @_ncduy_ 8 months ago

    This is the best video for people trying to understand the basics of the transformer, thank you so much ^^

  • @chrisw4562
    @chrisw4562 9 months ago

    Thank you for the great tutorial. This is the clearest explanation I have found so far.

  • @danielmoore4311
    @danielmoore4311 1 year ago +1

    Excellent job! Please continue making videos that break down the math.

  • @ravindra1607
    @ravindra1607 1 month ago

    Simply the best video on Attention Is All You Need. I tried to understand it from different videos, blogs, and the paper itself, and couldn't get close to what I understood from this video. It clarified almost all the questions I had, except for a few which I think will be clarified in the next video. You have amazing teaching skills, kudos to you man

  • @brianburton6669
    @brianburton6669 2 months ago

    This is the best video I’ve seen on this topic. Well done sir

  • @celilylmaz4426
    @celilylmaz4426 11 months ago

    This video has the best explanations of QKV matrices and linear layers among the resources I've come across. I don't know why, but people seem uninterested in explaining what's really happening with each step we take, which results in loads of vague points. Still, the video could have been further improved with more concrete examples and numbers. Thank you.

  • @0xSingletOnly
    @0xSingletOnly 10 months ago

    I'm going to try to implement self-attention and multi-head attention myself, thanks so much for doing this guide!

  • @RoyBassTube
    @RoyBassTube 6 months ago

    Thanks!
    This is one of the best explanations of Q, K & V I've heard!

  • @syedmustahsan4888
    @syedmustahsan4888 2 months ago +1

    Thank You very much sir.
    I am so pleased by the way you teach. Alhumdulillah. Thank GOD.
    However, I was unable to grasp the key, query, values part.
    Thank You Very Much

  • @kylelau1329
    @kylelau1329 11 months ago

    I've been watching over 10 Transformer architecture tutorial videos, and this one is so far the most intuitive way to understand it! Really good work! Yeah, natural language processing is a hard topic; this tutorial kind of reveals the black box of the large language model.

  • @deveshnandan323
    @deveshnandan323 8 months ago

    Sir , You are a Blessing to New Learners like me , Thank You , Big Respect.❤

  • @MarkusEicher70
    @MarkusEicher70 1 year ago +2

    Hi Luis. Thank you for this video. I'm sure this is a very good way to explain this complex topic, but I just can't get it into my brain yet. I'm currently doing the Math for Machine Learning specialization on Coursera and brushing up my algebra and calculus skills, which are way too low. In any case, you got me involved in this, and now I will grind through it till I make it. I'm sure the pain will become less and the fog will lighten up. 😊

  • @Vercoquin64
    @Vercoquin64 9 days ago

    Very instructive and mind-opening on a difficult topic. Thanks

  • @YahyaMohand-r7f
    @YahyaMohand-r7f 1 year ago

    The best explanation I've ever seen of the attention mechanism, amazing

  • @vasanthakumarg4538
    @vasanthakumarg4538 11 months ago

    This is the best video I have seen explaining the attention mechanism. Keep up the good work!

  • @sreelakshminarayanan.m6609
    @sreelakshminarayanan.m6609 7 months ago

    Best video for getting a clear understanding of transformers

  • @devmum2008
    @devmum2008 8 months ago +1

    These are great videos with clarity on Keys, Queries, and Values. Thank you

  • @TheMotorJokers
    @TheMotorJokers 1 year ago +1

    Thank you, really good job on the visualizations! They make the process really understandable.

  • @lijunzhang2788
    @lijunzhang2788 1 year ago +1

    Great explanation. I was waiting for this after your first video on the attention mechanism! You are so talented at explaining things in easily understandable ways! Thank you for the effort put into this and keep up the great work!

  • @januaymagori4642
    @januaymagori4642 1 year ago +2

    Today I understood the attention mechanism better than ever before

  • @rollingstone1784
    @rollingstone1784 7 months ago +1

    @SerranoAcademy
    If you want to arrive at the same notation as in the mentioned paper, Q times K_transpose, then the orange is the query and the phone is the key here. Then you calculate q times Q times K_transpose times key_transpose (as mentioned in the paper).
    Remark: the paper uses "sequences", described as "row vectors". However, usually one uses column vectors. Using row vectors, the linear transformation is a left multiplication, a times A, and the dot product is written as a times b_transpose. Using column vectors, the linear transformation is A times a and the dot product is written as a_transpose times b. This, in my opinion, is the standard notation, e.g. writing Ax = b and not xA = b.
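
    (A tiny NumPy sketch of the two conventions described above: with column vectors the transformation is A times x, with the row-vector convention of the paper it is x times A-transpose, and both give the same numbers. Matrix and vector values are made up for illustration.)

      import numpy as np

      A = np.array([[1., 2.],
                    [3., 4.]])
      x = np.array([5., 6.])

      col = A @ x      # column-vector convention: A times x (the "Ax = b" form)
      row = x @ A.T    # row-vector convention of the paper: x times A-transpose
      print(np.allclose(col, row))   # True: same map, different notation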

  • @brandonheaton6197
    @brandonheaton6197 1 year ago

    Amazing explanation. I am a professional pedagogue and this is stellar work

  • @BrikeshKumar987
    @BrikeshKumar987 11 months ago

    Thank you so much!! I watched several videos and none could explain the concept so well

    • @SerranoAcademy
      @SerranoAcademy  11 months ago

      Thanks, I'm so glad you enjoyed it! Lemme know if you have suggestions for more topics to cover!

  • @BABA-oi2cl
    @BABA-oi2cl 11 months ago

    Thanks a lot for this. I always got terrified of the maths that might be there but the way you explained it all made it seem really easy ❤

  • @deniz517
    @deniz517 1 year ago

    The best video I have ever watched about this!

  • @PeterGodek2
    @PeterGodek2 1 year ago

    Best video so far on this topic

  • @knobbytrails577
    @knobbytrails577 11 months ago

    Best video on this topic so far!

  • @brainxyz
    @brainxyz 1 year ago +1

    Amazing explanation. Thanks a lot for your efforts.

  • @Wise_Man_on_YouTube
    @Wise_Man_on_YouTube 9 months ago

    "This step is called softmax" . 😮😮😮
    Today I understood why softmax is used. Such a beautiful function. And such a great way to demonstrate it.

  • @debnath22026
    @debnath22026 4 months ago

    Damn! There's no better video to understand Attention than this!!

  • @joshuaohara7704
    @joshuaohara7704 1 year ago

    Amazing video! Took my intuition to the next level.

  • @TaianeCoelhoRamos
    @TaianeCoelhoRamos 1 year ago

    Great explanation. I just really needed the third video. Hope you will post it soon.

  • @glacierxs6646
    @glacierxs6646 4 months ago

    OMG this is so well explained! Thank you so much for the tutorials!

  • @user-um4di5qm8p
    @user-um4di5qm8p 6 months ago

    By far the best explanation, thanks for sharing!

  • @joehannes23
    @joehannes23 11 months ago

    Great video, finally understood all the concepts in their context

  • @purusharthmalik9341
    @purusharthmalik9341 11 months ago +1

    I have a small question: Up until this point, we've seen sentences that consist of a word that belongs to either the "technology" class or the "fruit" class. How might we create an embedding that works for a sentence like "I saw an apple on a phone today"?

    • @SerranoAcademy
      @SerranoAcademy  11 months ago +1

      Great question! Yes, these things are always ambiguous. The window for attention is larger, so hopefully if there are more sentences around, there will be a hint that you're talking about apple as a fruit.

  • @alieskandarian5258
    @alieskandarian5258 10 months ago

    It was fascinating to me. I searched a lot for the math explained like this and didn't find it, so thanks for this.
    Please do more 😅 with more complex ones

  • @Priyanshuc2425
    @Priyanshuc2425 5 months ago

    Hey, I know this 👦. He is my maths teacher who doesn't only teach but makes us visualize why we learn the topic and how it will be useful in the real world ❤

  • @tankado_ndakota
    @tankado_ndakota 6 months ago

    Amazing video. That's what I was looking for. I need to know the mathematical background to understand what is happening behind the scenes. Thank you sir!

  • @rollingstone1784
    @rollingstone1784 7 months ago

    @SerranoAcademy
    At 13:23, you show a matrix-vector multiplication with a column vector (rows of the table times the column of the vector) by right multiplication. On the right side, maybe you could use, in addition to "is sent to", the icon "orange'" (orange prime). This would show the multiplication in a clearer way.
    Remark: you use a matrix-vector multiplication here (using a row of the matrix and the words as a column on the right of the matrix). If you use row vectors, then the word vector should be placed horizontally on the left of the matrix, and in the explanation a column of the matrix has to be used. The result is then a row vector again (maybe a bit hard to sketch).

  • @SaeclumSolvet
    @SaeclumSolvet 3 months ago

    Thank you @Serrano.Academy, very useful video. The only thing that is a bit misleading is around 24:50, where Q, K are implied to be multiplied with the word embeddings to produce the cosine distance, when in fact the embeddings are included in Q, K. I guess you are using Wq, Wk interchangeably with Q, K for simplicity.

  • @pavangupta6112
    @pavangupta6112 1 year ago

    Very well explained. Got a bit closer to understanding attention models.

  • @mayyutyagi
    @mayyutyagi 5 months ago

    Now whenever I watch a Serrano video, I first like it and then start watching it, because I know the video is going to be outstanding as always.

  • @nileshkikle8112
    @nileshkikle8112 10 months ago

    Dr. Luis - Excellent explanation of the math behind the Q, K, V magic from the AIAYN paper! In the video when you are explaining multi-head attention (around 30:20), if I am not mistaken you have not explained the W matrix in the formula at both the "head" level and the "concat" level. Please shed some light on this if you don't mind. My hunch is that to keep things simplified you might have decided to skip this W matrix that is part of the NN loss function when the decoders are in action.
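
    (For context on the question above: in the paper each head has its own projection matrices, the head outputs are concatenated, and the concatenation is multiplied by an output matrix W_O; all of these are learned parameters. A rough NumPy sketch with made-up shapes, not the video's exact notation:)

      import numpy as np

      def softmax(x):
          e = np.exp(x - x.max(axis=-1, keepdims=True))
          return e / e.sum(axis=-1, keepdims=True)

      rng = np.random.default_rng(0)
      n_tokens, d_model, n_heads = 3, 8, 2
      d_k = d_model // n_heads
      X = rng.normal(size=(n_tokens, d_model))          # token embeddings

      heads = []
      for _ in range(n_heads):
          Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))  # per-head projections ("head"-level W)
          Q, K, V = X @ Wq, X @ Wk, X @ Wv
          heads.append(softmax(Q @ K.T / np.sqrt(d_k)) @ V)                 # one attention head

      Wo = rng.normal(size=(n_heads * d_k, d_model))    # the "concat"-level W the comment asks about
      output = np.concatenate(heads, axis=-1) @ Wo      # shape (n_tokens, d_model)
      print(output.shape)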

  • @GIChow
    @GIChow 11 months ago +1

    I have been trying to conceptualise what QKV do from many videos (and the original AIAYN paper) but yours is the first to resonate with what I was thinking i.e. that
    1) Q contains the 'word mix' (more accurately token mix) for our input text
    2) K contains the real deep meaning (thing/action/feature, etc) behind every word so Q*K gives us our actual word mix (aka input text sentence) deep meaning. We need a consistent way of storing meaning for the computer since two different human language sentences or 'word mixes' could yield the same deep meaning ("a likes b", "b is liked by a") and vice versa i.e. two identical sentences could give a different deep meaning depending on what came before them. QK gives us that.
    3) V gives us the features of the word that would likely come next following our input.
    Does this 'intuition' sound roughly right?

    • @SerranoAcademy
      @SerranoAcademy  11 months ago +1

      Thank you! Yes I love your reasoning, that's the intuition behind them!

    • @GIChow
      @GIChow 11 months ago

      ​@SerranoAcademy Thank you for getting back and helping my understanding of this area 🙌 You may know Karpathy has a video on building a simple GPT and that helped me understand qkv better too. Wishing all the best with your channel and ventures, and compliments of the season! 🎄

  • @gunamrit
    @gunamrit 6 months ago

    @36:15
    Divide it by the square root of the number of dimensions of the vector, not the length of the vector. The length of the vector is sqrt(coeff_i^2 + coeff_j^2).
    Amazing video! Keep it up!
    Thanks

    • @SerranoAcademy
      @SerranoAcademy  6 months ago +1

      Thank you! Yes absolutely, I should have said dimensions instead of length.

    • @gunamrit
      @gunamrit 6 months ago

      @@SerranoAcademy The only thing between me and the paper was this video and it helped me clear the blanks I had after reading the paper.
      Thank You once again !