Transformer Embeddings - EXPLAINED!

  • Published on Dec 15, 2024

Comments • 53

  • @CodeEmporium
    @CodeEmporium  2 years ago +8

    Hope you all liked the video. Also, if you want to know more about stratascratch (the data science interview prep website), check out the link in the description. Cheers!

    • @yogeshsharma4308
      @yogeshsharma4308 2 years ago

      Saw a few videos of yours for my master's research paper. I have started to become a fan of your multi-pass approach and the way you stitch all the nuances together. Wow!!!

  • @timtanida7994
    @timtanida7994 2 years ago +12

    Great video! I think the reason concatenation works better than addition in the multi-head attention module is that after the concatenation, there is a (fully-connected) linear layer that transforms the concatenated vectors back to the embedding dimension.
    So in your example of 8 heads, we have a concatenated vector of dimension 16, which gets transformed back to a dimension of 2.
    And since the weights in the linear layer are learned, it can give more or less weight to specific heads. Meaning if one head creates better encodings than others, then it will get higher weights.
    Whereas if we add all encodings from all heads, all of them essentially get the same weight, and there is no discrimination.
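    A minimal PyTorch sketch of this point, using the video's toy dimensions (embedding size 2 with 8 heads, so the concatenation is 16-dimensional); the variable names are illustrative, not taken from the video:

    ```python
    import torch
    import torch.nn as nn

    embed_dim, num_heads = 2, 8

    # One output vector per head, each of embedding dimension 2, as in the video's example.
    head_outputs = [torch.randn(1, embed_dim) for _ in range(num_heads)]

    concatenated = torch.cat(head_outputs, dim=-1)     # shape (1, 16)
    w_o = nn.Linear(num_heads * embed_dim, embed_dim)  # learned output projection
    projected = w_o(concatenated)                      # back to shape (1, 2); learned weights can favor some heads

    # Adding instead of concatenating gives every head the same fixed weight of 1.
    summed = torch.stack(head_outputs).sum(dim=0)      # shape (1, 2), no learned per-head weighting
    ```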

  • @TURBOKNUL666
    @TURBOKNUL666 2 years ago +2

    This is an insanely good video. Thanks for clarifying bits and pieces. This high level perspective really helps.

    • @CodeEmporium
      @CodeEmporium  2 years ago

      Thank you for the comment and for watching :)

  • @kevon217
    @kevon217 1 year ago +1

    This is the best intuitive explanation I’ve encountered so far. Great channel!

    • @CodeEmporium
      @CodeEmporium  1 year ago

      Thanks a ton for the compliment :)

  • @andrewchang7194
    @andrewchang7194 2 years ago +3

    This was a really good video. I’ve gone through the Attention is all you need paper and read a lot about seq2seq models, but it’s really nice to solidify my understanding with a higher-level overview. Appreciate this.

    • @CodeEmporium
      @CodeEmporium  2 years ago

      A much appreciated comment. Thank you so much

  • @l_urent
    @l_urent 2 years ago +3

    As usual, nice video. Vectors are concatenated because they are split prior to entering the multi-headed attention. Hence the (parallelizable) "fork" we see on the diagram. Also, the concatenation does not augment the dimensionality compared to that of the input embeddings, i.e. it actually restores it, thus allowing the residual link's addition, which would otherwise be impossible "as is" due to non-conformable operands. All this is more obvious when taking everything from the matrix point of view.
    Regarding the addition of the positional and the meaning-space embeddings, the more convenient way to think about it is first to keep in mind that we are dealing with (mathematical space) coordinates. Would you add a longitude to a latitude? Same intuition here: positional vector components inject (additively, along each meaning axis across words) deterministic structure between the words of the sequence. This is supposed to uniquely identify the sequence for the downstream network to deal with. Put differently, the only things that matter are the (sine/cosine-based) differences we have between words. In practice (or after normalization), embedding components are (unit-variance) centered-on-0 values, just like sine and cosine outputs. Having a meaning-space component of, say, 50, is extremely unlikely or quickly cleaned up by "(add and) norm".
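    A small NumPy sketch of the sinusoidal positional encodings from "Attention Is All You Need" being added element-wise to the word embeddings; the helper name and the 10-token / 512-dimension example are illustrative assumptions, not taken from the video:

    ```python
    import numpy as np

    def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
        """Sinusoidal positional encodings from 'Attention Is All You Need'."""
        positions = np.arange(seq_len)[:, None]                  # (seq_len, 1)
        dims = np.arange(d_model)[None, :]                       # (1, d_model)
        angles = positions / np.power(10000.0, (2 * (dims // 2)) / d_model)
        pe = np.zeros((seq_len, d_model))
        pe[:, 0::2] = np.sin(angles[:, 0::2])                    # even dimensions use sine
        pe[:, 1::2] = np.cos(angles[:, 1::2])                    # odd dimensions use cosine
        return pe                                                # every value stays in [-1, 1]

    word_embeddings = np.random.randn(10, 512)                      # e.g. 10 tokens, d_model = 512
    encoder_input = word_embeddings + positional_encoding(10, 512)  # plain element-wise addition
    ```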

  • @mehular0ra
    @mehular0ra 2 years ago +4

    Great content! Please keep making these

    • @CodeEmporium
      @CodeEmporium  2 years ago

      Thanks a ton! And I shall :)

  • @viratmani7011
    @viratmani7011 2 years ago

    How can multi-headed attention differentiate between word vectors and position embeddings?

  • @radiradev3013
    @radiradev3013 2 years ago +6

    Why are you showing a word2vec embedding used as an input to the transformer? I thought it was common to use a tokenizer and let the model learn an embedding using the encoder part of the transformer.

    • @krranaware8396
      @krranaware8396 2 years ago +3

      We can use already-learned embeddings to convert the input and output tokens, as mentioned in the Transformer paper.
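      In PyTorch, both options look roughly like this (a minimal sketch; the vocabulary size, dimensions, and token ids are made up, and the random tensor merely stands in for real word2vec weights):

      ```python
      import torch
      import torch.nn as nn

      vocab_size, d_model = 10000, 512

      # Option 1: learn the embedding table end to end with the rest of the Transformer.
      learned_embedding = nn.Embedding(vocab_size, d_model)

      # Option 2: start from pretrained vectors (e.g. word2vec) and optionally freeze them.
      pretrained_vectors = torch.randn(vocab_size, d_model)  # stand-in for real pretrained weights
      frozen_embedding = nn.Embedding.from_pretrained(pretrained_vectors, freeze=True)

      token_ids = torch.tensor([[3, 17, 256]])  # output of a tokenizer for one sentence
      vectors = learned_embedding(token_ids)    # shape (1, 3, 512)
      ```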

  • @LiaAnggraini1
    @LiaAnggraini1 2 years ago +2

    Another awesome content. Thank you so much ❤️

  • @woosukbyun2455
    @woosukbyun2455 16 days ago

    I think there is a miscalculation at 8:26

  • @AbdurRahman-mv4vz
    @AbdurRahman-mv4vz 1 year ago

    Thank you for this helpful video.

  • @anan7s294
    @anan7s294 2 years ago

    I’ve been bombarded with data analyst courses, so I landed here, but I’m a bit confused about the Google courses: Data Analyst, Project Management, Digital Marketing. I have a master’s degree in geography and no work experience, but I have done O Level (Foundation of IT) and I can work with HTML. My interests are automobiles, tech gadgets, travelling, and learning new skills. What do you reckon would be the most suitable job to aim for from Google’s certificates and other certificates?

  • @mehular0ra
    @mehular0ra 2 years ago

    Can you make a video on neural dependency parsers as well?

  • @Data_scientist_t3rmi
    @Data_scientist_t3rmi 2 years ago

    Hello, could you tell me why you used 8 vectors at 11:00? Why not 9 or 10?
    Thank you in advance.

    • @CodeEmporium
      @CodeEmporium  1 year ago

      8 was just the official number of heads in the main paper. You can technically use as many as you wish. I assume they didn’t see noticeable differences even when increasing this number.
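      To illustrate that the head count is just a hyperparameter, here is a minimal sketch using PyTorch's nn.MultiheadAttention (not the code from the video); the only hard constraint is that the embedding dimension must be divisible by the number of heads:

      ```python
      import torch
      import torch.nn as nn

      # The paper used 8 heads with a 512-dimensional model, i.e. 64 dimensions per head.
      attn_8 = nn.MultiheadAttention(embed_dim=512, num_heads=8)
      attn_16 = nn.MultiheadAttention(embed_dim=512, num_heads=16)  # also valid: 32 dims per head
      # num_heads=9 would raise an error because 512 is not divisible by 9.

      x = torch.randn(10, 1, 512)          # (sequence length, batch, embed_dim)
      out, attn_weights = attn_8(x, x, x)  # self-attention; out has shape (10, 1, 512)
      ```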

  • @proxyme3628
    @proxyme3628 2 years ago

    Why does multi-head attention generate 8 word vectors for each word?

    • @CodeEmporium
      @CodeEmporium  1 year ago

      Because we assumed 8 attention heads in multi-head attention. This could have been any other number in theory. I mentioned 8 because that was how many heads were used in the main paper.

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 year ago

    Can I ask: is the material covered in this video something that a data scientist would learn and use? Or would it be ML engineers? Or just ML scientists? Just wondering where such an understanding of the Transformer architecture falls.

    • @CodeEmporium
      @CodeEmporium  1 year ago +1

      Good question. If you’re talking about just using these architectures, I would say machine learning engineers would use them. If you are talking about developing new ideas for constructing the architectures themselves, that might fall to researchers (they have many titles: AI researchers, deep learning researchers, ML researchers).
      The meaning of “data science” has changed over the years. These days, it comes across as a fuzzy title that could be a data analyst, data engineer, ML engineer, and everything in between. So do data scientists use Transformers? It depends. If the data scientist is more of a machine learning engineer, then yes. If they are data analysts or data engineers, I doubt they would need to use this beyond curiosity.

  • @sanoturman
    @sanoturman 2 years ago

    Very well done, thank you. If that's not a secret, what software did you use to design this beautiful video (including the music)?

    • @CodeEmporium
      @CodeEmporium  2 years ago +1

      Thanks for watching! I used Camtasia Studio for this

  • @rayeneaminaboukabouya4782
    @rayeneaminaboukabouya4782 2 years ago

    Thank you so much! So if we want to use this in the context of an image, do we just replace the words with patches?

    • @CodeEmporium
      @CodeEmporium  1 year ago

      That’s an interesting thought! I’m sure that with some modifications to the input, that could be possible!

  • @divyanshoze8962
    @divyanshoze8962 2 years ago

    That's some lovely content, thank you.

  • @waisyousofi9139
    @waisyousofi9139 2 years ago

    Thanks,
    Please make a video on:
    what happens in the decoder part.

    • @CodeEmporium
      @CodeEmporium  2 years ago +1

      I'll make a video on GPT at some point that may address this

    • @clairewang8370
      @clairewang8370 2 years ago

      @@CodeEmporium Looking forward to your GPT video !!!😍😍😍😍😍

  • @aneekdas3056
    @aneekdas3056 2 years ago

    Extremely helpful. Thanks a ton.

  • @felixakwerh5189
    @felixakwerh5189 2 years ago

    Thank you 🙏🏽

  • @0xmahdirostami
    @0xmahdirostami 2 years ago

    Thank you

  • @mehular0ra
    @mehular0ra 2 years ago

    btw which tool do you use to make these animations?

  • @adityarajora7219
    @adityarajora7219 2 years ago

    Can you make the same for BERT?

    • @CodeEmporium
      @CodeEmporium  2 years ago

      I have in the past. You want to click on the "i" at the top right-hand side of the video. This should have links.

  • @tolbaahmed
    @tolbaahmed 2 years ago

    💖

  • @ScriptureFirst
    @ScriptureFirst 2 years ago

    TIGBA! Please post a link to your script, even if you didn't stick to it strictly or even veered completely off. 🙏🏼

  • @jayseb
    @jayseb 1 year ago

    From the get-go, the French translation isn't right...

  • @WhatsAI
    @WhatsAI 2 years ago

    Great video as always; I especially loved the previous one about Transformers and this follow-up! I just wanted to say that I love your videos, and you are one of the reasons, along with Yannick, that I started to make some myself. I'd love to collaborate on a project with you someday if you'd be interested! Added you on Discord if you'd like to chat.

    • @CodeEmporium
      @CodeEmporium  2 years ago

      This makes my day. So glad there are more educators like yourself out there! Let's do this! We can chat collabs soon too :)

  • @asyabenali3779
    @asyabenali3779 2 years ago

    Can I contact you, please? I am interested in this field and have encountered some problems. Can you help me by email?