RoPE (Rotary positional embeddings) explained: The positional workhorse of modern LLMs

  • Published Jan 29, 2025

Comments • 34

  • @kristophersmith4067 · 9 months ago · +1

    Keep it up. I really enjoy your teaching style and visualizations!

  • @rajanghimire4022 · 1 year ago · +4

    Wow, this is by far the best video on this topic that I have come across. The information presented was clear, concise, and very informative. I learned a lot.

  • @elliotstein165 · 9 months ago · +6

    Breaking up the visualisation of the 4D vector into 2x2D vectors is lovely - looks like a clock. A very intuitive notion for encoding position (in time)

  • @1littlecoder · 1 year ago · +4

    This is a great explanation and I'd quote it in my upcoming updates video!

  • @飛鴻-q1c · 1 year ago · +1

    Great work! The best explanation I have ever seen for RoPE.

  • @anujlahoty8022 · 1 year ago

    I loved the analogies and the concept is explained very beautifully!

  • @adityagulati1540 · 1 year ago

    This video is highly underrated! :D

  • @jjwangg · 28 days ago

    brilliant illustrations!

  • @egeres14 · 1 year ago

    This was incredibly well explained. Thank you for the effort of editing and publishing this video; it's been incredibly helpful.

  • @tejag8149 · 1 year ago

    Great explanation. Looking forward to more such videos. Would appreciate some videos around Computer Vision and Diffusion too!

  • @sujantkumarkv5498 · 8 months ago

    incredibly explained sensei.

  • @octour · 1 year ago

    Great video, thank you! It is really the only source, apart from the paper, that explains it in a very approachable manner. And manim also helps a lot ;)

    • @octour · 1 year ago

      @deeplearninghero You mention that the positional embedding is applied to the k and q vectors. Is that new with RoPE? Because I thought that in the transformer architecture the positional embedding is added to the token embedding (which we get from the tokenizer), and this summed vector goes to the encoder/decoder, where it is split into k, q, and v, and that inside the encoder/decoder no positional encoding is applied.

    • @deeplearninghero · 1 year ago

      Yes, it's a major change from sinusoidal embeddings to RoPE. Per RoPE's motivation, you need the positional distinction between q and k, so applying it to them directly is ideal. :)
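To make the exchange above concrete, here is a minimal sketch of how RoPE rotates q and k inside attention, in contrast to sinusoidal embeddings that are added to the token embeddings once before the first layer. It assumes the interleaved-pairs layout and the base 10000 from the RoPE paper; `rope_rotate` is an illustrative helper, not code from the video.

```python
import numpy as np

def rope_rotate(x, pos, base=10000.0):
    """Rotate vector x by position-dependent angles, pairing consecutive
    dimensions (x[0], x[1]), (x[2], x[3]), ... as in the RoPE paper."""
    d = x.shape[0]
    assert d % 2 == 0
    i = np.arange(d // 2)                  # pair index, 0-based
    theta = base ** (-2.0 * i / d)         # one frequency per pair
    ang = pos * theta
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = x1 * cos - x2 * sin        # standard 2D rotation per pair
    out[1::2] = x1 * sin + x2 * cos
    return out

# RoPE is applied to q and k inside attention, not added to token embeddings:
d = 8
q, k = np.random.randn(d), np.random.randn(d)
score = rope_rotate(q, pos=3) @ rope_rotate(k, pos=7)
# The attention score depends on q, k, and only the relative offset 7 - 3.
```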

  • @mattoh1468 · 1 year ago

    A question about 9:28: when computing theta, if d=2, why does theta=1? Or did you mean that there is only one value of theta?

    • @kingalpha2006 · 2 months ago

      Actually, it means there is only one value of theta, since d=2.
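For reference, a small sketch of the point above, assuming the paper's convention theta_i = 10000^(-2(i-1)/d) for i = 1, ..., d/2: with d = 2 there is exactly one theta, and its value happens to be 1.

```python
# RoPE frequencies: theta_i = 10000^(-2(i-1)/d) for i = 1, ..., d/2
def thetas(d, base=10000.0):
    return [base ** (-2 * (i - 1) / d) for i in range(1, d // 2 + 1)]

print(thetas(2))   # [1.0]                    -> a single theta, and it equals 1
print(thetas(8))   # [1.0, 0.1, 0.01, 0.001]  -> d/2 = 4 thetas
```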

  • @nmxnunezz8214 · 4 months ago

    Amazing video!

  • @felipemello1151 · 8 months ago

    Amazing video. Thank you

  • @mkamp · 1 year ago · +3

    Great video.
    There is still a question left for me, though. With traditional PE the positional values are very small and are added to the original input embeddings, so it is easy to see why the embeddings are still recognizable.
    But with RoPE, in your nice animations, the input embeddings are changed dramatically. How does the network learn that a dog embedding rotated by 180 degrees is still a dog?

    • @yourmomsboyfriend3337 · 1 year ago · +1

      Hi, I'm speaking as a bit of a newbie to this concept, but I was also curious about your question.
      From what I found, in the transformer's architecture the meaning of a word, like "dog", within a given context is influenced by both its semantic embedding and its position in the sequence. The model is forced to learn these two pieces of information in conjunction, meaning it will see "dog" in many different positions and many different contexts.
      The other big thing is that transformers are not isolated word processors; the model processes the entire sequence when generating text, so even though the vector for "dog" is rotated, it's interpreted in the context of the surrounding words and their respective positional encodings. This is combined with the benefits of high-dimensionality. As you add more and more dimensions, it becomes increasingly less likely that the word "dog" could get rotated at any position to match any other word.
      Since the model processes sequences in parallel, it almost always will have contextual information such as "walk" or "leash", etc. that teaches the model the original semantic meaning during training regardless of how it is rotated.

    • @qinranqu · 1 year ago

      Very intuitive @@yourmomsboyfriend3337

    • @mkamp · 1 year ago

      @@yourmomsboyfriend3337 Hey, it took me a while to answer as I had to mull it over, still ongoing. Thanks for your answer. I suggest we change our PoV a little: instead of seeing the embedding as a whole, we could look at the individual dimensions. Each dimension is rotated differently, so it would only be a few dimensions at a time, and even fewer important ones, that would be totally distorted by a 180 degree rotation. So most of the dimensions would still be recognizable? I am still not really sold, though.
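One way to see the per-dimension argument in the reply above is to print how far each 2D pair is actually rotated at a given position, assuming the usual theta_i = 10000^(-2(i-1)/d) schedule (illustrative numbers, not taken from the video):

```python
import numpy as np

d, base, pos = 16, 10000.0, 10            # a token at position 10
i = np.arange(d // 2)                      # pair index, 0-based
theta = base ** (-2.0 * i / d)             # per-pair rotation frequency
deg = np.degrees(pos * theta) % 360        # angle applied to each pair

for pair, angle in enumerate(deg):
    print(f"pair {pair}: rotated {angle:7.2f} degrees")
# Only the first few pairs rotate substantially; the slow, high-index pairs
# barely move, so much of the embedding stays close to its original direction.
```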

  • @sherinmuckatira8333 · 1 year ago

    Nice explanation!

  • @1PercentPure · 1 year ago

    amazing, thank you so much!

  • @yannickpezeu3419 · 1 year ago

    Thanks!

  • @jordanfarr3157 · 1 year ago

    Spectacular! Thank you so much for making this!
    Can I ask a very naive question as someone who is fairly new in this field?
    Are embeddings from LLMs, like the ones obtained through the OpenAI API, rotational? Do they take on the shape that you describe in the video or are they more positional?
    I currently use a vector database to compare language embeddings from Steam reviews, and that database utilizes a simple L2 Euclidean distance metric when making its comparisons.
    Are these concepts related?

    • @deeplearninghero · 1 year ago · +1

      Hi, thanks for the question. It's a good question, not a naive question :)
      It's a bit difficult to know exactly what OpenAI is doing due to their secretive way of operating, but I can make an educated guess. There are three popular models: GPT3/3.5/4.
      GPT3 (arxiv.org/abs/2005.14165) - This came before the rotary embeddings paper, so I'm assuming this uses standard sinusoidal embeddings.
      GPT3.5 - Don't think there's a paper for this, so again I'm not sure if they use RoPE. But there's a good chance they are, as the timing is right.
      GPT4 (arxiv.org/pdf/2303.08774.pdf) - They do have a reference for RoPE in their paper, so quite possibly GPT4 is using RoPE
      But keep in mind these are speculative guesses, and unfortunately there's no way to tell which type of positional embedding was used just by looking at the embeddings themselves.

    • @jordanfarr3157 · 1 year ago

      @@deeplearninghero that is absolutely fascinating! Thank you for such an insightful response.
      The two types of embeddings cannot be distinguished from one another even when given a set of similar text inputs? I'm not saying I'm savvy enough to figure out how to accomplish something like that, but I suppose that surprises me.
      If the visual analogy of the clock holds, would similar inputs not read as somewhat similar "times"? I know that's underselling the complexity of working with high-dimensional embeddings.
      I guess the rotational nature of capturing information in this way has sparked my imagination as someone with a background in genetics.

    • @deeplearninghero · 1 year ago · +1

      > If the visual analogy of the clock holds, would similar inputs not read as somewhat similar "times"?
      That's an interesting analogy, thanks for sharing. But IMO, numerically it wouldn't hold. I guess you're suggesting that we can perhaps "count" how many times a vector passes through a certain point? The problem would be two-fold. 1) You wouldn't get exact overlaps, because these vectors are high dimensional and they may not be rotating in angles that divide evenly into the 360 degree circle. 2) Sinusoidal embeddings do something similar, so if you do this counting on an approximate basis, you'd still see sinusoidal embeddings passing through that point. So it would be difficult. And finally, being high dimensional, it's very hard to reason about (unfortunately).

  • @kevinxu9562 · 1 year ago · +2

    Coming from Yacine

  • @ledescendantdeuler6927 · 1 year ago · +4

    plugged by kache

    • @HeyFaheem · 1 year ago

      Kind of dingboard?

  • @susdoge3767 · 3 months ago

    everything is messed up after: RoPE 2D