The Narrated Transformer Language Model

  • Published Nov 26, 2024

Comments • 229

  • @parthchokhra948
    @parthchokhra948 4 years ago +248

    Your Illustrated Transformer blog was my intro to Deep Learning with NLP. Thanks for the amazing contributions to the community.

    • @jc_777
      @jc_777 3 years ago +6

      Yeah it is being referenced in my DL class too. Truly great content for new learners!

    • @ahmeterdonmez9195
      @ahmeterdonmez9195 several months ago

      @@jc_777 Gemini also refers to Mr Alammar's blog post👍

  • @andresjvazquez
    @andresjvazquez 2 years ago +36

    Dear Teacher Alammar, thanks to this video I was able to get accepted into a BYU lab as an external researcher (even though I didn't finish college) and have been invited by my professor to participate with the lab in CASP15. You really changed the course of my life by demystifying such complex topics for non-traditional learners like me. I'm eternally in your debt.

  • @ans1975
    @ans1975 4 years ago +28

    The Illustrated Transformer blog is a masterpiece!

  • @Roshan-xd5tl
    @Roshan-xd5tl 3 years ago +18

    Your ability to explain and break down complex topics into simpler and more intuitive sections is legendary. Thank you for your contribution!

  • @drtariqahmadphd3372
    @drtariqahmadphd3372 3 years ago +1

    Never been more excited about a YouTuber's channel than when I saw this guy had one.

  • @bighit7596
    @bighit7596 3 years ago +2

    You have a gift for explaining complex material... many other technical talks assume the audience is very knowledgeable and is attending the session just for networking.

  • @diogo.magalhaes
    @diogo.magalhaes 4 years ago +10

    Jay, as a PhD student, I'm a fan of your ability to explain complex topics in a very simple, illustrated and didactic way! I always recommend your 'Illustrated' posts to my colleagues. Thanks again for this great video, keep up the good work!

  • @ayush612
    @ayush612 3 years ago +4

    I remember seeing your Transformer blog, Jay... it was legendary!! It was referred to by other YouTubers as well... and thanks a lot for the wonderful explanation too!

  • @nisalbandara
    @nisalbandara 3 years ago

    I'm doing Twitter sentiment analysis and couldn't wrap my head around BERT, and then I came across this video. Perfectly explained. Thanks a lot.

  • @maruthiprasad8184
    @maruthiprasad8184 11 months ago

    Amazing explanation. My search to understand Transformers ended here; you did a wonderful job. Thank you so much for the simplest explanation I have ever seen.

  • @kalinda619
    @kalinda619 4 years ago +2

    A phenomenal extension of your blog post. Commenting for that bump in the recommendation algorithm!

    • @arp_ai
      @arp_ai  4 years ago

      Thank you! Much appreciated!

  • @kazimafzal
    @kazimafzal 1 year ago +1

    You sir are an amazing teacher! I'm absolutely flabbergasted by how well you've explained it; to think it's all mathematics at the end of the day! Thank you for taking the time to put together such a concise yet complete guide to transformers!

  • @nileshkikle8112
    @nileshkikle8112 10 months ago

    Outstanding job demystifying the inner working details of the Transformer model architecture! All the illustrations and animations of how inference works are awesome. Thank you for taking all the time and sharing your understanding with all of us. Kudos! 👍

  • @jacakopl
    @jacakopl 3 years ago +1

    This is the best video I have seen in this domain by far. You strike a perfect balance in the level of understanding you assume of the audience :)

    • @arp_ai
      @arp_ai  3 years ago

      Awesome! Glad you found it useful!

  • @goelnikhils
    @goelnikhils 1 year ago

    I haven't seen such a clear explanation of Transformers and decoder LM models anywhere. Amazing work, Jay.

  • @quietkael7349
    @quietkael7349 4 years ago +9

    Thank you so much for all the tireless work you do for us visual learners out there! I’m looking forward to videos where you get into your excellent visualizations of the underlying matrix operations. Your visual abstractions both at the flow chart level and matrix/vector level have really shaped my mental model for what I think about when I’m engineering models. I’m so grateful and so excited to see what you come out with next (this library you hint at looks wonderful!)

    • @arp_ai
      @arp_ai  4 years ago +1

      Thanks Jack!

  • @curiouspie1264
    @curiouspie1264 1 year ago

    One of the most comprehensive video and blog overviews of Transformers I've seen. Thank you. 🙏

  • @tachyon7777
    @tachyon7777 2 years ago +7

    It would be nice to have a step-by-step walkthrough of the training process, and why each of those steps makes sense intuitively.

  • @jesuslopez3306
    @jesuslopez3306 1 year ago

    It is definitely easier to understand in a vertical layout. Thanks for everything!

  • @Halterofilic
    @Halterofilic 7 months ago

    2024, and still a great reference on Transformers. A million thanks for the amazing work!

  • @raminbakhtiyari5429
    @raminbakhtiyari5429 3 years ago

    I don't know how to say thank you; I can only say please continue uploading your amazing videos. I live in a restricted country, and these videos are my only hope for learning like other people. Yours sincerely,
    Ramin Bakhtiyari.

  • @JimBob-lq1db
    @JimBob-lq1db 10 months ago

    Thank you for this great explanation. Visualize, visualize, visualize: the best way to understand how it works.

  • @IyadKhuder
    @IyadKhuder 2 years ago

    I ended up here to familiarize myself with NLP transformers. Your video was the optimal choice for me, as it explains the concept in an understandable, scientific manner. Thanks.

  • @OslecVardeven
    @OslecVardeven 7 months ago

    Jay, I was recently in an AI course, but you presented NLP very well, in a didactic way... I learned a lot from you.
    Thank you. Keep being this wonderful guy.

  • @ultraviolenc3
    @ultraviolenc3 3 years ago +1

    I've just read your "The Illustrated Transformer" article and I wanted to say that you made very smart and simple visual representations. It seems you put a lot of thought into it.

  • @tiborsaas
    @tiborsaas 11 months ago

    This video really aged well. It came out just after GPT-3 and before ChatGPT. I love how it gives massive insight into how current generative AI works behind the scenes (obviously in a simplified way).

  • @NarkeEmpire
    @NarkeEmpire 1 year ago

    You are a great teacher!!! If you check the EQ settings and lower the music at the beginning, the video is perfect!!! Thanks a lot for sharing your knowledge in this very understandable way.

  • @stephenngumbikiilu3988
    @stephenngumbikiilu3988 2 years ago

    Your blog was referred to me by my lecturer, Julia Kreutzer of Google Translate; it's just an amazing piece of work. It has really helped my understanding of these concepts. Thanks.

  • @ishandindorkar2846
    @ishandindorkar2846 11 months ago

    Jay, many thanks for your work. These videos help me a lot to understand key concepts in the NLP domain through visualization.

  • @gergerger53
    @gergerger53 4 years ago +1

    Amazing video. Have to admit that every time I heard the wrong pronunciation of "Shawshank" it did feel a bit like nails on a blackboard but easily forgivable. Jay, your resources and videos are phenomenal :) Thank you for putting in the work to help us all out.

    • @arp_ai
      @arp_ai  4 years ago +1

      Haha! Wrong how? Am I overpronouncing the shaWshank? Thank you!

    • @gergerger53
      @gergerger53 4 years ago +1

      @@arp_ai The "Shaw" is pronounced like "sure/shore" but in the video you use the vowel that's in "how/cow". Anyway, I only meant this as a tiny point :) Take home message is that you are an incredible ML / NLP teacher!!

  • @yudiguzman8926
    @yudiguzman8926 3 years ago

    I really appreciate your explanation of this topic. Once again, I find that DL is my new passion. Thanks a lot.

  • @studmatze958
    @studmatze958 1 year ago

    Thank you so much for your work on attention and transformers. Your posts and videos are the best I have encountered so far in terms of visualization and explanation. And you did it way better than my professor. Again, thank you :)

  • @1Kapachow1
    @1Kapachow1 3 years ago

    Really enjoyed your blog post and video, super clear - thank you very much for this amazing resource :)

  • @a.e.5054
    @a.e.5054 4 years ago

    The best explanation of the Transformer and GPT models!!

  • @nmstoker
    @nmstoker 4 years ago +2

    Watching it now, thanks so much! It's really helpful to go through these kinds of things with clear examples and explanations.
    My only preference would've been to reduce the volume of the background music in the intro. So many podcasts do this and it's an annoying trend!

    • @arp_ai
      @arp_ai  4 years ago +1

      Thanks Neil! Noted on the audio!

  • @KlimovArtem1
    @KlimovArtem1 3 years ago +2

    27:56 - this explains a lot, thank you so much!

  • @sudzam
    @sudzam 1 year ago

    Wow! One of THE best explanations of Transformers. Thanks @Jay!!

  • @exxzxxe
    @exxzxxe 11 months ago

    Maybe the best video on this subject.

  • @rsilveira79
    @rsilveira79 4 years ago +4

    Nice collection of albums man! Miles Davis, Radiohead, John Coltrane, very classy! 👏👏👏

    • @kumarvikas_134
      @kumarvikas_134 4 years ago +1

      Spot-on observation, kind of ironic to be listening to OK Computer and teaching about artificial intelligence :D

  • 3 years ago +10

    Just a personal comment on the format of the videos: I personally find the constant change of scene (as in "The architecture of the transformer" section), where the camera keeps cutting between you and the computer screen and back, extremely annoying.
    The content of the video itself was informative.

  • @jpmarinhomartins
    @jpmarinhomartins 3 years ago

    Dude, I freakin' love your blog, keep up the good work! Thanks for everything!

  • @sharkeyryan
    @sharkeyryan 2 years ago

    Thanks for creating this content. Your explanation is quite easy to follow, especially for someone like me who is just beginning to explore these areas of AI/ML.

  • @abugigi
    @abugigi 3 months ago

    Great video, and perhaps just as important, great selection of albums

  • @damonandrews1887
    @damonandrews1887 3 years ago

    I found this very helpful visual explainer, thanks so much for your time, and thanks for chopping it up into sections for easy revision 🤓!

  • @utsavshukla7516
    @utsavshukla7516 3 years ago

    great explanation! also love all the pop culture references in your room :p

  • @Udayanverma
    @Udayanverma 1 year ago

    Loved it, thanks. Got some new neurons created in my head by this video.

  • @niundisponible
    @niundisponible 2 years ago

    I see a Miles Davis vinyl, Kind of Blue. Awesome album, and thanks for the video!

  • @AdityPai
    @AdityPai 4 years ago +1

    Thank you for writing the blog. It has helped me.

  • @javierechevarria1548
    @javierechevarria1548 3 years ago

    You are really good (excellent) at explaining a complex topic in a simple way. Congratulations!!!!

  • @romulodrumond3526
    @romulodrumond3526 3 years ago

    One of the best videos on the subject.

  • @zongmianli9072
    @zongmianli9072 1 year ago

    Thanks for the very clear and concise explanation, Jay!

  • @HelenTueni
    @HelenTueni 2 years ago

    Amazing video. Thank you very much for making this topic accessible.

  • @tehseenzia3135
    @tehseenzia3135 3 years ago

    Amazing illustration. Keep going Jay.

  • @sachinr3823
    @sachinr3823 3 years ago

    Omg, thanks a lot for these amazing videos. Your lectures and blogs are so easy to understand.

    • @sachinr3823
      @sachinr3823 3 years ago

      Small request, please pin the BGM you used in the video

  • @itall9025
    @itall9025 4 years ago

    Great explanation! Please keep doing this format.

  • @ygorgallina2691
    @ygorgallina2691 2 years ago

    Thank you so much for your work! The illustrations really help to clearly understand these models!!

  • @o_felipe_reis
    @o_felipe_reis 4 years ago +1

    Great video! Best regards from Brazil!

  • @TusharKale9
    @TusharKale9 3 years ago

    A masterpiece explanation of NLP in a real-life scenario. Thank you.

  • @omarsultan827
    @omarsultan827 2 years ago

    Thank you for this awesome introduction!

  • @Opinionman2
    @Opinionman2 2 years ago

    Awesome stuff. Your blog really helped clarify my deep learning class.

  • @tusharkhustule3316
    @tusharkhustule3316 1 year ago

    1 minute into the video and I already subscribed.

  • @maxbeber
    @maxbeber 4 years ago

    Thank you so much for the clear and concise explanation. Keep up the great work.

  • @pypypy4228
    @pypypy4228 8 months ago

    A huge thank you for this explanation!

  • @thecutestcat897
    @thecutestcat897 1 year ago

    Thanks, your blog is so clear!

  • @jemmaj2919
    @jemmaj2919 2 months ago

    This is amazing. One thing I didn't understand is the matrix: how it is generated and used in processing to return the probability (how "the" turns into a big array of inputs).
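    A minimal sketch of the matrices in question, with hypothetical values: token ids index into a learned embedding matrix on the way in, and the final hidden state is projected back to vocabulary size to get probabilities (GPT-2 happens to reuse the same matrix for both):

    ```python
    import torch
    import torch.nn.functional as F

    vocab_size, d_model = 50257, 768              # GPT-2 small's sizes
    embedding = torch.randn(vocab_size, d_model)  # learned token-embedding matrix

    token_id = 464                                # hypothetical id for "the"
    x = embedding[token_id]                       # "the" becomes a 768-dim vector

    # ...x would flow through the stack of transformer blocks here...
    hidden = x                                    # stand-in for the final hidden state

    logits = hidden @ embedding.T                 # project back to vocabulary size
    probs = F.softmax(logits, dim=-1)             # one probability per vocabulary token
    ```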

  • @parmarsuraj99
    @parmarsuraj99 4 years ago +5

    ❤️ That library!!!!

    • @arp_ai
      @arp_ai  4 years ago +6

      It's been my entire focus the last few months. Stay tuned!

  • @KlimovArtem1
    @KlimovArtem1 3 years ago +5

    14:15 - so the self-attention layer is actually the thing that's trying to understand the meaning of the whole sequence? How does it work, and how can it be trained? How long a sequence can it analyze?
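    For anyone else wondering, here is a minimal sketch of single-head, unmasked self-attention with toy dimensions (an illustration, not any particular implementation):

    ```python
    import torch
    import torch.nn.functional as F

    seq_len, d_model = 4, 8                # toy sizes for illustration
    x = torch.randn(seq_len, d_model)      # one embedding per token

    # Three learned projections; in a real model these are trained nn.Linear layers.
    Wq, Wk, Wv = (torch.randn(d_model, d_model) for _ in range(3))
    Q, K, V = x @ Wq, x @ Wk, x @ Wv

    scores = Q @ K.T / d_model ** 0.5      # every token scores every other token
    weights = F.softmax(scores, dim=-1)    # attention distribution per token
    output = weights @ V                   # each output mixes the whole sequence
    ```

    The projections are trained end to end with the rest of the network on the next-word objective, and the sequence length is bounded by the fixed context window the model was trained with (1024 tokens for GPT-2).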

  • @armingh9283
    @armingh9283 3 years ago

    Thanks for the explanation. Good music taste in the background, by the way 👍

    • @arp_ai
      @arp_ai  3 years ago

      Thank you!

  • @jackdavidweber
    @jackdavidweber 3 years ago

    This is really great! Highly recommend!

  • @tshepisosoetsane4857
    @tshepisosoetsane4857 1 year ago

    Amazing work indeed. Thanks for simplifying things so everyone can understand this AI. Great work!

  • @hasanb2312
    @hasanb2312 3 years ago

    Great video Jay, thank you so much!

  • @tsadigov1
    @tsadigov1 1 year ago

    I am trying to understand the workings of the transformer, and you explain it in a very accessible way. One small thing: I wish the video had fewer transitions between the two cameras.

  • @amirhosseinfereidooni1798
    @amirhosseinfereidooni1798 3 years ago

    Thanks for the great explanation. MLP (at 11:35) stands for multilayer perceptron :)

  • @RupertBruce
    @RupertBruce 1 year ago +1

    "We have ways to calculate the error..." - there are a lot of 'ways'; the chosen way would be interesting!

  • @junlinguo77
    @junlinguo77 2 years ago

    I like the way you are teaching!!!

  • @Alex-oo5rt
    @Alex-oo5rt 1 year ago

    6:13 note that GPT-2 and GPT-3 are actually decoder-only models. The encoder-decoder architecture is a common framework in sequence-to-sequence NLP tasks, but the GPT family keeps only the decoder stack and generates text autoregressively.

  • @josephsueke
    @josephsueke 9 months ago

    Really clear. Amazing job!

  • @FabioAlmeida-k6t
    @FabioAlmeida-k6t 6 months ago

    Excellent explanation, Thanks!

  • @rupakgoyal1611
    @rupakgoyal1611 3 years ago

    Loved the music in the background...

  • @spacewaves94
    @spacewaves94 3 years ago

    Haha the chicken was a man, thanks for all the work breaking this down!

  • @yuchenyang4394
    @yuchenyang4394 4 years ago

    Great content! can't wait for more.

    • @arp_ai
      @arp_ai  4 years ago

      Thank you Yuchen!

  • @yoonyamm
    @yoonyamm 1 year ago

    Thank you for sharing wonderful insight!

  • @kl6544
    @kl6544 3 years ago +2

    "A robot must troll"
    Don't know about you, but the model sounds trained to me.

  • @ankitmaheshwari7310
    @ankitmaheshwari7310 2 years ago +1

    Helpful... though you forgot to import torch in your GitHub code.

  • @WanderNatureDaily
    @WanderNatureDaily 3 years ago

    absolutely amazing video

  • @hailongle
    @hailongle 3 years ago

    Fantastic teacher. Thanks Jay!

  • @hunorszegi4007
    @hunorszegi4007 1 year ago

    Thank you for your videos and blog posts. These were my inspiration to create a Java GPT-2 implementation for learning purposes. I can't use a link here, but as huplay I uploaded it to the biggest hosting site, and it is called gpt2-demo.

  • @mertcokelek4595
    @mertcokelek4595 4 years ago +3

    Thank you for the great explanation.
    I am new to this topic, and I wonder why the word "shawshank" is tokenized into 3 pieces; the "sh" and "ank" are meaningless. Is that the result of a learned model, or is the tokenization hand-crafted?
    Thanks in advance.

    • @arp_ai
      @arp_ai  4 years ago +4

      That is the result of training the tokenizer using BPE en.wikipedia.org/wiki/Byte_pair_encoding
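      A sketch of those learned merges in action, using the pretrained GPT-2 tokenizer from the Hugging Face transformers library (the exact pieces come from the learned merge table):

      ```python
      from transformers import GPT2Tokenizer

      tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
      print(tokenizer.tokenize("The Shawshank Redemption"))
      # Frequent words survive as single tokens; rare words like "Shawshank"
      # are split into whatever subword pieces BPE learned from the corpus.
      ```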

  • @hongkyulee9724
    @hongkyulee9724 3 years ago +1

    You are my hero. You give me a reason for living :D

  • @MsFearco
    @MsFearco 2 years ago

    I just found this now. It's super. Thanks!

  • @evertonlimaaleixo1084
    @evertonlimaaleixo1084 3 years ago

    Amazing!
    Thank you for sharing!

  • @NilaMasrourisaadat
    @NilaMasrourisaadat 1 year ago

    Amazinnnng illustration of language model transformers

  • @peterkahenya
    @peterkahenya 1 year ago

    Wow! 🎉 Awesome intro.

  • @haswanthaekula7656
    @haswanthaekula7656 4 years ago +3

    This is a noob question; I was just curious while watching the video. How is it unsupervised pre-training when you are actually providing the correct output (label) at the end?

    • @arp_ai
      @arp_ai  4 years ago +3

      Great question! It's unsupervised (or, now more commonly, "self-supervised") because we didn't need a labeled dataset to train it; just running text that we can use to generate examples (see the small sketch at the end of this thread).

    • @haswanthaekula7656
      @haswanthaekula7656 4 years ago

      @@arp_ai Thank you so much for such great detailed videos. :)

    • @jleape1989
      @jleape1989 3 years ago +1

      @@arp_ai I had the same question. Self-supervised seems like a better description. Great video!
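      In other words, the labels are manufactured from the text itself by shifting it one position; a minimal sketch with hypothetical token ids:

      ```python
      # Self-supervised language modeling: running text supplies its own labels.
      tokens = [464, 3290, 318, 845, 3621]   # hypothetical token ids for a sentence

      inputs  = tokens[:-1]   # what the model sees
      targets = tokens[1:]    # what it must predict: the same sequence shifted by one
      # No human annotation is needed; each position's "label" is simply the next token.
      ```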

  • @akshikaakalanka
    @akshikaakalanka 1 year ago

    Thank you very much! This is awesome and easy to understand.

  • @RK-fr4qf
    @RK-fr4qf 1 year ago

    Impressive. Thank you.

  • @vslobody
    @vslobody 4 years ago +1

    Jay - I think this question was asked somewhere else, but I cannot find a good answer.
    From the article:
    > In the decoder, the self-attention layer is only allowed to attend to earlier positions in the output sequence. This is done by masking future positions (setting them to -inf) before the softmax step in the self-attention calculation.
    In other words, the output logits (i.e. word translations) of the decoder are fed back into that first position, with future words at each time-step masked.
    I'm not quite sure how it all flows, because with several rows representing words all going through at once (a matrix), it seems like you would need to run the whole thing forward several times per sentence, each time moving the decoded focal point to the next output word...
    Where is this loop in the decoder layer? I am struggling to figure it out on my own.
    Thanks much in advance,
    Volodimir

    • @arp_ai
      @arp_ai  4 years ago

      By "rows" I assume you mean when the model is processing a batch, and every row is an example sentence. This visual might explain that:
      jalammar.github.io/images/gpt2/transformer-attention-masked-scores-softmax.png
      from jalammar.github.io/illustrated-gpt2/
      (A code sketch of this masking follows at the end of this thread.)

    • @vslobody
      @vslobody 4 years ago

      @@arp_ai Thanks! If every row is an example sentence, then why do you only look at the first word in the first row, but at two words in the second row, and so on?

    • @arp_ai
      @arp_ai  4 years ago

      @@vslobody Sorry, let me clarify. In the image, each row is for processing the same sentence with one additional word.
      The section in the article that starts with "This masking is often implemented as a matrix called..." explains this in more detail.

    • @vslobody
      @vslobody 4 years ago

      @@arp_ai Great, thanks a lot. So this is my question: where is the loop that allows me to go through each word in the sentence? I cannot seem to find one in the code.

    • @arp_ai
      @arp_ai  4 years ago

      @@vslobody I believe that would be the forward pass that generates each token. What implementation are you looking at? Huggingface?
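      To make the masking discussed in this thread concrete, here is a minimal PyTorch sketch (not taken from any particular implementation):

      ```python
      import torch
      import torch.nn.functional as F

      seq_len = 5
      scores = torch.randn(seq_len, seq_len)   # raw attention scores (hypothetical)

      # Causal mask: position i may only attend to positions <= i.
      mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
      scores = scores.masked_fill(mask, float("-inf"))

      weights = F.softmax(scores, dim=-1)      # future positions get exactly zero weight
      ```

      Note there is no explicit loop inside the layer: during training the mask lets all positions be computed in one matrix operation; the only loop is the outer generation loop at inference, which runs the forward pass once per new token.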

  • @KlimovArtem1
    @KlimovArtem1 3 years ago +3

    15:30 - when it was trained on huge texts, how did they decide how to tokenize them? Is it based on some linguistic units? Syllables?

    • 3 years ago

      If you're using pre-trained word embeddings, you have to tokenize in the exact fashion the word embeddings were tokenized. Other than that, if you don't use pre-trained embeddings (which is usually not the case), you can just go over the entire corpus and create a list of distinct words or n-grams or whatever way you have chosen to define a token.

    • @KlimovArtem1
      @KlimovArtem1 3 years ago

      @ Why are tokens needed at all? Why not use letters?

    • 3 years ago

      @@KlimovArtem1 All a model understands is numbers.

    • @KlimovArtem1
      @KlimovArtem1 3 years ago

      @ Letters are numbers too. Again, I asked why not use letters? When words are split into these other constructs instead, what are they from a linguistic point of view?
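      On the letters-vs-tokens question: character-level models do exist, but subword tokens make sequences much shorter, and attention cost grows quickly with sequence length. A rough comparison, assuming the pretrained GPT-2 tokenizer from the Hugging Face transformers library:

      ```python
      from transformers import GPT2Tokenizer

      tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
      text = "The Shawshank Redemption is a 1994 film."

      print(len(text))                      # sequence length if every letter were a token
      print(len(tokenizer.tokenize(text)))  # far fewer subword tokens for the same text
      ```

      Linguistically, the pieces are not syllables or morphemes; they are simply the character sequences that occurred most frequently in the tokenizer's training corpus.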

  • @mahdiamrollahi8456
    @mahdiamrollahi8456 2 years ago

    A question: suppose we are going to translate these two sentences into another language:
    1- I go to university by bike.
    2- I go to work by bike.
    (The difference is just "work" vs. "university".)
    We know that each token gets a different embedding vector depending on the input sequence (the encoder's results).
    So, given these differing embedding vectors, how can the transformer block translate "the" to the corresponding word in another language? I mean, when we have different representations of a token depending on the input, how do we get the proper results in the output (decoder output)?
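    Not an official answer, but the key mechanism is cross-attention: at every decoding step the decoder attends over all of the encoder's contextual vectors, so a word is translated from the whole-sentence representation rather than from a fixed per-token embedding. A minimal sketch with hypothetical shapes:

    ```python
    import torch
    import torch.nn.functional as F

    d_model = 8
    enc_out = torch.randn(6, d_model)    # contextual vectors for 6 source tokens
    dec_state = torch.randn(1, d_model)  # decoder's query while producing the next word

    # Cross-attention: the decoder queries all of the encoder's outputs at once.
    scores = dec_state @ enc_out.T / d_model ** 0.5
    weights = F.softmax(scores, dim=-1)  # how much each source token matters right now
    context = weights @ enc_out          # blended source information for this step
    # The next target word is predicted from `context` plus the decoder state, so the
    # same source token can be rendered differently in different sentences.
    ```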

  • @wangwu9299
    @wangwu9299 4 months ago

    Can you explain the mathematical model behind prompts? I have found no mathematical model explaining prompts, just guidelines saying you should write a prompt as if you are talking to a human being.
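    For what it's worth, there is a standard mathematical framing: an autoregressive language model defines p(x_t | x_1, ..., x_{t-1}), and generation factorizes as the product of these conditionals. A prompt is nothing special mathematically; it is simply the fixed prefix x_1, ..., x_k that the model conditions on, so prompt-writing guidelines are heuristics for choosing a prefix that steers p(completion | prompt) toward the output you want.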