Transformers and Self-Attention (DL 19)

  • Published Nov 26, 2024

Comments • 10

  • @browncow7113 · 1 year ago +1

    Simply superb pedagogical style. I am looking forward to watching many of your other videos.

  • @sagar3482 · 1 year ago +2

    I really never comment, but this is the best explanation of transformers on the entire internet.
    I really appreciate your hard work.
    Thanks for sharing this content for free.

  • @jeangatti9384 · 1 year ago +1

    Excellent explanation, very clear. I love your teaching style: clear and not too fast. Thanks so much.

  • @AdityaSingh-qk4qe · 8 months ago

    This was superb. I never thought learning the transformer architecture could be done in 17 minutes (or 8-9 minutes at 2x), and the explanation was very, very good.

  • @1166NYC · 2 years ago

    When I was first studying machine learning, there were only supervised and unsupervised methods of learning with all these different application-specific structures like CNN and RNN. I'm so excited about the future. Transformers are a crazy step towards greater achievement in ML. Your video is well-explained and helps my comprehension of this next step. Thanks!

  • @XuanJr. · 1 year ago

    Your video is the ideal style for studying CS.

  • @minkijung3 · 8 months ago

    It's clear and accurate!!! Thank you very much for making this video!!

  • @Azman_Hamid · 1 year ago

    This is gold.

  • @AdityaSingh-qk4qe · 7 months ago +1

    I hope you can make more videos on deep learning, sir; your videos are succinct and full of information.

  • @gauravmenghani4 · 1 year ago +3

    Very well explained. Could you also 'augment' this lecture by explaining the intuition behind Q, K, and V? Thanks!
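
    [Editor's note: the Q/K/V intuition asked about above can be illustrated with a minimal NumPy sketch of scaled dot-product attention, softmax(QKᵀ/√d_k)V. The token count, dimensions, and random weight matrices here are illustrative assumptions, not taken from the video.]

    ```python
    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Compute softmax(Q K^T / sqrt(d_k)) V for 2-D Q, K, V."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)      # how strongly each query matches each key
        scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V                   # attention-weighted average of the values

    # Toy example (hypothetical sizes): 3 tokens, model dimension 4.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(3, 4))                            # token embeddings
    Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))  # learned projections
    out = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
    print(out.shape)  # (3, 4): one output vector per token
    ```

    Each row of `weights` sums to 1, so every output token is a convex combination of the value vectors: queries decide where to look, keys decide what matches, values decide what gets mixed in.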