Sentence Transformers - EXPLAINED!

  • Published Dec 14, 2024

Comments • 46

  • @CodeEmporium
    @CodeEmporium  2 years ago +23

    Hey everyone! Hope you're all doing super well. This video will give you everything you need to know about Transformer neural networks, BERT networks, and Sentence Transformers - or at least all that we can cover in 17 minutes. Hoping we all understand why these architectures were developed the way they were, painting the picture as a fluid story. I'm trying another teaching style here. If you like this kind of video, please do let me know in the comments. I put a lot of effort into this, so I hope you think this is good! Enjoy! And cheers!

  • @Daniel-gy1rc
    @Daniel-gy1rc 2 years ago +8

    dude you are amazing. Hope you keep this work up! Explaining complex things in an easy-to-follow and exemplified way is a great skill!

    • @CodeEmporium
      @CodeEmporium  2 years ago

      Thanks a ton Daniel! Much appreciated compliments :)

  • @manash.b4892
    @manash.b4892 1 year ago

    Wow. Thanks a lot for all these videos. I am a self-studying beginner and your videos have been a boon. Keep up the good work, man!

  • @TheHamoodz
    @TheHamoodz 2 years ago

    This channel has orders of magnitude fewer views than it deserves

  • @LiaAnggraini1
    @LiaAnggraini1 2 years ago +1

    Thank you! This is what I need for my thesis

  • @brewingacupofdata
    @brewingacupofdata 3 months ago +1

    Great video! One minor comment: shouldn't the loss equation for the triplet method (13:58) be the other way around? The difference that is subtracted should be between the anchor sentence and the negative sentence.
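
    For reference, the triplet objective in the Sentence-BERT paper does subtract the anchor-negative distance: loss = max(||s_a - s_p|| - ||s_a - s_n|| + eps, 0). A minimal sketch of it, assuming the sentence embeddings are plain NumPy vectors:

    import numpy as np

    def triplet_loss(anchor, positive, negative, eps=1.0):
        # SBERT triplet objective: the anchor should end up at least `eps`
        # closer to the positive sentence than to the negative one.
        d_pos = np.linalg.norm(anchor - positive)  # anchor-positive distance
        d_neg = np.linalg.norm(anchor - negative)  # anchor-negative distance (subtracted)
        return max(d_pos - d_neg + eps, 0.0)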

  • @MohammadShafkatIslam-k4x
    @MohammadShafkatIslam-k4x 3 months ago

    Great video. I just figured out the issue with my dataset after getting bad results from using RoBERTa directly

  • @kevon217
    @kevon217 1 year ago

    Excellent overview!

  • @BinayGupta-ny7bp
    @BinayGupta-ny7bp 4 months ago

    Great explanation dude

  • @nadavnesher8641
    @nadavnesher8641 4 months ago

    Brilliant video 🚀

  • @sooryaprakash6390
    @sooryaprakash6390 24 days ago

    Great Video

  • @WhatsAI
    @WhatsAI 2 years ago

    Amazing overview!

  • @simoneparvizi775
    @simoneparvizi775 2 years ago +1

    Hey man, huge fan! Would you do a video about the "vanishing gradient problem"? Tbh I've been looking for a good video on it, but they're just not as on point as you are... I'd really like your explanation of that topic!
    Keep up the great work

  • @JJ-dz2ne
    @JJ-dz2ne 1 year ago

    Very informative, thank you!

    • @CodeEmporium
      @CodeEmporium  1 year ago

      You are very welcome! Thanks for watching and commenting

  • @prasannabiswas2727
    @prasannabiswas2727 2 years ago

    Really the best info out there. Thank you.

  • @safwanmohammed7715
    @safwanmohammed7715 5 months ago

    Masterpiece 💯

  • @NicholasRenotte
    @NicholasRenotte 2 years ago

    Oooooooh, this is so freaking cool!! When are we teaming up to build something?!

    • @CodeEmporium
      @CodeEmporium  2 years ago +1

      Dude. I will reach out ma guy (sorry I didn't before) :)

    • @NicholasRenotte
      @NicholasRenotte 2 years ago

      @@CodeEmporium ayyy no problemo man!

  • @HazemAzim
    @HazemAzim 1 year ago

    Really neat. Thank you, I was looking for good material on SBERT with decent depth

  • @Han-ve8uh
    @Han-ve8uh 2 years ago +3

    Could you explain these 2 points in more detail?
    3:21 transformers weren't designed to be language models + 16:35 transformers not complex enough to train a language model
    1. What are language models supposed to do that transformers can't? My interpretation is that transformers do seq2seq tasks like translation, and translation needs a language model, so transformers are language models. Is anything wrong with this thinking?
    2. Can I say transformers were only invented to parallelize the RNN family of models with attention? Are there any other obvious general or task-specific benefits of transformers?

    • @mizoru_
      @mizoru_ 2 years ago

      I guess he means that they get better through improved pretraining (thus understand language better)
      From Papers with code: "BERT improves upon standard Transformers by removing the unidirectionality constraint by using a masked language model (MLM) pre-training objective. The masked language model randomly masks some of the tokens from the input, and the objective is to predict the original vocabulary id of the masked word based only on its context."
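
      A minimal sketch of that MLM objective in practice, using the Hugging Face transformers fill-mask pipeline; the model name bert-base-uncased is an assumed common choice, not something from the video:

      from transformers import pipeline

      # BERT's masked-language-model head predicts the token hidden behind
      # [MASK] from bidirectional context -- the pre-training objective
      # quoted above.
      unmasker = pipeline("fill-mask", model="bert-base-uncased")
      for pred in unmasker("Paris is the [MASK] of France.")[:3]:
          print(pred["token_str"], round(pred["score"], 3))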

  • @GeoffLadwig
    @GeoffLadwig 10 months ago

    Great stuff. Thanks

  • @PritishMishra
    @PritishMishra 2 years ago +4

    Great video!!! Can we get some project videos on Transformers? Since you showed text similarity with BERT in this video, do you have any plan to create a video doing this with Python? (A minimal sketch follows after this thread.)

    • @thekarthikbharadwaj
      @thekarthikbharadwaj 2 years ago +2

      Yes, really needed.
      The internet is lacking an end-to-end project built with Transformers, with proper backend information
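
    A minimal sketch of the requested text-similarity flow, using the sentence-transformers library; the model name all-MiniLM-L6-v2 is an assumed common default, not something the video prescribes:

    from sentence_transformers import SentenceTransformer, util

    # Encode sentences into fixed-size vectors, then compare with cosine similarity.
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice
    sentences = ["The cat sits on the mat.", "A feline rests on a rug."]
    embeddings = model.encode(sentences, convert_to_tensor=True)
    print(util.cos_sim(embeddings[0], embeddings[1]).item())  # high score = similar meaning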

  • @RaghavendraK458
    @RaghavendraK458 2 years ago

    Great video. Thanks

  • @masteronepiece6559
    @masteronepiece6559 2 years ago

    Nice overview

  • @miriamramstudio3982
    @miriamramstudio3982 1 year ago

    Great video. One part I didn't completely understand is the NLI part. Do you mean that after the NLI step, the mean-pooled sentence vector of the newly trained BERT will no longer be "poor"? Thanks.
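
    For context, in the Sentence-BERT paper the NLI step fine-tunes BERT with a softmax classifier over (u, v, |u - v|), where u and v are the mean-pooled sentence vectors; training that classifier end-to-end is what reshapes the space so that mean pooling yields useful sentence embeddings afterwards. A minimal sketch of that head, with illustrative dimensions and dummy inputs:

    import torch
    import torch.nn as nn

    # SBERT NLI head: concatenate u, v, |u - v| and classify into
    # entailment / neutral / contradiction.
    dim, n_labels = 768, 3                  # BERT-base hidden size, 3 NLI labels
    classifier = nn.Linear(3 * dim, n_labels)

    u = torch.randn(1, dim)                 # dummy mean-pooled vector, sentence A
    v = torch.randn(1, dim)                 # dummy mean-pooled vector, sentence B
    logits = classifier(torch.cat([u, v, torch.abs(u - v)], dim=-1))
    loss = nn.CrossEntropyLoss()(logits, torch.tensor([0]))  # dummy label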

  • @clairewang8370
    @clairewang8370 2 years ago

    This is 🔥!!!😍😍😍😍😍

  • @freedmoresidume
    @freedmoresidume 2 years ago

    Great video, thanks a lot

  • @TheShadyStudios
    @TheShadyStudios 2 years ago

    Great choice!

  • @Slayer-dan
    @Slayer-dan 2 years ago

    Thank you sir.

  • @norlesh
    @norlesh 1 year ago

    5:11 Bidirectional Encoder Representations from Transformers (not "of Transformers")

  • @kestonsmith1354
    @kestonsmith1354 2 years ago +1

    My favourite model to train is T5. So much better. I don't like encoder-only models; I'd rather use a model that has both an encoder and a decoder than one or the other.

  • @keerthana2354
    @keerthana2354 2 years ago

    Can we use this for comparing two web articles?

  • @moslehmahamud
    @moslehmahamud 2 years ago

    this is good!

  • @shoukatali5671
    @shoukatali5671 2 years ago

    Nice

  • @roccococolombo2044
    @roccococolombo2044 2 years ago +3

    It is spelled chien, not chein

  • @InquilineKea
    @InquilineKea 2 years ago

    QUORAAAAA