Which transformer architecture is best? Encoder-only vs Encoder-decoder vs Decoder-only models

  • Published on Oct 26, 2024

Comments • 29

  • @sp5394
    @sp5394 3 months ago

    Thank you very much. Great video! Clear, concise and yet covers most of the necessary details.

  • @sumukhas5418
    @sumukhas5418 1 year ago +1

    Great video, learnt a lot about how models work.
    Looking forward to more videos like these 😊

  • @groundingtiming
    @groundingtiming 1 year ago +4

    Great video, can you make one with more detail focusing on the why?

  • @chitranair1105
    @chitranair1105 10 months ago +1

    Good explanation. Thanks!

  • @arabindabhattacharjee9774
    @arabindabhattacharjee9774 11 months ago +3

    One thing I still did not understand: how does a decoder-only model work when there is no encoder? What part ensures that the sequence of inputs is kept in order and does not get jumbled up, so the output is correct?

    • @EfficientNLP
      @EfficientNLP 11 months ago

      In the decoder-only model, the input is provided as a prompt or prefix, which the model uses to generate subsequent tokens. As for how they don't get jumbled up - they use positional encodings to convey information about word order. I have some videos about how positional encodings work if you're interested.
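
      A minimal NumPy sketch (not from the video) of one common scheme, the sinusoidal encodings from the original Transformer paper; many recent decoder-only models use learned or rotary encodings instead, but the idea of giving each position its own signature is the same:

      ```python
      import numpy as np

      def sinusoidal_positional_encoding(seq_len, d_model):
          """Classic sinusoidal positional encodings (Vaswani et al., 2017).

          Each position gets a unique pattern of sines and cosines, so the
          model can recover token order even though attention by itself is
          order-agnostic.
          """
          positions = np.arange(seq_len)[:, None]       # (seq_len, 1)
          dims = np.arange(0, d_model, 2)[None, :]      # (1, d_model/2)
          angle_rates = 1.0 / np.power(10000.0, dims / d_model)
          angles = positions * angle_rates              # (seq_len, d_model/2)

          pe = np.zeros((seq_len, d_model))
          pe[:, 0::2] = np.sin(angles)
          pe[:, 1::2] = np.cos(angles)
          return pe

      # Token embeddings plus positional encodings -> order-aware inputs
      # (illustrative shapes only)
      token_embeddings = np.random.randn(10, 512)
      inputs = token_embeddings + sinusoidal_positional_encoding(10, 512)
      ```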

    • @desrucca
      @desrucca 10 months ago +1

      @@EfficientNLP I've tried prompting a conversational chatbot with the transformers library in Python,
      but I found that a decoder-only (causal) model is many times slower than a (seq2seq) encoder-decoder model. Why is that?
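
      A rough, hypothetical sketch of how such a comparison might be set up with the transformers library; the model names below are placeholders, and the speed difference usually comes down to model size, prompt length, and generation settings rather than the architecture alone:

      ```python
      # Compare generation speed of a causal (decoder-only) model vs a
      # seq2seq (encoder-decoder) model. "gpt2" and "t5-small" are just
      # stand-in examples of the two architectures.
      import time
      from transformers import (AutoTokenizer, AutoModelForCausalLM,
                                AutoModelForSeq2SeqLM)

      prompt = "Translate English to German: The house is wonderful."

      for name, cls in [("gpt2", AutoModelForCausalLM),
                        ("t5-small", AutoModelForSeq2SeqLM)]:
          tokenizer = AutoTokenizer.from_pretrained(name)
          model = cls.from_pretrained(name)
          inputs = tokenizer(prompt, return_tensors="pt")

          start = time.time()
          model.generate(**inputs, max_new_tokens=40)
          print(f"{name}: {time.time() - start:.2f}s")
      ```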

  • @xflory26x
    @xflory26x 1 year ago +7

    It's still not clear what the difference between the three is. How do they differ in the way they process text? How is the encoder-decoder different from the decoder-only, if both of them are autoregressive?

    • @EfficientNLP
      @EfficientNLP 1 year ago +4

      Indeed, they have a lot in common, and both encoder-decoder and decoder-only models do autoregressive decoding. The main difference is that encoder-decoder models make an architectural distinction between the input and the output: in encoder-decoder models there is typically a cross-attention mechanism in the decoder, which is not present in decoder-only models.
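
      To make that concrete, here is a minimal PyTorch sketch (simplified: layer norms, dropout, and masking details omitted) of a decoder block with and without the cross-attention step:

      ```python
      # The only structural difference between the two decoder blocks is the
      # extra cross-attention step, which lets an encoder-decoder's decoder
      # attend to the encoder's outputs.
      import torch
      import torch.nn as nn

      class DecoderBlock(nn.Module):
          def __init__(self, d_model=512, n_heads=8, use_cross_attention=False):
              super().__init__()
              self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
              self.use_cross_attention = use_cross_attention
              if use_cross_attention:
                  self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
              self.ffn = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(),
                                       nn.Linear(4 * d_model, d_model))

          def forward(self, x, encoder_out=None, causal_mask=None):
              # Causal (masked) self-attention over the tokens generated so far
              x = x + self.self_attn(x, x, x, attn_mask=causal_mask)[0]
              # Cross-attention over encoder outputs: encoder-decoder models only
              if self.use_cross_attention:
                  x = x + self.cross_attn(x, encoder_out, encoder_out)[0]
              return x + self.ffn(x)

      decoder_only_block = DecoderBlock(use_cross_attention=False)  # GPT-style
      enc_dec_block = DecoderBlock(use_cross_attention=True)        # T5/BART-style
      ```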

  • @tukarampriyolkar3608
    @tukarampriyolkar3608 3 months ago

    Awesome explanation!

  • @nudelsuppenzauberer3367
    @nudelsuppenzauberer3367 8 months ago

    I think you saved my exams, thank you man

  • @kaustuvray5066
    @kaustuvray5066 10 months ago +2

    At 3:08, why does the encoder take 4 timesteps? Isn't the encoder supposed to be parallel?

    • @EfficientNLP
      @EfficientNLP 10 months ago

      You’re right, transformer encoders process all the input in parallel. However, encoders are not always transformers, and in this case the figure shows an example of the older RNN/LSTM type of encoder.
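
      A tiny PyTorch sketch (illustrative only, not from the video) of the contrast: an LSTM-style encoder steps through the tokens one at a time, while a transformer encoder layer handles them all in a single pass:

      ```python
      import torch
      import torch.nn as nn

      x = torch.randn(1, 4, 64)  # batch of 1, 4 tokens, 64-dim embeddings

      # RNN/LSTM-style encoder: an explicit loop over the 4 timesteps
      lstm_cell = nn.LSTMCell(input_size=64, hidden_size=64)
      h = torch.zeros(1, 64)
      c = torch.zeros(1, 64)
      for t in range(x.size(1)):          # 4 sequential steps
          h, c = lstm_cell(x[:, t], (h, c))

      # Transformer encoder: all 4 tokens attended to at once
      encoder_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
      out = encoder_layer(x)              # one parallel pass
      ```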

  • @chrisogonas
    @chrisogonas 1 year ago

    Well illustrated. Thanks

  • @Monoglossia
    @Monoglossia 1 year ago +1

    Very clear, thank you!

  • @WhatsAI
    @WhatsAI 1 year ago

    Great video Bai! :)

  • @ZivShemesh
    @ZivShemesh 1 year ago

    Thank you very much, very helpful!

  • @kevon217
    @kevon217 1 year ago

    Great overview!

  • @MrFromminsk
    @MrFromminsk 10 months ago

    If decoder-only models can be used for summarization, translation, etc., why do we even need encoders?

    • @EfficientNLP
      @EfficientNLP 10 months ago +2

      For many tasks like summarization, both decoder-only and encoder-decoder architectures are viable. However, encoder-decoder architectures are preferred for certain tasks that are naturally sequence-to-sequence, like machine translation. Furthermore, for tasks involving different modalities, such as speech-to-text, only encoder-decoder models will work; you cannot use a decoder-only model.

  • @saramoeini4286
    @saramoeini4286 5 months ago

    Hi. Thanks for your video.
    If my encoder produces a series of tags for each word in the input sentence, and I want to use those tags to generate text that is correct based on the input and the encoder's generated tags, how can I use a decoder for this?

    • @EfficientNLP
      @EfficientNLP 5 months ago

      I don't know of any model specifically designed for this, but one approach is to use a decoder model, where you can feed the text and tags in as a prompt (you may experiment with different ways of encoding this and see what works best).
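
      One possible (purely illustrative) way to pack the sentence and the tags into a single prompt for a decoder-only model with the transformers library; the prompt format, model name, and tag set below are assumptions to experiment with, not a recommended recipe:

      ```python
      from transformers import AutoTokenizer, AutoModelForCausalLM

      words = ["she", "go", "to", "school"]
      tags = ["PRON", "VERB-PAST", "ADP", "NOUN"]   # hypothetical encoder output

      # Encode the input text and its tags as one prompt for the decoder
      prompt = (
          "Rewrite the sentence so it matches the tags.\n"
          "Words: " + " ".join(words) + "\n"
          "Tags: " + " ".join(tags) + "\n"
          "Corrected sentence:"
      )

      model_name = "gpt2"  # stand-in; any instruction-tuned causal LM could be tried
      tokenizer = AutoTokenizer.from_pretrained(model_name)
      model = AutoModelForCausalLM.from_pretrained(model_name)

      inputs = tokenizer(prompt, return_tensors="pt")
      output_ids = model.generate(**inputs, max_new_tokens=20)
      print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
      ```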

    • @saramoeini4286
      @saramoeini4286 5 months ago

      @@EfficientNLP Thank you.

  • @Sessrikant
    @Sessrikant 9 months ago

    Thanks, but it's not clear. Do you think encoder-only or encoder-decoder models are a thing of the past, now that ChatGPT takes speech as input, meaning it is able to process speech-to-text?

    • @EfficientNLP
      @EfficientNLP 9 months ago

      Speech-to-text models generally use encoder-decoder architectures and cannot be handled by a decoder-only model. ChatGPT, I believe, uses a separate speech model to transcribe the audio before the main text-based model.

    • @Sessrikant
      @Sessrikant 9 months ago

      @@EfficientNLP On decoder-only architecture for speech-to-text and large language model integration
      Jian Wu, Yashesh Gaur, Zhuo Chen, Long Zhou, Yimeng Zhu, Tianrui Wang, Jinyu Li, Shujie Liu, Bo Ren, Linquan Liu, Yu Wu
      Large language models (LLMs) have achieved remarkable success in the field of natural language processing, enabling better human-computer interaction using natural language. However, the seamless integration of speech signals into LLMs has not been explored well. The "decoder-only" architecture has also not been well studied for speech processing tasks. In this research, we introduce Speech-LLaMA, a novel approach that effectively incorporates acoustic information into text-based large language models. Our method leverages Connectionist Temporal Classification and a simple audio encoder to map the compressed acoustic features to the continuous semantic space of the LLM. In addition, we further probe the decoder-only architecture for speech-to-text tasks by training a smaller scale randomly initialized speech-LLaMA model from speech-text paired data alone. We conduct experiments on multilingual speech-to-text translation tasks and demonstrate a significant improvement over strong baselines, highlighting the potential advantages of decoder-only models for speech-to-text conversion. arXiv:2205.01086

  • @MannyBernabe
    @MannyBernabe 8 months ago

    thx

  • @prabhakarnimmagadda6599
    @prabhakarnimmagadda6599 1 year ago +2

    Good bro