Which transformer architecture is best? Encoder-only vs Encoder-decoder vs Decoder-only models

  • Published: 2 Jul 2024
  • The battle of transformer architectures: Encoder-only vs Encoder-decoder vs Decoder-only models. Discover the architecture and strengths of each model type to make informed decisions for your NLP projects.
    0:00 - Introduction
    0:50 - Encoder-only transformers
    2:40 - Encoder-decoder (seq2seq) transformers
    4:40 - Decoder-only transformers

Comments • 28

  • @sp5394 • 2 days ago

    Thank you very much. Great video! Clear, concise and yet covers most of the necessary details.

  • @sumukhas5418 • 9 months ago • +1

    Great video, learned a lot about how these models work.
    Looking forward to more videos like these 😊

  • @chrisogonas • 9 months ago

    Well illustrated. Thanks

  • @chitranair1105 • 6 months ago • +1

    Good explanation. Thanks!

  • @kevon217 • 1 year ago

    Great overview!

  • @Monoglossia • 1 year ago • +1

    Very clear, thank you!

  • @WhatsAI • 11 months ago

    Great video Bai! :)

  • @groundingtiming • 9 months ago • +2

    Great video! Can you make one with more detail focusing on the why?

  • @ZivShemesh • 9 months ago

    Thank you very much, very helpful!

  • @nudelsuppenzauberer3367 • 4 months ago

    I think you saved my exams, thank you man!

  • @prabhakarnimmagadda6599 • 11 months ago • +2

    Good bro

  • @MannyBernabe • 4 months ago

    thx

  • @xflory26x • 1 year ago • +5

    It's still not clear what the difference between the three is. How are they different in terms of the way they process text? How is the encoder-decoder different from the decoder-only model, if both of them are autoregressive?

    • @EfficientNLP • 1 year ago • +1

      Indeed, they have a lot in common, and both encoder-decoder and decoder-only models perform autoregressive decoding. The main difference is that encoder-decoder models make an architectural distinction between the input and the output: the decoder typically contains a cross-attention mechanism, which is not present in decoder-only models.
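      A minimal PyTorch sketch of that distinction (the dimensions and tensors are illustrative, not from the video): both blocks apply masked self-attention, but only the encoder-decoder block adds cross-attention over the encoder's output.

      ```python
      import torch
      import torch.nn as nn

      d_model, n_heads = 64, 4
      self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
      cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

      tgt = torch.randn(1, 5, d_model)     # tokens generated so far
      memory = torch.randn(1, 7, d_model)  # encoder outputs (encoder-decoder only)

      # Causal mask: True blocks attention to future positions.
      causal = torch.triu(torch.ones(5, 5, dtype=torch.bool), diagonal=1)

      # Decoder-only block: masked self-attention over prompt + generated tokens.
      h, _ = self_attn(tgt, tgt, tgt, attn_mask=causal)

      # Encoder-decoder block: the same masked self-attention, followed by
      # cross-attention (queries from the decoder, keys/values from the encoder).
      h2, _ = self_attn(tgt, tgt, tgt, attn_mask=causal)
      h2, _ = cross_attn(h2, memory, memory)
      ```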

  • @arabindabhattacharjee9774 • 7 months ago • +2

    One thing I still did not understand: how does a decoder-only model work when there is no encoder? What part ensures that the sequence of inputs is kept in order and does not get jumbled up, so that the output is correct?

    • @EfficientNLP • 7 months ago

      In the decoder-only model, the input is provided as a prompt or prefix, which the model uses to generate subsequent tokens. As for how they don't get jumbled up - they use positional encodings to convey information about word order. I have some videos about how positional encodings work if you're interested.
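      A quick sketch of the sinusoidal positional encodings described in "Attention Is All You Need" (one common choice; the sizes here are arbitrary). Each position gets a distinct vector that is added to its token embedding, which is what preserves word order:

      ```python
      import numpy as np

      def positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
          pos = np.arange(seq_len)[:, None]     # positions 0..seq_len-1
          i = np.arange(d_model // 2)[None, :]  # dimension-pair index
          angles = pos / np.power(10000, 2 * i / d_model)
          pe = np.zeros((seq_len, d_model))
          pe[:, 0::2] = np.sin(angles)          # even dimensions
          pe[:, 1::2] = np.cos(angles)          # odd dimensions
          return pe

      print(positional_encoding(4, 8).round(2))  # one unique row per position
      ```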

    • @desrucca • 6 months ago • +1

      @EfficientNLP I've tried prompting a conversational chatbot with the transformers library in Python, but I found that a decoder-only (causal) model is many times slower than a (seq2seq) encoder-decoder model. Why is that?

  • @saramoeini4286 • 1 month ago

    Hi, thanks for your video!
    If my encoder produces a series of tags for each word in the input sentence, and I want to use those tags to generate text that is correct given the input and the encoder's tags, how can I use a decoder for this?

    • @EfficientNLP • 1 month ago

      I don't know of any model specifically designed for this, but one approach is to use a decoder model, where you can feed the text and tags in as a prompt (you may experiment with different ways of encoding this and see what works best).
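      A tiny illustration of one such encoding (the word/tag prompt format here is entirely hypothetical, just one way to serialize the tags for a decoder-only model):

      ```python
      words = ["The", "cat", "sat"]
      tags = ["DET", "NOUN", "VERB"]  # e.g. the tags your encoder produced

      # Serialize word/tag pairs inline, then ask the model to generate from them.
      tagged = " ".join(f"{w}/{t}" for w, t in zip(words, tags))
      prompt = f"Input with tags: {tagged}\nRewrite as a correct sentence:"
      # The prompt is then fed to a decoder-only model for generation.
      ```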

    • @saramoeini4286 • 1 month ago

      @EfficientNLP Thank you.

  • @Sessrikant • 6 months ago

    Thanks, but it's not clear. Do you think encoder-only or encoder-decoder models are a thing of the past, given that ChatGPT now takes speech as input, meaning it can process speech-to-text?

    • @EfficientNLP • 5 months ago

      Speech-to-text models generally use encoder-decoder architectures and cannot be handled by a decoder-only model. ChatGPT, I believe, uses a separate speech model to transcribe audio before the main text-based model.
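      For example, Whisper is an encoder-decoder speech-to-text model; a minimal sketch with the Hugging Face transformers library (the audio file path is a placeholder):

      ```python
      from transformers import pipeline

      # Whisper: audio goes through the encoder, text comes out of the decoder.
      asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")
      print(asr("speech_sample.wav")["text"])  # any local audio file
      ```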

    • @Sessrikant • 5 months ago

      @EfficientNLP "On decoder-only architecture for speech-to-text and large language model integration"
      Jian Wu, Yashesh Gaur, Zhuo Chen, Long Zhou, Yimeng Zhu, Tianrui Wang, Jinyu Li, Shujie Liu, Bo Ren, Linquan Liu, Yu Wu
      Large language models (LLMs) have achieved remarkable success in the field of natural language processing, enabling better human-computer interaction using natural language. However, the seamless integration of speech signals into LLMs has not been explored well. The "decoder-only" architecture has also not been well studied for speech processing tasks. In this research, we introduce Speech-LLaMA, a novel approach that effectively incorporates acoustic information into text-based large language models. Our method leverages Connectionist Temporal Classification and a simple audio encoder to map the compressed acoustic features to the continuous semantic space of the LLM. In addition, we further probe the decoder-only architecture for speech-to-text tasks by training a smaller scale randomly initialized speech-LLaMA model from speech-text paired data alone. We conduct experiments on multilingual speech-to-text translation tasks and demonstrate a significant improvement over strong baselines, highlighting the potential advantages of decoder-only models for speech-to-text conversion. arXiv:2307.03917

  • @kaustuvray5066 • 6 months ago

    At 3:08, why does the encoder take 4 timesteps? Isn't the encoder supposed to be parallel?

    • @EfficientNLP • 6 months ago

      You’re right, transformer encoders process all the input in parallel. However, encoders are not always transformers, and in this case the figure shows an example of the older RNN/LSTM type of encoder.
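      A small PyTorch sketch of the contrast (layer sizes are arbitrary): the LSTM encoder must update its hidden state one timestep at a time, while the transformer encoder layer processes every position in a single parallel pass.

      ```python
      import torch
      import torch.nn as nn

      x = torch.randn(1, 4, 32)  # batch of 1, 4 timesteps, 32 features

      # RNN/LSTM encoder: internally iterates over the 4 timesteps in order.
      lstm = nn.LSTM(input_size=32, hidden_size=32, batch_first=True)
      out_rnn, _ = lstm(x)

      # Transformer encoder: all 4 positions attend to each other at once.
      enc = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
      out_tf = enc(x)
      ```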

  • @MrFromminsk • 7 months ago

    If decoder-only models can be used for summarization, translation, etc., why do we even need encoders?

    • @EfficientNLP • 7 months ago • +1

      For many tasks like summarization, both decoder-only and encoder-decoder architectures are viable. However, encoder-decoder architectures are preferred for certain tasks that are naturally sequence-to-sequence, like machine translation. Furthermore, for tasks involving different modalities, such as speech-to-text, only encoder-decoder models will work; you cannot use a decoder-only model.
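      As a sketch of that flexibility with the transformers library (the model choices here are illustrative), the same summarization task can be served by an encoder-decoder model or by prompting a decoder-only model:

      ```python
      from transformers import pipeline

      text = "Transformers come in encoder-only, encoder-decoder, and decoder-only variants."

      # Encoder-decoder: the input is encoded, the summary comes from the decoder.
      summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
      print(summarizer(text, max_length=30)[0]["summary_text"])

      # Decoder-only: the input is just a prefix; the model continues the prompt.
      generator = pipeline("text-generation", model="gpt2")
      print(generator(text + "\nTL;DR:", max_new_tokens=30)[0]["generated_text"])
      ```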