Byte Latent Transformer - BLT explained (Entropy of Next Byte, META)

  • Published: 10 Feb 2025
  • In-depth explanation of the new Byte Latent Transformer (BLT) architecture for token-free transformers. Without a tokenizer, the byte patching function has to be defined at the local attention level via an entropy-based prediction of the next byte. The video also explains the inner workings of the local Encoder, including its causal local attention and the cross-attention mechanism that pools bytes into latent patches (a minimal code sketch of the entropy-based patching idea follows this description).
    All rights w/ authors:
    "Byte Latent Transformer: Patches Scale Better Than Tokens"
    Artidoro Pagnoni, Ram Pasunuru, Pedro Rodriguez, John Nguyen, Benjamin Muller, Margaret Li, Chunting Zhou, Lili Yu, Jason Weston, Luke Zettlemoyer, Gargi Ghosh, Mike Lewis, Ari Holtzman, Srinivasan Iyer
    FAIR at Meta, Paul G. Allen School of Computer Science & Engineering, University of Washington, University of Chicago
    #transformer
    #airesearch
    #meta
    #tokenization
    #languagemodel
  • Science
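
For intuition, the entropy-based patching mentioned in the description can be sketched as follows. This is only a minimal illustration of the global-threshold rule described in the paper, not the authors' code; `next_byte_probs` is a hypothetical stand-in for the small byte-level language model, and `threshold` is a tunable hyperparameter.

```python
import math
from typing import Callable, List, Sequence

def next_byte_entropy(probs: Sequence[float]) -> float:
    """Shannon entropy H = -sum(p * log p) of a 256-way next-byte distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def entropy_patch_starts(
    data: bytes,
    next_byte_probs: Callable[[bytes], Sequence[float]],  # stand-in for the small byte-level LM
    threshold: float,
) -> List[int]:
    """Return the indices where a new patch begins: a boundary is placed
    whenever the predicted entropy of the next byte exceeds the threshold."""
    starts = [0]  # the first byte always opens a patch
    for i in range(1, len(data)):
        h = next_byte_entropy(next_byte_probs(data[:i]))
        if h > threshold:
            starts.append(i)  # high uncertainty -> start a new patch here
    return starts
```

The paper also describes a variant that looks at the change in entropy between consecutive bytes rather than a single global threshold; the sketch above shows only the simpler rule.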

Comments • 15

  • @code4AI
    @code4AI  1 month ago +6

    Please note: with the automatic dubbing from YouTube/Google, you hear a synthetic voice in your regional language. To hear my original voice in English, switch to "Default" or "English" in the settings. Thank you.

  • @mrpocock
    @mrpocock 1 month ago +14

    Byte-level LLMs are obviously the way forward for that first round of training where you're predicting 1..n tokens given the prefix, particularly for multi-language models. Tokenization is clearly a hack, like in the dark ages of image neural networks, where we would hand-craft feature detection kernels.

  • @ProgrammingWIthRiley
    @ProgrammingWIthRiley 1 month ago +1

    Brother, you are amazing.
    Thank you for doing this.

  • @williamervin3272
    @williamervin3272 1 month ago

    I would love to see a follow up paper that explores adding another layer to create patches of patches. Then maybe the "Large Concept Model" idea can finally be realized with good performance. Fun to think about!

  • @wwkk4964
    @wwkk4964 1 month ago +1

    Thank you so much for covering this paper! I had been thinking about this specific implementation for a year, and I believe it's a significant step towards a truly general learning architecture that minimizes hand-crafted human priors.

  • @TalsBadKidney
    @TalsBadKidney 1 month ago +2

    very very cool

  • @themax2go
    @themax2go 1 month ago +2

    I'm having a plant-based BLT right now

  • @thanhhuynh1139
    @thanhhuynh1139 1 month ago

    I think the entropy formula should be p_x*log(1/p_x) = - p_x*log(p_x).
    Where did the ‘-’ go?
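
    For reference, both forms give the same quantity, since log(1/p) = -log(p). Written out for the next-byte distribution (standard Shannon entropy, notation only):

    ```latex
    H(x_i) = \sum_{v} p(x_i = v \mid x_{<i}) \,\log \frac{1}{p(x_i = v \mid x_{<i})}
           = -\sum_{v} p(x_i = v \mid x_{<i}) \,\log p(x_i = v \mid x_{<i})
    ```

    A summand written as p·log p without a leading minus would be a sign slip in the notation, not a different quantity.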

  • @King_Deundel
    @King_Deundel 1 month ago

    BLT seems the way to go in an ideal world, but there are definitely problems with it. I think tokenizers have accomplished tremendous work, and we are at this state thanks to improvements in vocab size and tokenization mechanisms, but from this point we may have the technology and resources to try BLT on a model (I still don't think it would work that much better).

    • @augmentos
      @augmentos 1 month ago

      Can you expand on the ‘definitely problems’ with it?

  • @davidwynter6856
    @davidwynter6856 1 month ago +2

    Can you clarify that pre-training will have to use the BLT embeddings? I.e., unless models pre-trained with BLT start appearing on Hugging Face or elsewhere, we mere mortals will not be able to take advantage of this new method?

  • @JeomonGeorge
    @JeomonGeorge 1 month ago

    Does the small transformer use BPE then? In H(x_i), is it computing the cross-entropy? 26:13

  • @ivangoncharuk607
    @ivangoncharuk607 1 month ago +1

    Bacon Lettuce Tomato