MambaByte: Token-Free Language Modeling

  • Published: 23 Apr 2024
  • Mamba for efficient token-free language modeling - arxiv.org/abs/2401.13660, by Junxiong Wang, Tushaar Gangavarapu, Jing Nathan Yan
    Tutorial on Mamba: • Do we need Attention? ...

Comments • 11

  • @ChaseFreedomMusician 15 days ago +7

    Great presentation!! Thank you!!

  • @JunxiongWang-jxw 13 days ago +1

    Hi, a correction: the transformer has 361M parameters, not the 261M shown in this video, as reported in the paper.

  • @BooleanDisorder 14 days ago

    I definitely think attention in some form will survive even into a refined future Mamba model, given its powerful ability to capture high-dimensional representations.

  • @tan-uz4oe 9 days ago +1

    Thank you for a great presentation, Sasha (@srush_nlp). You mentioned that MambaByte is still behind token-based models. I wonder what makes Mamba theoretically inferior to token-based transformers, or is it just a matter of discovering the right practices and tricks?

  • @DewEfresh 14 days ago +1

    Nice work. I see the models on Hugging Face. Is there also a GitHub repo or notebooks to train them or run inference on them?
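
    Byte-level inference needs no tokenizer, so a short script suffices. Here is a minimal sketch, assuming the released checkpoints load through the state-spaces/mamba `MambaLMHeadModel` API from the `mamba_ssm` package; the repo id and generation settings below are illustrative assumptions, not confirmed details:

```python
# Hedged sketch: assumes the MambaByte checkpoints on Hugging Face are in the
# state-spaces/mamba format loadable via MambaLMHeadModel.from_pretrained.
import torch
from mamba_ssm.models.mixer_seq_simple import MambaLMHeadModel

model = MambaLMHeadModel.from_pretrained(
    "JunxiongWang/MambaByte_972M",  # assumed repo id; check the HF page
    device="cuda",
    dtype=torch.float32,
)
model.eval()

prompt = "The cat can never"
# Token-free input: raw UTF-8 bytes, i.e. a vocabulary of 256 symbols.
input_ids = torch.tensor([list(prompt.encode("utf-8"))], device="cuda")

with torch.no_grad():
    out = model.generate(input_ids=input_ids, max_length=200, top_k=1)

# Output ids are bytes again, so decoding is just the inverse of encoding.
print(bytes(out[0].tolist()).decode("utf-8", errors="replace"))
```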

  • @AM-yk5yd 15 days ago +4

    Charformer (which is mentioned in the paper; gradient-based tokenizers sound like pog) evaluates itself on multilingual tasks. It would be interesting to see how RNNs behave there. RWKV can easily jump to the wrong language (its model card mentions that a space at the end of the prompt can "upset the tokenizer").
    Also, can we go lower? MambaBit, when? Just imagine: a vocab size of 2. 😱

    • @donnychan1999 15 days ago +1

      Although it's an interesting idea, I don't see how it would be beneficial. Humans neither understand nor produce output in bits. I think it's more reasonable for each token to be a smallest semantic/graphical unit (a sememe/grapheme), whereas a bit holds no semantic or graphical information.

    • @AM-yk5yd 14 days ago +2

      @donnychan1999 Humans also don't produce output in bytes: there is no reason for 'кот' to take twice as many "thought units" as 'cat'.
      Well, meanwhile I decided to be the change I want to see in the world and published Maykeye/MambaBit on HF after torturing my laptop for 10 hours.
      "The cat can never" -> "The cat can never many be my father,
      Or else and the good many be my father,
      In the good many lord, and my father come."
      This is so cursed. Yet it's much better than I expected.
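
      The bit-level encoding itself is easy to picture: each UTF-8 byte becomes eight symbols from {0, 1}, so 'кот' (6 bytes) costs 48 bits against 24 for 'cat'. A toy sketch of that encoding follows; it is an illustration only, not the actual Maykeye/MambaBit code:

```python
# Toy illustration of bit-level "tokenization" with a vocabulary of size 2.
# Not taken from Maykeye/MambaBit; just the obvious byte-to-bit expansion.

def text_to_bits(text: str) -> list[int]:
    """Encode text as a flat sequence over the vocabulary {0, 1}."""
    return [(byte >> i) & 1
            for byte in text.encode("utf-8")
            for i in range(7, -1, -1)]  # most significant bit first

def bits_to_text(bits: list[int]) -> str:
    """Invert text_to_bits; assumes len(bits) is a multiple of 8."""
    data = bytes(
        int("".join(map(str, bits[i:i + 8])), 2)
        for i in range(0, len(bits), 8)
    )
    return data.decode("utf-8", errors="replace")

bits = text_to_bits("cat")      # 24 symbols for a 3-byte word
assert bits_to_text(bits) == "cat"
assert len(text_to_bits("кот")) == 48  # twice the length of 'cat'
print(bits[:8])                 # [0, 1, 1, 0, 0, 0, 1, 1] for 'c'
```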

  • @vibingcat1 15 days ago +1

    Great work and presentation! Have you also compared MambaByte to the baselines on any downstream tasks/benchmarks?

  • @Khari99 14 days ago

    Great work!

  • @marcfruchtman9473 15 days ago

    Very impressive.