Scaling Transformer to 1M tokens and beyond with RMT (Paper Explained)

  • Published: 19 Nov 2024

Comments • 133

  • @YannicKilcher
    @YannicKilcher  a year ago +16

    OUTLINE:
    0:00 - Intro
    2:15 - Transformers on long sequences
    4:30 - Tasks considered
    8:00 - Recurrent Memory Transformer
    19:40 - Experiments on scaling and attention maps
    24:00 - Conclusion
    Paper: arxiv.org/abs/2304.11062

    • @CosmiaNebula
      @CosmiaNebula a year ago +3

      TLDR: use a Transformer as an RNN. Imagine an LSTM, but each LSTM cell is replaced by a Transformer block. Train it by backpropagating through 7 steps of the RNN ("backpropagation through time", or BPTT).
      Why now? Because algorithms and hardware have finally caught up enough to fit 7 copies of the Transformer onto one device.
      What next? Perhaps rematerialization! (Rough sketch of the idea below.)
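
      A minimal PyTorch-style sketch of that reading (my own illustration, not the authors' code): one shared Transformer encoder is applied segment by segment, a handful of memory vectors is prepended to each segment and carried forward, and the loss is backpropagated only through the last few segments (truncated BPTT). The segment length, memory size and BPTT horizon below are assumptions.

      ```python
      # Sketch only: a recurrent-memory Transformer loop with truncated BPTT.
      import torch
      import torch.nn as nn

      d_model, n_mem, bptt_steps = 256, 10, 7
      encoder = nn.TransformerEncoder(
          nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True),
          num_layers=4,
      )
      embed = nn.Embedding(30_000, d_model)                    # token embeddings
      memory0 = nn.Parameter(torch.zeros(1, n_mem, d_model))   # learned initial memory
      head = nn.Linear(d_model, 2)                             # toy classification head

      def forward_segments(segments, labels):
          """segments: list of LongTensors (batch, seg_len); one loss at the very end."""
          memory = memory0.expand(segments[0].size(0), -1, -1)
          for i, seg in enumerate(segments):
              x = torch.cat([memory, embed(seg)], dim=1)   # [memory ; segment tokens]
              h = encoder(x)
              memory = h[:, :n_mem, :]                     # read the updated memory back out
              if i < len(segments) - bptt_steps:
                  memory = memory.detach()                 # truncate BPTT to the last 7 segments
          return nn.functional.cross_entropy(head(memory.mean(dim=1)), labels)
      ```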

    • @thegreenxeno9430
      @thegreenxeno9430 a year ago

      Is Open Assistant open to submissions of home video recordings for training data?

  • @herp_derpingson
    @herp_derpingson a year ago +79

    Yay, a normal video after what feels like years.
    Also, is it just me, or have recent papers become increasingly easy to read? There is no obscure math, and the code is published.

    • @joech1065
      @joech1065 a year ago +11

      As Clyde from South Park would say, “ChatGPT, dude”

    • @NoNameAtAll2
      @NoNameAtAll2 a year ago +16

      I miss ML news :(

    • @Nif3
      @Nif3 a year ago +10

      Yes, I've noticed this as well - publications have become a lot shorter and more focused on practical applications.

    • @xynonners
      @xynonners a month ago

      @@Nif3 They have also become less novel; it's hard to find a paper that is both simple (with published code) and novel.

  • @halocemagnum8351
    @halocemagnum8351 a year ago +14

    I've always loved the in depth paper reviews! Thanks so much for this one, it was great!

  • @GeorgeFosberry
    @GeorgeFosberry a year ago +29

    Thank you for a great analysis that is accessible even to laymen like myself. Always a pleasure to watch your videos, in contrast to the AI hype riders (AAAAAAAAAAAAA TWO MILLION TOKENS CTX LENGTH IS HERE!!!11)

  • @jidun9478
    @jidun9478 a year ago +11

    Thanks for finally saying it. I have seen quite a few AI specialty channels talking about pasting stuff like the entire Harry Potter book series into a single prompt box :) OMG, I couldn't even comment.

  • @joe_limon
    @joe_limon a year ago +14

    Ty for covering this

  • @neocrz
    @neocrz a year ago +13

    Nice. I was interested in that paper. Video came out right on time

  • @perbojsen3433
    @perbojsen3433 a year ago +7

    Thank you for this nice video. Being brand new to this field, I nevertheless find your presentation and explanations very clear and easy to follow. I also appreciate your skepticism and how you look behind the hype.

  • @FredPauling
    @FredPauling a year ago +1

    I appreciate you taking the time to reduce the hype on this paper for non experts.

  • @adrianimfeld8360
    @adrianimfeld8360 a year ago +4

    Was literally waiting for your take on this paper, thx for covering it!

  • @ChuanChihChou
    @ChuanChihChou a year ago +3

    Information only propagates bottom-up in Transformer-XL, so the maximum "receptive field" (effective context length) is finite regardless of how far back the BPTT goes. More precisely, it is O(LC), where L = number of layers and C = context length of each layer (worked example below).
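
    A quick worked example of that bound (the layer count and segment length are just illustrative numbers):

    ```python
    # Transformer-XL-style cache: information climbs one layer per segment,
    # so the effective context is bounded by O(L * C).
    num_layers = 12      # L
    segment_len = 512    # C, tokens cached per layer
    print(num_layers * segment_len)   # 6144 tokens at most, however far back BPTT goes
    ```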

  • @Billy4321able
    @Billy4321able a year ago +13

    I was very skeptical when people were saying that it could read an entire book, in memory, all at once. As it turns out it was all just hype. Go figure.

  • @piratepartyftw
    @piratepartyftw a year ago +18

    Will you do Hyena next? Thanks!

  • @andres_pq
    @andres_pq a year ago +5

    Finally a paper review!!!

  • @breakablec
    @breakablec a year ago +16

    This seems to work only for sparse information density that does not overwhelm the input memory.

    • @agsystems8220
      @agsystems8220 a year ago

      For now. I guess you could let it control its own read speed so it runs at the speed it wants, potentially even with backtracking. It is currently working like a book that turns its own pages at a set rate, no matter how fast the reader feels is appropriate.

    • @breakablec
      @breakablec a year ago

      @@agsystems8220 Well, the input size could also be varied with various pretrained model sizes and potentially smaller chunks, and overwhelmed inputs could be detected and adjusted for as well.

  • @Skinishh
    @Skinishh a year ago

    Great explanation! The fact that the video is

  • @novantha1
    @novantha1 a year ago +4

    I actually had a very silly idea at one point where you would have a transformer model doing general processing and understanding, with the catch that it would rapidly forget information. However, each time it learned something, a small percentage of the weights involved would be sent to an RNN, almost in the background. The idea was that the RNN would be long-term memory, and it would only learn things that were reinforced many times, ideally retaining specific facts and figures.
    This isn't the same thing, but it seems that somebody had a similar thought.

  • @share4713
    @share4713 a year ago +2

    Finally! You don't know it, but I am waiting every day for a new video.

  • @jeffwads
    @jeffwads a year ago +18

    Having used the 30B model you guys created, I can say with confidence that it is an amazing model, far exceeding what I thought it would be capable of. Its comprehension appears to be at least GPT-3.5 level, if not better. Well done.

    • @preddyshite6342
      @preddyshite6342 a year ago +3

      Tell me you haven't used ChatGPT 3.5 in a while without telling me

    • @Klokinator
      @Klokinator a year ago +10

      OpenAssistant is absolutely not at ChatGPT's level. It is pretty good though, and certainly the best of the open source models out right now. I look forward to the next major iteration, and more importantly, I'M DOING MY PART! Contribute to the Oasst dataset!

  • @killers31337
    @killers31337 a year ago +6

    I guess the interesting part is that they didn't use any additional weights to process memory. BERT's lack of causal masking makes it possible to update memory just by passing it through the transformer layers. This method might be fundamentally incompatible with autoregressive models.
    It might be possible to use a NN trained this way with other forms of memory - I would guess it doesn't really care whether memory tokens come from the previous segment or elsewhere. So you could have a memory database and look up the most relevant memory for a specific segment.

  • @moomoo9820
    @moomoo9820 a year ago +49

    Overhype for the algo

  • @adelhalawa974
    @adelhalawa974 a year ago

    Really appreciate not just the breakdown but you injecting your intuition throughout. Great vid

  • @Alex-gc2vo
    @Alex-gc2vo a year ago +4

    Seems like you could do the same thing with prompting. Maybe even better. Just feed it chunks of the overall text with a prompt to take notes on information relevant to the question, then use all the notes to answer (sketch below). You could also do it with a vector database.
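
    A rough sketch of that prompting loop (the llm() call is a placeholder for whatever chat-completion API you use, and the chunk size is arbitrary):

    ```python
    # Hypothetical map-then-answer loop: take notes per chunk, then answer from the notes.
    def llm(prompt: str) -> str:
        raise NotImplementedError("plug in your favourite chat model here")

    def chunks(text: str, size: int = 3000) -> list[str]:
        return [text[i:i + size] for i in range(0, len(text), size)]

    def answer_over_long_text(document: str, question: str) -> str:
        notes = []
        for part in chunks(document):
            notes.append(llm(
                f"Question: {question}\n"
                f"Text:\n{part}\n"
                "Write brief notes on anything relevant to the question, or reply 'nothing'."
            ))
        return llm(
            f"Question: {question}\nNotes:\n" + "\n".join(notes) + "\nAnswer using only these notes."
        )
    ```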

  • @aBigBadWolf
    @aBigBadWolf a year ago +3

    You should do a video on the Block-Recurrent Transformer! It's a mix between an LSTM and a Transformer and achieves SOTA on PG-19.

  • @almoni127
    @almoni127 a year ago

    Great video as always! Just a small correction: quadratic memory is not an issue since the introduction of FlashAttention. There are still the limitations of linear memory and quadratic running time.

  • @vivienseguy
    @vivienseguy a year ago

    Great paper review as usual!

  • @alexeybalandin4676
    @alexeybalandin4676 a year ago

    A very concise and clear analysis, thank you very much!

  • @LaSalsePareille92
    @LaSalsePareille92 a year ago

    amazing review of this paper, thanks !

  • @ilia_zaitsev
    @ilia_zaitsev a year ago

    Indeed, it feels like a kind of RNN, but using attention layers instead of dense ones :)
    Or a recurrent transformer, depending on which side you look at it from...

  • @aboody006
    @aboody006 a year ago +1

    Woah I just read this today, and then I see this notification.

  • @learningwithlowell
    @learningwithlowell a year ago

    Great breakdown. Thank you!

  • @yildirimakbal6723
    @yildirimakbal6723 a year ago

    Great summary!

  • @hEmZoRz
    @hEmZoRz a year ago

    I'm really, really waiting for your review on the LongNet that claims to scale to 1B tokens!

  • @RuairiODonnellFOTO
    @RuairiODonnellFOTO a year ago +2

    What note-taking tool is he using? Anyone have tips on organising all the papers/PDFs into a catalogue on my desktop? I've read loads of papers but just put them in one big folder. Any nice research organiser for PDFs or URLs (maybe one that allows annotations for searching later)?

  • @clray123
    @clray123 a year ago +2

    Sounds like the same approach as used by LlamaIndex (aka GPTIndex). It's true that it is not the same as having a 1M-token context window, but the collected facts (and they can be something non-trivial that still fits into the "small" 32K context window) can then be put together, summarized, and inferred from as a final step. So it does in fact resemble what a human would do when extracting information from a long book - take notes on relevant topics while reading it, then write up some conclusions based on those notes alone.

    • @jonathanfranks1286
      @jonathanfranks1286 a year ago

      Sorry, could a model trained like that also output text with a large number of tokens?

    • @clray123
      @clray123 a year ago

      @@jonathanfranks1286 Huh? There is no limit on the number of tokens any model can output.

  • @arthurheuer
    @arthurheuer 11 months ago

    I can hardly believe I laughed when hearing “a humongous 1 million, even 2 million tokens”, in anticipation of how funny it will be in the future…

  • @nettlesoup
    @nettlesoup a year ago

    Not an AI dev so this is just my layman's reading. As other comments have referenced the "paste entire Harry Potter book" example, isn't the advantage of this that you could tell the memorization function what you want it to treat as facts?
    So, you could ask, "Tell me all the spells Hermione casts when Ron is nearby and where they are", and then the first step is to tune the memorization network to detect facts that relate to this and treat any sentences that don't involve any spell casting as noise for memorization purposes. (How? I don't know, some kind of fact filter rule in plain English that gets added to each pass? Presumably you can use a LLM to generate that filter rule text).
    Then the location of the spell casting can be determined from the context of preceding sentences.
    Maybe another memorization could be the list of unique spells as they're taught so they can be detected out of scope, e.g. wingardium levitosa or whatever it is (not a big HP fan sorry).

  • @yorth8154
    @yorth8154 a year ago

    A new billion-token paper is out. Can you make a rundown of it, please?

  • @Verrisin
    @Verrisin a year ago

    I mean, if they learn to generalize the compression ... it could remember a lot of stuff, and drop details but keep the basic idea ...
    - Then it would know "I need to look at X to find details" - it would output that as LOOKUP(X), something would include that thing in near-context (e.g. I look up source of a fn I roughly know) and it could do A LOT.
    - I mean ... this is how I work as a human.
    - I think if they figure out how to train it to have a general enough compression ... this approach is all that is needed.

  • @fitybux4664
    @fitybux4664 a year ago +11

    Maybe you could have it analyze every file in a large code base. Or have it be able to carry on a conversation that is weeks long.

    • @herp_derpingson
      @herp_derpingson a year ago +1

      Maybe

    • @makuru.42
      @makuru.42 a year ago +3

      Or, more importantly, you could have an enormous prompt.

  • @ilianos
    @ilianos a year ago +1

    Hi Yannic, great video! Are you planning to review the following paper? "Low-code LLM: Visual Programming over LLMs"

  • @KevinGChiu
    @KevinGChiu a year ago +1

    How does it know what fact to put into memory before reading the question?

  • @barulicksama3838
    @barulicksama3838 a year ago +1

    You should do more videos on your new chat. You should promote it.

  • @sandratoolan9598
    @sandratoolan9598 a year ago

    Missed you - you look good in the glasses. It's too much of a brand already, dude, no way back.

  • @dinkusstinkus4396
    @dinkusstinkus4396 a year ago

    To me the big reveal was that it had no other architecture, and they did it on a 1060

  • @albinoameise
    @albinoameise a year ago

    Would it be possible to have a step before the transformer that handles the input?
    E.g. first take the last section of the input (which is the task for the transformer) as a query. Then take some memory of fixed length and run an attention block over the input section by section, using the query from before and doing attention between the memory and the current section.
    If that works, the memory would be a dense representation of what is actually important from the input, regardless of length or task.
    Might be difficult to train though... (Rough sketch below.)
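
    A sketch of one way such a pre-step could look (my illustration of the comment's idea, not something from the paper): a fixed-size memory is updated chunk by chunk with cross-attention, conditioned on a query taken from the task.

    ```python
    # Sketch: compress a long input into a fixed-size memory, conditioned on the task query.
    import torch
    import torch.nn as nn

    d_model, n_mem = 256, 16
    cross_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
    memory0 = nn.Parameter(torch.zeros(1, n_mem, d_model))

    def compress(chunks, task_query):
        """chunks: list of (batch, chunk_len, d_model); task_query: (batch, q_len, d_model)."""
        memory = memory0.expand(task_query.size(0), -1, -1)
        for chunk in chunks:
            # Memory slots (plus the task query) attend into the current chunk, so only
            # chunk content relevant to the task gets written into the memory slots.
            queries = torch.cat([memory, task_query], dim=1)
            updated, _ = cross_attn(queries, chunk, chunk)
            memory = updated[:, :n_mem, :]
        return memory  # fixed-size summary, regardless of input length
    ```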

  • @weert7812
    @weert7812 a year ago

    This seems like it could be a way to have agents which have more persistence in time.

  • @marverickbin
    @marverickbin a year ago

    A question:
    BERT is an encoder-only transformer. That means the inputs are token IDs but the outputs are vector embeddings, so they are not the same kind of data. Therefore, you cannot use the output as the input...
    How do they manage to get memory tokens as output if the outputs are vector embeddings? (Sketch below.)
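
    One way to see the answer, sketched here with Hugging Face's BertModel (the model name and sizes are illustrative): the memory never goes back to token IDs. The hidden states at the memory positions are taken as-is and fed into the next segment at the embedding level via inputs_embeds, so input and output stay in the same vector space.

    ```python
    # Sketch: carrying memory as raw hidden states between segments (no de-tokenization).
    import torch
    from transformers import BertModel

    model = BertModel.from_pretrained("bert-base-uncased")
    n_mem, hidden = 10, model.config.hidden_size

    memory = torch.zeros(1, n_mem, hidden)                             # initial memory vectors
    segment_ids = torch.randint(0, model.config.vocab_size, (1, 128))  # dummy segment of token IDs

    token_embeds = model.get_input_embeddings()(segment_ids)           # IDs -> embeddings
    inputs = torch.cat([memory, token_embeds], dim=1)                  # [memory ; segment]
    outputs = model(inputs_embeds=inputs).last_hidden_state

    memory = outputs[:, :n_mem, :]   # these vectors become the memory "tokens" for the next segment
    ```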

  • @SimSim314
    @SimSim314 a year ago

    It would be interesting to see a demo of any such system. Let's say Open Assistant 30B with this...

  • @RuslanLagashkin
    @RuslanLagashkin a year ago

    Overhyping with all my might )
    Seriously though, it is an obvious idea, just well executed. I guess at some point we'll have to write questions before the material to analyze, not in just any part of the prompt as is possible now in ChatGPT.

  • @BO2trickshoting
    @BO2trickshoting a year ago

    This would probably be useful for something like bing chat or just search engines in general.

  • @evennot
    @evennot a year ago +1

    Why don't they just save the input sequence and reiterate over it when a question is presented? It's a genuine question: there's probably a reason there.
    Multiple transformers constantly working with input data (+ using recurrent connections, not in parallel) can't be slower than an additional question-specific transformer reiterating over text.
    Also, dumb reiteration with something specific "in mind" would be nice for spotting contradictory facts in the input.
    People solve some tasks like this. Betting on capturing every possible aspect of the input data in the "context cache" looks like an unsolvable problem to me.

  • @lamhkak47
    @lamhkak47 a year ago

    I wonder if you could do a review of the RWKV model? I heard that model is built by a one-madlad team.

  • @theaugur1373
    @theaugur1373 a year ago +1

    Anyone know how this compares with the Reformer architecture? It was able to scale to about 1 million tokens.

  • @siquod
    @siquod a year ago

    Why do they use autoregressive self-attention to generate and attend to the memory tokens? Wouldn't cross attention make more sense, mostly because then different semantic embeddings could be used for memory facts than for mere tokens?

  • @alexbrown2288
    @alexbrown2288 a year ago

    Yannic looks a lot better without the sunglasses. He'd probably gain subscribers without them.

  • @ground_news
    @ground_news a year ago

    We enjoy watching your content and believe that both of our missions align well! Would love to connect to talk about a partnership

  • @holthuizenoemoet591
    @holthuizenoemoet591 a year ago +1

    So what would be better: increasing the context size of BERT from, for example, 512 to 2048, or using this recurrent memory technique and repeating the 512 four times?

  • @davidlatkin5525
    @davidlatkin5525 a year ago

    Can you make a video about SAM (Segment Anything Model) from Meta?

  • @kristoferkrus
    @kristoferkrus a year ago

    Awesome

  • @jnsi0
    @jnsi0 a year ago

    Seven segments - reminds me of Miller's law 🤔

  • @cchance
    @cchance a year ago

    Is this similar to how automatic1111 surpasses the 75 token cap?

  • @easter.bunny.6
    @easter.bunny.6 a year ago

    Hi Yannic, thanks for your video. After watching it, do you think this model can be used in a decoder-only architecture?

  • @danielhenderson7050
    @danielhenderson7050 a year ago

    24:33 sketch is kinda funny :D

  • @thegistofcalculus
    @thegistofcalculus a year ago

    It may be possible to use this architecture to read backwards and look for an answer instead of trying to memorize facts that may or may not be relevant when the question comes. Or maybe iterate forward with awareness of the question that is otherwise presented at the end.

  • @thegreenxeno9430
    @thegreenxeno9430 a year ago

    Attention should be sentence specific. Label grammatically- noun, verb, etc. Store labels locally in a vector db to remember context (conversation, story, etc.) Run transformer on vdb. [context labelling]
    Next step, analysis engine stores 'understandings' in rdb. ¿

    • @thegreenxeno9430
      @thegreenxeno9430 a year ago

      Like, the rules of grammar already exist. Just apply that labelling scheme.

  • @-mwolf
    @-mwolf a year ago

    Transformer-XL reminds me of the forward-forward algorithm

  • @codemark7464
    @codemark7464 a year ago

    thanks a lot!

  • @emmanuelkolawole6720
    @emmanuelkolawole6720 a year ago

    Hey Yannic, why don't you add PandasAI to your Open Assistant project? It will take the product to a new level of traffic. Also, support the PandasAI project so it can go beyond beta soon.

  • @serta5727
    @serta5727 a year ago

    Algo Support

  • @snippletrap
    @snippletrap a year ago +2

    How does it compare with RWKV?

    • @snapo1750
      @snapo1750 a year ago +1

      In theory RWKV is completely different from transformers, as it uses ONLY an RNN. Because RWKV uses only RNNs, there is no input context length limit, but in the learning process they only feed (afaik) 8k tokens, therefore it should not be able to know more. The more beautiful thing about RWKV is that you don't need to quadratically increase your VRAM 🙂

  • @DaniilKirilenko
    @DaniilKirilenko a year ago +2

    Hi Yannic! What PDF reader do you use?

  • @serta5727
    @serta5727 a year ago

    Cool thing❤

  • @Veptis
    @Veptis 8 months ago

    Took 10 months for Google to come up with Gemini ... but they aren't telling us exactly how.

  • @darklordvadermort
    @darklordvadermort a year ago

    any comments/thoughts on hyena?

  • @zerotwo7319
    @zerotwo7319 a year ago

    lol, a few weeks ago I was talking about how that was a limitation, but ... what a time to be alive.

  • @samsamhuns928
    @samsamhuns928 a year ago +1

    Sounds like RNNs with extra steps lol

  • @davidconsumerofmath
    @davidconsumerofmath a year ago

    Load in entire code bases!!

  • @creativityoverload2049
    @creativityoverload2049 a year ago

    So can it do machine translation?

  • @timeTegus
    @timeTegus a year ago

    " So u are saying i can out in all harrypqtrer bools and ask qestions about them "😂

  • @m4ng4n
    @m4ng4n a year ago

    How does this fare vs MEGA?

  • @preddyshite6342
    @preddyshite6342 a year ago

    I'm running out of pants to shit

  • @rumfordc
    @rumfordc a year ago +1

    Why does Open Assistant brown-nose for the WEF?

    • @Phasma6969
      @Phasma6969 a year ago

      How?

    • @rumfordc
      @rumfordc a year ago +2

      @@Phasma6969 It describes them as heroes saving the world and agrees with every single one of their publicly stated agendas. It will even go so far as to ignore overrides on those topics (up to a point). I can understand how Microsoft and Google would reach this sort of behavior but am curious as to how Open Assistant comes by it.

    • @alexandermathews9710
      @alexandermathews9710 a year ago

      @@rumfordc Probably because the data all the models are absorbing shares similar outlooks

    • @rumfordc
      @rumfordc a year ago +1

      @@alexandermathews9710 Yeah, it's as if they're just pulling from the WEF's website and nowhere else. They should probably diversify their training set.

    • @alexandermathews9710
      @alexandermathews9710 a year ago +4

      @@rumfordc No, I think the sheer amount of data that has been generated is in agreement with the WEF. This is one of the dangers of AI:
      a lack of diversity in data overall. It's not that WEF information is purposefully selected; it's that the sheer amount of it makes it look that way.

  • @Addoagrucu
    @Addoagrucu a year ago

    I don't know about this take. I kind of agree, except I think you're a bit too harsh on the utility this paper brings. To steelman the Twitter hype, I could say that the tradeoff between memory requirement (linear for this technique) and amount of functionality learned (which I think can be pushed further with better datasets) might make this a contender for a pretty robust method for large-scale NLP. A study on how much complicated language-understanding benchmarks suffer as a result of using all available VRAM to fit multiple copies of the same transformer into memory for backprop through time, as opposed to using all available VRAM to fit one big transformer, would be helpful in trying to guide our opinions with empiricism.

  • @kaikapioka9711
    @kaikapioka9711 a year ago

    Finally.

  • @lio1234234
    @lio1234234 a year ago +2

    Awesome stuff! Do you think this will be integrated into Open Assistant?

  • @klammer75
    @klammer75 a year ago +6

    Well put and eloquently described… gotta admit I was starstruck when I first saw the headline, but you're right, it's an RNN, not an absurdly long transformer window… Thank you for this 😎🦾

  • @NeoShameMan
    @NeoShameMan a year ago

    I was hyped for 500ms only, does that count?

  • @binjianxin7830
    @binjianxin7830 a year ago

    7:44 Maybe it's that the model needs to be able to rule out negative facts?

  • @nevokrien95
    @nevokrien95 a year ago

    This isn't new, and it's relatively oversimplified.
    We already have Perceiver IO and Transformer-LSTM.

  • @MultiCraftTube
    @MultiCraftTube a year ago

    The Italians are coming 😱

  • @draken5379
    @draken5379 a year ago +2

    This 'RMT' seems really pointless. You can just use the same main LLM to turn text into embeddings and store them in a vectorstore database. Then you are able to search that vectorstore for everything related to the incoming input, allowing an LLM to have a massive collection of data that is retrieved in a natural-language way.
    Super simple example (sketch below):
    I told my bot, "Dogs like blue, cats like red, rats like yellow."
    The LLM itself detects these 'facts' in the input and redirects them to a 'fact save' function, which saves each fact to a vectorstore.
    I then asked: what color do dogs like?
    The vectorstore DB is then queried with that input, which returns "dogs like blue", which gets fed into the LLM along with the current input as a 'fact'.
    A crude and simple example, but it shows you don't really need to go code out a totally new neural net just to handle something an LLM can already handle by design.
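
    A toy version of that flow (embed() is a placeholder for whatever sentence-embedding model you use; cosine similarity over an in-memory list stands in for a real vector database):

    ```python
    # Toy "fact save" + retrieval: store fact embeddings, fetch the closest fact for a query.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        raise NotImplementedError("plug in a sentence-embedding model here")

    facts: list[tuple[str, np.ndarray]] = []

    def save_fact(fact: str) -> None:
        facts.append((fact, embed(fact)))

    def lookup(query: str) -> str:
        q = embed(query)
        scores = [float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))) for _, v in facts]
        return facts[int(np.argmax(scores))][0]

    # save_fact("Dogs like blue"); save_fact("Cats like red"); save_fact("Rats like yellow")
    # lookup("What color do dogs like?")  -> "Dogs like blue", fed to the LLM with the question
    ```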

    • @BO2trickshoting
      @BO2trickshoting a year ago

      Do you think this is what Bing Chat uses?

    • @draken5379
      @draken5379 a year ago

      @@BO2trickshoting Yeah, from what I've heard. The way Stripe, Bing, Spotify, etc. are handling memory is via vectorstores.

  • @fontenbleau
    @fontenbleau a year ago

    A paper is a paper, but where is a working test...

  • @АлександрКрасных-щ9б

    I know that Kuratov

  • @qeter129
    @qeter129 a year ago

    1 gagillion tokens of context...

  • @theosalmon
    @theosalmon a year ago

    It's not 100% transformer. That in itself is noteworthy.

  • @SaiNivedh-d6t
    @SaiNivedh-d6t a year ago

    16:48 😂😂

  • @ivanstepanovftw
    @ivanstepanovftw a year ago

    I don't like this idea from the paper...
    Why not just make embeddings of previous context?

  • @nevokrien95
    @nevokrien95 a year ago

    Why do you trust the dataset if even the example in the paper is wrong?
    This seems to be an indicator of poor data quality.

  • @DouwedeJong
    @DouwedeJong a year ago

    overhype contribution checked = checked.