OUTLINE:
0:00 - Intro
2:15 - Transformers on long sequences
4:30 - Tasks considered
8:00 - Recurrent Memory Transformer
19:40 - Experiments on scaling and attention maps
24:00 - Conclusion
Paper: arxiv.org/abs/2304.11062
TLDR: use a Transformer as an RNN. Imagine an LSTM, but each LSTM block is replaced by a Transformer. Train it by backpropagating through 7 steps of the RNN ("backprop through time", or BPTT).
Why now? Because algorithms and hardware have finally caught up enough to fit 7 copies of the Transformer onto one device.
What next? Perhaps rematerialization!
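To make the TLDR concrete, here is a minimal PyTorch-style sketch of the segment-level recurrence (my own toy illustration, not the authors' code; all sizes and the single read/write memory slot are assumptions):

```python
import torch
import torch.nn as nn

d_model, n_mem, seg_len, n_segments = 256, 10, 64, 7    # toy sizes, 7 BPTT steps

embed = nn.Embedding(1000, d_model)                     # toy vocabulary of 1000 tokens
layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
transformer = nn.TransformerEncoder(layer, num_layers=2)
init_memory = torch.zeros(1, n_mem, d_model, requires_grad=True)

tokens = torch.randint(0, 1000, (1, n_segments * seg_len))   # fake long input
mem, outputs = init_memory, []
for s in range(n_segments):                             # the "RNN" loop
    segment = embed(tokens[:, s * seg_len:(s + 1) * seg_len])
    x = torch.cat([mem, segment], dim=1)                # [memory tokens | segment]
    y = transformer(x)                                  # same weights at every step
    mem = y[:, :n_mem]                                  # updated memory -> next step
    outputs.append(y[:, n_mem:])

loss = torch.cat(outputs, dim=1).pow(2).mean()          # dummy objective
loss.backward()                                         # backprop through time over 7 segments
```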
Is Open Assistant open to submissions of home video recordings for training data?
Yay, a normal video after what feels like years.
Also, is it just me, or have recent papers become increasingly easier to read? There is no obscure math, and the code is published.
As Clyde from South Park would say, “ChatGPT, dude”
I miss ML news :(
Yes, I've noticed this as well - publications have become a lot shorter and more focused on practical applications.
@@Nif3 They have also become less novel; it's hard to find a paper that is both simple (and has published code) and novel
I've always loved the in depth paper reviews! Thanks so much for this one, it was great!
Thank you for great analysis that is accessible even to laymen like myself. Always a pleasure to watch your videos in contrast to AI hype riders (AAAAAAAAAAAAA TWO MILLION TOKENS CTX LENGTH IS HERE!!!11)
Thanks for finally saying it. I have seen quite a few AI specialty channels talking about pasting stuff like the entire Harry Potter book series into a single prompt box :) OMG, I couldn't even comment.
Ty for covering this
Nice. I was interested in that paper. Video came out right on time
Thank you for this nice video. Being brand new to this field, I nevertheless find your presentation and explanations very clear and easy to follow. I also appreciate your skepticism and how you look behind the hype.
I appreciate you taking the time to reduce the hype on this paper for non experts.
Was literally waiting for your take on this paper, thx for covering it!
Information only propagates bottom up in Transformer-XL so the maximum "receptive field" (effective context length) is finite regardless of how far back the BPTT goes. To be more precise, O(LC): L = number of layers, C = context length of each layer.
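(As a concrete but hypothetical example: with L = 12 layers and a segment length of C = 512 tokens, information can propagate at most roughly 12 × 512 ≈ 6k tokens back through the recurrence, no matter how many segments the BPTT unroll covers.)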
I was very skeptical when people were saying that it could read an entire book, in memory, all at once. As it turns out it was all just hype. Go figure.
Will you do Hyena next? Thanks!
Finally a paper review!!!
This seems to work only when the information density is sparse enough not to overwhelm the input memory
For now. I guess you could let it control its own read speed so it runs at the pace it wants, potentially even with backtracking. It is currently working like a book that turns its own pages at a set rate, no matter how fast the reader feels is appropriate.
@@agsystems8220 Well, the input size could also be varied with different pretrained model sizes and potentially smaller chunks, and the overwhelming of the inputs could be detected and adjusted for as well
Great explanation! The fact that the video is
I actually had a very silly idea at one point where you would have a transformer model doing general processing and understanding, with the catch that it would rapidly forget information. However, each time it learned something, a small percentage of the weights involved would be sent to an RNN, almost in the background. The idea was that the RNN would be long-term memory, and it would only learn things that were reinforced many times, ideally retaining specifically facts and figures.
This isn't the same thing, but it seems that somebody had a similar thought.
Finally! You don't know it, but I am waiting every day for a new video.
Having used the 30b model you guys created, I can say with confidence that it is an amazing model, far exceeding what I thought it would be capable of. Its comprehension appears to be at least GPT 3.5 level if not better. Well done.
Tell me you haven't used ChatGPT 3.5 in a while without telling me
OpenAssistant is absolutely not at ChatGPT's level. It is pretty good though, and certainly the best of the open source models out right now. I look forward to the next major iteration, and more importantly, I'M DOING MY PART! Contribute to the Oasst dataset!
I guess the interesting part is that they didn't use any additional weights to process memory. BERT's lack of causal masking makes it possible to update the memory just by passing it through the transformer layers. This method might be fundamentally incompatible with autoregressive models.
It might be possible to use a NN trained this way with other forms of memory - I would guess it doesn't really care if memory tokens come from the previous segment or elsewhere. So you can have a memory database and look up the most relevant memory for a specific segment.
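For that last point, here is a tiny sketch of what such a lookup could look like (purely my speculation; the `memory_bank` and the mean-pooled cosine similarity are made up for illustration):

```python
import torch
import torch.nn.functional as F

def pick_memory(memory_bank, segment_embeds):
    """memory_bank: (N, n_mem, d) previously stored memory states;
    segment_embeds: (seg_len, d) embeddings of the current segment."""
    query = segment_embeds.mean(dim=0)                   # crude summary of the segment
    keys = memory_bank.mean(dim=1)                       # one key vector per stored memory
    sims = F.cosine_similarity(keys, query.unsqueeze(0), dim=-1)
    return memory_bank[sims.argmax()]                    # (n_mem, d) memory to prepend
```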
Overhype for the algo
Really appreciate not just the breakdown but you injecting your intuition throughout. Great vid
Seems like you could do the same thing with prompting. Maybe even better. Just feed it chunks of the overall text with a prompt to take notes on information relevant to the question. Then use all the notes to answer. You could also do it with a vector database.
You should do a video on the block-recurrent transformer! It's a mix between an LSTM and a transformer and achieves SOTA on PG-19.
Great video as always! Just a small correction. Quadratic memory is not an issue since the introduction of flash attention. There are still the limitations of linear memory and quadratic running time.
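As a quick illustration of that point (assuming PyTorch 2.x, where scaled_dot_product_attention can dispatch to FlashAttention-style fused kernels when they are available):

```python
import torch
import torch.nn.functional as F

B, H, L, D = 1, 8, 4096, 64                    # batch, heads, sequence length, head dim
q = torch.randn(B, H, L, D)
k = torch.randn(B, H, L, D)
v = torch.randn(B, H, L, D)

# A fused kernel avoids materializing the full L x L attention matrix, so peak
# memory grows roughly linearly in L, while compute remains O(L^2).
out = F.scaled_dot_product_attention(q, k, v)
print(out.shape)                               # torch.Size([1, 8, 4096, 64])
```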
Great paper review as usual!
A very concise and clear analysis, thank you very much!
amazing review of this paper, thanks !
Indeed, feels like a kind of RNN, but using attention layers instead of the dense ones :)
Or a recurrent transformer, depending on which side you look at it from...
Woah I just read this today, and then I see this notification.
Great breakdown. Thank you!
Great summary!
I'm really, really waiting for your review on the LongNet that claims to scale to 1B tokens!
What note-taking tool is he using? Anyone have tips on organising all the papers/PDFs into a catalogue on my desktop? I've read loads of papers but just put them in one big folder. Any nice research organiser for PDFs or URLs (maybe one that allows annotations for searching later)?
Sounds like the same approach as used by LlamaIndex (aka GPTIndex). It's true that it is not the same as having a 1M-token context window, but the collected facts (which can be something non-trivial that still fits into the "small" 32K context window) can then be put together, summarized, and inferred from as a final step. So it does in fact resemble what a human would do when extracting information from a long book: take notes on relevant topics while reading it, then write up conclusions based on those notes alone.
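A rough sketch of that read-take-notes-then-answer loop (the `ask_llm` helper is hypothetical and stands in for whatever LLM call you actually use):

```python
def answer_from_long_text(text, question, ask_llm, chunk_size=3000):
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    notes = []
    for chunk in chunks:                                   # "map": take notes per chunk
        notes.append(ask_llm(
            f"Question: {question}\n"
            f"Text: {chunk}\n"
            "Write brief notes on anything relevant to the question."))
    # "reduce": answer using only the accumulated notes
    return ask_llm(f"Question: {question}\nNotes:\n" + "\n".join(notes))
```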
Sorry, could a model trained like that also output text with a large number of tokens?
@@jonathanfranks1286 Huh? There is no limit on the number of tokens any model can output.
I can hardly believe I laughed when hearing “a humongous 1 million, even 2 million tokens”, in anticipation of how funny it will sound in the future…
Not an AI dev so this is just my layman's reading. As other comments have referenced the "paste entire Harry Potter book" example, isn't the advantage of this that you could tell the memorization function what you want it to treat as facts?
So, you could ask, "Tell me all the spells Hermione casts when Ron is nearby and where they are", and then the first step is to tune the memorization network to detect facts that relate to this and treat any sentences that don't involve any spell casting as noise for memorization purposes. (How? I don't know, some kind of fact-filter rule in plain English that gets added to each pass? Presumably you can use an LLM to generate that filter rule text.)
Then the location of the spell casting can be determined from the context of preceding sentences.
Maybe another memorization could be the list of unique spells as they're taught so they can be detected out of scope, e.g. wingardium levitosa or whatever it is (not a big HP fan sorry).
New billion-token paper out. Can you do a rundown of it, please?
I mean, if they learn to generalize the compression ... it could remember a lot of stuff, and drop details but keep the basic idea ...
- Then it would know "I need to look at X to find details" - it would output that as LOOKUP(X), something would include that thing in near-context (e.g. I look up source of a fn I roughly know) and it could do A LOT.
- I mean ... this is how I work as a human.
- I think if they figure out how to train it to have a general enough compression ... this approach is all that is needed.
Maybe you could have it analyze every file in a large code base. Or have it be able to carry on a conversation that is weeks long.
Maybe
Or, more importantly, you could have an enormous prompt.
Hi Yannic, great video! Are you planning to review the following paper? "Low-code LLM: Visual Programming over LLMs"
How does it know what fact to put into memory before reading the question?
You should do more videos on your new chat. You should promote it.
Missed you. You look good in the glasses - it's too much of a brand already dude, no way back.
To me the big reveal was that it had no other architecture, and they did it on a 1060
Would it be possible to have a step before the transformer that handles the input?
E.g. first take the last section of the input (which is the task for the transformer) as a Query. Then take some memory of fixed length and run an attention block over the input section by section, taking the Query from before and doing attention between the memory and the current section.
If that works, the memory would be a dense representation of what is actually important from the input, regardless of length or task.
Might be difficult to train though...
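Something like this, perhaps (a rough, untested interpretation of the idea; the sizes and the way the memory is updated via cross-attention are my assumptions):

```python
import torch
import torch.nn as nn

d_model, n_mem, seg_len = 256, 16, 128
cross_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

def build_memory(query_tokens, input_embeds):
    """query_tokens: (1, q_len, d) the task/question; input_embeds: (1, total_len, d)."""
    memory = torch.zeros(1, n_mem, d_model)
    for start in range(0, input_embeds.size(1), seg_len):
        section = input_embeds[:, start:start + seg_len]
        context = torch.cat([query_tokens, section], dim=1)
        # memory attends over [query | current section]; its size never grows
        memory, _ = cross_attn(memory, context, context)
    return memory                                          # (1, n_mem, d) dense summary
```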
This seems like it could be a way to have agents which have more persistence in time.
A question:
BERT is an encoder-only transformer. That means the inputs are token IDs, but the outputs are vector embeddings, so they are not the same kind of data. Therefore, you cannot use the output as the input...
How do they manage to get memory tokens as output if the outputs are vector embeddings?
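As far as I understand the paper, the trick is that the memory is never turned back into token IDs: the output hidden states at the memory positions are kept as vectors and concatenated with the *embedded* next segment. A toy sketch of that (my reading, not the authors' code):

```python
import torch
import torch.nn as nn

d_model, n_mem = 256, 10
embed = nn.Embedding(1000, d_model)
layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

mem = torch.zeros(1, n_mem, d_model)               # memory lives in embedding space
segment_ids = torch.randint(0, 1000, (1, 64))      # only the real text is token IDs
x = torch.cat([mem, embed(segment_ids)], dim=1)    # concatenation happens after embedding
y = encoder(x)
mem = y[:, :n_mem]                                 # output vectors become the next memory
```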
It would be interesting to see a demo of any such system. Let's say Open Assistant 30B with this...
Overhyping with all my might )
Seriously though, it is an obvious idea, just well executed. I guess at some point we'll have to write questions before the material to analyze, rather than in just any part of the prompt, as it is now in ChatGPT.
This would probably be useful for something like bing chat or just search engines in general.
Why don't they just save the input sequence and reiterate over it when a question is presented? It's a genuine question: there's probably a reason there.
Multiple transformers constantly working with input data (+ using recurrent connections, not in parallel) can't be slower than an additional question-specific transformer reiterating over text.
Also dumb reiteration with something specific "in mind" would be nice for spotting contradicting facts from the input.
People solve some tasks like this. Betting on capturing all possible aspects of the input data in the "context cache" looks like an unsolvable problem to me
I wonder if you could do a review of the RWKV model? Heard that model is built by a one-madlad team
Anyone know how this compares with the Reformer architecture? It was able to scale to about 1 million tokens.
Why do they use autoregressive self-attention to generate and attend to the memory tokens? Wouldn't cross attention make more sense, mostly because then different semantic embeddings could be used for memory facts than for mere tokens?
Yannic looks a lot better without the sunglasses. He'd probably gain subscribers without them.
We enjoy watching your content and believe that both of our missions align well! Would love to connect to talk about a partnership
So what would be better: increasing the context size of BERT from, say, 512 to 2048, or using this recurrent memory technique and repeating the 512 four times?
Obviously increasing BERT's context size
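A back-of-the-envelope comparison of the attention cost for that question (toy numbers; the 10 memory tokens are an assumption):

```python
full_context = 2048 ** 2                  # 4,194,304 attention entries in one pass
per_segment = (512 + 10) ** 2             # ~272,484 entries with 10 memory tokens
recurrent = 4 * per_segment               # ~1,089,936 entries total over 4 passes
# (training with BPTT still keeps activations for all 4 segments in memory)
print(full_context, per_segment, recurrent)
```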
Can you make a video about SAM (Segment Anything Model) from Meta?
Awesome
Seven segments - reminds me of Miller's law 🤔
Is this similar to how automatic1111 surpasses the 75 token cap?
Hi Yannic, thanks for your video. After watching it, do you think this model can be used in a decoder-only architecture?
24:33 sketch is kinda funny :D
It may be possible to use this architecture to read backwards and look for an answer instead of trying to memorize facts that may or may not be relevant when the question comes. Or maybe iterate forward with awareness of the question that is otherwise presented at the end.
Attention should be sentence-specific. Label grammatically - noun, verb, etc. Store the labels locally in a vector DB to remember context (conversation, story, etc.). Run the transformer on the vector DB. [context labelling]
Next step: an analysis engine stores 'understandings' in a relational DB. ¿
Like, the rules of grammar already exist. Just apply that labelling scheme.
Transformer-XL reminds me of the forward-forward algorithm
thanks a lot!
Hey Yannic, why don't you add pandasAI to your open assistant project? It will take the product to a new level of traffic. Also support the pandasAI project so it can go beyond beta soon
Algo Support
How does it compare with RWKV?
In theory RWKV is completely different from transformers, as it uses ONLY RNNs. Because RWKV uses only RNNs, there is no input context length limit, but in the learning process they only feed (afaik) 8k tokens, so it should not be able to make use of more. The more beautiful thing about RWKV is that you don't need to quadratically increase your VRAM 🙂
Hi Yannic! What pdf-reader do you use?
OneNote
Cool thing❤
Took 10 months for Google to come up with Gemini... but they aren't telling us how exactly.
any comments/thoughts on hyena?
lol, a few weeks ago I was talking about how that was a limitation, but... what a time to be alive.
Sounds like RNNs with extra steps lol
Load in entire code bases!!
So can it do machine translation?
" So u are saying i can out in all harrypqtrer bools and ask qestions about them "😂
How does this fare vs MEGA?
I'm running out of pants to shit
Why does Open Assistant brown-nose for the WEF?
How?
@@Phasma6969 It describes them as heroes saving the world and agrees with every single one of their publicly stated agendas. It will even go so far as to ignore overrides on those topics (up to a point). I can understand how Microsoft and Google would reach this sort of behavior but am curious as to how Open Assistant comes by it.
@@rumfordc Probably because the data all the models are absorbing shares similar outlooks
@@alexandermathews9710 Yeah, it's as if they're just pulling from the WEF's website and nowhere else. They should probably diversify their training set.
@@rumfordc No, I think it's that the sheer amount of data that has been generated is in agreement with the WEF. This is one of the dangers of AI:
a lack of diversity in data overall. It's not that WEF information is purposefully selected; it's that the sheer amount of it makes it look that way.
I don't know about this take. I kind of agree, except I think you're a bit too harsh on the utility this paper brings. To steelman the Twitter hype: the tradeoff between memory requirement (linear for this technique) and amount of functionality learned (which I think can be pushed further with better datasets) might make this a contender for a pretty robust method for large-scale NLP. A study of how much complicated language-understanding benchmarks suffer when you use all available VRAM to fit multiple copies of the same transformer into memory for backprop through time, as opposed to using all available VRAM to fit one big transformer, would help guide our opinions with empiricism.
Finally.
Awesome stuff! Do you think this will be integrated into Open Assistant?
Well put and eloquently described… gotta admit I was starstruck when I first saw the headline, but you're right, it's an RNN, not an absurdly long transformer window… Thank you for this 😎🦾
I was hyped for 500ms only, does that count?
7:44 Maybe it's that the model needs to be able to rule out negative facts?
This isn't new, and it's relatively oversimplified.
We have Perceiver IO and Transformer-LSTM
The Italians are coming 😱
This 'RMT' seems really pointless. You can just use the same main LLM to turn text into embeddings and store them in a vector-store database. Then you are able to search that vector store for everything related to the incoming input, allowing an LLM to have a massive collection of data that is retrieved in a natural-language way.
Super simple example:
I told my bot, "Dogs like blue, cats like red, rats like yellow".
The LLM itself detects these 'facts' in the input and redirects them to a 'fact save' function, which saves each fact to a vector store.
I then asked: what color do dogs like?
The vector-store DB is then queried with that input, which returns "dogs like blue", and that gets fed into the LLM along with the current input as a 'fact'.
A crude and simple example, but it shows you don't really need to code out a totally new neural net just to handle something an LLM can already handle by design.
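A toy version of that flow, with made-up helpers `embed_text` (text → vector) and `ask_llm` (prompt → answer) standing in for whichever embedding model and LLM you actually use:

```python
import numpy as np

fact_store = []                                        # list of (embedding, fact) pairs

def save_fact(fact, embed_text):
    fact_store.append((np.asarray(embed_text(fact)), fact))

def answer(question, embed_text, ask_llm, top_k=3):
    q = np.asarray(embed_text(question))
    def score(item):                                   # cosine similarity to the question
        v = item[0]
        return float(np.dot(v, q) / (np.linalg.norm(v) * np.linalg.norm(q) + 1e-9))
    best = sorted(fact_store, key=score, reverse=True)[:top_k]
    facts = "\n".join(fact for _, fact in best)
    return ask_llm(f"Known facts:\n{facts}\n\nQuestion: {question}")

# save_fact("Dogs like blue", embed_text)
# answer("What color do dogs like?", embed_text, ask_llm)
```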
Do you think this is what Bing Chat uses?
@@BO2trickshoting Yeah, from what I've heard. The way Stripe, Bing, Spotify etc. are handling memory is via vector stores.
A paper is a paper, but where is a working test...
I know that Kuratov
1 gagillion tokens of context...
It's not 100% transformer. That in itself is noteworthy.
16:48 😂😂
I don't like this idea from the paper...
Why not just make embeddings of previous context?
Why do you trust the dataset if even the example in the paper is wrong?
This seems to be an indicator of poor data quality
overhype contribution checked = checked.