What about LSTMs? You briefly showed the paper, but didn't mention them, even though they were supposed to be the solution to the vanishing and exploding gradients problem.
LSTMs do better than regular RNNs at remembering. A regular RNN will forget what it saw 20 tokens ago; LSTMs can remember for a few hundred tokens, maaaybe up to 1000, but after that they forget as well. This is because LSTMs don't completely fix vanishing and exploding gradients, they just make them vanish more slowly (basically because the sigmoid gates they use saturate and can't output values extremely close to 0 or 1). When people say LSTMs fix vanishing and exploding gradients, they mean LSTMs have less vanishing and exploding gradient compared to regular RNNs. Mamba, on the other hand, can remember for at least hundreds of thousands of tokens. Also, LSTMs aren't parallelizable, so it isn't practical to train large-scale LSTMs on modern hardware. Recently the author of LSTMs put out a new paper with new versions of LSTMs to fix these issues (called xLSTM), but from what I can tell xLSTM just performs worse than Mamba in every way.
Enjoyed this. Given that its performance is comparable to or better than transformers as verified independently in several papers, is Mamba gaining a foothold among practitioners?
Definitely, lots of open-source language models are switching to Mamba. Mamba is also being used for other tasks as well, e.g. arxiv.org/abs/2401.09417 . Also, Google DeepMind recently released this paper ( arxiv.org/abs/2402.19427 ) on hybrid dynamic linear RNN and transformer models, which achieves really good results. Dynamic linear RNNs are definitely going to become mainstream.
So the transformation applied to the weights isn't purely about initialization? Instead, in the expression w=exp(-exp(a)*exp(ib)), the numbers a and b are the learned parameters and not w, right?
People keep making things that they say are "better than transformers", but none of them are actually getting used. At this point, hearing people say that has sort of become meaningless from the number of false alarms. Feels like every few months we have something "better than transformers", like RetNets were claimed to be. We'll have to wait and see which actually turn out to be better with time.
investor money is generally spent conservatively. it will take at least a few months for them to see the upside in divesting from super large transformers and moving on to MAMBA (or upcoming derivatives). Remember, Transformer was first published in 2017, and it took until at least 2020 for any "large" (> 3B) model to come out.
I haven't found any significant evidence suggesting that Mamba models outperform Transformers, except that they do not scale quadratically with the context length the way attention does. Am I missing something?
@@ilonachan Sure, but as far as I'm concerned, there is not much evidence it can qualitatively perform the same tasks either. Some people reported that Mamba's state space doesn't perform as well as true attention for long contexts.
I do not, I'd recommend checking out the latest papers for each (Mamba: arxiv.org/pdf/2405.21060 , RWKV: arxiv.org/pdf/2404.05892 ) and seeing which performs better on tasks that are similar to your use case.
Quick question: I guess if you want a true linear recurrence from real-valued to real-valued, you could use the Hermitian of P for P^-1? That would also eliminate optimizing for Q...
You could, but there isn't really any need to. The complex version performs the same as strictly real recurrences (actually, in some cases better). And optimizing for Q doesn't really have much cost, even if you used the Hermitian of P in place of Q you would still need to back-prop through it.
@@algorithmicsimplicity Although I still don't get the backprop argument... If you backpropagate through P, computing the Hermitian has a closed-form solution... It's the complex version of a matrix transpose.
@@drjenschn Sure, say we compute the output of a layer as y = P^T D P x. When we are backpropagating we need to compute the gradient of y w.r.t. x, which means computing (P^T D P)^T y'. If you use a completely separate Q instead of P^T, computing this gradient still has the same cost. The only advantage of reusing P is that you don't have to update the Q matrix as well, but updating weights is a relatively small computation compared to calculating (Q D P)^T y'.
@@algorithmicsimplicity Got it now. I was originally talking about "optimizing" for P^-1 (learning the matrix weights). Back-prop is still necessary, correct. Thx!
[6:28]: While that sounds good, in practice it doesn't quite work like that. Alternating between linear recurrent and non-linear dense layers doesn't give that much of a context advantage :( The gradients vanish or explode after a while and require some sort of sigmoid transformation + some value. Say, for example, an architecture like this:
```plaintext
Dense -> Sigmoid -> Recurrent -> Dense -> Sigmoid -> Recurrent -> Dense -> Softmax
```
By the time the gradients reach the first Recurrent layer they have lost most of their value :(
Love the video, but I have a question: shouldn't the approximation at 17:00 be something like n*w^(n-1)*0.001*x, i.e. isn't there an n missing? Or how was the approximation done?
Ahh yes you're right, there should be an n out the front, the gradient is proportional to nw^(n-1)x. The vanishing/exploding gradient arguments are still the same though, the linear scaling factor doesn't matter compared to the exponential scaling for large n.
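In case anyone wants to sanity-check this numerically, here's a tiny Python sketch (my own toy code, not from the video) showing that the exponential factor w^(n-1) dominates the linear factor n:
```python
# Toy check: d/dw (w**n * x) = n * w**(n-1) * x.
# The exponential factor w**(n-1) dominates the linear factor n.
def recurrence_grad(w, x, n):
    return n * w ** (n - 1) * x

x = 1.0
for w in (0.9, 1.0, 1.1):
    print(w, [recurrence_grad(w, x, n) for n in (10, 100, 1000)])
# w = 0.9 -> gradient shrinks towards 0 (vanishing) despite the factor of n
# w = 1.0 -> gradient grows only linearly in n
# w = 1.1 -> gradient blows up (exploding)
```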
At 31:02, I agree that Mamba has linear O(n) memory requirements. However, why don't transformers have quadratic O(n^2) memory requirements? They need to store the attention matrices that are n x n. I'm surely missing something.
You don't need to materialize the full nxn matrix in memory at the same time. You can instead materialize only a chunk of it, sum over that chunk, and then materialize the next chunk in the same memory slot. This is how, for example, FlashAttention and FlashAttention2 work. When you do this the memory requirement is O(n).
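Here is a rough NumPy sketch of that idea (my own simplification, not the actual FlashAttention kernel, which also tiles over keys and uses an online softmax): attention is computed one query row at a time, so only a length-n buffer is ever live instead of the full n x n matrix:
```python
import numpy as np

def attention_row_by_row(Q, K, V):
    """Same result as softmax(Q K^T / sqrt(d)) V, but only one row of the
    attention matrix is materialized at a time: O(n) extra memory,
    still O(n^2) compute."""
    n, d = Q.shape
    out = np.empty_like(V)
    for i in range(n):
        scores = K @ Q[i] / np.sqrt(d)   # length-n row of the attention matrix
        scores -= scores.max()           # for numerical stability
        weights = np.exp(scores)
        weights /= weights.sum()         # softmax over this row only
        out[i] = weights @ V             # weighted sum of value vectors
    return out

Q, K, V = (np.random.randn(8, 4) for _ in range(3))
print(attention_row_by_row(Q, K, V).shape)  # (8, 4)
```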
It depends on the algorithm used for the parallel scan, in this video I described an O(nlog(n)) algorithm, in practice there are actually O(n) parallel scan algorithms and Mamba uses one of them.
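For anyone curious what "parallel scan" means here, a minimal Python sketch (my own toy code, not the Mamba kernel): the recurrence h_t = a_t*h_{t-1} + x_t can be phrased as a scan over pairs (a, x) with an associative combine rule, which is what makes both the sequential O(n) version and the parallel versions possible:
```python
# Combine rule for the scan: (a1, b1) o (a2, b2) = (a1*a2, a2*b1 + b2).
# It is associative, which is what lets the recurrence be computed by a
# (sequential or parallel) scan instead of a strictly serial loop.
def combine(left, right):
    a1, b1 = left
    a2, b2 = right
    return (a1 * a2, a2 * b1 + b2)

def sequential_scan(pairs):
    outputs, acc = [], (1.0, 0.0)        # identity element: multiply by 1, add 0
    for p in pairs:
        acc = combine(acc, p)
        outputs.append(acc[1])           # h_t is the additive part
    return outputs

a = [0.5, 0.9, 1.0, 0.8]                 # per-step recurrent weights a_t
x = [1.0, 2.0, 3.0, 4.0]                 # per-step inputs x_t
print(sequential_scan(list(zip(a, x))))  # matches h_t = a_t*h_{t-1} + x_t with h_0 = 0
```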
As someone actively working on this stuff, this channel has the best explanations on the internet, and the 'tuber actually understands what is going on.
3blue1brown of deep learning?
I'd love feedback on Reddit if you're working on this, as well as on the Cosmo Knowledge YouTube channel where I threw up some concepts.
About peer review: As one comment noted, there could be many more candidate papers than could be accommodated at the venue. However, as this video argues, the justification given for rejecting this paper is inadequate at best. Some comments ask whether the rejection matters; for academics, the answer is yes, because presentations and publications count toward tenure, promotions, and raises, plus continued funding of the research. Since several comments plus the video indicate that the algorithm had already received a lot of publicity, the rejection may not matter much for the project itself, which can likely continue to be funded, especially if commercial implementations are successful.

What is interesting in any case is that the paper exists; in effect it has been published. The authors may not get the desired credit for formal publication, but their work and the reviewer comments are out there now. A couple of decades ago that would not have been the case; most people in the field would be unaware of the algorithm.

On peer review more generally (outside of AI): in my field, one of the natural sciences, a paper I submitted for publication encountered an editor plus two reviewers who were well qualified in the field; after asking for two revisions to the manuscript, they rejected the third version. Interestingly, all three scientists had published research which my paper undermined; they may well have lost funding for their research, or even their positions, had that manuscript of mine been published (I speculate here). Peer review cuts both ways. While iterating with the editor and reviewers I continued to expand my research project and made some additional discoveries. Following the rejection I wrote a completely different paper which incorporated my initial work supplemented by the new discoveries; happily it was published a few months ago (in a different journal). I'm formally retired now, but continue to do research.

To young researchers: never give up. Learn from rejection, refine your work, be humble, exercise integrity and honesty, and take pride in your accomplishments, even if only a few know about them. Peer review (by humans) is a necessity and will continue to be. There is no such thing as a perfect filter, but science and technology would be overwhelmed by irrelevancy, dishonesty, and duplication of effort without it. AI may become a useful filtering tool, but science is a human endeavor.
nice one rex
During my Ph.D. days, a paper of mine got rejected at ICASSP for not having cited a certain paper (I guess the reviewer was one of the authors) which had absolutely NOTHING to do with what my paper was about... So yes, a lot in the reviewing process seems to be a) personal and b) "you must do this and that" even if it is not related to your paper at all. And it's been like that for years...
Wow, you've made some difficult, I mean extremely difficult, algorithms look easy. Thank you.
It's all not as difficult as one might think. I'm currently doing my PhD and I quickly realized that most of the difficulty comes from people trying to look smart instead of trying to properly explain stuff. It is very hard to come up with a good solution to a problem, while it is significantly easier to explain the solution once it is understood. Hence, if you are of average or slightly above average intelligence, you should be able to learn almost anything if you have someone who is willing to actually provide a good explanation.
I like how we now call 1 billion parameters small.
Will we ever scale up and reach a point where 1 trillion is small?
i hope so
One small note on RNNs: reservoir computing is a very high-dimensional random RNN with a linear regression readout, therefore there is no exploding or vanishing gradient. Reservoir computing is currently the standard for non-linear dynamic time series prediction.
Yes, but does it support backpropagation? Remember you have to propagate an error from the output layer through every RNN layer up to the inputs. Reservoirs/Echo State Machines don't support this. There, only the delta layer (linear regression layer) gets trained while the reservoir stays fixed. So you could get the error up to the first delta layer but not further.
Hi, can you recommend a paper about that
@@zzador The wonder of it is that you don't need it to go further...
@@terrortinus paper please
please open your community tab
your content is incredible
Brutal. I'm going to have to watch this about 30 times. Love it.
Currently testing it on molecular generation, so excited to see where these strengths hold and where they falter :)
I do hope you'll soon reach a six-figure subscriber count. The quality of your videos (both in terms of education and presentation) is top notch; people need you to become popular (at least within our small tech bubble).
I finally understand MAMBA! I've been trying to get my head around it for months, but now I see that approaching it the way the original paper presents it wasn't the best way. Thank you.
Wow this is a great video. I've been having a lot of trouble understanding and getting an intuition of how Mamba works, and this video just made it make sense. The visuals were a massive help and the explanations are super simple and easy to understand.
Nice video! I just wanted to point out that the parallel scan algorithm can also be implemented in O(n) time (instead of the O(n log(n)) version presented in the video), and this is the version that Mamba uses.
Peer reviews are highly motivated by reviewers protecting their existing work, which extends previously state-of-the-art methodologies. If you have an actually new innovation that goes against the grain, you need to publish regardless of whether the venue is highly regarded or not.
This just shows how RNNs are way too natural an architecture to ignore. Maybe the solution to the gradient descent problem is to not use gradient descent at all. There has to be a different way to update parameters than this bizarre hack-and-slash "let ||x_0|| = 1" approach for RNNs.
Meta-learning could potentially be one way. Like a neural "module" in the model that looks at how changes in the first layers affect the representation space deeper in the network, and vice versa. It would have to have some goal and reward itself.
But gradient descent is too natural of an algorithm to ignore >.
@@tempname8263 it's actually not natural at all, gradient descent itself is the one big difference between a human brain and any neural network.
@@tempname8263no
@BooleanDisorder you have 10 missed calls from Juergen Schmidhuber 🧏♂️
Crazy how two separate ideas ended up converging into one nearly identical solution.
Totally agree. I feel like that's pretty common in math, robotics, and computer science, but it just shows how every field in stem is interconnected.
tmw you realize humanity is just being trained with gradient descent and we always converge to these local minima
Kind of how biology always optimizes being into a crab (or crab-like) entity.
A+++ for OpenReview. Transparency is so valuable !
Also, many thanks for the excellent video !
I love how you nail the level of detail in the explanations. Perfect for me at least.
Actually best explanation channel on youtube, rivaling 3B1B!
Very good explanation, and kudos for exposing the broken peer review system. Subscribed
I honestly found the "boring technical details" the most interesting of the video.
Thanks for the clear explanation. This gives me enough understanding to not only implement it myself, but to also have some ideas for sensible architecture modifications.
Wow, excellent explanation. It covers all the essence of the paper with just enough math/algo. Thank you so much! If you don't mind, please make a video on RWKV (v6 has some new modifications), which is another strong linear RNN model. I am curious how it compares to Mamba.
Another beautiful exposition. Further points: (1) HiPPO itself comes from attempting to approximate a spiking net with a SSM (Voelker 2017/8), (2) we do have O(NlogN) transformer hacks now, (3) RWKV is a promising arch that deserves a place in this arena.
I haven't heard of any O(NlogN) transformer hacks that preserve performance, got any links?
And yeah RWKV is promising, I would've loved to talk about it as well but the video was getting long lol.
This was really concise and easy to understand.
Absolutely amazing vid. Just subbed after getting recommended to this channel. Never stop making videos dude
The level of details and intuition you dig into are excellent 💯🔥
Please make more videos. They’re fantastic!
absolutely love the quality and information of this video!!! please keep up the good work this is amazing
we need more videos from you, especially one from basics
Any topics in particular you'd like to see?
@@algorithmicsimplicity we need video series in math for linear algebra, calculus, probability and statistics seperately for ml perspective and then after that we would like to learn more on basic concepts like regression, classification, clustering, etc. we would also like to learn more on the types of learning unsuperwised, semi- superwised and self-superwised. some basic architectures like rnn types (lstm, gru, hybrids) , basic ann , mlp and even the recent kan, ntk.
@@harshvardhanv3873 Got it. I am definitely planning to do videos on calculus and probability for ML soon. After that I can do videos on the types of ML.
@@algorithmicsimplicity sure waiting for your videos ✌
Well, maybe 3b1b's videos already fulfill what you need for ML prerequisites.
Underrated ML channel ❤
Peer review is broken nowadays because people have little time to actually read through a manuscript with attention to detail, given the amount of pressure to publish their own papers. So when you have more papers out there than the time people can spend on reviewing, you get low-quality peer review.
Thanks for this explanation! Phrasing Mamba in terms of a linear RNN makes it much easier to understand.
You've done a lot already with this video, but I just want to ask for a little bit more. Since the original Mamba paper presented the model in terms of SSMs, many, many implementations of Mamba also use that language, and I have difficulty wrapping my head around trying to map their code back to the concepts in this video. I wish you could explain how the concepts in the Mamba paper (∆, A, B, C, D, discretization, etc.) map back to the parameters of a linear RNN; that would help a lot.
Sure. In the state space terminology, A in ℂ^d is the learnable parameter used to make the recurrent weight vector; the equivalent in my video is a+bi, with a, b in R^d as learnable parameters and i the imaginary unit. B, C in ℂ^{d x d} are the complex matrices applied before and after the recurrence respectively, equivalent to the P and Q matrices in my video, also learnable parameters.

The SSM performs discretization of the parameters, which creates A^bar = e^{ΔA} and B^bar = (ΔA)^{-1}(exp(ΔA)-I)ΔB. Note that A^bar and B^bar are what are actually used in the computation. This discretization is equivalent to the stable reparameterization outlined in my video. In the SSM formulation, they phrase the discretization as modifying B into B^bar, but note that B is the matrix applied to the input, so multiplying B by Δ is equivalent to multiplying the input x by Δ and leaving B unchanged, which is how it is described in my video.

One last thing to be aware of: in the state space literature, the models are often described as having another "state dimension" N in addition to the model dimension d. This state dimension is equivalent to the factor by which the output vector's dimension is expanded, so for example Mamba uses N=16, i.e. it expands outputs by a factor of 16. Let me know if you still have any questions!
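If it helps, here's a rough NumPy sketch of that mapping (my own illustration, not the official Mamba code, and using the simpler "multiply the input by Δ" view described in the video rather than the full formula for B^bar):
```python
import numpy as np

d = 4                                    # model dimension (toy size)
a = np.random.randn(d)
b = np.random.randn(d)
A = -np.exp(a) + 1j * b                  # complex A built from real params; real part < 0
delta = np.exp(np.random.randn(d))       # positive per-channel delta (input-dependent in Mamba)

A_bar = np.exp(delta * A)                # discretized recurrent weights, all |A_bar| < 1
x = np.random.randn(d)
x_bar = delta * x                        # scaling the input by delta instead of modifying B

h_prev = np.zeros(d, dtype=complex)
h = A_bar * h_prev + x_bar               # one step of the elementwise linear recurrence
print(np.abs(A_bar))                     # magnitudes below 1, so the recurrence is stable
```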
@@algorithmicsimplicity Thank you so much!
Thank you! Your channel is an invaluable resource on here. Hope you keep making these videos!
This was very good, and I hope you make more videos like this!
Here’s an idea that probably wouldn’t work:
What if instead of algebraically guaranteeing that some operation is a monoid so that one can use the parallelizing thing that combines n inputs in O(log(n)) steps in n processors,
what if you just had some operation, learned by a NN, which has “how much it deviates from being a monoid operation” as part of the loss?
Like, suppose you randomly selected some pair of consecutive applications of the operation, and also computed it in the opposite order, and took the L^2 norm of the difference between the results, and multiplied that by some weighting, and made that a term in the loss?
Like, within the family of continuous and piecewise-smooth monoidal operations, perhaps some of them would be better at selective remembering?
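To make the idea concrete, here's a very rough PyTorch sketch (entirely hypothetical, just my reading of the proposal): a learned binary op with an extra loss term penalising how far it is from being associative on sampled triples:
```python
import torch
import torch.nn as nn

d = 16
op = nn.Sequential(nn.Linear(2 * d, d), nn.Tanh(), nn.Linear(d, d))  # learned binary op

def combine(u, v):
    return op(torch.cat([u, v], dim=-1))

def associativity_penalty(u, v, w):
    left = combine(combine(u, v), w)      # (u o v) o w
    right = combine(u, combine(v, w))     # u o (v o w)
    return ((left - right) ** 2).mean()   # how far the op is from being associative

u, v, w = (torch.randn(32, d) for _ in range(3))
penalty = associativity_penalty(u, v, w)  # would be added to the task loss with some weight
penalty.backward()
```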
That sounds really interesting, you should try it out!
@@algorithmicsimplicity Thanks! Unfortunately I am lazy...
And, there’s already another “what if I did [X]?” machine learning project I barely started (“what if I tried to add a simple approximation to what copying heads do to an n-gram model”, which seems like it should be much easier, but I’ve barely written the n-gram model part of it (and ChatGPT honestly wrote most of that). Haven’t even started on the “compute statistics about whether copying a word from previously in the current text, or go based on the corpus as a whole, is more accurate in this context” part...
@@drdca8263 That's a lame response. Try it. Make something in this world
It's only yet another silly experiment to do the seemingly impossible in the hottest meme area, picking your nose seems like a more productive waste of time.
But imagine, if you found something really cool and nobody would listen. That would be funny, that would be cool.
@@TheDoomerBlox If you were able to build a RecNN that outperforms current state-of-the-art models and put it on Hugging Face, people would care about that 🤷🏼♂️
You have such a pleasant voice 😊
Thanks for helping me understand better.
Please keep making videos. ❤
in para-lllelll :-D
Awesome video. I love the speed and the depth of this, it's perfect
I appreciate the soothing piano music. Currently the words are only slightly better than Charlie Brown listening to adults talk, but I hope to dive in.
I really wish that when you're talking about things happening in parallel, your animations happened in parallel. Like 8:30. I think it would really improve the comprehensibility of your explanation
Good video for explaining Mamba: I understood something.
You see, it's O(n log(n)) instead of O(n^2) without any penalties. Okay?
100% crystal clear, right? //end of joke
@@harrysvensson2610 That means that, basically, transformers scale as x² in the compute needed for prompting. This is also called square or quadratic, since x² is a square if you made it into a geometric figure. So if you write a prompt of 5 words, that's 25 compute since 5*5=25. You can see how this gets really crazy at high token counts. Mamba scales differently, so you need much less compute per prompt.
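A tiny Python illustration (my own toy numbers) of how the costs diverge as the prompt grows:
```python
import math
for n in (10, 1_000, 100_000):
    # quadratic (attention-style) vs n*log(n) vs linear token counts
    print(n, "quadratic:", n * n, "n*log(n):", round(n * math.log2(n)), "linear:", n)
```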
Incredible explanation
I believe that the transformer does have a quadratic cost in memory (specifically self-attention (SA)). The attention matrix in SA is n by n, thus n^2 (n being the number of tokens). Probably the reviewer is referring to that bit. Anyway, rejecting Mamba was hecking stupid. Great video!
The matrix is indeed n^2, but you never need to materialize the full matrix at the same time. You can materialize one column at a time, which is exactly what FlashAttention does, resulting in O(n) memory (still O(n^2) compute though).
I have no idea how FlashAttention manages to be faster and more memory-friendly. Are you sure that the attention matrix is never fully in memory (regardless of the type of memory)? However, the classical implementation didn't use FlashAttention, so I believe that the reviewer is referring to that.
I have rechecked the paper and it appears that FlashAttention is linear w.r.t. memory. The work of Tri Dao is magic to me.
Amazing explanation, thank you!
Just wondering if you could make a video on how GNNs work? There aren't really many videos about GNNs on YouTube.
Thanks for the suggestion, I will put it on the list!
RNNs are constrained by having to hold all their information in a single embedding space, so this space needs to be extremely large. It needs to hold every piece of information in the context that might come in useful at some point. Transformers can distribute information between many tokens, so they can operate with a much smaller embedding space, at least in theory. The memory complexity of an RNN with a given structure is quadratic in the size of the embedding space, meaning we really pay big time for that increased embedding size. I wonder if that is what the reviewer was getting at.
The results were impressive, but they haven't been followed up by success at larger model sizes, which I would have expected to have already happened if it was going to. It is a cool mathematical trick to make it work, and it demonstrates that language is surprisingly linear, but once you start to hit truly non-linear questions I would expect it to stop improving. Overhyped IMO.
If you stack multiple linear RNN layers they can handle non-linear dependencies across time, so "demonstrates that language is surprisingly linear, but once you start to hit truly non-linear questions" is not true, as the Mamba model as a whole (multiple layers) is a non-linear RNN.
The really cool thing about linear RNNs is that increasing the size of the embedding space only has linear cost, not quadratic. The recurrence operator only performs elementwise multiplication with the embedding vector. This is why Mamba is able to increase the size of the embedding vector by a factor of 16 at essentially no cost. If you were willing to incur some additional cost, you could easily make the embedding vectors even larger. When you expand the embedding vector by a factor of a few thousand, now you're talking about as much memory as a transformer with a few thousand tokens of the original size.
Works are currently in progress to train larger model sizes, it takes about a year from start to finish to train a full sized model. Mamba already achieves state of the art performance for ~3b sized language modelling, this is HIGHLY HIGHLY non-linear.
And finally, while there are some aspects in which transformers are still superior to dynamic linear RNNs, hybrid architectures such as Griffin (arxiv.org/abs/2402.19427 ) appear to give the best of both worlds, handily outperforming both.
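A quick NumPy sketch of that point (my own toy comparison, not from the paper): the linear-RNN recurrence is elementwise, so one step costs O(d), versus O(d^2) for a dense recurrent weight matrix:
```python
import numpy as np

d = 2048
h = np.random.randn(d)                   # previous output / state
x = np.random.randn(d)                   # current input
a = np.random.rand(d)                    # elementwise recurrent weights (linear RNN / Mamba style)
W = np.random.randn(d, d)                # dense recurrent matrix (classic RNN style)

h_elementwise = a * h + x                # O(d) multiply-adds per step
h_dense = W @ h + x                      # O(d^2) multiply-adds per step
print(h_elementwise.shape, h_dense.shape)
```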
underrated channel
Subscribed! Thats some 3Blue1Brown level stuff! Amazing!
Great video, would prefer no music but that’s me
great video. That trick around the 26 minute mark of doing 16x compute almost for free (in terms of time) because of memory bottlenecks is really neat. I wonder how many other architectures would benefit from that kind of design optimisation?
It appears that it is only useful for linear recurrent layers, because the main computation is just performing elementwise multiplication between the previous output vector and the recurrent weight vector, which means you have O(d) parameters and you do O(d) compute, and transferring one parameter takes longer than doing one operation. For other kinds of layers, such as fully connected layers, you are doing at least a matrix-vector multiplication, which means you are doing O(d^2) compute, and that usually takes much longer than transferring O(d) parameters.
Just watched the lecture by Mohit, then watched your video. I feel like this made me understand this architecture better than reading those papers for months 😂
Incredible work. I mean REALLY incredible
Great explanation, do one for Mamba 2 as well, if possible
Great job! Your channel is a treasure.
very well explained
If you ever get the time I would love to see another video on the Mamba implementation, but dumbed down even more, like to the level of StatQuest videos. They need to make you feel special while also showing the math step by step like it's 9th grade.
Thanks for the suggestion, there will probably be improved versions of Mamba coming out soon, I will make a more basic explanation video for them when they do.
State-space models aren't originally from ML; they're actually used a lot in control systems. Not surprised by their relationship, considering both are strongly based on linear algebra.
Hey man! Really appreciate the technical detail in your videos
Thanks for the suggestion, I will add them to the TODO list.
@nias2631 I have no particular opinion on transformers or MAMBA since, for my work, I never use these. But as for peer review, I think that OpenReview itself is a great "filter for the filter". The research community can actively review the reasoning for accept/reject, as you did in this video. For most journals not using OpenReview the process is fairly opaque.
Absolutely agree, the transparent review process is definitely a net benefit for the community as a whole.
Nicely explained
Fascinating video. I've always found state space model papers a little bit dense and self-referential to understand coming from other areas of ML but this video is a really great reparameterization of the issue. I'm not sure if it would be in line with previous videos (covering generally useful industry standard models with wide applications), but is there any possibility of getting a video on liquid neural networks or spiking neural networks?
Thanks for the feedback. I probably won't get around to making videos on spiking and liquid neural networks for a while, I have lots of other stuff I'm planning to cover, but they are definitely on my todo list!
the channel is great and the material is awesome! the only catch is: the piano in the background makes it hard to focus..
"i can do eleventy kajillion computations every second"
"okay, what's your memory throughput"
Nice video! What I didn't understand is what happens to the stable weights during training. Particularly:
- How are they kept stable?
- How can the model learn while being so restricted?
What I'm guessing is that some form of the Delta is also used in training to keep the weights in those ranges + rely a lot more on the numerical precision to carry the information.
Is this correct? Does it imply that using double instead of float gives it a better ability to learn?
Great question. The answer is it's really complicated and no-one knows for sure.
There is nothing explicitly keeping the weights stable during training. They can (and probably do) become unstable. The thing is, there are actually thousands of different weights in the vector. At initialization, all of the weights are essentially one, so information from anywhere in the input can influence the gradient, but the model is incredibly restricted (it cannot perform meaningful transformations in the recurrence). Then SOME of those weights change and enter the unstable regime, so they can no longer carry information long distances but can do more interesting computations, while others remain stable. And in the fully-connected layers between recurrences, all weights can communicate information with each other. So you have this complicated system where weights are changing at different rates, some remain stable, some become unstable, and that allows for interesting computation to be done and information to be propagated long distances.
@@algorithmicsimplicity Thanks for the reply! That's quite interesting, different propagation lengths didn't even cross my mind.
It'd be really funny if after all this work the model learned unstable weights and became forgetful :))
That was very well explained. Could you please also do a video on RWKV.
Thank you!
This algo is new and you made a video about it I love you I will subscribe your channel keep going
Damn. Amazing piece
Extremely noob question, but at 13:52 why aren't the input vectors x multiplied by P^-1 instead of P? Don't you need to convert them to the eigenbasis before applying the D transformation (or, equivalently, taking the Hadamard product with the diag(D) vector)?
Yes, I should have applied P^-1 first to be consistent with my earlier notation W=PDP^-1. Of course, the naming is just a matter of preference, you can equivalently call the first matrix which is applied P or P^-1, so long as the two matrices are inverse of each other it doesn't matter which is called which.
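For anyone who wants to see the trick numerically, a quick NumPy check (my own toy example): if W = P D P^-1 with D diagonal, then W^n = P D^n P^-1, so powers of W reduce to elementwise powers of the eigenvalues:
```python
import numpy as np

W = np.array([[0.9, 0.2],
              [0.1, 0.8]])
eigvals, P = np.linalg.eig(W)            # columns of P are eigenvectors, W = P diag(eigvals) P^-1
P_inv = np.linalg.inv(P)

n = 5
direct = np.linalg.matrix_power(W, n)
via_eig = P @ np.diag(eigvals ** n) @ P_inv
print(np.allclose(direct, via_eig))      # True
```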
@@algorithmicsimplicity Oh ok, that makes sense now! Thanks a lot for your answer and this amazing video ^^
Appreciate the breakdown. I think there are a few more things at play here for the reject that is somewhat overlooked in the discussion at the end. Specifically, there are issues with anonymity and using "hype" to push a paper through an academic conference. I speculate that this was the underlying reason for rejecting the paper.
Cool, if that was the reason for the reject they should have said that in the rationale for the reject. Instead they made up a bunch of criticisms which are either 1) irrelevant or 2) blatantly untrue. That's a bad look for the conference, as it makes it seem like their reviewers are unqualified to judge academic works.
@@algorithmicsimplicity Absolutely agree. In my experience, the quality of conference reviewers is extremely variable. Almost all researchers I know have horror stories about how incompetent and outright adversarial reviewers can be. Many great papers are rejected without sufficient basis, and mediocre papers are included for seemingly no good reason. Many experienced researchers don't want to review anymore.
Just a comment on the reject; it might have been a conscious decision to not actually bring the anonymity issues up in the rebuttal to avoid further disputation. But, I am just speculating here with little to no factual basis.
It could very well have been a conscious decision, but I think it was the wrong decision. From an outside perspective, it looks like a fantastic paper was rejected because of clueless reviewers. That's far more damaging to the conference's integrity than whatever conflicts might arise from anonymity violation disputes.
@@algorithmicsimplicity Independently of what one may think of the paper, I agree that the justification for the reject was weak. Unfortunately, I don't think it matters much for the integrity of the conference in the long run, as this has happened in all the other big conferences in the past. Authors generally adapt and move on. What makes this unique is the hype around Mamba. Previously, no single member of the general public would have been interested in the review decision of a single paper in AI / ML. Now, the community extends far beyond academics, for better or worse. All in all, I hope it serves to incentivise stronger review processes for the future.
On a side note, I really enjoy your content, keep up the good work 👏
This is amazing!
Woah big claim! I’m excited
A guy who actually understands this stuff
Can you make an explanation video like this one on Liquid Time Constant Networks 🙏
GPT mafia 😞 They probably just can't lose face and the title of "the best LLM tech" (and, perhaps, contracts as well).
How about RWKV ?
Your videos are so good man, keep it up, seriously. Although it is probably beneath you, could you maybe make a video on how neural networks are computed on machines in general, or maybe on GPUs specifically? As someone who did not study computer science at uni, this would be an interesting topic for me to learn, and maybe help me fundamentally understand neural nets better.
That's an interesting topic. I was planning on making videos about how CPUs and GPUs work at the physical level (e.g. how logic gates are built out of transistors, and addition and multiplication are built out of logic gates). Neural nets are just implemented as a bunch of matrix multiplications (you put all the neuron weights in one matrix and multiply it with the input). Is that what you are asking about?
@algorithmicsimplicity Yeah, that sounds about right, thank you. Maybe you could use matrix multiplication as a case example for those inner workings :) Anyways, thanks for making awesome videos.
@@maximilianchrzon4545 3b1b has this covered pretty well already.
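For what it's worth, the "neural nets are just matrix multiplications" point in the reply above fits in a few lines (toy sizes, my own sketch):
```python
import numpy as np

# A dense layer is just a matrix multiply (plus bias and a nonlinearity):
# all the neuron weights live in the rows of W.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)            # input vector (4 features)
W = rng.standard_normal((3, 4))       # 3 neurons, each with 4 weights
b = rng.standard_normal(3)
y = np.maximum(W @ x + b, 0.0)        # ReLU(Wx + b)
print(y)
```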
Great video! It's not critical, but at 13:05 the calculation seems to have an error: it should be ((1,-1),(2,3)) on the left-hand side.
Yes! Well spotted, I think you're the first person to notice.
Thx a lot for the interesting video! 💛💙
Great video! Thanks!
So how close is the weight estimator to the MMSE (minimum mean square error) estimator? Can the Mamba architecture be improved even more, using a sparse covariance matrix and an application of a 'true' Kalman filter? Or is it already as close as it can get?
Since this is also used to make long connections in the state space, might Mamba also be applied not just to language models but to gradient-optimising reinforcement learning models?
Yes, absolutely. Mamba has been applied to some other areas now, such as protein sequence modelling. I haven't heard of anyone applying it to reinforcement learning, but I imagine it would work very well.
Thanks for the video. Why do you use matrix diagonalization instead of SVD at 13:00? SVD can decompose any matrix and you don't need to introduce complex numbers. The power trick also works with SVD w.r.t. the singular values.
With SVD you get W=USV for a diagonal matrix S, but U and V are not necessarily inverses of each other, so when you take W^2=USVUSV you can't cancel out the inner VU.
@@algorithmicsimplicity You are right, in my mind I was assuming W to be symmetric.
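A quick numeric check of that cancellation point (my own sketch, with an arbitrary non-symmetric matrix): with the eigendecomposition the inner P^-1 P cancels when squaring, but with the SVD the inner V^T U does not.
```python
import numpy as np

# Why the power trick needs the two outer matrices to be inverses of each other.
W = np.array([[0.5, 1.0],
              [0.0, 0.8]])            # non-symmetric example

# Eigendecomposition: W^2 = P D^2 P^{-1} works.
eigvals, P = np.linalg.eig(W)
print(np.allclose(W @ W, P @ np.diag(eigvals**2) @ np.linalg.inv(P)))  # True

# SVD: W = U S V^T, but U S^2 V^T is NOT W^2, because the inner V^T U doesn't cancel.
U, S, Vt = np.linalg.svd(W)
print(np.allclose(W @ W, U @ np.diag(S**2) @ Vt))                      # False
```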
What about LSTMs? You briefly showed the paper but didn't mention them, even though they were supposed to be the solution to the vanishing and exploding gradients problem.
LSTMs do better than regular RNNs at remembering. A regular RNN will forget what it saw 20 tokens ago; LSTMs can remember for a few hundred tokens, maybe up to 1000, but after that they forget as well. This is because LSTMs don't completely fix vanishing and exploding gradients, they just make gradients vanish more slowly (basically because the sigmoid gates they use saturate, and they can't output values extremely close to 0 or 1). When people say LSTMs fix vanishing and exploding gradients, they mean LSTMs suffer from them less than regular RNNs do. Mamba, on the other hand, can remember for at least hundreds of thousands of tokens.
Also LSTMs aren't parallelizable, so it isn't practical to train large-scale LSTMs on modern hardware.
Recently the author of LSTMs put out a new paper with new versions of LSTMs to fix these issues (called xLSTM), but from what I can tell xLSTM just performs worse than Mamba in every way.
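A tiny back-of-the-envelope illustration of the sigmoid-saturation point above (my own numbers, not from any paper): even a strongly saturated forget gate sits slightly below 1, so repeated multiplication still decays the remembered value over thousands of steps.
```python
import math

# Even a 'saturated' sigmoid forget gate is a bit below 1,
# so the remembered value still decays over long sequences.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

f = sigmoid(5.0)                    # ~0.993, a strongly-open forget gate
for steps in [100, 1000, 10000]:
    print(f"after {steps:>5} steps, fraction remembered ~= {f**steps:.3e}")
```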
Good job 👏
Loved your videos. Which software or library do you use to make these animations? Is it manim?
It is a combination of Manim (for rendering LaTeX) and my own renderer written in PyTorch (for the 3D stuff).
Enjoyed this. Given that its performance is comparable to or better than transformers as verified independently in several papers, is Mamba gaining a foothold among practitioners?
It does : ruclips.net/video/9s-9aSobky8/видео.html
Definitely, lots of open source language models are switching to Mamba. Mamba is also being used for other tasks as well, e.g. arxiv.org/abs/2401.09417
Also, recently Google DeepMind released this paper ( arxiv.org/abs/2402.19427 ) on hybrid dynamic linear RNN and transformer models, which achieves really good results. Dynamic linear RNNs are definitely going to become mainstream.
One thing I don't understand is the HIPPO matrix, and what they mean by a structured matrix in the context of differential equations.
So the transformation applied to the weights is not purely about initialization? Instead, in the expression w=exp(-exp(a)*exp(ib)), the numbers a and b are the learned parameters, not w, right?
Yes a and b are the learned parameters.
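To make that concrete, here is a minimal sketch (using the expression exactly as written in the question above; I'm not claiming it matches the video's notation character for character): w is recomputed from the real parameters a and b on every forward pass, and gradients flow into a and b rather than into w directly.
```python
import cmath

# Sketch of the point above: w itself is not a raw parameter.
# It is recomputed each forward pass from the learned real numbers a and b
# (expression copied from the comment above), so during training the
# gradients update a and b, not w.
a, b = -0.5, 1.2                                  # example learned parameters
w = cmath.exp(-cmath.exp(a) * cmath.exp(1j * b))  # recurrence weight used in the forward pass
print(w, abs(w))
```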
People keep making things that they say are "better than transformers", but none of them are actually getting used. At this point, hearing people say that has sort of become meaningless given the number of false alarms. It feels like every few months we have something "better than transformers", like RetNets were claimed to be. We'll have to wait and see which actually turn out to be better with time.
Yep, but Mamba is different, it is already being used in open source language model projects.
Investor money is generally spent conservatively. It will take at least a few months for them to see the upside in divesting from super large transformers and moving on to Mamba (or upcoming derivatives). Remember, the Transformer was first published in 2017, and it took until at least 2020 for any "large" (> 3B) model to come out.
Thanks!
Thanks for your support!
You are a golden channel
I haven't found any significant evidence suggesting that Mamba models outperform Transformers, except that their attention mechanism does not scale quadratically with the context length. Am I missing something?
I mean, even if it just accomplished the tasks about as well as transformers qualitatively, the better compute scaling alone is pretty significant.
@@ilonachan Sure, but as far as I'm concerned, there is not much evidence it can qualitatively perform the same tasks either. Some people reported that Mamba's state space doesn't perform as well as true attention for long contexts.
Do you have a video comparing Mamba to RWKV, with the benefits of each over the other?
I do not, I'd recommend checking out the latest papers for each (Mamba: arxiv.org/pdf/2405.21060 , RWKV: arxiv.org/pdf/2404.05892 ) and seeing which performs better on tasks that are similar to your use case.
Quick question: I guess if you want a true linear recurrence from real-valued to real-valued, you could use the Hermitian of P for P^-1? That would also eliminate optimizing for Q...
You could, but there isn't really any need to. The complex version performs the same as strictly real recurrences (actually, in some cases better). And optimizing for Q doesn't really have much cost, even if you used the Hermitian of P in place of Q you would still need to back-prop through it.
@@algorithmicsimplicity Although I still don't get the backprop argument... If you backpropagate through P, computing the Hermitian has a closed-form solution... It's the complex version of a matrix transpose.
@@drjenschn Sure, say we compute the output of a layer as y=P^TDPx. When we are backpropagating we need to compute the gradient of y w.r.t x, which means computing (P^TDP)^T y`. If you use a completely separate Q instead of P^T, computing this gradient still has the same cost. The only advantage of reusing P is you don't have to update the Q matrix as well, but updating weights is a relatively small computation compared to calculating (QDP)^T y`.
@@algorithmicsimplicity Got it now. I was originally talking about "optimizing" for P^-1 (learning the matrix weights). Back-prop is still necessary, correct. Thx!
[6:28]: While that sounds somewhat good, in practice it doesn't work like that.
Alternating between linear recurrent and non-linear dense layers doesn't give that much of an advantage in context :(
The gradients vanish or explode after a while and require some sort of sigmoid transformation + some value.
Say, for example, an architecture like this:
```plaintext
Dense -> Sigmoid -> Recurrent -> Dense -> Sigmoid -> Recurrent -> Dense -> Softmax
```
By the time the gradients reach the first Recurrent layer, they have lost most of their value :(
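If anyone wants to check that claim empirically, here is a rough PyTorch sketch (toy sizes, and the module names are my own): it stacks dense + sigmoid layers around simple linear recurrent blocks and prints how much gradient actually reaches each layer's weights.
```python
import torch
import torch.nn as nn

# Rough sketch (my own, toy sizes) for checking the claim above:
# stack Dense -> Sigmoid -> (linear) Recurrent blocks and look at how much
# gradient reaches the earliest layers' weights.
class LinearRecurrentBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Parameter(torch.rand(dim))   # per-channel recurrence weight

    def forward(self, x):                        # x: (batch, seq, dim)
        h = torch.zeros(x.shape[0], x.shape[2])
        outs = []
        for t in range(x.shape[1]):
            h = self.w * h + x[:, t]             # linear recurrence, no nonlinearity
            outs.append(h)
        return torch.stack(outs, dim=1)

dim, seq = 16, 200
model = nn.ModuleList([
    nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid()),
    LinearRecurrentBlock(dim),
    nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid()),
    LinearRecurrentBlock(dim),
    nn.Linear(dim, dim),
])

x = torch.randn(2, seq, dim)
for layer in model:
    x = layer(x)
loss = x[:, -1].sum()                            # only use the final time step
loss.backward()

for i, layer in enumerate(model):
    norms = [round(p.grad.norm().item(), 6) for p in layer.parameters()]
    print(f"layer {i}: grad norms {norms}")
```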
Love the video, but I have a question: shouldn't the approximation at 17:00 be something like n*w^(n-1)*0.001*x? Isn't there an n missing? Or how was the approximation done?
Ahh yes, you're right, there should be an n out the front; the gradient is proportional to n*w^(n-1)*x. The vanishing/exploding gradient argument is still the same though: the linear scaling factor doesn't matter compared to the exponential scaling for large n.
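A quick autograd check of that correction (just my own verification that the derivative of w^n * x with respect to w is n*w^(n-1)*x):
```python
import torch

# Quick check that d/dw (w^n * x) = n * w^(n-1) * x.
w = torch.tensor(0.99, requires_grad=True)
x, n = 0.001, 100
(w ** n * x).backward()
print(w.grad.item(), n * 0.99 ** (n - 1) * x)    # the two numbers match
```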
At 31:02, I agree that Mamba has linear O(n) memory requirements. However, why don't transformers have quadratic O(n^2) memory requirements? They need to store the attention matrices that are n x n. I'm surely missing something.
You don't need to materialize the full nxn matrix in memory at the same time. You can instead materialize only a chunk of it, sum over that chunk, and then materialize the next chunk in the same memory slot. This is how, for example, FlashAttention and FlashAttention2 work. When you do this the memory requirement is O(n).
@@algorithmicsimplicity very clear, thanks a lot!
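Here is a rough sketch of the chunking idea (this chunks over query blocks for simplicity; the real FlashAttention kernel additionally chunks over keys with an online softmax, which this does not attempt): only a (block x n) slice of the attention matrix exists at any one time, so memory stays linear in n for a fixed block size.
```python
import numpy as np

def chunked_attention(Q, K, V, block=64):
    """Naive illustration of chunking: only a (block x n) slice of the
    attention matrix is materialized at a time, so memory stays O(n)
    for a fixed block size (this is not the real FlashAttention kernel)."""
    n, d = Q.shape
    out = np.empty_like(V)
    for start in range(0, n, block):
        q = Q[start:start + block]                     # (block, d)
        scores = q @ K.T / np.sqrt(d)                  # (block, n) -- the only big buffer
        scores -= scores.max(axis=1, keepdims=True)    # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=1, keepdims=True)
        out[start:start + block] = weights @ V
    return out

n, d = 1000, 32
Q, K, V = (np.random.randn(n, d) for _ in range(3))
print(chunked_attention(Q, K, V).shape)                # (1000, 32)
```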
Amazing video, insta-sub!
At 27:30, why do we get super-linear O(n*log(n)) time complexity? Shouldn't it be linear O(n)? I'm surely missing something.
It depends on the algorithm used for the parallel scan. In this video I described an O(n log(n)) algorithm; in practice there are O(n) parallel scan algorithms, and Mamba uses one of them.
@@algorithmicsimplicity I see, thanks a lot!
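For reference, here is a sketch of a standard O(n log n)-work scan (Hillis–Steele style, sequentialized so each round stands in for one parallel step); I'm not claiming it is exactly the algorithm from the video, and as noted above Mamba uses a work-efficient O(n) scan in practice.
```python
def scan_step(left, right):
    """Compose two affine maps h -> a*h + b, applying `left` first."""
    aL, bL = left
    aR, bR = right
    return (aR * aL, aR * bL + bR)

def hillis_steele_scan(elems):
    """Inclusive scan with O(n log n) total work; each round over `d`
    would be a single parallel step on a GPU (sequentialized here)."""
    x = list(elems)
    n = len(x)
    d = 1
    while d < n:
        new = list(x)
        for i in range(d, n):
            new[i] = scan_step(x[i - d], x[i])
        x = new
        d *= 2
    return x

# Linear recurrence h_t = w_t * h_{t-1} + x_t with h_0 = 0,
# written as the affine maps (w_t, x_t).
w  = [0.9, 0.8, 1.1, 0.95]
xs = [1.0, 2.0, -1.0, 0.5]
scanned = hillis_steele_scan(list(zip(w, xs)))
print([round(b, 4) for _, b in scanned])   # hidden states h_1..h_4

# Check against the plain sequential recurrence.
h, seq = 0.0, []
for wt, xt in zip(w, xs):
    h = wt * h + xt
    seq.append(round(h, 4))
print(seq)
```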