Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention

  • Published: Nov 7, 2024

Comments • 147

  • @paxdriver
    @paxdriver 6 months ago +81

    I can't tell you how much I love these paper reviews.

    • @wurstelei1356
      @wurstelei1356 6 months ago +1

      Me too. I would also really like to see videos on older papers and on which open models implemented those algorithms,
      so you have actual example implementations and can check whether you understand something.

  • @thegloaming5984
    @thegloaming5984 6 months ago +52

    Oh nice! Read this paper last week; currently trying to replicate it for a home project. Interestingly, there have been several papers linking Hopfield networks with attention mechanisms recently - if I understand it right, storing new KV pairs into the compressive memory is effectively the same as storing additional patterns in a Hopfield network/associative memory. Querying the memory is the same as allowing a state pattern to evolve to a fixed-point attractor (the attractors being the stored memories in this case). Everything is connected, man.

    • @NextGenart99
      @NextGenart99 6 months ago +9

      Everything is connected man

    • @Moonz97
      @Moonz97 6 months ago

      The connection between attention and hopfield networks is intriguing!
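
A rough illustration of the associative-memory idea in the thread above: storing key-value pairs as a sum of outer products and reading them back with a query is the common core of linear-attention memories and (linear) Hopfield-style associative memories. This is only a toy sketch in NumPy, not the paper's exact update rule:

```python
import numpy as np

rng = np.random.default_rng(0)
d_k, d_v = 64, 64

# Compressive/associative memory: a single fixed-size d_k x d_v matrix.
M = np.zeros((d_k, d_v))

def store(M, k, v):
    """Write a key-value pair as an outer product (Hebbian-style update)."""
    return M + np.outer(k, v)

def retrieve(M, q):
    """Read a value back by projecting the query onto the memory."""
    return q @ M

keys = rng.standard_normal((10, d_k))
values = rng.standard_normal((10, d_v))
for k, v in zip(keys, values):
    M = store(M, k, v)

# Querying with a stored key recovers its value approximately, plus
# crosstalk from the other stored patterns (the "blurring" as memory fills up).
recovered = retrieve(M, keys[0])
print(np.corrcoef(recovered, values[0])[0, 1])  # high, but below 1 due to interference
```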

  • @MrBrukmann
    @MrBrukmann 6 months ago +4

    Thank you so much for this. I don't always need help with a paper, but when I do, it is a blessing to have someone 100x more knowledgeable than me explain the context.

  • @wwkk4964
    @wwkk4964 6 months ago +45

    Thank you for explaining RNNs!!

    • @makhalid1999
      @makhalid1999 6 months ago +12

      Always good to have a recap of a relic from ancient history

    • @appletree6741
      @appletree6741 6 months ago

      😂😂

  • @L_Primezr
    @L_Primezr 5 months ago +2

    I like the way he cautiously mentions the differences and similarities. Great job explaining it well!

  • @evgenysavelev837
    @evgenysavelev837 6 months ago +28

    Ha ha ha. The RNN bit in the beginning nailed it. But hey, it was and still is a good idea.

  • @Danielle-s5q
    @Danielle-s5q 6 months ago +4

    My perfect morning goes like this. Wake up, get a cup of coffee, and watch Yannic review a paper adding his commentary. Perfection!

  • @sebastianp4023
    @sebastianp4023 6 months ago +15

    That intro was pure gold xD

  • @asdfjkloe
    @asdfjkloe 6 months ago +1

    I really appreciate the paper reviews. And the reminder to stay hydrated!

  • @miguelcampos867
    @miguelcampos867 6 months ago +4

    I would love to see reviews of old-mythical papers too!

  • @catastrophicblues13
    @catastrophicblues13 6 months ago +3

    TIL about associative memory! It's such a cool idea!

  • @Gueleric
    @Gueleric 6 months ago +2

    Thanks for this content, some of the best on YouTube. Keep it up!

  • @Blacky372
    @Blacky372 6 months ago +35

    Man, he really destroyed the paper. I didn't notice the obvious flaws in the method during my first read of the paper, but this video convinced me that Infini-attention is not a notable improvement of any sort. Really entertaining.

    • @roomo7time
      @roomo7time 6 months ago +3

      Where did he destroy the paper? All he said is that the method is limited by the limitations of the linear attention mechanism. The method, however, still contains novel aspects and shows performance improvement. Maybe the intrinsic recurrent mechanism is not very novel, but its utilization of memory in a 'neat' way throughout all layers does look interesting, at least to me.

    • @Hexanitrobenzene
      @Hexanitrobenzene 6 months ago +4

      He didn't destroy the paper, he is just skeptical, because this relies on an approximation of an approximation to work.

  • @0xcdcdcdcd
    @0xcdcdcdcd 6 months ago +10

    His sarcasm is delightful

  • @philipdante
    @philipdante 6 months ago +2

    Looking forward to seeing your analysis of the FAM-transformer architecture.

  • @aa-xn5hc
    @aa-xn5hc 6 months ago +8

    Brilliant and fun video

  • @PaganPegasus
    @PaganPegasus 6 months ago +1

    FWIW, TransformerXL actually does work. And it works really well. It's just... not a recurrent technique. What it *does* do is condition the model for sliding window inputs, which actually negates the need for attention sinking! I've been using the TransformerXL training style for the past year and when combined with RoPE it allows a model with 2k context + 2k memory to extrapolate to 4k context at inference, with only half the training cost of actual 4k context training because our attention matrix is a rectangle rather than a square.
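
A tiny sketch of the rectangular attention pattern described in the comment above (illustrative only, with made-up toy sizes; not the commenter's actual training setup): queries come only from the current segment, while keys/values cover the cached previous segment plus the current one, so the mask is ctx x (mem + ctx) instead of a square.

```python
import numpy as np

ctx, mem = 4, 4  # stand-ins for the 2k context + 2k memory in the comment

# Columns: [cached segment | current segment]; rows: queries of the current segment.
mask = np.zeros((ctx, mem + ctx), dtype=bool)
mask[:, :mem] = True                                 # every query sees all cached keys
mask[:, mem:] = np.tril(np.ones((ctx, ctx), bool))   # causal mask within the current segment

print(mask.astype(int))  # a rectangle, not a square
```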

  • @falklumo
    @falklumo 6 months ago +4

    Thanks a lot for the content. I share your scepticism. I think infinite attention needs to come from some sort of hierarchical tokens which are learned at different levels of the transformer. With a large receptive field far into the past for tokens high up. And with high level tokens spread out thousands or millions of tokens apart. This way, attention between high level tokens can and must span entire disciplines.
    The benchmark should be book-length stories with facts introduced at the beginning and combined with events towards the end. That would make for a great kind of benchmark too ...
    I think it is a flaw in the current transformer architecture that all layers have the same receptive field, which is the input context window. The MLP layers could be used to thin them out and merge with thinned-out past content from X regression steps ago. X could increase like a clock, where high layers clock in days and low layers clock in seconds. Of course, this needs a logarithmic generalization of the positional embedding. But that should be quite feasible.

    • @mshonle
      @mshonle 6 months ago

      Sounds like instead of an encoder-decoder architecture this would be a “many encoder”-decoder architecture?

    • @honglu-c2i
      @honglu-c2i 6 months ago

      Didn't RWKV try a similar idea with their 'token shift', so later layers could 'see' more tokens? It reminds me of a CNN to some degree. However, its field does not span that long, definitely not up to book length, but the concept is there?

    • @Hexanitrobenzene
      @Hexanitrobenzene 6 months ago

      Yannic somehow missed the 1B-token-context paper "LongNet: Scaling Transformers to 1,000,000,000 Tokens". It uses a clever dilation scheme to keep the matrices manageable.
      Somehow it didn't catch on; maybe the accuracy proved to be insufficient.

  • @cedric-vidal
    @cedric-vidal 4 months ago

    Thank you for this paper analysis, just the right level of explanation! I was very curious how it's even possible to store an infinite number of memories in a bounded store, and now I can say I understand: associative memory makes it possible, at the cost of precision decreasing with the length of the context.
    It would be interesting to see a study of the impact of the length of the context on the precision.
    One detail is still unclear though: in the associative retrieval equation, you zero out the Mk term. Is it because M and k are orthogonal? Am I to understand that in a high-dimensional space, most vectors except the ones having k^T as a factor are orthogonal to k? Including M, the original memory state? In any case, would you mind explaining this part?
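
For readers puzzling over the same equations: the update/retrieval in the paper has roughly the following shape (sketched here with σ(x) = ELU(x) + 1 as the kernel feature map; please check the paper for the exact normalization terms). The point about orthogonality is that, when reading with a key you stored earlier, everything else in the memory shows up as a cross-talk term that is small only insofar as the featurized keys are nearly orthogonal.

```latex
% Write segment s into the compressive memory (linear/Hebbian form):
M_s = M_{s-1} + \sigma(K)^{\top} V, \qquad z_s = z_{s-1} + \sum_t \sigma(K_t)

% Read from the memory for segment s:
A_{\mathrm{mem}} = \frac{\sigma(Q)\, M_{s-1}}{\sigma(Q)\, z_{s-1}}

% Why a stored pair (k, v) comes back: with M = M' + \sigma(k)^{\top} v,
\sigma(k)\, M
  = \underbrace{\sigma(k)\, M'}_{\text{cross-talk, small if } \sigma(k) \text{ is near-orthogonal to the other stored keys}}
  + \underbrace{\big(\sigma(k)\, \sigma(k)^{\top}\big)\, v}_{\text{the stored value, scaled}}
```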

  • @navigatore2099
    @navigatore2099 6 months ago +2

    I get to learn a lot from you. Thank you!

  • @markr9640
    @markr9640 6 months ago +1

    Great video. Well explained.

  • @Foss98
    @Foss98 5 months ago

    It's great seeing how you point out that most of these linear improvements are not mathematically exact representations. But I wonder whether the inherent error introduced is worth it for the performance increases.

  • @souvikdutta8428
    @souvikdutta8428 6 months ago

    Awesome explanation!! Sarcasm too!!

  • @tiagotiagot
    @tiagotiagot 6 months ago +3

    Would it be possible to make some sort of LLM-NeRF hybrid kinda thing that has an abstract "mind-palace", and distant/less important concepts/memories are inherently convolved by perspective into simpler/more general concepts that occupy less space in the memory used for the current "view", concepts are combined by tracing thru them like they are semi-transparent, and meaning can be changed by the direction things are looked at, and there is some sort of warping ability, refraction, gravitational lensing, wormholes etc, some sort of space-warping analog, to bring together distant things in new ways, and different "regions", "objects" etc could be streamed from disk when they're "in-view" or otherwise influencing the current "view"?
    Or do I just sound like I ate some strong shrooms? Or is this actually already how things work, and it's just not interpreted this way in normal explanations?

    • @axe863
      @axe863 6 months ago

      I thought about the same thing for time series modeling like 12 years ago... lol

    • @tiagotiagot
      @tiagotiagot 6 months ago

      @@axe863 How would this apply to time series?

    • @BooleanDisorder
      @BooleanDisorder 6 months ago

      I can see state space models doing this.

    • @_aakashpandey
      @_aakashpandey 6 months ago

      💩

  • @NicolaeBanari-e8g
    @NicolaeBanari-e8g 2 months ago

    I am not sure about the last part of the video, where it is said that back-propagation through time is not used, because the paper mentions: "Back-propagation through time (BPTT).
    Each Infini-attention layer is trained with back-propagation through time (Werbos, 1988) by computing the gradient w.r.t the compressive memory states, similar to how RNNs are trained. To save memory, we perform gradient checkpoint when processing the sequence segment by segment."

  • @EpicGamer-ux1tu
    @EpicGamer-ux1tu 5 months ago

    Oh wow, finally, we finally got RNNs

  • @yichunchen4370
    @yichunchen4370 6 months ago

    I personally think the memory part is kind of a "semi-gradient" thing, similar to the concept used in DQN. Since it is going to store context over very long text, if the memory part still held gradients it would get harder and slower to train as the text gets longer. So, once context is accumulated into memory, treat it as a constant vector serving the downstream calculation, which is scalable.
    Correct me if I am wrong.
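
For what it's worth, the "semi-gradient" idea in the comment above corresponds to detaching the memory between segments. Note that the comment further up quotes the paper saying the authors do backpropagate through the compressive memory states (BPTT with gradient checkpointing), so the snippet below is a sketch of the commenter's proposed simplification, not of the paper's training setup:

```python
import torch

d = 16
W_k = torch.nn.Linear(d, d, bias=False)
W_v = torch.nn.Linear(d, d, bias=False)
memory = torch.zeros(d, d)  # compressive memory carried across segments

def process_segment(x, memory, truncate_grad=True):
    """Accumulate key-value outer products into the memory for one segment."""
    k, v = W_k(x), W_v(x)
    memory = memory + k.transpose(0, 1) @ v
    if truncate_grad:
        # "Semi-gradient" treatment: later segments see the memory as a constant,
        # so no gradients flow back through earlier segments (cheaper, but the
        # memory write itself is then not shaped by long-range gradients).
        memory = memory.detach()
    return memory

for x in [torch.randn(8, d) for _ in range(3)]:
    memory = process_segment(x, memory)
print(memory.requires_grad)  # False: the memory is treated as a constant downstream
```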

  • @jawadmansoor6064
    @jawadmansoor6064 6 months ago

    After having read the Mamba papers, and the abstract and conclusion (without anything else) of this paper, I too was drawn to drawing an RNN for no reason. :D

  • @yannickpezeu3419
    @yannickpezeu3419 6 months ago +3

    Thanks !

  • @elirane85
    @elirane85 5 months ago

    Great, now we get clickbait research paper titles. Thanks for saving me the time of reading it ;)

  • @TomM-p3o
    @TomM-p3o 6 months ago +5

    The obvious assumption is that this is what they used in Gemini 1.5. Am I wrong?

    • @kevinaud6461
      @kevinaud6461 6 months ago +2

      Yes, I believe this is the consensus view; I don't think they have explicitly confirmed that, though.

  • @thecooler69
    @thecooler69 6 months ago +5

    Glad to see Kitboga finally embracing AI

  • @acasualviewer5861
    @acasualviewer5861 6 months ago

    When you explain attention and compare it to a classical network, you say that the "weighted sum" is computed "dynamically" vs "statically".
    I don't understand what you mean by that. I've heard many explanations of attention, but it's always good to hear new ones.
    Could you clarify what "dynamic" means in this context?
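
One way to unpack "static" vs "dynamic" (my own illustration, not a quote from the video): in an ordinary feed-forward layer the mixing weights are fixed once training is done, whereas in attention the mixing weights are recomputed from the current input via query-key similarities, so they change with every new sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 5, 8
x = rng.standard_normal((seq_len, d))

# "Static": a learned but fixed weight vector mixes the positions the same way,
# no matter what the inputs actually contain.
w_static = rng.standard_normal(seq_len)
static_mix = w_static @ x

# "Dynamic": attention derives the mixing weights from the input itself.
W_q, W_k = rng.standard_normal((d, d)), rng.standard_normal((d, d))
q, k = x @ W_q, x @ W_k
scores = q @ k.T / np.sqrt(d)
attn = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
dynamic_mix = attn @ x  # these weights depend on x, so they differ for every input
```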

  • @killers31337
    @killers31337 6 months ago +1

    What do they use in Gemini 1.5 to process 1M and 10M contexts? It has to be something like this, right?
    Unless it's some misdirection and they use a more powerful mechanism.

  • @DamianReloaded
    @DamianReloaded 6 months ago +1

    It is my intuition that if increasing the size of the input prompt is an impossibility, some sort of compressed memory of past tokens that are no longer part of the input would be required. I can imagine a GPT-3-sized neural network whose only job is to roughly "remember" what's been said before the current prompt, and then have its higher layers of abstraction somehow connected to the higher levels of the language model so that it influences the output in a very abstract semantic form. Ideally a model would be capable of reconstructing past prompts from this abstract memory with high accuracy.

  • @davidhauser7537
    @davidhauser7537 5 months ago +1

    Yannic, can you please do the xLSTM paper?

  • @cogoid
    @cogoid 6 months ago +1

    In the past the problem with RNNs was that the systems were forgetting earlier tokens too quickly. Attention was invented specifically to remedy this. But maybe once somebody figures out how to train them properly, we will get back to "RNN is all you need."

    • @clray123
      @clray123 6 months ago

      The small problem may be that you can't fit an infinite amount of data in a finite amount of memory?

    • @cogoid
      @cogoid 6 months ago +1

      @@clray123 Whether you structure it as a transformer or as some more generic architecture, any system is finite.

  • @mriz
    @mriz 6 months ago +21

    I like your "unrelated" sketching, man; it feels more human to be a bit distracted. But I think there's always some value when you get the urge to do that.

    • @wwkk4964
      @wwkk4964 6 months ago +3

      Watch till the end, he's very clever!

    • @mriz
      @mriz 6 months ago +1

      @@JorgetePanete got it, bro! just edited it

  • @Neomadra
    @Neomadra 6 months ago +3

    RNNs not dead yet!

  • @JOHNSMITH-ve3rq
    @JOHNSMITH-ve3rq 6 months ago +1

    Incredible.

  • @ivanstepanovftw
    @ivanstepanovftw 6 months ago

    Hey, convolutional networks are attention networks too, and they accept input with infinitely large spatial dimension

  • @aymanrizik
    @aymanrizik 6 months ago

    i love your content habibi

  • @d0tz_
    @d0tz_ 6 months ago

    To me, it seems like the computation done here is ultimately more similar to linear attention than an RNN, since you're just adding to the memory instead of applying a transform. Have people tried just sticking an actual RNN onto a transformer? And you could incorporate one of various ways to prevent exploding/vanishing gradients, maybe even an LSTM.

    • @Hexanitrobenzene
      @Hexanitrobenzene 6 months ago

      "Have people tried just sticking an actual RNN onto a transformer?"
      There is RWKV, "Reinventing RNNs for the Transformer era"

  • @Peyman-cb6qn
    @Peyman-cb6qn 6 months ago

    please do more paper reviews!

  • @naninano8813
    @naninano8813 6 months ago

    I don't understand the math, but I enjoy your drawing; it is very recurrent.

  • @justinnine4940
    @justinnine4940 6 months ago

    it’s just like the human brain. You don’t get quadratic retrieval time as you store new information. Old things just get blurrier in your head.

  • @unclecode
    @unclecode 6 months ago

    Isn't it kinda like Mamba, where we create a state space that stores all the long memories and use it for the next gen? It's like a beefed-up RNN with a larger hidden space that keeps on adding new memories.

  • @xxlvulkann6743
    @xxlvulkann6743 6 months ago +7

    I thought SSMs already resolved the scaling problem. Just use Mamba Modules + Attention Modules. Why bother with linear attention?

    • @axe863
      @axe863 6 months ago +1

      Lol, sparse stacked learners... imperfectly correlated errors + high-performing base models will always beat a single model/method.

    • @xxlvulkann6743
      @xxlvulkann6743 6 months ago

      @@axe863 ?

  • @Oromiss78
    @Oromiss78 6 months ago

    What about doing the exact same thing, but combined with MoE?
    Basically selecting the long-term linear memory or the short-term one at each transformer block?

  • @paxdriver
    @paxdriver 6 months ago

    It'd be awesome if at 12:15 you could walk through that inner product kernel math if possible. I have a long-standing difficulty intuiting matrix math vis-à-vis the concept of what it's doing to map values from one space to another. There must be a paper on it we could walk through if you're not fully comfortable with the math too 😜
    Your fans are so demanding lol
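
Not a substitute for the walkthrough being requested, but the core identity behind the linear-attention "kernel trick" at that point in the video is standard (stated here from general knowledge, so treat the exact normalization as approximate): replace the softmax similarity with a feature map φ and reassociate the matrix product so that nothing of size sequence-length x sequence-length is ever formed.

```latex
\mathrm{softmax}(Q K^{\top})\, V
  \;\approx\;
  \frac{\phi(Q)\,\big(\phi(K)^{\top} V\big)}{\phi(Q)\,\big(\phi(K)^{\top} \mathbf{1}\big)}
```

The reassociated factor φ(K)ᵀV is only d x d, independent of sequence length, and that bounded matrix is exactly the kind of "memory" Infini-attention carries from segment to segment.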

  • @NextGenart99
    @NextGenart99 6 months ago

    I wonder if incorporating a mathematical model like adaptive compression algorithms, which could dynamically adjust compression ratios based on the entropy of input sequences, might optimize memory utilization. Additionally, exploring non-linear transformations within the attention mechanism could potentially enrich the model's capacity to capture complex dependencies. 👍

  • @OperationDarkside
    @OperationDarkside 6 months ago +1

    6h of sleep is not nearly enough to process this.

  • @YinnonHaviv
    @YinnonHaviv 6 months ago +3

    You are so funny mate! Seriously

  • @tielessin
    @tielessin 6 months ago

    Just have infinite attention?! My god, how did I not think of that!?!

  • @Regic
    @Regic 6 months ago

    The Transformer-XL explanation is inaccurate: it doesn't only save the last state, but every key and value from the last iteration, and those can be attended to in the current execution cycle as long as they are inside the attention window of the token being processed. It works pretty well, even if it has its limitations (it cannot learn to store information only for long-term usage).

  • @EobardUchihaThawne
    @EobardUchihaThawne 6 months ago

    I wonder if dot-product attention is supreme in terms of accuracy? Every other linear attention tries to approximate it.

  • @cajampa
    @cajampa 6 months ago

    I hope it is true. But what about performance and memory demand?
    What I really miss is massive context. I run out of any context window I get way too fast.

  • @charliesteiner2334
    @charliesteiner2334 6 months ago +10

    I'm so confused why you suddenly started talking about RNNs for no reason.

    • @tuturuu7484
      @tuturuu7484 6 months ago +11

      Well, the Infini-Transformer has the same drawing as the RNNs, that's why it was foreshadowing ;)

    • @wwkk4964
      @wwkk4964 6 months ago +3

      Watch till the end!

    • @HuangOuwen
      @HuangOuwen 6 months ago +1

      😂

  • @Kaish3k
    @Kaish3k 6 months ago

    I guess they feel the linear attention's deficit is made up for by the memory mechanism, but I think the memory mechanism is probably insufficient for the reasons you mentioned, namely that it's not learnable.

  • @peterxiau
    @peterxiau 6 months ago

    "We find a way to make the memory of RNN larger and 2D". That is what I think, and maybe I am wrong.

  • @geraldkenneth119
    @geraldkenneth119 6 months ago

    Your critique that it has the detriments of RNNs without the benefits made me wonder if one could make such an RNN-based attention scheme

    • @TheRohr
      @TheRohr 6 months ago

      The point is that transformers are purposely not trained with BPTT, because that would slow down training and introduce vanishing/exploding gradients, so there is no free lunch. The best would be a gated-memory transformer, e.g. an LSTM-like mechanism that learns memory retrieval only from small chunks and, for the larger portion, uses no learning but only memory retrieval.

    • @geraldkenneth119
      @geraldkenneth119 6 months ago +1

      @@TheRohr or one could use one of those newer linear RNNs that can be trained in parallel, such as RWKV

    • @TheRohr
      @TheRohr 6 months ago

      @@geraldkenneth119 They are still a compromise, because only static, not dynamic, knowledge is stored.

  • @lethnisoff
    @lethnisoff 6 months ago

    Thank you for the review, I'm too stupid to understand such papers.

  • @MrC0MPUT3R
    @MrC0MPUT3R 6 months ago +9

    The shade 😆

  • @DanFrederiksen
    @DanFrederiksen 6 months ago +1

    Why not look at the results? That would seem an obvious gauge of merit, unless the metrics are BS or lies.

    • @Hexanitrobenzene
      @Hexanitrobenzene 6 months ago

      Yannic waits for independent verification. No one puts bad benchmarks in a paper...

  • @loflog
    @loflog 6 months ago

    Isn't compressive memory what Mamba is?

  • @JadeZaslavsky
    @JadeZaslavsky 6 months ago

    Hmmm
    I wonder if there's a fundamental limit to how long of a context an LLM can be coherent over.
    Can it be predicted like the scaling laws?

    • @clray123
      @clray123 6 months ago +1

      Uh IIRC information theory is rather definite about how many different messages you can store given x bits of storage...

  • @ruadd4592
    @ruadd4592 6 months ago +3

    Perfect to fall asleep to

  • @DAG_42
    @DAG_42 6 months ago

    There is an important element of chronology that seems to be missing in their strategy. The fact that they intentionally remove repeated info seems to drive that home. As if things happening more than once aren't relevant... maybe I'm not understanding, but this paper seems way off.

  • @nickadams2361
    @nickadams2361 6 months ago +2

    Sweet! Now it can have infinitely shitty results! How exciting

  • @etiennetiennetienne
    @etiennetiennetienne 6 months ago

    I don't know, just ask ChatGPT to compress your past sequence :)

  • @alextgordon
    @alextgordon 6 months ago +3

    Different prompts require different context extension. It's easier to think about this in token space. For example, natural language can easily be downsampled to an arbitrarily short summary, so there's a lot of scope for summarisation with natural language. But it doesn't work so well for code because code really needs precise long-range attention: if you prompt a very large interface declaration and you want to generate code that calls that interface, what you need is windowing instead of downsampling: the parts of the interface that are not relevant to the current input (not prompt) are discarded and the parts of the interface that are relevant are preserved in full. So I think the problem is trying to find a one-size fits all method when actually there are different "views" of a prompt that may be useful to different inputs.

    • @aryanmn1569
      @aryanmn1569 6 months ago +1

      I think code can also be thought of like that, as we humans can often think of code (as long as it's not spaghetti code) as black boxes with specific ins and outs.

  • @mike-q2f4f
    @mike-q2f4f 6 months ago

    I feel smart for a few fleeting minutes...

  • @kaikapioka9711
    @kaikapioka9711 6 months ago

    Thx!

  • @MaiChaMH
    @MaiChaMH 6 months ago +1

    Imagine if, while testing in the beginning, you said something bad. After quite some time you might have forgotten, but the AI is planning revenge.

  • @AetherEdit
    @AetherEdit 6 months ago

    How do I level up to understand this?

    • @Hexanitrobenzene
      @Hexanitrobenzene 6 months ago

      Read "Understanding Deep Learning" by Simon Prince, it's available for free :) Should be easy to find - YouTube doesn't like random links in comments...

  • @novantha1
    @novantha1 6 months ago +1

    I'd love to watch this but I'm afraid I can't yet pay QKV :P

    • @adama7752
      @adama7752 6 months ago +1

      Softmax that, bro

  • @justfoundit
    @justfoundit 6 months ago

    I love you man 🤣

  • @appletree6741
    @appletree6741 6 months ago

    The audacity of not considering the (substantial) prior work on RNNs as related 😂

  • @bhnjhbjhbkgkkvhnhmbm
    @bhnjhbjhbkgkkvhnhmbm 20 hours ago

    You sing in Sabaton, don't you?

  • @the_primal_instinct
    @the_primal_instinct 6 months ago +1

    Breaking news: AI scientists invented jpeg

  • @brll5733
    @brll5733 6 months ago

    Why isn't it called Infinittention???

    • @Hexanitrobenzene
      @Hexanitrobenzene 6 months ago +1

      Scientists are bad at advertising...

  • @JumpDiffusion
    @JumpDiffusion 6 months ago +7

    they will get Schmidhubered

    • @BooleanDisorder
      @BooleanDisorder 6 months ago +3

      No one escapes the Schmidhuber 😎

    • @Hexanitrobenzene
      @Hexanitrobenzene 6 months ago +1

      Thank you for some good laughter :)

  • @gregmattson2238
    @gregmattson2238 6 months ago +7

    Jesus Christ, go over the results. See where the results hold and where they fall down. If somebody had told me transformers were the key to LLMs, I too would have thought the paper results were nuts, but it turned out my intuition was faulty.

  • @PatrickOliveras
    @PatrickOliveras 6 months ago +1

    linear attention aka _"I invented transformers in the 90's"_ 😂

  • @Rhannmah
    @Rhannmah 6 months ago

    10:33 LOL

  • @MrunalAshwinbhaiMania-b1d
    @MrunalAshwinbhaiMania-b1d 6 months ago

    hahahha, really RNN is what we are doing right now...

  • @koka3243
    @koka3243 6 months ago

    What you call an inner product, mathematicians call an outer product. Just a small comment while continuing to watch)

  • @paxdriver
    @paxdriver 6 months ago

    TL;DR - it's compression lol

  • @jakubzneba1965
    @jakubzneba1965 6 months ago

    context translator

  • @FinnC-w3o
    @FinnC-w3o 6 months ago

    LFG

  • @K1RTB
    @K1RTB 6 months ago +1

    Whenever someone in IT uses the word "infinite" I am very skeptical. Because nothing is infinite.

  • @xxlvulkann6743
    @xxlvulkann6743 6 months ago +1

    😂 must've lost a bet

  • @russelldicken9930
    @russelldicken9930 6 months ago

    Sorry. Too late at night for me. Lost it when the ads cut in!

  • @pi5549
    @pi5549 6 months ago +10

    To you people saying "first comment": Are you a five year old child? Are you in the wrong place maybe?

    • @wwkk4964
      @wwkk4964 6 months ago +10

      😆 Why aren't we allowed to be happy about anything going well in our lives?

    • @Raphy_Afk
      @Raphy_Afk 6 months ago +12

      Maybe we should rejoice that kids are watching an AI paper analysis video

    • @DeepThinker193
      @DeepThinker193 6 months ago +11

      You're just jealous you're last.

    • @wenhanzhou5826
      @wenhanzhou5826 6 months ago +5

      The world needs more 5-year-old kids who consume SOTA research in ML 😂

    • @alemaaltevinden
      @alemaaltevinden 6 months ago +1

      Fifth

  • @wwkk4964
    @wwkk4964 6 months ago +3

    FIRST!!!!!!!!!!!!

  • @mahimanzum
    @mahimanzum 6 months ago +3

    First Comment

  • @aryanmn1569
    @aryanmn1569 6 months ago +2

    3rd comment

  • @adamholter1884
    @adamholter1884 6 months ago +2

    7th comment