LLM Attention That Expands At Inference? Test Time Training Explained

  • Published: 21 Nov 2024

Comments • 155

  • @bycloudAI  3 months ago +11

    Take your personal data back with Incogni! Use code bycloud at the link below and get 60% off an annual plan: incogni.com/bycloud
    maybe we are all bots and the dead internet theory is true

    • @mine.moment  3 months ago

      Please create your own style of thumbnails and stop trying to mimic Fireship lol... I'm being honest rn, but the content in your videos doesn't feel as interesting/funny/easy-to-understand as his. Hope you take that as constructive criticism, because you do cover lots of cool topics that Fireship doesn't.

  • @Miaumiau3333  3 months ago +352

    I disagree with the comments complaining that the video is too technical. I really like that you provide enough detail to roughly understand the technique, awesome video!

    • @FireOfGott  3 months ago +16

      Agreed, this is very approachable to someone who knows some architecture fundamentals!

    • @seriousbusiness2293  3 months ago +10

      I found the pacing a bit off. In general it's very well edited and the information is well summarized.
      But it's hard to keep track of all the vocabulary; personally I'd either need to linger longer on these details or get an even shorter overview of those aspects.
      I really like the style of Yannic Kilcher's paper reviews, but his videos are also 3 times as long, so in any case it's a tradeoff which one prefers.

    • @xClairy  3 months ago +8

      @@seriousbusiness2293 Honestly, I feel like it's because his target audience was different, and now it's more technical, so he'd need more time to explain those concepts instead of expecting a baseline understanding. But going into more detail would scale logarithmically with video length, which would also hurt his YT channel, considering we all expect at best 5~15 minute videos from this channel. So, yea, it's a trade-off.

    • @w花b  3 months ago +3

      Yeah they might as well just watch Fireship because that's what they're asking.

    • @mine.moment  3 months ago +1

      @@w花b But the problem is that bycloud tries to mimic Fireship's thumbnail style to lure in Fireship viewers, then throws them off with 10+ minute videos of overly technical stuff, when those viewers prefer ~5 mins of mixed interesting, meme-y, simplified content instead.

  • @MrJaggy123  3 months ago +77

    Turn all the hidden states into ML models? That scream of pain you all just heard was from the interpretability researchers ;)

    • @QuantumConundrum  3 months ago +12

      OK, but then their employment is secured forever LOL

    • @anthonybustamante5736  3 months ago +12

      We need black boxes for the black boxes!

    • @koktszfung  3 months ago

      But imagine if those ML models are CNNs and you can see how the kernels adapt to the context of the input in real time, wouldn't that actually be easier to interpret?

    • @naumbtothepaine0  3 months ago

      @@koktszfung CNNs are more like DL, ML models are simpler

    • @revimfadli4666  3 months ago

      @@naumbtothepaine0 Which simpler ML models? XGBoost? SVM? Because CNNs are ML models too

  • @Eianex  3 months ago +92

    In conclusion, Trouble in Terrorist Town is cooler than some transformers and some snakes.

  • @papakamirneron2514  3 months ago +18

    Please make a video explaining all of these terms, apart from that, keep the technical videos coming!

  • @flamakespark  3 months ago +100

    Another day, another attempt to re-invent LSTMs

    • @babyjvadakkan5300  3 months ago

      What's that now?

    • @zyansheep  3 months ago

      @@babyjvadakkan5300 A type of RNN that Google used to use (or still does?) for language translation before we got transformers

    • @Bencurlis  3 months ago +12

      It is more of a generalization of both LSTMs and Attention, it is theoretically much more powerful IMO

    • @keypey8256  3 months ago +3

      It's definitely an interesting idea

  • @heavenrvne888  3 months ago +28

    holy shit this method is so interesting. and the way they encapsulated the entire idea into the title LOL!

  • @karlkastor  3 months ago +2

    I love that you tell us how the method in the paper roughly works. A lot of YouTube channels just say this new technique is better without any explanation and just show results, so I have to skim the paper to get the gist of it.

    • @krollo8953  3 months ago

      Yup, makes you feel like you're actually learning something rather than just getting information without enough context

  • @mikairu2944  3 months ago +168

    "too technical for this video"
    man you lost me at the thumbnail

    • @cdkw2  3 months ago +2

      me too bro, yet I still watched the entire video 💀

    • @Dannydrinkbottom  3 months ago +1

      My brother speaking Greek

  • @OumarDicko-c5i  3 months ago +27

    As an IA, thank you for teaching me this, I will use it to train myself

    • @ginqus  3 months ago +12

      intelligently artificial

    • @IN-pr3lw  3 months ago +6

      @@ginqus inteligencia artificial

    • @truongao5425  3 months ago

      intelligent anti-africa

    • @TheRealUsername  3 months ago

      @@truongao5425 😂 troll

  • @FunBotan  3 months ago +2

    I would never have suspected that this video would help me write my PhD, but "compression heuristic" is exactly the term I needed but didn't know of to express my idea. Thank you!

  • @manuelburghartz5263  3 months ago +2

    This channel explaining AI and using anime references in the visuals is exactly what I needed. Great video!

  • @TheNewton  3 months ago +2

    Good, short, dense overview of an even denser subject matter.
    Still waiting for the paper that modularizes all these component processes and flows, then runs training against all the permutations to bootstrap itself.

  • @divandrey-u3q  3 months ago +2

    As always, thank you for the video! I really appreciate the amount of technical details here. Don't know why other people complain but I love it!

  • @OperationDarkside  3 months ago +25

    Let's put transformers into transformers. Maybe we end up with baby transformers.

    • @revimfadli4666  3 months ago +1

      Ah yes, hot transformers in transformers action

  • @FaultyTwo  3 months ago +9

    "Mom! They are adding more weights to the models again!"

  • @DarrenReidAu  3 months ago

    It’s trainable models all the way down! Great video, thanks!

  • @cdkw2  3 months ago +1

    2:32 Waiting for bycloud to be on that page like others!

  • @marshallodom1388  3 months ago

    I got up to 6 minutes and loved the ride! Gonna have a snack and p and dive right back in!

  • @XenoCrimson-uv8uz  3 months ago +29

    How do we know the ones complaining about the bots in the YouTube comments aren't bots themselves?

    • @Alice_Fumo  3 months ago +11

      I have definitely seen bots complain about bots before. In fact, you could also be a bot. Who knows at this point?

    • @picmotion442  3 months ago +1

      I might be a bot

    • @leftybot7846  3 months ago

      I'm definitely not a bot, what a stupid idea.

    • @turgor127  3 months ago

      Ban both then. Spamming is annoying either way.

    • @Cloudruler_  3 months ago +1

      The interesting thing is it's probably cheaper for a bot to spam "bot" than create LLM comments.

  • @athul_c1375  3 months ago +10

    It's some mamba jamba

  • @QuantumConundrum  3 months ago

    More videos like this, please.

  • @SimGunther  3 months ago +2

    Audience: Less reading, more technical content!
    Also audience: AAAAAAHH, MY EYES! TOO TECHNICAL FOR MY EYES AND EARS! 😢

  • @fnytnqsladcgqlefzcqxlzlcgj9220  3 months ago +1

    Perfect amount of complexity, please do not make your longer videos like this more simple. I'm not involved in any form of computer science, but I've kept up with AI since TensorFlow was brand new, and I understood almost everything first try

  • @HarperChisari  3 months ago

    TTT is literally short term memory. Wild.

  • @heavenrvne888  3 months ago

    that intro was amazing

  • @JorgetePanete  3 months ago +3

    6:08 it resembles Trouble in Terrorist Town

  • @registered_dodo1743  3 months ago +1

    I love words.

  • @ismailnejjar  3 months ago

    Love the video!!

  • @guilhermecastro3671  3 months ago

    Cool video, for a beginner all these terms together seem very technical, can someone suggest a playlist to learn more in depth about these topics?

  • @StefanReich  3 months ago +6

    Super well explained. And full of memes

  • @CraftMine1000  3 months ago

    Training on test data... unless I severely misunderstand this, I'm just going to say: "yikes, nope, get out, and don't come back"

  • @samarthpatel8377  3 months ago +27

    Sooooo many bot comments!

    • @bolon667  3 months ago +6

      Posting innocent comments so they can change them into ads later

    • @samarthpatel8377  3 months ago

      @@bolon667 I think you are right. The comments I noticed earlier are gone now.

  • @spoonikle  3 months ago

    Earth shattering.

  • @fra4897  3 months ago +1

    great video, but transformers in practice do not have quadratic complexity, only if you implement them in the vanilla way
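
A caveat worth making concrete here: exact-attention kernels like FlashAttention avoid materializing the full n×n score matrix, so the quadratic memory cost goes away, but the arithmetic is still O(n²) in sequence length. Below is a minimal NumPy sketch of just the query-chunking part of that idea, a toy under stated assumptions (real kernels also tile the keys and fuse the softmax, which this skips):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def vanilla_attention(q, k, v):
    # materializes the full (n, n) score matrix: quadratic time AND memory
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def chunked_attention(q, k, v, chunk=128):
    # identical output, but only a (chunk, n) slice of the scores exists
    # at any moment, so peak memory stops growing quadratically with n;
    # the total arithmetic is still O(n^2)
    scale = np.sqrt(q.shape[-1])
    out = [softmax(q[i:i + chunk] @ k.T / scale) @ v
           for i in range(0, len(q), chunk)]
    return np.concatenate(out)

rng = np.random.default_rng(0)
q, k, v = rng.normal(size=(3, 512, 64))
assert np.allclose(vanilla_attention(q, k, v), chunked_attention(q, k, v))
```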

  • @jondo7680  3 months ago

    I want a TTT-Linear (T) with TTT-MLP (M) as its inner loop.

  • @Ryu-ix8qs  3 months ago

    Good video, thanks

  • @dhillaz  3 months ago +1

    I know some of these words!

  • @quocanhnguyen7275  3 months ago +1

    I tried to read this when you wrote about it in your newsletter, but it was not an easy paper

  • @Dom-zy1qy  3 months ago

    Whenever a new architecture takes over, the tech companies heavily invested into developing hardware specifically optimized for the transformer architecture are gonna be sad.

  • @noctarin1516  3 months ago

    Nahh they actually cooking with this architecture though

  • @flinkstiff  3 months ago

    Bumblebee is my favorite

  • @sh4ny1  3 months ago

    4:11 Why not use a wavelet transform for this?
    I think it would be useful here since

  • @bloomp7999  3 months ago +1

    did I understand this

  • @koktszfung  3 months ago

    Wouldn't this model be slow in operation if it has to train on the context?

  • @bobsoup2319  3 months ago +4

    Bro this model is too complicated to be simplified more. Keep up the complexity, it's what makes it interesting

  • @ricardocosta9336  3 months ago +6

    Dude no kidding, I came up with something similar a month ago. In concept. I'm afraid I have a limited number of insights in my lifetime, and without time to pursue them I will never make any difference in the world. 😢 But hey, that also proves, to me at least, that my math intuition is on point. 😅

    • @ccash3290  3 months ago +2

      A lot of people have zero insights.
      It's important to work on your ideas to test them in reality

    • @anywallsocket  3 months ago +2

      If you thought of it, other people thought of it or will, so don't worry about not being the one who gets credit; what matters is that the idea is in the memosphere

  • @pladselsker8340  3 months ago +1

    Imagine giving money to a service for a sense of security because it is now the status quo to let every substantial company out there infringe on your privacy rights.
    Just a thought. What parallel universe is this?

  • @Vagabundo96  3 months ago

    This is crazy

  • @scientificaly_restful_one  3 months ago +1

    Well, a year or so ago I had thoughts about going into ML, but you have lost me on this one. 👍
    I guess it's only gonna get more complicated from now on.

    • @kamilbxl6  3 months ago

      Nowadays it's easier to learn ML than ever. You should start with something simple enough that you understand around 80% of it, with only about 20% actually being new.
      There are lots of freely shared classes from MIT, Stanford etc., lots of tutorials, examples, code documentation.
      First get a general yet simple view of NNs, then choose what you'd like to specialize in: image recognition, text or something else

  • @LukasNitzsche  3 months ago

    Does this relate in any way to liquid time-constant neural networks?

  • @someonetrustme161  3 months ago

    so nobody's gonna talk about how we just got rickrolled at 3:43?

  • @pedrogorilla483  3 months ago +3

    I watched half of the video and this is too technical for me. I’m skipping this one. Congrats to everyone who understands this video!

    • @bycloudAI  3 months ago +7

      it's like RNN's hidden states are just ML models (see the sketch after this thread)
      thanks for watching till halfway tho

    • @MuhammadakbarAK47  3 months ago +3

      Just watch it 3 times

    • @sashank224  3 months ago

      @bycloudAI I'll explain bro, hold up, I'm getting what he's saying. You need to break it down in simple terms that relate to real-world apps. Visualize.

    • @homeyworkey  3 months ago

      @@bycloudAI btw this was posted on r/singularity where there are more normies. Obviously you need normies if you want growth though, but any technical video is automatically going to have a very niche audience, understandably so, so you probably don't mind that as well.
      I mean, I watch your stuff and most of it goes over my head, but it's interesting regardless. Just letting you know the feedback here is kind of skewed.

  • @narpwa  3 months ago

    my brain is exploding send help

  • @BooleanDisorder  3 months ago +9

    Next up is cisformers

  • @David-lp3qy  3 months ago

    MAMBA IF YOU CAN HEAR ME PLEASE SAVE US

  • @PhilsArtVibes  3 months ago +1

    No, no, no, I do not want to add neural networks to recursion, I JUST BEGAN TO UNDERSTAND RECURSION DON'T DO THIS TO ME!!!

  • @Acceleratedpayloads  3 months ago

    This looks like block recurrent transformers by DL Hutchens

  • @simonesborrinpz  3 months ago

    good videos👍

  • @-mwolf  3 months ago

    tell me the current paradigm is hitting a dead end without telling me

  • @Wobbothe3rd  3 months ago +3

    The human brain is a recurrent neural network, not a transformer. Eventually, recurrent will win.

    • @athul_c1375  3 months ago +8

      But who said the human brain is better than the transformer?

  • @krollo8953  3 months ago

    Lol that's an intense amount of memeage

  • @Guedez1  3 months ago +1

    Yeah, if you made up everything you said in the video I wouldn't be able to tell at all. Stuff is getting harder and harder to understand.

  • @FenrirRobu  3 months ago

    Tho didn't they warn us against meta-optimizers due to the alignment becoming impossible?

  • @donson3326  3 months ago

    Short answer: no

  • @notnotandrew  3 months ago

    Yo dawg, I heard you like ML models...

  • @4.0.4  3 months ago

    It seems very convoluted, but I guess it should learn with less data? That could be good for startups that don't have big datasets.

  • @ONDANOTA  3 months ago

    why is every LLM's OUTPUT context window fixed to 4096?

    • @geli95us  3 months ago +2

      AFAIK, output context windows are not a thing for the models themselves; the model is just called once for every token it has to generate, and you can repeat that process a million times if you want. However, it's not useful for the LLM to output text up to the point where its prompt falls out of its context window, so in the early days the "output window" was just set to whatever the model's context window was. Nowadays it's probably capped for economic reasons: LLMs get more expensive the longer the input is, so by limiting the output window, they force you to pay for tokens several times, once as the model's output, and subsequent times as input to the next outputs (see the sketch after this thread)

    • @spoonikle  3 months ago +1

      To stop it. While still giving enough space to make “satisfying” answers.
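
A toy sketch of the point above: the "output window" is just a loop bound around repeated next-token calls, not a property of the weights. `model`, `eos_id` and the ids here are hypothetical stand-ins for a real decoder:

```python
# generation = call the model once per token until a cap or EOS is hit
def generate(model, prompt_ids, max_new_tokens=4096, eos_id=0):
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):   # the cap lives here, not in the model
        next_id = model(ids)          # one forward pass -> one more token
        ids.append(next_id)
        if next_id == eos_id:         # the model may also stop on its own
            break
    return ids

# dummy "model" that counts up and never emits EOS: the cap does the stopping
print(len(generate(lambda ids: ids[-1] + 1, [1, 2], max_new_tokens=8)))  # prints 10
```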

  • @DanielJoyce  3 months ago

    A single brain neuron needs something like 5 layers or so to encode its behavior. So this kinda maps each node to something like a neuron.
    I know biological features map poorly to neural nets, but neurons in the brain change how and when they fire as the brain learns.

  • @jymcaballero5748  3 months ago

    just give them more memory!

  • @GodbornNoven  3 months ago +1

    Nice explanations, but go easy on the vocabulary. I don't reckon every Joe out there will understand all the terms. The pacing is too quick too.

  • @anywallsocket  3 months ago

    Wouldn’t that take forever to train??

  • @punk3900  3 months ago

    I hate such advertisement shockers that are not separated adequately from the main material. Not gonna subscribe to a channel that does that. 😢

  • @pmosg9649  3 months ago

    Great 😀

  • @kingki1953  3 months ago +4

    You should consider banning bots on your channel.

    • @kingki1953  3 months ago +2

      You just uploaded and 3 bots have already commented, the dark internet is scary 😢

    • @StefanReich  3 months ago

      @@kingki1953 Actually the dark internet is really lame right now. You can spot these comments from a mile away:
      "Your videos are always so informative and interesting! Thank you for that!"
      "Thank you for your work! Your videos are always top notch!"
      "Always a pleasure to watch your videos! I will be looking forward to new episodes!"

  • @falsechord  3 months ago

    fractal AI models

  • @boricuaxflow9669  3 months ago

    Are we all botted comments?

  • @pauljones9150  3 months ago

    I'm here for the waifu memes
    Good video tho

  • @algorithmblessedboy4831  3 months ago

    guys, I'm in high school and I'm trying to choose a career path. My no.1 choice, considering the things I like and that I'm good at, is becoming an AI researcher. Can anyone in the academic world tell me if it would be a fun job or not?

    • @user-vg2ui3wg8n  3 months ago

      It definitely is. But the field is getting increasingly complex, fast-paced, and hyper-competitive. I'd recommend studying computer science and mathematics, since you will not be able to compete in this field without a very strong mathematical background. Except for that, go for it. I'm a researcher in parallel processing and numerical high-performance computing. It is definitely fun and rewarding, but be prepared for a painful journey.

  • @AlphaProto  3 months ago +1

    This video was too much for me.

  • @multipurposepaperbox  3 months ago

    damn yeah that's AI stuff right hahaaa? tbh I understand a quarter of this, but I really enjoy a lot of your videos

  • @muscifede  3 months ago +11

    look at the amount of bots lol

    • @StefanReich  3 months ago

      This is nothing. Check out any popular video about trading

  • @themultiverse5447  3 months ago

    what?

  • @mariusj.2192  3 months ago

    The quadratic complexity is not the main problem of current LLMs. It's that they are dog sh*t at reasoning (and tasks that depend on it), and better scaling with context length won't solve that.

  • @09jake12  2 months ago

    leenear

  • @mikemaldanado6015  3 months ago

    dude are all your videos infomercials for half the video????????

  • @mitulsolanki6066  3 months ago

    would love to collaborate and learn with you

  • @mautkajuari  3 months ago +2

    First!

    • @Ps5GamerUk  3 months ago

      Nah, the bot beat you to first bro

  • @lynx_pinata  3 months ago

    Bro, your thumbnail and Fireship's thumbnail look similar. Someone has to change/alter their thumbnail

  • @aspenlog7484  3 months ago

    Yeah I did not understand shit. Basically, better architecture

  • @alexijohansen  3 months ago +1

    "unlock linear complexity having expressive memory bla bla bla bla bla bla" was this written by ChatGPT?

    • @Miaumiau3333  3 months ago +4

      It sounds human to me, even if it contains some technical jargon. ChatGPT writes differently

  • @Sculptoroid  3 months ago

    what a load of bollocks