DeepMind’s New AI Remembers 10,000,000 Tokens!

  • Published: 28 Sep 2024
  • ❤️ Check out Microsoft Azure AI and try it out for free:
    azure.microsof...
    📝 The paper "Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context" is available here:
    storage.google...
    📝 The paper "Gemma: Open Models Based on Gemini Research and Technology" is available here:
    storage.google...
    Try Gemma:
    huggingface.co...
    I would like to send a big thank you to Google DeepMind for providing access to Gemini 1.5 Pro to test it out.
    Sources:
    / 1760468624706351383
    / 1761113846520131816
    simonwillison....
    / 1761459057641009354
    📝 My paper on simulations that look almost like reality is available for free here:
    rdcu.be/cWPfD
    Or this is the orig. Nature Physics link with clickable citations:
    www.nature.com...
    🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
    Alex Balfanz, Alex Haro, B Shang, Benji Rabhan, Gaston Ingaramo, Gordon Child, John Le, Kyle Davis, Lukas Biewald, Martin, Michael Albrecht, Michael Tedder, Owen Skarpness, Richard Sundvall, Taras Bobrovytsky, Ted Johnson, Thomas Krcmar, Tybie Fitzhugh, Ueli Gallizzi.
    If you wish to appear here or pick up other perks, click here: / twominutepapers
    Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu
    Károly Zsolnai-Fehér's research works: cg.tuwien.ac.a...
    Twitter: / twominutepapers
    #deepmind
  • Science

Comments • 260

  • @HappyHater
    @HappyHater 5 months ago +254

    What a tiiiiiimeeeee to be aliiiiiiiiiiiiive!

    • @penguinscanfly5796
      @penguinscanfly5796 5 months ago +5

      What a time to be alive!

    • @hegedusuk
      @hegedusuk 5 months ago +1

      It gets eeeeven better

    • @SaumonDuLundi
      @SaumonDuLundi 5 months ago

      Quelle époque où vivre !
      (Wait ... It sounds better in English...)
      What a time to be alive!

    • @nientaiho7997
      @nientaiho7997 5 months ago

      Really what a good time to be alive!

  • @theengineeringmonkey407
    @theengineeringmonkey407 5 months ago +254

    I love how the goals for this technology are so high that we get disappointed when it takes an AI 1.5 hours to remember everything in 10 movies.

    • @andreavitale2845
      @andreavitale2845 5 months ago +16

      Right 😂?! Society's advancement and the competitive systems we live in demand it, unfortunately.

    • @krishmav
      @krishmav 5 months ago +20

      In 5 years these numbers will sound ancient. An AI will probably be able to watch a movie in a second.

    • @mgord9518
      @mgord9518 5 months ago +8

      @krishmav Pretty hopeful estimate there; the speed of AI is largely limited by the hardware it runs on, and it's highly unlikely that hardware will be hundreds of times faster within 5 years.

    • @КарлКарыч
      @КарлКарыч 5 months ago +1

      True

    • @sownheard
      @sownheard 5 months ago

      @mgord9518 The newest Nvidia chip announced is already 100× better than consumer hardware.
      It's currently for servers.

  • @KAMI_24
    @KAMI_24 5 months ago +90

    I wish someone would feed an AI heaps of dolphin or whale song. Maybe we could actually find a way to understand what they communicate, and maybe talk back to them.

    • @LucidiaRising
      @LucidiaRising 5 months ago +42

      there is actually a project going on right now that is attempting to do exactly that - use AI to try to talk to whales 🙂

    • @kindaovermyhead
      @kindaovermyhead 5 months ago

      Check out Whale-SETI! We actually started doing this!

    • @iminumst7827
      @iminumst7827 5 months ago +10

      That wouldn't work. You'd also need to feed it context of what the creature is doing, and no amount of words can define all the nuance and possible interpretations, so the AI would only be able to learn a simplified version of their basic emotions like fear, happiness, or grief.
      For example, hypothetically, let's say dolphins remarkably have unique names and call each other by them. A human might observe this behavior and describe it as a generic greeting, not knowing dolphins have names. The AI would see that dolphins make random sounds during greetings and assume dolphins greet by making random excitable noise. The only way to actually discover this is through targeted tests; it's not something that can be spontaneously discovered in existing data.

    • @CodyRay295
      @CodyRay295 5 months ago +1

      @dawiedekabouter5733 How are you gonna do an fMRI on a sea creature 😂 let alone a whale?

    • @technokicksyourass
      @technokicksyourass 5 months ago +2

      @iminumst7827 The way they are doing it is noting down what is happening around the whales at the time, then putting that alongside the communication between them.

  • @appsenence9244
    @appsenence9244 5 months ago +209

    Wtf? It takes me 20 hours to watch 10 movies and then another 10 hours to write an essay about them. You're telling me this AI can do it all in only 1.5 hours??? How is that not an awesome thing?

    • @tuseroni6085
      @tuseroni6085 5 months ago +40

      I think the issue is that having watched 10 movies, EVERYTHING it does is now slow, even tasks unrelated to those videos. It's as if you watched 10 videos and now can't hold a conversation, since it takes you 1.5 hours to process anything anyone tells you.

    • @mikopiko
      @mikopiko 5 months ago +29

      You can write a whole essay in 1 hour? That's incredible.

    • @abanoubmg3698
      @abanoubmg3698 5 months ago +2

      I think the issue is that the response to each prompt would take 10x that.

    • @appsenence9244
      @appsenence9244 5 months ago

      @tuseroni6085 Is that really how it works?

    • @appsenence9244
      @appsenence9244 5 months ago

      @@mikopiko What do you mean?

  • @Zebred2001
    @Zebred2001 5 months ago +6

    I've said for a while now that this technology should be used in a massive global rare-language rescue effort.

    • @johnaldchaffinch3417
      @johnaldchaffinch3417 5 months ago +1

      And solving aging and cancer.
      In an interview, Ilya, chief scientist of OpenAI, explained with certainty how to solve climate change with carbon capture. His certainty about the best method made me think he'd already asked a superior AI for the solution.

  • @pqpvaisefude
    @pqpvaisefude 5 months ago +4

    Nice! Now we can build a Solo Leveling AR game with AI :)
    Maybe wearing smart glasses to capture the exercises.

  • @gridvid
    @gridvid 5 months ago +12

    Have you already covered "Suno", the AI music generator?

    • @Khann_2102
      @Khann_2102 5 months ago +6

      I tried it and it's good

    • @MichielvanderBlonk
      @MichielvanderBlonk 5 months ago +8

      He did cover that tech about a year ago, before it was commercial.

  • @islammohamed1441
    @islammohamed1441 5 months ago +13

    What a time to use AI!

  • @victorfsaaa
    @victorfsaaa 5 months ago +10

    Your voice is weird in this video. Is it AI-generated?

  • @blengi
    @blengi 5 months ago +2

    You'd think mega-context inputs would be reformulated by an executive AI into various higher-level abstractions, like some sort of linguistic computer-code representation of the context that modularises language content into functions, structures, libraries, etc., so that the vast bulk of tokens aren't even needed and are replaced with compact meta-token instantiations with even higher latent-space salience within the newly factored smart context.

    • @Walter5850
      @Walter5850 5 months ago +2

      Joint embedding predictive architecture aims to do that

  • @tannenbaumxy
    @tannenbaumxy 5 months ago +6

    Looking at the problem of processing time with longer context, what do you think about Mamba as a solution for a subquadratic architecture for future LLMs? Maybe you could make a video about Jamba in the future.

    • @venaist
      @venaist 5 months ago +1

      What is all the mumbo jumbo about?

    • @nocaptcha8110
      @nocaptcha8110 5 months ago +1

      Maybe use them to make the AI do some samba

    • @tannenbaumxy
      @tannenbaumxy 5 months ago

      @@Faizan29353 I could not find a fireship video about mamba. Did you mean bycloud?
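The subquadratic point in this thread can be made concrete with some back-of-the-envelope arithmetic. A minimal cost sketch, assuming hypothetical constants (model width `d`, state size `s`) and ignoring all real architectural detail:

```python
# Back-of-the-envelope cost comparison (hypothetical constants).
# Self-attention compares every token with every other token: O(n^2 * d).
# A state-space model (Mamba-style) scans the sequence once: O(n * d * s).

def attention_cost(n: int, d: int = 128) -> int:
    """Pairwise token interactions for context length n."""
    return n * n * d

def ssm_cost(n: int, d: int = 128, s: int = 16) -> int:
    """One recurrent scan with a fixed-size state."""
    return n * d * s

for n in (10_000, 1_000_000, 10_000_000):
    ratio = attention_cost(n) / ssm_cost(n)
    print(f"n={n:>10,}: attention is ~{ratio:,.0f}x the scan cost")
```

The gap grows linearly with context length, which is why scan-based architectures look attractive for million-token windows.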

  • @samsabruskongen
    @samsabruskongen 5 months ago +3

    "aaand"x100000000

  • @SEB1991SEB
    @SEB1991SEB 5 months ago +1

    In the future AI will be able to consume all the media of a certain franchise (eg. Marvel) so it can help in building the Wiki of that franchise. You’ll also be able to ask it extremely specific obscure questions that even the most diehard fans and prolific Wiki contributors wouldn’t know.

    • @I_SEE_RED
      @I_SEE_RED 5 months ago

      this is already possible

  • @PankajDoharey
    @PankajDoharey 5 months ago

    Mamba long context research is the future.

  • @nathanaeltrimm2720
    @nathanaeltrimm2720 5 months ago +1

    Just run them in parallel to get around the 100x quadratic issue.

  • @Joshua-ew6ks
    @Joshua-ew6ks 5 months ago

    I want to use AI to help my readers ask questions about my books (the ones I will be writing), to help me avoid continuity issues, and to help with editing.

  • @botan1996
    @botan1996 5 months ago

    I'm unsure about AI context levels... I used the GPT 4.5 API with a 128k context window and it never followed any of my prompts; it basically just summarized the book or video transcript I gave it. Basically unusable...

  • @Brause_Market
    @Brause_Market 5 months ago

    I love the "aaaaand" so so much ; )

  • @galvinvoltag
    @galvinvoltag 5 months ago +1

    ChatGPT must be having a hard life.
    Imagine having a new brother or sister everyday...

  • @jeffg4686
    @jeffg4686 5 months ago

    No, it didn't give me a warm and tingling feeling.
    IT STOLE MY DAMN JOB...

  • @foxt9151
    @foxt9151 5 months ago

    1-bit LLM architectures and AI inference cards will probably reduce inference time by orders of magnitude; see Groq.

  • @l.halawani
    @l.halawani 5 months ago

    I'm running out of imagination for what we will be able to do two papers down the line...

  • @fohsen
    @fohsen 5 months ago

    Can you imagine one integrated LLM that also holds all the academic papers of every field, so you have not only data but scientific data? I don't know if I'm explaining it right, sorry for the bad English.

    • @loopingdope
      @loopingdope 5 months ago

      You mean like scientific analyses and results?

    • @fohsen
      @fohsen 5 months ago

      @loopingdope Yeah, I don't know if the state of the art can have its research in these academic hubs.

  • @harryconcat
    @harryconcat 5 months ago

    It still scares me a lot: instead of watching 10 movies straight, it does it in only an hour!

  • @john_hunter_
    @john_hunter_ 5 months ago

    Instead of trying to remember everything, what if it continually summarised the key points of everything it has seen?
    That way its memory would be more efficient at the cost of accuracy.
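The trade-off described above can be sketched as a rolling memory: recent text is kept verbatim, older text is folded into a lossy bounded summary. A minimal sketch; `summarize` is a hypothetical stand-in for a real model call, and the character-based budget is only illustrative:

```python
# Rolling memory sketch: exact short-term buffer + lossy long-term summary.
# `summarize` is a hypothetical placeholder for an LLM call; here it just
# truncates, which is enough to show the bounded-memory behavior.

def summarize(text: str, budget: int) -> str:
    return text[:budget] + "..." if len(text) > budget else text

class RollingMemory:
    def __init__(self, window: int = 1000, summary_budget: int = 200):
        self.window = window            # verbatim short-term capacity
        self.summary_budget = summary_budget
        self.summary = ""               # lossy long-term memory
        self.recent = ""                # exact short-term memory

    def add(self, chunk: str) -> None:
        self.recent += chunk
        if len(self.recent) > self.window:
            # Fold the overflow into the summary, trading accuracy for space.
            overflow = self.recent[:-self.window]
            self.recent = self.recent[-self.window:]
            self.summary = summarize(self.summary + overflow, self.summary_budget)

    def context(self) -> str:
        return self.summary + self.recent
```

Total memory stays bounded by `window + summary_budget` no matter how much is added, at the cost of detail in the older material, which is exactly the accuracy trade the comment describes.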

  • @frun
    @frun 5 months ago +3

    An artificial intelligence twin of Károly Zsolnai-Fehér is narrating the video? 😮

    • @TwoMinutePapers
      @TwoMinutePapers 5 months ago +14

      I have never used any of them - every single episode is me flipping out behind the microphone. 😀

    • @frun
      @frun 5 months ago

      @TwoMinutePapers They are able to distill data from NNs: ruclips.net/video/fk2r8y5TfNY/видео.html What's your opinion on that?

  • @Trymsi
    @Trymsi 5 months ago +4

    Please, stop with the extremely exaggerated, staccato voice modulation, it's way too much, I can't stand it any longer.

  • @lordm31
    @lordm31 5 months ago

    congrats on the new sponsor mister 2 minutes

  • @HelamanGile
    @HelamanGile 5 months ago +1

    What is the meaning of life and everything
    Oh just give me a million years to think about that
    Oh great machine what is the meaning of life and everything
    42

  • @danialothman
    @danialothman 5 months ago

    Claude is still not available in my region :(

  • @Octo_Fractalis
    @Octo_Fractalis 5 months ago

    Mamba and RWKV are not quadratic, I think; they're not based on transformers.

  • @dhanadhana
    @dhanadhana 5 months ago

    No AI can beat the math puzzle 24.

  • @GenaroCamele
    @GenaroCamele 5 months ago

    I miss when your videos were technical papers about simulations and AI explained for ignorant people like me. Now this channel looks like a quick news portal about chatbots, just like any other that can be found on YouTube.

  • @jackflash6377
    @jackflash6377 5 months ago

    I bet that will be easy to jailbreak.
    The more tokens, the easier it is to jailbreak.

  • @apertioopening3425
    @apertioopening3425 5 months ago +1

    This is all very exciting, and I've been on the AI hype train for a while, but I saw a video recently from The Hated One that claimed that AI uses an unsustainable amount of water. Thoughts?

  • @jerrygreenest
    @jerrygreenest 5 months ago

    If it doesn't require an entire server farm to keep up, like 512 GB of RAM and 100 TB of storage, and therefore a $15/month subscription to pay the bills, then it's actually cool.

    • @apoage
      @apoage 5 months ago

      It's actually 512 GB of VRAM... 512 GB of RAM would be kind of OK.

    • @jerrygreenest
      @jerrygreenest 5 months ago

      @apoage That's not true. VRAM is only required by image-generating neural networks like Midjourney or SD; language models require RAM.

  • @imjody
    @imjody 5 months ago

    Were they using it to lift weights... and biases? 🤭😁

  • @mikopiko
    @mikopiko 5 months ago

    Could the advancement with the GPTs help solve the Voynich manuscript?

    • @bluthammer1442
      @bluthammer1442 5 months ago

      it might, although ...why

    • @realWorsin
      @realWorsin 5 months ago

      No.

    • @mikopiko
      @mikopiko 5 months ago

      @@bluthammer1442 To extract the contents of the book.

    • @EricDMMiller
      @EricDMMiller 5 months ago

      There's no meaning behind it. It's just horseshit.

    • @mikopiko
      @mikopiko 5 months ago

      @@EricDMMiller How'd you draw that conclusion?

  • @jichaelmorgan3796
    @jichaelmorgan3796 5 months ago

    Most of the world is operating at a pace and level of progress we were operating on decades ago. Now, people diving into ai are progressing faster than ever. Do we call this spaghettification of society?

  • @holleey
    @holleey 5 months ago

    3:45 So exactly how is that not practical?
    A human would take weeks to summarize the contents of 10 movies, after all.
    Or was that just a poor example?

  • @bergrugu
    @bergrugu 5 months ago +1

    Train an AI to understand and translate hieroglyphics pleaseeee

  • @thelasttellurian
    @thelasttellurian 5 months ago

    I don't remember what I ate for breakfast

  • @punk3900
    @punk3900 5 months ago

    That's also my impression that Claude 3 is superior at coding.

  • @Ronnypetson
    @Ronnypetson 5 months ago

    What about project Nimbus?

  • @teamvigod
    @teamvigod 5 months ago +1

    WHAT A TIME TO BE ALIVE!!! Timestamp: ruclips.net/video/Z_EliVUkuFA/видео.htmlsi=XO0lmlWhieGHi20T&t=339

  • @mutzelmann
    @mutzelmann 5 months ago

    good job

  • @lapissea1190
    @lapissea1190 5 months ago

    I know it's a small thing and I don't mean to be rude but the thumbnails with the old man in a graduation outfit doing an :O thumbnail face is kinda disturbing. I don't know why

  • @pranjal9830
    @pranjal9830 5 months ago

    Why not use AI to make the videos about AI research? It would save a lot of time. Just fine-tune a model on millions of tokens of data; the AI would be able to write the script in your style, with no detectable difference between the writer's real style and the AI's.

  • @g0d182
    @g0d182 5 months ago

    cool

  • @Youtube-Handle-256
    @Youtube-Handle-256 5 months ago

    I await the day where there will be a video about Tesla robotaxi 😊

  • @anudeepk7390
    @anudeepk7390 5 months ago

    Bro I’m not excited for this change

  • @anthonylosego
    @anthonylosego 5 months ago

    MS promotes??? I'm a little saddened.

  • @Fredgast6
    @Fredgast6 5 months ago

    bro please get rid of the intro, it's always way out of place

  • @SinanWP
    @SinanWP 5 months ago

    Claude 3 is the best coder. Gemini is the laziest, dumbest model. Yes, it can remember better, but who cares? It doesn't give the best results for my use cases.
    It's just as lazy as GPT-4, maybe even more sometimes.

    • @fred6907
      @fred6907 5 months ago

      It's also woke garbage. If I want to be brainwashed by Marxists, I'll just watch a recent Disney movie.

  • @marbasfpv4639
    @marbasfpv4639 5 months ago +4

    Please drop the staccato speaking so I can finish watching one of your videos without being annoyed.

    • @PantherDave
      @PantherDave 5 months ago +2

      The content always keeps me coming back, but his way of speaking really gives me anxiety. I really hope he works on it.

  • @nhatmnguyen93
    @nhatmnguyen93 5 months ago

    Who actually gives a damn about LLMs in 2024?
    Nothing has been exciting since 2020.

  • @Robino194
    @Robino194 5 months ago

    Gemini > ChatGPT

  • @semenerohin4048
    @semenerohin4048 5 months ago

    Did they fix the black Vikings thing?

    • @fred6907
      @fred6907 5 months ago

      No way I'm using that woke Marxist AI crap. Google is infiltrated to the core, nothing can fix that level of indoctrination.

  • @shadowdragon3521
    @shadowdragon3521 5 months ago +22

    I'd like to see LLMs have a go at learning extinct languages with great historical significance like Sumerian and Akkadian

  • @derlumpenhund
    @derlumpenhund 5 months ago +5

    I personally am already tired of the same old AI-generated thumbnails.

  • @tuseroni6085
    @tuseroni6085 5 months ago +37

    perhaps the solution to the quadratic complexity is to implement a short term-long term memory system with an artificial hippocampus to help it remember.

    • @mb-3faze
      @mb-3faze 5 months ago +5

      I'll get right on it! :)

    • @Milkshakman
      @Milkshakman 5 months ago +6

      Yes people have been saying this for years

    • @Jeremy-Ai
      @Jeremy-Ai 5 months ago +5

      Yes, and no.
      Sometimes it is good to forget.
      Our brains have evolved to forget.
      I suspect there is a better balance.
      Never forgetting would be a massive burden.
      So yes, short term long term, but not everything.
      It is good to let go without even trying. ;)
      Take care,
      Jeremy

    • @plainpixels
      @plainpixels 5 months ago +1

      A variety of approaches that improve on this have been around for quite a while.

    • @MineAnimator
      @MineAnimator 5 months ago +2

      @Jeremy-Ai Our memory is based on feelings; intense events are easier to remember. But artificial intelligence has no feelings, so maybe creating them is the next big step toward increasing its intelligence. That's assuming the goal is to create conscious artificial life and not just more precise tools.

  • @mertaliyigit3288
    @mertaliyigit3288 5 months ago +11

    "What a time to be alive!" In 10 years we'll start saying "What a time to be dead!"

  • @dingozi3428
    @dingozi3428 5 months ago +39

    “When you realize ‘Two Minute Papers’ has more plot twists than your favorite TV show, and all it took was a couple of minutes and some groundbreaking science.” 📜✨

    • @killyGHILLIE
      @killyGHILLIE 5 months ago +1

      imagine the plot twist only two papers down the line😅

    • @Mo_Mauve
      @Mo_Mauve 5 months ago +1

      Most of it doesn't surprise me because I'm intelligent & optimistic enough to expect most of it, I just watch 2 Minute Papers to find out what currently exists & exactly what it's like.

  • @gierdziui9003
    @gierdziui9003 5 months ago +6

    It's about time to feed such an AI all the necessary books and data about transformer NNs and have it make itself, but better.

  • @adomasjarmalavicius2808
    @adomasjarmalavicius2808 5 months ago +3

    ngl i am literally waiting till this guy gets replaced by ai

  • @Tanvir1337
    @Tanvir1337 5 months ago +2

    What a tiiiiiimeeeee to be aliiiiiiiiiiiiive!

  • @trycryptos1243
    @trycryptos1243 5 months ago +3

    I like that 'what a time to be alive' line popping up in all your videos... indeed it is.

  • @seriousbusiness2293
    @seriousbusiness2293 5 months ago +4

    The self-attention over 10 very different movies could easily be non-quadratic, though. Why would the movies Oppenheimer and Barbie need to cross-reference one another in the self-attention layer?
    OK, the details are more complex, but the basic idea of separation holds.

    • @alvaroluffy1
      @alvaroluffy1 5 months ago +3

      I'd say it's just a problem with the current architectures; I guess something better and more efficient will come after transformers and could even solve this problem.

    • @buildgameswithjon7641
      @buildgameswithjon7641 5 months ago +1

      If you didn't want them cross referenced then you wouldn't input both movies right? You would only input 10 movies at the same time if you want all 10 movies to be considered in your next prompts/responses for whatever reason that may be. If you just want to interact with one movie then you just input one movie.
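The separation idea in this thread can be quantified: if the 10 movies attend only within themselves, a block-diagonal attention pattern touches roughly a tenth of the token pairs. A toy count of attention scores, under the simplifying assumption of equal-length inputs:

```python
# Toy count of attention "pairs" (score computations) for n total tokens.
# Full self-attention couples everything; a block-diagonal mask keeps
# each of 10 independent documents attending only within itself.

def full_pairs(n: int) -> int:
    return n * n

def block_diagonal_pairs(n: int, blocks: int) -> int:
    per_block = n // blocks  # simplifying assumption: equal-length inputs
    return blocks * per_block * per_block

n = 1_000_000
print(full_pairs(n) // block_diagonal_pairs(n, 10))  # 10x fewer pairs
```

So separation buys a constant factor equal to the number of independent blocks; the per-block cost is still quadratic, which is the caveat the original comment concedes.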

  • @justynpryce
    @justynpryce 5 months ago +1

    Do you think you'll go back to talking about the developments in simulations and light transport? It seems almost every video is about generative AI now

  • @francius3103
    @francius3103 5 months ago +1

    I'm positive your voice is AI. No way you talk like that. Any day now I'm expecting a video like "I fooled you for more than a year."
    But I'm not falling for it.

  • @dreamphoenix
    @dreamphoenix 5 months ago +1

    Wow. Thank you.

  • @countofst.germain6417
    @countofst.germain6417 5 months ago +4

    There's a lot here that's wrong or unexplained. When you say one movie: it was an old movie that runs at a few fps; it can't take much video at a reasonable fps, I think around 10 minutes. And it can't speak the language as well as a native speaker; it can translate it as well as a human given the same amount of info, a translation book. I think these are pretty important differences, and that's just what I can remember off the top of my head.

    • @alvaroluffy1
      @alvaroluffy1 5 months ago +1

      Even if it's an old movie with fewer fps, it still lasts at least an hour, so for it to be 10 minutes, modern movies would need 6 times more fps. Modern movies are 24 fps, so those old movies would have to be 4 fps; that's not a movie, it's a PowerPoint. I believe you that they had fewer fps, but it could be as little as 12, and it was probably more, maybe 16 or so. So we're not looking at 1 hour vs 10 minutes; we're looking at 1 hour vs roughly 40 minutes. And it probably wasn't an hour-long film either, more like 1.5 hours, so it would be 1.5 hours vs 1 hour.

    • @countofst.germain6417
      @countofst.germain6417 5 months ago

      @alvaroluffy1 Many YouTube videos are 60 fps, and that is mostly what this will be used for, not low-fps silent movies. I think it is important to talk about actual use cases, not just hype it up under perfect conditions. So this 44-minute video would translate to slightly over 10 minutes.

    • @alvaroluffy1
      @alvaroluffy1 5 months ago

      @countofst.germain6417 He was talking in terms of movies. 10 movies is not 10 hour-long YouTube videos; you are the only one who made that wrong assumption. Nobody else is understanding the video in those terms; he is very clear about it.

    • @countofst.germain6417
      @countofst.germain6417 5 months ago

      @alvaroluffy1 As I said, he should be talking about actual use cases and not hyping it up under perfect conditions; there's very little reason to use this on movies. Also, my point about the translation is completely valid. As I said, I was doing this from memory; this fool was researching this for a video and made lots of mistakes.

    • @alvaroluffy1
      @alvaroluffy1 5 months ago

      @countofst.germain6417 First, I didn't talk about the translation, but he puts up text saying literally that: someone with the same info from the book. Second, if you can't look at the information in the terms it's provided in, then this is not your channel to watch. If you understand 10 movies as 10 hour-long YouTube videos, then stop watching this channel, or stop making those assumptions. And finally, you're talking like YouTube videos are going to be the main use case, but you have no idea, because no one has any idea; and even if it were true, there will be tens of use cases. This is not going to be one thing that uses YouTube videos and that's all. You're the fool for making all those assumptions and projecting them onto him. If you can't see things clearly, if you are like the media that can't stop lying about and exaggerating the scientific texts they read, then stop consuming this content, because you are going to constantly misinterpret it and spread disinformation wherever you go. So just stop being a fool.

  • @totoroben
    @totoroben 5 months ago +1

    Whenever he says "and" 😂

  • @bjrnolavfrytlogbjrnsen2868
    @bjrnolavfrytlogbjrnsen2868 5 months ago +1

    You deserve every watcher and all the praise in the world. From video 1 the content is concise and well put together. One of the better channels hands down.

  • @c02c02
    @c02c02 5 months ago +1

    3:47 Saying this feels either correct or like flying way too close to the sun. For years we were used to Jukebox taking four hours to generate audio that ultimately might not sound great, or even have any audible sound in it...

  • @jonathanmoore4837
    @jonathanmoore4837 5 months ago +1

    I was thinking: one movie takes me at least 2 hours to watch. The AI can watch 10 movies in less time!

  • @IncognitoWho
    @IncognitoWho 5 months ago +1

    If anyone is wondering, "What a time to be alive" is said at 5:40.

  • @MindBlowingXR
    @MindBlowingXR 5 months ago +1

    Great video!

  • @daviddonahue7690
    @daviddonahue7690 5 months ago +1

    I don't think quadratic complexity is actually a problem. It's only a problem when you want every token to "talk" to every other token. But what we really want is for all the input tokens (e.g. the input documents/movies/etc.) to talk to the output tokens (the model output). Then assume each input token only needs to read the previous K tokens for understanding. For N input tokens and M output tokens, that's O(KN) + O(NM) time, way less than O(N^2).

    • @vidyagaems4063
      @vidyagaems4063 5 months ago

      But that requires a new architecture, meaning it's no longer a transformer.
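The arithmetic in the comment above can be worked through directly; all sizes below are hypothetical, chosen only to show the scale of the gap:

```python
# The comment's cost model: N input tokens each attend to a local window
# of K previous tokens, and M output tokens attend to all N inputs.

def local_plus_cross(n: int, m: int, k: int) -> int:
    return k * n + n * m          # O(KN + NM)

def full_quadratic(n: int, m: int) -> int:
    total = n + m
    return total * total          # O((N+M)^2), every token to every token

n, m, k = 1_000_000, 1_000, 4_096
print(local_plus_cross(n, m, k))  # ~5.1e9 interactions
print(full_quadratic(n, m))       # ~1.0e12 interactions
```

With these numbers the windowed scheme is roughly 200x cheaper, though, as the reply notes, realizing it means changing the attention pattern rather than using a vanilla transformer.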

  • @DanFrederiksen
    @DanFrederiksen 5 months ago

    A million chances to get it wrong

  • @PandemoniumLord
    @PandemoniumLord 5 months ago +1

    yo! can we feed it the Voynich Manuscript to see if it can translate it?

    • @feynstein1004
      @feynstein1004 5 months ago

      Not sure why you'd want to. It's not like the Manuscript contains the theory of everything or how to beat cancer or something like that.

    • @PandemoniumLord
      @PandemoniumLord 5 months ago

      @@feynstein1004 you don't know why anyone would want to translate the most mysterious document we've ever found? there are images of plants unknown to science! and the text's characters and language have eluded every attempt at translation. Lots of people want to know what it says even if it's a just a cough medicine recipe!

    • @feynstein1004
      @feynstein1004 5 months ago

      @@PandemoniumLord Lol you're too easily swayed, my friend. I could write some random gibberish right now and convince people it's a mysterious document with magic powers.
      Anyway, the manuscript is hundreds of years old. Even if it wasn't a joke, what useful information could it possibly contain that we don't already know by now?

    • @PandemoniumLord
      @PandemoniumLord 5 months ago

      @@feynstein1004 what do you mean too easily swayed? I’d be satisfied even if the text is mundane. You can’t dispute though that successful translation of the voynich manuscript would be a great accomplishment for AI, and could serve as a benchmark for future AI research.

    • @feynstein1004
      @feynstein1004 5 months ago

      @@PandemoniumLord That's exactly what I'm disputing. It's just some random piece of information. I think there are better things to spend your brainpower on 😉

  • @nikhilsultania170
    @nikhilsultania170 5 months ago

    I tried Gemini 1.5 Pro and it was very underwhelming. It was hyped as the new cutting-edge multimodal AI, but it falls behind Claude 3 Opus and GPT-4 in a lot of areas. Plus it is so slow that its 1M context window becomes almost unusable.

  • @ТуанНгуен-ь5п
    @ТуанНгуен-ь5п 5 months ago

    Honestly, learning Kalamang is an impressive feat for an AI. Can't wait for it to be able to talk to aliens (if they teach it) and animals. Imagine a future where you can talk to your dog... and then it talks back to you. It will be like another wife 😆🤣

  • @LaBonitaGraphicsAnimated-ft2nu
    @LaBonitaGraphicsAnimated-ft2nu 5 months ago

    How come AI app creators don't merge 3D game design with AI generators? You would finally have the stability of 3D character design that looks the same from every camera angle and is easy to pose like a 3D model, mixed with the speed of AI-generated scenes and lighting.

  • @sig7813
    @sig7813 5 months ago

    Google might have more tokens and bla bla, but it is just straight-up stupid even compared with GPT-3.5 for simple tasks.

  • @smetljesm2276
    @smetljesm2276 5 months ago

    I foresee that ultimate power over humans will come once a system has an unlimited token window and doesn't treat each instance of communication with us as individual, but is constantly aware of all of our inputs and interactions, and fine-tunes itself on our questions and answers while feeding us single-instance responses. 😅

  • @knightning3521
    @knightning3521 5 months ago

    I can't even remember 10,000,000 tokens, wow. (Kind of a joke, but I wouldn't be surprised. My memory isn't great.)

  • @ariaden
    @ariaden 5 months ago

    1. Wake me up when ByteMamba works.
    2. Is weights&biases officially worse than Azure now?

  • @vicmaestro
    @vicmaestro 5 months ago

    My whole problem with AI right now is the inherent bias to it. It can be the smartest person in the room but it's being forced to mislead. And that isn't something I can get excited about just yet.

  • @123456crapface
    @123456crapface 5 months ago

    If you actually did more research, you would know that quadratic complexity in the attention mechanism has been solved for a few months now.

  • @smetljesm2276
    @smetljesm2276 5 months ago

    That long context window will be a godsend for the opposing sides in Congress when thousands of pages of some bill get pushed to be voted on overnight 😅😅😅

  • @NeoS-zw2ps
    @NeoS-zw2ps 5 months ago

    does something ground breaking
    takes 90 minu-DONT WANT IT

  • @Ulexcool
    @Ulexcool 5 months ago

    This started as a cool 3D channel and now its generic AI articles from Nvidia and Google.
    I have to unsub, I would love the old channel to come back.

    • @fergalhennessy775
      @fergalhennessy775 5 months ago

      He just did Blender like yesterday? It's just that physics simulation stuff and AI are closely related.

  • @thekaxmax
    @thekaxmax 5 months ago

    feed it the Voynich Manuscript

  • @davidwrathall3776
    @davidwrathall3776 5 months ago

    Not available in the UK or EU. 😭

  • @academicpresentations6062
    @academicpresentations6062 5 months ago

    Press F for λ labs.

  • @GraveUypo
    @GraveUypo 5 months ago

    Token limitation is by far the worst thing about the local AIs I have. Most only remember up to 4096 tokens, and about a third of that is taken by setup prompts. I'd be happy with 50k tokens, but I really wish they could just retain memory forever, even if vaguely, like we do.
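The budget juggling described above can be sketched as a simple context trimmer that keeps the system prompt and the most recent turns that still fit. The one-token-per-word count and the window size are simplifying assumptions; real tokenizers count differently:

```python
# Sketch of fitting chat history into a fixed token window, where a
# setup (system) prompt eats part of the budget. Token counting here is
# a naive word split, purely for illustration.

def count_tokens(text: str) -> int:
    return len(text.split())

def fit_history(system_prompt: str, turns: list[str], window: int = 4096) -> list[str]:
    budget = window - count_tokens(system_prompt)
    kept: list[str] = []
    # Walk backwards so the most recent turns survive.
    for turn in reversed(turns):
        cost = count_tokens(turn)
        if cost > budget:
            break  # older turns are simply dropped
        kept.append(turn)
        budget -= cost
    return list(reversed(kept))
```

This is the forgetting the comment complains about: everything older than the window is gone, which is why larger context windows (or summarized long-term memory) matter.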

  • @Peter_Telling
    @Peter_Telling 5 months ago

    Thanks!

  • @mstarsup
    @mstarsup 5 months ago

    5:40 ^^

  • @bobparker1671
    @bobparker1671 5 months ago

    Hopefully we can get SSM based architectures that have roughly linear token scaling during inference to be as big as Gemini and the like. Perhaps a greedy MoE approach with transformers for short range context and SSM or Mamba for long range.