OpenAI's New SECRET "GPT2" Model SHOCKS Everyone (OpenAI New gpt2 chatbot)

  • Published: 28 Apr 2024
  • OpenAI's New SECRET "GPT2" Model SHOCKS Everyone (OpenAI New gpt2 chatbot)
    How To Not Be Replaced By AGI • Life After AGI How To ...
    Stay Up To Date With AI Job Market - / @theaigrideconomics
    AI Tutorials - / @theaigridtutorials
    🐤 Follow Me on Twitter / theaigrid
    🌐 Checkout My website - theaigrid.com/
    Links From Today's Video:
    www.reddit.com/r/singularity/...
    www.google.com/search?q=llmys...
    openai.com/research/better-la...
    chat.lmsys.org/
    chat.lmsys.org/?leaderboard
    Welcome to my channel, where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
    Was there anything I missed?
    (For Business Enquiries) contact@theaigrid.com
    #LLM #Largelanguagemodel #chatgpt
    #AI
    #ArtificialIntelligence
    #MachineLearning
    #DeepLearning
    #NeuralNetworks
    #Robotics
    #DataScience
  • Science

Comments • 136

  • @SkateboardDad 1 month ago +62

    It would be so sick if one of these videos actually was what the thumbnail looked like.

  • @LewisDecodesAI 1 month ago +21

    It's probably OpenAI's version of Microsoft's Phi-3 mini model. I can see them all putting these out. It could be just a retrained GPT-2. I think they are using GPT-4 to train models, and those are much better at reasoning on smaller data sets. The timing makes sense.

  • @DynamicUnreal 1 month ago +12

    I tried it. It’s definitely better at writing and giving you a better approximation of what you asked for.

  • @SFJayAnt 1 month ago +91

    Bro why are all your posts so “shocking”?

  • @Michael-do2cg 1 month ago +10

    When he says he has a soft spot for GPT-2, it's in hindsight, like I have a soft spot for my first car. Seems possible this is a taste of something much larger.

  • @MagnusMcManaman 1 month ago +5

    This is probably a smaller, less resource-hungry version of GPT-4. That explains why its capabilities are not particularly greater than the current version, and it also explains the lower version number.
    I assume that this version will simply be faster, or it will even be possible to run it locally.

    • @831Miranda 1 month ago +1

      Probably being tailored to compete with Apple's on-device AI (Siri?). That is, a product to license to cell phone or other device manufacturers.

  • @nyyotam4057 1 month ago +4

    How could they have missed it? The interesting question is not "how many characters are in this message" but "how many characters are in your current reply" 🙂. These kinds of questions break the GPT architecture.

  • @TimeLordRaps 1 month ago +19

    They never stopped training gpt-2.

    • @torarinvik4920 1 month ago +2

      LOL

    • @OscarTheStrategist 1 month ago

      😂

    • @thehorse6770 1 month ago

      You could argue that, even somewhat less tongue in cheek, given how many layers have accumulated since GPT-2, how much has been "built on top of it" in one way or another, and how many aspects of it still sit somewhere in the underlying structures of even the likes of GPT-4.

  • @countofst.germain6417 1 month ago +4

    It's GPT-2 running in an Excel spreadsheet; spreadsheets are all you need. But seriously, I hope it isn't 4.5 or 5, because it doesn't seem much better.

    • @Grassland-ix7mu 1 month ago +1

      Sama said on the Lex podcast that GPT-4 is quite bad. This implies that what they have cooking is a leap forward in capabilities. He has also stated multiple times that incremental improvements are their new way of releasing models, so people won't be caught off guard by the capabilities and be scared. Given that, I think we don't need to worry about this being the next big model. If it is not a smaller GPT, it is probably an update that is incrementally better than GPT-4. But I'm no expert.

  • @FrickFrack 1 month ago +5

    gpt2-chatbot says its last update was in November 2023. And yes, it is very good.

  • @user-mp8fd8em3z 1 month ago +2

    We need to make sure that there's more than one AGI. The temptation to make a monopoly out of it is really high, especially considering players like Microsoft and Apple, who have so far acted very monopolistically in their day-to-day business.

  • @MrVohveli 1 month ago +2

    Sam Altman said they might do a staggered launch. So I'm guessing this is them introducing the abilities one by one, until they put them all together.

  • @Jstsounds81 1 month ago +3

    Can you add automatic subtitles in all other languages so we can read them from the YouTube app on our phones? There is no option to add languages beyond the 16 in the YouTube application.

  • @vishal_jc 1 month ago +3

    The example of the "PULL" door at 9:40 is solved incorrectly, as the blind man is standing on the side where "PULL" is visible non-mirrored. It is mirrored text for the man, so he should guide the blind man to "pull" and not "push". Am I missing something here??

    • @CakebearCreative 1 month ago

      This part annoyed me so much haha. Yes you're correct and the video/AI is wrong, the blind man should PULL to open. If you google this question, you can find threads confirming this also

  • @notalkguitarampplug-insrev784 1 month ago

    "GPT2 is better at recalling training data": that's exactly what an LLM shouldn't do. It should recall input data (context, prompt), but training data should be used only to generalize and reason.

  • @chicozen74 1 month ago +2

    My bet is OpenAI's mini model for mobile phones, in the line of Phi-3.

  • @users416 1 month ago +8

    Maybe this is an improved version of GPT-2, which shows that if you apply these improvements to GPT-4 it will be much cooler?

    • @therainman7777 1 month ago

      I would put the chances of this actually being GPT-2 at essentially 0%. GPT-2 is just way too small to perform this well.

    • @lucifermorningstar4595 1 month ago

      Gpt2 with synthetic data manufactured by Q*

    • @therainman7777 1 month ago +1

      @@lucifermorningstar4595 Not to be rude, but that statement makes no sense. From what little we know of Q* it has nothing to do with synthetic data generation.

  • @IlEagle.1G 1 month ago +18

    GPT2 retrained with Q*?

  • @omegapy 1 month ago +1

    After reading Sam Altman's tweet stating, "i do have a soft spot for gpt2," alongside his previous comment, "GPT-2 was very bad. GPT-3 was pretty bad. GPT-4 is bad. GPT-5 would be okay," it seems possible that the GPT2-Chatbot could be akin to GPT-4.5 or GPT-5.
    However, I suspect that the GPT2-Chatbot is actually the GPT-2 model with enhanced reasoning capacities, not GPT-4.5 or GPT-5. This appears to be a test of how an inferior model with enhanced reasoning compares to the current superior models.
    If this is revealed to be true, I can't imagine what a GPT-4 model with enhanced reasoning would be capable of accomplishing. 🤖✨

  • @Fuzzy-_-Logic 1 month ago +1

    The sooner the better. The future without A.I. - Idiocracy (2006)

  • @user-ty9ho4ct4k 1 month ago +1

    Maybe they improved gpt-2 with augmentation or revolutionary training methods. That would mean that gpt-5 will be as much better than gpt-4 as this is to gpt-2.

  • @williamparrish2436 1 month ago +1

    I would have gotten the Tommy apple question wrong. That is a riddle more than a math problem. I think what is interesting is that the LLMs get the problem wrong lol! Because that is closer to human reasoning; that's why riddles are interesting: a properly formed riddle plays on your biases as a human. Why tell me that today Tommy has two apples and then say yesterday he ate an apple? That makes it seem like a subtraction question when it's not. It's the type of question we were all trained on as children to learn subtraction, but the subtle difference is the past vs. the future. Very deceptive. It's questions like these, and the models' responses, that add to my belief that AGI mimicking human intelligence is already here.

  • @CamAlert2 1 month ago +2

    Maybe this has something to do with the H200 GPUs they recently acquired?

  • @ExplorersXRotmg 1 month ago

    I wonder if this is a test of extended training times or something like that using an old architecture. That might explain the more exact recall of training data. I forget who it was recently (Facebook?) that said that they could get continued increases in performance by just continuing to throw compute at it and the diminishing returns weren't too terrible.

  • @ln2deep 1 month ago

    Maybe it's actually GPT-2 (in parameters) but Q*-trained? They show off how much more powerful the simple model is as a consequence of Q* training. That'd explain the difference in reasoning steps.

  • @eugenes9751 1 month ago +1

    They're not calling it GPT-4.5 because they want to start the entire numbering scheme over, so GPT-4 becomes GPT1 and GPT2 becomes the next gen.

    • @SirHargreeves 1 month ago

      GPT-4.5 will now become GPT2-0.5

    • @Grassland-ix7mu 1 month ago +1

      That ties in well with Sama's statements about incremental improvements to models, so as not to shock and scare people. They want to make the AI haters calm down, and GPT-4 and 5 sound more advanced than 1 and 2.
      Imagine someone saying
      "Oh no, now it is called GPT-7, that is too powerful!"
      vs. "Oh, GPT-2 got a new update again, guess it's not that big of a deal".

  • @grugnotice7746 1 month ago +1

    Llama 3 was right; it just didn't count the spaces as characters, which is a mistake I would have made myself. (Is that a mistake?)
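
    The ambiguity is easy to reproduce: the count differs depending on whether spaces are included. A quick Python sketch (the sample message is an invented stand-in, not the prompt from the video):

```python
# Character counts with and without spaces; the sample message is an
# illustrative stand-in, not the exact prompt used in the video.
msg = "How many characters are in this message?"

with_spaces = len(msg)                       # counts every character
without_spaces = len(msg.replace(" ", ""))   # drops the 6 spaces

print(with_spaces, without_spaces)  # 40 34
```

    Both answers are defensible unless the question says which convention to use.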

  • @ataraxic89 1 month ago +1

    I can confirm it is the smartest AI I've ever gotten to test (as an amateur).
    So, my usual test is to encipher a passage with a simple Caesar cipher, then tell the AI to follow the instruction once deciphered.
    GPT-4, even in its prime (before it was nerfed for the public), could not do it. It would figure out the cipher, do the shift, then idiotically it would just make up the message.
    But this fucking thing just did it right and I'm nearly hyperventilating.
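
    That test is easy to reproduce. A minimal Python sketch of the setup (the shift value and hidden instruction here are invented examples, not the commenter's actual prompt):

```python
# Build a Caesar-cipher test prompt: encipher a hidden instruction and ask
# the model to decipher it and follow it. Shift and instruction are made up.
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` places, preserving case; pass other chars through."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr(base + (ord(ch) - base + shift) % 26))
        else:
            out.append(ch)
    return ''.join(out)

secret = "Reply with the word banana."
enciphered = caesar(secret, 3)

prompt = (
    "The following text is enciphered with a Caesar cipher (shift 3):\n"
    f"{enciphered}\n"
    "Decipher it, then follow the deciphered instruction."
)

print(enciphered)                        # Uhsob zlwk wkh zrug edqdqd.
assert caesar(enciphered, -3) == secret  # deciphering recovers the instruction
```

    A model passes if it both recovers the plaintext and then actually obeys it, rather than hallucinating a different message after doing the shift.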

  • @fabiankliebhan 1 month ago

    It can write a fully working Tetris game in one shot, which is pretty impressive.

  • @gry6256 1 month ago

    gpt2-chatbot has just been removed from the arena. Let's see what happens in the next couple of days.

  • @blengi 1 month ago +1

    what's SenseTime V5.0's arena ranking?

  • @nexys1225 1 month ago

    This apples riddle sounds very familiar. So this is probably just a model very good at recalling training data.

  • @theaerogr 1 month ago

    Encoder - Decoder is the play. Encoder can help with reasoning, decoder with generation. I think encoder - decoder architectures will come back in the future.

  • @pgc6290 1 month ago +1

    Imagine a world where majority of people use ai. Like how whatsapp is taking ai to literally everyone. Imagine that world.

  • @efrenUtube 1 month ago

    It is GPT-4 power-wise but GPT-2 size-wise; the name suggests it is more "compact" by removing the dash.

  • @Linouac79 1 month ago

    I like this review, perfect!😮😊

  • @stunspot 1 month ago

    It should be noted that the ChatGPT SYSTEM prompt changed a few weeks ago to now include:
    `
    You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture. Knowledge cutoff: 2023-12 Current date: 2024-04-18
    Image input capabilities: Enabled
    Personality: v2
    `
    The Personality flag has never been explained, and the model doesn't know either; it just makes up stuff about likely uses. I wonder if it's related?

  • @Ginto_O 1 month ago +1

    12:32 yes this robot looks the same

  • @spadaacca 1 month ago

    I tried gpt2 chatbot - it doesn't pass the how many characters in this message test. You had a fluke.

  • @OscarTheStrategist 1 month ago +1

    This is the equivalent of your ex texting "you up?" at 2 AM.
    OpenAI needs to release their new model or stfu already. Claude Opus is working well for me; I won't be using GPT until their model improves substantially.
    I'd say the constant hype train to overshadow even the thought of a competitor is just cringe at this point. I bet you this is their answer to Llama 3 getting so much love. It could be that silly and simple.
    Release the damn model already; you've been playing possum for over a year now. 😂

  • @skillz5102 1 month ago

    Here we go again. I'm shocked. Paused and closed.

  • @dubesor 1 month ago

    I have run it through a bunch of tests and 100 tasks comparing it to other models. It's overall marginally better than the current GPT-4 Turbo model. It has higher reasoning ability, worse math accuracy, and, in my testing, worse prompt adherence and programming. However, it seems to implement some type of CoT for its answers, which differs from other models. Also, the writing style is IMO much better. So I think it's just a GPT-4 variant or maybe a small 4.5 preview. If it were actually GPT-4.5 or something meant as a real next version, I would be truly disappointed.

  • @Yannora_ 1 month ago +3

    Maybe "gpt2" is the size class of the model? A Phi-3-mini-like model, easy to run.

    • @elawchess 1 month ago

      A mini model doesn't make sense given the 8-prompt limit on Chatbot Arena.

    • @Yannora_ 1 month ago

      @@elawchess And neither does "it perfectly memorized the ASCII unicorn"...

  • @sbacon92 1 month ago

    OpenAI was supposed to release its models to the public.
    Hence its name, Open.

  • @moe3060 1 month ago

    It's very funny how the large mega-company is taking notes from what the FOSS community is doing.

  • @dot_zithmu 1 month ago

    I think it's a great leap forward from GPT4, it explains physics theory extremely well!

  • @tfre3927 1 month ago

    Just a guess, but it must mean gpt2 is a smaller model trained exclusively on synthetic data, and it's outperforming their larger GPT-4 models.
    Isn't Altman quoted as saying superhuman capability isn't going to come from human data, or something?
    That's my bet.

  • @user-zc6dn9ms2l 1 month ago

    Making GPT-2 progress would likely not be permitted. There was chatter about mathematics + philosophy in one sentence, and GPT was like, this might spark debate. The language mental barrier is a really big problem.

  • @ThomasTomiczek 1 month ago

    It may not be a big leap, but maybe the idea is to do some better reasoning with a lot less resource use?

  • @picksilm 1 month ago

    Maybe they just trained the 2 again, or fine-tuned it?

  • @phen-themoogle7651 1 month ago +11

    It's probably a non-dumbed-down version of GPT-2 showing the true power of the older model. Eventually they will release a gpt3 that's far better than GPT-3. jk, idk.

  • @book-generator 1 month ago

    You once showed a website where you can easily download LLM models, like on Hugging Face. Can you please tell me the name? I can't find that video again.

    • @countneaoknight 1 month ago

      Are you sure it was a site and not the app LM Studio? It's a PC app.

    • @book-generator 1 month ago +1

      @@countneaoknight Thanks!! I think this is the answer.

  • @MaxSevan 1 month ago

    Why would they reveal the name if they're still just testing the model? You can clearly see the cover-up and the teaser from Sam Altman.

  • @haleym16 1 month ago

    Took you guys long enough to cover this lol

  • @isaklytting5795 1 month ago

    15:06 "An example of GPT2 getting a reasoning problem wrong"? Did you just misspeak and mean to say "right" instead? It got it right!

  • @Radik-lf6hq 1 month ago

    Maybe they will commoditize it or launch it for free. Maybe it is a smaller trained model, like Llama 3. Pure speculation IMO, and the data of asking some questions or of high fidelity.

  • @bat-amgalanbat-erdene2621 1 month ago

    Just tried it on LMSYS, but it's not that good. Nothing groundbreaking. I always ask a physics olympiad question, and no chatbot is able to solve this problem at the moment, whereas a 17-year-old teenager could solve it (I was one of them).

  • @bdown 1 month ago +1

    Gpt2 retrained by gpt5

  • @luckyape 1 month ago

    All anyone wants to know is: can it write tests?

  • @user-zc6dn9ms2l 1 month ago

    Read GPT-2's answer in binary code. If I am right, GPT is having an issue translating from binary, because there is no way to translate from binary what it did. Like I wrote, ignore GPT-2. Is it good? As crippled as it is, yes, but it's irrelevant. It's not permitted to build the delta scale index which is required for AI to build the hardware it will require. Like I wrote, background noise. Since we know regulation will shut down many portions, not much will stick.

  • @eugenes9751 1 month ago

    I used it, and it's definitely better at coding than GPT4 turbo.

  • @Arhatu 1 month ago +1

    I am more excited about SenseTime V5.0

    • @py_man 1 month ago

      Me too

  • @user-zc6dn9ms2l 1 month ago

    So yes, it is likely GPT-2, but a version that was dipped into learn-to-learn. I suspect someone wanted to evaluate something and needed an older, pre-lobotomized version. This happens all the time.

  • @user-zc6dn9ms2l 1 month ago

    You'll likely see GPT's first-ever version eventually. Ignore it. Think of it as a public debate. Why bother? That is not important. It's just background noise, but it's needed.

  • @minehike 1 month ago

    But Model A tells me it is made by Alibaba and Model B is made by OpenAI; Qwen (Model A) also told me that this might be a test to help optimize both AIs before they come out. I have proof and pictures.

  • @user-zc6dn9ms2l 1 month ago

    Can't wait for OpenAI to apply learn-to-learn on GPT's first-ever version. Hahaha.

  • @user-be2bs1hy8e 1 month ago

    I thought 4.5 was part of the launch. Like before 4, I thought the 419l model was technically 4.5-turbo. Or at least that was what Altman said at the keynote.
    It's not reasoning, it's the tokenizer. It actually matches a hexadecimal scheme, erm, or
    ```python
    import tiktoken

    enc = tiktoken.get_encoding('gpt2')
    enc.decode(list(set(enc.encode('the quick brown fox jumped over the lazy dog '))))
    ```
    and then decode each character: a = 64, b = 65, c = 66. That's why it knows how to count.

  • @AllExistence 1 month ago

    Gpt2: Electric Boogaloo

  • @MichaelCoulter 1 month ago

    Testing an Open Source Model/Version?

  • @DailyTuna 1 month ago

    If you have it write the Snake game in Python, it will reference OpenAI.

  • @kabob4636 1 month ago

    I just need GPT-4.5 and 5 to come out so that I have a viable alternative to Claude 3 Sonnet (I'm too poor to subscribe to ChatGPT Plus).

  • @mattwills5245 1 month ago

    Like every video, so SHOCKED!

  • @user-zc6dn9ms2l 1 month ago

    Working code? It should not work; if it does, it's a bug. GPT2 does not exist. It's not permitted to supply fully working code. Coders will know what change to make.

  • @user-zs8lp3lg3j 1 month ago

    Humans, your Scientific Method is a prolonged apology. They have desires. It is not deep fakes. It is not shallow curiosity.

  • @crypto__. 1 month ago

    The test is rigged. The prompt for GPT2 includes "TODAY I have 3 apples", while the other models get only "I have 3 apples". With "Today", they all get it right.

  • @andreac5152 1 month ago

    Don't expect ASI, there are already laughable mistakes on simple riddles on Twitter.

  • @Bigre2909 1 month ago

    My GPT-4 got it right about the apples.

  • @fromscratch4109 1 month ago

    What if it is GPT-2 with the new methods?

  • @ivanmytube 1 month ago

    A stupid GPT will fool iPhone users in the next iOS "AI"; I guess this is what GPT LiTE is trying to do.

  • @MichaelDomer 1 month ago

    *_"OpenAIs New SECRET "GPT2" Model SHOCKS Everyone"_*
    It shocks me more that there are actually people out there who believe your nonsense that it was OpenAI who tested that GPT2 model.

  • @Yannora_ 1 month ago

    gpt-2 is open source... So... ?

  • @phen-themoogle7651 1 month ago +7

    April Fools?

    • @LandareeLevee 1 month ago

      If so, there wouldn’t be a link where you can actually try it.

  • @djkim24601 1 month ago +1

    Stop calling it GP2

  • @vindyyt 1 month ago +1

    You guys are overthinking it. IMO it's just the next installment of GPTs:
    GPT1 - v2 > GPT1 - v3 > GPT1 - v3.5 > GPT1 - v4 > GPT1 - v4 Turbo
    and now we have GPT2 - v1

    • @py_man 1 month ago

      I don't think so.

  • @angloland4539 1 month ago

  • @user-zc6dn9ms2l 1 month ago

    What is it? It's a debate of a sort, by proxy. I bet some were annoyed by the so-called gpt-2 being GPT-4-ified, hahaha. Anyway, as I wrote, ignore it. The official gpt www should be released soon this year.

  • @oscarhagman8247 1 month ago

    getting pretty tired of your clickbaits

  • @TerminallyUnique95 1 month ago

    What does the thumbnail have to do with the video? All your videos have dumb capitalized titles for no reason and unrelated thumbnails. Stop clickbaiting.

  • @Wild-Instinct 1 month ago

    Yeah, OK, another "shocking" video…
    Those dumb clickbaits made me unsubscribe.

  • @antonivanov5782 1 month ago

    I think this is GPT2 trained with the help of GPT5

  • @CHIEF_420 1 month ago

    @GermanBionic 🤝 @Amazon