Q-Star LEAKED: Internal Sources Reveal OpenAI Project "Strawberry" (GPT-5?)

  • Published: Sep 6, 2024
  • An article from Reuters has new information about Q-star and project Strawberry. Let's take a look!
    Subscribe to my newsletter for your chance to win the Asus Vivobook Copilot+ PC: gleam.io/H4TdG...
    (North America only)
    Join My Newsletter for Regular AI Updates 👇🏼
    www.matthewber...
    Need AI Consulting? 📈
    forwardfuture.ai/
    My Links 🔗
    👉🏻 Subscribe: / @matthew_berman
    👉🏻 Twitter: / matthewberman
    👉🏻 Discord: / discord
    👉🏻 Patreon: / matthewberman
    👉🏻 Instagram: / matthewberman_ai
    👉🏻 Threads: www.threads.ne...
    👉🏻 LinkedIn: / forward-future-ai
    Media/Sponsorship Inquiries ✅
    bit.ly/44TC45V
    Links:
    www.reuters.co...

Comments • 323

  • @matthew_berman
    @matthew_berman  1 month ago +16

    Do you think this will be GPT-5?
    Subscribe to my newsletter for your chance to win the Asus Vivobook Copilot+ PC: gleam.io/H4TdG/asus-vivobook-copilot-pc
    (North America only)

    • @Azupiru
      @Azupiru 1 month ago +1

      I am really excited for the AI to get better with Cuneiform and the various languages represented in cuneiform. It will frequently mix up signs even though it has the data concerning the correct answer from various sign lists. It's very strange that it hasn't connected them properly. I had to correct it multiple times earlier before it realized that it was wrong.

    • @LiquidAIWater
      @LiquidAIWater 1 month ago

      No. If it is a totally different approach than transformers, then they should stay just barely ahead of the competition with iterations before rolling it out, since all angles have to be addressed, one of which is monetization. Otherwise, they would just be copied by their competitors.

    • @laurennichelson7913
      @laurennichelson7913 1 month ago

      I'm so excited for this giveaway. My grandma didn't see I had set my laptop on the hood of her car for her to grab (I was moving out of an abusive situation), and it flew off her car when she was on the highway, so I literally am a tech worker with no computer lmao

    • @karenrobertsdottir4101
      @karenrobertsdottir4101 1 month ago

      This reminds me of an architecture idea I had for how to learn via synthesis at training time:
      1) Use a multi-token prediction model, akin to that which Meta released recently, simultaneously processing the states of the current input tokens. Predict the upcoming tokens.
      2) Using the current state, in a separate batch starting from the same point, first predict a custom "deduction" token that triggers logical deduction of additional short facts derived from the info that's been provided in the current context. (To add this behavior to unsupervised training, you'll need to start with unsupervised training, then do a finetune that teaches how to respond to a deduction token, then go back to unsupervised training)
      3) Generate (in simultaneous batch) numerous short multi-token deductions - again, using the current state, so you're not having to recalculate the state (like you have to do with normal synthesis). This should be very fast.
      4) To ensure that all deductions are sufficiently different from each other rather than all being the same thing, slightly "repel" their hidden states to increase their cosine distances from each other.
      5) Now you have a big batch of deductions of upcoming tokens, as well as the non-deduction prediction of the real upcoming tokens. Establish a gradient off of the batch and backpropagate.
      The beautiful part, IMHO, is that not only should it learn - quickly - from its (quickly generated) deductions about the world, but it should also learn to get better at deducing, because it's also learning how best to respond to the deduction token. I.e., the deduction process should self-reinforce.
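
A minimal sketch of step 4's "repel" term, assuming the deduction hidden states are rows of a matrix (NumPy only; the function name, shapes, and the mean-similarity penalty are my illustrative choices, not anything specified in the comment):

```python
import numpy as np

def diversity_penalty(hidden_states: np.ndarray) -> float:
    """Mean pairwise cosine similarity among deduction hidden states.

    Minimizing this value "repels" the states, increasing their cosine
    distances from each other so the deductions stay distinct (step 4).
    hidden_states has shape (num_deductions, hidden_dim).
    """
    h = hidden_states / np.linalg.norm(hidden_states, axis=1, keepdims=True)
    sims = h @ h.T  # pairwise cosine similarities
    n = len(h)
    return float(sims[~np.eye(n, dtype=bool)].mean())  # off-diagonal mean

# Identical deductions are maximally penalized; orthogonal ones are not.
assert diversity_penalty(np.ones((3, 4))) > diversity_penalty(np.eye(3, 4))
```

In training, this scalar would be weighted into the loss so backprop pushes the deduction states apart.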

    • @handsanitizer2457
      @handsanitizer2457 1 month ago

      Until I see anything I'm going to assume it's gpt trying to stay in the news cycle. I.e. fake leak

  • @Douchebagus
    @Douchebagus 1 month ago +247

    Sounds like OpenAI is feverishly trying to generate hype while Claude 3.5 Sonnet is slapping their asses.

    • @arxmechanica-robotics
      @arxmechanica-robotics 1 month ago +5

      Very true.

    • @dot1298
      @dot1298 1 month ago +1

      but OpenAI has one advantage: users don't need phones to access ChatGPT/4o, whereas you need a phone to access Claude 3.5 at Anthropic…

    • @dot1298
      @dot1298 1 month ago +1

      for me this is an absolute showstopper for C3.5, as lmsys doesn't support threads

    • @HIIIBEAR
      @HIIIBEAR 1 month ago +3

      But Q* is a peer-reviewed paper. There is zero chance OpenAI didn't apply the tech

    • @HIIIBEAR
      @HIIIBEAR 1 month ago +1

      @@arxmechanica-robotics Not at all true. Q* is detailed in a peer-reviewed paper

  • @ploppyploppy
    @ploppyploppy 1 month ago +69

    Is this the new thing?
    YouTube title: Leaked
    Seems to be a lot of 'leaks'. So many in fact that they have become meaningless.

    • @misterfamilyguy
      @misterfamilyguy 1 month ago +1

      Yeah, leaks are the new content, right? Just like comments and likes, pretty soon the algorithm will be promoting leaks.

    • @yak-machining
      @yak-machining 1 month ago +1

      Views views views, YouTubers want views

    • @misterfamilyguy
      @misterfamilyguy 1 month ago

      @@yak-machining I don't know, are you sure? Maybe watch it one more time?

    • @HIIIBEAR
      @HIIIBEAR 1 month ago +1

      Not at all. It's only meaningless if you're anti-science, because Q* is a peer-reviewed concept that we know OpenAI is adapting

    • @mpvincent7
      @mpvincent7 1 month ago

      Hyper-meaningful but I agree, lots of background noise and lots of frauds...

  • @fynnjackson2298
    @fynnjackson2298 1 month ago +6

    All talk - no shipping from OpenAI.
    Matt, you are pumping out top-notch vids, your efforts are much appreciated.

  • @fuzzylogicq
    @fuzzylogicq 1 month ago +12

    So basically "make some noise about us again because we are not the only big shots in the game", plus all that AGI talk internally and redefining things without showing any new results. I'll believe it when I see it (like the omni voice, etc.)

  • @herteert299
    @herteert299 1 month ago +9

    I asked ChatGPT the question you were talking about at 6:50 and its answer is correct. Look at the answer. My question: Imagine that you are at the north pole and I ask you to walk 200m in one direction, then you must turn left and walk 200m. Question: have you crossed your starting point?
    Chatgpt answer: "No, you have not crossed your starting point. Here's why:
    1. You start at point A at the North Pole.
    2. You walk 200 meters in any direction (say south).
    3. Then you turn left (let's say this means you start walking east) and walk another 200 meters.
    At the North Pole, any initial direction is effectively a southward movement. So after walking south, when you turn left, you are walking parallel to a latitude (parallel circle).
    To better understand, let's consider some specific cases:
    - **If you are at the exact geographic North Pole**: All initial direction is south. After turning left, you walk 200 meters east or west around the pole, and you have not crossed your starting point.
    - **If you are not exactly at the North Pole, but close to it**: After 200 meters towards the south, you are at a certain latitude. Turning left (east) and walking 200 meters will take you to another location along this latitude. You do not cross your starting point.
    In these cases, your walk does not take you to your starting point, because the directions change depending on your initial position and the curvature of the Earth."
    Matthew, did you test the question?

    • @lucifermorningstar4595
      @lucifermorningstar4595 1 month ago +1

      I don't know why, but Yann LeCun has the ability to always be wrong

    • @paulmichaelfreedman8334
      @paulmichaelfreedman8334 1 month ago

      The better puzzle to give is:
      1. You start at the north pole. Walk 200 meters south.
      2. Then walk 200 meters east.
      3. Finally, walk 200 meters north.
      Where do you end up?
      Now, starting anywhere but at the north pole, you'd end up somewhere other than where you started. That's a good nut to crack for an AI.
      Or:
      Imagine a house. Each of the 4 walls of the house faces south. A bear walks by; what color is it?
      These are examples of puzzles for a 12-year-old, which seems fitting for the current state of AI in general.
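
The first puzzle can be checked with a couple of lines of spherical geometry; a quick sketch (pure Python; the function name and use of Earth's mean radius are my own illustrative choices):

```python
R = 6371000.0  # Earth's mean radius in meters

def walk_from_pole(south_m: float, east_m: float, north_m: float) -> float:
    """Start at the north pole; return the final colatitude in radians."""
    colat = south_m / R   # walking south increases colatitude
    # Walking east follows a latitude circle (radius R*sin(colat)):
    # longitude changes, but colatitude does not.
    colat -= north_m / R  # walking north decreases colatitude
    return colat

# 200 m south, 200 m east, 200 m north: colatitude returns to 0,
# i.e. you end up exactly back at the north pole.
final = walk_from_pole(200, 200, 200)
assert abs(final) < 1e-15
```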

    • @herteert299
      @herteert299 1 month ago +2

      @@paulmichaelfreedman8334 Ok... This is the answer ChatGPT gave me: "The house described in the riddle has all four walls facing south, which implies that the house is located at the North Pole. At the North Pole, the only direction away from the house is south. The bear in this location would be a polar bear, which is white. Therefore, the bear is white." 😄 Amazing!

    • @wurstelei1356
      @wurstelei1356 1 month ago

      @@herteert299 What if the house has only windows on the other 3 sides and stands in Texas?

  • @mrbrent62
    @mrbrent62 1 month ago +23

    Saying you are the first comment and actually being fourth is like asking ChatGPT to tell you how many words are in its answer.

    • @panzerofthelake4460
      @panzerofthelake4460 1 month ago +1

      bro why would you even care!!!

    • @matthew_berman
      @matthew_berman  1 month ago +10

      even humans hallucinate

    • @ryzikx
      @ryzikx 1 month ago +3

      first

    • @kniiz
      @kniiz 1 month ago +1

      🤣

    • @vaisakhkm783
      @vaisakhkm783 1 month ago

      Not the same, because there is a delay on YT's part, so the 4th person couldn't see the other 3 comments... and that's not hallucination...

  • @ringpolitiet
    @ringpolitiet 1 month ago +26

    This video contains zero leaks or revelations.

    • @qwazy01
      @qwazy01 1 month ago +1

      Forgot to preface with "Spoiler Alert"

  • @BlueBearOne
    @BlueBearOne 1 month ago +1

    The way that reads to me suggests there hasn't been a breakthrough with the LLM itself, but maybe an ancillary process that processes the information? The way I see the brain is that it isn't one organ. Unconventional, but it fits. Sections of it operate in a chorus, an orchestrated dance, to deliver significantly more than the sum of its parts. I've often wondered if this is the way to true AGI: an LLM for each of these "parts", and each LLM would have its own ancillary process mimicking the brain. Of course, that is so much easier said than done now, isn't it?

  • @arxmechanica-robotics
    @arxmechanica-robotics 1 month ago +7

    For AI to give you accurate information about how the physical world operates, it will need a physical robot body to experience context.

    • @NakedSageAstrology
      @NakedSageAstrology 1 month ago +2

      Or just be trained on first person view cameras that people can wear all day, and then submit at night.

    • @arxmechanica-robotics
      @arxmechanica-robotics 1 month ago +2

      @NakedSageAstrology Yes, however interactivity with the physical surroundings and objects, using arms etc., will increase this far more than cameras alone.

    • @NakedSageAstrology
      @NakedSageAstrology 1 month ago +3

      @@arxmechanica-robotics
      Perhaps you are right. Maybe we need to go deeper, direct access to the nervous system and the information flow through it.

    • @arxmechanica-robotics
      @arxmechanica-robotics 1 month ago

      @@NakedSageAstrology it's at least the approach we are taking. Time will tell.

    • @tvolk131
      @tvolk131 1 month ago +1

      Or a fairly accurate simulation of the real world.
      I'm not sure how computationally feasible this would be, but I find it interesting that it might be possible for an AI to have a (relatively) accurate internal model of the physical world without having ever interacted with it directly.

  • @seanmurphy6481
    @seanmurphy6481 1 month ago +5

    When OpenAI does announce their next model, do you think they'll actually release it at this rate? Sora got announced; it was never released. GPT-4o voice mode got announced and it was never released. I'm beginning to wonder if anything will actually come out of OpenAI.

    • @mikesawyer1336
      @mikesawyer1336 1 month ago +1

      People keep saying their voice model wasn't released, but I have it on my Android phone? Are people talking about the desktop version?

    • @BlayneOliver
      @BlayneOliver 1 month ago

      😂😂 well said

    • @1sava
      @1sava 1 month ago +2

      What are you talking about? OpenAI has been shipping things on time since their release. The hiccups with GPT-4o are because they rushed its announcement to steal the spotlight from Google's I/O event. Sora was also announced for the same reason.
      Also, they're factoring in the US election in November. Releasing GPT-5 and Sora prior to that might be detrimental.
      GPT-5 is done training and safety testing, I believe. The biggest hurdle to its release is probably the huge computational infrastructure it requires, which is probably why Microsoft is now turning to Oracle to help them with OpenAI.

    • @1sava
      @1sava 1 month ago +1

      @@mikesawyer1336 The voice mode you have access to right now is the old voice mode released last year. People are talking about the new voice mode with frame-by-frame video monitoring and voice/narration effects.

    • @BlayneOliver
      @BlayneOliver 1 month ago

      @@1sava hoorah!

  • @Avman20
    @Avman20 1 month ago +2

    As Matthew mentioned, we're definitely seeing a slowing of the cadence of significant releases from OpenAI (and other frontier model companies). This likely has mostly to do with safety. As these models get ever more complex, the developers' concern must be shifting towards understanding emergent capabilities. When you give a machine the ability to internally reflect on problems from multiple angles by recombining trained knowledge, I think any possibility of understandability goes right out the window. The question then is: how do you establish effective guardrails on such a system?

    • @furycorp
      @furycorp 1 month ago

      It has nothing to do with safety; there is plenty of public research and models to review. The tech is what it is.

  • @justindressler5992
    @justindressler5992 1 month ago +1

    I have been thinking about the multi-shot behaviour of LLMs lately. When asking a model to correct a mistake, it feels like it always falls back to sentence-finishing or fill-in-the-blanks behaviour. I started to think the problem is that the model has no noise: it looks at the previous examples, but it has already come up with the most probable answer. Maybe during multi-shot operations there is a need to add intentional noise in the areas where the errors are, kind of like how stable diffusion models work: start with a noisy signal, then iterate toward a coherent answer.
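
The noise-injection idea above can be sketched by perturbing the previous attempt's token scores only where an error was flagged, so a revision can escape the old most-probable answer (NumPy sketch; `logits`, `error_mask`, and the noise scale are my illustrative assumptions, not anything from the comment):

```python
import numpy as np

def noisy_resample_logits(logits, error_mask, scale=1.0, seed=0):
    """Add Gaussian noise to token logits at flagged error positions.

    logits:     (seq_len, vocab_size) scores from the previous attempt
    error_mask: (seq_len,) bool array, True where a mistake was found
    Positions outside the mask keep their original deterministic scores,
    so only the erroneous span gets "re-noised", diffusion-style.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, scale, size=logits.shape)
    return logits + noise * error_mask[:, None]

logits = np.zeros((4, 10))
mask = np.array([False, True, True, False])
out = noisy_resample_logits(logits, mask)
assert np.all(out[0] == 0) and np.all(out[3] == 0)  # untouched positions
assert np.any(out[1] != 0)                          # perturbed positions
```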

  • @billywhite1403
    @billywhite1403 1 month ago +1

    I'm a little confused; maybe somebody could help me understand. It seems all of the LLMs can already do a bit of reasoning, the leading ones more than the others. Many of them show the kind of initiative that suggests understanding of intention, which itself is contiguous or synonymous with planning. So I don't see what's different from what we already have, both in approach and output, except possibly increased resources devoted to planning, memory, maybe incorporating agents. But I don't see the sea change; does anybody else?

  • @ryzikx
    @ryzikx 1 month ago +5

    strawberry reasoning 🤤

  • @MilesBellas
    @MilesBellas 1 month ago +2

    Topic idea : Ollama integration with Stable Diffusion, Comfyui , Maya/Blender, Kirita and Griptape nodes ?

  • @drlordbasil
    @drlordbasil 1 month ago +4

    Can't wait to touch this model with my keys.

  • @mesapysch
    @mesapysch 1 month ago +5

    As an annotator, this is great news. LLM are horrible with logic.

  • @gab1159
    @gab1159 1 month ago +10

    Honestly, OpenAI is really underwhelming. They're now full-on into hype mode as they're losing market share to Anthropic. They've fallen into the typical Silicon Valley trap of overhyping and underdelivering. I'm getting quite annoyed with them. Hearing them talk about what they're doing, you'd expect we'd already have AGI now, when in reality they've made close to no progress since GPT-4 (4o is worse..)

    • @4.0.4
      @4.0.4 1 month ago +1

      4o, worse? At what?

    • @michaelwoodby5261
      @michaelwoodby5261 1 month ago +2

      Do we hate OpenAI for shipping a product for money instead of focusing on research, or for making strides in the tech but not giving us access? Those are mutually exclusive.

    • @4.0.4
      @4.0.4 1 month ago

      @@michaelwoodby5261 for not being *_open_* AI, personally.

  • @JG27Korny
    @JG27Korny 1 month ago

    Meta has terminology of type 1 and type 2 prompting. "Type 2 prompting" refers to techniques that go beyond simple input-output mappings, delving into more complex, multi-step problem-solving or task-completion approaches with prompts and/or agents.
    The next step is to use all of your context window to ride the wave. The AI is at its best when you are deep into the conversation, and there LLMs with longer context windows have their edge. Nobody is testing the models where it matters. The reason is that towards the end of the context window the AI already has a world model built from the history of your interaction, and that is unique.
    From a practical standpoint, make and continue long conversations. Use type 2 prompting, which can be agents or a single prompt with reasoning.
    For that reason Perplexity works very well, as it uses search engine results as a world model.
    And the open-source Perplexica with type 2 prompting, even with a small local LLM, competes with the heavyweights: Perplexity Copilot, Bing Copilot

  • @oiuhwoechwe
    @oiuhwoechwe 1 month ago +2

    Strawberry, out of the Cheech and Chong films. lol

    • @ColinTimmins
      @ColinTimmins 1 month ago

      My mom and dad loved their movies when my brother showed them all those years ago! That was a good memory for me. =]

  • @halneufmille
    @halneufmille 26 days ago

    From an outsider's perspective, this Q-star / strawberry thing sounds like everything LLMs wish they were. Sort of saying to investors "We know Chat GPT still hallucinates and in no way justifies the billions we put in, but there's this top secret project right around the corner that will solve everything so just keep giving us money."

  • @PeteBuildsStuff
    @PeteBuildsStuff 1 month ago

    Thank you, Berman. Question: are you generally afraid or generally bullish when it comes to this tech? It's my theory that the more informed a person becomes, the less they say "yeah, but it's just weird, or just a little trippy", or some other form of being afraid without knowing why. What do you think?

  • @ScottAshmead
    @ScottAshmead 1 month ago

    To think local models are not going to send anything back to a corp is like saying your phone will not send any information to Google/Apple... we can hope, though

  • @zhanezar
    @zhanezar 1 month ago +1

    OpenAI needs to show something really amazing; Claude 3.5 Opus is probably there on standby

    • @1sava
      @1sava 1 month ago

      OpenAI has nothing to worry about. Anthropic is barely catching up to GPT-4, a model that was done training 2 years ago. Also, although Anthropic's Artifacts is a good feature, their overall interface doesn't measure up to OpenAI's

    • @Pawnsappsee
      @Pawnsappsee 1 month ago

      Dude, 3.5 Sonnet is far superior to OpenAI's GPT-4 version.

  • @4lrk
    @4lrk 5 days ago

    Level 3 is basically Tony Stark's Jarvis.

  • @aim2helpU
    @aim2helpU 1 month ago

    I worked in this field 30 years ago and solved most of the problems you're talking about. I walked away because I was worried about who would use it. I agree with your thoughts about keeping it local. I still think the world isn't really ready for this. Too much medieval thinking on the part of our politicians.

  • @gatesv1326
    @gatesv1326 1 month ago

    I would think that the Level 4 OpenAI is referring to should be what we expect of AGI. Level 5, to me, is ASI.

  • @denijane89
    @denijane89 1 month ago

    I really don't understand the struggle for AGI. I like Claude: it's useful, it helps you do things, and that's the idea of AI, to get an AI assistant that will save you time. The type of fully independent AGI they describe is a really questionable goal. It's like having a kid and expecting it to solve all your problems. That never happens.

  • @mpvincent7
    @mpvincent7 1 month ago

    Can't wait for it to be released! Thanks for keeping us up-to-date!!!

  • @jaymata1218
    @jaymata1218 1 month ago

    I suspect at some point, OpenAI will no longer be open to the public. We'll continue to get basic models, but the advanced versions will be deemed too dangerous for the masses. We might enter an era where the 1% of the population has access to advanced models that solidify their position and advance so rapidly that open source can't keep up.

  • @questionableuserr
    @questionableuserr 1 month ago

    Perplexity AI already has capabilities just like this, except for the multi-step aspect.
    You should do a video testing it out

  • @user-ty9ho4ct4k
    @user-ty9ho4ct4k 1 month ago

    The scale they released for gauging progress toward AGI doesn't make any mention of AGI. Which level is AGI?

  • @Hailmich10
    @Hailmich10 1 month ago

    Matthew, thanks for the video, and I appreciate your comments on potential math problems. Is there any hard evidence that OpenAI or others have figured out how their models currently perform on various math problems? This capability seems easy to measure (is the model performing at the college level, master's level, PhD level, etc.?). IMO, if we are still at the high school level in terms of math ability, we are some time away from AGI/ASI, and math ability will be an important predictor of reasoning ability and of where we are on the trajectory towards AGI/ASI.

  • @MichaelChanslor
    @MichaelChanslor 1 month ago

    5:22 - Thank you!

  • @user-wr4yl7tx3w
    @user-wr4yl7tx3w 1 month ago +1

    Sounds like aspiration still; the same thing everyone else is working on as well.

  • @brianlink391
    @brianlink391 1 month ago +1

    I literally wrote a paper on how to reutilize existing data, but I used AI to help out. Now I'm paranoid that someone intercepted it and is using it too. I guess it's okay, but still... how ironic is that?

  • @dmitriitodorov968
    @dmitriitodorov968 1 month ago

    I'm pretty sure that at some point we come to a point where technically AGI exists, but running it is much more expensive than using a human. I mean not because it's not optimized, but because of fundamental constraints and energy costs

  • @nick1f
    @nick1f 1 month ago

    Very exciting changes. I can't wait to test ChatGPT 5, when it is released. And who knows how far AI will advance in the next five years...

  • @marcusk7855
    @marcusk7855 1 month ago

    When the logic and reasoning are equal to or better than human reasoning, we will have AGI. And there is no reason a computer won't far outdo human reasoning with the right algorithm.

  • @cagdasucar3932
    @cagdasucar3932 1 month ago

    I thought Q* was supposed to be the application of AlphaGo's learning algorithm to ChatGPT. I think it's basically controlled chain of thought with Monte Carlo tree search. Am I wrong?

    • @andrewsilber
      @andrewsilber 1 month ago +1

      That's certainly what it sounds like, though people's interpretations seem to suggest post-training RL to tease out more reasoning capability. But from my perspective, it seems clear that what needs to be done is to build a System 2 agent using DQL which leverages LLMs as just one tool among many. As far as I am aware, current LLM inference is not recurrent or arbitrarily recursive; it's just a straight shot through the decoder, which puts an upper bound on the amount of "work" it can get done. I believe OpenAI did mention, however, that whatever they were doing would be capable of protracted research projects, which certainly points in that direction.
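
The "controlled chain of thought with Monte Carlo tree search" reading boils down to a rule for which reasoning branch to expand next; a minimal sketch of the UCB1 selection rule at the core of MCTS (plain Python; the branch values and exploration constant are my illustrative assumptions):

```python
import math

def ucb1(total_value: float, visits: int, parent_visits: int,
         c: float = 1.41) -> float:
    """UCB1 score: exploit high-value branches, with a bonus for rarely-tried ones."""
    if visits == 0:
        return float("inf")  # unexplored branches are always tried first
    exploit = total_value / visits
    explore = c * math.sqrt(math.log(parent_visits) / visits)
    return exploit + explore

# Three candidate reasoning branches under a root with 30 total visits:
well_tried = ucb1(8.0, 10, 30)  # decent average, heavily explored
promising = ucb1(4.5, 5, 30)    # better average, less explored
untried = ucb1(0.0, 0, 30)      # never expanded

assert untried == float("inf")  # selection always reaches new branches
assert promising > well_tried   # the exploration bonus shifts the choice
```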

  • @colonistjester1552
    @colonistjester1552 1 month ago

    I have found Sonnet 3.5 and all Claude services don't serve well at scale

  • @marcc0183
    @marcc0183 1 month ago

    Hey Matthew, a question that has nothing to do with this video: a long time ago you said that you were going to clone yourself, but I haven't found anything... would it be possible to do it with the new technologies and models that exist now?

  • @dennisg967
    @dennisg967 1 month ago

    Thanks for the video. I just had the thought that maybe if you keep asking a model the same questions, and you do it in public, those questions are actually leaked into the training set of a model, and the model responds correctly the next time simply because it has already seen this problem along with the solution. Try rephrasing your problems each time you pose it to a model. See if it changes the model's answer.

  • @ishanparihar4032
    @ishanparihar4032 1 month ago

    What I think this could be is different models, like an MoE, at each step of the thinking process, working in a sequential thinking-like process where each step is worked on by a single model in the MoE.

  • @oguretsagressive
    @oguretsagressive 1 month ago

    12:10 this looks severely undercooked. Level 3 can be achieved before level 2. Level 5 is claimed to be lower than AGI, but a single human is a GI that cannot perform the work of an entire organization, therefore is below level 5.

  • @rghughes
    @rghughes 1 month ago

    I don't think it will be level 2; it'll likely be "2-like" or a 'proto-2'. Something to keep the investors investing long enough for them to _actually_ reach 'proper-level-2'. Just my 2 cents.

  • @OviDB
    @OviDB 1 month ago

    I wonder if pretraining is where going over the same data over and over again happens, until the transformer learns the underlying relations.

  • @WJ1043
    @WJ1043 1 month ago

    In an agentic system of LLMs, why not have a mode where one of its LLMs is trained by the others so that the system on the whole learns?

  • @camelCased
    @camelCased 1 month ago

    Right, it is important to think in concepts, not words or tokens (especially not just spitting out the next most likely word/token based on statistics alone). Also, humans learn fast because they are aware of their weak spots and can initiate self-learning. Children ask questions about things they want to know, and they receive not only textual but also sensory information. And, of course, the feedback loop is important for self-critique and validation, to know what I know and what I don't know. Maybe one day we'll indeed have constantly-learning AI algorithms that become smarter based on their experience with millions of users and real-world sensory input. Sensory input should always have higher priority than any textual information, because the physical world does not lie or come up with different interpretations; it just is what it is.

  • @hotlineoperator
    @hotlineoperator 1 month ago

    OpenAI's roadmap to AGI says the next model will be a level 2 product, on a scale of 1 to 5. The current GPT is a level 1 product. It's expected to be ready at the end of this year, and we'll see when it is released to the public.

  • @misterfamilyguy
    @misterfamilyguy 1 month ago

    I'm excited for this. I really believe that this will improve everyone's lives immensely. I just don't know all of the ways that it will.

  • @leomaxwell972
    @leomaxwell972 1 month ago

    Digging through my registry, AI be installing some weird stuff, like TrollTech, and Project QT, any chance it's related? xD

  • @jasonshere
    @jasonshere 1 month ago

    OpenAI's 5 Level Scale System seems a bit weird. It doesn't appear to be a linear scale and there appears to be a lot of crossover so that an AI can achieve different parts of multiple levels.

  • @robertheinrich2994
    @robertheinrich2994 1 month ago +1

    Is it like with the laptop? Oh, it's a new AI, but only for one area of the world.
    Sorry, still salty about you offering an email newsletter and a way to win a laptop, just to learn: no, you are neither allowed to participate in winning a laptop nor are you allowed to actually subscribe to the newsletter.

  • @zubbyemodi6356
    @zubbyemodi6356 1 month ago

    I actually think OAI has developed, and is ready to deploy, a couple of models that would blow our minds, and they are really just waiting to be in step with the acceptable rate of AI evolution, so it doesn't scare people too much

  • @scottwatschke4192
    @scottwatschke4192 1 month ago

    I think we live in very interesting times, technology-wise.

  • @gamersgabangest3179
    @gamersgabangest3179 1 month ago

    I am still waiting for the chatGPT voice thing.

  • @ps3301
    @ps3301 1 month ago

    We should call it raspberry

  • @Webnotized227
    @Webnotized227 1 month ago

    Seems like Anthropic forgot they had an Android app to release until they watched your video yesterday.

  • @jarnMod
    @jarnMod 1 month ago

    I'm an investor by trade, pardon the pun, but I have no idea what the energy consumption of AI development will be.

  • @yuval_kahan
    @yuval_kahan 1 month ago

    Was waiting for your video about it because I wasn't sure that Q-star leak was real, thank you

  • @rudomeister
    @rudomeister 1 month ago

    This depends on whether this "Strawberry" can solve issues in my Keras prediction models, and not just pretend to.

  • @marco114
    @marco114 1 month ago

    OpenAI needs to release what they teased us with.

  • @Yipper64
    @Yipper64 1 month ago

    12:35 I believe we will be forever stuck on level 3. Unless we start with something entirely different.

  • @jayv_tech
    @jayv_tech 1 month ago

    The first thing I learned: how to pronounce Reuters 😆

  • @szghasem
    @szghasem 1 month ago

    Sounds like the corresponding debates in the '90s on AI as it was understood back then. Is it time for a radically new idea to get us to AGI, using LLMs...?

  • @jim-i-am
    @jim-i-am 1 month ago

    What you describe around 3:10 sounds a LOT like grokked models. i.e. overfitting models during training is a sign of memorization...then continue training until underpinning "models" emerge. (wow...that made it sound really easy to get that convergence. It's not). Just remember: 42 :D @code4AI has some really good videos on the subject if you're interested in diving down the rabbit hole.

  • @MikeWoot65
    @MikeWoot65 1 month ago

    It's clear Reasoning will be the thing that gets us to whatever AGI is

  • @MariuszKen
    @MariuszKen 1 month ago

    yea and lvl 7 knows like a God. lvl 10 knows so much that can create new universes... every ai knows it

  • @hypersonicmonkeybrains3418
    @hypersonicmonkeybrains3418 1 month ago

    The general public, the common man on the street will never even get to use a Level 3 AI, because if they did the first thing they would use it for is writing a prompt that ordered the AI agent to make them lots of money, and if everyone did that, then it would crash the financial system and many other things that governments don't want peasants being able to do. If we are to see Level 3 AI then it will be heavily restricted and lobotomized such that it can't be used to make money, and for that reason the public will never be given access to AGI. Don't hope for it or wish for it, it's never going to happen.

  • @attilazimler1614
    @attilazimler1614 A month ago +1

    I agree, the current version of ChatGPT is really dumb - and it got there from being slightly less dumb before :D

  • @underbelly69
    @underbelly69 A month ago

    Why not let OpenAI access your model's agentic findings? It benefits everybody down the line: evil can be audited, the novel can be shared.

  • @AINMEisONE
    @AINMEisONE A month ago

    Think Strawberry Fields Forever. Isn't it revealing, the names being used in AI... going back to the LSD era?

  • @samueltucker8473
    @samueltucker8473 A month ago

    Beware of the rush to market missing unknown variables, the added complexity of inter-compartmental effects, and programming that sometimes taps the pendulum at the wrong moment.

  • @mrinalraj4801
    @mrinalraj4801 A month ago

    Your video hypnotises me ☺️☺️

  • @stableArtAI
    @stableArtAI A month ago

    Just wondering why one thinks reporting on leaked information is acceptable. By the nature of information being "leaked," everyone pretty much knows it is information the owner has not consented to or given permission for being released to the public.
    So again you have data being scraped from a company, or people who knowingly obtained the data without permission and then often sold it as breaking news, a story about something the owner was not willing or ready to release to the public, as information or a product, for whatever reason they may feel it is not ready to share or disclose.
    The argument over permission, data scraping, and trained machines (or people) is not an easy subject to cover, especially if one leaves out all the types of cases in which it can crop up. It is not just in the universe of AI.

  • @OnigoroshiZero
    @OnigoroshiZero A month ago

    With OpenAI's scale, their Level 5 is actually ASI and their Level 3 is AGI, and we will have it by the summer of 2025 at the latest.

    • @executivelifehacks6747
      @executivelifehacks6747 A month ago

      Yes. They have a large vested interest in delaying when their AI is classed as AGI. An exercise in narrative control.

  • @hypersonicmonkeybrains3418
    @hypersonicmonkeybrains3418 A month ago

    GPT-5 doing deep research on wikipedia = AGI......

  • @szebike
    @szebike A month ago

    An OpenAI "Level 3" bot running locally sounds dangerous in my opinion; if it can reason and is smart, I don't know if it's a good idea... That being said, having a giant Level 3 AI in the hands of OpenAI, Microsoft, or Meta doesn't sound like fun either.

  • @mickelodiansurname9578
    @mickelodiansurname9578 A month ago

    Can someone correct me if I got this wrong... My understanding is this is an alteration to the standard methodology of fine-tuning, right? And they own the algorithms, so they do the fine-tuning compute. Okay, fine... got it... Now, will they allow me to fine-tune GPT-4o with my own dataset? Or is it just a one-off fine-tuning of their choice, take it or leave it? I'm going to assume this is an alteration to the algorithms used in fine-tuning, which until now have mostly been the regular Adam and AdamW (for transformers) and gradient descent... So this is something new? Have I got this right?
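For reference on the optimizers the comment names: AdamW differs from Adam-with-L2 only in where the weight-decay term enters the update. A minimal numpy sketch of one AdamW step (illustrative only; nothing to do with OpenAI's actual fine-tuning code):

```python
import numpy as np

def adamw_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9,
               beta2=0.999, eps=1e-8, wd=0.01):
    """One AdamW step: weight decay is decoupled from the gradient."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)               # bias-corrected first moment
    v_hat = v / (1 - beta2**t)               # bias-corrected second moment
    theta = theta - lr * (m_hat / (np.sqrt(v_hat) + eps) + wd * theta)
    return theta, m, v

# Toy use: take a few steps on f(theta) = theta^2.
theta = np.array([1.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 201):
    grad = 2 * theta                         # df/dtheta
    theta, m, v = adamw_step(theta, grad, m, v, t)
```

In plain Adam with L2 regularization, `wd * theta` would instead be added to `grad` before the moment updates; decoupling it from the adaptive scaling is the whole point of AdamW.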

  • @TheBann90
    @TheBann90 A month ago

    So it's not really leaked, just another rumor of a product in development that is still 16-18 months away. And by then it will already have 10 competitors... OpenAI have lost it...

  • @ronaldpokatiloff5704
    @ronaldpokatiloff5704 A month ago

    our universe is total AI

  • @MyLittleBitOfEverything
    @MyLittleBitOfEverything A month ago

    I said take as much time as you need. So the agent just told me it needs 7.5 million years to answer the Ultimate Question Of Life, The Universe, And Everything.

  • @maudentable
    @maudentable A month ago

    OpenAI's already testing GPT-5. Haven't you noticed GPT-4o is occasionally super-smart and super-slow?

  • @swagger7
    @swagger7 A month ago

    Am I gonna have to get good gear to get to the Level 5 boss? 😉😁

  • @rakoczipiroska5632
    @rakoczipiroska5632 A month ago +1

    So Strawberry's makers discovered practising as an element of learning. Who would have thought?😉

  • @gitgudchannel
    @gitgudchannel A month ago

    LLMs have already peaked

  • @djayp34
    @djayp34 A month ago

    People are complaining about Sora or the Voice update. I can't wait for the "But what the h* is Anthropic doing?" era. Hype machine at its best.

  • @JRS2025
    @JRS2025 A month ago

    I feel like this is named after LLMs' struggle to recognise that "strawberry" has 3 R's in it.

  • @aditya_p_01
    @aditya_p_01 A month ago +1

    Strawberry 🍓

  • @josephmcleod2220
    @josephmcleod2220 A month ago

    Yes, please run a tutorial.

  • @Zalktislaima
    @Zalktislaima A month ago

    5 sounds like early ASI rather than AGI to me; AGI seems like somewhere between 3 and 4. Most humans seem to be at 3, with occasional glimmers of 4, though they also don't have the knowledge base of 2 or, most often, even 1. Something that was regularly, truly innovating would at worst be in the top tiny percentile of humanity, though I question how much of that in many cases is really taking others' innovations and having the awareness of how to market them (with potentially rare exceptions like perhaps Tesla or Leonardo or what have you). There is no human that can do 5, so I don't see why that would be AGI rather than early-stage ASI.

    • @Hohohohoho-vo1pq
      @Hohohohoho-vo1pq A month ago

      People move goal posts and overestimate the abilities of the average human

  • @baheth3elmy16
    @baheth3elmy16 A month ago

    I really enjoyed this video! Thank you very much!

  • @Sorpendium
    @Sorpendium A month ago

    Strawberry is a really good name and they should use it for their most advanced model. If strawberry comes out and it's not very good I'm going to be disappointed. 😂

  • @stanpikaliri1621
    @stanpikaliri1621 A month ago

    What a strange name for an AI model: Strawberry 🤣

  • @FriscoFatseas
    @FriscoFatseas A month ago

    Looking forward to you having something to make a video about that isn't bullshit weak rebranded rumors or some weak AI collaborative marketing video.

  • @DanTheBossBuster
    @DanTheBossBuster A month ago

    Does anyone appreciate the irony here... the "hard part" is getting the computers to be logical :)

  • @MatDGVLL
    @MatDGVLL A month ago

    Where is Sora ? 👀

  • @victorbecerra5700
    @victorbecerra5700 A month ago

    ASI in 2025 is my prediction.