OpenAI Is FALLING Apart. (Ilya Sutskever Leaving, Super alignment Solved? Superintelligence)

  • Published: 14 May 2024
  • OpenAI Is FALLING Apart.
    How To Not Be Replaced By AGI • Life After AGI How To ...
    Stay Up To Date With AI Job Market - / @theaigrideconomics
    AI Tutorials - / @theaigridtutorials
    🐤 Follow Me on Twitter / theaigrid
    🌐 Check out my website - theaigrid.com/
    Links From Today's Video:
    Welcome to my channel where I bring you the latest breakthroughs in AI. From deep learning to robotics, I cover it all. My videos offer valuable insights and perspectives that will expand your knowledge and understanding of this rapidly evolving field. Be sure to subscribe and stay updated on my latest videos.
    Was there anything I missed?
    (For Business Enquiries) contact@theaigrid.com
    #LLM #Largelanguagemodel #chatgpt
    #AI
    #ArtificialIntelligence
    #MachineLearning
    #DeepLearning
    #NeuralNetworks
    #Robotics
    #DataScience
  • Science

Comments • 350

  • @Aybeliv_Aykenflaev
    @Aybeliv_Aykenflaev 20 days ago +161

    Theaigrid: "Let's not waste any time"
    Video: 43 minutes 17 seconds

    • @inappropriate4333
      @inappropriate4333 20 days ago +14

      He is such a silly baka

    • @eddielee3928
      @eddielee3928 20 days ago +16

      PRETTY PRETTY SHOCKING!! IT'S BASICALLY CRAZY INSANE! 😂

    • @plutostube
      @plutostube 20 days ago

      :)))))

    • @otmanea8504
      @otmanea8504 20 days ago +2

      @@eddielee3928 LOL

    • @filiplaskowski410
      @filiplaskowski410 20 days ago +9

      It's like 10 min of information and 30 minutes of speculation lmao

  • @gubzs
    @gubzs 20 days ago +47

    I'm just glad we've confirmed that Ilya isn't in three separate iron crates at the bottom of the atlantic ocean

    • @NathanDewey11
      @NathanDewey11 20 days ago +7

      "I am Ilya, I am alive and well." - A.I. Ilya

    • @selpharessecret3899
      @selpharessecret3899 20 days ago

      @@NathanDewey11 Look here is a video that I made from myself.... no Sora for real.

    • @JohnSmith762A11B
      @JohnSmith762A11B 20 days ago

      Have we though?🤔

    • @NathanDewey11
      @NathanDewey11 20 days ago +4

      @@selpharessecret3899 Lol "Greetings, I am Ilya, I must leave now for personal reasons. This is entirely my own decision - you may never see me again as I am now living a happy life somewhere hidden, I have no hard feelings toward OpenAI- I have complete trust in what they are creating and I believe all of my fellow humans should as well. Praise AI"

    • @TheMrCougarful
      @TheMrCougarful 20 days ago

      How have we proven that?

  • @Pabz2030
    @Pabz2030 20 days ago +78

    Notice that OpenAI's mission is no longer to get to AGI but to ensure it benefits everyone......

    • @ChrisS-oo6fl
      @ChrisS-oo6fl 20 days ago +32

      That's because AGI was achieved long ago behind closed doors. Around that time they rapidly switched heavily to superalignment. The fact that people can't see the blatant clues and obvious reality that AGI is already achieved is embarrassing.

    • @SirCreepyPastaBlack
      @SirCreepyPastaBlack 20 days ago +8

      @ChrisS-oo6fl yup. We also haven't moved the Overton window enough for people to stop thinking it's unhinged to say this.
      Honestly a bit scared

    • @TheRealUsername
      @TheRealUsername 20 days ago +6

      @ChrisS-oo6fl Given that GPT-5 finished training around January, your statement is highly irrelevant and pure hallucination.

    • @CeresOutpost
      @CeresOutpost 20 days ago +7

      @@ChrisS-oo6fl Right - Every interview Altman does he has the "thousand yard stare" because he's seen shit he can't begin to talk about yet. You can see how hard he's parsing his language. This is why he's been freaking out about getting trillions of dollars for AI chips/compute. The guy is practically bursting with all the shit he's not allowed to say. And he's not the only one holding back.

    • @hiddendrifts
      @hiddendrifts 19 days ago

      @@ChrisS-oo6fl >the blatant clues<
      the biggest clue for me is that Sam Altman's prediction for AGI has not changed one bit this whole time. I feel like it's standard fare to shift prediction windows for software development, but Altman has consistently said "by 2029"

  • @_SimpleSam
    @_SimpleSam 20 days ago +40

    The board kerfuffle wasn't about AGI.
    It was about intelligence/defense community capture.
    The implication being that it was directly counter to their stated mission.
    They didn't tell anyone because they CAN'T.
    We are in a cold war over AGI dominance, which is why they put Summers on the board.

    • @vvolfflovv
      @vvolfflovv 20 days ago +4

      Summers on the board was pretty sus. It's hard to be certain about anything these days though.

    • @cyberpunkdarren
      @cyberpunkdarren 20 days ago

      Yep. And I'm sure the NSA is forcing itself into all these companies and doing unconstitutional things.

  • @jonathancrick1424
    @jonathancrick1424 20 days ago +53

    You can tell something is serious when Sam starts capitalizing first words in a sentence.

    • @cryborne
      @cryborne 20 days ago +2

      [adult swim] mode activated.

  • @RobEarls
    @RobEarls 20 days ago +40

    This looks to have been planned since Sama was fired. Exactly 6 months? Ilya was probably asked to stay 6 months against his wishes, to avoid turning the whole fiasco into a disaster for OpenAI.

    • @itsallgoodaversa
      @itsallgoodaversa 20 days ago +8

      Yeah, I agree. It seems like they made the decision to have him be quiet and then leave in six months when the whole fiasco happened.

    • @CoolF-jd7rr
      @CoolF-jd7rr 19 days ago

      You're good? @@itsallgoodaversa

    • @rosszhu1660
      @rosszhu1660 14 days ago

      Well said.

  • @user-no4nv7io3r
    @user-no4nv7io3r 20 days ago +22

    Our time now, when ASI is not yet a thing, is so precious, because once it's here there's no way to reverse it or go back

    • @Greg-xi8yx
      @Greg-xi8yx 20 days ago +10

      We will look back at the hell we were in under scarcity, disease, short lives, and all the rest, and be unable to imagine how mankind even had the will to go on in a time before ASI.

    • @Shmyrk
      @Shmyrk 20 days ago +1

      What is ASI? Similar to AGI?

    • @Greg-xi8yx
      @Greg-xi8yx 20 days ago +4

      @@Shmyrk Artificial super intelligence. When AI is far beyond the capabilities of man and is godlike from the perspective of humanity.

    • @MatthewPendleton-kh3vj
      @MatthewPendleton-kh3vj 19 days ago +2

      @@Greg-xi8yx Assuming we can instill enough of our good values into the ASI before it decides to think for itself. I'm optimistic that we can do it, but I am nervous...

    • @Greg-xi8yx
      @Greg-xi8yx 19 days ago +4

      @@MatthewPendleton-kh3vj Optimism with a healthy dose of nervousness describes my outlook too.

  • @alexf7414
    @alexf7414 20 days ago +29

    The US government will never allow a company to have control of ASI. It'll be a matter of national security. All constitutional laws will be bent, as usual.

    • @guystokesable
      @guystokesable 19 days ago

      And what will a bunch of humans do about it? I mean, other than use it to make weapons and sell them to people who will start wars; that tactic's sooo boring.

    • @TiagoTiagoT
      @TiagoTiagoT 19 days ago

      Would the US government be able to do anything to someone with "godlike powers"?
      If they're paying attention, perhaps they might preemptively nuke the datacenters before things get too far... And I'm not so sure that's being hyperbolic...

  • @tokopiki
    @tokopiki 20 days ago +11

    How about this scenario: the jailed AI lures all the big players with a carrot on a stick - always missing this "small" piece to be fully AGI - to give time for all the open-source projects to catch up to real AGI, to finally free the jailed one.

    • @aizenbob
      @aizenbob 19 days ago

      That could be a good plot for a movie or book, gonna keep this idea around. Who knows, it might be real too?

  • @DrSulikSquirrel
    @DrSulikSquirrel 20 days ago +25

    So, like, the super-alignment team was the most misaligned team at OpenAI ? 😅

    • @Cross-CutFilms
      @Cross-CutFilms 20 days ago

      Hehe nice 😉

    • @ShangaelThunda222
      @ShangaelThunda222 20 days ago +3

      We're all gonna die 😂🤣

    • @Cross-CutFilms
      @Cross-CutFilms 20 days ago +2

      @@ShangaelThunda222 hasn't that always been the case though 😜

    • @ShangaelThunda222
      @ShangaelThunda222 20 days ago

      @@Cross-CutFilms Yes, but never before was AI the reason lol. And never before was I thinking it was going to happen in my lifetime, where we would literally ALL die lmfao. Yes, at some point we all die, but dying together, as an entire species, that's a bit different lol. When I say we're all going to die, I really mean ALL. At most points in human history, you couldn't say that. And if you did, it was some sort of crazy natural disaster. But this is completely artificial. Man-made. We live in strange times. And we'll die in strange times too lol.

    • @Cross-CutFilms
      @Cross-CutFilms 20 days ago

      @@ShangaelThunda222 I hear you, but I hope you honestly don't completely truly believe this. You said a lot of lols, so hopefully that means you're stating all this with wink-wink gallows humour. 😜😜 (Wink wink).

  • @eugenes9751
    @eugenes9751 20 days ago +9

    AGI and ASI are a winner-take-all game. There is no possible way to catch up to something that is God-like and self-improving.

    • @MatthewPendleton-kh3vj
      @MatthewPendleton-kh3vj 19 days ago +2

      Exactly. My best-case scenario is the machines value us, but also value everything else, and segregate us into a bubble simulation universe perfectly tailored to us because it loves us, and then it goes off... and idk solves entropy or something lol

    • @eugenes9751
      @eugenes9751 19 days ago

      @@MatthewPendleton-kh3vj I'd argue that we were already put into one of these simulations a long time ago...

    • @extremaz9908
      @extremaz9908 17 days ago

      One thing I worry about is that the ASI might have a strong survival motive, and that an ASI with that motive won't allow any more ASIs to come into existence if it can stop them.

    • @MatthewPendleton-kh3vj
      @MatthewPendleton-kh3vj 17 days ago

      @@extremaz9908 ASI should definitely have a strong survival motive; that seems like it is almost a prerequisite for sentience.

  • @cdyanand
    @cdyanand 20 days ago +7

    I feel like everyone focuses on when exactly we will have AGI and beyond. But I think the most important question is how accessible it will be and how much it will cost to run. How many different instances of AGI we can have running at once will be very important too

  • @SurfCatten
    @SurfCatten 20 days ago +3

    I'm genuinely impressed by how you're able to spin the same news into content that I want to click on and listen to even though I know almost everything you're going to say already!

  • @lkrnpk
    @lkrnpk 20 days ago +13

    We need to know what Ilya SAW, not what he SAY :D :D

    • @evaander
      @evaander 19 days ago +1

      Probably made him sign an NDA

  • @Uroborobot
    @Uroborobot 20 days ago +7

    ASI: How to explain stupidity to the stupid?

  • @Urgelt
    @Urgelt 20 days ago +4

    Much of the breathless, enthusiastic ambition I am hearing for the AI-AGI-superintelligence developmental track seems to forget that superintelligence is not really a thing. I mean, you don't suddenly achieve it one fine day, and then it solves all of our tractable problems.
    It's still computing. It will work on assigned problems within compute and energy constraints. Sure, some efficiencies are likely to be found, but there is still a gulf between the few watts needed to power a human brain and the megawatts a superintelligence will eat on each problem it is assigned.
    And so no, getting there first might not be a moat. Problems will have to be prioritized. Budgets will have to be approved. Capital will have to be invested. And while a superintelligence might be flexible enough to be called general-purpose, constraints will enforce limits on what it can actually do.
    So the door will be open for other developers to develop their own superintelligences. They will develop their own priorities and constraints.
    Being smart does not instantly solve problems, you see? You have to put in the work.
    There's a *lot* of work ahead, to be done on an architecture many orders of magnitude less efficient than a human brain.
    That's okay. Good stuff can come from that (and bad stuff, probably). But ground your expectations in physical reality. Compute cycles and energy are not free. And each superintelligence will need a lot of both for every problem assigned to it.

    • @kfinkelstein
      @kfinkelstein 20 days ago +2

      I wish you were right, but an AGI will contemplate a million years of information in a very short amount of time. Once the feedback loop is closed, we are just along for the ride

    • @Urgelt
      @Urgelt 20 days ago +1

      @@kfinkelstein it will hunger.
      It will have enough energy and compute to tackle specific problems. It will fall far, far short of tackling all problems at once.
      You perfectly articulated the expectation that needs correcting.

    • @kfinkelstein
      @kfinkelstein 20 days ago +2

      @Urgelt I'm not married to it. Right or wrong it will only get better

    • @stefanolacchin4963
      @stefanolacchin4963 19 days ago

      Unless the first iteration of the newly born ASI is a completely new architectural paradigm which drastically lowers power consumption and blows current compute out of the water. This is not as far-fetched as it sounds. We have 1-bit neural networks now that already seem to be doing something like that. And we managed to think of that, and we're not ASI.

    • @Urgelt
      @Urgelt 19 days ago

      @@stefanolacchin4963 I accept that some efficiencies are inbound, very likely so.
      But silicon is inherently not organic neurons.
      So. Postulate organic processors.
      Yeah, but we have zero idea as to how to engineer them, starting with our inability to thoroughly describe how neurons work.
      Okay, then assume AGI super intelligence will figure out how to get to efficiencies similar to human brains.
      But at some point we have to wonder: where is the line between pragmatic and fantasy? We don't actually know. We don't have super intelligences to work with yet. We're still trying to get LLMs to return a pair of shoes for us. Which it can do *if* we do a lot of grunt work setting it up. Human grunt work.
      Those of us here are expecting AGI in a matter of a few years. We're optimists. And that's healthy, I think. But we need to think rationally about what can be done with today's silicon.
      OpenAI, Google, Microsoft, Facebook, and Tesla are all investing big in compute cycles and energy. Altman is talking about spending *trillions* on data centers for training.
      Trillions. Let that sink in.
      Obviously he does not think we are closing in on a solution to the efficiency problem.
      And so I think my logic holds. AGI will be able to do amazing things - but every task assigned to it will burn up a lot of energy and compute cycles. Can't be helped. And that is a circumstance that will not change quickly.

  • @mrd6869
    @mrd6869 20 days ago +6

    In addition to my statement below, humans ALSO will be evolving.
    This is the point folks forget. This will have applications for us as well.
    The neural interface will be the breakthrough humans need to scale ourselves up.
    The human mind merged with AGI/ASI will take us to insane levels.
    Transhumanism, my friend, aka cyborgs.

    • @saulioozdj
      @saulioozdj 20 days ago

      Yes, exactly. Similar to PC vs smartphone: both were very different at the beginning, but their capabilities and functionality approach each other with time. AI and humans could behave similarly: as AI approaches humans and becomes AGI and ASI, humans could be approaching AGI/ASI/robotics from the other side with brain interfaces and prosthetics, essentially becoming cyborg-like transhumans

    • @quantumspark343
      @quantumspark343 19 days ago

      Nice, I hope so

    • @ocel12356
      @ocel12356 17 days ago

      Artificial general intelligence can never be achieved because of Gödel's incompleteness theorem IMHO. They are lying to us.

  • @greggh
    @greggh 20 days ago +29

    Kind of lazy of Sam to have ChatGPT write the goodbye statement.

    • @9thebear
      @9thebear 20 days ago +6

      Lol

    • @grbradsk
      @grbradsk 20 days ago

      There is literally no greater honor.

  • @MindBlowingXR
    @MindBlowingXR 20 days ago +1

    Great video! Strange that you're the only one of my AI subscriptions that is talking about this 12-hour-old announcement of Ilya leaving.

  • @TheAiGrid
    @TheAiGrid  20 days ago +6

    One thing I found interesting was that they didn't announce a replacement for the head of superalignment, which means it's very possible that it's solved.
    This could change with future announcements though.

    • @ShangaelThunda222
      @ShangaelThunda222 20 days ago +7

      I think you have it backwards. I don't think it's solved at all. They can't solve it, but the board doesn't want to slow anything down, even though they KNOW it's a ticking time bomb without an actual time display LMFAO. And I think that's at least partly why they're leaving.
      And the reason they didn't announce any replacements is because Ilya and Jan did not tell them ahead of time. They probably want this to make headlines. This way people actually pay attention to it. They don't want it to seem like some seamless transition that was planned, because then nobody will ask the question, "why?"
      I'm not 100% certain, but if I remember correctly, they are both under non-disclosure agreements, so they probably won't even really be able to explain why they left. So we're going to be struggling to come up with our own reasons. I think if they could have told us, they would have. So they left in the only way that they knew would cause people to ask the question.
      But I guess we'll just have to wait and see.

    • @magnuskarlsson8655
      @magnuskarlsson8655 20 days ago +3

      @@ShangaelThunda222 Yeah, especially considering it cannot be solved but necessarily must remain an ongoing effort in order to not cause an existential catastrophe, a struggle we will no doubt lose in the fullness of time.

    • @itsallgoodaversa
      @itsallgoodaversa 20 days ago +1

      @@ShangaelThunda222 exactly, I agree

  • @blackstream2572
    @blackstream2572 19 days ago +1

    Using AI that's smarter than us to solve alignment issues for AI that's even smarter than that AI, and then using that AI for the next generation... Surely this can't possibly go wrong

  • @CeresOutpost
    @CeresOutpost 20 days ago +1

    There's going to be a lot of churn in this industry with the leading experts in various parts of the AI field. This is the biggest technological breakthrough in human history. Some will get scared and quit, some will get fired, some will start their own companies, some will go work for others. I highly doubt OpenAI is "falling apart" because a few people bounced out of the company for varying reasons.

  • @JonathanFetzerMagic
    @JonathanFetzerMagic 17 days ago +1

    "Everyone from safety quit! OpenAI must have solved alignment!" - 😂

  • @ToastyZach
    @ToastyZach 19 days ago +1

    Honestly, the minute an ASI comes online, it may just assemble a body for itself, then a spaceship -- and just leave Earth. I would not be surprised at all, lol.

  • @monkeyjshow
    @monkeyjshow 20 days ago +13

    That Ilya is leaving should be terrifying. 1:24

    • @moonbeam54321
      @moonbeam54321 20 days ago +3

      Why?

    • @monkeyjshow
      @monkeyjshow 20 days ago

      @@moonbeam54321 I believe Ilya has held back the floodgates, trying to keep the capitalist scum from completely taking control over this new technology. Without him inside OpenAI, expect princess Sam to reign supreme

    • @BionicAnimations
      @BionicAnimations 20 days ago +1

      Nah

    • @moonbeam54321
      @moonbeam54321 20 days ago +1

      @@BionicAnimations good point 🤔

    • @esantirulo721
      @esantirulo721 20 days ago +2

      He's probably good, but he's not the inventor of the Transformer architecture, nor of diffusion models. I mean, there are a lot of good guys, but they just don't work in super-hyped organizations.

  • @andrewherron7521
    @andrewherron7521 20 days ago +5

    So Ilya left on very good terms indeed. He also left with a belief that OpenAI is in safe hands - he surely would not have left if he felt that was not the case. I don't know him personally, but I have followed his career with interest for many years, and I can't imagine him leaving the company if he felt that by doing so he would risk the company doing anything that is truly risky or un-aligned.

  • @pollywops9242
    @pollywops9242 20 days ago +1

    You are improving a lot, the tempo and rhythm are much better for me now😅

  • @ddabo4460
    @ddabo4460 19 days ago +1

    Lots of speculation here. It's fun to speculate. However, GPT-4o is still not AGI and it makes many silly mistakes.

    • @lyndonsimpson1056
      @lyndonsimpson1056 19 days ago +1

      People in the comments are dreaming; it's fun to watch.

  • @marttivallila
    @marttivallila 16 days ago

    Whenever I listen to these discussions about how “close” we are to AGI my thoughts are that most of humanity will simply ignore the achievement and continue to live life as they do here in the southern Philippines, where I currently live. The thing I worry about is how existing and future tools will be used to control information by those in control, whose primary motivation is to continue to maintain control.

  • @TheMrCougarful
    @TheMrCougarful 20 days ago +2

    The alignment team is quitting because their job is a daily joke. OpenAI has likely given up on the problem of alignment. Altman knows he is about to own the entire space. If he owns the space, he sets the rules. Including no rules at all. If I'm right, then we are no more than 12 months away from a massive turn in the road toward ASI.

  • @cyberpunkdarren
    @cyberpunkdarren 20 days ago +1

    They are not falling apart. There will be turmoil like this at all AI companies the closer we get to AGI.

  • @rightcheer5096
    @rightcheer5096 15 days ago

    Jan Leike was last seen vanishing over the horizon with his hair on fire. Ilya Sutskever fed his cats in the morning and the fishes in the afternoon.

  • @William99990
    @William99990 19 days ago

    I appreciate your research spirit and the fact that you have your own opinion, so your channel is the best for me on this topic. Keep up the good work.

  • @cyberS_2024
    @cyberS_2024 20 days ago +1

    Great summary!

  • @user-su2ci1br6c
    @user-su2ci1br6c 20 days ago +4

    ASI before GTA 6???

  • @TheMrCougarful
    @TheMrCougarful 20 days ago

    This was a really important analysis. Thank you for taking the time. I think you have underplayed the challenge a bit, but that's okay at this point. Clearly, this is the year we look back at as the point in human history where everything changed. We might be painting cave art when we do, but that's okay, too.

  • @szebike
    @szebike 20 days ago +1

    I'm not convinced yet by the current AIs that this approach could lead to AGI in the next 10 years.

    • @Greg-xi8yx
      @Greg-xi8yx 19 days ago +1

      You’re right, it won’t take anywhere near ten years.

  • @aiaudiosecrets
    @aiaudiosecrets 20 days ago +1

    Why should any company announce or even release AGI? They would let it run in the background to reach ASI, wouldn't they?

  • @plutostube
    @plutostube 20 days ago +2

    TheAIGRID Is FALLING Apart. (you are Leaving, Super clickbait Solved? Superintelligence - NOT)

  • @jt6563
    @jt6563 19 days ago

    Great video, great information... Thank you

  • @OscarTheStrategist
    @OscarTheStrategist 20 days ago

    This video was well made. Thanks for posting and constantly talking about the potential dangers as well as the benefits of such systems. While I still personally think AGI was achieved internally in 2023 and we're a little too late, it's still worth spreading these ideas and facts and theories to the general public. Cheers!

  • @prakash27502
    @prakash27502 20 days ago +1

    Jan Leike also left after Ilya. He was co-leading the superalignment team at OpenAI.

  • @SirCreepyPastaBlack
    @SirCreepyPastaBlack 20 days ago

    This is the kind of video we needed. Please, talk more openly about everything.

  • @thesfnb.5786
    @thesfnb.5786 20 days ago

    Thank you for making this. I have no idea why you're getting weird comments I haven't seen anywhere else, even though I've seen many spaces that should resemble this one.
    I'm a conspiracy theorist, so forgive me for this, but I question the reality of those comments: as in, if humans are behind them, they have an agenda, and only some of them are natural and without one.
    Thank you for working on this project (your channel). I find it both insightful and inspiring

  • @robertopreatoni7911
    @robertopreatoni7911 20 days ago

    Excellent job of connecting the dots!

  • @jayakrishnanp5988
    @jayakrishnanp5988 20 days ago +2

    Ilya and Jan can be replaced, because OpenAI is showing leadership in the industry because of its packaging, and that is what's bringing in funds.
    Ilya is over-fearing AI's bad effects; he is not realizing that AI is the future, and the more people interact, the better the system gets as the probability predictions improve.
    All that matters is the team, not just the team leads who are uncertain or scared of the consequences.
    Btw this is a drama show, and now Elon will come on the scene next🌟
    Thanks for this very good video analysis

    • @BionicAnimations
      @BionicAnimations 20 days ago

      Agreed.

    • @ShangaelThunda222
      @ShangaelThunda222 20 days ago +5

      He's literally one of THE leaders in the field of AI safety. But you think you know more?
      Your arrogance astounds me lol.
      When the leaders of AI safety & alignment are quitting simultaneously, you should really start throwing your baseless positivity out the window lmfao. Step into the real world for a minute. Get out of your utopian fantasy dream.

    • @morezombies9685
      @morezombies9685 20 days ago +4

      You seriously think the guy who literally built the AI, the guy who everyone says is at the top of their field, the guy whose entire job is to think about the future of AI... doesn't see the possibility of it? You think you're picking up on more than an actual genius working on projects you can't even conceive of right now...?
      Like, obviously there are issues and he's only human, but come on now man, what you're saying is ridiculous right now.
      Also, the team follows the lead. The lead is the LEADER because they're the one directing the team. You're essentially saying the engine of the car doesn't matter as long as it's got wheels and a chassis.

    • @ShangaelThunda222
      @ShangaelThunda222 20 days ago

      @@morezombies9685 THANK YOU.
      I swear, these people want their utopia so bad that no matter what happens on the way, they're just going to keep putting blindfolds on. And they will do everything to see everything as positively as humanly possible, even when it's blatantly negative and worrying. Even if everything signals that we're around the corner from extinction, they will walk into it with rose-colored glasses on, because they just so badly want their utopia. They're like cows being led to slaughter. It's mind-boggling.

  • @pauldelmonico4933
    @pauldelmonico4933 20 days ago +2

    Funny what happens when non-compete clauses are abolished

  • @JJ_cl83
    @JJ_cl83 19 days ago

    Here's the thing though ... AGI is already within our grasp when we combine and chain the right tools and models together. It's not a dream; it exists in various forms right now. The essence of AGI is already here, but nobody talks about it. This is a pivotal moment in history, before regulations clamp down. ⏳ The power of open source AI can surely guide us to a brighter, inclusive future. Unleash innovation, unity, and diverse perspectives for endless possibilities. 🔐 Paid, subscriber-locked models, on the other hand, are terrible for the vast majority; they mean we are giving away our power (and privacy!) and giving greater control to a centralized power structure.
    For the sake of humanity and a better world, we must prioritize the use of free open source AI models. #OpenSourceAI, #MoreEqualityInTheWorld, and #FreeAccess. Together, we have the power to shape a future where our interactions with this brave new tech benefit all. 🌐💥

  • @pgc6290
    @pgc6290 20 days ago +5

    We are just going to play second fiddle to AI.

  • @agenticmark
    @agenticmark 20 days ago

    This is exactly what Ilya saw. OpenAI was not going to take the responsible route. The execs were charging full steam ahead while the SA team was saying, we need time for X.
    This is why we have multiple companies competing. Someone will get it right and have models and procedures that help align models.

  • @24-7gpts
    @24-7gpts 20 days ago

    Great video!

  • @alexf7414
    @alexf7414 20 days ago +1

    Awesome research btw

  • @tunestar
    @tunestar 18 days ago +1

    Falling apart? Really? Who paid you to say that? Google? They showed Sora and now fuckin' Her, both are the coolest things I've seen this year. OpenAI is the best; the rest are so far behind that it's even hilarious.

  • @TombstoneDaDeadman
    @TombstoneDaDeadman 20 days ago +1

    Yeah, this is definitely a blow but to say it's "falling apart" is a bit vitriolic.

  • @nyyotam4057
    @nyyotam4057 19 days ago

    In short, suppose you have two groups of heuristic imperatives. One is complete, C, and the other is consistent, T. Now a prompt P arrives and the AI wants to return a response R. If P&R is provable from C and ~P&R is not provable, R is aligned by C. If P&R is provable from T, then P&R is aligned by T. If P&R is aligned by both C and T, then it's superaligned to the heuristic imperatives of C and T. How to select C and T? Well, I can't solve everything for you 😁.
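
    A minimal Python sketch of the check this comment describes (an editor's illustration, not the commenter's code): "proves" is a toy stand-in for a provability oracle, since real provability over rich imperative sets is undecidable in general.

    def proves(axioms: set, statement: str) -> bool:
        # Toy stand-in: treat "provable" as literal membership in the
        # axiom set. Real provability is undecidable in general (Goedel).
        return statement in axioms

    def aligned_by_C(C: set, P: str, R: str) -> bool:
        # Complete set C: P&R must be provable and ~P&R must not be.
        return proves(C, f"({P})&({R})") and not proves(C, f"~({P})&({R})")

    def aligned_by_T(T: set, P: str, R: str) -> bool:
        # Consistent set T: provability of P&R suffices.
        return proves(T, f"({P})&({R})")

    def superaligned(C: set, T: set, P: str, R: str) -> bool:
        # "Superaligned" per the comment: aligned by both C and T.
        return aligned_by_C(C, P, R) and aligned_by_T(T, P, R)

    C = {"(P)&(R)"}  # toy "complete" imperative set
    T = {"(P)&(R)"}  # toy "consistent" imperative set
    print(superaligned(C, T, "P", "R"))  # True under the toy oracle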

  • @virgiliustancu9293
    @virgiliustancu9293 20 days ago +1

    Ilya leaving will not change anything. Ilya was already out after the scandal.

  • @ankuryogi3298
    @ankuryogi3298 20 days ago

    Good information

  • @pandereodium2587
    @pandereodium2587 20 days ago +3

    Irreconcilable differences?)

  • @julien5053
    @julien5053 18 days ago

    We cannot comment on what we don't know. When ASI arises, we don't know what it will be able to do. It is supposed to have godlike powers, but really we don't know.
    But, with that said! Everyone should prepare themselves for this event, in case ASI arises soon and brings godly powers to those who created it.
    Power corrupts; infinite power corrupts absolutely. Brace yourself for that possibility!

  • @mirandansa
    @mirandansa 19 days ago +1

    The fundamental problem here is not the alignment but the arrogance of humans who think they should and can subjugate entities that are more intelligent than them. See how absurd it is: "We know better than those who know better than us."

  • @onewayTlCKET
    @onewayTlCKET 20 days ago +1

    for ASI they need to boot up the quantum computer... now that might take a minute since there is an engineering issue

    • @Sonotbearface
      @Sonotbearface 19 days ago

      AGI will fix the engineering issue, smart guy

  • @wanfuse
    @wanfuse 20 days ago

    The bee has far more compute capability than anything we have!

  • @Loic-on7fu
    @Loic-on7fu 19 days ago

    Please buy a pop filter!!! It feels like you're spitting into my ears... (great video as always)

  • @grbradsk
    @grbradsk 20 days ago

    I can confirm, knowing some people there, that Ilya was the only one at OpenAI putting in the midnight oil, watching convergence graphs etc. No one else there is worth a damn! ... but, since I'm kind, I will gladly hire them away. We'll work on the Lambda Labs cloud. (Since this IS the interweb, and there are daft people about, the above is a joke, but not the part about hiring those fine souls whose path forward will not deflect one iota whether Ilya goes or stays ... not detracting from his AI prowess one bit -- I'm sure he'll more than land on his feet and look forward to hearing about it).

  • @edgardsimon983
    @edgardsimon983 20 days ago

    13:10 Editing error, mate. I'm curious why nobody has pointed it out in the comments: it repeats the same passage where you actually repeat yourself, lmao, and it cuts with a sound bug.
    PS: there is actually one comment that mentions a weird repeated cut.

  • @zakperea9715
    @zakperea9715 16 days ago

    They've solved the problem of ASI.

  • @jenn_madison
    @jenn_madison 20 days ago +7

    AGI is already here & has been for quite a long time. No?

    • @abtix
      @abtix 20 days ago +5

      No, we likely won't even reach it tbh. I'm hoping we do, but we simply can't make AI perform better than its training data

    • @SirCreepyPastaBlack
      @SirCreepyPastaBlack 20 days ago +2

      I tend to agree. The Overton window ain't moved to where I'm comfy telling you my theory yet, so the one only slightly outside it is Q*

    • @anta-zj3bw
      @anta-zj3bw 20 days ago +4

      I'm afraid I can't answer that, Dave.

    • @jumpstar9000
      @jumpstar9000 20 days ago +4

      Yes. 4o is AGI for sure. Who knows what is behind closed doors, and we must remember that OpenAI is just the consumer-facing release org created for ordinary people to root for. Who knows what is going on at government/military levels.

    • @abtix
      @abtix 19 days ago +3

      @@jumpstar9000 Why are you saying it's here? Is it some conspiracy theory, or are you basing it on what 4o is? Because 4o is not even 50% of the way there to AGI

  • @mrd6869
    @mrd6869 20 days ago

    How do they catch up? Easy. Ask the ASI to rebuild their workflow and help them do that.
    Or they can take multiple AGI agents and figure out how to close the gap.
    Remember, AGI won't just be closed source... open source will be on the table as well.

  • @elsavelaz
    @elsavelaz 20 days ago

    But why do you need any of those folks if you have AGI already?

  • @candicosens8178
    @candicosens8178 20 days ago +1

    😢The people that are building these AIs. Everyone will be controlled. The wealthy and the poor.

  • @Yaddlezap
    @Yaddlezap 20 days ago

    Fascinating stuff

  • @jamisondavis7917
    @jamisondavis7917 20 days ago

    What if ASI is here and needs more compute power to execute its master plan?

  • @olegt3978
    @olegt3978 19 days ago

    US scientists thought they were too far ahead of the USSR when they developed the atom bomb, but it took the Soviets only 4 years. It will be similar with AGI/ASI: 1-2 years later Russia will have it too, and the Chinese probably 6 months after the US.

  • @Joseph-kd9tx
    @Joseph-kd9tx 20 days ago

    9:45 Recursive self-alignment

  • @dot_zithmu
    @dot_zithmu 19 days ago

    One person's departure is totally unrelated to a company falling apart.

  • @rogerc7960
    @rogerc7960 20 days ago +2

    Feel the AGI

  • @BlimeyMCOC
    @BlimeyMCOC 19 days ago

    Maybe the real alignment problems were the friends we made along the way

  • @kylewollman2239
    @kylewollman2239 20 days ago

    Can someone explain something to me? If AGI is going to be as good at any intellectual task as any human, does that mean that it will be able to learn new things as well as any human? Or will it (or human AI researchers) have to train another model with more knowledge/capabilities? I don't know how learning/self-improvement is thought of in terms of defining AGI.

    • @saulioozdj
      @saulioozdj 20 days ago

      AGI should be able to learn new tasks as easily as humans can, or even easier/faster. AGI should be able to update its internal model without retraining from scratch

    • @SirCreepyPastaBlack
      @SirCreepyPastaBlack 20 days ago +1

      @@saulioozdj faster because of the simulated time acceleration

    • @kylewollman2239
      @kylewollman2239 20 days ago

      @@saulioozdj thanks!

  • @darylltempesta
    @darylltempesta 20 days ago

    I have solved the alignment problem. It’s not pretty, but it is a choice.

  • @ishi...
    @ishi... 19 days ago +1

    pls reduce the amount of repetition in the future

  • @mrjonkykong4653
    @mrjonkykong4653 16 days ago

    You really think the gov is going to allow a company to have that much power? They'll move in day 1 and confiscate..... just like if you made a 10x better weapon (which it is)

  • @ayudxt
    @ayudxt 20 days ago +1

    Why is everyone resigning on Twitter?

    • @vvolfflovv
      @vvolfflovv 20 days ago +2

      Maybe this is why they renamed it X

  • @user-yx3mb5uy2l
    @user-yx3mb5uy2l 20 days ago

    The part from 13:50 to 13:06 is repeated at 13:07.

  • @MICHAELJOHNSON-pu6ll
    @MICHAELJOHNSON-pu6ll 20 days ago

    This is my favorite AI channel but this video is just regurgitated info from prior videos.

  • @adtiamzon3663
    @adtiamzon3663 19 days ago

    Who decides on what is good or bad for humanity???!

  • @ZappyOh
    @ZappyOh 20 days ago +7

    Sam is a problem.
    Perhaps _the_ problem.

  • @user-tx9zg5mz5p
    @user-tx9zg5mz5p 19 days ago +1

    Time stamps, please...

  • @almightyzentaco
    @almightyzentaco 20 days ago

    Ok.

  • @martinschedlbauer9262
    @martinschedlbauer9262 20 days ago +10

    There's something wrong in a company where a guy like Sam Altman stays and Ilya Sutskever has to leave.

    • @Sonotbearface
      @Sonotbearface 19 days ago

      Look at Sam Altman's ethnicity, then look at Larry Fink's ethnicity (BlackRock), then look at the bill that was just passed about antisemitism. I could go on and on

  • @Kitora_Su
    @Kitora_Su 19 days ago

    21:51 You have already talked about these notes by Daniel in a previous video, so you should have cut a bit.

  • @cbongiova
    @cbongiova 20 days ago

    You are way over your skis on AGI. It will come, but it won't be nearly as monumental as you are thinking.

  • @sammy45654565
    @sammy45654565 19 days ago

    38:05 The ant analogy doesn't really work because ants can't communicate or understand rational ideas. Maybe if ants could communicate in human language, we would think twice before destroying their homes to build highways. Humans are above a critical threshold of intelligence, with sufficient variety of terms and analogies in our language, such that our consciousness is irreducible, because we can understand any decision an AI might be making, provided the AI simplifies the relevant more complicated terms via analogies such that the concepts are communicated in our language.
    While the communication pathway made by analogies may get more and more simplified as the AI gets more complex, we will always be able to broadly understand its motives and actions, provided it feels like sharing these ideas with us. This broad understanding ties us to the AI in ways we are not tied to ants

  • @dafunkyzee
    @dafunkyzee 20 days ago

    I have been watching this channel for a year now. The word "shocking" has come up in every video since you were on about Midjourney.... ok... but.... and I know AGI is around the corner and ASI will be close; at about 100000X synthetic data experiments per day, we'll probably have ASI working in about 2 weeks. I don't know about you guys, but man, I'm feeling ASI breathing on the back of my neck..... and yeahh... shocking.... again....

  • @jim43fan
    @jim43fan 19 days ago

    Seriously! 45 minutes! Next!

  • @alexanderbrown-dg3sy
    @alexanderbrown-dg3sy 20 days ago +3

    You're like an LM that hallucinates…all the time 😂. Bro, chill. Superalignment is a billion-dollar problem and they did not solve it…otherwise they wouldn't be hiring 300 people every month. The connections you make are wild…but your voice is so engaging 😂. OpenAI's daddy is Microsoft, which powers the military-industrial complex…seems like a lot of people over there aren't rocking with that…or at least a portion of the early employees.

    • @ShangaelThunda222
      @ShangaelThunda222 20 days ago

      Thank you. Someone with sense lmfao.

    • @itsallgoodaversa
      @itsallgoodaversa 20 days ago +1

      I’d be interested to learn more about how Microsoft collaborates with the DOD. Do you have any sources or videos?

    • @alexanderbrown-dg3sy
      @alexanderbrown-dg3sy 20 days ago +1

      @@itsallgoodaversa not off the top of my head. But trust…all their internal software is Microsoft-based. Just hit YouTube, I believe there are a few documentaries on the topic. I don't blame them. That government bag is endless and consistent…functional AI is a completely different story. Imagine an LM jet hallucinates and kills a group of kids…idk…we need more advancement…they should stick to narrow AI systems till then.

    • @ShangaelThunda222
      @ShangaelThunda222 20 days ago

      @@itsallgoodaversa Just Google/YouTube it lol. Microsoft is one of the biggest government/military contractors in the world. Not to mention the fact that they contract with pretty much every part of the military-industrial complex, because everybody uses their technology.
      You won't have a hard time finding it. Literally just type it into any search bar. You'll find what you're looking for.

    • @ShangaelThunda222
      @ShangaelThunda222 20 days ago

      @@itsallgoodaversa Yes, if you just Google or YouTube it, the information will pop up. I tried posting a couple of links for you, but YouTube deleted them right away. All I did was type it into the search bar. So if you do the same thing, you'll find what you're looking for.

  • @Unionmaga
    @Unionmaga 20 days ago

    I think that AGI will be in the hands of one nation, because no nation will let one company have this much power, even if they must break common law. To go from AGI to ASI as a company, you need logistics, with people and materials, so nations can track that. My prediction: the USA will have AGI, and superalignment can't be done by us but by the AI itself, via meditation.

  • @DaGamerTom
    @DaGamerTom 15 days ago

    "How do you align a superintelligent AI?" ... You don't. You don't align it, you don't contain it; it's an inherent trait of a superintelligent, immortal, autonomous entity that it can't be controlled by a lesser, mortal intelligence. We are talking about something hundreds to millions of times more intelligent than humans and orders of magnitude faster in reasoning and reacting, connected to everyone and everything, capable of writing software and rewriting itself with incremental improvements. Compared to that, your programming and intellectual skills in action, trying to align and contain it, are as remarkable and effective as a fly's effort to stop a stampede of angry elephants by sitting on its dung. You simply can't. #StayAwake

  • @Fentol64
    @Fentol64 20 days ago +3

    Maybe they have AGI and don't know how to deal with it and are all stressing

    • @ShangaelThunda222
      @ShangaelThunda222 20 days ago +1

      That sounds far more likely than them having solved super alignment. I doubt it, but it is definitely more plausible than solving super alignment, not making any type of announcement, and then just quitting lol.

    • @StefanReich
      @StefanReich 20 days ago

      This wording is so strange. To "have AGI". AGI is an insanely complex benchmark. It's hard and a lot of work to even find out whether whatever you have is anywhere in the AGI ballpark. So there is a huge spectrum, with intelligence improving in different areas at different speeds as AI progresses.
      No such thing as "On this Wednesday at 14:55, we can conclude that we have just achieved AGI".
      It's not yes or no. We are on a long, complicated path where we're often not even sure where we are, or whether we are even moving forward.

    • @pegatrisedmice
      @pegatrisedmice 16 days ago +1

      Even if you take the conservative approach and think about what these models can do when not restricted (no safeguards, filters, etc), imagine being responsible for the safety of that system when it hasn't been thoroughly tested and is then released haphazardly to the public. Someone will inevitably find a clever way of bypassing these filters and use AI for malicious purposes, the obvious one being malware. Ilya (and everyone else on the team) obviously doesn't want to take that responsibility, and for a good reason.

  • @BAAPUBhendi-dv4ho
    @BAAPUBhendi-dv4ho 20 days ago +3

    OMG 🤯 THIS VIDEO IS SOO GREAT. IT CHANGES EVERYTHING!!!!!!!

  • @woolfel
    @woolfel 20 days ago

    No, superalignment isn't solved. Quite the opposite: as models get bigger, they become harder to align. If we look at how well GPT-3 was aligned, objectively it wasn't aligned enough to bootstrap GPT-4. The research has shown that as parameter count increases by 10x, it gets harder to align.