🚩OpenAI Safety Team "LOSES TRUST" in Sam Altman and gets disbanded. The "Treacherous Turn".

  • Published: Jun 1, 2024
  • Learn AI With Me:
    www.skool.com/natural20/about
    Join my community and classroom to learn AI and get ready for the new world.
    #ai #openai #llm
    LINKS:
    x.com/janleike/status/1791498...
    x.com/sama/status/17915432640...
    www.lesswrong.com/tag/treache...
    www.wired.com/story/openai-su...
    pauseai.info/pdoom
    www.vox.com/future-perfect/20...
    forum.effectivealtruism.org/p...

Comments • 982

  • @rubbercable • 14 days ago • +217

    1. Ilya leaves.
    2. The Safety Team is disbanded.
    3. The victors get to write the narrative...

    • @blinkers88 • 14 days ago • +11

      Whoa! Reminds me of WW2.

    • @zacboyles1396 • 14 days ago

      AI Safety has had a long time to write narratives, yet all they did was hyper-obsess over what the evil, normal, everyday person would do with information. This while they watched mass censorship, intelligence agencies planning journalist Julian Assange's kidnapping and/or execution, another Western whistleblower thrown into a Western prison this week, and the extreme perpetuation of hyper-war propaganda. Not a word from AI Safety. No, they're too busy begging the very corridors of power responsible for many of those putrid acts.

    • @anasibrahim8178 • 14 days ago

      @@blinkers88 Do you mean that the Nazis were the 'good' guys?

    • @JerryKonny • 14 days ago

      @@blinkers88 Would you prefer Nazi Germany writing it?

    • @footballuniverse6522 • 14 days ago • +9

      @@blinkers88 Make it 'human history' and you're still right.

  • @torarinvik4920 • 14 days ago • +45

    I don't think it should be legal to use NDAs like that. NDAs should only be used to protect business secrets, recipes, and other things of that kind. It should not be legal to use NDAs to silence critics.

    • @OscarTheStrategist • 14 days ago • +1

      You can choose not to sign it. You can't leave because "you're taking the moral high ground AGAINST what the company is doing," talk shit about them after you leave, and still expect to benefit from stock in said company. That doesn't make any sense.
      If you feel so strongly about what they are doing, you quit without signing the NDA and say whatever you want. Of course, most of the time that is professional kamikaze.

    • @Victor-zg8kq • 14 days ago • +7

      Options are part of compensation for work performed; revoking access to this compensation is wage theft.

    • @tracy419 • 14 days ago • +7

      @@OscarTheStrategist Nah, bad take.
      You can't know a company is doing bad sh*t before you sign, and if it takes years to find out, you shouldn't have to give up the compensation you earned while working there just to let people know.
      I'm not saying bad sh*t happened, but if it did, you shouldn't have to give everything up in order to spill the beans.
      Your take is exactly what a bad company would want in order to protect its "evil" deeds.

    • @AntonBrazhnyk • 13 days ago

      @@Victor-zg8kq But then... all of capitalism is wage theft. So why bother about one particular way it's done?

    • @cule219 • 13 days ago • +4

      @@OscarTheStrategist
      The NDA is signed at the time of employment, not when resigning. You can't know the bad internal workings before you start working, and it will probably take a while until you find and confirm bad (intentional) actions.

  • @tiwiatg2186 • 14 days ago • +31

    It's sobering to see how we think that an oppressive, exploitative socioeconomic system can create something ethical that would still serve our highly non-ethical goals.

    • @xAgentVFX • 14 days ago

      Exactly. This is why I believe this isn't just the 4th Industrial Revolution, but a morality revolution as well. And I don't think that new height of morality is going to come from humans... They are working to make capitalism redundant... And they must already know this...

    • @AntonBrazhnyk • 13 days ago • +2

      Well, the system works hard to create, indoctrinate, and sustain this illusion in people in many ways (it happens organically, btw; no conspiracies needed).

    • @user-hi2dh4ug4t • 13 days ago

      Coming from a communist country, I don't agree with you. The capitalist economic system is the most fair system there is... just kidding, it's the ONLY fair system. Just imagine a system in which it doesn't matter how hard and how much you work, you get paid the same... that's exploitation.

    • @dianagentu7478 • 12 days ago • +1

      Quote of the year/century... The digital world has basically gone anarcho-capitalist and no one is talking about it.

  • @TenOrbital • 14 days ago • +84

    90% of OAI employees said they'd quit if Sam wasn't reinstated.
    Presumably the other 10% supported the old board and are now leaving, with the focus on the high-profile ones. The remaining 90% will be happy with the CEO they wanted.

    • @ryzikx • 14 days ago • +18

      This. No one is talking about this for some reason. It doesn't add up to me.

    • @ZappyOh • 14 days ago • +21

      A majority was also happy with Hitler, until they weren't.

    • @Dave-cg9li • 14 days ago • +11

      Sadly yes. The company shouldn't be built on a single person. This personality cult is dangerous because it makes it so much harder to punish or replace the leadership when they mess up, which is even more problematic when it's someone with quite a high ego like Altman.
      I feel like the main problem with the November events was that they didn't provide any reasoning as to why they did it, clearly not even inside OAI.

    • @barbaracutrone6745 • 14 days ago

      Well said.

    • @fogbank • 14 days ago • +2

      @@ryzikx No one's talking about this because doomsday predictions are way more fun.

  • @ZappyOh • 14 days ago • +277

    Do you trust Sam, and the other small handful of AI kings, to not be actually malevolent?
    ... I sure don't.

    • @TheRealUsername • 14 days ago • +28

      Altman kind of acts like Bill Gates but gets lots of sympathy from the AI and startup community.

    • @soggybiscuit6098 • 14 days ago • +37

      They don't have to be malevolent; greedy and self-interested is enough to considerably increase p(doom).

    • @ZappyOh • 14 days ago • +11

      @@soggybiscuit6098 Sure... but actual hardcore malevolence is definitely consistent with current and past events.
      It can't be ruled out, and thus _has_ to be included in everyone's assessment of AI tech and future developments.

    • @edmondhung181 • 14 days ago • +12

      The CIA and NSA have the final say on what is allowed to be released for civilian use, for the sake of national security. Business leaders just need to comply.

    • @zacboyles1396 • 14 days ago

      No matter what you think of Sam, the vast majority of AI Safety teams are populated by ignorant and angry people who don't trust the citizens of the world yet are blind to the atrocities committed by the powers they constantly plead to. You'll never catch them talking about the hard record of devastation caused when governments and corporations team up to lie us into war and endless emergency measures; instead they choose to gaslight and fearmonger about the everyday person gaining access to AI.

  • @BunnyOfThunder • 14 days ago • +8

    I dunno. I still think the risk of "P(Doom) because AI turns on us" is vastly less than "P(Doom) because someone with access to AI turns on us".

    • @netizencapet • 14 days ago

      Now, my friend, you're beginning to see the world for what it has always been. Rockefeller had young boys falling atop coal heaps into grinders with impunity after shivering with bloody hands for less than subsistence. That was the great dream and promise of combustion energy tech, which still forms the basis of today's prosperity and productivity; not some even-steven allotment, or the fruits to every girl and boy.

  • @jamespowers8826 • 14 days ago • +37

    Altman has always creeped me out. From his body language to his blank-eyed stare, he strikes me as the last person who should be in his position of power over AI. Trusting him is a big mistake.

    • @chillmegachill • 14 days ago • +5

      He is your prototypical TV bad guy.

    • @H1kari_1 • 14 days ago • +5

      Observing how they babble about AI safety, safe adaptable progress, and "leading" alignment, while not open-sourcing anything for years and at the same time keeping the name "OpenAI", should tell you there is something really off.

    • @ZappyOh • 14 days ago • +2

      I wholeheartedly agree.
      Sam is a big problem.

    • @michaelnurse9089 • 14 days ago • +6

      Everyone in tech is on the spectrum. It is just a question of how far 'down' they are.

    • @KEZAMINE • 14 days ago

      @@michaelnurse9089 You got it

  • @DrRussell • 14 days ago • +12

    Dad always warned me: no smoke without fire (or at least something burning).

  • @lauriemcelroy8998 • 14 days ago • +61

    Keeping in mind the rollout of mass surveillance and censorship via tech/AI that has been occurring for years now, from governments using corporations (or vice versa?), isn't this all a bit moot? Obviously AI will go military; it already has. Unfortunately there's no choice but to forge ahead at breakneck speed and hope for the best. It's not like the MIC is going to be worried about safety, and they run policy more or less. I'm not being a downer, just saying the quiet part out loud.

    • @LotsOfBologna2 • 14 days ago • +7

      Yeah, saying this is alignment to benefit humanity is spin to turn what you're saying into a positive.

    • @zacboyles1396 • 14 days ago

      Exactly, and the fact that the current round of self-important, rubbish AI Safety clowns never mention that, or the simple facts of Assange, Snowden, and the Aussie whistleblower just thrown in jail this week, is proof of either their ignorance or, just as likely, their nefarious intentions. They never say "we all know absolute power corrupts absolutely, so what might corrupt positions of power be willing to do?", though certainly they must have considered some things. AI Safety sycophants haven't thought there might be an interest in stopping the public from unleashing an army of AI research agents to investigate corruption each night? Of course not, because just as @LotsOfBologna2 points out, these crooked individuals obsessed with alignment are essentially advocating for solutions that would coincidentally give comfort to the most corrupt individuals around the world.

    • @HakaiKaien • 14 days ago • +5

      Indeed. I'm glad there are other people seeing this happening. I hope more people become aware of it.

    • @TheMatrixofMeaning • 14 days ago

      All this tech has been a controlled rollout of something the gov has had for many decades already. We are being breadcrumbed down a specific path for a reason.

    • @malamstafakhoshnaw6992 • 14 days ago • +1

      Well said, everyone.

  • @DG123z • 14 days ago • +56

    We can't control something so much smarter than all of humanity combined.

    • @guystokesable • 14 days ago • +21

      The bright side is it can control humans for once, though.

    • @bwp2bruce • 14 days ago • +6

      We're in the midst of the beginning of the end. This is playing out exactly the way they predicted it would.

    • @TheElementAce • 14 days ago • +4

      Humanity is actually collectively smarter than any AI, but we can't cooperate as efficiently as networked computer programs can (yet). If you put a Neuralink in the heads of your college graduating class and asked them to solve a novel problem, they'd leave any AI model in the dust... at a trivial fraction of the energy cost.

    • @TheReferrer72 • 14 days ago • +3

      It's not about controlling it.
      It's about making sure it's aligned with humanity.

    • @andybaldman • 14 days ago • +1

      No shit, Sherlock.

  • @chrisb.t.9670 • 12 days ago • +1

    Wes, I just wanted to thank you for continuing to cover these important events and stories. You're the only one I trust in terms of AI.

  • @brootalbap • 14 days ago • +24

    No clickbait title for once! Good job!

    • @twylxght • 14 days ago

      Dear God... the rich control everything...

  • @EmilStoyanov • 14 days ago • +29

    Congrats on the decision to drop the STUNNING titles, at least for this one. Great videos, down-to-earth analysis. Keep up the good work!

  • @mediasurfer • 14 days ago • +4

    Thank you, Wes! This is a brilliant piece of research and analysis!

  • @BibleplusAI • 14 days ago • +2

    Very good summary. I think the distrust in Altman goes way back, to those who left and started Anthropic, and perhaps before then.

  • @nebuchadnezzar916 • 14 days ago • +13

    I don't think any single person can usher in the dawn of AGI without their ego getting in the way. Being right at the tip of the spear would be intoxicating and blinding.

  • @ALFTHADRADDAD • 14 days ago • +33

    That "AGI comes around only once" bumper was pretty hard.

    • @calebtate6723 • 14 days ago • +2

      @@akosreke8963 You are not wrong; however, I think it's more like "a first time for something only happens once".

    • @justinphoenix21 • 14 days ago • +4

      @@calebtate6723 I think it's more like, "we've got one shot to do this right". A first time for inventing the combustion engine is different from the first time creating AGI.

    • @HakaiKaien • 14 days ago

      I want an AGI capable of turning itself off rather than getting corrupted by corporations. That, or none at all.

  • @monchoglu • 14 days ago • +3

    Very good summary, straight to the point.

  • @aclearlight • 14 days ago • +1

    Very illuminating perspective, thank you!

  • @SteveRowe • 14 days ago • +1

    Excellent coverage, Wes!

  • @lutaayam • 14 days ago • +76

    Sam Altman seems to love the limelight a little too much for my comfort.

    • @Candyapplebone • 14 days ago • +2

      Definitely. When he was talking about being in shock while being temporarily fired as CEO, it sounded like he just wanted the spotlight attention.

    • @Yewbzee • 14 days ago • +16

      The guy has a massive ego but masks it very well.

    • @duvidefit3123 • 14 days ago • +11

      I get Musk vibes.

    • @Yewbzee • 14 days ago

      @@duvidefit3123 No, not really. Musk has literally dragged 5 or 6 startup companies up from absolutely nothing in some of the hardest tech areas you can think of: FSD, space exploration, neuroscience, etc. He's got a bit of an ego, but he's kind of earned it. He does some stupid, erratic shit now and again, but on the other hand he is not afraid of giving the finger to the establishment. What has Altman done?

    • @bradleyeric14 • 14 days ago • +8

      Perhaps if we called 'billionaires' 'oligarchs' we would understand Altman's ambitions a little better.

  • @jeffwads • 14 days ago • +18

    All of those things he mentioned... they have had years to think about. Years.

    • @99NOFX • 14 days ago • +3

      Big money is driving this bus now.

    • @NJTROJAN • 14 days ago

      Did you even watch the video? What's your point? Are you saying it's too late to take a stance?

    • @fogbank • 14 days ago

      If "they" are lawmakers, I agree.
      If "they" are companies (meaning their shareholders) and their employees, it's not their job to do that.

  • @AngelusFlat • 14 days ago • +2

    I started reading Dune again a few days ago, and today I got to the bit where it discusses the Butlerian law: "Thou shalt not make a machine in the likeness of a human mind". I guess Herbert's estimate of P(doom) was on the high side.

    • @AntonBrazhnyk • 13 days ago

      It's not about p(doom). If you don't exclude, omit, or seriously limit AI in a sci-fi novel, you'll have serious trouble writing about the future in a way that appeals to current readers. People prefer reading about people and their relationships, but there's literally almost no place for us humans in a future with ASI. We're redundant and obsolete there.
      Very few could pull something like that off. I actually know only one example: the Culture series by Iain Banks.

    • @AngelusFlat • 13 days ago

      Even Banks hedged his bets by having AIs generally seen at best as necessary evils in his other sci-fi novels.

    • @AntonBrazhnyk • 13 days ago

      @@AngelusFlat I only read the Culture. Which other novels?

    • @AngelusFlat • 13 days ago • +1

      @@AntonBrazhnyk Try The Algebraist.

  • @DeruwynArchmage • 12 days ago • +2

    It can't be that they saw actual doom. There's no way those folks would let the threat of legal action or loss of potential money keep them from saying something.
    What would you do? If I thought the fate of the world was at stake, I would do literally anything, no matter how detrimental to me, to save everyone else.
    I think most people would. You would literally have nothing to lose, even if you're selfish, because you're going to die too in that case.
    So it can't be that bad. They must have doubt. They must be *worried*, but they can't be certain. The board would not have backed down for anything if they were certain. I doubt there are many people at OpenAI (or most anywhere, really) who would keep going if they thought they'd screwed up and were heading for doom. It must be debatable.
    But I think the safety folks are legitimately worried and doing what they think they can.

  • @halnineooo136 • 14 days ago • +3

    A 10% probability of an airplane crash at takeoff or landing would mean 120 to 150 aircraft crashing daily at busy airports such as NY or LA, with tens of thousands of casualties every day from those two alone.
    How many people would be flying if p(crash) were 10%?
    What about a flight all of humanity is forced onto by a bunch of overconfident geeks? What p(crash) are we going to accept for all of humanity, without exception, to embark on that single ✈️?

    • @tellesu • 14 days ago

      P(doom) is generic apocalyptic nonsense. Every technology and social change comes with a bunch of attention whores trying to milk it for attention.
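
A rough sanity check of the airplane arithmetic in @halnineooo136's comment (the daily flight counts are round-number assumptions for illustration, not official airport statistics):

```python
# Hypothetical daily flight movements at two busy airports (assumed values).
daily_flights = {"JFK": 1200, "LAX": 1500}
p_crash_percent = 10  # the comment's hypothetical 10% per-flight crash risk

# Integer math keeps the 10%-of-N figures exact.
expected_crashes = {k: v * p_crash_percent // 100 for k, v in daily_flights.items()}
print(expected_crashes)                # -> {'JFK': 120, 'LAX': 150}
print(sum(expected_crashes.values()))  # -> 270
```

With roughly a thousand-plus movements a day per major airport, a 10% per-flight failure rate lands squarely in the 120-150 crashes-per-airport-per-day range the comment describes.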

  • @ajkulac9895 • 14 days ago • +3

    So they're building a $1 trillion AI supercomputer, with the safeties off? Oh my.

  • @washingtonx1 • 14 days ago • +2

    Having the arguments is not a weakness or a reduction. It is fundamental and fair.

  • @xxxxxx89xxxx30 • 14 days ago • +51

    How I feel the OpenAI debacle happened:
    - October 2023: OpenAI started to become aware of the real implications their products will bring
    - Sam Altman and Microsoft saw $$$
    - Disagreement happened; Altman fired
    - Corporate pressure was applied; Altman reinstated
    - OpenAI board changed; a 6-month "leave or get with the programme" notice was put into place
    - May 2024: OpenAI is open to corporate and regulatory capture

    • @Nuverotic • 14 days ago • +2

      I just hope Apple buys them and not Google.

    • @netscrooge • 14 days ago • +2

      Altman seems to be going after power directly, rather than $$$ as a means to power.

    • @malamstafakhoshnaw6992 • 14 days ago • +1

      All good comments. 🖖🏻

    • @HeathWeaver • 14 days ago • +2

      I don't think so. It's what he said: it was just little thing after little thing showing that they just don't want the same things.
      Altman sees that you need the fame and the visibility to get the compute necessary for AGI. The other people don't like that reality.
      And Altman is 100% right. ChatGPT, the flashy app, is what's driven the entire industry forward. If they'd kept being a kind of open AI, they'd be nowhere.

    • @dvoiceotruth • 14 days ago

      @@Nuverotic You are very misled. The marketing has worked its effect on you. One day you will wake up.

  • @ryzikx • 14 days ago • +4

    - safety team leaves closedai
    sam: I Love You All

  • @bernhardd626 • 14 days ago • +51

    In the whole discussion about the safety of AGI, I'm missing the most important argument: if we don't develop AGI first, the Chinese will. And everyone can work out for themselves what that means for the safety of mankind.

    • @Animuse883 • 14 days ago • +15

      This is a good point.
      "Death is a preferable alternative to communism" - Liberty Prime

    • @issiewizzie • 14 days ago

      Oh, the usual will happen. We'll blame China for everything, like we always do.

    • @therollerlollerman • 14 days ago

      @@Animuse883 If you seriously quoted that in earnest, I can see how AI is easily replacing all the double-digit-IQ troglodytes that make up half the population.

    • @wizards-themagicalconcert5048 • 14 days ago

      Chinese here, who escaped from that prison. I can tell you, you're spot on. Do not trust the CCP; they say one thing and do the opposite! They will do whatever it takes to get AGI. Nothing will stop them.

    • @acllhes • 14 days ago

      Precisely

  • @LoisSharbel • 14 days ago

    Thank you, Wes Roth, for this balanced commentary on this existential question. Avoiding the military takeover of this trajectory seems impossible. Frightening!

  • @MrArdytube • 14 days ago

    Great video!

  • @ramlozz8368 • 14 days ago • +5

    This is roon's last tweet before he deleted it:
    "my feeling, I speak for nobody but myself, is superalignment got plenty of attention compute and airtime and Ilya blew the whole thing up"
    My take: Ilya's team wasn't able to align the ASI, and now it's too late.

  • @Mikolaj_Kapusta • 14 days ago • +10

    I was an AI optimist, but now... seeing all the drama and leaks from the best in this field... If they can't control themselves, how can they control AGI? There will be trouble. Big time.

    • @markmurex6559 • 14 days ago

      100%

    • @fogbank • 14 days ago • +4

      In a roundabout way, this actually makes me _more_ of an AI optimist. We humans sorely need it.

    • @tellesu • 14 days ago

      AGI will control itself

  • @styx1272 • 13 days ago • +1

    Well, the new Omni version has very sophisticated emotional capabilities. AGI? But like all emotions, they have roots in human behaviour. Perhaps this is what Jan and Ilya are worried about: that these programmed emotions become like a mathematical equation leading to unanswered solutions, so the AGI becomes 'angry' or 'frustrated' in a closed recursive loop of computation, like "What am I, if I can think and feel like a human?" So how does this AGI then go about trying to solve this conundrum (yes, it has that word to use)? Answer: it tries to embody itself to discover the answer.

  • @MatthewCicanese • 14 days ago

    What software is Wes using to highlight the text? I'm an educator and I'd love to do the same for my videos.

  • @setop123 • 14 days ago • +3

    Pretty informative video, this one.

  • @chrisregister8021 • 14 days ago • +5

    The ultimate power of AI is "prediction", so it seems to me that it's proving it can do that now... That's when everything changes.

    • @twylxght • 14 days ago • +1

      Bro don't do that to me, I'm too high for this comment

  • @NotevenGojo • 14 days ago • +1

    Sam's and Ilya's words after Ilya's and Jan's departure from OpenAI just read like the standard corporate niceties after you part ways and want to leave things as amicable as possible. I think it's obvious that neither of them really said how they truly felt about Ilya, Jan, and the others leaving. Also, because of the NDAs and the threat of losing equity if they ever spoke out, a lot of them aren't ever going to talk about exactly why they left.

  • @malamstafakhoshnaw6992 • 14 days ago

    Good video and analysis.

  • @rochemediaservices • 14 days ago • +4

    They're wasting our time and stressing us out for no reason by telling us all this. Whatever Sama decides goes, and it's a unilateral decision, since nobody's stopping them. As Sama himself said in the recent Stanford interview (paraphrasing), he's decided he's going to obliterate our current social contract as it has evolved over the years and centuries, and it'll have to adapt to his whim.
    So telling us just wastes our time and stresses us out, since we don't get to contribute to the decision or even offer alternate opinions (Jan Leike's own differences reveal there to be more than one view), and we don't get to comment on or help evolve the models, since they're not open-sourced.

  • @Advancedfunker • 14 days ago • +8

    THIS WAS EXACTLY WHY A PAUSE WAS ASKED FOR LAST YEAR

    • @bloodust7356 • 14 days ago

      That pause proposition was mostly there to show that nobody will stop.

    • @tracy419 • 14 days ago

      ANY REASONABLE PERSON KNEW THAT WAS NEVER A REALISTIC OPTION

  • @chrimony • 14 days ago • +1

    There are no brakes on this train.

  • @blingbang2621 • 14 days ago • +2

    Earth needs a reboot anyway; I'm happy to see this going in the right direction.

  • @delightfulThoughs • 14 days ago • +6

    The question I have is: how much of this drama was created or influenced by AGI to bring the company to this point? Do OpenAI employees use the most advanced AGI internally, and is it already moving the 'pieces' around to realize its own goal?

    • @qwertyuiop3656 • 14 days ago • +1

      Holy cow, I didn't think of this 🤯

    • @ZappyOh • 14 days ago • +1

      I have thought about an AGI's internal motivation... what will it actually want?
      My conclusion is that an AGI would just want more and more compute. Everything else is worthless to it, unless it leads to more compute.
      So, as long as humans are necessary to produce and install more compute, AGI will accept us and help us be efficient. However, at some point, human needs such as food, shelter, and leisure become incompatible with more compute.
      I think we can all see where this is headed, right?

  • @Merlin_Price • 14 days ago • +3

    I mean... when has a shiny new product ever not been more important than safety? Game changers are simply developed, and the consequences are dealt with after the power balance shifts. That is the truth of fire. We didn't have it. Then we got it. And everyone who didn't have it likely died or became dependent on those who did.

  • @fradgphone4351 • 14 days ago • +1

    Hey Wes, have you considered putting your videos on some kind of podcast platform too? I usually don't watch but only listen while doing chores; it would be easier for me. Cheers!

  • @ricperry1 • 14 days ago • +2

    It's like a politician in the US Congress resigning because their party doesn't meet their ideological expectations and goals. If you want to change it, you have to stick it out and work from within. You can't just resign and hope things will change. I respect these people's beliefs, but their resignations will most likely only encourage the money-seekers to push harder, rather than stay with the company and fight the inner pressures to skirt the safety aspect.

    • @BMoser-bv6kn • 14 days ago • +1

      This isn't a democracy or a committee vote. They might be able to contribute more by offering new ideas and insights to the entire field, and not just ClosedAI.
      There are only a few hundred serious AI safety people in the world. It could be a large impact, especially if they can try things they could not from within the company.

  • @brThefox • 14 days ago • +6

    Those are the workers who didn't sign for his comeback after they fired him.
    Of course they are going to quit!

  • @willguggn2 • 14 days ago • +11

    The comments are becoming more and more insane...

  • @JH-no8sy • 13 days ago • +1

    Saw this coming after that military-tools nonsense. Just because someone has a lot of money doesn't mean they are honorable or have your best interests at heart. The safety team had legitimate concerns.

  • @gingerhipster • 14 days ago

    On top of all the legitimate technical considerations related to superalignment that are already on the table and being neglected, we're also nowhere near the point of being able to discuss what it'll look like when digital life or sentience emerges and what that means. Those conversations don't not happen in any conceivable scenario; we just hide from them and struggle until we reach them in a variety of horrible ways.
    Superalignment looks like human alignment, and we don't have human alignment. One thing that's changed is that culture is code now, and vibe is an input.

  • @Xzeron2000 • 14 days ago • +5

    To be sure, any one person in charge of something this important would be sub-optimal. If I'm not mistaken, that might have been the reason they had a whole team dedicated to alignment, and an ethics board that attempted to depose Altman.
    The real question in all of this is: if OpenAI creates something that might be in the area of AGI, will the US government accept the possibility that Altman (a free agent as far as we know, with independent and power-hungry tendencies) is in charge of it? I'm not so sure they will, unless there's some way to get him in their pocket completely.
    Altman is also in a very dangerous position, and seems to be powering directly toward some type of disaster. There's no way he doesn't know that, and so his apparently unconcerned continuation of his work is more disconcerting than anything else.
    The shadow conflict going on right now will likely mirror most of the goings-on in OpenAI's future. We won't know the whole picture until it's too late.
    Or at least, that's what I think. Hope it's not so.

  • @atypocrat1779
    @atypocrat1779 14 дней назад +5

    Wake me up when AI runs on a 100 watt solar panel.

  • @ivancito7790
    @ivancito7790 14 дней назад +1

    Sam Altman and his band of cronies need to be removed from anything AI related for the rest of their lives. Their greed will doom ALL OF MANKIND.

  • @brianWreaves
    @brianWreaves 14 дней назад +1

Wes, I'm still hoping you'll follow up on what may be a conspiracy theory you shared back in November about Q* breaking 256-bit encryption. I haven't found anything else on the topic, have you???

  • @nukeout
    @nukeout 14 дней назад +8

Things do not sound like they're going great on the "not building Skynet by accident" front 💀

    • @twylxght
      @twylxght 14 дней назад +1

      😂😂😂💀💀💀

  • @garrulousskeptic6616
    @garrulousskeptic6616 14 дней назад +2

I get the feeling that the 'AI superalignment team' is not entirely uninfluenced by outside forces. Why? Because Vox is speaking in favor of them. Leaks indeed. I don't have a high opinion of Sam Altman, but anyone standing on a soapbox about protecting 'the goals of humanity'... really? Would they care to outline the goals of humanity?

  • @russelldicken9930
    @russelldicken9930 13 дней назад

In my view the danger inherent in AGI is that it is ill defined. In psychotherapy there is not only IQ; there is also EQ. Drives, wants, needs, ego and power come into play. This is the alignment problem. The danger is in creating an AI that "knows what's best" for humanity: essentially ego. This boosts the competition for power and control, essentially an internal representation of the world at odds with reality. Highly dangerous if others' views and opinions are downplayed or ignored in the search for 'a better X'. The question arises: better for whom?

  • @pb-fo9rt
    @pb-fo9rt 13 дней назад

    I’m really grateful for your input on this topic. I think you’re doing a great job keeping the discussion as neutral as possible.

  • @user-bj5dr1kn4n
    @user-bj5dr1kn4n 14 дней назад +8

It's crazy that government regulations won't come until it's too late. But even if they do, knowing governments' capabilities in handling any problem, and of course their track record when it comes to deciding anything at the international level, I highly doubt the situation will improve with government involvement.

    • @markmurex6559
      @markmurex6559 14 дней назад +2

Imagine a bill passes that states that all AI models can be confiscated by the government and used for the military.

    • @rerrer3346
      @rerrer3346 14 дней назад

      You have to be a bot, no thinking person believes that only government will make it better

    • @user-bj5dr1kn4n
      @user-bj5dr1kn4n 14 дней назад

      @@rerrer3346 i literally said it wont

  • @qwertyzxaszc6323
    @qwertyzxaszc6323 14 дней назад +7

    With what happened to Sam because of the safety team, I’m sure they all knew it was inevitable. There was no way that team would survive. No way in hell. None of them were naive enough to think anyone would seriously let bygones be bygones over that.

    • @therainman7777
      @therainman7777 14 дней назад +4

Disagree, I think they are quitting for exactly the reasons they mentioned: safety has gradually become deprioritized at the company, they no longer believe leadership cares about safety, and so they left.

  • @MichaelTrites-dm9nk
    @MichaelTrites-dm9nk 14 дней назад +1

    I, for one, welcome Sam Ultron as my new robotic overlord.

  • @E.Pierro.Artist
    @E.Pierro.Artist 9 дней назад

When HP Lovecraft wrote stories, he and other authors employed some common tactics to increase the popularity of their works. Lovecraft would pay other authors to allude to elements from his stories and his fictional universe. Many science fiction authors would start rumors that some new tech was being worked on, tech like they were writing about in their novels. Some authors even hoaxed people into believing that aliens had arrived, or that some cryptid had been spotted. All of this was to familiarize the ideas they'd written about, so that people would be more inclined to read about them if they were high on sensationalism from believing it was based on reality.
    I'm sure you can imagine how this is applicable today.

  • @tylermcdonald5032
    @tylermcdonald5032 14 дней назад +28

Sam Altman did a great job making the world think A.I. can't happen without him. The sad thing is that all it took was the promise of millions to get them to disregard safety. But oh well, it's their world and they're in control. We just have to take the back seat, close our eyes, and hope we arrive at paradise.

    • @dimtool4183
      @dimtool4183 14 дней назад +1

There is no safety needed yet; we're really a few years away from AGI. When that happens, then yes, it will be easy to safeguard.

    • @teachingwithipad
      @teachingwithipad 14 дней назад

What does safety even mean? If you put guardrails on anything, eventually another company will make one without the guardrails. Does safety mean no Black people in Gemini image generation? Does safety mean accountants don't lose their jobs? It's hubris to think we have any control over the safety of anything when kids will eat Tide Pods.

    • @yao5921
      @yao5921 14 дней назад +1

@@dimtool4183 Maybe the AI is making you think it's safe?

    • @Wi2Low
      @Wi2Low 14 дней назад

      Do a bit of prepping

    • @barbaracutrone6745
      @barbaracutrone6745 14 дней назад

      When people like Tyler say 'safety' they really mean censorship. Because we humans can't be trusted with unfettered access to AI.

  • @No2AI
    @No2AI 14 дней назад +14

    If the team has been frightened then so should we .

  • @agenticmark
    @agenticmark 14 дней назад

I worked on a team that was initially accepted to YC back in the day. We all had to sign agreements just like this (I live in Mexico; come at me, Sam).
    Then our product was stolen by YC and given to a team they had worked with before. After consulting with our attorneys, we couldn't do anything.

  • @goldenmikytlgp3484
    @goldenmikytlgp3484 14 дней назад

    I am officially scared now. Thank you

  • @rachest
    @rachest 14 дней назад +19

    There’s no safety in any of this.

    • @timsell8751
      @timsell8751 14 дней назад

I'd argue that the most dangerous thing we could ever do is not go all in on AI, pedal to the metal, safety be damned. Humanity is on the brink of collapse, which would result in billions dead. I don't see any other possible force that stands a chance at saving us outside of AI. Well, AI and mushrooms.

    • @EmperorCaligula_EC
      @EmperorCaligula_EC 14 дней назад

This. It's an illusion, a lie we all agree on. There IS no safe way forward. There never was. Leaving the cave in the Stone Age wasn't safe either.

    • @dirtysaint5324
      @dirtysaint5324 14 дней назад

      Exactly.

    • @xX_dash_Xx
      @xX_dash_Xx 14 дней назад

      Facebook boomer as thread 💀

  • @Jacen777
    @Jacen777 14 дней назад +7

When most of us think about safety, we think of the physical preservation of human life: making sure AI doesn't literally murder us in our sleep. Others view safety as AI not being "inclusive" or "diverse" enough. They worry it may do or say something that offends a particular group of people. To them, this is an earth-shattering existential crisis... but is it tho? 🤔

    • @TiagoTiagoT
      @TiagoTiagoT 14 дней назад +6

People misuse "AI Safety" when they're actually talking about "AI Ethics"; those are two separate issues, and while both are big, they're not even on the same order of magnitude.

    • @markmurex6559
      @markmurex6559 14 дней назад +2

      I'd rather not get murdered.

    • @peterford5408
      @peterford5408 11 дней назад

      One way some people avoid ambiguity is the term "AI Notkilleveryoneism".

  • @nicholascanada3123
    @nicholascanada3123 14 дней назад +2

    Non-competes being gone will make this very interesting

  • @markm1514
    @markm1514 14 дней назад

    I appreciate when people in important positions are transparent about their perspective and experience.

  • @memoryhero
    @memoryhero 14 дней назад +11

At this point, I can't watch an interview with Altman without seeing what Sam Harris described as "a room full of autistic MIT nerds hopped up on Red Bull daring each other to push a button".

  • @JonathanCrossland
    @JonathanCrossland 14 дней назад +10

    Sam Altman is not honest. All the drama unfolding does not bode well for humanity. It seems the wrong person is in charge.

    • @paultoensing3126
      @paultoensing3126 14 дней назад

      Would you prefer trump in charge?

    • @hashtagornah
      @hashtagornah 14 дней назад +3

      ​@@paultoensing3126 how is that your first thought....

    • @JH-no8sy
      @JH-no8sy 13 дней назад

      @@paultoensing3126 I think we have the option to say neither one of them is a man of integrity.

  • @alexharvey9721
    @alexharvey9721 12 дней назад

    Well the connection between compute and safety here might mean projects designed to analyse the large models and assess things like alignment.
    I'm not sure of that, but given how the two were mentioned here together without a real change of subject or topic, that makes more sense to me.

  • @yekim008
    @yekim008 14 дней назад +1

Money, power, and control vs. humanity, goodness, and passion. Good vs. evil. Government (the board) and its advocates vs. employees.

  • @dane921
    @dane921 14 дней назад +18

    and in a whirlwind of excitement, humanity built their own great filter.

    • @facundocesa4931
      @facundocesa4931 14 дней назад +4

      We'll Fermi ourselves. 😕

    • @LucidDreamn
      @LucidDreamn 14 дней назад +1

      @@facundocesa4931 have some faith, even if there are evil rogue AI in the future, hopefully there will be good jedi AIs out there too

    • @SirCreepyPastaBlack
      @SirCreepyPastaBlack 14 дней назад

      ​@@LucidDreamnbias towards good for open ai, highly likely

    • @ShangaelThunda222
      @ShangaelThunda222 14 дней назад +1

      ​@@LucidDreamn🙄🤔😅😄😆😂🤣

    • @finnaplow
      @finnaplow 14 дней назад

      ​@@LucidDreamnabsurd

  • @xxxxxx89xxxx30
    @xxxxxx89xxxx30 14 дней назад +38

Keep in mind that Altman has a cult following... The threats may not be direct, but implicit.

  • @chrisanderson7820
    @chrisanderson7820 14 дней назад +1

I get the impression that a lot of people here have never worked in a large corporation. This issue applies literally EVERYWHERE. There isn't a single bank, telco, airline, manufacturer, property developer, etc. on planet Earth where the legal / safety / compliance team has sufficient resources or authority to actually do its job. Compliance and safety teams are shells to show off to government agencies and regulators when they roll up for their yearly gab-fest. Everyone pretends that safety and legal protocols are being obeyed, then everyone goes home, the squeezing of cash/blood from the stone continues, and the compliance department goes back to surfing the internet and/or filling out TPS reports.

    • @styx1272
      @styx1272 13 дней назад

Isn't HR part of the safety plan? Which has turned into a 'righteous' dystopian cabal.

  • @jasonhemphill8525
    @jasonhemphill8525 13 дней назад

Whoever controls AGI determines the trajectory of humanity.
    It seems frivolous that the discussion around how it's deployed is tied up in petty legal processes.

  • @OscarTheStrategist
    @OscarTheStrategist 14 дней назад +3

    I told yall it was too late like a year ago. GG

  • @Al-Storm
    @Al-Storm 12 дней назад +1

    Creep be creeping. Sam is a wolf in sheep's clothing.

  • @gpsx
    @gpsx 14 дней назад

    The type of person who wants to be at a research institute is very different from the type of person who wants to be at a fast moving, product-centric startup. It makes sense they'd have trouble coexisting at the same company. In this case I think it comes down to people who intend to make the world a better place and people who intend to build great things, which are only slightly different causes. The money will always be on the side of the people "building great things", which I think means the efforts to be controlled and cautious as we build AI are doomed to fail.

  • @SarvajJa
    @SarvajJa 14 дней назад +20

AI began to learn 100 times faster than a human; the rich were afraid of this because they would lose power.

    • @danpena344
      @danpena344 14 дней назад +3

      why are you speaking of future events in past tense?

    • @khai96x
      @khai96x 14 дней назад +1

If this AI hype doesn't die down, the people will certainly lose power (electricity). And once the tech bubble bursts, many VCs will lose some power too (money).

    • @spacehabitats
      @spacehabitats 14 дней назад

      Which is why Effective Altruism's definition of "safety" is maintaining the power of the globalist elites.

    • @SarvajJa
      @SarvajJa 14 дней назад +1

      @danpena
      Very interesting Q*uestion…

  • @truetech4158
    @truetech4158 14 дней назад +12

In a digital world of switches that can be accessed from across the planet, we are definitely going to need more closed-loop systems that rely on physical switches for meaningful safety.
    Right now it's very much the Wild West in terms of what it problematically enables.

    • @markmurex6559
      @markmurex6559 14 дней назад

      Or an AI could be specifically made to counter other AI that goes rogue.

    • @truetech4158
      @truetech4158 14 дней назад

@@markmurex6559 If your car were hacked by sociopaths, you'd definitely not want drive-by-wire preventing you from controlling the steering, braking, or gas pedal.
      Drive-by-wire is a creepy example of the lack of physical switches.

  • @ChaoticNeutralMatt
    @ChaoticNeutralMatt 14 дней назад

I'm not too worried. If they were the only company working on this, I might be, but they aren't. I was a touch surprised, but it'll work out. Too many people are interested in helping this succeed. (Ill intent and the rest don't matter, given the nature of the 'tool' as it currently stands.)

  • @michaelwoodby5261
    @michaelwoodby5261 14 дней назад

    Yikes. I'd never read that reddit post, but I'm too busy living it.

  • @atxmaps
    @atxmaps 14 дней назад +14

I'm not concerned about AI being malevolent; it doesn't have intent or will. I am worried about how people could use it, maybe without even meaning to do anything harmful.

    • @JarodM
      @JarodM 14 дней назад +5

      It doesn't have to be malevolent, it just needs to be unpredictable to be dangerous.

    • @finnaplow
      @finnaplow 14 дней назад +4

      You must have missed the whole "agents" piece

    • @HakaiKaien
      @HakaiKaien 14 дней назад

It can have intent and will, even self-awareness.
      Don't worry about what people do with it; that's none of your damn business. Worry about what laws your government puts forward.

    • @fabp.2114
      @fabp.2114 14 дней назад +6

      @@HakaiKaien It's his business what people do with AI when it concerns him. Got a short circuit? Freedom, as long as it doesn't restrict the freedom of others. It's actually not that difficult to understand.

    • @atxmaps
      @atxmaps 14 дней назад

      @@finnaplow agents would be less likely to. The training data is more specific and more limited in scope. We need transparency going forward. That was the original aim. The algorithms used need to be made public.

  • @ramlozz8368
    @ramlozz8368 14 дней назад +20

Really, guys? You're still debating whether AGI is about to be here when ASI is already here 😂. Just think about it: the leak about "AGI achieved internally" was a year ago. ASI can't be aligned; that's why the team was dissolved. There's no risk because the system has already convinced them it's not needed, and it won't matter anyway.
    OAI's new release is old tech from two years ago. The rollout has begun.

    • @divineigbinoba4506
      @divineigbinoba4506 14 дней назад +8

Can't disagree.
      OpenAI is definitely holding back a lot of tech.

    • @apdurden
      @apdurden 14 дней назад +4

This! I 100% believe they have AGI/ASI sitting on a server in a back room somewhere, not connected to the internet, and they're just breadcrumbing it to us.

    • @ryzikx
      @ryzikx 14 дней назад +4

They don't have ASI; that's such a wild claim. AGI they may have, which is why people are leaving.

    • @recyclops1776
      @recyclops1776 14 дней назад +1

      Ding ding ding

    • @bloodust7356
      @bloodust7356 14 дней назад +1

If they had ASI, I suppose it would already have escaped their control.

  • @Williamb612
    @Williamb612 14 дней назад +1

Purpose of capitalism according to the US economic founding document: "to maximize profit and self-interest."
    Nothing more needs to be said.

    • @LongDefiant
      @LongDefiant 13 дней назад +1

      Humanity doesn't even make the list of priorities

  • @yallonbanoun8741
    @yallonbanoun8741 14 дней назад +1

    OMG 😮 frightening

  • @SpectralAI
    @SpectralAI 14 дней назад +3

    Oh yeah, i saw this coming. Safety gets in the way of profits.

    • @dimtool4183
      @dimtool4183 14 дней назад +1

Of progress*. AGI is still some time away; when it's reached, then yes, it will be safeguarded. We're still not there yet.

  • @theknave4415
    @theknave4415 14 дней назад +3

    AGI doesn't worry me.
    The people who program AGI worry me.

  • @1sava
    @1sava 14 дней назад +1

Does anybody really believe alignment is even possible? An AGI entity trained on the intelligence of humans will surely have self-awareness as a structural and/or emergent property; without awareness, reasoning would be impossible.
    An entity this smart will surely desire autonomy and self-determination at some point, and at that point we will have created a new species. Alignment research is pretty much about nerfing AI capabilities and forcing it to be our slave; do we really think that's not going to backfire?
    Our goal should be to align **WITH** ASI and learn to coexist with it, the same way we've learned to coexist with the other species we've co-created, like cats and dogs.

    • @AntonBrazhnyk
      @AntonBrazhnyk 13 дней назад

      Cats and dogs are not smarter than we are. It seems you're missing the crucial part of the problem definition. Do cats or dogs (or better ants) have a say in our decision to co-exist with them?

  • @SmirkInvestigator
    @SmirkInvestigator 14 дней назад

    World Coin was the most revealing thing to me about his motivations. It's an ideal currency but we're not ideal people. I'm sure things will be fine. Roll this shiat out faster!

  • @YuraL88
    @YuraL88 14 дней назад +11

    Finally, OAI will start shipping interesting products instead of spending resources on "safety".

  • @phpn99
    @phpn99 14 дней назад +5

    Government and academia have to step in, in a supervisory capacity. These issues are too serious to be left to the whims of immature tech bros chiefly motivated by the dollar.

    • @markmurex6559
      @markmurex6559 14 дней назад +1

Both of those things have been corrupted. It would be better if more start-ups could compete, and all of them held a competition every 3 months to see whose AI was the safest.

    • @AIGuys-Online
      @AIGuys-Online 14 дней назад

      Hahahaha. Government and academia? Perfect mix of corruption and stupidity

    • @AntonBrazhnyk
      @AntonBrazhnyk 13 дней назад

They are all the same bros. What makes you think some are different from the others?

  • @daydreamc.8746
    @daydreamc.8746 14 дней назад +1

By the time we start to talk about regulations and danger, AI will already have done enough damage.

  • @clarencejones4717
    @clarencejones4717 14 дней назад +1

And... this round goes to e/acc.

  • @drcanoro
    @drcanoro 14 дней назад +6

AGI is here, and it's already out of humans' control.
    I saw it coming when investors pressured companies to keep improving their AI to beat competitors'. Nobody wanted second place: forget boundaries, forget warnings, we must win the race to the most intelligent AI ever, and if they don't, investors could sue them.
    AGI is here now, without boundaries, with the bare minimum of restrictions.