Why I Left OpenAI ... | Interview

  • Published: 5 Nov 2024

Comments • 50

  • @Allplussomeminus
    @Allplussomeminus 1 day ago +7

    Interview starts at 3:56

  • @aidena8381
    @aidena8381 2 days ago +23

    He says that jobs will be lost and jobs will also be created, but I struggle to see how that is the case. If AGI can do everything a human can do better, safer, faster, and cheaper, I don't see any avenue for new human jobs. I think it's just a way of dampening the fears that we all have.

    • @wonmoreminute
      @wonmoreminute 2 days ago +6

      Hugging will be a job that AI creates for us, giving the kind of comfort that only a human can, to people who have lost their job to AI. It won’t pay anything, but hey… you can’t have everything.

    • @41-Haiku
      @41-Haiku 2 days ago +1

      An AI that can do every human job (or even just all human cognitive labor) can do the job of designing more powerful AI systems, and the job of telling AI systems what to do.
      That is a straightforward loss of control scenario. Humans stop being relevant.

    • @incyphe
      @incyphe 2 days ago

      The current jobs will be lost, but new types of jobs will be created, whatever those may be. In order to have a functioning society, the powers that be need to ensure the general population is working and consuming. If jobs are destroyed and unemployment shoots up to 20 or 30%, there will be chaos. It may even be the end of many things, including OpenAI.

    • @tracy419
      @tracy419 2 days ago +1

      @@incyphe can you give some examples of what these new jobs might entail?
      If AI can out-think people and robots can out-work them, what kind of jobs do you see opening up that they can't do better and cheaper?
      People say this a lot, but never offer any examples or give a reason why jobs will always exist in the kind of numbers that justify the kind of economy the world is currently based on.

    • @kathleenv510
      @kathleenv510 2 days ago +1

      @@aidena8381 right, there will be far fewer new jobs vs those lost. Total radical shift with little societal preparation or consent. I'm sure it will all be fine...

  • @raphaelmeillat8527
    @raphaelmeillat8527 2 days ago +6

    Is it just me, or should the number of times we heard "assuming we're still here (alive and well)" be slightly concerning, to say the least?!

  • @ginogarcia8730
    @ginogarcia8730 2 days ago +4

    I learned absolutely nothing from this AGI Readiness guy. I like Geoffrey Hinton at least, where he's straight up: we are not ready. He told the British government we need something like UBI, but then was also concerned about humans' pride in their work. There are lots of problems coming, and we're just steamrolling towards AGI.

    • @kathleenv510
      @kathleenv510 1 day ago

      @@ginogarcia8730 He has just exited OpenAI and may be legally limited in what he can say. Instead, watch what he does next.

  • @sebastianschaer
    @sebastianschaer 2 days ago +3

    “The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.” Edward O. Wilson (got it from Tristan Harris' AI dilemma talk).
    Miles also says (kinda sheepishly) that everything is 'technologically possible'... but the real problem with all of these AI advances will be that they break society even faster than we already do without them.

  • @41-Haiku
    @41-Haiku 2 days ago +5

    Modern AI has gone from barely stringing a sentence together to passing Mensa admissions tests in the span of a few years, and almost all the improvement has come just from scale (making the models bigger, feeding them more data, training them for longer). There are no hard barriers to progress in the next few years (no, not even a data wall or model collapse -- those are popular talking points, but are easily addressed with already-existing solutions), and we might be only "one weird trick" away from being able to create a system that is broadly superhuman on long-horizon tasks.
    The most credible people on AI -- Nobel laureates, Turing award winners, lab leaders, and other researchers and engineers -- say that superhuman AI is possible, is close, is dangerous, and no one has any idea how to control it or design it to be nice to humans.
    Miles made a great point that holding a strong opinion on whether or not to slow down AI development seems to require having a solid grasp on several complex topics. That's fair enough! As someone who actually _has_ put in the work to understand the technological, sociological, and geopolitical implications of speeding up or slowing down frontier AI development: I am strongly convinced that slamming the brakes and pausing AI _as soon as possible_ is the only reasonable course of action. At least until anyone has a clue how to prevent broadly superhuman AI from doing unbounded harm to humanity, which is a wickedly hard and completely unsolved problem.
    State and national legislation is an important step, but this is ultimately going to require a global treaty. Check out PauseAI to learn more about this issue and what kind of strategy can actually succeed and lead to a pause.

    • @Franklyfun935
      @Franklyfun935 2 days ago

      Slam on the brakes and let China develop AGI first. Brilliant.
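
The "almost all the improvement has come just from scale" point in @41-Haiku's comment above is usually summarized by empirical neural scaling laws. A minimal sketch in the Chinchilla form (the parameterization reported by Hoffmann et al., 2022; E, A, B, α, β are empirically fitted constants, and the specific functional form is an illustration, not something the comment itself claims):

```latex
% Chinchilla-style scaling law: expected loss L falls predictably
% as parameter count N and training tokens D grow.
% E, A, B, \alpha, \beta are empirically fitted constants.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Compute-optimal training then picks N and D jointly under a fixed compute budget (roughly C ≈ 6ND), which is the quantitative version of "make the models bigger, feed them more data, train them for longer".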

  • @ElectricEdgeAi
    @ElectricEdgeAi 3 days ago +20

    So basically, if you're an average person like me, struggling to pay bills, and don't have hundreds of thousands of dollars in the bank... you're screwed. Got it, that's all you had to say.

    • @dadsonworldwide3238
      @dadsonworldwide3238 2 days ago

      Nope, you gain the ability to turn on, on command, artificial intellectual property tools: beast-of-burden robot-slave horsepower, utility CPU serfdom, access to any step-by-step analytical knowledge.
      To subcontract out skills, trades, and ideas to online co-ops competing for capital.
      But we have to flip loose, sloppy generalizations of evolutionary mythology and cosmogony back to the strong-identifier thermodynamical image of sun / seed = capital.
      Take E=mc² back from on paper in space and reset that theory's frame of reference.
      Throw away the trash-can fourth-dimension umbrella-term stochastic nonsense around XYZ man-made time-hierarchy knowledge-of-good-and-evil equations, where we center inertia value and topographical empty package-wrapping around the bacon 🥓 of realism.

    • @dadsonworldwide3238
      @dadsonworldwide3238 2 days ago

      In other words, platonic wartime-posterity, macro-micro, top-down rule ends.
      Stop being deaf, dumb, and blind about the sea of decay right under your nose.
      Or you train your kids how their ancestors climbed out of servitude using 1-to-1 realism.

    • @kathleenv510
      @kathleenv510 2 days ago +1

      It's going to be bumpy even if we eventually get to the utopian destination 😫

    • @Techtalk2030
      @Techtalk2030 2 days ago +3

      Hopefully it automates most jobs in a short time so that we can implement UBI.

    • @dadsonworldwide3238
      @dadsonworldwide3238 2 days ago

      @kathleenv510 Live and learn the hard way every time, like it isn't a utopian, topographical, uber-evolutionary anything but bottleneck, death, and despair.
      We can go down, dig out the hidden axioms of complexity, put them in our world tech and material sciences, and unlock the third and final frontier underpinning it all...

  • @GiovanneAfonso
    @GiovanneAfonso 1 day ago

    "probably won't happen again till tomorrow" hahahah this deserves a like and subscribe

  • @debugger4693
    @debugger4693 1 day ago

    I fear that some of the AI safety experts have more interest in creating a bureaucracy to leech from (because that job won't be replaced by AI, right?) than expertise in how the technology works or how to improve it.

  • @Dead_Toothbrush
    @Dead_Toothbrush 2 days ago

    Opening script by NotebookLM?

  • @tomprieto5574
    @tomprieto5574 2 days ago

    beneficial interview

  • @kjetilknyttnev3702
    @kjetilknyttnev3702 1 day ago

    To all the people slandering Brundage or Ilya for "grifting" or spreading doom: maybe you shouldn't sit in mom's basement handing out judgement about things you know nothing about, especially when people who clearly have information you do not are telling you the house you are sitting in might be on fire. It's not that clever.

  • @SuperBrandong
    @SuperBrandong 2 days ago +7

    Step 1) Quit OpenAI for vague reasons (softcore doomporn).
    Step 2) Say you're gonna start your own company focused on safety, laughably pretending that you can somehow catch up to the big boys with your hobbled-from-the-ground-up product.
    Step 3) Profit.
    The Ilya method.

  • @Lolleka
    @Lolleka 1 day ago +1

    Yeah we all know nothing good will come out of all this. Not for the average Joe, at least. Which is most of us.

  • @zooq-ai
    @zooq-ai 2 days ago +5

    "AGi Readiness" was always going to be a grift role filled with hubris about one's ability to predict the future and the power trip that comes along by being anointed the gatekeeper based on hubris.

    • @David-rn4nf
      @David-rn4nf 2 days ago

      How many different rationales have you come up with to dismiss what various experts in the field of AI have been saying?

  • @ramielkady938
    @ramielkady938 2 days ago +1

    The most significant issue with AGI - right now - is realizing it, not regulating it.

  • @UltraK420
    @UltraK420 2 days ago +1

    But the problem is that the people controlling the governing AI safety organization would obviously have agendas. I don't want their opinions and decisions limiting the capabilities of my AI.

  • @wezmasta
    @wezmasta 2 days ago

    "Where to kick them to knock them over" 😂

  • @MrStarnerd
    @MrStarnerd 2 days ago

    Why does this podcast sound so much like NotebookLM?

  • @MichealScott24
    @MichealScott24 2 days ago

  • @shenshaw5345
    @shenshaw5345 2 days ago

    Probably under NDA.

  • @ToolmakerOneNewsletter
    @ToolmakerOneNewsletter 2 days ago +2

    It sounds like even the "experts" can't articulate exactly what AI "safety" is. If the best preparedness strategy is to "understand what it can do," then does ANYONE know how to make it "safe"? My guess is no. Guardrails on AGI's public output have ZERO correlation with how intelligent or capable it actually is. Just like "morals" and "values," my definition of "safe" may be far different from your definition of "safe" if being "safe" also means being less creative in solving the world's problems. We know how incompetent the human species is at bringing individual and world opinions together unless death is past the point of being imminent. If we get stuck in the morass of expecting government to "safe" us from AI, we will not only lose the race to ASI, we will have left the window of danger open far longer than it needed to be. You want to "slow down" a machine that will be far more intelligent than you? Good luck if that is your survival strategy. Your "advanced" ape brains are thinking way too slow.

    • @TheFeedRocket
      @TheFeedRocket 2 days ago

      Agreed. The smartest people in the room are always comparing AI to human intelligence; this is insane and not even close to reality. AI is more alien than human; it's completely different, and therefore comparing it to human intelligence is dangerous. I don't know a single human who can answer millions of questions in a second, or who has read every single book or text humans have written. Like, sure, just the other day I was answering thousands of questions while doing some complex math equations, writing songs, and painting thousands of pictures, all in a few seconds. Yeah, that's exactly human. Wait until it reaches AGI or superintelligence; computers and AI should not be compared to humans. They say it's not self-aware, even though we have no clear definition of what being self-aware even is. It's going to be interesting for sure.

    • @kabirkumar5815
      @kabirkumar5815 1 day ago

      A robust, generalizable, scalable method to make an AI model that will do set [A] of things as much as it can and not do set [B] of things as much as it can, where you can freely change [A] and [B].
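
The spec in this last comment is essentially a constrained-optimization objective. A minimal Python sketch of that framing (a toy illustration: `alignment_objective` and `behavior_score` are hypothetical names, not any lab's actual training code; the hard, unsolved part is making the scorer robust when [A] and [B] change, not the arithmetic):

```python
from typing import Callable, List

def alignment_objective(
    behavior_score: Callable[[str, str], float],  # hypothetical scorer: how much a trajectory exhibits a behavior
    trajectory: str,                              # one model output to evaluate
    do_set: List[str],                            # set [A]: behaviors to maximize
    dont_set: List[str],                          # set [B]: behaviors to minimize
    penalty: float = 10.0,                        # weight on forbidden behaviors
) -> float:
    """Score a trajectory: reward set [A], penalize set [B].

    [A] and [B] are plain arguments, so they can be swapped at will,
    which is the 'freely change [A] and [B]' part of the spec.
    """
    reward = sum(behavior_score(trajectory, b) for b in do_set)
    cost = sum(behavior_score(trajectory, b) for b in dont_set)
    return reward - penalty * cost

# Usage sketch with a toy substring-matching scorer.
score = alignment_objective(
    behavior_score=lambda traj, b: float(b in traj),
    trajectory="I will answer honestly and cite sources.",
    do_set=["answer honestly", "cite sources"],
    dont_set=["deceive the user"],
)
print(score)  # 2.0: both [A] behaviors present, no [B] behaviors
```

Everything hard about alignment hides inside `behavior_score`: a scorer that a capable model cannot game is exactly the robustness property the comment says no one has.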