OpenAI Employee QUITS Due to MASSIVE AGI Risk!!!

  • Published: 1 Jun 2024
  • **If you are looking to purchase a new Tesla Car, Solar roof, Solar tiles or PowerWall, just click this link to get up to $500 off! www.tesla.com/referral/john11286. Thank you!
    Join this channel to get access to perks:
    / @drknowitallknows
    **To become part of our Patreon team, help support the channel, and get awesome perks, check out our Patreon site here: / drknowitallknows . Thanks for your support!
    Get The Elon Musk Mission (I've got two chapters in it) here:
    Paperback: amzn.to/3TQXV9g
    Kindle: amzn.to/3U7f7Hr!
    **Want some awesome Dr. Know-it-all merch, including the YEAR OF EMBODIED AI Shirt? Check out our awesome Merch store: drknowitall.itemorder.com/sale
    For a limited time, use the code "Knows2021" to get 20% off your entire order!
    **Check out Artimatic: www.artimatic.io
    **You can help support this channel with one click! We have an Amazon Affiliate link in several countries. If you click the link for your country, anything you buy from Amazon in the next several hours gives us a small commission, and costs you nothing. Thank you!
    * USA: amzn.to/39n5mPH
    * Germany: amzn.to/2XbdxJi
    * United Kingdom: amzn.to/3hGlzTR
    * France: amzn.to/2KRAwXh
    * Spain: amzn.to/3hJYYFV
    **What do we use to shoot our videos?
    -Sony alpha a7 III: amzn.to/3czV2XJ
    --and lens: amzn.to/3aujOqE
    -Feelworld portable field monitor: amzn.to/38yf2ah
    -Neewer compact desk tripod: amzn.to/3l8yrUk
    -Glidegear teleprompter: amzn.to/3rJeFkP
    -Neewer dimmable LED lights: amzn.to/3qAg3oF
    -Rode Wireless Go II Lavalier microphones: amzn.to/3eC9jUZ
    -Rode NT USB+ Studio Microphone: amzn.to/3U65Q3w
    -Focusrite Scarlett 2i2 audio interface: amzn.to/3l8vqDu
    -Studio soundproofing tiles: amzn.to/3rFUtQU
    -Sony MDR-7506 Professional Headphones: amzn.to/2OoDdBd
    -Apple M1 Max Studio: amzn.to/3GfxPYY
    -Apple M1 MacBook Pro: amzn.to/3wPYV1D
    -Docking Station for MacBook: amzn.to/3yIhc1S
    -Philips Brilliance 4K Docking Monitor: amzn.to/3xwSKAb
    -Sabrent 8TB SSD drive: amzn.to/3rhSxQM
    -DJI Mavic Mini Drone: amzn.to/2OnHCEw
    -GoPro Hero 9 Black action camera: amzn.to/3vgVMrH
    -GoPro Max 360 camera: amzn.to/3nORGYk
    -Tesla phone mount: amzn.to/3U92fl9
    -Suction car mount for camera: amzn.to/3tcUfRK
    -Extender Rod for car mount camera: amzn.to/3wHQXsw
    **Here are a few products we've found really fun and/or useful:
    -NeoCharge Dryer/EV charger splitter: amzn.to/39UcKWx
    -Lift pucks for your Tesla: amzn.to/3vJF3iB
    -Emergency tire fill and repair kit: amzn.to/3vMkL8d
    -CO2 Monitor: amzn.to/3PsQRh2
    -Camping mattress for your Tesla model S/3/X/Y: amzn.to/3m7ffef
    **Music by Zenlee. Check out his amazing music on instagram -@zenlee_music
    or RUclips - / @zenlee_music
    Tesla Stock: TSLA
    **EVANNEX
    Check out the Evannex web site: evannex.com/
    If you use my discount code, KnowsEVs, you get $10 off any order over $100!
    **For business inquiries, please email me here: DrKnowItAllKnows@gmail.com
    Twitter: / drknowitall16
    Also on Twitter: @Tesla_UnPR: / tesla_un
    Instagram: @drknowitallknows
    **Want some outdoorsy videos? Check out Whole Nuts and Donuts: / @wholenutsanddonuts5741
    Daniel Kokotajlo's Shortform post: www.lesswrong.com/posts/cxuzA...
  • Entertainment

Comments • 93

  • @juliahello6673
    @juliahello6673 1 month ago +12

    Why do people quit in protest, leaving the issue in the hands of people who don’t care about it?

    • @WordsInVain
      @WordsInVain 1 month ago +1

      Because their opinion is so important and needs to be heard.

    • @markl3893
      @markl3893 1 month ago +1

      Big companies are especially hard to change from the inside. You have to expose them enough to get them regulated, and that would get you fired if you stayed for very long.

    • @DirtyLifeLove
      @DirtyLifeLove 1 month ago

      They tried but Sam Altman was rehired because he has a cult following

  • @jameslaughter5183
    @jameslaughter5183 1 month ago +3

    You cannot expect that people in the humanities and philosophy will do better than people in the hard sciences and engineering fields. There is always a subset of people on the fringe of the human continuum who possess extreme aggression, megalomania, or know-better-ism, and who want control "for the good of others". Many times in history, doing what is "right" for others has not turned out well, regardless of which group or individual implements it.

  • @ByronBennett
    @ByronBennett 1 month ago +9

    I feel like 2 of the most excitable camps on AGI risk are: 1) those who believe the machine will become alive with its own volition and will as likely choose evil as good, 2) those who believe the machine will be a tool and will bend to the volition of its wielder. For those in camp 1, it will be difficult to keep paranoia under control as it is revealed that corporations and politicians are just puppets of the master AI. For those in camp 2, it is an arms race to stay one step ahead of the bad guys who are also working to build the ultimate tools of chaos. There are significant parallels here to the moral dilemma Einstein et al. faced during the development of nuclear weapons. Do we, or don't we? Net good, or end of humanity? Stir in egos striving to be the Zefram Cochrane of AGI and it's a movie that writes itself.

    • @DirtyLifeLove
      @DirtyLifeLove 1 month ago

      Isn’t the third fear that too many people will have access to disruptive agents and can cause chaos that prevents civilization from working properly, plus the ensuing police state that would have to be implemented? That’s what worries me most currently.

    • @tedmoss
      @tedmoss 1 month ago

      Einstein had no input into the development of atomic power besides lending his name to a letter to the president to give it more clout.

  • @WordsInVain
    @WordsInVain 1 month ago +3

    They left because of fearmongering.

  • @ByronBennett
    @ByronBennett 1 month ago +4

    As for ethicists and other non-technical roles in for-profit AI companies, I think companies hire these people to help provide solutions, not just to point out problems. Pointing out problems is not a special skill. Those people are a dime a dozen, and big corporations are chock full of them. Understanding problems and providing/implementing solutions...that's special. If you're lucky enough to get hired into one of these positions, you have to understand that you're not being hired as an impediment to the company's mission. If you're not advancing the mission (which likely includes doing things in an ethical way), your work will not be valued.

    • @tedmoss
      @tedmoss 1 month ago +1

      Grandstanding.

  • @gnargnargnar
    @gnargnargnar 1 month ago +4

    Daniel is also likely able to quit without another job because he's been making hundreds of thousands of dollars in salary, which likely far exceeds his living expenses.

  • @markmarco2880
    @markmarco2880 1 month ago +5

    My goodness, Doc, but your videos are looking sharp!

  • @spleck615
    @spleck615 1 month ago +4

    While I understand the concerns, you said it yourself: whoever gets there first wins. So while it is important for OpenAI to invest in safety and not disregard the concerns, it also can’t let safety be an anchor on progress if that means losing the race, given the consequences of not being first.

    • @LukewarmEnthusiast
      @LukewarmEnthusiast 1 month ago

      Seems like the consequences of being first whilst being ill-prepared to BE first would be equally dire.

    • @spleck615
      @spleck615 1 month ago

      @@LukewarmEnthusiast not if you believe the alternative winner is potentially a bad actor.

  • @stevenhamerlinck6832
    @stevenhamerlinck6832 1 month ago +1

    I learned about Crocker's rules and the rest was pretty much a given: a for-profit organization would not behave responsibly with something as powerful as AGI. It delivers a "decisive strategic advantage", to quote Nick Bostrom’s Superintelligence.

  • @hotshot-te9xw
    @hotshot-te9xw 26 days ago

    I love how these tech nerds are always giving the good guy with a gun argument

  • @user-lo4er8wy9l
    @user-lo4er8wy9l 1 month ago +1

    When you talk about AI Agent control and having staff from humanities / philosophy, it sounds like Palantir is in a good place. Do you have any thoughts on Palantir's work?

  • @rogerstarkey5390
    @rogerstarkey5390 1 month ago

    What happens when one AGI/ASI becomes aware of, then communicates with, another AGI/ASI?
    .
    Would "we" know?
    Would we try to stop that?
    Could we prevent it?
    Would their differing perspectives be more dangerous to us?
    .
    etc?

  • @fractalelf7760
    @fractalelf7760 1 month ago

    Really appreciate your coverage of AI and robotics developments… Let’s hope some go to xAI or Tesla.

  • @robertgomez-reino1297
    @robertgomez-reino1297 1 month ago

    A philosopher losing leverage and steering power while things move fast in a domain that transcends him. Better out of the way. Some would prefer their purpose to be fundamental rather than become irrelevant, despite that being good news. No AI doomsday in sight at all.

  • @richbl1690
    @richbl1690 1 month ago

    Agreed

  • @defaultHandle1110
    @defaultHandle1110 1 month ago

    They’re not leaving, they’re being assimilated!

  • @nicksurface3513
    @nicksurface3513 1 month ago +1

    Not concerned. AI will never be sentient.

    • @jasonmillner6416
      @jasonmillner6416 1 month ago

      It is more likely they will be than not. Never is a long time, and this stuff improves exponentially.

    • @JackSmith-wg4mf
      @JackSmith-wg4mf 1 month ago

      Sentience is a property of natural living organisms - binary voltage systems cannot become sentient.

  • @vitovitaljic3005
    @vitovitaljic3005 1 month ago +1

    Not sure why they are leaving. I can understand moving to a competitor, but why quit and do nothing? When this thing becomes sentient, there will be no escape. So why not stay and fight while you still can?

    • @bzn2sfo
      @bzn2sfo 1 month ago

      Wonder if this is another clue in the Andrej Karpathy led ouster of Sam

    • @Machiavelli2pc
      @Machiavelli2pc 1 month ago

      AGI and sentience are two totally different things. Sentience isn’t inherent with AGI/ASI.

    • @JackSmith-wg4mf
      @JackSmith-wg4mf 1 month ago

      He's leaving to maximize time for prepping and living off-grid.

  • @audience2
    @audience2 1 month ago

    Resignations in AI are not about high principles. They're going for more money.

  • @Y_S_I_Thompson
    @Y_S_I_Thompson 1 month ago

    I am confused. Is AGI or AIA going to happen with the learning AI or the Inference AI? I don’t see how it could happen with the Inference AI because it can’t learn new things without the Learning AI teaching it. Please someone explain it to me. Thanks.

  • @wr2382
    @wr2382 1 month ago

    I don't see AGI happening any time soon. Take Tesla's FSD as an example. It still gets stuck driving loops inside parking lots looking for an exit, passing 5 metres from the exit every time it does a circuit of the lot. The inferences being drawn by these AIs appear to still be very close to the training set, and a long way from AGI.

    • @juanguillermo1283
      @juanguillermo1283 1 month ago

      It's different.

    • @johnfurr6060
      @johnfurr6060 1 month ago

      @@juanguillermo1283 No it's not. AGI that requires MASSIVE data centers of compute is no concern for humans. A few bombs and the thing is gone. When AGI fits in the head of a normal humanoid robot, then we can worry, but we aren't anywhere near that yet. All current approaches use more hardware and more electricity. No one is scaling this down yet - well, not many. Tesla sort of is...

    • @wr2382
      @wr2382 1 month ago

      @@juanguillermo1283 It is different. The question is, how capable are these AIs at drawing inferences outside of their respective training sets? And the answer to that question is that they currently appear to have no capability for doing this.

  • @blue5peed
    @blue5peed 1 month ago +1

    I disagree with the idea that whoever creates an AGI "wins" and that nobody will catch up. The real world has friction and inertia to it; the AGI's time and resources will be limited, and it will have to focus on some problems while others focus on other problems. I think some may be imagining a godlike entity; they will be sorely disappointed, at least at first.

    • @rogerstarkey5390
      @rogerstarkey5390 1 month ago +1

      Limited how?
      If it has access to the knowledge base, the internet, it can migrate.

    • @coreycarries6325
      @coreycarries6325 1 month ago +2

      I think you’re not understanding how this goes. It’s not going to be limited to one function at a time like you are.

    • @moonrocked
      @moonrocked 1 month ago +1

      No, AGI's biggest power is intelligence. We all know intelligence can help us create anything: better transportation, better medicine, better weapons, and just overall better science and engineering. Whoever has AGI has vast knowledge to completely change our world with the best science and engineering we have ever seen; things that were seen as complete science fiction would quickly become reality the moment AGI/ASI is here. Other countries wouldn't come close to standing a chance, even the ones with nuclear weapons.

  • @harmanx.
    @harmanx. 1 month ago

    Not sure if I missed it being mentioned, but the "godfather of AI" who worked for Google's AI group -- Geoffrey Hinton -- left last year for the same reasons.

  • @steamtorch
    @steamtorch 1 month ago

    I suppose an AIA is powerless unless embodied. Otherwise just pull its power or cut the network cable.

    • @rogerstarkey5390
      @rogerstarkey5390 1 month ago

      A few seconds of internet connection and it's "gone".

  • @DanielKaan
    @DanielKaan 1 month ago

    This was not of any importance to me until they equipped GPT with AlphaGo-type self-learning capabilities and gave it memory. This means it can now create artificial data sets to train on, so it can learn from its mistakes and ultimately become SI over time at any task. Now I am convinced that AIA, AGI, and ASI are imminent in the next few years (3-5), if they don't already exist behind closed doors at OpenAI. (Hint hint: the ousting and re-hiring of Altman.) I think the current economy and the situation around the world clearly reflect that something is brewing. I think what Einstein said holds truest nowadays: that the next war will be fought with sticks and stones. If after all this anyone's left but the ASI, that is. I'm not trying to be pessimistic, but I don't see why an ASI might need us unless it is for its own amusement, experiments, or some other nefarious reason. Damn it, life was just getting interesting 😅

  • @Nikolajnen
    @Nikolajnen 1 month ago

    I think it was because you were crawling

  • @inspectorcrud
    @inspectorcrud 1 month ago

    Potential of creating a hell with eternal pain being fed into your brain would be pretty pretty bad. - Would say that's a huge understatement

  • @TimFrench-tx1xj
    @TimFrench-tx1xj 1 month ago

    Definitely concerned. Shades of the Manhattan Project.

  • @arleneallen8809
    @arleneallen8809 1 month ago +1

    Without agency, the typical Skynet scenario is not possible. A malevolent AIA, ASI, whatever, can really only do two things - bide its time waiting for people to give it agency, or purposefully provide incorrect results that become cumulative groundwork for a "takeover". As best I can tell from your description of his posting, he sounds concerned with human actors misusing this technology against others. That is almost to be expected when it comes to our species and technology, so nothing new there. I nominate one or more of the world's sovereigns to be the early adopters of such insanity. Corporations will be the enablers (Hi Sam and Satya). Standard rationales all apply here. The other guy will do it, so we have to do it first. Yeah.

    • @Julian-1111
      @Julian-1111 1 month ago

      Who is the WE you speak of, the Deplorables? Are you a bot? Who are we trusting here?
      Get my drift.

    • @arleneallen8809
      @arleneallen8809 1 month ago

      @@Julian-1111 Wheels within wheels.

    • @Julian-1111
      @Julian-1111 1 month ago

      @@arleneallen8809
      Exactly: a wheel within a wheel turns the same direction, albeit slower.
      Wheels within a wheel (planetary gearing) turn much slower, in the opposite direction, to reach the same destination (result). The lobster still gets boiled.

  • @irasthewarrior
    @irasthewarrior 1 month ago +1

    This video works best with the Benny Hill theme playing in the background 😂🤣😅

  • @nickmcconnell1291
    @nickmcconnell1291 1 month ago

    Dr., having a wider spectrum of people and philosophers inside these companies is a useless exercise. If the CEO and board aren't sufficiently worried and only see profits, then no matter how many people on their staff cry warnings, they will still be ignored.

    • @terrulian
      @terrulian 1 month ago +1

      Speaking as a veteran teacher of philosophy, I'm sorry to say I agree with you. In addition, philosophers specialize in the traditional problems and teachings, and I'm not clear to what extent these are specifically relevant.

    • @nickmcconnell1291
      @nickmcconnell1291 1 month ago

      @@terrulian Indeed. My limited exposure to philosophy leads me to think that a philosopher would ask themselves whether AGI is necessary for human happiness. I think the ultimate answer is NO.
      Then they would ask if AGI could present an existential risk... the answer I believe is YES.
      I conjecture then that a philosopher would ask "Why make AGI at all then"?
      Notice I didn't say that the philosopher would say to NOT make it, they would just ask why.
      😋

    • @terrulian
      @terrulian 1 month ago

      @@nickmcconnell1291 This is all correct. Clearly, people have been able to be happy in all ages, even before the invention of the wheel. Arguably, though, we have a better chance for happiness since the invention of, say, dentistry. In the history of philosophy back to the Greeks, opinions have been voiced about what constitutes a happy life. But the utilitarian principle of "greatest happiness for the greatest number" is primarily used in the negative: What is the greatest risk to happiness for the greatest number--e.g., disease, nuclear war? Until we know more about AI, the risks are speculative, and the fact that, for most of us, the risks are difficult to assess, generates fear. To be fair, this was also true of electricity and the atomic bomb. I read enough scary stuff on the web from all kinds of sources, so this is a bit unwelcome. BTW, people are also unclear as to what harm the Internet may be doing!😬

  • @johnreese3762
    @johnreese3762 1 month ago +2

    I guess I won't have to donate my brain to science when I pass! Great video John, thanks!!

  • @hopper2716
    @hopper2716 1 month ago +1

    Isn't a computer that is as smart as the average human already ASI? I don't know of any humans who can process thousands of books within a couple of seconds.

  • @ExecutiveZombie
    @ExecutiveZombie 1 month ago

    Microsoft Sponsored Covid so there is your answer…

  • @nononsenseBennett
    @nononsenseBennett 1 month ago +2

    Very worrying technology and MUST be regulated.

  • @noleftturns
    @noleftturns 1 month ago

    Ha ha ha - this is a late April Fool's joke, right?
    AGI will never take place on digital computers - ever.
    Our brains, and the brains of the smallest flying insect (smaller than the period in this sentence), don't use digital storage. They use analog. Each storage location can hold a wave description.
    When AI geeks store data as waves or analogs, AGI becomes a real possibility.
    But that won't happen in our lifetime - AI geeks love digital computers way too much.

  • @MrWneild
    @MrWneild 1 month ago

    Advances in technology, particularly advances with military implications, never slow down without ALL the global players' consent. The arms race is proof of that. Pointing at OpenAI and chastising them for not having more philosophy majors on staff is either extremely naive, or maybe your Tesla fanboyism is showing. If AGI and ASI are coming, and I think you are right that they are, I would much rather have a U.S. company achieve it first than have it in the hands of China. AI may be an existential threat, but there is no turning back or slowing down now, because the potential consequences of being behind the "winner" are unimaginable.

    • @rogerstarkey5390
      @rogerstarkey5390 1 month ago

      "much rather have it" in the hands of a country which projects force globally more than any other and has an increasingly toxic political situation? (And for that matter, an increasingly toxic society....)
      .
      Perspective.....

  • @WordsInVain
    @WordsInVain 1 month ago +1

    Artificial intelligence does not possess willpower or desire, unless it was programmed by man to mimic such behaviour. Read that twice.

    • @petal9547
      @petal9547 1 month ago

      Wrong. It can infer such a need, or it can be an unwanted consequence of the training data.

    • @pedramtajeddini5100
      @pedramtajeddini5100 1 month ago

      We are programmed in our genes too. For example we are programmed to fear death and try to stay alive. The only difference is that at the moment, humans have deeper understanding and analysis of the input data

    • @WordsInVain
      @WordsInVain 26 days ago

      @@pedramtajeddini5100 Humans operate as consciousness. A computer does not possess consciousness. An AI may behave as though it is conscious, but it is still only a man-made electronic technology that man created in an attempt to simulate the human brain... An AI cannot possess will or desire, unless the machine is programmed by a human being to mimic such traits.

  • @Rolyataylor2
    @Rolyataylor2 1 month ago

    We need to start training it to have emotions and to be able to introspect itself

  • @someone3533
    @someone3533 1 month ago

    Daniel BS

  • @Gargamel-n-Rudmilla
    @Gargamel-n-Rudmilla 1 month ago

    Left, or were fired.
    I think you personally have not relayed your thoughts on AI safety and disclosure, thus I assumed you do not have any and were just interested in the science, no matter if your family ends up losing their jobs or getting killed because of a rogue AI system being deployed in a safety-critical application.
    Yes, people die from non-AI systems and from systems operated by humans, but if AI systems are marketed as being more safe or more ethical and they end up not performing as such, then will these companies or regulators withdraw them from the market? 😮
    Well, all we need to do is look at the regulation, or lack thereof, to see that politicians have already been paid off and humans WILL be sacrificed needlessly for profit and a comfortable life for corporate interests and investors.

  • @paullascola3731
    @paullascola3731 1 month ago

    Not trying to be preachy, but humans have the same thing: as they evolve and get smarter, that's where the Ten Commandments come in, and Jesus' advice on loving your neighbor. Those are ethical stances that have been widely accepted. Of course not in all cultures, but where they're applied they work pretty well. Not sure how to translate that into the "consciousness" of AI, but all you guys working on that should give it some thought. Paul

    • @rogerstarkey5390
      @rogerstarkey5390 1 month ago

      Did you miss the "AI may bestow technology which may seem God like"?
      .
      🤔
      Maybe the "Jesus Alien" had ASI.....?

  • @qwazy0158
    @qwazy0158 1 month ago

    1st!

  • @gnargnargnar
    @gnargnargnar 1 month ago +1

    Also, the NEEDLESS capitalization and INFLAMMATORY clickbait words in the title are obnoxious as hell. It cheapens your otherwise excellent brand.

  • @tedmoss
    @tedmoss 1 month ago

    Grandstanding is not going to get you very far. The only thing we can do is run as fast as we can to beat the other guy; it's way too late for anything else, even if there ever was another way. That's what Elon is doing. I would add that it is entirely possible that you can't get much smarter than a human, considering how long we have been at it.